I'm crowdsourcing the most useful AI use cases, infra & products in 2025. Top AI founders, researchers & VCs are sharing their AI stacks.
Contribute through the form in my bio and I'll:
* email you the results
* give you a shoutout
Comment "alpha" below and I'll DM you results.
only FAF can host a demo day the week before finals and get insane turnout + the most incredible vibes from our epic community 🚀😮‍💨
here's what stanford's builders have been cooking this quarter: 🧵
just noticed Cursor has sound effects when coding
(when errors appear, code runs successfully, terminal commands run)
does this mean a SWE makes music when coding well?
and would background agents make music like a band or orchestra, with the human as the conductor, listening in?
I'm hosting a small community where AI power users (who spend $500+/mo on AI) share learnings while optimizing their AI stacks and workflows. Reply w/ how much $ you spend on AI/month to request an invite.
love this blog post about ai x gaming. it's maybe the most undervalued application for AI. also kevin's blog is 10/10, i kind of want to keep it a secret
So I think something else that doesn't get discussed much is the extrapolation of this inference:training trend:
- 2015: back in the day, we would train one model per dataset and run inference on it once (to obtain the eval result for our paper)
- 2020: with chatgpt, multi-task
Why you should stop working on RL research and instead work on product //
The technology that unlocked the big scaling shift in AI is the internet, not transformers
I think it's well known that data is the most important thing in AI, and also that researchers choose not to work