david (@davidtsong)'s Twitter Profile
david

@davidtsong

ID: 775188215485435904

Link: http://googIe.me · Joined: 12-09-2016 04:24:07

616 Tweets

3.3K Followers

655 Following

david (@davidtsong):

I'm crowdsourcing the most useful AI use cases, infra & products in 2025. Top AI founders, researchers & VCs are sharing their AI stacks. Contribute through the form in my bio and I'll:
* email you the results
* give you a shoutout
Comment "alpha" below and I'll DM you results.

friends and family (@friendsandfam_):

only FAF can host a demo day the week before finals and get insane turnout + the most incredible vibes from our epic community 🚀😮‍💨 here's what Stanford's builders have been cooking this quarter: 🧵

david (@davidtsong):

just noticed Cursor has sound effects when coding (when errors appear, code runs successfully, terminal commands run). does this mean a SWE makes music when coding well? and background agents would make music like a band or orchestra, with the human listening as the conductor?

david (@davidtsong):

I'm hosting a small community where AI power users (who spend $500+/mo on AI) share learnings while optimizing their AI stacks and workflows. Reply w/ how much $ you spend on AI/month to request an invite.

david (@davidtsong):

love this blog post about ai x gaming. it's maybe the most undervalued application for AI. also Kevin's blog is 10/10 - i kind of want to keep it a secret

Kevin Lu (@_kevinlu):

So I think something else that doesn't get discussed much is the extrapolation of this inference:training trend:

- 2015: back in the day, we would train one model per dataset, and inference it once (to obtain the eval result for our paper)
- 2020: with chatgpt, multi-task
Kevin Lu (@_kevinlu):

Why you should stop working on RL research and instead work on product //
The technology that unlocked the big scaling shift in AI is the internet, not transformers

I think it's well known that data is the most important thing in AI, and also that researchers choose not to work