Lars Grammel (@lgrammel)'s Twitter Profile
Lars Grammel

@lgrammel

Tech Lead @aisdk

ID: 42241755

Joined: 24-05-2009 16:58:14

3.3K Tweets

4.4K Followers

301 Following

Lars Grammel (@lgrammel)

sometimes you need to stream ephemeral data to the client as part of an assistant response

you can now mark data parts as transient to prevent them from becoming part of the message parts, and use the onData callback to react to them streaming in
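
A minimal sketch of how this can look, assuming an AI SDK 5 route handler built on createUIMessageStream; the 'data-status' part name and its payload are invented for illustration:

```ts
// server: stream a transient data part alongside the model response
import { openai } from '@ai-sdk/openai';
import {
  convertToModelMessages,
  createUIMessageStream,
  createUIMessageStreamResponse,
  streamText,
} from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const stream = createUIMessageStream({
    execute: ({ writer }) => {
      // transient: true keeps this part out of the persisted message parts
      writer.write({
        type: 'data-status', // hypothetical custom data part name
        data: { message: 'Thinking…' },
        transient: true,
      });

      const result = streamText({
        model: openai('gpt-4o'),
        messages: convertToModelMessages(messages),
      });
      writer.merge(result.toUIMessageStream());
    },
  });

  return createUIMessageStreamResponse({ stream });
}
```

On the client, useChat's onData callback fires for each data part as it streams in, including transient ones that never land in message.parts:

```ts
// client: react to transient parts via onData
'use client';
import { useChat } from '@ai-sdk/react';

export function Chat() {
  const { messages } = useChat({
    onData: (dataPart) => {
      if (dataPart.type === 'data-status') {
        console.log('status update:', dataPart.data); // never persisted
      }
    },
  });
  // ... render messages
  return null;
}
```
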
Nico Albanese (@nicoalbanese10)

My workshop from Vercel Ship is now live! Build a coding agent like Claude Code with AI SDK 5, AI Gateway, Vercel Sandbox, and Vercel Functions in 40 minutes

Lars Grammel (@lgrammel)

We added an experimental Agent abstraction to the AI SDK. It has the same functionality as streamText/generateText (which it uses under the hood), packaged differently for those who prefer an explicit agent object.
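
A minimal sketch, assuming the experimental export is named Experimental_Agent (as in AI SDK 5 at the time); the model choice and prompt are placeholders:

```ts
import { openai } from '@ai-sdk/openai';
import { Experimental_Agent as Agent } from 'ai';

// the same settings you would pass to generateText/streamText,
// bundled into a reusable agent object
const agent = new Agent({
  model: openai('gpt-4o'),
  system: 'You are a concise assistant.',
});

// generate() mirrors generateText; stream() mirrors streamText
const { text } = await agent.generate({
  prompt: 'Summarize what the AI SDK does.',
});
console.log(text);
```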

Marc Klingen (@marcklingen)

Excited to host Lars Grammel, core maintainer of the AI SDK at Vercel, for the first installment of Langfuse Context at our langfuse.com Berlin office. Join us to learn more about his journey, his take on LLM/agent libraries, and insights on what's next! Sign up below.

Nico Albanese (@nicoalbanese10)

long-running tasks with stopWhen (aka maxSteps) can very quickly clog up your context window

with prepareStep, you now have complete control over everything that is being sent to the model at the beginning of EACH step
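
A minimal sketch, assuming AI SDK 5's prepareStep callback and stepCountIs helper; the listFiles tool and the sliding-window trimming are invented for illustration, one strategy among many:

```ts
import { openai } from '@ai-sdk/openai';
import { generateText, stepCountIs, tool } from 'ai';
import { z } from 'zod';

const result = await generateText({
  model: openai('gpt-4o'),
  tools: {
    // hypothetical tool so the call actually runs multiple steps
    listFiles: tool({
      description: 'List files in a directory',
      inputSchema: z.object({ dir: z.string() }),
      execute: async ({ dir }) => ({ files: [`${dir}/index.ts`] }),
    }),
  },
  stopWhen: stepCountIs(50), // the successor to maxSteps
  prepareStep: ({ messages }) => {
    // runs before EACH step: trim the history so a long tool loop
    // does not clog up the context window
    if (messages.length > 20) {
      return { messages: [...messages.slice(0, 2), ...messages.slice(-10)] };
    }
    return {}; // send this step unchanged
  },
  prompt: 'Explore the project and summarize its structure.',
});

console.log(result.text);
```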