Dmytro Dzhulgakov (@dzhulgakov)'s Twitter Profile
Dmytro Dzhulgakov

@dzhulgakov

Co-founder and CTO @FireworksAI_HQ. PyTorch core maintainer. Previously FB Ads. Ex-Pro Competitive Programmer

ID: 1387868958

Joined: 28-04-2013 20:42:08

293 Tweets

3.3K Followers

664 Following

Dmytro Dzhulgakov (@dzhulgakov)'s Twitter Profile Photo

Diffusion... for text, wow 🤯. Here's what it means: 1/ Super-speedy generation on GPUs. Groq/Cerebras are at a disadvantage here. Diffusion models (just like LLM training) are all about FLOPs, great for GPUs. Regular LLM inference is more about memory b/w and needs a lot of
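The FLOPs-versus-bandwidth point above can be sketched with a back-of-envelope arithmetic-intensity calculation. This is a simplified illustration with assumed numbers (70B parameters, fp16 weights, weight traffic dominating), not measured figures from any particular model:

```python
# Sketch: why autoregressive decoding is memory-bandwidth bound while
# parallel (diffusion-style) generation is FLOP bound.
# All numbers are illustrative assumptions, not benchmarks.

def arithmetic_intensity(tokens_per_pass: int, params: float) -> float:
    """FLOPs per byte of weight traffic for one forward pass.

    Roughly 2 * params FLOPs per token processed, while the full
    weight set (2 bytes/param in fp16) is read once per pass.
    """
    flops = 2 * params * tokens_per_pass
    bytes_moved = 2 * params  # weights dominate memory traffic
    return flops / bytes_moved

PARAMS = 70e9  # hypothetical 70B-parameter model

# Autoregressive decode: one new token per forward pass.
print(arithmetic_intensity(1, PARAMS))     # 1.0 FLOP/byte -> bandwidth bound

# Diffusion-style denoising: whole sequence (say 1024 tokens) per pass.
print(arithmetic_intensity(1024, PARAMS))  # 1024.0 FLOPs/byte -> compute bound
```

GPUs deliver hundreds of FLOPs per byte of memory bandwidth, so the high-intensity diffusion pass keeps the compute units busy, while the one-token decode pass leaves them idle waiting on memory, which is the advantage claimed in the tweet.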


First DeepSeek on Christmas Day, now Llama on the weekend. There aren't enough days in a work week for AI launches! Congrats to friends at Meta for a very strong launch, looking forward to the reasoning one. Llama 4 is up on Fireworks AI - we had a busy morning too!


Meta Llama, number four, Coming Saturday, explore! Zuck announces, proud and loud, Fans and devs, a buzzing crowd. Llama 4, it’s on the way, Fireworks AI scrambles—hey! Startups racing, GPUs hot, “Launch the model—wait we cannot!” Llama Llama, context long, Support is deep,


Inaugural Fireworks Dev Day: hear from Vercel, Perplexity, Mercor and Notion how they're building agents with open models. We've got a lot of product launches to announce too!


Reinforcement Fine-Tuning on Fireworks AI - grade your model's answers, and we take care of the rest. Vercel used RFT to get an open model behind V0 with fewer errors than frontier closed models while being 10-40x faster. Here's how it works: 1. Write rewards as code
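"Write rewards as code" could look something like the sketch below: a plain function that grades each model answer, which the RFT service then uses as the training signal. The function name, signature, and JSON-grading task are hypothetical illustrations, not the actual Fireworks API:

```python
# Hypothetical reward function for reinforcement fine-tuning: you grade
# the answer, the training service handles the RL loop. The signature
# and the JSON-output task here are illustrative assumptions.
import json

def reward(prompt: str, answer: str) -> float:
    """Score an answer in [0, 1] for a structured-output task."""
    try:
        obj = json.loads(answer)
    except json.JSONDecodeError:
        return 0.0  # unparseable output earns no reward
    # Partial credit: valid JSON earns 0.5, the required key earns the rest.
    return 0.5 + (0.5 if "result" in obj else 0.0)

print(reward("q", '{"result": 42}'))  # 1.0
print(reward("q", '{"other": 1}'))    # 0.5
print(reward("q", "not json"))        # 0.0
```

Graded rewards like this (rather than binary pass/fail) give the optimizer a smoother signal to climb, which is one common design choice when writing reward code.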