Jack Cook (@jackcookjack)'s Twitter Profile
Jack Cook

@jackcookjack

studying @oxneuro | formerly @mit, @nytimesrd, @nvidia

ID: 233396359

Website: https://jackcook.com · Joined: 03-01-2011 03:36:10

81 Tweets

765 Followers

423 Following

Lulu NYT (@lulugnavarro)

NPR is not state-affiliated media. It is public media. The NYT is not propaganda. It is the most robust news organization in the English-speaking world. The bias as to who is being targeted on this site by its leadership is so very clear. And it's not a small thing.

Merriam-Webster (@merriamwebster)

To address extreme levels of language enthusiasm, we've applied the following temporary limits:
- Writers, Lyricists, and other Professional Wordsmiths: 6000 definition lookups/day
- Thoughtful Readers and Crossword Puzzlers: 600/day
- Everyday English Speakers: 300/day

Peyton Walters (@peywalt)

Really excited to see Jack and Charles 🎉 Frye take our internal work and make it available to everyone. I think this is the highest-signal resource out there for understanding how these engines perform. Fun fact: Jack was the person who convinced me to join Modal!

Nikhil Murthy (@nikhilkmurthy)

There are a ton of knobs to think about when serving LLMs (framework, etc.), and the answer can be wildly different depending on your setup (model size, quantization, ...). S/o to Charles 🎉 Frye, Jack Cook and team for making it incredibly easy to see tradeoffs and even run…
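To make the "knobs" concrete: in vLLM, for example, several of them are constructor arguments on the offline LLM class. This is a minimal sketch, not from the thread; the model name and every value below are illustrative assumptions, not recommendations.

    from vllm import LLM, SamplingParams

    # Each keyword argument is one of the serving "knobs" mentioned above;
    # the specific values here are illustrative assumptions.
    llm = LLM(
        model="Qwen/Qwen2.5-Coder-7B-Instruct",  # model choice/size
        tensor_parallel_size=1,                  # how many GPUs to shard across
        quantization="fp8",                      # quantization scheme
        gpu_memory_utilization=0.90,             # fraction of VRAM vLLM may claim
        max_model_len=8192,                      # context-length cap
    )

    params = SamplingParams(temperature=0.0, max_tokens=256)
    outputs = llm.generate(["Reverse a string in Python."], params)
    print(outputs[0].outputs[0].text)

Changing any one of these (more GPUs, a different quantization scheme, a longer context cap) can shift latency and throughput substantially, which is why seeing the tradeoffs side by side is useful.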

Charles 🎉 Frye (@charles_irl)

Dozens of teams have asked my advice on running LLMs. How fast is DeepSeek V3 with vLLM on 8 GPUs? What's the max throughput of Qwen 2.5 Coder with SGLang on one H100? Running & sharing benchmarks ad hoc was too slow, so we built a tiny app: the LLM Engine Advisor.
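As a hedged sketch of the kind of measurement behind questions like these, one crude way to estimate decode throughput with vLLM's offline API is to time a saturating batch and divide generated tokens by wall-clock time. The model, batch size, and prompt are assumptions for illustration; the LLM Engine Advisor's actual methodology may differ.

    import time
    from vllm import LLM, SamplingParams

    # Illustrative setup: one GPU, a mid-size coder model, greedy decoding.
    llm = LLM(model="Qwen/Qwen2.5-Coder-7B-Instruct")
    params = SamplingParams(temperature=0.0, max_tokens=128)
    prompts = ["Summarize the history of GPU computing."] * 64  # batch to keep the engine busy

    start = time.perf_counter()
    outputs = llm.generate(prompts, params)
    elapsed = time.perf_counter() - start

    # Count only generated (output) tokens, the usual numerator for decode throughput.
    generated = sum(len(o.outputs[0].token_ids) for o in outputs)
    print(f"~{generated / elapsed:.1f} output tokens/sec")

A single number like this hides a lot (prefill vs. decode, latency percentiles, batching behavior), which is part of what makes standardized, shareable benchmarks more useful than ad hoc runs.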

vLLM (@vllm_project)

Thanks for the great investigation! vLLM values usability, performance, and building the ecosystem for LLM inference. Together, let's make open source better ❤️ Stay tuned for the latest updates from vLLM!