
Nadav Timor ✈️ ICLR
@keyboardant
LLM inference, speculative decoding, open source.
Built novel decoding algorithms – default in Hugging Face Transformers (140k+ ⭐). Making LLMs faster + cheaper
ID: 943637648522072065
https://www.linkedin.com/in/nadav-timor 21-12-2017 00:22:02
248 Tweets
488 Followers
5.5K Following





Twitter! I can't believe no one told me that people are using vision encoders to retrieve document information these days. Based on a tip from Nadav Timor I read the "ColPali: Efficient Document Retrieval with Vision Language Models" paper, and it is very cool.



Do you want to speed up your inference time by 2X for free? Using speculative decoding (SD) but want to use your own drafter? Want lossless speedup with one line? Go to Nadav Timor's talk at ICML tomorrow at 3:30 PM and check out his poster afterward at 4:30 PM. Thank me later!
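The "lossless speedup" claim rests on the draft-then-verify loop at the heart of speculative decoding: a cheap drafter proposes a few tokens, the target model verifies them, and any mismatch is replaced by the target's own token, so the output matches target-only decoding exactly. Here's a minimal toy sketch of the greedy case, with stub next-token functions standing in for real models (this is an illustration, not the Hugging Face Transformers API):

```python
def speculative_decode(target_next, draft_next, prompt, max_new, k=4):
    """Greedy speculative decoding sketch.

    target_next / draft_next: fn(tokens) -> next token (greedy stand-ins
    for the large target model and the cheap draft model).
    Output is token-for-token identical to target-only decoding (lossless).
    """
    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_new:
        # 1) Draft phase: the cheap model proposes k tokens autoregressively.
        draft = []
        for _ in range(k):
            draft.append(draft_next(tokens + draft))
        # 2) Verify phase: the target model checks each drafted token
        #    (done in one parallel forward pass in practice; sequential
        #    here for clarity). The first mismatch is replaced by the
        #    target's own token, preserving the exact target output.
        for i, d in enumerate(draft):
            t = target_next(tokens + draft[:i])
            if t != d:
                tokens += draft[:i] + [t]
                break
        else:
            tokens += draft  # all k drafted tokens accepted
    return tokens[:len(prompt) + max_new]
```

In Hugging Face Transformers the same idea roughly corresponds to passing an assistant (draft) model to `generate`, which is where the "one line" comes from; the sketch above only shows the acceptance logic, not batching or probabilistic (non-greedy) verification.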


Algorithms developed by researchers in Prof. David Harel’s team and Intel Labs enable AI developers around the world to combine the power of different AI models “thinking” as one. The study led by PhD student Nadav Timor was presented at #ICML2025 >> bit.ly/faster-ai-eng


KV cache go brrr with Junchen Jiang's LMCache Lab! Join us tomorrow to learn more about next‑gen long‑context LLM inference: faster-llms.vercel.app



