Aditya Grover (@adityagrover_) 's Twitter Profile
Aditya Grover

@adityagrover_

Co-founder @InceptionAILabs. AI Prof @UCLA. Prev: PhD @StanfordAILab, bachelor's @IITDelhi.

ID: 39525395

Link: http://aditya-grover.github.io · Joined: 12-05-2009 15:43:56

871 Tweets

10.1K Followers

491 Following

Matt Shumer (@mattshumer_) 's Twitter Profile Photo

WOW. Mercury Coder feels almost as fast as models on Groq, but it runs on consumer hardware. Just wait till the Groq Inc team gets this on their platform... you'll be able to generate nicely-sized codebases in seconds. It's going to be insane.

Hritik Bansal (@hbxnov) 's Twitter Profile Photo

Humans are capable of providing preferences in similar, but non-identical contexts! So why restrict LLMs? 

In our #ACL2025 Findings paper, we propose Joint Preference Optimization 📷, a novel objective for aligning LLMs that optimizes preferences over joint instruction-response
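
The objective above scores preferences over joint (instruction, response) pairs rather than two responses to the same instruction. As a rough illustration only, not the exact JPO objective from the paper, a DPO-style loss applied to such joint pairs might look like the sketch below; the function name, argument names, and shapes are assumptions.

```python
# Illustrative DPO-style loss over *joint* (instruction, response) pairs.
# This is a sketch of the general idea, not the exact JPO objective from the paper;
# the function name, arguments, and shapes are assumptions for illustration.
import torch
import torch.nn.functional as F

def joint_preference_loss(logp_chosen, logp_chosen_ref,
                          logp_rejected, logp_rejected_ref,
                          beta: float = 0.1) -> torch.Tensor:
    # Each argument is the summed log-probability of an entire (instruction, response)
    # pair under the policy or the frozen reference model, so the chosen and rejected
    # items may come from *different* instructions.
    chosen_margin = logp_chosen - logp_chosen_ref
    rejected_margin = logp_rejected - logp_rejected_ref
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()

# Toy usage on a batch of 4 hypothetical preference pairs.
b = 4
loss = joint_preference_loss(torch.randn(b, requires_grad=True), torch.randn(b),
                             torch.randn(b, requires_grad=True), torch.randn(b))
loss.backward()
```
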
Inception Labs (@inceptionailabs) 's Twitter Profile Photo

We're live on Poe! Poe users can start interacting with Mercury Coder Small immediately through the chat interface. And developers and builders looking to integrate a dLLM into their own tools or platforms can access it via the Poe API. poe.com/Mercury-Coder-…
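
For builders, integration would typically look like a standard chat-completions call. The sketch below assumes an OpenAI-compatible endpoint; the base URL, key handling, and model id are placeholders rather than the actual Poe API values, so check the Poe documentation for the real interface.

```python
# Hypothetical sketch: calling a Mercury Coder-style dLLM through an
# OpenAI-compatible chat-completions endpoint. The base_url, api_key, and
# model id below are placeholder assumptions, not documented Poe API values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",  # assumption: provider's OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",                 # assumption: your provider API key
)

resp = client.chat.completions.create(
    model="mercury-coder-small",            # assumption: the exact model id may differ
    messages=[{"role": "user",
               "content": "Write a Python function that reverses a linked list."}],
)
print(resp.choices[0].message.content)
```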

Inception Labs (@inceptionailabs) 's Twitter Profile Photo

At the #microsoftbuild keynote, Satya Nadella unveiled NLWeb – an open project that enables websites to easily create AI-powered natural language interfaces. We are incredibly excited to be one of NLWeb’s founding partners, using our ultra-fast Mercury Small diffusion Large

Aditya Grover (@adityagrover_) 's Twitter Profile Photo

A Mercury-powered NLWeb will fundamentally alter web browsing, making it conversational, hallucination-free, and blazing fast. Vibe browsing.

Aditya Grover (@adityagrover_) 's Twitter Profile Photo

Congrats Google! A year ago when we started Inception, there was a lot of skepticism around anything non-autoregressive. With our Mercury models, our team showed that diffusion is vastly more efficient. And today, we are excited to see the growing validation and

Shufan (Jack) Li (@li78658171) 's Twitter Profile Photo

📢(1/11) Diffusion LMs are fast and controllable at inference time! But why restrict such benefits to processing text data? We are excited to announce LaViDa, one of the first and fastest large diffusion LMs for vision-language understanding!!

Hritik Bansal (@hbxnov) 's Twitter Profile Photo

🧑‍🍳Very excited to present LaViDa, one of the first diffusion language models for multimodal understanding! 

🌟Unlike autoregressive LMs, you can control the speed-quality tradeoff, and solve constrained generation problems out of the box 📦
🌟 We also release LaViDa-Reason, a
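
Both properties called out above follow from how masked-diffusion decoding works: generation is iterative unmasking, so the number of refinement steps is a speed-quality knob, and constrained generation falls out of pinning token positions. The toy decoder below is a minimal sketch of that mechanism under an assumed model callable; it is not LaViDa's actual API.

```python
# Toy sketch of masked-diffusion decoding: fewer steps = faster, more steps = higher
# quality, and constrained positions stay fixed throughout. `model` is a hypothetical
# callable returning per-position logits; this is NOT LaViDa's real interface.
import torch

def diffusion_decode(model, length, num_steps, constraints=None, mask_id=0):
    """constraints: dict {position: token_id} pinned for the whole trajectory."""
    tokens = torch.full((length,), mask_id, dtype=torch.long)
    if constraints:
        for pos, tok in constraints.items():
            tokens[pos] = tok
    masked = tokens == mask_id                       # constrained slots start unmasked
    for step in range(num_steps):
        logits = model(tokens)                       # (length, vocab) logits per position
        conf, pred = logits.softmax(-1).max(-1)      # confidence and argmax per position
        n_remaining = int(masked.sum())
        if n_remaining == 0:
            break
        # Reveal a fraction of the remaining masked positions, most confident first.
        n_reveal = max(1, n_remaining // (num_steps - step))
        candidates = torch.where(masked)[0]
        order = conf[candidates].argsort(descending=True)[:n_reveal]
        reveal = candidates[order]
        tokens[reveal] = pred[reveal]
        masked[reveal] = False
    return tokens

# Usage with a dummy stand-in "model" (random logits over a 1000-token vocab):
dummy = lambda toks: torch.randn(toks.shape[0], 1000)
out = diffusion_decode(dummy, length=16, num_steps=4, constraints={0: 5, 15: 7})
```

Fewer num_steps trades quality for speed, and the constraints dictionary keeps chosen positions fixed, which is the "out of the box" constrained generation the tweet refers to.
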
Aditya Grover (@adityagrover_) 's Twitter Profile Photo

Diffusion language models go multimodal! Particularly impressive to see the speed and quality results on visual reasoning benchmarks. Great work led by my students Shufan (Jack) Li, Hritik Bansal and amazing collaborators.

Tanishq Mathew Abraham, Ph.D. (@iscienceluvr) 's Twitter Profile Photo

LaViDa: A Large Diffusion Language Model for Multimodal Understanding

"We introduce LaViDa, a family of VLMs built on DMs. We build LaViDa by equipping DMs with a vision encoder and jointly fine-tune the combined parts for multimodal instruction following. "

"LaViDa achieves
Deedy (@deedydas) 's Twitter Profile Photo

1. LaViDa: alphaxiv.org/abs/2505.16839
2. MMaDA: alphaxiv.org/abs/2505.15809
3. dKV-Cache: alphaxiv.org/abs/2505.15781
4. d1 (scaling reasoning): alphaxiv.org/abs/2504.12216
5. LLaDA: alphaxiv.org/abs/2502.0999

Deedy (@deedydas) 's Twitter Profile Photo

This is the fastest coding model in the world. You need to watch this 1 minute video to really experience what's possible. Speak to your computer and get working code in TWO seconds.

Aditya Grover (@adityagrover_) 's Twitter Profile Photo

What a fun Mercury demo combining two of the most latency-sensitive applications: voice + code! A preview into truly unique experiences that will become viable with ultra-fast diffusion language models.

Tanishq Mathew Abraham, Ph.D. (@iscienceluvr) 's Twitter Profile Photo

Accelerating Diffusion LLMs via Adaptive Parallel Decoding

"We therefore introduce adaptive parallel decoding (APD), a novel method that dynamically adjusts the number of tokens sampled in parallel."

"Notably, Dream with APD surpasses the speed of autoregressive Qwen 7B and

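One generic way to get this adaptive behavior is to commit, at each decoding step, only those masked positions whose predicted confidence clears a threshold, so the number of tokens sampled in parallel tracks the model's certainty. The sketch below illustrates that idea only; it is not the APD algorithm from the paper.

```python
# Generic sketch of confidence-adaptive parallel decoding for a diffusion LLM:
# commit every masked position whose prediction clears a confidence threshold,
# falling back to the single most confident one so decoding always progresses.
# Illustrative only; this is not the paper's APD method.
import torch

def adaptive_parallel_step(logits, masked, threshold=0.9):
    """logits: (length, vocab); masked: (length,) bool. Returns (positions, tokens)."""
    probs = logits.softmax(-1)
    conf, pred = probs.max(-1)
    conf = conf.masked_fill(~masked, -1.0)   # only still-masked positions are candidates
    accept = masked & (conf >= threshold)
    if not accept.any():                     # always commit at least one token per step
        accept[conf.argmax()] = True
    positions = torch.where(accept)[0]
    return positions, pred[positions]

# Usage: on a confident step many positions are committed at once; on an
# uncertain step only one is, so per-step parallelism adapts automatically.
logits = torch.randn(32, 1000)
masked = torch.ones(32, dtype=torch.bool)
positions, tokens = adaptive_parallel_step(logits, masked, threshold=0.5)
```
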
IJCAIconf (@ijcaiconf) 's Twitter Profile Photo

Announcing the 2025 IJCAI Computers and Thought Award winner ✨ Aditya Grover (Inception, UCLA). Dr. Grover is honored for uniting deep generative models, representation learning & RL to advance scientific reasoning.

Congratulations!

ijcai.org/awards
Aditya Grover (@adityagrover_) 's Twitter Profile Photo

Thank you for the honor — truly an acknowledgment of the tireless efforts of all my students, mentors, collaborators, friends and family over the years!

Ryan Marten (@ryanmart3n) 's Twitter Profile Photo

Announcing OpenThinker3-7B, the new SOTA open-data 7B reasoning model: improving over DeepSeek-R1-Distill-Qwen-7B by 33% on average over code, science, and math evals.

We also release our dataset, OpenThoughts3-1.2M, which is the best open reasoning dataset across all data
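
To inspect the released data, the usual Hugging Face datasets pattern applies; the repository id below is an assumption inferred from the dataset's name, so verify the exact id on the hub before running.

```python
# Hypothetical sketch of loading the released reasoning dataset with Hugging Face
# `datasets`. The repo id is an assumption inferred from the dataset name;
# confirm it on huggingface.co before use.
from datasets import load_dataset

ds = load_dataset("open-thoughts/OpenThoughts3-1.2M", split="train")  # assumed repo id
print(len(ds), ds[0].keys())  # dataset size and the fields of the first example
```
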
Shufan (Jack) Li (@li78658171) 's Twitter Profile Photo

We're presenting OmniFlow at CVPR 2025. Check out our work at Poster #241 (ExHall D) on Jun 14, 8-10am. Additionally, my advisor Aditya Grover will give a talk about our recent work on multimodal diffusion language models at the WorldModelBench workshop on June 12.

Hritik Bansal (@hbxnov) 's Twitter Profile Photo

🥳 Excited to share that VideoPhy-2 has been awarded 🏆 Best Paper at the World Models Workshop (physical-world-modeling.github.io) #ICML2025! Looking forward to presenting it as a contributed talk at the workshop! 😃 w/ Clark Peng, Yonatan Bitton, Roman, Aditya Grover, Kai-Wei Chang