
Saujas Vaduguru
@saujasv
PhD student @LTIatCMU | Previous: Student @iiit_hyderabad, Intern @ChandarLab @Mila_Quebec | he/him/his
ID: 1726141027
http://saujasv.github.io 03-09-2013 17:12:46
145 Tweets
454 Followers
535 Following



Awesome to see Shashwat Goel ✈️ ICML 2025 and Nikhil Chandak getting recognized for this work 🏆🎉 Congratulations!


We are thrilled to have Mahathi Vempati as our next speaker! She will discuss her work on an exponential separation between parallel quantum and classical query models. The talk is tomorrow (Monday at 8pm IST). Details are on the poster and website: braqiiit.github.io/Talks.html



I am thrilled to announce that I will be joining Duke University Duke CompSci as an Assistant Professor in summer 2025. Super excited for the next chapter! Stay tuned for the launch of my lab 🧠🤖



I’m excited to share some news: My group and I are moving to NYU Psychology! I’ll be affiliated with both cognition/perception and social psych, and I am SO thrilled about this unique opportunity to help bridge the two programs 🤗

Hila Gonen Nora Belrose Special shoutout to Shashwat Singh, who approached me as an undergrad to kickstart this project. It was an absolute pleasure working with him! (8/8)

Students at IIIT Hyderabad, supposedly one of the top engineering colleges in the country, are facing an ongoing health crisis caused by appalling mismanagement and negligence that has persisted for well over a year. 🧵👇 on mass typhoid outbreaks, food poisoning, underreporting..




Can LLMs cater to diverse cultures in text generation? We find: 1️⃣lexical variance across nationalities 2️⃣culturally salient words 3️⃣weak correlation w/ cultural values 📜arxiv.org/abs/2406.11565 🤗huggingface.co/datasets/shail… 💻github.com/shaily99/eecc 🎉EMNLP 2025🎉 w/ Fernando Diaz 🧵


New ARC-AGI paper ARC Prize w/ fantastic collaborators Wen-Ding Li @ ICLR'25 Keya Hu Zenna Tavares evanthebouncy Basis. For few-shot learning: better to construct a symbolic hypothesis/program, or have a neural net do it all, à la in-context learning? cs.cornell.edu/~ellisk/docume…



new multi-turn instruction grounding dataset with Will McCarthy and Saujas Vaduguru - multi-modal instruction : drawing + txt - verifiable execution : 2D CAD gym env - easy eval : API → score - baselines : human vs VLMs - large : 15,163 inst-exe rounds github.com/AutodeskAILab/… [1/n]
![Dataset overview photo from evanthebouncy's tweet](https://pbs.twimg.com/media/GtUr1tSbMAIbXry.png)

Just look at these multi-modal refinement instructions! How would we ground them into reasonable executions?? joint work with Will McCarthy Saujas Vaduguru Judy Fan Daniel Fried Karl D.D. Willis Justin Matejka

