
Archiki Prasad
@archikiprasad
PhD student @uncnlp in #NLProc, #ML, advised by @mohitban47 | @Apple Scholar in AI/ML | Prev (intern): @AIatMeta (FAIR), @allenai_org
ID: 806930171701800960
http://archiki.github.io/ 08-12-2016 18:35:19
419 Tweets
1.1K Followers
897 Following



🎉Excited to share that I’ll be starting my CS PhD journey at UNC-Chapel Hill UNC Computer Science this fall! 🎓 I’ll be working with the renowned Mohit Bansal at UNC NLP — a dream come true! ✨ Huge thanks to everyone who's helped me get here. Can't wait to begin this new life and research journey! 🧳🚀



🥳Excited to share that I’ll be joining UNC Computer Science as a postdoc this fall. Looking forward to working with Mohit Bansal & amazing students at UNC AI. I'll continue working on retrieval, aligning knowledge modules with LLMs' parametric knowledge, and expanding to various modalities.


🎉 Yay, welcome hyunji amy lee -- super excited to have you join us as a postdoc! 🤗 Welcome to our MURGe-Lab + UNC AI + UNC Computer Science family & the beautiful Research Triangle area -- looking forward to the many fun+impactful collaborations together 🔥

🥳Our work UTGen & UTDebug on teaching LLMs to generate effective unit tests & improve code debugging/generation has been accepted to Conference on Language Modeling #COLM2025! Stay tuned for more exciting results -- e.g., using 32B-scale UTGen models to improve debugging with frontier models like

🎉 Very excited to see TaCQ — our work on task-conditioned mixed-precision quantization that draws on interpretability methods — accepted to Conference on Language Modeling #COLM2025 with strong scores and a nice shoutout from the AC! Kudos to Hanqi on leading this effort!


🥳 Excited to share our work -- Retrieval-Augmented Generation with Conflicting Evidence -- on addressing conflict in RAG due to ambiguity, misinformation, and noisy/irrelevant evidence has been accepted to Conference on Language Modeling #COLM2025! Our new benchmark RAMDocs proves challenging for

🎉 Glad to see our work on handling conflicting & noisy evidence and ambiguous queries in RAG systems (via a new benchmark & multi-agent debate method) has been accepted to #COLM2025 Conference on Language Modeling!! 🇨🇦 Congrats to Han on leading this effort. More details in the thread below and


🎉 Our paper, GenerationPrograms, which proposes a modular framework for attributable text generation, has been accepted to Conference on Language Modeling! GenerationPrograms produces a program that executes to text, providing an auditable trace of how the text was generated and major gains on

🥳 Gap year update: I'll be joining Ai2/University of Washington for 1 year (Sep 2025 - Jul 2026 -> JHU Computer Science) & looking forward to working with amazing folks there, incl. Ranjay Krishna, Hanna Hajishirzi, Ali Farhadi. 🚨 I’ll also be recruiting PhD students for my group at JHU Computer Science for Fall





🚀 I'm recruiting PhD students to join my lab (jaehong31.github.io) at NTU Singapore, starting Spring 2026. If you're passionate about doing cutting-edge and high-impact research in multimodal AI, Trustworthy AI, continual learning, or video generation/reasoning,
