UNC NLP (@uncnlp) 's Twitter Profile
UNC NLP

@uncnlp

NLP (+ML/AI/CV) research at @UNCCS @UNC
Faculty: @mohitban47+@gberta227+@snigdhac25+@shsriva+@tianlongchen4+@huaxiuyaoml+@dingmyu+@zhun_deng +@SenguptRoni et al

ID: 875914488020701188

http://nlp.cs.unc.edu · Joined 17-06-2017 03:14:22

2.2K Tweets

3.3K Followers

405 Following

Archiki Prasad (@archikiprasad) 's Twitter Profile Photo

I'll be at #ICML2025 this week to present ScPO: 📌 Wednesday, July 16th, 11:00 AM-1:30 PM 📍 East Exhibition Hall A-B, E-2404. Stop by or reach out to chat about improving reasoning in LLMs, self-training, or just tips about being on the job market next cycle! 😃

Mohaiminul (Emon) Islam (@mmiemon) 's Twitter Profile Photo

Check out our new paper, Video-RTS 🎥, a data-efficient RL method for complex video reasoning tasks. 🔹 Pure RL w/ output-based rewards. 🔹 Novel sparse-to-dense Test-Time Scaling (TTS) that expands input frames via self-consistency. 💥 96.4% less training data! More in the thread 👇
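The exact Video-RTS procedure is in the paper; as a generic illustration of the idea the tweet describes, the sketch below (with an assumed `answer_fn` standing in for the video model) samples several answers at a sparse frame budget and only densifies the frames while the samples disagree:

```python
from collections import Counter

def self_consistent_answer(answer_fn, frames, num_samples=5):
    """Sample several answers and return the majority vote plus its agreement rate."""
    votes = Counter(answer_fn(frames) for _ in range(num_samples))
    best, count = votes.most_common(1)[0]
    return best, count / num_samples

def sparse_to_dense(answer_fn, all_frames, start=4, agreement=0.6):
    """Start from a sparse, uniform frame subset; densify only while answers disagree."""
    n = start
    while True:
        step = max(1, len(all_frames) // n)
        frames = all_frames[::step][:n]           # uniform subsample of n frames
        ans, rate = self_consistent_answer(answer_fn, frames)
        if rate >= agreement or n >= len(all_frames):
            return ans, n
        n *= 2  # double the frame budget and retry
```

This is only a sketch of self-consistency-gated frame expansion, not the paper's implementation: the stopping rule, sampling temperature, and frame-selection policy are all assumptions here.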

Vaidehi Patil (@vaidehi_patil_) 's Twitter Profile Photo

The MUGen workshop at #ICML2025 is happening now! Stop by for talks on adversarial ML, unlearning as rational belief revision, failure modes in unlearning, robust LLM unlearning, and the bright vs. dark side of forgetting in generative AI!

Yiyang Zhou (@aiyiyangz) 's Twitter Profile Photo

GLIMPSE ๐Ÿ‘๏ธ | What Do LVLMs Really See in Videos? A new benchmark for video understanding: 3,269 videos and 4,342 vision-centric questions across 11 spatiotemporal reasoning tasks. Test your model to see if it truly thinks with videoโ€”or is merely performing frame scanning.

GLIMPSE ๐Ÿ‘๏ธ | What Do LVLMs Really See in Videos?
A new benchmark for video understanding:

3,269 videos and 4,342 vision-centric questions across 11 spatiotemporal reasoning tasks.

Test your model to see if it truly thinks with videoโ€”or is merely performing frame scanning.
David Wan (@meetdavidwan) 's Twitter Profile Photo

🎉 Our paper, GenerationPrograms, which proposes a modular framework for attributable text generation, has been accepted to the Conference on Language Modeling (COLM)! GenerationPrograms produces a program that executes to text, providing an auditable trace of how the text was generated and major gains on

Jaemin Cho (on faculty job market) (@jmin__cho) 's Twitter Profile Photo

🥳 Gap year update: I'll be joining Ai2/University of Washington for 1 year (Sep 2025-Jul 2026 -> JHU Computer Science) & looking forward to working with amazing folks there, incl. Ranjay Krishna, Hanna Hajishirzi, Ali Farhadi. 🚨 I'll also be recruiting PhD students for my group at JHU Computer Science for Fall

Mohit Bansal (@mohitban47) 's Twitter Profile Photo

Congrats again Jaemin Cho, to you and to AI2/UW! 🎉 Looking forward to your continued exciting work + collaborations there & later at JHU Computer Science (students, make sure to apply to the amazing Jaemin for your PhD)! 🙂

Kerem Zaman (@keremzaman3) 's Twitter Profile Photo

I'll be at #ACL2025 in Vienna 🇦🇹! DM me if you'd like to chat about interpretability, safety and reasoning, or catch me during our oral presentation on July 29th (Hall N.1, Session 9) 👇 x.com/akendapadi/sta…

Duy Nguyen (@duynguyen772) 's Twitter Profile Photo

🚀 We introduce GrAInS, a gradient-based attribution method for inference-time steering (of both LLMs & VLMs). ✅ Works for both LLMs (+13.2% on TruthfulQA) & VLMs (+8.1% win rate on SPA-VL). ✅ Preserves core abilities (<1% drop on MMLU/MMMU). LLMs & VLMs often fail because

Archiki Prasad (@archikiprasad) 's Twitter Profile Photo

📢 Excited to share our new paper, where we introduce ✨GrAInS✨, an inference-time steering approach for LLMs and VLMs via token attribution. Some highlights: ➡️ GrAInS leverages contrastive, gradient-based attribution to identify the most influential textual or visual tokens
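GrAInS itself operates on LLM/VLM representations (see the paper for the actual method); purely to illustrate what "contrastive, gradient-based attribution" means, here is a minimal sketch on a hypothetical linear scorer, where the gradient of the contrastive logit with respect to each token embedding is known in closed form and the input-times-gradient score ranks tokens:

```python
import numpy as np

def contrastive_input_x_gradient(embeddings, w_pos, w_neg):
    """Contrastive input-x-gradient attribution for a toy linear classifier.

    The toy model scores class c as s_c = sum_i dot(w_c, e_i) over token
    embeddings e_i, so d(s_pos - s_neg)/d(e_i) = (w_pos - w_neg) for every
    token, and the input-x-gradient score is dot(e_i, w_pos - w_neg)."""
    grad = w_pos - w_neg          # gradient of the contrastive logit
    return embeddings @ grad      # one attribution score per token

def top_k_tokens(tokens, scores, k=2):
    """Return the k tokens with the largest-magnitude attribution."""
    order = np.argsort(-np.abs(scores))
    return [tokens[i] for i in order[:k]]
```

The toy model, weights, and helper names are all illustrative assumptions; in practice the gradient would come from backpropagation through a real LLM/VLM rather than a closed form.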

Elias Stengel-Eskin (on the faculty job market) (@eliaseskin) 's Twitter Profile Photo

🚨 Excited to announce GrAInS, our new LLM/VLM steering method that uses gradient-based attribution to build more targeted interventions. Some highlights: 1️⃣ Compatible with both LLMs and VLMs, can intervene on text and vision tokens 2️⃣ Gains across a variety of tasks +

Han Lin (@hanlin_hl) 's Twitter Profile Photo

My talented collaborator & mentor @jaemincho will be recruiting PhD students at JHU Computer Science for Fall 2026! If you're interested in vision, language, or generative models, definitely reach out! 🎓🙌

Snigdha Chaturvedi (@snigdhac25) 's Twitter Profile Photo

Will be attending #ACL2025. Happy to talk about the two papers being presented from our lab on (1) Identifying unreliable narrators w Anneliese Brei Shashank Srivastava (2) Improving fairness in multi-document summarization w Haoyuan Li Rui Zhang @uncnlp

Anneliese Brei (@annelieseb_) 's Twitter Profile Photo

(1/7) I am delighted to share our paper, Classifying Unreliable Narrators with Large Language Models. If you are at #ACL2025, please come to our in-person oral presentation on Tuesday during Session 9 from 14:00-15:30 MESZ.

Elias Stengel-Eskin (on the faculty job market) (@eliaseskin) 's Twitter Profile Photo

🇦🇹 I'm on my way to #ACL2025 to help present two papers (🧵s below) ➡️ MAT-Steer (07/30 at 11am), our method for steering LLMs w/ multiple attributes (e.g. truthfulness, bias reduction, and toxicity mitigation) simultaneously. ➡️ LAQuer (07/28 at 11am), a new task/framework for

Jaehong Yoon (on the faculty job market) (@jaeh0ng_yoon) 's Twitter Profile Photo

🚀 I'm recruiting PhD students to join my lab (jaehong31.github.io) at NTU Singapore, starting Spring 2026. If you're passionate about doing cutting-edge and high-impact research in multimodal AI, Trustworthy AI, continual learning, or video generation/reasoning,