VITA Group (@vitagrouput)'s Twitter Profile
VITA Group

@vitagrouput

VITA Group @ UT Austin w/ Prof Atlas Wang | vita-group.github.io | Run by VITA students (PI is busy changing diapers 😄). Tweets only reflect personal views.

ID: 1650633177432563712

Joined: 24-04-2023 22:50:20

480 Tweets

1.1K Followers

3.3K Following

Junyuan "Jason" Hong (@hjy836) 's Twitter Profile Photo

New Finding: 🧠 LLMs Can Get Brain Rot (too)!

[1/6] 🚨 LLMs can suffer from “brain rot” when continually fed trivial, highly engaging Twitter/X content.

🧩 Their reasoning, long-context understanding, safety, and even personality traits persistently deteriorate.

Alex Prompter (@alex_prompter):

This might be the most disturbing AI paper of 2025 ☠️

Scientists just proved that large language models can literally rot their own brains the same way humans get brain rot from scrolling junk content online.

They fed models months of viral Twitter data: short, high-engagement…

Greg Holste (@giholste):

Awesome to see PanEcho highlighted in a special podcast episode from JAMA! "This is the sort of thing that if 10 or 15 years ago you said we'd be able to do it, I suspect there would have been a lot of skepticism." (w/ Rohan Khera, Evangelos K. Oikonomou, CarDS Lab, VITA Group)

WIRED (@wired):

A new study shows that feeding large language models low-quality, high-engagement content from social media lowers their cognitive abilities. wired.com/story/ai-model…

VITA Group (@vitagrouput):

The "brain rot" paper just got a highlight in @nature! 🧠 From WIRED to Nature — the conversation about AI training data quality matters now more than ever. 🔗 nature.com/articles/d4158…

Hanqing Zhu (@zhu_hanqin41424):

🚨 New Work!

🤔 Is RL black-box weight tinkering?

😉 No. We provably show RLVR follows a 🧭 — always updating the same off-principal regions while preserving the model's core spectra.

⚠️ Different optimization regime than SFT — SFT-era PEFT tricks can misfire (like PiSSA, the…

Yuandong Tian (@tydsh):

📢📢 We know that in post-training, RL tends to generalize much better than SFT; on the other hand, we know both are nothing but a ΔW on top of the base model's weights. 🤔 Why not open the black box of the model and check how their ΔWs are different? 💡 It turns out that RL takes a…
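
For readers who want to poke at this themselves, here is a minimal, hedged sketch (not the authors' code) of the kind of diagnostic the two threads above describe: take ΔW = W_tuned − W_base for one layer, SVD the base weights, and measure how much of the update's energy lands inside the top-k principal subspace versus off-principal directions, plus how far the top singular values move. The function name `principal_subspace_stats`, the choice of k, and the random stand-in matrices are illustrative assumptions, not details from the papers.

```python
# Hedged sketch, not the authors' code: probe where a fine-tuning update
# dW = W_tuned - W_base lives relative to the base weights' principal
# (top singular) directions, in the spirit of the RLVR-vs-SFT threads above.
import torch

def principal_subspace_stats(W_base: torch.Tensor, W_tuned: torch.Tensor, k: int = 16):
    """Return (share of dW's energy inside the base top-k singular subspace,
    largest absolute shift among the base top-k singular values)."""
    dW = W_tuned - W_base
    U, S, Vh = torch.linalg.svd(W_base, full_matrices=False)
    Uk, Vk = U[:, :k], Vh[:k, :].T                    # top-k singular directions
    dW_principal = Uk @ (Uk.T @ dW @ Vk) @ Vk.T       # projection onto that subspace
    energy_share = (dW_principal.norm() / dW.norm()).item()
    S_tuned = torch.linalg.svdvals(W_tuned)
    top_spectrum_shift = (S_tuned[:k] - S[:k]).abs().max().item()
    return energy_share, top_spectrum_shift

# Toy usage with random stand-in matrices; in practice W_base / W_tuned would be
# the same layer's weights from a base checkpoint and an RL- or SFT-tuned one.
W_base = torch.randn(512, 512)
W_tuned = W_base + 0.01 * torch.randn(512, 512)
print(principal_subspace_stats(W_base, W_tuned, k=16))
```

Under the threads' framing, an update whose energy sits mostly off-principal while the top singular values barely move is what one would look for in the RL case; the sketch only computes those two quantities and makes no claim about what any particular checkpoint will show.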

Shiwei Liu (@shiwei_liu66):

Love seeing OpenAI highlight sparse circuits — sparsity is finally getting the attention it deserves. In our earlier work, we showed how sparse training can unlock robustness, efficiency, and better scaling: ICML'21 • NeurIPS'21 • ICLR'22 • ICML'24 • ICLR'23. Many great…