Xinpeng Wang (@xinpengwang_)'s Twitter Profile
Xinpeng Wang

@xinpengwang_

Visitor @NYUDataScience | PhD student @LMU_Muenchen | Eval & Safety Alignment | Previously @TU_Muenchen

ID: 1225082179476246528

Link: http://xinpeng-wang.github.io · Joined: 05-02-2020 15:42:16

43 Tweets

184 Followers

354 Following

Yang Zhang (@yangzhang95123)

Sebastian Raschka Thank you for sharing this fascinating work! Our study (arxiv.org/abs/2405.18218), released a month prior to this work, already reveals the redundancy of attention layers. In our research, we applied iterative pruning to both attention and feed-forward layers, and our experiments

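For readers unfamiliar with the technique, iterative structural pruning of this kind usually follows a greedy loop: temporarily disable one sublayer at a time, score the model without it, permanently drop the sublayer whose removal hurts least, and repeat. The sketch below is a hypothetical illustration of that loop, not code from arxiv.org/abs/2405.18218; `model`, `sublayers`, and `evaluate` are assumed stand-ins, and each entry in `sublayers` is imagined to expose `disable()`/`enable()`.

```python
# Hypothetical sketch of greedy iterative sublayer pruning.
# `model`, `sublayers`, and `evaluate` are assumed stand-ins, not a real API.
def iterative_prune(model, sublayers, evaluate, budget):
    """Drop `budget` sublayers, always removing the one whose removal
    hurts a held-out metric (higher = better) the least."""
    removed = []
    for _ in range(budget):
        scores = {}
        for name, layer in sublayers.items():
            layer.disable()                 # temporarily bypass this sublayer
            scores[name] = evaluate(model)  # score the model without it
            layer.enable()                  # restore it before the next candidate
        best = max(scores, key=scores.get)  # least harmful to remove
        sublayers.pop(best).disable()       # prune it permanently
        removed.append(best)
    return removed
```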
Xinpeng Wang (@xinpengwang_)

The redundancy of attention layers has been shown in our earlier LLM structural pruning work. As shown in the figure, you can prune nearly half of the attention layers with minimal performance drop.
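As a concrete illustration of what dropping attention sublayers means, here is a minimal PyTorch sketch: a toy pre-norm transformer in which each block's attention sublayer can be bypassed while the feed-forward path and residual stream stay intact. This is not the paper's model or code, and all names (`ToyTransformer`, `Block`, `skip_attn`) are made up; on a randomly initialised toy model it only demonstrates the mechanics, not the performance result.

```python
# Minimal, illustrative PyTorch sketch: a toy pre-norm transformer whose
# attention sublayers can be bypassed individually ("pruned").
# Not the paper's model or code; all names are illustrative.
import torch
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                 nn.Linear(4 * d_model, d_model))
        self.skip_attn = False  # set True to "prune" this attention sublayer

    def forward(self, x):
        if not self.skip_attn:
            h = self.ln1(x)
            x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.mlp(self.ln2(x))

class ToyTransformer(nn.Module):
    def __init__(self, n_layers=8, d_model=64):
        super().__init__()
        self.blocks = nn.ModuleList([Block(d_model) for _ in range(n_layers)])

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        return x

model = ToyTransformer().eval()
x = torch.randn(2, 16, 64)  # (batch, sequence, hidden)

with torch.no_grad():
    baseline = model(x)
    for i, block in enumerate(model.blocks):
        block.skip_attn = (i % 2 == 1)  # bypass every second attention sublayer
    pruned = model(x)

print("mean |Δ| after dropping half the attention sublayers:",
      (baseline - pruned).abs().mean().item())
```

In a real experiment the same switch would be applied to a trained LLM and the effect measured on downstream benchmarks rather than on raw activations.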

Bolei Ma (@boleimabolei)

🎉 Our paper “Algorithmic Fidelity of LLMs in Generating Synthetic German Public Opinions” is accepted at the #ACL2025 main conference as an oral presentation! 🇩🇪🤖 We study how well LLMs simulate real survey responses using open-ended German data, showing a left-leaning bias.
