
Hanxu Hu
@huhanxu1
1st Year PhD Student supervised by @RicoSennrich and @iatitov, Intern @MSFTResearch | Prev @EdinburghNLP | Interested in Language Models.
ID: 1508606336103325702
http://hanxuhu.github.io 29-03-2022 00:48:02
85 Tweets
135 Followers
336 Following

It's raining conference decisions! Congrats to the first authors Zeyu Huang (@ICLR 2025), Hanxu Hu, Uri Berger, Coleman Haley, and the rest of the team!

We are excited to have Songlin Yang present: Linear Attention and Beyond 🚀🚀🚀 Time: Mar 5, 4pm EST. Sign up at scale-ml.org to join our mailing list for the Zoom link.

University of Zurich and Huawei researchers explore how #LLMs can improve document-level 📄 #AI #translation by preserving context across segments 🔁 and integrating additional knowledge layers 💡 University of Zurich Universität Zürich Zurich Computational Linguistics Group Hanxu Hu Jannis Vamvas Rico Sennrich slator.com/how-large-lang…

Check out our new reward model calibration paper! We use Elo scores from ChatbotArena to calibrate RMs and mitigate the over-valuation problem of reward models. Really nice to work with my friends Xiao Zhu, Chenmien Tan, Pinzhen "Patrick" Chen, and my PhD supervisor Rico Sennrich!

To appear at #NAACL2025 (2 orals, 1 poster)! Coleman Haley: which classes of words are most grounded in (perceptual proxies of) meaning? Uri Berger: how do image descriptions vary across languages and cultures? Hanxu Hu: can LLMs follow sequential instructions? 🧵 below