
Xinyu Zhu
@tianhongzxy
CS Ph.D. student @UVA. Summer intern @Apple. I work on improving LLM reasoning. Previously: master's @Tsinghua_uni, intern @MSFTResearch Asia. #NLProc
ID: 932230530766061569
https://zhuxinyu.top 19-11-2017 12:54:13
105 Tweets
140 Followers
445 Following

🚀 Check out our new #ICML2025 paper led by Zhepei Wei! It achieves 1.73× faster LLM decoding with no draft model needed and no discrepancy from vanilla decoding!

🚀 Interesting work by Taiqiang Wu! 💡 Quick takeaway: If you collect additional instruction-following data for SFT, it's better to fine-tune the Base model and then graft the weights onto its corresponding Instruct model — rather than continuing to train the Instruct model!