Zixuan Zhang (@zhangzxuiuc)'s Twitter Profile
Zixuan Zhang

@zhangzxuiuc

NLP Researcher,
PhD Candidate @ CS UIUC (zhangzx-uiuc.github.io)

ID: 1493395075497398272

Joined: 15-02-2022 01:21:56

6 Tweets

66 Followers

61 Following

Zixuan Zhang (@zhangzxuiuc):

This is a brand-new framework for LLM knowledge editing (KE) that solves critical ambiguity problems in previous KE methods. Very exciting work!

Zixuan Zhang (@zhangzxuiuc):

🚀 Excited to share our new work at NAACL 2024! arxiv.org/pdf/2404.01652…
We study a critical generalization issue of retrieval-augmented generation (RAG) systems: how to maintain model performance as global knowledge shifts and the background corpus evolves. 🌎✨

🔎 Our key
Ke Yang (@empathyang):

👾 Introducing AgentOccam: Automating Web Tasks with LLMs! 🌐 AgentOccam showcases the impressive power of Large Language Models (LLMs) on web tasks, without any in-context examples, new agent roles, online feedback, or search strategies. 🍄🍄🍄
🧙 Link: arxiv.org/abs/2410.13825
Jiaxin-Qin (@jr_qjx):

I am at #EMNLP2024!

I will present our work "Why Does New Knowledge Create Messy Ripple Effects in LLMs?" on Wed at 10:30am.

Thanks to all the collaborators Heng Ji (@hengjinlp), Zixuan Zhang (@zhangzxUIUC), Chi Han (@Glaciohound), and Manling Li (@ManlingLi_).

Looking forward to having a chat!

Paper Link: arxiv.org/pdf/2407.12828
Ke Yang (@empathyang):

🙌 Happy New Year everyone!
🤖 New preprint: TinyHelen's First Curriculum: Training and Evaluating Tiny Language Models in a Simpler Language Environment
🤖 We train and evaluate tiny language models (LMs) using a novel text dataset with systematically simplified vocabularies and