Yihao Xue (@xue_yihao65785)'s Twitter Profile
Yihao Xue

@xue_yihao65785

CS PhD student @UCLA. Reliability of machine learning.

ID: 1654352061985878016

Link: https://sites.google.com/g.ucla.edu/yihao-xue/home · Joined: 05-05-2023 05:07:37

50 Tweets

428 Followers

487 Following

Baharan Mirzasoleiman (@baharanm)'s Twitter Profile Photo

Double descent confirms the benefit of larger models. But when there is label noise in the data, a larger model size can hurt performance! We call this phenomenon "Final Ascent". Check out this interesting #UAI2024 spotlight by Yihao Xue: arxiv.org/pdf/2208.08003 🙌🌱
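A minimal toy sketch of the kind of probe behind this claim: flip a fraction of training labels, sweep model width, and check whether test accuracy keeps improving with size. The synthetic data, sklearn MLPs, and 20% noise rate are illustrative assumptions, not the paper's setup.

```python
# Toy sketch: test accuracy vs. model width under label noise.
# Synthetic data, sklearn MLPs, and the 20% noise rate are illustrative
# assumptions; this is not the experimental setup from the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Flip 20% of the training labels to simulate label noise.
rng = np.random.default_rng(0)
flip = rng.random(y_tr.shape) < 0.2
y_tr_noisy = np.where(flip, 1 - y_tr, y_tr)

# Sweep the hidden-layer width and watch how test accuracy changes.
for width in [4, 16, 64, 256, 1024]:
    clf = MLPClassifier(hidden_layer_sizes=(width,), max_iter=300, random_state=0)
    clf.fit(X_tr, y_tr_noisy)
    print(f"width={width:5d}  test accuracy={clf.score(X_te, y_te):.3f}")
```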

Siddharth Joshi (@sjoshi804)'s Twitter Profile Photo

🚀 Exciting News! 🚀 Join Baharan Mirzasoleiman and me for a 2-hour tutorial on Data-Efficient Learning! Learn the principles behind data curation: the secret sauce powering today’s AI revolution! ⚡️ See you at 1pm CEST on Monday in Hall A8! 🙌 🔗 More details: sjoshi804.github.io/data-efficient…

Baharan Mirzasoleiman (@baharanm)'s Twitter Profile Photo

ML models are sensitive to distribution shift. Can we adapt a model with only a few examples from the target domain? In this #ICML2024 paper, Yihao Xue proposes an effective way, with nice theoretical analysis🌱 🔗arxiv.org/pdf/2305.14521 Thu, July 25, Poster session 5, #800

Yu Yang (@yuyang_i)'s Twitter Profile Photo

It’s been a new and exciting experience to be part of founding Virtue AI! I’ve had the privilege of working with top minds in the field – I'm incredibly grateful for this invaluable experience. Check out our website and blogs, and come hang out with us in SF this summer! 🥳🎉

Siddharth Joshi (@sjoshi804)'s Twitter Profile Photo

📢Excited to share the recording of our #ICML2024 Tutorial on Foundations of Data-Efficient Learning: youtu.be/30VkdWuwmdA Truly grateful to everyone who attended — it was incredible to see the enthusiasm for theoretically principled techniques for dataset curation!

Yu Yang (@yuyang_i)'s Twitter Profile Photo

Excited to introduce SecCodePLT🛡️: a unified platform for evaluating security risks in code generation AI! Since summer, we’ve been building a comprehensive tool to assess AI models' potential for insecure coding and facilitating cyberattacks. 🧵1/👇

Yu Yang (@yuyang_i)'s Twitter Profile Photo

1/ I'll be at #NeurIPS2024 presenting our work SmallToLarge (S2L): Data-efficient Fine-tuning of LLMs! 🚀 What’s S2L? It’s a scalable data selection method that trains a small proxy model to guide fine-tuning for larger models, reducing costs while preserving performance. 👇
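A rough sketch of the proxy-guided selection idea: record per-example training-loss trajectories from a small proxy model, cluster them, and sample evenly across clusters to pick the fine-tuning subset for the larger model. The trajectory features, k-means clustering, and balanced sampling below are illustrative assumptions, not the released S2L implementation.

```python
# Rough sketch of proxy-guided data selection: cluster per-example training-loss
# trajectories recorded from a small proxy model, then sample evenly across
# clusters to choose a subset for fine-tuning a larger model. All details here
# are illustrative assumptions rather than the authors' released code.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Pretend we logged a loss trajectory for each example while training the proxy:
# shape (n_examples, n_checkpoints). Random placeholders stand in for real logs.
n_examples, n_checkpoints = 10_000, 8
loss_trajectories = rng.random((n_examples, n_checkpoints))

# Group examples with similar training dynamics.
n_clusters, budget = 50, 2_000
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(loss_trajectories)

# Take a roughly equal number of examples from each cluster.
per_cluster = budget // n_clusters
selected = np.concatenate([
    rng.choice(np.flatnonzero(labels == c),
               size=min(per_cluster, int(np.sum(labels == c))),
               replace=False)
    for c in range(n_clusters)
])

print(f"Selected {selected.size} / {n_examples} examples to fine-tune the large model on.")
```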

Tongzhou Wang (@ssnl_tz)'s Twitter Profile Photo

Why can’t more non-Chinese researchers show even a bit of care about the racism against Chinese students in the NeurIPS keynote? Why can’t more US researchers pay any attention to the suffering of international students? I remain very disappointed, esp. at many established people.

Furong Huang (@furongh)'s Twitter Profile Photo

I saw a slide circulating on social media last night while working on a deadline. I didn’t comment immediately because I wanted to understand the full context before speaking. After learning more, I feel compelled to address what I witnessed during an invited talk at NeurIPS 2024.

Yihe Deng (@yihe__deng)'s Twitter Profile Photo

New paper & model release!

Excited to introduce DuoGuard: A Two-Player RL-Driven Framework for Multilingual LLM Guardrails, showcasing our new DuoGuard-0.5B model.

- Model: huggingface.co/DuoGuard/DuoGu…
- Paper: arxiv.org/abs/2502.05163
- GitHub: github.com/yihedeng9/DuoG…

Grounded in a
Siddharth Joshi (@sjoshi804)'s Twitter Profile Photo

#ICLR2025 Can you pre-train deep models with small, synthetic datasets? 🤯 We introduce the first effective dataset distillation method for self-supervised learning (SSL) — boosting downstream accuracy by up to 13% over baselines. 🧪 Poster #307, Sat Apr 26, 9am