
Baharan Mirzasoleiman
@baharanm
Assistant professor @UCLAComSci. Better ML via better data, Machine learning, Optimization
ID: 1018575261896339456
http://web.cs.ucla.edu/~baharan/ 15-07-2018 19:17:21
71 Tweets
1.1K Followers
286 Following

📢 We're back with a new edition, this year at the NeurIPS Conference in Vancouver! The paper deadline is August 30th; we look forward to your submissions!



I’ll also present “SafeClip” on behalf of Wenhan Yang tomorrow at 1:30pm (poster session 6), #814. See you there! 🙌

The Adversarial Machine Learning Rising Star Awards deadline is in two weeks! Submit your application and help us promote your work and research vision! cc: Trustworthy ML Initiative (TrustML), LLM Security, ML Safety, AI Safety Papers

📢 UCLA Computer Science is hiring! Open to all CS areas!
- Multiple Tenure-track Assistant Professor Positions: recruit.apo.ucla.edu/JPF09799
- Open Rank Teaching Professor Position: recruit.apo.ucla.edu/JPF09800
(We hired 11 Assistant Professors in the past two years ...)

Assist. Prof. Baharan Mirzasoleiman of UCLA Computer Science and her large-scale machine learning research group at UCLA are part of the new U.S. National Science Foundation-Simons Foundation Institute for Cosmic Origins at UT Austin, which aims to use AI to research the mysteries of the cosmos. cns.utexas.edu/news/announcem…



Do identical training and test distributions yield optimal in-distribution performance? Dang Nguyen showed in his #NeurIPS2024 paper that this is not true when training with gradient methods!! 😮🙃 Changing the training data distribution yields SOTA! 🎊 Check it out Fri Dec 13, 11am, PS #5

I’ll help present our #NeurIPS2024 posters tomorrow (Friday): 🌱
1. Changing the training data distribution to improve in-distribution performance (11am @ West, #7106) w. Dang Nguyen
2. Data selection for fine-tuning LLMs with superior performance (16:30 @ West, #5401) w. @YUYANG_UCLA

At NeurIPS? Check out the 2nd workshop on Attributing Model Behavior at Scale (ATTRIB)! Meeting Rm 205-207, starting @ 9am. Amazing talks from Surbhi Goel, Sanmi Koyejo, Baharan Mirzasoleiman, Robert Geirhos, and Seong Joon Oh, plus exciting contributed talks! More info: attrib-workshop.cc


We are delighted that our proposal for the Workshop on “Spurious Correlation and Shortcut Learning: Foundations and Solutions” has been accepted at ICLR 2025, hosting many brilliant keynote speakers and panelists. Stay tuned: scslworkshop.github.io (1/)

(2/2) Not at UCLA but interested in this work? Check arxiv.org/abs/2502.02407. Thanks to our fantastic intern Sidak Pal Singh (soon to return full-time!) for leading this project, along with collaborators Yann N. Dauphin and Atish. Thanks to my UCLA host Baharan Mirzasoleiman for the seminar invitation!


Can we pretrain deep models with a small synthetic dataset? Dataset Distillation via Knowledge Distillation is the way to go! Check out Siddharth Joshi’s #ICLR2025 paper this Saturday, April 26, at 9am, Poster #307 🎉🌱

