Berk Ustun (@berkustun)'s Twitter Profile
Berk Ustun

@berkustun

Assistant Prof @HDSIUCSD. I work on fairness and interpretability in ML. Previously @GoogleAI @Harvard @MIT @UCBerkeley 🇨🇭🇹🇷

ID: 24344110

Link: http://www.berkustun.com · Joined: 14-03-2009 07:19:20

756 Tweets

2.2K Followers

954 Following

Journal of Machine Learning Research (@jmlrorg)'s Twitter Profile Photo

'Risk Measures and Upper Probabilities: Coherence and Stratification', by Christian Fröhlich, Robert C. Williamson. jmlr.org/papers/v25/22-… #measures #aggregation #uncertainty

Flavio Calmon (@flaviocalmon)'s Twitter Profile Photo

Mario was incredibly passionate about math, information theory, and statistics. He was homeschooling his son so he could “teach him math in a principled and advanced manner.” Now his family really needs our support. Please consider donating here: gofund.me/07df91a5

Aaron Roth (@aaroth)'s Twitter Profile Photo

It's also important to be unafraid to say stupid things (and to foster an environment where others are also unafraid). Most first thoughts are stupid, but it is important to say them so that you can get to second and third thoughts.

Tom Hartvigsen (@tom_hartvigsen)'s Twitter Profile Photo

📢 I am recruiting PhD students in Data Science at UVA to start Fall 2025!

Come join a wonderful group working on Responsible ML/NLP in Ever-Changing Environments🎉

More info on my website: tomhartvigsen.com/research-group

Ben Green (@benzevgreen)'s Twitter Profile Photo

I'm looking for PhD students to join me in fall 2025 at the University of Michigan School of Information! I’m particularly focused this cycle on research related to AI regulation, government use of algorithms, and methods for anticipating the impacts of AI systems. The full program details are here: si.umich.edu/programs/phd-i…

Emma Pierson (@2plus2make5)'s Twitter Profile Photo

Please retweet: I am recruiting PhD students at Berkeley!

Please apply to UC Berkeley EECS or UC Joint Computational Precision Health Program if you are interested in ML applied to health, inequality, or social science, and mention my name in your app.

More details on work/how to apply: cs.cornell.edu/~emmapierson/

Sanghamitra Dutta (@sangha26dutta)'s Twitter Profile Photo

Are you interested in serving as a Program Committee member for the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT 2025)? PC Members are expected to review papers in their area of expertise. Expression of interest form: forms.gle/dmhCPbRBTEzFhc… #FAccT2025

Been Kim (@_beenkim)'s Twitter Profile Photo

🔥🔥 Our small Google DeepMind team in Seattle is hiring! 🔥🔥 If you are willing to move to (or are already in) Seattle, have done significant work on human-machine communication / interpretability (from the ML side), and have a relevant PhD and a great publication record, join us. Apply here 👉👉

Sujay Nagaraj (@sujnagaraj)'s Twitter Profile Photo

🚨 Excited to announce a new paper accepted at ICLR2025 in Singapore! “Learning Under Temporal Label Noise” We tackle a new challenge in time series ML: label noise that changes over time 🧵👇 arxiv.org/abs/2402.04398
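
For readers new to the setting, here is a minimal sketch of what "label noise that changes over time" can look like; the sinusoidal noise schedule and all numbers below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy temporal label noise: the probability that an observed label is
# flipped depends on the time index t (hypothetical schedule).
T = 100
y_true = rng.integers(0, 2, size=T)                 # clean binary labels
flip_prob = 0.1 + 0.3 * np.sin(np.linspace(0, np.pi, T)) ** 2  # time-varying rate

flips = rng.random(T) < flip_prob                   # which labels get corrupted
y_observed = np.where(flips, 1 - y_true, y_true)    # what the learner actually sees

print(f"noise rate, first 10 steps: {flip_prob[:10].mean():.0%}")
print(f"noise rate, middle 10 steps: {flip_prob[45:55].mean():.0%}")
print(f"fraction of labels actually flipped: {flips.mean():.0%}")
```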

Jessica Hullman (@jessicahullman)'s Twitter Profile Photo

Why is it so hard to show that people can be better decision-makers than statistical models? Some ways that common intuitions about the superiority of human judgment contradict statistical reality, and a few that don't. statmodeling.stat.columbia.edu/2025/04/18/dum…

Sujay Nagaraj (@sujnagaraj)'s Twitter Profile Photo

Many ML models predict labels that don’t reflect what we care about, e.g.:
– Diagnoses from unreliable tests
– Outcomes from noisy electronic health records

In our #ICLR2025 paper, we study how this subjects individuals to a lottery of mistakes
Paper: bit.ly/3Y673uZ
🧵👇
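
A toy illustration of this "lottery" (my own sketch, not the paper's code): train the same model class on several plausible draws of the label noise and check how often an individual's prediction depends on which draw the model happened to see.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical data: clean labels come from a linear rule.
n, d, noise_rate = 500, 5, 0.2
X = rng.normal(size=(n, d))
w = rng.normal(size=d)
y_clean = (X @ w > 0).astype(int)

# Train on 20 independent draws of the label noise.
preds = []
for seed in range(20):
    noise_rng = np.random.default_rng(seed)
    flips = noise_rng.random(n) < noise_rate
    y_noisy = np.where(flips, 1 - y_clean, y_clean)
    model = LogisticRegression(max_iter=1000).fit(X, y_noisy)
    preds.append(model.predict(X))

preds = np.array(preds)                     # shape: (draws, individuals)
unstable = (preds != preds[0]).any(axis=0)  # prediction depends on the draw
print(f"{unstable.mean():.1%} of individuals get different predictions "
      "depending on which noise draw the model was trained on")
```
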
Lily Weng (@lilyweng_)'s Twitter Profile Photo

💡LLMs don’t have to be black boxes. We introduce CB-LLMs -- the first LLMs with built-in interpretability for transparent, controllable, and safer AI. 🚀Our #ICLR2025 paper: lilywenglab.github.io/CB-LLMs/ #TrustworthyAI #ExplainableAI #AI #MachineLearning #NLP #LLM #AIResearch
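
As rough intuition for the "built-in interpretability" idea, here is a generic concept-bottleneck head (an illustrative sketch, not the CB-LLMs implementation; the concept names and sizes are made up): predictions are forced through a layer of human-named concept scores, so each concept's contribution to the output can be read off directly.

```python
import torch
import torch.nn as nn

class ConceptBottleneckHead(nn.Module):
    """Map hidden states -> named concept scores -> label logits."""

    def __init__(self, hidden_dim: int, concept_names: list, num_labels: int):
        super().__init__()
        self.concept_names = concept_names
        self.to_concepts = nn.Linear(hidden_dim, len(concept_names))
        self.to_labels = nn.Linear(len(concept_names), num_labels)

    def forward(self, hidden):
        concepts = torch.sigmoid(self.to_concepts(hidden))  # interpretable bottleneck
        return self.to_labels(concepts), concepts

head = ConceptBottleneckHead(768, ["polite", "on_topic", "toxic"], num_labels=2)
logits, concepts = head(torch.randn(1, 768))
for name, score in zip(head.concept_names, concepts[0].tolist()):
    print(f"{name}: {score:.2f}")  # each concept score is directly inspectable
```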

Hailey Joren (@haileyjoren)'s Twitter Profile Photo

When RAG systems hallucinate, is the LLM misusing available information, or is the retrieved context insufficient? In our #ICLR2025 paper, we introduce "sufficient context" to disentangle these failure modes. Work w/ J. Zhang, C.S. Ferng, Da-Cheng Juan, Ankur Taly, and Cyrus Rashtchian
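
To make the distinction concrete, here is a crude string-matching proxy for the idea (the simplification and all names below are mine; the paper's actual method is more sophisticated): a hallucination only counts as the model misusing information if the retrieved context was sufficient to answer the query in the first place.

```python
def classify_failure(context: str, reference_answer: str, model_answer: str) -> str:
    """Split RAG errors by whether the context could have supported the answer."""
    correct = reference_answer.lower() in model_answer.lower()
    sufficient = reference_answer.lower() in context.lower()  # crude sufficiency proxy
    if correct:
        return "correct"
    if sufficient:
        return "model misused available information"
    return "retrieved context was insufficient"

print(classify_failure(
    context="The Eiffel Tower was completed in 1889.",
    reference_answer="1889",
    model_answer="It was completed in 1901.",
))  # -> model misused available information
```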

Julian Skirzynski (@jskirzynski)'s Twitter Profile Photo

Right to explanation laws assume explanations help people detect algorithmic discrimination.

But is there any evidence for that?

In our latest work w/ David Danks and Berk Ustun, we show explanations fail to help people, even under optimal conditions.

PDF: shorturl.at/yaRua

Julian Skirzynski (@jskirzynski)'s Twitter Profile Photo

We’ll be presenting at ACM FAccT on 06.24 at 10:45 AM during the Evaluating Explainable AI session! Come chat with us. We would love to discuss implications for AI policy, better auditing methods, and next steps for algorithmic fairness research. #AIFairness #xAI

Berk Ustun (@berkustun)'s Twitter Profile Photo

Explanations don't help us detect algorithmic discrimination. Even when users are trained. Even when we control their beliefs. Even under ideal conditions... 👇

Jessica Hullman (@jessicahullman)'s Twitter Profile Photo

Explainable AI has long frustrated me by lacking a clear theory of what explanations should do. Improve use of a model for what? How? Given a task, what's the max effect an explanation can have? It's complicated bc most methods are functions of the features & prediction, but not the true state 1/

Alan Jeffares (@jeffaresalan)'s Twitter Profile Photo

Our new ICML 2025 oral paper proposes a new unified theory of both Double Descent and Grokking, revealing that both of these deep learning phenomena can be understood as being caused by prime numbers in the network parameters 🤯🤯

🧵[1/8]

Hailey Joren (@haileyjoren)'s Twitter Profile Photo

PhD in Computer Science, University of California San Diego 🎓

My research focused on uncertainty and safety in AI systems, including

🤷‍♀️ letting models say "I don't know" under uncertainty
🔎 understanding and reducing hallucinations
🔁 methods for answering "how much will