
Angelina Wang @angelinawang.bsky.social
@ang3linawang
Postdoc @StanfordHAI | ML fairness and algorithmic bias | Future: Asst Prof @CornellInfoSci, @Cornell_Tech | Prev: @PrincetonCS @Berkeley_EECS
ID: 1313530609030107136
http://angelina-wang.github.io · Joined 06-10-2020 17:24:40
193 Tweets
2.2K Followers
749 Following

New AI Snake Oil essay with Angelina Wang @angelinawang.bsky.social, Sayash Kapoor, and Solon Barocas — we analyze the UK's liver transplant matching algorithm that's been reported to make it nearly impossible for patients under 45 to get a liver. In this thread I'll summarize what we learned. aisnakeoil.com/p/does-the-uks…

The FATE group at Microsoft Research NYC is now accepting applications for 2025 interns. 🥳🎉 For full consideration, apply by December 18 and include a cover letter. jobs.careers.microsoft.com/global/en/job/… Interested in AI evaluation? Apply for the STAC internship too! jobs.careers.microsoft.com/global/en/job/…

📣 New paper: AI depends on immense resources, mostly controlled by large companies. This means "open" AI… isn't very open. 📄 From me, Meredith Whittaker and Sarah Myers West in Nature: nature.com/articles/s4158…


Excited to share that I just finished my PhD at MIT EECS and will be joining CMU as an Assistant Professor next year! A huge thank you to the village of people who guided and mentored me along the way, from my family to my advisors Aleksander Madry and Devavrat Shah (1/3)



Excited to share that our paper is now out in PNAS! 🎉 Check it out: pnas.org/doi/10.1073/pn… Code and data: github.com/baixuechunzi/l… Big shoutout to my amazing coauthors Angelina Wang @angelinawang.bsky.social, Ilia Sucholutsky, and the Griffiths Computational Cognitive Science Lab!


What makes writing interesting? Can an LLM do it? Do we need a human to feel it's worth choosing each word or to shape it w/ individual experience? Can it be interesting w/out intention? Does it require inner conflict? I have lots of questions, no answers statmodeling.stat.columbia.edu/2025/02/25/wha…


This is a very practical and useful resource from Angelina Wang @angelinawang.bsky.social! I highly recommend folks read this because, in my experience, most folks in CS and ML have a warped and seriously impoverished understanding of fairness (e.g., that fairness only means matching accuracy across groups).

For those who have requested the video, my HAI seminar “Beyond Benchmarks: Building a Science of AI Measurement” is up! I discuss some of the Stanford Trustworthy AI Research (STAIR) Lab’s latest work aimed at improving AI measurement foundations towards real-world impact. youtu.be/PkuoEJn6PlA?si…

