
Emma Harvey
@emmharv
PhD student @CornellInfoSci | Sociotechnical fairness & algorithm auditing | Previously @MSFTResearch FATE, @Penn | Also emmharv.bsky.social
ID: 940102777
https://emmaharv.github.io/ 10-11-2012 22:43:02
54 Tweets
522 Followers
374 Following


🎉 “The Cadaver in the Machine” has received a Best Paper Honorable Mention at #CHI2024!! 🎉 @haukesa @az_jacobs Emanuel Moss @mona_sloane


I’m officially in Seattle for the summer to intern at Microsoft Research #FATE, where I’ll be working on bridging research and practice to measure harms from LLMs with the inimitable Emily Sheng! If you’re also here, LMK - I’m excited to explore and I’d love to grab coffee, hike, etc!!



🔔 Are you an AI practitioner whose work involves evaluating LLM-based systems for harms, adverse impacts, or other undesirable behaviors? Consider signing up for a (paid!) Microsoft Research interview: forms.office.com/r/JBjhDRnaLY Emily Sheng Hanna Wallach (@hannawallach.bsky.social) Su Lin Blodgett Alexandra Olteanu Alex Chouldechova

📢📢📢 Do you work on LLM model/system evaluation, including measuring harms, adverse impacts, or other possible undesirable LLM behaviors? Please consider participating in this paid interview study led by Emma Harvey! 👇🙏 #NLProc #ACL2024 #EMNLP2024 x.com/emmharv/status…

🌟Applying for a PhD at Cornell Information Science? Our student-run application feedback program is open NOW through Oct 31🌟 We're open to all and especially hope to support scholars who have navigated barriers in higher ed - please share to help us broaden access! infosci.cornell.edu/phd/admissions


Many thanks to Garance Burke & Hilke Schellmann for covering our work and raising awareness about the potential harms of speech-to-text hallucinations in downstream applications! apnews.com/article/ai-art… For a refresher on our paper's findings, see🧵below:

In the final presentation session, a team from Cornell University examines the impacts of ending #affirmativeaction on college admissions. Their findings reveal that removing race from applicant ranking algorithms harms diversity without increasing academic merit. #HigherEd 👨‍🎓


Researchers at Cornell Information Science find that ignoring race in college admissions leads to an admitted class that's much less diverse, but with academic credentials similar to classes where affirmative action is factored in. René Kizilcec Cornell Tech Association for Computing Machinery news.cornell.edu/stories/2024/1…

Gaps Between Research and Practice When Measuring Representational Harms Caused by LLM-Based Systems: arxiv.org/abs/2411.15662 TL;DR: We identify challenges to using publicly available measurement instruments for measuring representational harms. Led by our intern Emma Harvey.