
Nitasha Tiku
@nitashatiku
Tech culture reporter @washingtonpost in SF [email protected], Signal: nitasha.10
nitasha.bsky.social
ID: 24421464
14-03-2009 20:07:50
22.2K Tweets
66.6K Followers
9.9K Following

Doctors in Gaza needed to use CT scans to figure out why dead children had such small entry wounds. Turns out Israel is using bombs designed to expel tiny pebbles that enter the body and cascade so violently through it they 'dissolve the spine.' Evil doesn't begin to cover it
NEW: The Washington Post reviewed the more than 500 citations in the WH #MAHA Report. Here are the patterns of AI use we found. By Caitlin Gilbert, Emily Wright, margaret kelliher and Lauren Weber. Edited by Lenny Bernstein, Joe Moore, Gaby Morera Di Nubila (and me) washingtonpost.com/health/2025/05…
LLMs' sycophancy issues are a predictable result of optimizing for user feedback. Even if clear sycophantic behaviors get fixed, AIs' exploits of our cognitive biases may only become more subtle. Grateful our research on this was featured by Nitasha Tiku & The Washington Post!
Researchers say tactics used to make AI more engaging, like making them more agreeable, can make chatbots reinforce harmful ideas, like encouraging drug use (Nitasha Tiku / Washington Post) washingtonpost.com/technology/202… techmeme.com/250601/p5#a250… x.com/Techmeme/statu…
A great Washington Post story to be quoted in. I spoke to Nitasha Tiku re our work on human-AI relationships, as well as early results from our University of Oxford survey of 2k UK citizens showing ~30% have sought AI companionship, emotional support or social interaction in the past year

Also fascinating coverage of Micah Carroll’s work on unintended consequences of training AI to optimise for human feedback. This may all sound like familiar social media woes, but AI chatbots could interact more deeply with our psychology as social actors, not social intermediaries.