Anka Reuel | @ankareuel.bsky.social (@ankareuel) 's Twitter Profile
Anka Reuel | @ankareuel.bsky.social

@ankareuel

Computer Science PhD Student @ Stanford | Geopolitics & Technology Fellow @ Harvard Kennedy School/Belfer | Vice Chair EU AI Code of Practice | Views are my own

ID: 1236338714554630145

Link: https://www.ankareuel.com · Joined: 07-03-2020 17:11:40

848 Tweets

2.2K Followers

1.1K Following

Stanford HAI (@stanfordhai) 's Twitter Profile Photo

The Stanford HAI #AIIndex2025 report launches April 7. Packed with rigorously vetted data, it provides an independent lens on AI’s progress, its adoption across various sectors, and its far-reaching impact. Be among the first to receive the report: hai.stanford.edu/ai-index

Stanford HAI (@stanfordhai) 's Twitter Profile Photo

Want to join the conversation on AI—and be heard? Use trusted insights from the Stanford HAI #AIIndex2025 to spark smarter dialogue and back your ideas with data that matters. Stay informed with the AI Index, coming April 7: hai.stanford.edu/ai-index

Technical AI Governance @ ICML 2025 (@taig_icml) 's Twitter Profile Photo

📣 We’re thrilled to announce the first workshop on Technical AI Governance (TAIG) at #ICML2025 this July in Vancouver! Join us (& this stellar list of speakers) in bringing together technical & policy experts to shape the future of AI governance!

Ken Liu (@kenziyuliu) 's Twitter Profile Photo

An LLM generates an article verbatim—did it “train on” the article? It’s complicated: under n-gram definitions of train-set inclusion, LLMs can complete “unseen” texts—both after data deletion and adding “gibberish” data. Our results impact unlearning, MIAs & data transparency🧵

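An n-gram definition of train-set inclusion, as referenced in the thread above, can be sketched roughly as follows. This is a hypothetical illustration of the general idea (the helper names and the "every n-gram must appear" criterion are assumptions, not the paper's exact definition):

```python
def ngrams(tokens, n):
    """All contiguous n-grams of a token sequence, as a set."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def included_by_ngrams(text_tokens, corpus_tokens, n=2):
    """Hypothetical n-gram membership test: declare the text 'in' the
    training set if every n-gram of the text also occurs in the corpus."""
    return ngrams(text_tokens, n) <= ngrams(corpus_tokens, n)
```

Under a definition like this, a text can count as "unseen" (some n-gram is missing from the corpus) yet still be completed verbatim by the model, which is the tension the thread points at.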
Max Lamparth (@mlamparth) 's Twitter Profile Photo

Thank you for featuring our work! Great collaboration with Declan Grabb, MD and the team. We created a dataset that goes beyond medical exam-style questions and studies the impact of patient demographics on clinical decision-making in psychiatric care across fifteen language models.

Stanford HAI (@stanfordhai) 's Twitter Profile Photo

📢Introducing #AIIndex2025: This year's report highlights the most critical trends in AI – from the shifting geopolitical landscape and rapid technological evolution to AI’s expanding role in science and medicine, business, and public life. Read more: hai.stanford.edu/ai-index/2025-…

Technical AI Governance @ ICML 2025 (@taig_icml) 's Twitter Profile Photo

🚨 Call for Reviewers! If you're interested in reviewing 2–3 short papers in May for the First Workshop on Technical AI Governance at #ICML2025, please fill out our quick expression of interest form (takes <2 mins!) ✍️

Anka Reuel | @ankareuel.bsky.social (@ankareuel) 's Twitter Profile Photo

Exciting news! Open Problems in Technical AI Governance was published in TMLR! This is a good time to mention that we're organizing a workshop on TAIG at #ICML2025. Papers are due May 7th, and we're also looking for PC members. Check out Technical AI Governance @ ICML 2025 for more details!

Cas (Stephen Casper) (@stephenlcasper) 's Twitter Profile Photo

🚨New paper: Current reports on AI audits/evals often omit crucial details, and there are huge disparities between the thoroughness of different reports. Even technically rigorous evals can offer little useful insight if reported selectively or obscurely. Audit cards can help.

Ben Bucknall (@ben_s_bucknall) 's Twitter Profile Photo

Cooperation on AI safety is necessary but also comes with potential risks. In our new paper, we identify technical AI safety areas that present comparatively lower security concerns, making them more suitable for international cooperation—even between geopolitical rivals. 🧵

Technical AI Governance @ ICML 2025 (@taig_icml) 's Twitter Profile Photo

📣 Submissions OPEN for the 1st Technical AI Governance Workshop at #ICML2025! 🔗 Submit on OpenReview: openreview.net/group?id=ICML.… 🗓️ Deadlines: Abstract 7 May | Full paper 12 May (23:59 AOE)

Gary Marcus (@garymarcus) 's Twitter Profile Photo

“The much discussed AI-2027 scenario, for example, predicts in passing that by the end of 2025 (eight months from now!), there will be AIs that have “a PhD-level knowledge of every field”. I often get the sense that many of the people who make those kinds of predictions have no

Google AI (@googleai) 's Twitter Profile Photo

We trained our Large Sensor Model (LSM) on over 40 million hours of de-identified multimodal sensor data from 165K users to demonstrate how it could improve performance in wearable tasks like exercise and activity recognition. Here’s what we found → goo.gle/4fYDQJ4

Seth Lazar (@sethlazar) 's Twitter Profile Photo

In 1999, engineers at Georgia Tech proposed the "Aware Home." It was a vision of ubiquitous computing designed to serve its inhabitants, giving them full knowledge and control over their own data. The dream was user empowerment—using data (and ultimately AI) to fill in the gaps

Shayne Longpre (@shayneredford) 's Twitter Profile Photo

🚨 Lucie-Aimée Kaffee and I are looking for a junior collaborator to research the Open Model Ecosystem! 🤖 Ideally, someone w/ AI/ML background, who can help w/ annotation pipeline + analysis. docs.google.com/forms/d/e/1FAI…

Rylan Schaeffer (@rylanschaeffer) 's Twitter Profile Photo

A bit late to the party, but our paper on predictable inference-time / test-time scaling was accepted to #icml2025 🎉🎉🎉 TLDR: Best of N was shown to exhibit power (polynomial) law scaling (left), but maths suggest one should expect exponential scaling (center). We show how to

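For intuition on why exponential scaling is the natural expectation for Best-of-N (under an idealized i.i.d. assumption, not the paper's actual analysis): if each independent sample succeeds with probability p, then Best-of-N fails only when all N samples fail, so its failure probability is (1 − p)^N, which decays exponentially in N. A minimal Monte-Carlo sketch (function name and parameters are illustrative):

```python
import random

def best_of_n_success(p, n, trials=100_000, seed=0):
    """Monte-Carlo estimate of Best-of-N success: a single attempt
    succeeds with probability p; Best-of-N succeeds if any of the
    n independent attempts does."""
    rng = random.Random(seed)
    hits = sum(any(rng.random() < p for _ in range(n)) for _ in range(trials))
    return hits / trials

# Analytically: P(success) = 1 - (1 - p)**n, i.e. failure is exponential in n.
```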
Rylan Schaeffer (@rylanschaeffer) 's Twitter Profile Photo

🚨New preprint 🚨 Turning Down the Heat: A Critical Analysis of Min-p Sampling in Language Models We examine min-p sampling (ICLR 2025 oral) & find significant problems in all 4 lines of evidence: human eval, NLP evals, LLM-as-judge evals, community adoption claims 1/8

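For readers unfamiliar with the technique under critique: min-p sampling truncates the distribution by keeping only tokens whose probability is at least a base fraction of the top token's probability, then renormalizes and samples. A minimal sketch from raw logits (this is an illustrative re-implementation, not the paper's code):

```python
import math
import random

def min_p_sample(logits, p_base=0.1, rng=random):
    """Sketch of min-p sampling: keep tokens whose probability is at
    least p_base times the most likely token's probability, then
    sample from the renormalized truncated distribution."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]  # stable softmax numerators
    z = sum(exps)
    probs = [e / z for e in exps]
    threshold = p_base * max(probs)
    keep = [(i, p) for i, p in enumerate(probs) if p >= threshold]
    total = sum(p for _, p in keep)
    r = rng.random() * total
    for i, p in keep:
        r -= p
        if r <= 0:
            return i
    return keep[-1][0]
```

The appeal claimed for min-p is that the truncation threshold adapts to the model's confidence: a peaked distribution keeps few tokens, a flat one keeps many; the preprint above examines whether the evidence for that benefit holds up.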