Sanmi Koyejo (@sanmikoyejo) 's Twitter Profile
Sanmi Koyejo

@sanmikoyejo

I lead @stai_research at Stanford.

ID: 2790739088

Joined: 04-09-2014 22:44:16

566 Tweets

2.2K Followers

98 Following

Stanford HAI (@stanfordhai) 's Twitter Profile Photo

Last chance to apply to the Stanford AI Lab Postdoctoral Fellows Program! Join Stanford University’s legacy of excellence in AI and machine learning, and collaborate with world-class experts in advancing cutting-edge research. Application ends today, April 30: ai.stanford.edu/postdoctoralfe…

Virtue AI (@virtueai_co) 's Twitter Profile Photo

Join Virtue AI Co-founder Sanmi Koyejo and Jan Eißfeldt, Global Head of Trust & Safety at Wikimedia Foundation, for a live discussion on what the EU AI Act means for enterprises and how they can stay compliant without slowing down innovation. 📅 May 15 | 🕙 10 AM PT | 💻 Virtual

Khoa D. Doan (@khoaddoan) 's Twitter Profile Photo

🚨 Meet the first 3 invited speakers at #DIGBUG ICML Conference 2025! 🔒 Pin-Yu Chen (IBM): AI safety & robustness 🧠 Sanmi Koyejo (Stanford): Trustworthy ML in health & neuroscience ⚙️ Nouha Dziri (AI2): safety of open LLMs 📅 Submission deadline - May 20: icml2025digbugs.github.io

Virtue AI (@virtueai_co) 's Twitter Profile Photo

🚀 Introducing VirtueAgent, the first security layer for the agentic era. As AI agents begin to act autonomously in real-world settings such as personal assistance, finance, and healthcare, ensuring they operate securely and remain compliant is critical. VirtueAgent provides

Virtue AI (@virtueai_co) 's Twitter Profile Photo

🚨 Introducing VirtueGuard Code: Real-time vulnerability detection for AI-generated code. As coding assistants like Cursor and GitHub Copilot become standard in development workflows, it’s critical to ensure that generated code meets security standards. VirtueGuard Code is

Stanford HAI (@stanfordhai) 's Twitter Profile Photo

Most major LLMs are trained primarily on English data, making them ineffective for the approximately 5 billion people who don't speak English. Here, Stanford HAI Faculty Affiliate Sanmi Koyejo discusses the risks of this digital divide and how to close it. stanford.io/3SfGmRk

Virtue AI (@virtueai_co) 's Twitter Profile Photo

Congrats to our partners at Glean on their first-ever hashtag #GleanGO conference! 🎉 We’re honored to be part of their security and governance ecosystem, helping power trusted AI across the enterprise.

Virtue AI (@virtueai_co) 's Twitter Profile Photo

What does the EU AI Act mean for enterprises building AI at scale? In our recent webinar, Jan Eißfeldt (Global Head of Trust & Safety at the Wikimedia Foundation) joined Virtue AI Co-founder Sanmi Koyejo to unpack how larger organizations are expected to operationalize

Stanford HAI (@stanfordhai) 's Twitter Profile Photo

Did you know that less than 5% of the world's 7,000 languages have meaningful online representation? This highlights the data dearth that a recent Stanford HAI white paper revealed in the context of training large language models. Read more via @TechBrew: bit.ly/3Z5rHMg

Stanford Medicine (@stanfordmed) 's Twitter Profile Photo

Now streaming: What We’ve Learned in a Year 💡 RAISE Health faculty reflect on how AI is reshaping biomedicine, key advances & real impact. 🎙️ Russ Altman, Curt Langlotz, Natalie Pageler, Sanmi Koyejo 🔗 Watch live: med.stanford.edu/raisehealth/ev…

Virtue AI (@virtueai_co) 's Twitter Profile Photo

We’re thrilled to be part of The AI Summit London next week! Join us June 11–12 at Stand 17 to learn how Virtue is helping leading enterprises secure their AI systems. #TheAISummit

Stanford HAI (@stanfordhai) 's Twitter Profile Photo

Stanford HAI faculty Yejin Choi, Jiajun Wu, and Sanmi Koyejo joined Asia Tech x Singapore in Singapore to discuss recent research on generative AI, embodied AI, and AI governance. They also met with Minister Josephine Teo to discuss emerging opportunities and challenges. Thread 🧵:

Stanford HAI (@stanfordhai) 's Twitter Profile Photo

Faculty Affiliate Sanmi Koyejo addressed opportunities and risks in generative and agentic AI—from misuse to governance. He emphasized the need for improved evaluation methods and mitigation strategies to support more reliable and responsible deployment.

Rylan Schaeffer (@rylanschaeffer) 's Twitter Profile Photo

A bit late to the party, but our paper on predictable inference-time / test-time scaling was accepted to #icml2025 🎉🎉🎉 TLDR: Best-of-N was shown to exhibit power (polynomial) law scaling (left), but the maths suggests one should expect exponential scaling (center). We show how to

Rylan Schaeffer (@rylanschaeffer) 's Twitter Profile Photo

...resolve this "paradox", then use our insights to design methods for predicting inference-scaling capabilities that can be more sample efficient! Preprint: arxiv.org/abs/2502.17578 Tweeprint: x.com/RylanSchaeffer… 2/3

Youssef Allouah (@ys_alh) 's Twitter Profile Photo

Excited our paper "Certified Unlearning for Neural Networks" is accepted at ICML 2025! We introduce a method for provable machine unlearning: truly "forgetting" data without restrictive assumptions like convexity. Paper: arxiv.org/abs/2506.06985 Code: github.com/stair-lab/cert…

Virtue AI (@virtueai_co) 's Twitter Profile Photo

Autonomous AI agents are rapidly being deployed across industries, from web browsing copilots to code-writing assistants and enterprise workflow agents. But these systems come with a new class of security risks that traditional guardrails and red teaming aren’t equipped to
