Sara Hooker (@sarahookr)'s Twitter Profile
Sara Hooker

@sarahookr

I lead @Cohere_Labs. Formerly Research @Google Brain @GoogleDeepmind. ML Efficiency at scale, LLMs, ML reliability. Changing spaces where breakthroughs happen.

ID: 731538535795163136

Link: https://www.sarahooker.me/ · Joined: 14-05-2016 17:35:53

9.9K Tweets

45.45K Followers

8.8K Following

Cohere Labs (@cohere_labs)

Since launching in July 2023, the Cohere Labs Research Grant Program has supported cutting-edge research in ML, NLP, and LLM applications for public good. Grants have been awarded for projects in multilingual NLP, language model safety, prosocial AI, and values alignment.

Cohere Labs (@cohere_labs)

Our goal is to close the resource gap, enabling more researchers to conduct impactful machine learning research. Learn more: cohere.com/research/grants.

Daniel D'souza (@mrdanieldsouza)

🚨 Wait, adding simple markers 📌during training unlocks outsized gains at inference time?! 🤔 🚨

Thrilled to share our latest work at Cohere Labs: “Treasure Hunt: Real-time Targeting of the Long Tail using Training-Time Markers” that explores this phenomenon!

Details in 🧵 ⤵️
Daniel D'souza (@mrdanieldsouza)

Instead of complicated and brittle prompt engineering for under-represented tasks, what if we design a framework to enrich the training data?🤔

Could we get:

🔍Boosts in long-tail performance?
🎛️Explicit user-control at inference?
📈Generalizable gains across tasks?
Ahmet Üstün (@ahmetustun89)

Can we train models for better inference-time control instead of over-complex prompt engineering❓ Turns out the key is in the data — adding fine-grained markers boosts performance and enables flexible control at inference🎁 Huge congrats to Daniel D'souza for this great work!
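A rough sketch of the marker idea described in the tweets above, in Python. This illustrates only the general technique (prepending metadata tags to training examples and reusing the same tags at inference for explicit control); the marker format and function names are assumptions, not the exact recipe from the “Treasure Hunt” paper.

```python
# Minimal sketch (not the paper's exact recipe): prepend metadata "markers"
# such as task or language tags to training examples, then reuse the same
# markers at inference time to steer the model toward long-tail behaviours.
# The marker format and helper below are illustrative assumptions.

from typing import Optional


def add_markers(prompt: str, task: Optional[str] = None, lang: Optional[str] = None) -> str:
    """Prefix a prompt with simple bracketed markers describing its metadata."""
    markers = []
    if task:
        markers.append(f"[task={task}]")
    if lang:
        markers.append(f"[lang={lang}]")
    return " ".join(markers + [prompt])


# Training time: enrich each example with markers derived from its metadata.
train_example = {"prompt": "Translate to Swahili: Good morning", "task": "translation", "lang": "sw"}
marked_prompt = add_markers(train_example["prompt"], train_example["task"], train_example["lang"])
# -> "[task=translation] [lang=sw] Translate to Swahili: Good morning"

# Inference time: the user (or a router) sets the same markers explicitly to
# target an under-represented slice, instead of hand-tuned prompt engineering.
user_prompt = add_markers("Write a proverb about patience.", task="creative_writing", lang="sw")
print(user_prompt)
```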

Cohere Labs (@cohere_labs)

We're incredibly excited to announce our latest open science community-led initiative, Papers in the Park! 🌳

This is a great opportunity for those in Toronto, Canada to meet up and discuss a pre-selected research paper while enjoying the Summer weather!
Cohere Labs (@cohere_labs)

How can AI capture the nuances of different languages?💬🗨️ By using a team of specialized teacher models via Multilingual Arbitration we've achieved up to 19.5% improvement in win rates across languages. Find us at ACL to discuss how we can further break down language barriers.
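A minimal sketch of how arbitration over specialized teachers might look, assuming the core idea is pooling completions from several teacher models and keeping the highest-scoring one per prompt when building synthetic data. The teacher pool and the toy length-based judge below are placeholders, not Cohere Labs' actual pipeline.

```python
# Minimal sketch of the arbitration idea as described in the tweet: pool
# completions from several specialized teacher models and keep the
# highest-scoring one per prompt for the synthetic training set.
# `teachers` and `score` are placeholders, not Cohere Labs' actual APIs.

from typing import Callable, Dict


def arbitrate(prompt: str,
              teachers: Dict[str, Callable[[str], str]],
              score: Callable[[str, str], float]) -> Dict[str, str]:
    """Return the best teacher completion for a prompt under a scoring function."""
    candidates = {name: teacher(prompt) for name, teacher in teachers.items()}
    best_name = max(candidates, key=lambda name: score(prompt, candidates[name]))
    return {"prompt": prompt, "teacher": best_name, "completion": candidates[best_name]}


# Toy stand-ins: one teacher per language specialty, and a length-based "judge".
teachers = {
    "teacher_sw": lambda p: f"(Swahili-specialist answer to: {p})",
    "teacher_hi": lambda p: f"(Hindi-specialist answer to: {p})",
}
score = lambda prompt, completion: len(completion)  # swap in a real reward/judge model

synthetic_example = arbitrate("Explain photosynthesis simply.", teachers, score)
print(synthetic_example)
```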

Cohere Labs (@cohere_labs)

RLHF is powerful but slow. What if it could be 70% faster? ⚡️Michael Noukhovitch shows how with Asynchronous RLHF. Join his talk with us on Thursday, June 26th to learn how he's making LLMs more efficient.

Thanks to Alif and Abrar Rahman - d/acc for organizing this session 👏
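A minimal sketch of what "asynchronous" means here, assuming the core idea is decoupling rollout generation from policy updates so training proceeds off-policy on slightly stale samples. The threading setup and placeholder functions are illustrative only, not the implementation from the Asynchronous RLHF work.

```python
# Minimal sketch of the asynchronous idea (not Noukhovitch et al.'s code):
# a generator thread keeps producing rollouts with a slightly stale policy
# while the trainer consumes them, so neither side waits on the other.
# `generate_rollout` and `update_policy` are illustrative placeholders.

import queue
import threading
import time

rollout_queue: "queue.Queue[dict]" = queue.Queue(maxsize=8)
policy_version = 0          # shared; the trainer bumps it after each update
stop = threading.Event()


def generate_rollout(version: int) -> dict:
    """Placeholder for sampling a completion + reward with the current policy."""
    time.sleep(0.05)  # pretend generation is slow
    return {"policy_version": version, "reward": 1.0}


def update_policy(batch: dict) -> None:
    """Placeholder for one off-policy RLHF gradient step."""
    time.sleep(0.01)


def generator() -> None:
    while not stop.is_set():
        rollout_queue.put(generate_rollout(policy_version))


def trainer(num_steps: int = 20) -> None:
    global policy_version
    for _ in range(num_steps):
        batch = rollout_queue.get()   # may be a version behind: off-policy update
        update_policy(batch)
        policy_version += 1
    stop.set()


threading.Thread(target=generator, daemon=True).start()
trainer()
print(f"finished after {policy_version} policy updates")
```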
Cohere Labs (@cohere_labs)

Let's get studious. 🏫

This July join the Cohere Labs Open Science Community for ML Summer School. You'll be part of a global community exploring the future of ML and hear from speakers across the industry. 

Register to be first to hear about the line-up & connect with others.
Ahmad Mustafa Anis (@ahmadmustafaan1)

For the community, by the community. We’re very excited to share our completely free Summer School by the Cohere Labs Community. A unique learning experience, built together. Register now!

λux (@novasarc01)

The Cohere Labs ML Summer School is a fantastic opportunity for anyone looking to gain hands-on experience with cutting-edge research, explore advanced topics, and connect with awesome researchers and collaborators at Cohere Labs.

Jack Clark (@jackclarksf)

Amir Efrati, Stephanie Palazzolo: Worth reading this research, which showed it has already been turned into cheat-slop and that Meta was one of the worst culprits for gaming it: x.com/singhshiviii/s…

Daniel D'souza (@mrdanieldsouza)

🤝Arbitration is the future 🤝 “Why rely on a single teacher 🧑🏻‍🏫 when you can synthetically generate a much higher quality dataset by relying on specialized teacher models? 🧑🏻‍🏫👩‍🏫👨🏿‍🏫” Check out this fantastic summary of our recently accepted ACL 2025 work ✨

Cohere Labs (@cohere_labs)

We’re proud to have released 9 open models — all built to support research, experimentation, and real-world impact. 🌎 These models reflect our commitment to building powerful, accessible tools that can accelerate progress across machine learning and beyond.