Caleb Withers (@calebwithersdc)'s Twitter Profile
Caleb Withers

@calebwithersdc

AI & natsec @CNASdc @CNAStech. @GeorgetownCSS alum. Views my own.

ID: 1630037568975962113

Link: https://www.cnas.org/people/caleb-withers · Joined: 27-02-2023 02:50:22

1.1K Tweets

602 Followers

1.1K Following

Nathan Lambert (@natolambert)'s Twitter Profile Photo

Happy to take curation more seriously as someone labelled as "high taste" in the noisy AI ecosystem. Here are the posts I loved recently, or ones that I think are at least heavily slept on.

Timothy B. Lee (@binarybits)'s Twitter Profile Photo

Something that comes through clearly in the DeepSeek R1 research paper, and I wish was more broadly understood, is that the DeepSeek researchers really, really want to see themselves as valuable members of the global research community.

Timothy B. Lee (@binarybits)'s Twitter Profile Photo

My newsletter has been kind of lame for the last two months because I've spent most of my time on this in-depth explainer on reinforcement learning. I'm proud of the result and I hope people enjoy it.

Ethan Mollick (@emollick)'s Twitter Profile Photo

The two most common questions I get asked about AI are “which AI should I use” and “how do I start using AI?” I wrote a short guide attempting to answer both questions. oneusefulthing.org/p/using-ai-rig…

Ethan Mollick (@emollick)'s Twitter Profile Photo

The flip side to the concerns about violating copyright in training data is that there is also a vast trove of important work that nobody reads & where it would be very good if AIs were trained on it. (Most scientific articles, many reports, a lot of old literature & records)

Sauers (@sauers_)'s Twitter Profile Photo

Wow. This is the reasoning the judge used to say that Anthropic training is fair use:

"But to make anyone pay specifically for the use of a book each time they read it, each time they recall it from memory, each time they later draw upon it when writing new things in new ways
The Curve (@thecurveconf)'s Twitter Profile Photo

The Curve 2025 is on!

We're bringing together people from a broad range of fields & with disparate perspectives to discuss big questions about AI's future (timelines, natsec, growth, applications, & more)

October 3-5 | Berkeley, CA

Apply by July 4th for early bird pricing
Jack Clark (@jackclarksf)'s Twitter Profile Photo

Today, I testified before the Select Committee on the Chinese Communist Party. I made two key points: 1) the U.S. can win the race to build powerful AI and 2) winning the race is a necessary but not sufficient achievement - we have to get safety right.

Gabriel Weil (@gabriel_weil)'s Twitter Profile Photo

This is a big deal. I said I would be surprised if the parliamentarian were willing to sign off on conditioning previously appropriated (and already obligated!) funds on compliance with the AI regulation moratorium under the Byrd Rule. Apparently, so would the parliamentarian.

Andrew Bernard (@andrewjbernie)'s Twitter Profile Photo

Later in this press conference, Caine says: "We had so many PhDs working on the MOP program, doing modeling and simulation that we were quietly and in a secret way the biggest users of supercomputer hours within the United States of America." Incredible stuff.

Epoch AI (@epochairesearch)'s Twitter Profile Photo

LLM context windows have grown, but can models really use all this content?

We find signs of recent, rapid progress in their ability to do so. Read on to learn more!
METR (@metr_evals)'s Twitter Profile Photo

METR evaluated a series of recent Qwen and DeepSeek models on our software tasks. We found that the best Qwen models from 2024 perform similarly to frontier models from 2023, while DeepSeek models from mid-2025 perform close to frontier models from late 2024.

Michiel Bakker (@bakkermichiel)'s Twitter Profile Photo

🚨🚨 Excited to share a new paper led by Haiwen Li with the Community Notes team!

LLMs will reshape the information ecosystem. Community Notes offers a promising model for keeping human judgment central, but how best to integrate LLMs remains an open question.

Thread👇
Steven Adler (@sjgadler)'s Twitter Profile Photo

People keep claiming that US states have proposed 1,000+ AI-related bills this year, but that number just doesn't hold up to analysis.

A thread: