Matthijs Maas (@matthijsmmaas)'s Twitter Profile
Matthijs Maas

@matthijsmmaas

Senior Research Fellow at @law_ai_ | Associate Fellow @LeverhulmeCFI

ID: 2242569434

Link: https://www.matthijsmaas.com/ · Joined: 12-12-2013 15:58:37

2.2K Tweets

2.2K Followers

3.3K Following

Matthijs Maas (@matthijsmmaas)'s Twitter Profile Photo

back in 2020, John-Clark Levin and I explored the economic, scientific and strategic dynamics of an 'AGI Manhattan Project' (compared to other major scientific and/or military technology projects) before it was cool: arxiv.org/abs/2008.04701 Are we on the runway now?

Lennart Heim (@ohlennart)'s Twitter Profile Photo

The idea of a clear "AGI threshold" for preventive actions (MAIM paper) misses a challenge: we'll never agree when something becomes "superintelligent" or AGI. Yann LeCun will say, "It lacks autonomy!" while Gary Marcus declares it's hitting a wall the next day. Some thoughts 1/

Matthijs Maas (@matthijsmmaas)'s Twitter Profile Photo

Enjoyed this thoughtful discussion on the value of humility in debates over the (near) future of AI progress: nicholas.carlini.com/writing/2025/t…

gavin leech (@g_leech_)'s Twitter Profile Photo

Notes on AI capabilities, as of Q2 2025, for a certain type of persnickety overeducated person. "I have a powerful urge to John Henry my way through this age."

Apollo Research (@apolloaievals)'s Twitter Profile Photo

🧵 Today we publish a comprehensive report on "AI Behind Closed Doors: a Primer on The Governance of Internal Deployment". Our report examines a critical blind spot in current governance frameworks: internal deployment.

Institute for Law & AI (@law_ai_)'s Twitter Profile Photo

📢 Applications are now open for our Summer Institute on Law and AI! Join us July 11–15 near Washington, DC for a five-day gathering of law students, professionals, and academics eager to explore pressing issues at the intersection of AI, law, and policy. Sessions will cover

LaurieWired (@lauriewired)'s Twitter Profile Photo

What if humanity forgot how to make CPUs? Imagine Zero Tape-out Day (Z-Day), the moment where no further silicon designs ever get manufactured. Advanced core designs fare very badly. Assuming we keep our existing supply, here’s how it would play out:

Miles Brundage (@miles_brundage)'s Twitter Profile Photo

This is your periodic reminder that the richest people and companies in history are racing against each other to build what is (by their own accounts) perhaps the most dangerous technology in history. Seems worth keeping an eye on 🧐

Institute for Law & AI (@law_ai_)'s Twitter Profile Photo

AI agents can perform an increasing number of computer-based tasks. In a new article forthcoming in Fordham Law Review, Cullen O’Keefe, Ketan Ramakrishnan, Janna Tay, and Christoph Winter argue that highly capable AI agents should be designed to follow the law prior to deployment in high-stakes

Nathan is in SF (briefly) 🔍 (@nathanpmyoung)'s Twitter Profile Photo

"the operation is very likely to go okay on your son" "what do you mean, 'very likely'?" "well it's hard to know with operations, there is a lot of disagreement about the probabilities" "what probabilities do people give?" "well personally I think this operation is likely to

Maurice Chiodo (@mauricechiodo)'s Twitter Profile Photo

For anyone wondering what it really means to put a "Human in the loop" of an AI system, take a look at our new interdisciplinary preprint "Formalising Human-in-the-Loop" discussing it. Think having a human there will always keep an AI system safe? Think again.

Matthijs Maas (@matthijsmmaas)'s Twitter Profile Photo

reminder that there are three more days to apply for our Institute for Law & AI and Future Intelligence Cambridge summer workshop (Aug 6-8) on Law-Following AI, where we'll explore the question of how to design agentic AI systems that reliably follow the law law-ai.org/event/workshop…

Neel Nanda (@neelnanda5)'s Twitter Profile Photo

I've been really feeling how much the general public is concerned about AI risk... In a *weird* number of recent interactions with normal people (eg my hairdresser), when I say I do AI research (*not* safety), they ask if AI will take over. Alas, I have no reassurances to offer

Matthijs Maas (@matthijsmmaas)'s Twitter Profile Photo

The talented Sumaya Nur and Joanna Wiaterek are organising a cool September workshop on 'The Role of the State in Operationalizing and Advancing Equitable Access to AI', and invite submissions: Info: best-floss-418.notion.site/The-Role-of-th… Submit abstracts (June 30th) airtable.com/appkfHORNdlOok…

harry law (hopfield network truther) (@lawhsw)'s Twitter Profile Photo

Academic AI criticism is stuck in 2023. This is Not Good if you want to hold developers to account and make sure AI is socially beneficial. I wrote about some common misconceptions and shared some simple suggestions for better criticism from my peers Link below 📖

Institute for Law & AI (@law_ai_)'s Twitter Profile Photo

📢 Applications for our annual Research Scholar roles (US, EU, and Legal Frontiers) are closing on June 22! 📍 Washington, D.C. / Cambridge, UK / London preferred, remote options available 💼 $100,000 – $175,000 📅 Full-time, flexible hours More details in thread 🧵