Andrea Miotti (@_andreamiotti)'s Twitter Profile
Andrea Miotti

@_andreamiotti

Trying to make the future go well. Founder and executive director @ai_ctrl

ID: 1254768867169034241

Link: https://www.narrowpath.co/ · Joined: 27-04-2020 13:46:35

1.1K Tweets

1.1K Followers

377 Following

Daniel Kokotajlo (@dkokotajlo)'s Twitter Profile Photo

Scott Alexander, Eli Lifland, Thomas Larsen: The AGI death race is in nobody's interests. Step one is to make sure that's common knowledge. Step two is to hammer out the details of a treaty, such that it's rational for both sides to accept the treaty and irrational for either side to try to cheat (e.g. because cheating…

Mark Beall (@markbeall)'s Twitter Profile Photo

Thanks Jon Scott and Fox News for yesterday’s AI discussion. It’s a troubling sign when His Holiness moves faster than the United States Congress to recognize we’re at a pivotal moment for humanity.

Eliezer Yudkowsky ⏹️ (@esyudkowsky)'s Twitter Profile Photo

Nate Soares and I are publishing a traditional book: _If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All_. Coming in Sep 2025. You should probably read it! Given that, we'd like you to preorder it! Nowish!

Tom (@tomdaavid)'s Twitter Profile Photo

Ursula von der Leyen: "When the current budget was negotiated, we thought AI would only approach human reasoning around 2050. Now we expect this to happen already next year." ec.europa.eu/commission/pre…

Miles Brundage (@miles_brundage)'s Twitter Profile Photo

As a general rule, basically everyone has the correct commonsense view that AI is a dangerous technology. The issue is galaxy brain academics + VCs etc. who overthink it and come to wrong conclusions, and industry people who have an incentive to downplay it.

Liv Boeree (@liv_boeree)'s Twitter Profile Photo

Should we be racing to build superintelligent AI? Here's my conversation with "Keep The Future Human" author Anthony Aguirre and MIRI CEO Malo Bourgon, who both strongly believe we shouldn't... ...a controversial take here on TPOT, but given the stakes in either direction, it's…

Harlan Stewart (@humanharlan)'s Twitter Profile Photo

Vance was asked if the US Government is capable of pausing AI in a loss-of-control scenario. “The honest answer to that is that I don’t know, because part of this arms race component is: if we take a pause, does the People’s Republic of China not take a pause?” Great to see…

dave kasten (@david_kasten)'s Twitter Profile Photo

One very weird thing is that AI companies are trying really really hard to make it clear that they're going for RSI, and functionally no one on Capitol Hill believes them (or even understands this is the literal Business Plan)

Gabriel (@gabe_cc)'s Twitter Profile Photo

FUD doesn't make the case for the anti-thesis. It works by raising the costs of making the case for the correct thesis. No one takes voluntary commitments seriously, but they make rallying against corps and passing laws harder.

Connor Leahy (@npcollapse)'s Twitter Profile Photo

You think politicians won't listen about AGI risk? Or that you can't talk to them? Think again! This is THE definitive piece writing up how ControlAI did it, and you can too!

Timothy B. Lee (@binarybits)'s Twitter Profile Photo

Thinking that mechanistic interpretability is the key to understanding AI safety is like thinking that neuroscience is the key to understanding political science.

Faisal Islam (@faisalislam)'s Twitter Profile Photo

🚨 Mind-blowing interview with Turing Award-winning Yoshua Bengio for BBC Newsnight. One of the three founding fathers of AI is now warning: “The worst-case scenario is human extinction.” AI isn’t just risky — it could end us. 1/10 🧵 📺 youtu.be/c4Zx849dOiY?t=0

Steven Adler (@sjgadler)'s Twitter Profile Photo

Had a great time talking with the ControlAI team about my time at OpenAI and other light & breezy AI topics - full episode link below