Dr. Peter Slattery (@peterslattery1)'s Twitter Profile
Dr. Peter Slattery

@peterslattery1

Lead at the AI Risk Repository | Researcher @MITFutureTech

ID: 1113170874

Website: https://www.pslattery.com/ | Joined: 23-01-2013 02:07:29

775 Tweets

741 Followers

952 Following

MIT FutureTech (@mitfuturetech)'s Twitter Profile Photo

We are pleased to announce that Regina Barzilay (@BarzilayRegina), from the MIT Jameel Clinic for AI & Health (@AIHealthMIT), will be presenting at the FutureTech Workshop on The Role of AI in Science, scheduled for November 2nd and 3rd, 2024.

Regina's talk, titled 'How AI contributes to the…
Dr. Peter Slattery (@peterslattery1)'s Twitter Profile Photo

"in this paper, we advocate that stakeholders in AI risk management should be aware of the nuances, synergies, and interplay between safety and security, and unambiguously take into account the perspectives of both disciplines in order to devise mostly effective and holistic risk

MIT FutureTech (@mitfuturetech) 's Twitter Profile Photo

Today is the first day of the MIT FutureTech (@MITFutureTech) workshop on the role of AI in science! We are bringing together some of the most influential voices in AI and scientific research to discuss the profound ways artificial intelligence is reshaping science as we know it.

Topics…
MIT FutureTech (@mitfuturetech)'s Twitter Profile Photo

Today is the second day of our workshop on the role of AI in science! First up, we have Neil Thompson (@ProfNeilT) introducing session 3: De-democratization and concentration of power in AI. See the full agenda here: futuretech.mit.edu/workshop-on-th…
Dr. Peter Slattery (@peterslattery1)'s Twitter Profile Photo

"In this report, we argue that there is a realistic possibility that some AI systems will be conscious and/or robustly agentic in the near future. That means that the prospect of AI welfare and moral patienthood — of AI systems with their own interests and moral significance — is

"In this report, we argue that there is a realistic possibility that some AI systems will be conscious and/or robustly agentic in the near future. That means that the prospect of AI welfare and moral patienthood — of AI systems with their own interests and moral significance — is
MIT FutureTech (@mitfuturetech)'s Twitter Profile Photo

📢 MIT FutureTech is hiring a Product and Data Visualization Designer. This is an exciting opportunity to support a rapidly growing lab that works on socially impactful AI and computing research, and to rapidly develop cutting-edge skills and experience.

Led by Dr. Neil Thompson…
MIT FutureTech (@mitfuturetech)'s Twitter Profile Photo

📢 MIT FutureTech is seeking a Research Assistant to work with Dr. Danial Lashkari (@DanialLashkari) on projects related to the economics of technological progress, innovation, and artificial intelligence. This is an exciting opportunity to support a rapidly growing lab that works on…
Dr. Peter Slattery (@peterslattery1)'s Twitter Profile Photo

📢 We added 13 new frameworks and nearly 300 new risks to the MIT AI Risk Repository in our December 2024 update.

Authored by experts across the US, UK, Australia, Canada, China, and Germany, the new frameworks include government reports, peer-reviewed articles, and preprints.
Dr. Peter Slattery (@peterslattery1)'s Twitter Profile Photo

📢 The MIT AI Risk Repository (@MITAIRisk) was selected by the Paris Peace Forum (@ParisPeaceForum) for showcasing at the AI Action Summit! Come and see our booth on February 10 👉 bit.ly/4hx5OMP. Visit airisk.mit.edu #AIActionSummit
Cas (Stephen Casper) (@stephenlcasper)'s Twitter Profile Photo

📣 Announcing the AI Agent Index

AI agents are growing in number, capabilities, and impact. In response, we introduce the first public resource documenting the technical and safety features of deployed agentic AI systems. aiagentindex.mit.edu

Harmony Intelligence (@harmonyintel)'s Twitter Profile Photo

🚨 AI is evolving faster than we ever imagined. How do we stay ahead of the risks? 🚨 At Harmony Intelligence, we believe the answer lies in AI-powered defenses that outpace AI-powered threats. Cybercriminals are already using AI to launch sophisticated attacks. 🧵⬇️

david rein (@idavidrein)'s Twitter Profile Photo

I’m excited to share details about HCAST (Human-Calibrated Autonomy Software Tasks), a benchmark we’ve been developing at METR for the past year to measure the abilities of frontier AI systems to complete diverse software tasks autonomously.

MIT FutureTech (@mitfuturetech)'s Twitter Profile Photo

Peter Slattery (@PeterSlattery1) recently spoke about his work on the MIT AI Risk Repository (@MITAIRisk) on a panel about the future of AI at the U.S. Securities and Exchange Commission (@SECGov) in Washington, DC.

See agenda here: sec.gov/newsroom/press…
Dr. Peter Slattery (@peterslattery1)'s Twitter Profile Photo

More evidence that AI i) exceeds human performance in an important capability and ii) has increasingly powerful, and dangerous, capabilities.

Ethan Mollick (@emollick)'s Twitter Profile Photo

Another paper showing AI (Claude 3.5) is more persuasive than the average human, even when the humans had financial incentives.

In this case, either AI or humans (paid if they were persuasive) tried to convince quiz takers (paid for accuracy) to pick either right or wrong answers