Andrew Strait (@agstrait)'s Twitter Profile
Andrew Strait

@agstrait

agstrait.bsky.social

Not here anymore - at the other place.

AD @AdaLovelaceInst. Formerly @DeepMind Ethics & Society, @Google, @oiioxford

ID: 1284995293

Joined: 21-03-2013 03:50:10

4.4K Tweets

2.2K Followers

1.1K Following

Andrew Strait (@agstrait)

A niche example of why we need to think beyond just capabilities and about affordances/design choices of these kinds of systems as drivers of risk. Pausing to give a user a *sense* that it's thinking harder sounds like a dark pattern to me.

Andrew Strait (@agstrait)

A great thread re: problematic extrapolations from claims about AI being superhuman at tasks.
1. Coding =/= all computer-related tasks, let alone all tasks.
2. Generating code to complete a task =/= the most efficient, secure way to complete a task.

AI Security Institute (@aisecurityinst)

We've funded 20 new research projects to enhance AI security in critical infrastructure ⚡ Our Systemic AI Safety Grants Programme, announced at the Seoul AI Summit, has awarded seed grants of up to £200,000 to projects tackling AI risks 🧵👇

Andrew Strait (@agstrait)

It was wonderful to celebrate the first cohort of the AI Security Institute Systemic Safety grants last night with some brilliant remarks from Minister Clark on the necessary work ahead. Congratulations again to all of our incredible grantees and partners!

Andrew Strait (@agstrait)

I too find this really weird, mainly in that it shows the frontier of AI research is at risk of moving further away from producing useful, safe, reliable products. These seem like features, not bugs.

Sayash Kapoor (@sayashk)

How will AI impact the economy? Can we defend against misuse? What policies would mitigate the risks of AI? Thrilled to share that Arvind Narayanan and I are writing another book to tackle these questions! Today, we release a paper laying out our argument: AI as Normal Technology.

AI Security Institute (@aisecurityinst)

Advanced AI systems require complex evaluations to measure abilities, but conventional analysis techniques often fall short. Introducing HiBayES: a flexible, robust statistical modelling framework that accounts for the nuances & hierarchical structure of advanced evaluations.

Saffron Huang (@saffronhuang)

Newest ⚡ reboot ⚡ 🎙️ post: jessica dai and I discuss forecasting, and how people present unhelpful narratives about the future (mostly by picking on AI 2027, sorry guys).

Why we should view the future as constructed, not predicted.
