Willie Agnew | wagnew@dair-community.social (@willie_agnew)'s Twitter Profile
Willie Agnew | wagnew@dair-community.social

@willie_agnew

demure & respectful, critical/ethical AI @cmuhcii @QueerinAI @OUTinSTEM @ResistanceAI he/they, opinions my own, like/rt not endorse, @wagnew@dair-community.social

ID: 1272251447434817537

Website: http://william-agnew.com | Joined: 14-06-2020 19:36:04

2.2K Tweets

2.2K Followers

1.1K Following

Willie Agnew | wagnew@dair-community.social (@willie_agnew)

yes go work on ai safety for the world's biggest vector of neonazi propaganda 🤡 (but let's talk more about why there's so much overlap)

Anaelia (Elia) Ovalle (@ovalle_elia)

📢New paper! When LLMs are aligned towards "harmlessness", who are they actually safe for? For gender minorities, pre-existing patterns of LLM exclusion don't just persist, they can amplify through alignment - measurable in both LLM outputs & implicit reward signals! 🧵1/10

Elliot Creager (@elliot_creager)

The US DOJ wants Google to sell Chrome to correct monopolistic behavior in the search market. But what happens to user data previously collected by Chrome, and to the models Google has trained on Chrome data? In our NeurIPS Workshop paper…

Deb Raji (@rajiinio)

Don't get why AI Safety Frameworks only focus on risk being correlated to increases in "capability" (i.e., how much an individual model can do) vs. other things (e.g., the scale of adoption/impact, domain of use, etc.)? E.g., the DSA classifies risk on platforms by number of users

Willie Agnew | wagnew@dair-community.social (@willie_agnew)

"Stand Together is one of the primary institutions within the Koch network, and was created by Charles Koch. [...] It funds a wide range of conservative nonprofits, including the anti-LGBTQ Alliance Defending Freedom"

Willie Agnew | wagnew@dair-community.social (@willie_agnew)

I'm going to be in DC for a NIST AI policy workshop at UMD Dec 2nd and 3rd! I'm very interested in meeting other AI policy folks; please reach out if you'll be around too!

QueerInAI (@queerinai)

1/7 🌈 BIG NEWS ALERT! The hottest Queer in AI workshop is back - and this time we're official! We're thrilled to announce we'll be at #NAACL2025 as an official workshop, meaning your work can now be published in the ACL anthology! 🎉

Miles Brundage (@miles_brundage)

When someone tells me jalapeño poppers aren't healthy based on "generalization" from scientific studies about things other than jalapeño poppers x.com/NICKIMINAJ/sta…

Brian Merchant (@bcmerchant)

My new report for AI Now investigates the true power of "AGI"—as a narrative tool useful for winning investment, talent and headlines. I dig into the early days of OpenAI, to show how a story of a rising AI that will replace humans replaced the need for an actual business model.

Mostly here now: @davidthewid.bsky.social (@davidthewid)

📢 NEW Paper! Sireesh Gururaja, Lucy Suchman, and I examine a corpus of 7,000 US Military grant solicitations to ask what the world's largest military wants to do with AI, by looking at what it seeks to fund. 📄: arxiv.org/pdf/2411.17840 We find…

Willie Agnew | wagnew@dair-community.social (@willie_agnew)

I'll be at NeurIPS Wednesday through Sunday morning! I'm on the job market this year! Happy to meet about AI ethics, policy, audits, accountability, and empowering data subjects. I'll also be running policy co-design sessions at the Queer in AI workshop.

Jennifer Chien (they/them) (@thechiennifer)

In case you missed it, I am at the conference all week and on the job market, so happy to chat all things research (e.g. RAI, ethics, user agency, content moderation, human-centered AI) #NeurIPS2024 NeurIPS Conference

MMitchell (@mmitchell_ai)

At #NeurIPS2024, the keynote speaker perpetuated explicit racist stereotypes against Chinese students. Generalizations against a community subject to discrimination, even as an "example", further provoke discrimination. Below, video of an audience member's perfect response.

@timnitGebru (@dair-community.social/bsky.social) (@timnitgebru)

Dear computer vision researchers, students & practitioners📢 Remi Denton & I have written what I consider to be a comprehensive paper on the harms of computer vision systems reported to date & how people have proposed addressing them. PDF: cdn.sanity.io/files/wc2kmxvk…

Willie Agnew | wagnew@dair-community.social (@willie_agnew)

I'm kind of confused by all the AI agents safety work. "What if we lose control of them?" But who is this "we" that controls AI currently? Not me, you, or any remotely democratic or accountable entity.