Cameron Jones (@camrobjones)'s Twitter Profile
Cameron Jones

@camrobjones

Postdoc in the Language and Cognition lab at UC San Diego. I’m interested in persuasion, deception, LLMs, and social intelligence.

ID: 917856030

Link: https://camrobjones.com

Joined: 31-10-2012 23:28:53

1.1K Tweets

1.1K Followers

718 Following

Kiran Garimella (@gvrkiran)'s Twitter Profile Photo


this is an amazing paper on AI persuasion. 

Large study with 10k+ participants. LLMs cost roughly half as much per converted voter as standard tactics, but only if you can get people to talk to them. Distribution/reach, not rhetoric, is the real constraint.

arxiv.org/abs/2505.00036
david rein (@idavidrein)'s Twitter Profile Photo

I was pretty skeptical that this study was worth running, because I thought that *obviously* we would see significant speedup. x.com/METR_Evals/sta…

Tom McCoy (@rtommccoy)'s Twitter Profile Photo

So much research is being done about LLMs that it's hard to stay on top of the literature. To help with this, I've made a list of all the most important papers from the past 8 years: rtmccoy.com/pubs/ I hope you enjoy!

Hannah Rose Kirk (@hannahrosekirk)'s Twitter Profile Photo

My team at AI Security Institute is hiring! This is an awesome opportunity to get involved with cutting-edge scientific research inside government on frontier AI models. I genuinely love my job and the team 🤗 Link: civilservicejobs.service.gov.uk/csr/jobs.cgi?j… More Info: ⬇️

Owain Evans (@owainevans_uk)'s Twitter Profile Photo


Our setup:
1. A “teacher” model is finetuned to have a trait (e.g. liking owls) and generates an unrelated dataset (e.g. numbers, code, math).
2. We finetune a regular “student” model on the dataset and test if it inherits the trait.
This works for various animals.