
Chris Russell
@c_russl
Associate Professor of AI, Government, and Policy at the Oxford Internet Institute. ELLIS Fellow.
Formerly AWS and the Alan Turing Institute.
ID: 1042441306348376065
19-09-2018 15:52:29
225 Tweets
697 Followers
283 Following

News release alert! Large Language Models pose risk to science with false answers, say Oxford AI experts Brent Mittelstadt @bmittelstadt.bsky.social, Sandra Wachter @swachter.bsky.social & Chris Russell, Oxford Internet Institute, OxfordSocialSciences, University of Oxford 1/3 #LLMs oii.ox.ac.uk/large-language…

Fresh off the press: my new paper in Nature Human Behaviour w/ Brent Mittelstadt @bmittelstadt.bsky.social & Chris Russell, "To protect science, we must use LLMs as zero-shot translators", where we show how GenAI poses huge risks to science & society & what can be done to stop it nature.com/articles/s4156… Oxford Internet Institute

How can we use LLMs like ChatGPT safely in science, research & education? In our new Nature Human Behaviour paper we advocate for prompting AI with true information, using zero-shot translation to avoid hallucinations. Paper: rdcu.be/drzos Oxford Internet Institute Sandra Wachter @swachter.bsky.social Chris Russell
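The zero-shot translation idea is to hand the model vetted, true source text and ask it only to restructure that text, rather than answer from its own parametric memory. A minimal sketch of this prompting pattern, assuming the OpenAI Python client (the model name, prompt wording, and example text below are illustrative assumptions, not taken from the paper):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Vetted, true source material supplied by the user, not generated by the model.
source_text = (
    "Large language models frequently produce plausible but false statements, "
    "a failure mode commonly called hallucination."
)

# Zero-shot translation: the model restructures the supplied text into a new
# form instead of answering a question from its own parametric knowledge.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {
            "role": "system",
            "content": (
                "Rewrite the user's text as one plain-language bullet point. "
                "Use only the information provided; add nothing new."
            ),
        },
        {"role": "user", "content": source_text},
    ],
)
print(response.choices[0].message.content)
```

Because the prompt supplies the facts and restricts the model to reformatting them, any deviation in the output can be checked directly against the source text.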

SO incredibly excited to give a keynote at the NeurIPS Conference Regulatable ML Workshop on my new Nature Human Behaviour paper rdcu.be/drzos w/ Brent Mittelstadt @bmittelstadt.bsky.social & Chris Russell, alongside so many outstanding speakers! See you 16 Dec 9.30am CST! regulatableml.github.io/#overview Oxford Internet Institute Berkman Klein Center for Internet & Society

Can't wait for my keynote at the NeurIPS Conference, 16.12 at 9:30am CST: "Regulating Code: What the EU has in store for the governance of AI, foundation models, & generative AI", incl my new paper in Nature Human Behaviour rdcu.be/drzos w/ Brent Mittelstadt @bmittelstadt.bsky.social & Chris Russell regulatableml.github.io/#schedule

Excited for my keynote at the NeurIPS Conference tmr, 16.12 at 9:30am CST: "Regulating Code: What the EU has in store for the governance of AI, foundation models, & generative AI", incl my new paper in Nature Human Behaviour rdcu.be/drzos w/ Brent Mittelstadt @bmittelstadt.bsky.social & Chris Russell

Great to see work by Profs Sandra Wachter @swachter.bsky.social, Brent Mittelstadt @bmittelstadt.bsky.social and Chris Russell, all Oxford Internet Institute, referenced as a case study for cross-disciplinary impact in this new Academy of Social Sciences report.



#AI #AIEthics A must-read research paper by Sandra Wachter @swachter.bsky.social, Brent Mittelstadt @bmittelstadt.bsky.social & Chris Russell on how to make Large Language Models (LLMs) more truthful and counter #misinformation 👇 "To Protect Science, We Must Use LLMs as Zero-Shot Translators"

My new paper w/ Brent Mittelstadt @bmittelstadt.bsky.social & Chris Russell, "Do LLMs have a legal duty to tell the truth?" We explore whether developers need to reduce hallucinations and inaccurate & harmful outputs, what we term "careless speech", and show who is liable for GenAI outputs. papers.ssrn.com/sol3/papers.cf…

Congrats Algorithm Audit for this important work & for uncovering systemic discrimination in access to education. I am thrilled that my paper "Why Fairness Cannot Be Automated" papers.ssrn.com/sol3/papers.cf… w/Brent Mittelstadt @bmittelstadt.bsky.social Chris Russell was useful for the study! algorithmaudit.eu/algoprudence/c…

🚨New paper and fairness toolkit alert🚨 Announcing OxonFair: A Flexible Toolkit for Algorithmic Fairness w/ Zihao Fu, Sandra Wachter @swachter.bsky.social, Brent Mittelstadt @bmittelstadt.bsky.social and Chris Russell. Toolkit: github.com/oxfordinternet… Paper: papers.ssrn.com/sol3/papers.cf…
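To give a flavour of what a fairness toolkit measures, here is a sketch of one standard group fairness metric, demographic parity, in plain NumPy. This is a generic illustration, not OxonFair's actual API, and the function name is hypothetical:

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1); group: binary group membership (0/1).
    0 means both groups receive positive predictions at the same rate;
    larger values indicate greater disparity.
    """
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy example: group 1 receives positive predictions more often than group 0.
y_pred = np.array([1, 0, 1, 0, 1, 1, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # 0.25
```

Fairness toolkits typically expose many such metrics and let you enforce them as constraints during training or decision thresholding.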

New open access paper on hallucinations in LLMs out now in Royal Society Open Science: "Do large language models have a legal duty to tell the truth?" w/ Sandra Wachter @swachter.bsky.social and Chris Russell royalsocietypublishing.org/doi/10.1098/rs… The Royal Society Oxford Internet Institute University of Oxford Governance of Emerging Technologies

Delighted to see coverage of our new paper on truth and LLMs in New Scientist! newscientist.com/article/244286… Oxford Internet Institute University of Oxford Governance of Emerging Technologies Sandra Wachter @swachter.bsky.social Chris Russell

Such an honour to be featured in this Nature article by Vivien Marx🐘📬⛅️ w/ Brent Mittelstadt @bmittelstadt.bsky.social & Chris Russell on our work on GenAI, truth & hallucinations

🚨 [AI REGULATION] The paper "Do Large Language Models Have a Legal Duty to Tell the Truth?" by Sandra Wachter @swachter.bsky.social, Brent Mittelstadt @bmittelstadt.bsky.social & Chris Russell is a MUST-READ for everyone in AI governance. Quotes: "Free and unthinking use of LLMs undermine science, education and public discourse in

🚀 Paper news! Excited that our paper, with Brent Mittelstadt @bmittelstadt.bsky.social & Chris Russell was accepted at the NeurIPS Safe GenAI workshop! In this work we explore how fine-tuning can impact toxicity rates in language models... 🧵


Another example of what I, Brent Mittelstadt @bmittelstadt.bsky.social, and Chris Russell termed careless speech. Subtle hallucinations are dangerous & developers are not (yet) liable for them. We argue they should be. See paper: Do LLMs have a legal duty to tell the truth? tinyurl.com/435jba5p tinyurl.com/4wpske2s