Peter Kairouz (@kairouzpeter)'s Twitter Profile
Peter Kairouz

@kairouzpeter

@GoogleAI researcher focusing on federated learning, security, and differential privacy.

ID: 1089903328987938816

Link: https://kairouzp.github.io · Joined: 28-01-2019 15:09:37

66 Tweets

1.1K Followers

147 Following

Peter Kairouz (@kairouzpeter)'s Twitter Profile Photo

🚨 Launched: Distributed Differential Privacy for Federated Learning 🚨 It's been a privilege to work with super talented people on advancing the state-of-the-art in privacy-preserving AI technologies. Super excited to share this major milestone -- hope you enjoy it 😎🙌
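The gist of distributed DP for federated learning: each client clips its model update and adds a small local share of noise, so that the sum the server receives (via secure aggregation) carries the full noise of a centrally applied mechanism. A minimal sketch, assuming continuous Gaussian noise and made-up function names; the launched system actually uses discrete noise and modular secure aggregation:

```python
import numpy as np

def client_update(update, clip_norm, noise_stddev, n_clients, rng):
    """Clip a client's update and add its local share of Gaussian noise.

    Each client adds noise with stddev noise_stddev / sqrt(n_clients), so
    the sum over all clients carries noise with stddev noise_stddev, as if
    a trusted server had added it centrally.
    """
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / norm)
    local_stddev = noise_stddev / np.sqrt(n_clients)
    return clipped + rng.normal(0.0, local_stddev, size=update.shape)

def server_aggregate(noisy_updates):
    # The server only ever sees the (already noised) aggregate;
    # individual contributions stay masked.
    return np.mean(noisy_updates, axis=0)

rng = np.random.default_rng(0)
updates = [rng.normal(size=4) for _ in range(100)]
noisy = [client_update(u, clip_norm=1.0, noise_stddev=0.5,
                       n_clients=100, rng=rng) for u in updates]
agg = server_aggregate(noisy)
```

The key design point is that no single client's noise is enough to hide its update on its own; privacy comes from the aggregate noise across all clients.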

Alina Oprea (@alinamoprea)'s Twitter Profile Photo

I am on sabbatical at Google this academic year. Peter Kairouz and I are looking for a PhD student researcher this Spring / Summer to work on privacy attacks in large language models. If interested, please contact me directly by email and send your CV.

Andreas Terzis (@aterzis)'s Twitter Profile Photo

As differential privacy is used in industry and the public sector, we are learning what it can and cannot do. In July 2022, we organized a workshop to discuss topics pertaining to privacy and its implications for the design of industry-grade systems. (1/4) arxiv.org/abs/2304.06929

Peter Kairouz (@kairouzpeter)'s Twitter Profile Photo

I invite you all to attend this amazingly organized workshop on new frontiers in federated learning and privacy preserving AI. I’m sure you’ll find it interesting and informative 😎

Peter Kairouz (@kairouzpeter)'s Twitter Profile Photo

Training high-quality production language models with a DP epsilon < 1? Many experts said it couldn't be done... But my team at Google Research just proved them wrong 🔥 Find out how we achieved this in our latest blog post: goo.gle/42RXzob
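For context, the standard recipe behind DP model training is DP-SGD: clip each example's gradient, sum, and add calibrated Gaussian noise. A toy sketch with hypothetical names (not the method from the blog post; reaching epsilon < 1 in practice additionally requires very large batches and tight privacy accounting):

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD step: clip each example's gradient to clip_norm,
    sum, add Gaussian noise, and average over the batch.

    noise_multiplier is the ratio of the noise stddev to the clip norm;
    larger values give stronger privacy at the cost of accuracy.
    """
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / norm))
    summed = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)

rng = np.random.default_rng(0)
grads = [rng.normal(size=3) for _ in range(8)]
step = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```

Clipping bounds each example's influence on the update, which is what lets the added noise translate into a formal DP guarantee.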

Eugene Bagdasarian (@ebagdasa)'s Twitter Profile Photo

Imagine asking an LLM assistant to book a restaurant. It needs to know your name, phone number, and a credit card number. But does it know not to share your age or SSN? We show that an adversarial third-party can easily extract personal information by hijacking the context. 1/5

Peter Kairouz (@kairouzpeter)'s Twitter Profile Photo

This is an incredible opportunity for anyone working on federated, collaborative, or decentralized learning! You’ll be able to interact with and learn from some of the most impressive researchers in the field. Please apply!

Ken Liu (@kenziyuliu)'s Twitter Profile Photo

An LLM generates an article verbatim—did it “train on” the article? It’s complicated: under n-gram definitions of train-set inclusion, LLMs can complete “unseen” texts—both after data deletion and adding “gibberish” data. Our results impact unlearning, MIAs & data transparency🧵
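A toy illustration of why n-gram definitions of train-set inclusion diverge from verbatim membership: a sentence stitched together from overlapping fragments of the corpus passes an n-gram inclusion test even though it never appears verbatim. (Helper names are mine; the paper's actual definitions are more careful.)

```python
def ngrams(text, n):
    """All word-level n-grams of a whitespace-tokenized string."""
    tokens = text.split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_included(candidate, corpus, n):
    """Naive membership test: every n-gram of candidate occurs in corpus."""
    return ngrams(candidate, n) <= ngrams(corpus, n)

corpus = "the cat sat on the mat . the dog sat on the rug"
candidate = "the cat sat on the rug"  # never appears verbatim in the corpus

# Under a 3-gram inclusion definition, the candidate counts as "in" the
# training set, even though the model could produce it without ever
# having seen that exact sentence.
```

Since every 3-gram of the candidate occurs somewhere in the corpus, the test reports inclusion; an LLM completing such a sentence is therefore weak evidence about what it actually trained on.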