Prakhar Ganesh (@prakhar_24) 's Twitter Profile
Prakhar Ganesh

@prakhar_24

PhD @ McGill and Mila | Interested in Responsible AI and Revenge

ID: 861897494011170816

Link: https://prakharg24.github.io/ | Joined: 09-05-2017 10:55:50

35 Tweets

42 Followers

173 Following

Durk Kingma (@dpkingma) 's Twitter Profile Photo

Generative models (such as Dall-E 2 and PaLM) are becoming just such an insanely powerful, almost magic-like technology, it's completely NUTS. And it seems like most (non-ML) people still don't fully grasp the implications. This technology will thoroughly transform society.

Kristian Lum (@kldivergence) 's Twitter Profile Photo

Evaluating subgroup disparities in ML shouldn't only be a "fairness" thing. It should be ML/data science best practice in general. If your model performs poorly for some identifiable subgroup, you should probably go fix that, just in the interest of making a better model.
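
A minimal sketch of what per-subgroup evaluation can look like in practice, assuming a scikit-learn-style setup with held-out labels, predictions, and a column of group membership (the function name, toy data, and column names are hypothetical, purely for illustration):

```python
# Hypothetical sketch: report a metric per identifiable subgroup instead of
# a single aggregate score, so poorly served subgroups become visible.
import pandas as pd
from sklearn.metrics import accuracy_score

def per_group_report(y_true, y_pred, groups):
    """Return overall and per-subgroup accuracy so gaps are visible."""
    df = pd.DataFrame({"y_true": y_true, "y_pred": y_pred, "group": groups})
    rows = [{"group": "overall",
             "n": len(df),
             "accuracy": accuracy_score(df.y_true, df.y_pred)}]
    for name, sub in df.groupby("group"):
        rows.append({"group": name,
                     "n": len(sub),
                     "accuracy": accuracy_score(sub.y_true, sub.y_pred)})
    return pd.DataFrame(rows)

# Example usage with toy data (purely illustrative):
report = per_group_report(
    y_true=[1, 0, 1, 1, 0, 1],
    y_pred=[1, 0, 0, 1, 0, 0],
    groups=["a", "a", "a", "b", "b", "b"],
)
print(report)  # a poorly served subgroup shows up as a low per-group score
```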

NeurIPS Conference (@neuripsconf) 's Twitter Profile Photo

The preliminary list of accepted #NeurIPS2022 workshops is out! Whether you are attending in person or virtually, we offer a wide range of relevant workshops: blog.neurips.cc/2022/07/17/ann…

Nazneen Rajani (@nazneenrajani) 's Twitter Profile Photo

Here's hoping I don't need to update this slide again before my talk next week at EMNLP 2025. If anyone is planning to release anything next week, please lmk soon 😅 Am I missing any text-only LLMs?

Sasha Luccioni, PhD 🦋🌎✨🤗 (@sashamtl) 's Twitter Profile Photo

We need to stop conflating open/gated access and open source. ChatGPT is *not* open source -- we don't know what model is under the hood, how it works, or any other tweaks/filters that are applied. (1/n)

Sam Altman (@sama) 's Twitter Profile Photo

ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It's a mistake to be relying on it for anything important right now. It's a preview of progress; we have lots of work to do on robustness and truthfulness.

Nando Fioretto (@nandofioretto) 's Twitter Profile Photo

Excited to share our work on data minimization for ML! The principle of data minimization is a cornerstone of global data protection regulations, but how do we implement it in ML contexts? 🧵: Let's dive into some insights. 🔗: arxiv.org/abs/2405.19471
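
As a rough illustration of the data-minimization idea (not the algorithm from the linked paper), one can greedily drop input features as long as a validation metric stays within a tolerance, keeping only what the model actually needs. Everything below, including the tolerance, the logistic-regression choice, and the toy data, is an assumption made for the sketch:

```python
# Illustrative sketch only -- not the method from arxiv.org/abs/2405.19471.
# Greedily remove features while validation accuracy stays within `tol`
# of the full-feature baseline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def minimize_features(X, y, tol=0.01, seed=0):
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=seed)

    def score(cols):
        model = LogisticRegression(max_iter=1000).fit(X_tr[:, cols], y_tr)
        return accuracy_score(y_va, model.predict(X_va[:, cols]))

    kept = list(range(X.shape[1]))
    baseline = score(kept)
    for col in sorted(kept, reverse=True):  # try dropping one feature at a time
        candidate = [c for c in kept if c != col]
        if candidate and score(candidate) >= baseline - tol:
            kept = candidate  # feature was not needed; minimize it away
    return kept

# Usage (toy data): collect only the columns the task actually requires.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # only two features matter here
print(minimize_features(X, y))
```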

Prakhar Ganesh (@prakhar_24) 's Twitter Profile Photo

Are LLM adversaries more powerful than we assume? 🤔 Yes! Our work shows that real-world attackers with multifaceted access to models and prompts have significantly higher extraction risks. An exciting paper led by Yash More, with Golnoosh Farnadi @NeurIPS2024. arxiv.org/abs/2407.02596
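
For context, a hedged sketch of what a basic extraction probe looks like in general: prompt a model with known prefixes and check whether it reproduces the corresponding suffixes verbatim. This is a generic illustration, not the multifaceted attack studied in the paper; the model name, prefix/generation lengths, and candidate strings are placeholders.

```python
# Generic extraction-probe sketch; NOT the attack from arxiv.org/abs/2407.02596.
# Prompt a causal LM with known prefixes and flag verbatim reproduction of the
# corresponding suffixes. Model name and lengths are placeholder assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works for the sketch
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def extraction_hits(samples, prefix_len=20, gen_len=30):
    """samples: list of strings suspected to be in the training data."""
    hits = 0
    for text in samples:
        ids = tok(text, return_tensors="pt").input_ids[0]
        prefix, suffix = ids[:prefix_len], ids[prefix_len:prefix_len + gen_len]
        out = model.generate(prefix.unsqueeze(0),
                             max_new_tokens=gen_len,
                             do_sample=False)  # greedy decoding
        generated = out[0][prefix_len:prefix_len + len(suffix)]
        if len(suffix) > 0 and generated.tolist() == suffix.tolist():
            hits += 1  # model reproduced the suffix verbatim
    return hits / max(len(samples), 1)

# Usage: fraction of candidate strings whose suffix is regenerated verbatim.
print(extraction_hits(["The quick brown fox jumps over the lazy dog " * 3]))
```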

Prakhar Ganesh (@prakhar_24) 's Twitter Profile Photo

What's the best bias mitigation algorithm? There isn't one! Benchmarks often miss critical nuances. In our latest paper, we unpack these pitfalls. Critical work w/ Usman Gohar, Lu Cheng, Golnoosh Farnadi @NeurIPS2024. 🔥 Spotlight at AFME 2024 @ NeurIPS 2024. 📰 arxiv.org/pdf/2411.11101
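
One pitfall such benchmarks can hide is run-to-run variance: a single-seed comparison can flip the ranking of mitigation methods. A minimal, hypothetical sketch of reporting a fairness gap with uncertainty across seeds; the `train_and_predict` interface and the commented usage are placeholders, not the paper's benchmark.

```python
# Hypothetical sketch: compare bias-mitigation methods across several seeds
# instead of a single run, so ranking instability is visible.
import numpy as np

def accuracy_gap(y_true, y_pred, groups):
    """Absolute accuracy difference between two groups (a simple fairness gap)."""
    accs = []
    for g in np.unique(groups):
        mask = groups == g
        accs.append(np.mean(y_true[mask] == y_pred[mask]))
    return abs(accs[0] - accs[1])

def evaluate_method(train_and_predict, data, seeds=range(5)):
    """train_and_predict(data, seed) -> (y_true, y_pred, groups); placeholder API."""
    gaps = [accuracy_gap(*train_and_predict(data, seed)) for seed in seeds]
    return np.mean(gaps), np.std(gaps)

# Usage: a method that looks best on one seed may not be best on average.
# for name, method in methods.items():
#     mean_gap, std_gap = evaluate_method(method, data)
#     print(f"{name}: gap = {mean_gap:.3f} ± {std_gap:.3f}")
```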

Yash More (@yash_347) 's Twitter Profile Photo

1/ 🚨 Large Language Models (LLMs) like ChatGPT are changing our daily workflows, but what are the hidden costs? At COLM 2024, we presented our work on sensitive disclosures in user-chatbot interactions. Here's a thread unpacking the key takeaways if you missed our poster! 🧵👇
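
As a toy illustration of what flagging sensitive disclosures in chat transcripts can look like (a simple regex pass over messages); this is not the methodology from the COLM 2024 paper, and the patterns and example transcript are deliberately minimal assumptions:

```python
# Toy illustration only -- not the COLM 2024 paper's methodology.
# Flag a few obvious sensitive-disclosure patterns in user messages.
import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_disclosures(messages):
    """Return (message_index, category) pairs where a pattern matches."""
    flags = []
    for i, msg in enumerate(messages):
        for name, pattern in PATTERNS.items():
            if pattern.search(msg):
                flags.append((i, name))
    return flags

# Usage on a toy transcript:
chat = [
    "Can you rewrite my cover letter?",
    "Sure, here it is. Reach me at jane.doe@example.com or +1 514 555 0199.",
]
print(flag_disclosures(chat))  # [(1, 'email'), (1, 'phone')]
```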

Nando Fioretto (@nandofioretto) 's Twitter Profile Photo

🆘Help needed! Are you working on Privacy (from a Technical (e.g., Differential Privacy), Policy, or Law perspective)? Please give your availability to review for PPAI (ppai-workshop.github.io) if you can! We'd highly appreciate it! 🙏 forms.gle/dqjVsBsR2y81v1…

Yash More (@yash_347) 's Twitter Profile Photo

1/ 🚨 How vulnerable are large language models (LLMs) to extraction attacks? Our paper (presented at ACL PrivateNLP) explores a more realistic adversary capable of exploiting LLM brittleness and multi-faceted access to data. The risks might be bigger than you think. 🧵👇

Aditi Khandelwal (@aditi184) 's Twitter Profile Photo

😡 Absolutely disappointed with Overleaf. My account was deleted without my knowledge, and they’ve done nothing to help me recover it or transfer to my secondary email. Years of work, including all my CVs, SOPs, papers, etc., gone! This is unacceptable. #Overleaf