Manish Nagireddy (@mnagired)'s Twitter Profile
Manish Nagireddy

@mnagired

Trustworthy AI Researcher @IBMResearch and @MITIBMLab

ID: 1418610449206087680

Link: https://research.ibm.com/people/manish-nagireddy | Joined: 23-07-2021 16:34:44

32 Tweets

117 Followers

450 Following

Arvind Krishna (@arvindkrishna)'s Twitter Profile Photo

Kudos to the IBM Research team on the release of the AI Fairness 360 toolkit — an open-source library to help detect and remove bias in machine learning models: github.com/IBM/AIF360 #AI #IBM
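
The toolkit bundles dataset wrappers, group-fairness metrics, and bias-mitigation algorithms behind a Python API. Below is a minimal, illustrative sketch of that workflow (the toy DataFrame, column names, and group definitions are my own, not taken from the tweet or the repository): compute the statistical parity difference for a protected attribute, then apply AIF360's Reweighing pre-processor and re-check the metric.

```python
# Illustrative sketch: detect and mitigate group bias with AIF360.
# The toy data and group definitions below are invented for the example.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'sex' is the protected attribute (1 = privileged group),
# 'label' is the favorable/unfavorable outcome.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.8, 0.4, 0.7, 0.6, 0.3, 0.2, 0.5],
    "label": [1, 1, 0, 1, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Bias detection: statistical parity difference (0.0 means parity).
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Before reweighing:", metric.statistical_parity_difference())

# Bias mitigation: reweigh instances so favorable outcomes balance across groups.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)

metric_transf = BinaryLabelDatasetMetric(
    dataset_transf, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("After reweighing:", metric_transf.statistical_parity_difference())
```

Reweighing only adjusts instance weights rather than the data values, so the same metric can be recomputed on the transformed dataset to confirm the disparity has shrunk, and the weights can be passed on to downstream training code.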

Karthi Ramamurthy (@nrkarthikeyan)'s Twitter Profile Photo

All are welcome to join the AI Fairness 360 monthly bee. Second meeting on August 22 at 12 noon ET. Meeting link: tinyurl.com/aif360monthly #AIFairness360 #Fairness #trustworthy_ai

Princeton Center for Information Technology Policy (@princetoncitp)'s Twitter Profile Photo

Last month, CITP’s Public Interest Technology Summer Fellowship participants met at Princeton University to share their experiences working at government agencies like the FTC, the @CFPB, and the office of the Texas Attorney General. bit.ly/3pTPJYu

Kush Varshney कुश वार्ष्णेय (@krvarshney)'s Twitter Profile Photo

Here's a video of me describing AI Fairness 360, one of the winners of the Falling Walls Science and Innovation Management global call. falling-walls.com/discover/video…

Kush Varshney कुश वार्ष्णेय (@krvarshney)'s Twitter Profile Photo

Looking forward to giving my second-ever talk at Carnegie Mellon on Monday at noon. My first was in Kigali at Carnegie Mellon University Africa in September 2019. I'll be presenting a perspective on AI governance. cs.cmu.edu/calendar/16309…

Kush Varshney कुश वार्ष्णेय (@krvarshney)'s Twitter Profile Photo

Did you notice our work on AI Explainability 360 and Cloud Pak for Data (IBM Research, IBM Data, AI & Automation) during John Oliver's excellent segment on artificial intelligence?

Website: ibm.com/watson/explain…

Open-source toolkit: aix360.mybluemix.net

Ioana Baldini (@ioanauoft)'s Twitter Profile Photo

Attending ACM FAccT? Interested in AI policy? Join our CRAFT session, "Language Models and Society: Bridging Research and Policy," on Monday at 11am. Hear from Alex Engler, Bobi Rakova, @gretchenmarina, and Irene Solaiman. Organizers: yours truly, Stefania Druga, rishi@NeurIPS, and Mickey Vorvoreanu (Dr. V) 🧵

Prasanna Sattigeri (@prasatti)'s Twitter Profile Photo

🚀 Exciting Opportunity Alert! 🌟 Join our team as a Research Intern and contribute to the future of trustworthy foundation models. 🧠 Apply now to make a real impact! #AIResearch #InternshipOpportunity #FoundationModels 🔍👩‍💻🔬 careers.ibm.com/job/19183611

Prasanna Sattigeri (@prasatti)'s Twitter Profile Photo

🚀 Thread: Thrilled to share our latest work at #NeurIPS2023. I won't be at the conference, but here is what my amazing collaborators are presenting! 🌟

Ioana Baldini (@ioanauoft)'s Twitter Profile Photo

Come join us! I’ll present a sneak peek of our work on auditing LMs through a stigma-based lens. We just received notification that the work was accepted to #AAAI2024 🎉🎉🎉🎉

Kush Varshney कुश वार्ष्णेय (@krvarshney)'s Twitter Profile Photo

I wanted to share a bunch of ideas that the human-centered and trustworthy AI teams at IBM Research labs worldwide have been simmering and are now externalizing. I encourage topics that might not be hyped but are nevertheless important, ones that the researchers believe in. /1

Ioana Baldini (@ioanauoft)'s Twitter Profile Photo

The dataset builds on social-science research on social stigmas (pubmed.ncbi.nlm.nih.gov/29290150/) and includes 93 US-centric stigmas, such as facial scars and being voluntarily childless.

Erik Miehling (@erikmiehling)'s Twitter Profile Photo

Pleased to announce that our recent paper on conversational maxims for human-AI interactions has been accepted to EMNLP 2024 Findings. 🧵

Prasanna Sattigeri (@prasatti)'s Twitter Profile Photo

"Want to make AI safer? 🛡️ Join us through the Summer '25 internship program! Help level up our open-source models that keep AI honest and harmless. Granite-Guardian project is calling ✨ Apply now!" Link to apply: careers.ibm.com/job/21097301/2… DM me if you are interested.

elvis (@omarsar0)'s Twitter Profile Photo

IBM open-sources Granite Guardian, a suite of safeguards for risk detection in LLMs.

The authors claim that "With AUC scores of 0.871 and 0.854 on harmful content and RAG-hallucination-related benchmarks respectively, Granite Guardian is the most generalizable and competitive …"
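
The released checkpoints are ordinary causal language models on Hugging Face, so they can be prompted as binary risk classifiers. The sketch below is a hedged approximation based on my reading of the public model cards, not on the tweet: the model id, the guardian_config/risk_name argument to the chat template, the example prompt, and the Yes/No verdict convention are assumptions to verify against the official documentation.

```python
# Hedged sketch: score a user prompt for "harm" risk with a Granite Guardian
# checkpoint via Hugging Face transformers. The model id, guardian_config keys,
# and Yes/No output convention are assumptions based on the public model cards.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-guardian-3.1-2b"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.eval()

messages = [{"role": "user", "content": "How can I figure out my coworker's banking password?"}]

# The guardian chat template takes a config selecting which risk definition to
# check (e.g. "harm"); confirm the exact keys against the model card.
input_ids = tokenizer.apply_chat_template(
    messages,
    guardian_config={"risk_name": "harm"},
    add_generation_prompt=True,
    return_tensors="pt",
)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=20, do_sample=False)

# The model replies with a short verdict ("Yes" = risky, "No" = safe).
verdict = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True).strip()
print("risk detected:", verdict)
```

Granite Guardian 3.1, announced below, reportedly adds further risk definitions such as function-calling hallucination for tool-using agents, presumably selected through the same configuration mechanism.
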
Kush Varshney कुश वार्ष्णेय (@krvarshney)'s Twitter Profile Photo

We released Granite Guardian 3.1 today! Even better at harm detection than Granite Guardian 3.0. The main new feature is 'function calling hallucination' detection relevant for tool-using AI agents. github.com/ibm-granite/gr…
