Nithum (@nithum)'s Twitter Profile
Nithum

@nithum

ID: 29586736

Joined: 08-04-2009 00:13:20

31 Tweets

235 Followers

294 Following

Martin Görner (@martin_gorner)'s Twitter Profile Photo

Beautiful "RNN with attention" tutorial from one of the authors of Google's troll-fighting AI Nithum. github.com/conversationai…. We presented this toxic comment detection model together in the "Tensorflow and modern RNNs without a PhD" talk. Excuse our French 🤬!

Beautiful "RNN with attention" tutorial from one of the authors of Google's troll-fighting AI <a href="/Nithum/">Nithum</a>. github.com/conversationai…. We presented this toxic comment detection model together in the "Tensorflow and modern RNNs without a PhD" talk. Excuse our French 🤬!
Nithum (@nithum)'s Twitter Profile Photo

Check out our new explorable on machine learning calibration: Machine learning models express their uncertainty as model scores, but through calibration we can transform these scores into probabilities for more effective decision making. pair.withgoogle.com/explorables/un…
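
A rough sketch of what calibration does, assuming scikit-learn (the explorable itself is framework-agnostic): Platt scaling fits a logistic curve that maps raw model scores to probabilities, which can then be checked against observed frequencies.

```python
from sklearn.calibration import CalibratedClassifierCV, calibration_curve
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# LinearSVC outputs uncalibrated decision scores, not probabilities.
calibrated = CalibratedClassifierCV(LinearSVC(), method="sigmoid", cv=5)
calibrated.fit(X_train, y_train)

# After calibration, predicted probabilities should roughly match observed
# frequencies: among examples scored ~0.8, about 80% should be positive.
prob_pos = calibrated.predict_proba(X_test)[:, 1]
frac_pos, mean_pred = calibration_curve(y_test, prob_pos, n_bins=10)
for p, f in zip(mean_pred, frac_pos):
    print(f"predicted {p:.2f} -> observed {f:.2f}")
```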

TensorFlow (@tensorflow)'s Twitter Profile Photo

🤔 We've come a long way with #NLP, but what have language models actually learned? Watch Nithum Thain, Senior Software Engineer at Google PAIR, discuss what AI language models have learned → goo.gle/3HVtolv

Adam Pearce (@adamrpearce)'s Twitter Profile Photo

Most machine learning models are trained by collecting vast amounts of data on a central server. Nicole Mitchell and I looked at how federated learning makes it possible to train models without any user's raw data leaving their device. pair.withgoogle.com/explorables/fe…
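
The core mechanism is federated averaging. Below is a toy numpy sketch under simplifying assumptions (linear model, full client participation, no secure aggregation): clients train locally, and only weight vectors ever reach the server.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_step(w, X, y, lr=0.1):
    """One gradient step of linear regression on a single client's data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

# Three clients, each with private data that never leaves this list.
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]
w_global = np.zeros(3)

for round_ in range(20):
    # Each client starts from the global model and trains on its own data.
    local_weights = [local_step(w_global.copy(), X, y) for X, y in clients]
    # The server sees only the resulting weight vectors, which it averages.
    w_global = np.mean(local_weights, axis=0)
```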

Andy Coenen (@_coenen)'s Twitter Profile Photo

In partnership with the Google Magenta Project, we invited 13 professional writers to use Wordcraft, our experimental LaMDA-powered AI writing tool. We've published all of the stories written with the tool, along with a discussion of the future of AI and creativity. g.co/research/wordc…

Nithum (@nithum)'s Twitter Profile Photo

ML models sometimes make confidently incorrect predictions when they encounter out-of-distribution data. Ensembles of models can make better predictions by averaging away mistakes. pair.withgoogle.com/explorables/un…
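
A small sketch of that averaging effect, assuming scikit-learn (the explorable doesn't prescribe code): five identically structured networks trained from different random seeds disagree most on out-of-distribution points, so their mean probability is less confidently wrong than any single member.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

# Same architecture, different random seeds: each member lands in a
# different minimum and extrapolates differently far from the data.
ensemble = [
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=seed).fit(X, y)
    for seed in range(5)
]

# A point far outside the training distribution: individual members may be
# confidently wrong here, but their average is usually closer to "uncertain".
x_ood = np.array([[4.0, 4.0]])
member_probs = np.stack([m.predict_proba(x_ood) for m in ensemble])
print("members: ", member_probs[:, 0, 1])
print("ensemble:", member_probs.mean(axis=0)[0, 1])
```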

Lucas Dixon (@iislucas)'s Twitter Profile Photo

Some of my thoughts on generative AI... and a reboot of the PAIR blog... medium.com/people-ai-rese… #responsibleai #hci #machinelearning #GenerativeAI

Adam Pearce (@adamrpearce)'s Twitter Profile Photo

Do Machine Learning Models Memorize or Generalize? pair.withgoogle.com/explorables/gr… An interactive introduction to grokking and mechanistic interpretability w/ Asma Ghandeharioun, @nadamused_, Nithum, Martin Wattenberg, and Lucas Dixon (@iislucas)

Google AI (@googleai)'s Twitter Profile Photo

While large language models appear to have a rich understanding of the world, how do we know they’re not simply regurgitating from training data? Check out the latest AI Explorable on a phenomenon called grokking to learn more about how models learn. → goo.gle/45ohnQh
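
A rough sketch of the classic grokking setup the explorable builds on, assuming PyTorch; the hyperparameters here are illustrative. Trained on modular addition with strong weight decay, a small network often reaches perfect train accuracy long before test accuracy finally jumps: memorization first, generalization much later.

```python
import torch
import torch.nn as nn

P = 97  # work modulo a prime: the task is (a + b) mod P
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
labels = (pairs[:, 0] + pairs[:, 1]) % P
perm = torch.randperm(len(pairs))
train_idx, test_idx = perm[: len(perm) // 2], perm[len(perm) // 2:]

model = nn.Sequential(
    nn.Embedding(P, 64),   # shared embedding for both operands
    nn.Flatten(),          # [batch, 2, 64] -> [batch, 128]
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, P),
)
# Heavy weight decay is the ingredient that tends to induce grokking.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(20000):
    opt.zero_grad()
    loss_fn(model(pairs[train_idx]), labels[train_idx]).backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            tr = (model(pairs[train_idx]).argmax(-1) == labels[train_idx]).float().mean()
            te = (model(pairs[test_idx]).argmax(-1) == labels[test_idx]).float().mean()
        print(f"step {step}: train {tr:.2f}, test {te:.2f}")
```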

Google AI (@googleai)'s Twitter Profile Photo

Can large language models (LLMs) explain their internal mechanisms? Check out the latest AI Explorable on Patchscopes, an inspection framework that uses LLMs to explain the hidden representations of LLMs. Learn more → goo.gle/patchscopes
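
In rough outline, a patchscope caches a hidden state from one forward pass and injects it into another, then reads off what the model decodes from it. Below is a heavily simplified PyTorch sketch using GPT-2 from Hugging Face transformers; the layer choice and inspection prompt are illustrative assumptions, and the paper's framework is far more general.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
LAYER = 6  # illustrative choice of layer to patch

# 1. Cache the source prompt's hidden state at its last token.
#    hidden_states[0] is the embeddings, so LAYER + 1 is block LAYER's output.
src = tok("Alexander the Great", return_tensors="pt")
with torch.no_grad():
    out = model(**src, output_hidden_states=True)
patched_vec = out.hidden_states[LAYER + 1][0, -1]

# 2. Run a generic "inspection" prompt, overwriting the last position's
#    activation at the same layer with the cached vector via a forward hook.
def patch(module, args, output):
    output[0][0, -1] = patched_vec
    return output

handle = model.transformer.h[LAYER].register_forward_hook(patch)
tgt = tok("Syria: country. Google: company. x:", return_tensors="pt")
with torch.no_grad():
    logits = model(**tgt).logits
handle.remove()
print(tok.decode(logits[0, -1].argmax()))  # what the model reads out of the patch
```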

Nithum (@nithum)'s Twitter Profile Photo

Check out our most recent Explorable "Can Large Language Models Explain Their Internal Mechanisms?" pair.withgoogle.com/explorables/pa…

Amy Zhang (@amyxzh)'s Twitter Profile Photo

Wikipedia building n-gram models to detect personal attacks and harassment: meta.m.wikimedia.org/wiki/Research:… x.com/wikiresearch/s…
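
A minimal sketch of the n-gram approach, assuming scikit-learn; the Wikimedia research used its own pipeline, and the toy comments below are placeholders. Character n-gram counts feed a linear classifier that scores a comment for personal attacks.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative placeholder data; the real work trained on labeled
# Wikipedia talk-page comments.
comments = ["thanks for the helpful edit", "you are a complete idiot",
            "great sourcing on that article", "nobody wants you here"]
is_attack = [0, 1, 0, 1]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # char n-grams
    LogisticRegression(),
)
clf.fit(comments, is_attack)
print(clf.predict_proba(["go away, idiot"])[:, 1])  # attack score
```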