Noah Constant (@noahconst)'s Twitter Profile

Research Scientist @GoogleDeepMind

ID: 1371526495965831173

Joined: 15-03-2021 18:20:08

23 Tweets

588 Followers

117 Following

Google AI (@googleai):

Fine-tuning pre-trained models is common in NLP, but forking the model for each task can be a burden. Prompt tuning adds a small set of learnable vectors to the input and can match fine-tuning quality while sharing the same frozen model across all tasks. goo.gle/3Bch2lL

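The core mechanism is easy to sketch: a small matrix of prompt vectors is prepended to the input embeddings, and only that matrix is trained. A toy NumPy illustration (sizes and values made up; not the paper's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

d_model = 8      # embedding width (toy size)
prompt_len = 4   # number of learnable prompt vectors
seq_len = 6      # tokens in the actual input

# Frozen model: token embeddings are fixed and shared across all tasks.
token_embeds = rng.normal(size=(seq_len, d_model))

# The only trainable parameters per task: a small matrix of prompt vectors.
soft_prompt = rng.normal(size=(prompt_len, d_model))

# Prompt tuning = prepend the learned vectors to the input embeddings,
# then run the frozen model on the concatenated sequence.
model_input = np.concatenate([soft_prompt, token_embeds], axis=0)

print(model_input.shape)  # (10, 8): prompt_len + seq_len rows
```

Only `soft_prompt` would receive gradients; the frozen model serves every task, so per-task storage is just this tiny matrix.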
Tu Vu (@tuvllms):

Happy to share our soft prompt transfer (SPoT) paper made it to #ACL2022 🎉. On the SuperGLUE leaderboard, SPoT is the first parameter-efficient approach that is competitive with methods that tune billions of parameters. w/ Brian Lester, Noah Constant, @aboSamoor, Daniel Cer
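One ingredient of SPoT is treating a learned soft prompt as a task embedding and retrieving the most similar source prompt to initialize the target task's prompt. A toy sketch of that retrieval step (task names and vectors are illustrative, not from the paper):

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two flattened prompt vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Library of soft prompts learned on source tasks (toy 4-dim "prompts",
# flattened; names are hypothetical).
library = {
    "nli":       np.array([1.0, 0.0, 0.0, 0.0]),
    "sentiment": np.array([0.0, 1.0, 0.0, 0.0]),
    "qa":        np.array([0.7, 0.7, 0.0, 0.0]),
}

# A prompt trained briefly on the target task, used as its task embedding.
target = np.array([0.9, 0.1, 0.0, 0.0])

# Retrieve the most similar source prompt and use it as initialization.
best = max(library, key=lambda k: cosine(library[k], target))
print(best)  # "nli" is closest to the target task embedding
```

The retrieved prompt then seeds further tuning on the target task instead of a random initialization.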

Tu Vu (@tuvllms):

While parameter-efficient tuning methods were originally proposed to reduce computation & storage costs, it turns out they can also help overcome catastrophic forgetting and thus improve performance on zero-shot cross-lingual generation. Check out our Google AI work at EMNLP 2022 👇 1/10

Noah Constant (@noahconst):

Want your image generation model to stop misspelling everything? Try giving it access to character-level input features! arxiv.org/abs/2212.10562

Rosanne Liu (@savvyrl):

Anyone who's played with generative models knows that *THEY CAN'T SPELL* (at least not until they reach a certain scale). Well turns out there's a simple fix! Check out our paper that carefully benchmarks and investigates "the spelling miracle" arxiv.org/abs/2212.10562
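A minimal picture of what "character-aware" means here: pool per-character embeddings and concatenate them onto the usual subword embedding, so the model can actually see spelling. This is purely a toy sketch, not the architecture from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d_sub, d_char = 6, 4  # toy dimensions for subword and character embeddings

# Toy character embedding table (one vector per lowercase letter).
char_table = {c: rng.normal(size=d_char) for c in "abcdefghijklmnopqrstuvwxyz"}

def char_features(word):
    # Mean-pool per-character embeddings so spelling is visible to the encoder.
    return np.mean([char_table[c] for c in word], axis=0)

subword_embed = rng.normal(size=d_sub)  # stand-in for a tokenizer embedding
token_repr = np.concatenate([subword_embed, char_features("tiger")])

print(token_repr.shape)  # (10,): subword features + character features
```

With ordinary subword tokenization, two differently spelled words can land on unrelated ids; the character channel restores that signal.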

Google AI (@googleai):

Presenting FRMT, a new dataset and evaluation benchmark for Few-Shot Region-Aware Machine Translation that seeks to drive research progress on equitably serving speakers of different language varieties. Learn more and see how current models fare → goo.gle/3IvnPwc

Hyung Won Chung (@hwchung27):

Happy to release:
1. Upgraded mT5 checkpoints: github.com/google-researc…
2. Refreshed mC4, a multilingual pre-training dataset: huggingface.co/datasets/allen… (the new mC4 covers Common Crawl data in 101 languages up to Aug. 2022)
3. A new ICLR paper: arxiv.org/abs/2304.09151

Rosanne Liu (@savvyrl):

Happy to share that our paper on teaching generative models how to spell was accepted at #ACL2023! Simply by making models (both language-only and text-to-image) character-aware, the notorious spelling errors are fixed! arxiv.org/abs/2212.10562

Tu Vu (@tuvllms):

🚨 New Google AI paper: 🤖 LLMs are game-changers, but can they help us navigate a constantly changing world? 🤔 As of now, our work shows that LLMs, no matter their size, struggle when it comes to fast-changing knowledge & false premises. 📰: arxiv.org/abs/2310.03214 👇

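One direction this line of work points to is grounding answers in dated, retrieved evidence rather than parametric memory alone. Below is a simplified, hypothetical sketch of assembling such a prompt (the exact format in the paper differs; snippet data here is invented):

```python
from datetime import date

def build_fresh_prompt(question, snippets):
    """Prepend dated search evidence so the model can prefer recent facts.
    A simplified sketch; the real prompt format is an assumption here."""
    lines = [f"Today's date: {date.today().isoformat()}", "Search evidence:"]
    # Sort newest-first so the most recent evidence is most salient.
    for s in sorted(snippets, key=lambda s: s["date"], reverse=True):
        lines.append(f"- [{s['date']}] {s['text']}")
    lines.append(f"Question: {question}")
    lines.append("Answer using the most recent evidence above:")
    return "\n".join(lines)

prompt = build_fresh_prompt(
    "Who is the current CEO of Example Corp?",
    [{"date": "2021-02-01", "text": "Alice named CEO of Example Corp."},
     {"date": "2023-06-12", "text": "Bob replaces Alice as Example Corp CEO."}],
)
print(prompt.splitlines()[2])  # the newest snippet sorts first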
Adam Roberts (@ada_rob):

I love music most when it’s live, in the moment, and expressing something personal. This is why I’m psyched about the new “DJ mode” we developed for MusicFX: aitestkitchen.withgoogle.com/tools/music-fx… It’s an infinite AI jam that you control 🎛️. Try mixing your unique 🌀 of instruments, genres, …

Jascha Sohl-Dickstein (@jaschasd):

This was a fun project! If you could train an LLM over text arithmetically compressed using a smaller LLM as a probabilistic model of text, it would be really good. Text would be represented with far fewer tokens, and inference would be way faster and cheaper. The hard part is …
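The compression idea can be illustrated with a tiny arithmetic coder over a fixed toy distribution; a real system would query the smaller LLM for context-dependent next-symbol probabilities. Exact rationals keep the toy numerically honest:

```python
from fractions import Fraction

# Toy "language model": fixed next-symbol probabilities. (In the proposal,
# a small LLM would supply these, conditioned on the preceding text.)
probs = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}

def encode(text):
    low, width = Fraction(0), Fraction(1)
    for ch in text:
        # Narrow [low, low + width) to this symbol's sub-interval.
        cum = Fraction(0)
        for sym, p in probs.items():
            if sym == ch:
                low += cum * width
                width *= p
                break
            cum += p
    return low, width  # any number in [low, low + width) identifies the text

low, width = encode("aab")
# Likelier texts get wider intervals, i.e. fewer bits pin them down.
print(float(width))  # 1/2 * 1/2 * 1/4 = 0.0625
```

The interval width equals the product of symbol probabilities, so a strong probability model makes common text extremely cheap to represent.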

Adam Roberts (@ada_rob):

I’m so proud of the updated version of #MusicFXDJ we developed in collaboration with Jacob Collier, available today at labs.google/musicfx. Over the past year I’ve spent countless hours experimenting with our real-time music models, and it feels like I’ve learned to play a …

Jesse Engel (@jesseengel):

Excited to release a tool that makes the joy of live music creation more accessible 🎸 “Music FX DJ” is new and improved: g.co/labs/musicfx-dj Featuring a whole suite of new controls, sharing and remix features, and improved audio quality and diversity of outputs, to let you …

Jesse Engel (@jesseengel):

Pleasantly surprised with how easy/fun it is to "vibe code" new musical experiences in AI Studio. I'm not much of a JS guy, but had Gemini help me make a "Kaoss Pad" emulator using the Lyria RealTime API to explore musical latent spaces in realtime.
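Independent of any particular API, the "Kaoss Pad" idea boils down to mapping a 2D pad position onto a blend of style/latent embeddings. A toy sketch (corner vectors are made up; a real system would use prompt or audio embeddings from the music model):

```python
import numpy as np

# Four corner "style" embeddings (hypothetical toy vectors).
corners = {
    "top_left": np.array([1.0, 0.0]),  "top_right": np.array([0.0, 1.0]),
    "bot_left": np.array([-1.0, 0.0]), "bot_right": np.array([0.0, -1.0]),
}

def pad_to_latent(x, y):
    """Bilinear blend of corner embeddings for pad position (x, y) in [0, 1]^2."""
    top = (1 - x) * corners["top_left"] + x * corners["top_right"]
    bot = (1 - x) * corners["bot_left"] + x * corners["bot_right"]
    return (1 - y) * bot + y * top

print(pad_to_latent(0.5, 0.5))  # center of the pad: mean of all corners
```

Dragging a finger across the pad then traces a smooth path through the model's latent space, which is what makes realtime control feel playable.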

Adam Roberts (@ada_rob):

On the occasion of returning to Magenta's roots at Sónar+D, we're dusting off the blog to share news and insights about what we're working on at Google DeepMind on the Lyria Team. g.co/magenta/lyria-… Our latest post is about the Lyria RealTime API, providing access to …

Chris Donahue (@chrisdonahuey):

Excited to announce 🎵Magenta RealTime, the first open weights music generation model capable of real-time audio generation with real-time control. 👋 **Try Magenta RT on Colab TPUs**: colab.research.google.com/github/magenta… 👀 Blog post: g.co/magenta/rt 🧵 below

Jesse Engel (@jesseengel):

🎸🥁🎹🎵 New open weights music model from Magenta for real-time generation. Excited to see what cool instruments you all can make with it 🙂

Adam Roberts (@ada_rob):

Introducing Magenta RealTime, our open weights live music model! Like Lyria RealTime, this model provides the user continuous control of the musical flow 🌊. We will be sharing some of our favorite ways to extend these models over the next few weeks. Can't wait to see what you …

Ilaria Manco (@ilaria__manco):

We’ve just released Magenta RealTime, an open-weights live music model that lets you craft sounds in real time by exploring the latent space through text and audio! 🤗 Model: huggingface.co/google/magenta… 🧑‍💻 Code: github.com/magenta/magent… 📝 Blog post: magenta.withgoogle.com/magenta-realti…