Letiția Pârcălăbescu (@letiepi)'s Twitter Profile
Letiția Pârcălăbescu

@letiepi

👩‍🎓 PhD, 🧠 AI Researcher, 📺 ML videos creator on YouTube: youtube.com/AICoffeeBreak
▶ @AICoffeeBreak

ID: 1296071637817790465

Link: https://www.youtube.com/aicoffeebreak | Joined: 19-08-2020 13:09:02

404 Tweets

1.1K Followers

246 Following

AI Coffee Break with Letitia (@aicoffeebreak)

We explain 🥥COCONUT (Chain of Continuous Thought), a new paper using vectors for CoT instead of words. We break down:

- Why CoT with words might not be optimal.
- How to implement vectors for CoT instead of words and make CoT faster (sketched below).
- What this means for interpretability.
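
To make the core trick concrete, here is a minimal sketch, assuming a HuggingFace-style decoder that accepts `inputs_embeds` and returns hidden states; the function and argument names are illustrative, not the paper's code. Instead of decoding a word and re-embedding it at each reasoning step, the last hidden state is fed straight back in as the next input "token".

```python
import torch

def latent_cot(model, input_embeds: torch.Tensor, num_thoughts: int = 4) -> torch.Tensor:
    """Run `num_thoughts` continuous-thought steps: feed the last hidden state
    back as the next input embedding instead of decoding a word each step.
    `model` is a hypothetical decoder with a HuggingFace-style interface."""
    for _ in range(num_thoughts):
        hidden = model(inputs_embeds=input_embeds).last_hidden_state  # (B, T, d)
        thought = hidden[:, -1:, :]                  # last position's hidden vector
        input_embeds = torch.cat([input_embeds, thought], dim=1)      # append it as a "token"
    return input_embeds                              # then switch back to normal decoding
```

Because no word is sampled between steps, the chain of thought stays in continuous space, which is what makes it faster but also harder to read off for interpretability.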
Ece Takmaz (@ecekt2)

It was very nice seeing Letiția Pârcălăbescu aka AI Coffee Break with Letitia again after many years😊 She gave a wonderful keynote at the Railway Museum in Utrecht, as part of the National Conference on AI Transformations! 💫

AI Coffee Break with Letitia (@aicoffeebreak)

🎙️ Yesterday, I gave a keynote on large language models outfitted with visual understanding, and the faithfulness of their chain-of-thought reasoning at the National Conference on Governing the Digital Society and Human-Centered AI.

📍 Hosted at the stunning Railway Museum in Utrecht.
AI Coffee Break with Letitia (@aicoffeebreak)

Just say “Wait…” – and your LLM gets smarter?!
We explain how just 1,000 training examples + a tiny trick at inference time = o1-preview level reasoning. No RL, no massive data needed.
🎥 Watch now 👇
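
For intuition, here is a hedged sketch of that inference-time trick (often called budget forcing): whenever the model tries to close its reasoning, the end-of-thinking marker is stripped and "Wait" is appended so it keeps thinking. The `generate` callable and the `</think>` delimiter are placeholders, not the paper's exact setup.

```python
def generate_with_wait(generate, prompt: str, max_extensions: int = 2,
                       end_think: str = "</think>") -> str:
    """`generate` is a hypothetical callable: text in -> continuation out.
    Each time the model emits the end-of-thinking marker, remove it and
    append 'Wait,' to force another round of reasoning."""
    text = prompt
    for _ in range(max_extensions):
        text += generate(text)
        if end_think not in text:
            break
        text = text.split(end_think)[0] + " Wait,"   # suppress the stop, nudge it on
    return text + generate(text)                      # let it finish and answer
```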
AI Coffee Break with Letitia (@aicoffeebreak)

We all know quantization works at inference time, but researchers successfully trained a 13B LLaMA 2 model using FP4 precision (only 16 values per weight!). 🤯

We break down how it works. If quantization and mixed-precision training sound mysterious, this’ll clear it up. See
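
As a rough illustration, this is 4-bit "fake quantization" with a straight-through estimator, the standard recipe behind low-precision training; the grid below is one common E2M1 FP4 layout and may not match the paper's exact format.

```python
import torch

# The 16 values of an E2M1 FP4 format (±0, ±0.5, ±1, ±1.5, ±2, ±3, ±4, ±6).
# This is one common layout; the paper's exact FP4 grid may differ.
FP4_GRID = torch.tensor([-6., -4., -3., -2., -1.5, -1., -.5, -0.,
                          0.,  .5,  1.,  1.5,  2.,  3.,  4.,  6.])

def fake_quantize_fp4(w: torch.Tensor) -> torch.Tensor:
    """Forward pass sees FP4-rounded weights; the straight-through estimator
    passes gradients through to the full-precision master weights unchanged."""
    scale = w.abs().max() / FP4_GRID.abs().max() + 1e-12         # per-tensor scale
    idx = ((w / scale).unsqueeze(-1) - FP4_GRID).abs().argmin(dim=-1)
    w_q = FP4_GRID[idx] * scale                                  # nearest FP4 value
    return w + (w_q - w).detach()    # forward: w_q, backward: identity (STE)
```

In mixed-precision training the master weights, gradients, and optimizer state typically stay in higher precision; only the matmul operands are cast down.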
AI Coffee Break with Letitia (@aicoffeebreak)

Long videos are a nightmare for language models—too many tokens, slow inference. ☠️
We explain STORM ⛈️, a new architecture that improves long video LLMs using Mamba layers and token compression. It reaches better accuracy than GPT-4o on benchmarks with up to 8× higher efficiency.👇
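
To show where the savings come from, here is a toy sketch of the token-compression half of the idea only (not STORM's actual architecture); in STORM, Mamba layers first propagate temporal context before tokens are merged. Shapes and names below are illustrative.

```python
import torch

def compress_video_tokens(tokens: torch.Tensor, factor: int = 4) -> torch.Tensor:
    """Average-pool every `factor` consecutive frames so the LLM sees
    `factor`x fewer visual tokens. `tokens` has shape
    (batch, frames, tokens_per_frame, dim)."""
    B, T, N, D = tokens.shape
    T_keep = (T // factor) * factor                  # drop a ragged tail, if any
    x = tokens[:, :T_keep].reshape(B, T_keep // factor, factor, N, D)
    return x.mean(dim=2)                             # (B, T/factor, N, D)
```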
AI Coffee Break with Letitia (@aicoffeebreak)

Excited to share that I’ll be joining the Summer School “AI and Human Values” this September at the Marsilius-Kolleg of Heidelberg University as a speaker. I'll be giving an introduction to how large language models actually work—before the summer school dives deeper into their
AI Coffee Break with Letitia (@aicoffeebreak)

💡 AlphaEvolve is a new AI system that doesn’t just write code, it evolves it. It uses LLMs and evolutionary search to make scientific discoveries.
We explain how AlphaEvolve works and the evolutionary strategies behind it (like MAP-Elites and island-based population methods).
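
A toy sketch of the MAP-Elites loop referred to above: in AlphaEvolve the `mutate` step would be an LLM proposing a code edit and `evaluate` an automatic scorer, but all four callables below are hypothetical placeholders.

```python
import random

def map_elites(seeds, mutate, evaluate, descriptor, iterations=1_000):
    """Keep only the best ('elite') solution per behavior cell, and evolve
    by mutating randomly chosen elites."""
    archive = {}                                     # cell -> (score, solution)
    for s in seeds:
        archive[descriptor(s)] = (evaluate(s), s)
    for _ in range(iterations):
        _, parent = random.choice(list(archive.values()))
        child = mutate(parent)                       # e.g. an LLM-generated edit
        score, cell = evaluate(child), descriptor(child)
        if cell not in archive or score > archive[cell][0]:
            archive[cell] = (score, child)           # new elite for this cell
    return archive
```

Island-based variants run several such archives in parallel and occasionally migrate elites between them.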
AI Coffee Break with Letitia (@aicoffeebreak)

🧠🤖 Can we trust AI in science?
I'm excited to be speaking at the final event of the Young Marsilius Fellows 2025, themed "Dancing with Right & Wrong?" – a title that feels increasingly relevant in the age of AI.
I'll be joining a panel on "(How) can we trust AI in science?" to
AI Coffee Break with Letitia (@aicoffeebreak)

Excited to be at #ACL2025NLP in Vienna this week. 🇦🇹
I’m always up for a chat about reasoning models, NLE faithfulness, synthetic data generation, or the joys and challenges of explaining AI on YouTube.

If you're around, let’s connect!
AI Coffee Break with Letitia (@aicoffeebreak)

How do LLMs pick the next word? They don’t choose words directly: they only output word probabilities. 📊 Greedy decoding, top-k, top-p, and min-p are methods that turn these probabilities into actual text.

In this video, we break down each method and show how the same model can
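
A compact sketch of those four strategies for a single decoding step, assuming a 1-D tensor of logits over the vocabulary (real decoding loops add batching and repetition handling on top):

```python
import torch

def sample_next_token(logits: torch.Tensor, k=None, top_p=None,
                      min_p=None, temperature: float = 1.0) -> int:
    """One decoding step: turn a (vocab_size,) logits vector into a token id.
    With no options set this is greedy decoding; otherwise the chosen filters
    zero out unlikely tokens before sampling."""
    probs = torch.softmax(logits / temperature, dim=-1)
    if k is None and top_p is None and min_p is None:
        return int(probs.argmax())                         # greedy: most likely token
    if k is not None:                                      # top-k: keep the k best tokens
        kth = torch.topk(probs, k).values[-1]
        probs = torch.where(probs >= kth, probs, torch.zeros_like(probs))
    if top_p is not None:                                  # top-p: smallest set with mass >= p
        sorted_p, order = probs.sort(descending=True)
        drop = sorted_p.cumsum(0) - sorted_p > top_p       # tokens past the nucleus
        probs[order[drop]] = 0.0
    if min_p is not None:                                  # min-p: threshold relative to the best token
        probs = torch.where(probs >= min_p * probs.max(), probs, torch.zeros_like(probs))
    return int(torch.multinomial(probs / probs.sum(), num_samples=1))
```

For example, `sample_next_token(logits, top_p=0.9)` gives nucleus sampling, while `sample_next_token(logits)` reproduces greedy decoding.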