Aneesha Bakharia (@aneesha) 's Twitter Profile
Aneesha Bakharia

@aneesha

Deep Learning for FUN

ID: 1851621

Link: https://medium.com/@aneesha

Joined: 22-03-2007 08:41:03

51.51K Tweets

1.1K Followers

2.2K Following

Bodhisattwa Majumder (@mbodhisattwa) 's Twitter Profile Photo

Excited to share what I have been focusing on this year!

Inference-time search to optimize Bayesian surprise pushes us towards long-horizon discovery! Introducing "AutoDS": Autonomous Discovery via Surprisal.

"It can not only find the diamond in the rough, but also can rule out
Joseph Suarez (e/🐡) (@jsuarez5341) 's Twitter Profile Photo

I will be working on RL for drone racing and swarms on stream here/YT/Twitch for the next few hours. Goal is a ~100k param multitask model that we can deploy on real hardware
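
The architecture isn't given in the tweet, so purely as a scale illustration, here is a hypothetical policy network whose parameter count lands just under the ~100k mentioned above (all layer sizes are assumptions):

```python
# Hypothetical sketch: a small MLP policy sized to roughly 100k parameters,
# the scale mentioned in the tweet. Observation/action/hidden sizes are
# made up for illustration.
import torch.nn as nn

obs_dim, act_dim, hidden = 48, 4, 210
policy = nn.Sequential(
    nn.Linear(obs_dim, hidden), nn.Tanh(),
    nn.Linear(hidden, hidden), nn.Tanh(),
    nn.Linear(hidden, hidden), nn.Tanh(),
    nn.Linear(hidden, act_dim),
)
n_params = sum(p.numel() for p in policy.parameters())
print(f"{n_params:,} parameters")  # just under 100k with these sizes
```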

Dev Valladares (@dev_valladares) 's Twitter Profile Photo

Infinite Wiki ❕ Every word is a hyperlink. Every description is generated in real-time (in ~1 second) ❕ Runs on Gemini 2.5 Flash Lite. ASCII diagrams using 2.5 Flash

TechCrunch (@techcrunch) 's Twitter Profile Photo

Jack Dorsey has recently released two apps in less than a week. How? He's riding the vibe coding wave and using an AI tool called Goose to crank out new apps for messaging and tracking sun exposure.

Noam Brown (@polynoamial) 's Twitter Profile Photo

Today, we at OpenAI achieved a milestone that many considered years away: gold medal-level performance on the 2025 IMO with a general reasoning LLM, under the same time limits as humans, without tools. As remarkable as that sounds, it's even more significant than the headline 🧵

Noam Brown (@polynoamial) 's Twitter Profile Photo

Their bet allowed for formal math AI systems (like AlphaProof). In 2022, almost nobody thought an LLM could be IMO gold level by 2025.

Vivek Galatage (@vivekgalatage) 's Twitter Profile Photo

With the rising demand for AI model inference, optimizing these models is the key to real-world deployments. 

Techniques like Pruning, Distillation, Quantization, efficient KV caching, MQA/GQA/MLA attention, speculative decoding, dynamic batching, MoE, and test-time compute
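
As a concrete illustration of just one item in that list, here is a minimal, hand-rolled sketch of symmetric int8 post-training quantization; production stacks typically add per-channel scales, calibration data, and fused low-precision kernels:

```python
# Minimal sketch of weight quantization: map float32 weights to int8 with a
# single symmetric scale, then dequantize to check the reconstruction error.
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(4, 8).astype(np.float32)       # toy weight matrix
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("max abs error:", np.abs(w - w_hat).max())   # small reconstruction error
```
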
TuringPost (@theturingpost) 's Twitter Profile Photo

How can we combine retrieval and reasoning for better deep research?

▪️ Reasoning → RAG:
Use reasoning to improve parts of the RAG pipeline.

▪️ RAG → Reasoning:
Use retrieved facts to make the model's reasoning more grounded and accurate.

▪️ Synergized RAG and Reasoning (the
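
A minimal sketch of the first pattern (Reasoning → RAG), assuming hypothetical `call_llm` and `search_index` callables rather than any particular library:

```python
# Hypothetical "Reasoning -> RAG" loop: the model plans sub-queries first,
# retrieval runs on those, and the final answer is grounded in the evidence.
# `call_llm` and `search_index` stand in for whatever LLM API and index you use.
from typing import Callable, List

def reasoning_then_rag(question: str,
                       call_llm: Callable[[str], str],
                       search_index: Callable[[str], List[str]],
                       max_subqueries: int = 3) -> str:
    # 1) Reasoning step: decompose the question into focused search queries.
    plan = call_llm(
        f"Break this question into at most {max_subqueries} search queries, "
        f"one per line:\n{question}"
    )
    subqueries = [q.strip() for q in plan.splitlines() if q.strip()][:max_subqueries]

    # 2) Retrieval step: gather evidence for each sub-query.
    evidence: List[str] = []
    for q in subqueries:
        evidence.extend(search_index(q))

    # 3) Grounded answer: reason again, this time over the retrieved facts.
    context = "\n".join(f"- {doc}" for doc in evidence)
    return call_llm(f"Answer using only this evidence:\n{context}\n\nQuestion: {question}")
```
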
Sam Altman (@sama) 's Twitter Profile Photo

we achieved gold medal level performance on the 2025 IMO competition with a general-purpose reasoning system! to emphasize, this is an LLM doing math and not a specific formal math system; it is part of our main push towards general intelligence. when we first started openai,

Haider. (@slow_developer) 's Twitter Profile Photo

OpenAI's Noam Brown confirmed 2 key points:

1) the model that reached IMO gold-level performance is general-purpose, not fine-tuned for math, and achieved it without using tools

2) OpenAI developed new techniques that make LLMs a lot better

could it be a new RL breakthrough or
Deedy (@deedydas) 's Twitter Profile Photo

🚨 BREAKING: Detailed list of all 44 people in Meta's Superintelligence team.

- 50% from China
- 75% have PhDs, 70% Researchers
- 40% from OpenAI, 20% DeepMind, 15% Scale
- 20% L8+ level
- 75% 1st gen immigrants

Each of these people is likely getting paid $10-$100M/yr.
gm8xx8 (@gm8xx8) 's Twitter Profile Photo

Lucy (1.7B):
A Qwen3-based agentic web search model trained purely with RL (no SFT). Optimized for mobile and CPU-only use.

- Agentic search via Serper (Google) + Crawl4AI
- Machine-generated task vectors guide reasoning
- Outperforms DeepSeek-v3 on SimpleQA (MCP benchmark)
-
Google Cloud Tech (@googlecloudtech) 's Twitter Profile Photo

Gemini Code Assist now shares the same technology with Gemini CLI! In VS Code, place any prompt into the chat window using agent mode, and Code Assist will work to write tests, fix errors, build out features, or migrate your code → goo.gle/4o5eEp4

Shubham Saboo (@saboo_shubham_) 's Twitter Profile Photo

Gemini Code Assist now has Agent Mode. It can build out features, write tests, fix errors, and even migrate your code. Cursor-like experience but 100% free. Let that sink in.

🥱 Sleepy (ML/DL) (@krishnanarakun) 's Twitter Profile Photo

Distributional Reinforcement Learning

- Published by MIT Press in 2023
- One of the best books in Reinforcement Learning

Link in comments 😎 👇
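
For context on what the book covers: the core move in distributional RL is to model the full distribution of returns rather than just its expectation. The sketch below is a simplified, tabular C51-style categorical projection, not an excerpt from the book:

```python
# Rough sketch of the distributional Bellman update on a fixed categorical
# support (C51-style), for a single transition with a known next-state
# return distribution. Illustrative only.
import numpy as np

atoms = np.linspace(-10.0, 10.0, 51)   # fixed support of the return distribution
dz = atoms[1] - atoms[0]

def project_target(probs_next, reward, gamma=0.99):
    """Project reward + gamma * Z(s', a') back onto the fixed atom support."""
    tz = np.clip(reward + gamma * atoms, atoms[0], atoms[-1])
    b = (tz - atoms[0]) / dz                      # fractional atom index
    lo, hi = np.floor(b).astype(int), np.ceil(b).astype(int)
    target = np.zeros_like(probs_next)
    for j, p in enumerate(probs_next):
        if lo[j] == hi[j]:                        # lands exactly on a grid point
            target[lo[j]] += p
        else:                                     # split mass between neighbours
            target[lo[j]] += p * (hi[j] - b[j])
            target[hi[j]] += p * (b[j] - lo[j])
    return target

probs_next = np.full(51, 1 / 51)                  # toy next-state return distribution
target = project_target(probs_next, reward=1.0)
print("target mass:", target.sum())               # still a valid distribution (sums to 1)
```
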
Rohan Paul (@rohanpaul_ai) 's Twitter Profile Photo

Lizard is a linearization framework that transforms pretrained Transformer-based Large Language Models (LLMs) into flexible, subquadratic architectures for infinite-context generation.

Lizard shows a transformer can keep almost all its smarts while replacing softmax with cheaper
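
The tweet doesn't show Lizard's actual design; as a generic illustration of the idea it points at, here is a small sketch of causal linear attention, where a kernel feature map replaces softmax so the sequence can be processed with a running state in time linear in its length:

```python
# Generic linear-attention sketch (not Lizard itself): a positive feature map
# replaces softmax, so attention is computed with running sums instead of an
# O(n^2) score matrix.
import torch
import torch.nn.functional as F

def feature_map(x):
    return F.elu(x) + 1.0                         # positive features, a common choice

def linear_attention(q, k, v):
    """q, k, v: (seq_len, dim). Causal linear attention via running state."""
    q, k = feature_map(q), feature_map(k)
    d, dv = q.shape[-1], v.shape[-1]
    kv_state = torch.zeros(d, dv)                 # running sum of outer(k_t, v_t)
    k_state = torch.zeros(d)                      # running sum of k_t
    out = []
    for t in range(q.shape[0]):
        kv_state = kv_state + torch.outer(k[t], v[t])
        k_state = k_state + k[t]
        num = q[t] @ kv_state
        den = q[t] @ k_state + 1e-6
        out.append(num / den)
    return torch.stack(out)

q, k, v = (torch.randn(16, 8) for _ in range(3))
print(linear_attention(q, k, v).shape)            # torch.Size([16, 8])
```
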
hardmaru (@hardmaru) 's Twitter Profile Photo

New Essay by @BlaiseAguera (Google): "AI Is Evolving – And Changing Our Understanding Of Intelligence" Advances in AI are making us reconsider what intelligence is and giving us clues to unlocking AI's full potential. noemamag.com/ai-is-evolving…