Jianwei Yang (@jw2yang4ai)'s Twitter Profile
Jianwei Yang

@jw2yang4ai

Principal Researcher at MSR; Core contributor to Microsoft Project Florence, Phi-3V, and Omniparser; Inventor of FocalNet, SEEM, Set/Trace-of-Mark, and Magma.

ID: 759531465067663360

Link: https://jwyang.github.io · Joined: 30-07-2016 23:29:47

371 Tweets

3.3K Followers

459 Following

Yongyuan Liang (@cheryyun_l)'s Twitter Profile Photo

Our #ICLR2025 paper TraceVLA will be presented at Hall 3 + Hall 2B #35 on Apr 24th from 9:00 to 11:30 AM CEST. We'll also give an oral presentation at the GenBot workshop on Apr 28th from 11:20 to 11:50 AM. If you're interested in VLA models and generalist agents, check it out!

Furong Huang (@furongh)'s Twitter Profile Photo

I will present TraceVLA today from 3 p.m. to 5:30 p.m. in Hall 3 + Hall 2B Poster #35. Find me there, and let's chat. #VLAs #Robotics #FoundationModel4Robotics

Mu Cai (@mucai7)'s Twitter Profile Photo

I am excited to announce that I am not at #ICLR presenting Matryoshka Multimodal Models (matryoshka-mm.github.io). 😀

But rather, I am online in the Bay Area. Ping me if you have any questions or ideas w.r.t. the paper!

Feel free to read the poster at Hall 3 + Hall 2B #86 this morning!
Yiping Wang (@ypwang61)'s Twitter Profile Photo

We only need ONE example for RLVR on LLMs to achieve significant improvement on math tasks!

📍RLVR with one training example can boost (on MATH500):
- Qwen2.5-Math-1.5B: 36.0% → 73.6%
- Qwen2.5-Math-7B: 51.0% → 79.2%

📄 Paper: arxiv.org/abs/2504.20571
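
For context, RLVR (reinforcement learning with verifiable rewards) trains against a programmatic check of the model's final answer rather than a learned reward model. A minimal Python sketch of such a binary reward, assuming boxed-answer outputs (the helper names are illustrative assumptions, not the paper's code):

import re

def extract_final_answer(completion: str) -> str | None:
    r"""Pull the last \boxed{...} answer out of a model completion."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", completion)
    return matches[-1].strip() if matches else None

def verifiable_reward(completion: str, reference: str) -> float:
    """Binary reward: 1.0 iff the final answer matches the reference."""
    predicted = extract_final_answer(completion)
    return 1.0 if predicted is not None and predicted == reference else 0.0

# With one training example, an RL loop (e.g. PPO/GRPO) samples many
# completions of that single prompt and reinforces those scoring 1.0.
print(verifiable_reward(r"... so the answer is \boxed{73}", "73"))  # 1.0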
Ahmed Awadallah (@ahmedhawadallah)'s Twitter Profile Photo

Introducing Phi-4-reasoning, adding reasoning models to the Phi family of SLMs.

The model is trained with both supervised finetuning (using a carefully curated dataset of reasoning demonstrations) and Reinforcement Learning.

📌Competitive results on reasoning benchmarks with
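
The announcement above is truncated, but the recipe it names is a two-stage pipeline: supervised finetuning on curated reasoning demonstrations, then reinforcement learning against an outcome reward. A structural sketch under those assumptions (the stub names are hypothetical, not the Phi-4-reasoning training code):

from typing import Callable

class PolicyStub:
    """Stand-in for an SLM policy; a real run would use a training framework."""
    def sft_step(self, prompt: str, demonstration: str) -> None:
        pass  # supervised loss on the curated reasoning trace

    def sample(self, prompt: str) -> str:
        return "<think>...</think> final answer"

    def rl_step(self, prompt: str, completion: str, reward: float) -> None:
        pass  # policy-gradient update weighted by the reward

def train(policy: PolicyStub,
          demos: list[tuple[str, str]],
          rl_prompts: list[str],
          reward_fn: Callable[[str], float]) -> None:
    # Stage 1: supervised finetuning on (prompt, reasoning demonstration) pairs.
    for prompt, demo in demos:
        policy.sft_step(prompt, demo)
    # Stage 2: reinforcement learning on sampled completions and their rewards.
    for prompt in rl_prompts:
        completion = policy.sample(prompt)
        policy.rl_step(prompt, completion, reward_fn(completion))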
Richard Sutton (@richardssutton)'s Twitter Profile Photo

The latest episode of the Derby Mill Podcast is just out, focused on the "Era of Experience" paper by David Silver and myself.
Substack: insights.intrepidgp.com/p/welcome-to-t…
Spotify: open.spotify.com/episode/254sxl…
Apple: podcasts.apple.com/us/podcast/wel…
YouTube: youtube.com/watch?v=dhfJfQ…

Jianwei Yang (@jw2yang4ai)'s Twitter Profile Photo

Hope you all had great #NeurIPS2025 submissions and are getting some well-deserved rest! We are still open to submissions to our CVinW workshop at #CVPR2025! You're welcome to share your work at our workshop with just a few clicks! 👉 Submission portal: openreview.net/group?id=thecv…

Cohere Labs (@cohere_labs)'s Twitter Profile Photo

Our community-led Computer Vision group is thrilled to host Jianwei Yang, Principal Researcher at Microsoft Research, for a session on "Magma: A Foundation Model for Multimodal AI Agents".

Thanks to Mayank Bhaskar and Benedict Emoe-kabu for organizing this speaker session 👏
Furong Huang (@furongh)'s Twitter Profile Photo

Excited to speak at the Workshop on Computer Vision in the Wild at #CVPR2025! 🎥🌍
🗓️ June 11 | 📍 Room 101 B, Music City Center, Nashville, TN 🎸
🧠 Talk: From Perception to Action: Building World Models for Generalist Agents
Let’s connect if you're around! #CVPR2025 #robotics
Jianwei Yang (@jw2yang4ai)'s Twitter Profile Photo

🔥@CVPR2025 CVinW 2025 is about to take place very soon!! We have plenty of great talks and spotlight talks coming up (Boqing Gong, Ranjay Krishna, Furong Huang, Yunzhu Li, Saining Xie, Cordelia Schmid, Shizhe Chen). Looking forward to seeing you all at 101B from 9am-5pm, June 11th!

Jianwei Yang (@jw2yang4ai)'s Twitter Profile Photo

Wow, this is so cool! I have been dreaming of building agents that can interact with humans via language communication, and with the world via physical interaction (locomotion, manipulation, etc.). Definitely a great stepping stone and playground!