OpenMMLab (@openmmlab)'s Twitter Profile
OpenMMLab

@openmmlab

From MMDetection to AI Exploration. Empowering AI research and development with OpenMMLab.

Discord: discord.gg/raweFPmdzG

ID: 1267501967703588864

Link: https://github.com/open-mmlab
Joined: 01-06-2020 17:04:58

683 Tweets

6.6K Followers

131 Following

OpenCompass (@opencompassx)'s Twitter Profile Photo

🚀 Shocking: O1-mini scores just 15.6% on AIME under strict, real-world metrics. 🚨
📈 Introducing G-Pass@k: A metric that reveals LLMs' performance consistency across trials. 
🌐 LiveMathBench: Challenging LLMs with contemporary math problems, minimizing data leaks.
🔍 Our
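The thread does not include the formula, but a consistency metric like G-Pass@k can be estimated combinatorially, in the spirit of the classic pass@k estimator. Below is a minimal sketch assuming one plausible convention: a problem counts as solved when at least ⌈τ·k⌉ of k trials, drawn without replacement from n scored generations (c of them correct), succeed. The function name and threshold handling are illustrative, not the paper's exact specification.

```python
from math import ceil, comb

def g_pass_at_k(n: int, c: int, k: int, tau: float) -> float:
    """Hypergeometric estimate of the chance that at least
    ceil(tau * k) of k trials succeed, when the k trials are drawn
    without replacement from n generations of which c are correct."""
    threshold = ceil(tau * k)
    total = comb(n, k)
    return sum(
        comb(c, j) * comb(n - c, k - j)  # exactly j correct among the k draws
        for j in range(threshold, k + 1)
    ) / total
```

With τ = 1.0 this demands that every one of the k trials passes, which is why a strict consistency score can sit far below a single-shot accuracy number.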
InternLM (@intern_lm)'s Twitter Profile Photo

🚀Introducing InternLM3-8B-Instruct with Apache License 2.0.
-Trained on only 4T tokens, saving more than 75% of the training cost.
-Supports deep thinking for complex reasoning and normal mode for chat.
Model: Hugging Face
huggingface.co/internlm/inter…
GitHub:
github.com/InternLM/Inter…
OpenGVLab (@opengvlab)'s Twitter Profile Photo

🚀 Introducing #InternVideo 2.5 - The Video Multimodal AI That Sees Longer & Smarter!
✨ Handles videos 6x longer than predecessors
✨ Pinpoints objects/actions with surgical precision
✨ Trained on 300K+ hours of diverse video data

📈 Outperforms SOTA on multiple benchmarks &
InternLM (@intern_lm)'s Twitter Profile Photo

🥳Introducing #OREAL, a new RL method for math reasoning. 

😊With OREAL, a 7B model achieves 94.0 pass@1 on MATH-500, matching many 32B models, while OREAL-32B achieves 95.0 pass@1, surpassing #DeepSeek-R1 Distilled models. 

🤗Paper/Model/Data:
huggingface.co/papers/2502.06…
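For context on the numbers quoted above: pass@1 scores like these are conventionally computed with the unbiased pass@k estimator from the HumanEval evaluation (Chen et al., 2021). A short sketch (this is the standard estimator, not OREAL's own code):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one
    of k samples, drawn without replacement from n generations of
    which c are correct, solves the problem."""
    if n - c < k:
        return 1.0  # too few failures to fill all k draws: success guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)
```

With k = 1 this reduces to c/n, so "94.0 pass@1" means 94% of single samples were correct on average across the benchmark.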
InternLM (@intern_lm)'s Twitter Profile Photo

🥳Thrilled to release the full RL training code of #OREAL!
😊Now you can fully reproduce the results of OREAL-7B/32B. Using #DeepSeek-R1-Distill-Qwen-32B, you can further obtain a model that scores 95.6 on MATH-500!
🤗Code: github.com/InternLM/OREAL
🤗Based on: github.com/InternLM/xtuner

OpenCompass (@opencompassx)'s Twitter Profile Photo

🥳#StructFlowBench is a structurally annotated multi-turn benchmark that leverages a structure-driven generation paradigm to enhance the simulation of complex dialogue scenarios.
🥳StructFlowBench is now part of the #CompassHub! 😉Feel free to download and explore it—available
OpenMMLab (@openmmlab)'s Twitter Profile Photo

🥳#FaceShot generates animations for your "imaginary friends", like Teddy Bear, and brings them to life!
😉Project page: faceshot2024.github.io/faceshot/
😉Paper link: arxiv.org/abs/2503.00740
😉Code: github.com/open-mmlab/Fac…

OpenGVLab (@opengvlab)'s Twitter Profile Photo

🥳We have released #InternVL3, an advanced #MLLM series ranging from 1B to 78B, on Hugging Face.

😉InternVL3-78B achieves a score of 72.2 on the MMMU benchmark, setting a new SOTA among open-source MLLMs.

☺️Highlights:
- Native multimodal pre-training: Simultaneous language and
InternLM (@intern_lm)'s Twitter Profile Photo

🥳Introducing #InternBootcamp, an easy-to-use and extensible library for training large reasoning models.
Unlimited automatic question generation and result verification.
Over 1,000 verifiable tasks covering logic, puzzles, algorithms, games, and more.
🤗github.com/InternLM/Inter…
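The generate-and-verify pattern described above can be sketched with a toy task. Everything below is an illustrative assumption: the class and method names are hypothetical and not InternBootcamp's actual API.

```python
import random

class ArithmeticBootcamp:
    """A toy verifiable task: generate a question with a known
    answer, then check a model's response against that answer."""

    def generate(self, rng: random.Random) -> tuple[str, int]:
        # Unlimited question generation: every call makes a fresh problem.
        a, b = rng.randint(1, 99), rng.randint(1, 99)
        return f"What is {a} + {b}?", a + b

    def verify(self, response: str, answer: int) -> bool:
        # Result verification: accept iff the last token parses to the answer.
        try:
            return int(response.strip().split()[-1]) == answer
        except ValueError:
            return False

# One round, with a hypothetical model reply:
rng = random.Random(0)
task = ArithmeticBootcamp()
question, answer = task.generate(rng)
ok = task.verify(f"The sum is {answer}", answer)
```

Because the verifier is programmatic, such tasks yield cheap, reliable reward signals for RL on reasoning models.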
InternLM (@intern_lm)'s Twitter Profile Photo

🥳Trained through #InternBootcamp, #InternThinker now combines pro-level Go skills with transparent reasoning.
😉In each game, it acts as a patient, insightful coach—analyzing the board, comparing moves, and clearly explaining each decision.
🤗Try it now: chat.intern-ai.org.cn/internthinker/…
Tiezhen WANG (@xianbao_qian)'s Twitter Profile Photo

We invited 3 top HF daily papers authors to deliver talks.
Topics of this session: Reinforcement Learning
Speakers:
- Qi-Chen Zhao — Absolute Zero Reasoner: self-play RL that reaches SOTA reasoning with zero external data
- Shu-Huai Ren — MiMo-VL: Xiaomi’s unified and

InternLM (@intern_lm)'s Twitter Profile Photo

🚀 Introducing #POLAR: Bringing Reward Models into a New Pre-training Era!
✨ Say goodbye to reward models with poor generalization! POLAR (Policy Discriminative Learning) is a groundbreaking pre-training paradigm that trains reward models to distinguish policy distributions,
InternLM (@intern_lm)'s Twitter Profile Photo

🚀Introducing Intern-S1, our most advanced open-source multimodal reasoning model yet!
🥳Strong general-task capabilities + SOTA performance on scientific tasks, rivaling leading closed-source commercial models.
🥰Built upon a 235B MoE language model and a 6B vision encoder.
InternLM (@intern_lm)'s Twitter Profile Photo

Our paper won an Outstanding Paper Award at ACL 2025.

Try our best open-source multimodal reasoning model Intern-S1 at huggingface.co/internlm/Inter….

This 241B MoE model combines strong general-task capabilities with state-of-the-art performance on a wide range of scientific tasks,
OpenCompass (@opencompassx)'s Twitter Profile Photo

🚀 Introducing #CompassVerifier: A unified and robust answer verifier for #LLM evaluation and #RLVR!
✨LLM progress is bottlenecked by weak evaluation, looking for an alternative to rule-based verifiers? CompassVerifier can handle multiple domains including math, science, and
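For contrast with the rule-based verifiers the tweet mentions, here is a toy example of what such a verifier typically does and why it is brittle. This is purely illustrative and is not CompassVerifier's interface; the function and its normalization rules are assumptions.

```python
import re
from typing import Optional

def rule_based_verify(prediction: str, gold: str) -> bool:
    """Toy rule-based answer check: normalize, then compare the
    final number in each string. Brittle by design -- it misses
    rephrasings that a model-based verifier could catch."""
    def last_number(text: str) -> Optional[str]:
        nums = re.findall(r"-?\d+(?:\.\d+)?", text)
        return nums[-1] if nums else None

    p, g = last_number(prediction), last_number(gold)
    if p is not None and g is not None:
        return float(p) == float(g)
    # Fall back to exact (case-insensitive) string match.
    return prediction.strip().lower() == gold.strip().lower()
```

Such rules handle "The answer is 42." against "42", but fail on "forty-two", units, or algebraically equivalent forms, which is the generalization gap a learned verifier targets.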
OpenMMLab (@openmmlab)'s Twitter Profile Photo

🔥China’s Open-source VLMs boom—Intern-S1, MiniCPM-V-4, GLM-4.5V, Step3, OVIS
🧐Join the AI Insight Talk with Hugging Face, OpenCompass, ModelScope and Zhihu Frontier
🚀Tech deep-dives & breakthroughs
🚀Roundtable debates
⏰Aug 21, 5 AM PDT
📺Live: youtube.com/live/kh0WSMoVZ…
InternLM (@intern_lm)'s Twitter Profile Photo

🔥LMDeploy v0.10.0 released!
😊Supercharges OpenAI’s GPT-OSS MXFP4 models.
😊Delivers exceptional performance for GPT-OSS models on V100 and higher GPUs. 
😊On H800 & A100, LMDeploy outperforms vLLM across all scenarios—faster, more efficient inference!
🤗github.com/InternLM/lmdep…
InternLM (@intern_lm)'s Twitter Profile Photo

🚀 Big news for #lmdeploy v0.10.1!
🥳Our #FP8 high-performance inference is no longer limited to the latest #GPUs. It now supports all #NVIDIA architectures from V100 onwards, bringing major speedups to more users.
🤗github.com/InternLM/lmdep…