InternLM (@intern_lm)'s Twitter Profile
InternLM

@intern_lm

InternLM has open-sourced a 7 billion parameter base model and a chat model tailored for practical scenarios.

Discord: discord.gg/xa29JuW87d

ID: 1667434783268548609

Link: https://github.com/InternLM | Joined: 10-06-2023 07:33:44

79 Tweets

2.2K Followers

31 Following

InternLM (@intern_lm)'s Twitter Profile Photo

🥳We have released new InternLM2.5 models in 1.8B and 20B sizes on Hugging Face. 😉1.8B: Ultra-lightweight, high-performance, with great adaptability. 😉20B: More powerful, ideal for complex tasks. 😍Explore now! Models: huggingface.co/collections/in… GitHub: github.com/InternLM/Inter…
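For readers who want to try the new checkpoints locally, here is a minimal Hugging Face Transformers sketch. The repo id (assumed here to be internlm/internlm2_5-1_8b-chat) and the trust_remote_code requirement should be verified against the collection linked above.

# Minimal sketch for chatting with the 1.8B InternLM2.5 release via Hugging Face Transformers.
# Assumption: the chat checkpoint is published as "internlm/internlm2_5-1_8b-chat";
# check the exact repo id in the linked collection before running.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "internlm/internlm2_5-1_8b-chat"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, trust_remote_code=True
).eval()

messages = [{"role": "user", "content": "Summarize InternLM2.5 in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(input_ids.to(model.device), max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))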

zhyncs (@zhyncs42)'s Twitter Profile Photo

Excited to have hosted the SGLang LMSYS Org talk yesterday with GPU MODE's Mark Saroufim and Byron Hsu. Thanks to FlashInfer's Zihao Ye and the LMDeploy and InternLM teams for their support! youtube.com/watch?v=XQylGy…

InternLM (@intern_lm)'s Twitter Profile Photo

🥳Introducing #InternThinker.

🤗A Powerful Reasoning Model!
🤗Advanced long-term thinking capabilities.
🤗Self-reflection and correction during reasoning.
It excels at complex tasks like math, coding, and logic puzzles.

🥳Try it now at internlm-chat.intern-ai.org.cn/internthinker
InternLM (@intern_lm)'s Twitter Profile Photo

🥳Now, you can sign up and log in to try #InternThinker using your email or GitHub account! 
🤗Direct link to InternThinker: internlm-chat.intern-ai.org.cn/internthinker
😊Tips:
To start a new conversation with InternThinker, simply click on "InternThinker" in the left sidebar.
InternLM (@intern_lm)'s Twitter Profile Photo

🥳InternLM-XComposer2.5-OmniLive, a comprehensive multimodal system for long-term streaming video and audio interactions.

Real-time visual & auditory understanding
Long-term memory formation
Natural voice interaction

Code: github.com/InternLM/Inter…
Model: huggingface.co/internlm/inter…
zhyncs (@zhyncs42)'s Twitter Profile Photo

I'm excited about the release of InternLM 3 8B Instruct. It's superior to Qwen 2.5 7B Instruct and Llama 3.1 8B Instruct, with the same architecture as Llama's. Feel free to use it with SGLang LMSYS Org. Cheers! github.com/InternLM/inter…
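As a sketch of what "use it with SGLang" might look like in practice, one common pattern is to start an OpenAI-compatible server and query it; the launch flags, port, and model id below are assumptions to double-check against the SGLang documentation.

# Sketch: serve InternLM3-8B-Instruct with SGLang and query its OpenAI-compatible API.
# Assumptions: repo id "internlm/internlm3-8b-instruct", default port 30000, and the
# --trust-remote-code flag; verify against the SGLang docs.
#
# Launch (shell):
#   python -m sglang.launch_server --model-path internlm/internlm3-8b-instruct \
#       --trust-remote-code --port 30000
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")  # local server, key unused
reply = client.chat.completions.create(
    model="internlm/internlm3-8b-instruct",  # assumed served model name
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    max_tokens=256,
)
print(reply.choices[0].message.content)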

InternLM (@intern_lm)'s Twitter Profile Photo

🚀Introducing InternLM3-8B-Instruct with Apache License 2.0.
-Trained on only 4T tokens, saving more than 75% of the training cost.
-Supports deep thinking for complex reasoning and normal mode for chat.
Model: huggingface.co/internlm/inter…
GitHub: github.com/InternLM/Inter…
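A rough way to try the release locally is InternLM's own LMDeploy; this sketch assumes the weights live at internlm/internlm3-8b-instruct, and the prompt recipe for switching between deep-thinking and normal chat mode is described in the model card rather than reproduced here.

# Rough LMDeploy sketch for running InternLM3-8B-Instruct locally.
# Assumption: the Hugging Face repo id is "internlm/internlm3-8b-instruct".
from lmdeploy import pipeline

pipe = pipeline("internlm/internlm3-8b-instruct")
responses = pipe(["Prove that the sum of two even integers is even."])
print(responses[0].text)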
InternLM (@intern_lm)'s Twitter Profile Photo

🥳Here are some popular examples from the InternLM3-8B-Instruct model.
😉Feel free to try out our new model at internlm-chat.intern-ai.org.cn.
InternLM (@intern_lm)'s Twitter Profile Photo

🥳InternLM3-8B-Instruct takes on the easy problem from LeetCode Weekly Contest 431. 😉LeetCode Weekly Contest 431: leetcode.com/contest/weekly… 😉Feel free to try out our new model at internlm-chat.intern-ai.org.cn

CAMEL-AI.org (@camelaiorg)'s Twitter Profile Photo

📢 We've just integrated InternLM's models into the 🐫 CAMEL-AI framework!

InternLM is a multilingual, multi-billion-parameter base model trained on trillions of tokens, and we now support their whole family of models. 🚀

Newly Supported Models: 
✅ internlm3-latest 
✅
InternLM (@intern_lm)'s Twitter Profile Photo

🥳With PowerServe, InternLM3-8B-Instruct runs on Android devices equipped with Qualcomm NPUs. 😉Feel free to try out our new model at internlm-chat.intern-ai.org.cn

InternLM (@intern_lm)'s Twitter Profile Photo

🥳Introducing #OREAL, a new RL method for math reasoning. 

😊With OREAL, a 7B model achieves 94.0 pass@1 on MATH-500, matching many 32B models, while OREAL-32B achieves 95.0 pass@1, surpassing #DeepSeek-R1 Distilled models. 

🤗Paper/Model/Data: huggingface.co/papers/2502.06…
InternLM (@intern_lm)'s Twitter Profile Photo

🥳Thrilled to release the full RL training code of #OREAL! 😊Now you can fully reproduce the results of OREAL-7B/32B. Using #DeepSeek-R1-Distill-Qwen-32B, you can further obtain a model that scores 95.6 on MATH-500! 🤗Code: github.com/InternLM/OREAL 🤗Based on: github.com/InternLM/xtuner

InternLM (@intern_lm)'s Twitter Profile Photo

🥳Introducing #InternBootcamp, an easy-to-use and extensible library for training large reasoning models.
Unlimited automatic question generation and result verification.
Over 1,000 verifiable tasks covering logic, puzzles, algorithms, games, and more.
🤗github.com/InternLM/Inter…
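The real interfaces are in the GitHub repo above; purely to illustrate the generate-and-verify loop that such verifiable tasks imply, here is a toy sketch whose class and method names are invented for this example and are not InternBootcamp's actual API.

# Hypothetical illustration of a verifiable task: generate a question with a known
# answer, then score a model's output against it. Names here are invented, not InternBootcamp's API.
import random

class ToyArithmeticTask:
    def generate(self, rng: random.Random) -> dict:
        a, b, c = (rng.randint(2, 99) for _ in range(3))
        return {"question": f"Compute ({a} + {b}) * {c}.", "answer": (a + b) * c}

    def verify(self, case: dict, model_output: str) -> bool:
        # Accept the output if its last number equals the ground-truth answer.
        tokens = [tok.strip(".,!?") for tok in model_output.split()]
        numbers = [tok for tok in tokens if tok.lstrip("-").isdigit()]
        return bool(numbers) and int(numbers[-1]) == case["answer"]

rng = random.Random(0)
task = ToyArithmeticTask()
case = task.generate(rng)
print(case["question"])
print(task.verify(case, f"The answer is {case['answer']}."))  # True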
InternLM (@intern_lm)'s Twitter Profile Photo

🥳Trained through #InternBootcamp, #InternThinker now combines pro-level Go skills with transparent reasoning.
😉In each game, it acts as a patient, insightful coach—analyzing the board, comparing moves, and clearly explaining each decision.
🤗Try it now: chat.intern-ai.org.cn/internthinker/…
InternLM (@intern_lm)'s Twitter Profile Photo

🚀 Introducing #POLAR: Bringing Reward Models into a New Pre-training Era!
✨ Say goodbye to reward models with poor generalization! POLAR (Policy Discriminative Learning) is a groundbreaking pre-training paradigm that trains reward models to distinguish policy distributions,
InternLM (@intern_lm)'s Twitter Profile Photo

🚀Introducing Intern-S1, our most advanced open-source multimodal reasoning model yet!
🥳Strong general-task capabilities + SOTA performance on scientific tasks, rivaling leading closed-source commercial models.
🥰Built upon a 235B MoE language model and a 6B Vision encoder.
InternLM (@intern_lm)'s Twitter Profile Photo

Try our new multimodal reasoning model #InternS1😋

Model: huggingface.co/internlm/Inter…
GitHub: github.com/InternLM/Inter…
Chat here: chat.intern-ai.org.cn
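If you prefer running it yourself rather than using the hosted chat, a rough LMDeploy sketch is below; it assumes the weights are published as internlm/Intern-S1, that LMDeploy supports this checkpoint, and that you have a multi-GPU node able to hold a 235B-parameter MoE.

# Rough vision-language sketch with LMDeploy. Assumptions: repo id "internlm/Intern-S1",
# LMDeploy support for this checkpoint, and hardware able to host a 235B MoE; the image
# URL is a placeholder.
from lmdeploy import pipeline
from lmdeploy.vl import load_image

pipe = pipeline("internlm/Intern-S1")
image = load_image("https://example.com/absorption_spectrum.png")  # placeholder image
response = pipe(("What does this spectrum suggest about the sample?", image))
print(response.text)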