Tsinghua KEG (THUDM) (@thukeg)'s Twitter Profile
Tsinghua KEG (THUDM)

@thukeg

#ChatGLM #GLM130B #CodeGeeX #CogVLM #CogView #AMiner The Knowledge Engineering Group (KEG) and THUDM at @Tsinghua_Uni @jietang @ericdongyx

ID: 1544212427432022016

Link: https://github.com/THUDM | Joined: 05-07-2022 06:51:56

238 Tweets

4.4K Followers

170 Following

Aran Komatsuzaki (@arankomatsuzaki)'s Twitter Profile Photo

ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools

GLM-4:
- closely rivals GPT-4 on MMLU, MATH, GPQA, etc
- gets close to GPT-4 in instruction following and long context tasks

hf: huggingface.co/THUDM
repo: github.com/THUDM
abs:
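
As a minimal sketch, assuming the open GLM-4 chat checkpoint is published under the THUDM Hugging Face org (the model id "THUDM/glm-4-9b-chat" and its custom-code requirement are assumptions, not confirmed by the thread), loading it with transformers might look like this:

    # Hedged sketch: load an open GLM-4 chat checkpoint from the THUDM org.
    # "THUDM/glm-4-9b-chat" is an assumed model id; see huggingface.co/THUDM.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "THUDM/glm-4-9b-chat"
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, trust_remote_code=True, device_map="auto"
    )

    messages = [{"role": "user", "content": "Summarize the GLM-4 report in one sentence."}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=128)
    print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
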
AK (@_akhaliq)'s Twitter Profile Photo

ChatGLM

A Family of Large Language Models from GLM-130B to GLM-4 All Tools

We introduce ChatGLM, an evolving family of large language models that we have been developing over time. This report primarily focuses on the GLM-4 language series, which includes GLM-4, GLM-4-Air,
Z.ai (@zai_org)'s Twitter Profile Photo

🚀 We published a tech report on the GLM family! ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools.

arxiv.org/html/2406.1279…
Tsinghua CS (@thudcst)'s Twitter Profile Photo

🏆 Proud moment for us! Our paper 'Explicit factor models for explainable recommendation' (u6v.cn/5OxPGm) has won the Test of Time Award at #SIGIR2024, leading the way in explainable recommendation since 2014. Congrats to the outstanding THUIR group from #DCST, #Tsinghua
Z.ai (@zai_org)'s Twitter Profile Photo

We are not just doing “demo only” video generation. With Ying, we are bringing a video generation AI that everyone can use: create a 6-second video in just 30 seconds. Try our new product now. YING: chatglm.cn/video medium.com/@ChatGLM/zhipu…

AK (@_akhaliq)'s Twitter Profile Photo

LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs

discuss: huggingface.co/papers/2408.07…

Current long context large language models (LLMs) can process inputs up to 100,000 tokens, yet struggle to generate outputs exceeding even a modest length of 2,000 words.

Yushi Bai ✈️ ICLR 2025 (@realyushibai)'s Twitter Profile Photo

Thanks AK! We find that your long context LLM is secretly a LongWriter 💡 All you need is data with extended outputs during model alignment to unlock this capability. Our code, data, and models: github.com/THUDM/LongWrit…
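
To make "data with extended outputs" concrete, here is a purely illustrative Python sketch (mine, not from the LongWriter repo; the {"prompt", "response"} record schema is an assumption) of selecting alignment examples whose responses are long:

    # Illustrative only: keep SFT examples whose responses are long enough
    # to teach extended generation. The record schema is assumed.
    def keep_long_output_examples(dataset, min_words=2000):
        return [ex for ex in dataset if len(ex["response"].split()) >= min_words]

    sft_data = [
        {"prompt": "Write a novella about a lighthouse.", "response": "word " * 12000},
        {"prompt": "Define LLM.", "response": "A large language model."},
    ]
    print(len(keep_long_output_examples(sft_data)))  # -> 1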

Tsinghua KEG (THUDM) (@thukeg)'s Twitter Profile Photo

#VisualAgentBench: proprietary models (GPT-4o, GPT-4o mini, Claude 3.5 Sonnet) currently have an edge as visual foundation agents, but open models InternVL & GLM-4V are catching up fast, a similar story to LLMs as agents as revealed in #AgentBench back in Aug 2023.

AK (@_akhaliq)'s Twitter Profile Photo

New from Tsinghua KEG (THUDM)

LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs

author Yushi Bai (@realYushiBai) is active in the discussion section to answer your questions: huggingface.co/papers/2408.07…
Gradio (@gradio)'s Twitter Profile Photo

LongWriter-glm4-9b from Tsinghua KEG (THUDM) is capable of generating 10,000+ words at once! 🚀 The paper identifies a problem with current long context LLMs: they can process inputs up to 100,000 tokens, yet struggle to generate outputs exceeding 2,000 words. The paper proposes that an

Tiezhen WANG (@xianbao_qian)'s Twitter Profile Photo

What has just happened? Tsinghua KEG (THUDM) has just released the CogVideoX image-to-video generation model. Amazing results. Combined demo of T2V/I2V and V2V: huggingface.co/spaces/THUDM/C… Please duplicate the Space with an L4 GPU to avoid the long waiting queue. Model: huggingface.co/THUDM/CogVideo…

Z.ai (@zai_org)'s Twitter Profile Photo

Thank you to the passionate developers for your continued support and patience. CogVideoX-5B-I2V is released! 😀 GitHub: github.com/THUDM/CogVideo CogVideoX-5B-I2V model: huggingface.co/THUDM/CogVideo… Gradio Space: huggingface.co/spaces/THUDM/C…
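
As a rough sketch of how a run with this checkpoint might look through the diffusers image-to-video pipeline (the class name, arguments, and frame count reflect my reading of the diffusers API and are assumptions, not taken from the tweet):

    # Hedged sketch: CogVideoX-5B-I2V via diffusers; verify names and versions.
    import torch
    from diffusers import CogVideoXImageToVideoPipeline
    from diffusers.utils import export_to_video, load_image

    pipe = CogVideoXImageToVideoPipeline.from_pretrained(
        "THUDM/CogVideoX-5b-I2V", torch_dtype=torch.bfloat16
    )
    pipe.enable_model_cpu_offload()  # trade speed for fitting on one GPU

    image = load_image("first_frame.png")  # hypothetical conditioning frame
    frames = pipe(
        prompt="A ship sailing into a storm, cinematic",
        image=image,
        num_frames=49,  # roughly a 6-second clip at 8 fps
        num_inference_steps=50,
    ).frames[0]
    export_to_video(frames, "output.mp4", fps=8)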

Yuxiao Dong (@ericdongyx)'s Twitter Profile Photo

#AutoGLM: Autonomous Foundation Agents for GUIs, by Xiao Liu (Shaw) and team at Tsinghua KEG (THUDM) & ChatGLM! Here are some AutoGLM phone-use demos (in beta testing since Oct 25) and its tech report: arxiv.org/abs/2411.00820

Z.ai (@zai_org)'s Twitter Profile Photo

🌈 AndroidLab: a comprehensive platform for developing and evaluating Android agents. By integrating a controlled environment and standardized benchmarks, and leveraging the Android Instruct dataset, we significantly boost open-source model performance. github.com/THUDM/Android-…

Richard Socher (@richardsocher)'s Twitter Profile Photo

AI has a "last-mile problem" similar to self-driving cars. With self-driving cars, early demos impressed, but real-world deployment took years. It's easy to hack up a prototype, but making it work reliably at scale is hard. If each step of an AI agent is only 95% accurate,
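
The compounding behind that last (truncated) sentence is worth spelling out: per-step success multiplies across an agent's trajectory, so even 95% per step decays fast. A quick illustrative computation:

    # Per-step reliability compounds multiplicatively over an agent's steps.
    p = 0.95  # per-step accuracy from the tweet
    for n in (1, 5, 10, 20, 50):
        print(f"{n:>2} steps -> {p**n:6.1%} end-to-end success")
    # 20 steps -> ~35.8%; 50 steps -> ~7.7%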

Stanford AI Lab (@stanfordailab)'s Twitter Profile Photo

Check out our latest blog post about MiniVLA, a smaller open-source vision-language-action model! ai.stanford.edu/blog/minivla/

Tsinghua CS (@thudcst)'s Twitter Profile Photo

🏆 Congrats to the Storage Research Group from #Tsinghua DCST for winning the #ASPLOS2025/#EuroSys2025 Large-Scale Model Inference Optimization Contest in Rotterdam! They outperformed global competitors, boosting inference performance by 1.1x using AWS NKI framework optimizations.

Tsinghua University (@tsinghua_uni)'s Twitter Profile Photo

Prof. Liu's team built an #AI doctor for everyday #healthcare! In a #virtual hospital, it treated 10K+ virtual patients with 93% accuracy. They covered 300+ diseases across 21 departments & released BioMedGPT, PathOrchestra, and more for a full #medical AI pipeline. #THUAndBeyond
