Yiqing Liang (@yiqingliang2) 's Twitter Profile
Yiqing Liang

@yiqingliang2

RS intern @NVIDIA Research, Former RS Intern @AIatMeta, CS PhD student @BrownVisualComp, Alumni @CAIRLab

ID: 1306517023921500160

Link: https://lynl7130.github.io/ · Joined: 17-09-2020 08:55:14

451 Tweets

338 Followers

626 Following

Chen Wei (@_chen_wei_) 's Twitter Profile Photo

💡 New work: You might not need math data to teach models math reasoning. Recent 🔥 RLVR works challenge the need for *labels* on math questions. We find that just playing video games, e.g. Snake, can boost multimodal reasoning. No math *questions* needed. arxiv.org/abs/2506.08011 🧵👇

Zan Gojcic (@zgojcic) 's Twitter Profile Photo

To wrap up the open-sourcing season, we're excited to announce that DiffusionRenderer, based on the NVIDIA Cosmos world model, is now open sourced! That means the code of all three of our #CVPR25 oral papers is now available: 3DGUT, DiffusionRenderer, and Difix3D+. #CVPR2025

Qinsheng Zhang (@qsh_zh) 's Twitter Profile Photo

🚀 Introducing Cosmos-Predict2! Our most powerful open video foundation model for Physical AI. Cosmos-Predict2 significantly improves upon Predict1 in visual quality, prompt alignment, and motion dynamics, outperforming popular open-source video foundation models. It's openly…

Orazio Gallo (@0razio) 's Twitter Profile Photo

If you're at #CVPR2025 don't miss Yiqing Liang's talk on our scene flow estimation work, which was also a CVPR Best Paper Award nominee! Collaboration between NVIDIA AI and Brown University

Rohan Paul (@rohanpaul_ai) 's Twitter Profile Photo

Anthropic just dropped a beautiful explanation of how they built a multi-agent research system using multiple Claude AI agents. A MUST read for anyone building multi-agent systems. A lead agent plans research steps, spawns specialized subagents to search in parallel, and…

Yiqing Liang (@yiqingliang2) 's Twitter Profile Photo

Honored to be interviewed by RSIP Vision and to see our #CVPR2025 Best Paper Award Candidate ZeroMSF featured on the front page! Thank you for spotlighting our zero-shot scene flow research! 🚀 More details on our website: research.nvidia.com/labs/lpr/zero_… #NVIDIA #CVPR2025

Kai He (@kai__he) 's Twitter Profile Photo

🚀 Introducing UniRelight, a general-purpose relighting framework powered by video diffusion models. 🌟 UniRelight jointly models the distribution of scene intrinsics and illumination, enabling high-quality relighting and intrinsic decomposition from a single image or video.

Kwang Moo Yi (@kwangmoo_yi) 's Twitter Profile Photo

Preprint of (not) today: Harley et al., "AllTracker: Efficient Dense Point Tracking at High Resolution" -- alltracker.github.io Efficient architecture to track all points/pixels in real-time. Matches multiple frames in real time at high resolution via RNN-based refinement.

Adam W. Harley (@adamwharley) 's Twitter Profile Photo

Tricky sample for AllTracker. It has never seen water at training time, so I'm glad it knows to quickly discard the tracks there... The bird that travels right-to-left (behind the wings most of the time) could be handled better.

Zhengzhong Tu (@_vztu) 's Twitter Profile Photo

๐Ÿ”ฅ ๐—ง๐—ต๐—ฟ๐—ถ๐—น๐—น๐—ฒ๐—ฑ ๐˜๐—ผ ๐˜€๐—ต๐—ฎ๐—ฟ๐—ฒ ๐—ผ๐˜‚๐—ฟ ๐—น๐—ฎ๐˜๐—ฒ๐˜€๐˜ ๐˜๐—ฟ๐—ถ๐—น๐—ผ๐—ด๐˜† ๐—ผ๐—ณ ๐—ฝ๐—ฎ๐—ฝ๐—ฒ๐—ฟ๐˜€ ๐—ผ๐—ป ๐—”๐—œ ๐—ฅ๐—ฒ๐—ฎ๐˜€๐—ผ๐—ป๐—ถ๐—ป๐—ด ๐— ๐—ผ๐—ฑ๐—ฒ๐—น๐˜€, ๐—ฝ๐˜‚๐˜€๐—ต๐—ถ๐—ป๐—ด ๐˜๐—ต๐—ฒ ๐—ณ๐—ฟ๐—ผ๐—ป๐˜๐—ถ๐—ฒ๐—ฟ๐˜€ ๐—ฎ๐—ฐ๐—ฟ๐—ผ๐˜€๐˜€ ๐˜๐—ต๐—ฒ๐—ผ๐—ฟ๐˜†, ๐—ฑ๐—ฎ๐˜๐—ฎ, ๐—ฎ๐—ป๐—ฑ ๐—บ๐—ผ๐—ฑ๐—ฎ๐—น๐—ถ๐˜๐˜†: ๐Ÿ”ด ๐šƒฬฒ๐š‘ฬฒ๐šŽฬฒ๐š˜ฬฒ๐š›ฬฒ๐šขฬฒ โ€“ ๐˜‹๐˜ช๐˜ด๐˜Š๐˜–:

๐Ÿ”ฅ ๐—ง๐—ต๐—ฟ๐—ถ๐—น๐—น๐—ฒ๐—ฑ ๐˜๐—ผ ๐˜€๐—ต๐—ฎ๐—ฟ๐—ฒ ๐—ผ๐˜‚๐—ฟ ๐—น๐—ฎ๐˜๐—ฒ๐˜€๐˜ ๐˜๐—ฟ๐—ถ๐—น๐—ผ๐—ด๐˜† ๐—ผ๐—ณ ๐—ฝ๐—ฎ๐—ฝ๐—ฒ๐—ฟ๐˜€ ๐—ผ๐—ป ๐—”๐—œ ๐—ฅ๐—ฒ๐—ฎ๐˜€๐—ผ๐—ป๐—ถ๐—ป๐—ด ๐— ๐—ผ๐—ฑ๐—ฒ๐—น๐˜€, ๐—ฝ๐˜‚๐˜€๐—ต๐—ถ๐—ป๐—ด ๐˜๐—ต๐—ฒ ๐—ณ๐—ฟ๐—ผ๐—ป๐˜๐—ถ๐—ฒ๐—ฟ๐˜€ ๐—ฎ๐—ฐ๐—ฟ๐—ผ๐˜€๐˜€ ๐˜๐—ต๐—ฒ๐—ผ๐—ฟ๐˜†, ๐—ฑ๐—ฎ๐˜๐—ฎ, ๐—ฎ๐—ป๐—ฑ ๐—บ๐—ผ๐—ฑ๐—ฎ๐—น๐—ถ๐˜๐˜†:

๐Ÿ”ด ๐šƒฬฒ๐š‘ฬฒ๐šŽฬฒ๐š˜ฬฒ๐š›ฬฒ๐šขฬฒ โ€“ ๐˜‹๐˜ช๐˜ด๐˜Š๐˜–:
Brown CS (@browncsdept) 's Twitter Profile Photo

Brown Visual Computing is in the top 0.1% at #CVPR 2025, including research from BVC student Yiqing Liang and her collaborators at NVIDIA that was recognized as a best paper award candidate with an oral presentation. Learn more: cs.brown.edu/news/2025/06/3…

Jiaming Song (@baaadas) 's Twitter Profile Photo

Based on developments on "flow-map / average velocity" type methods, such as consistency trajectory models, shortcut models, IMM, and mean flow, I believe that the community will develop a proper replacement to diffusion / flow matching in 6 - 12 months.

Qianqian Wang (@qianqianwang5) 's Twitter Profile Photo

📢 Thrilled to share that I'll be joining Harvard and the Kempner Institute as an Assistant Professor starting Fall 2026! I'll be recruiting students this year for the Fall 2026 admissions cycle. Hope you apply!

Luma AI (@lumalabsai) 's Twitter Profile Photo

This is Ray3. The world's first reasoning video model, and the first to generate studio-grade HDR. Now with an all-new Draft Mode for rapid iteration in creative workflows, and state-of-the-art physics and consistency. Available now for free in Dream Machine.

Yiqing Liang (@yiqingliang2) 's Twitter Profile Photo

🎉 [CVPR 2025] ZeroMSF Code Release! 3D scene flow from a single camera, with no fine-tuning on new domains? That's the challenge we tackled in Zero-MSF (Zero-shot Monocular Scene Flow). 💡 Motivation: Scene flow captures both geometry and motion, but existing methods crumble…