Linyi Jin (@jin_linyi)'s Twitter Profile
Linyi Jin

@jin_linyi

PhD Student @UMich | Student Researcher @GoogleDeepmind 2024 | Ex-Intern @Adobe 2021

ID: 1249885991583825920

Link: http://jinlinyi.github.io | Joined: 14-04-2020 02:23:52

68 Tweets

547 Followers

205 Following

Abdullah Hamdi (@eng_hemdi)'s Twitter Profile Photo

Great paper with awesome visualization in the browser! MegaSaM: Accurate, Fast and Robust Structure and Motion from Casual Dynamic Videos mega-sam.github.io. What an amazing team: Noah Snavely, Aleksander Holynski, Angjoo Kanazawa, Zhengqi Li, Richard, Forrester Cole, Qianqian Wang, Linyi Jin

Chen Geng (@gengchen01)'s Twitter Profile Photo

Ever wondered how roses grow and wither in your backyard?🌹 Our latest work on generating 4D temporal object intrinsics lets you explore a rose's entire lifecycle—from birth to death—under any environment light, from any viewpoint, at any moment. Project page:

Linyi Jin (@jin_linyi)'s Twitter Profile Photo

Ever walked past the same street for years and missed what changed? This cool project uses Gemini to spot these subtle shifts!

Jon Barron (@jon_barron)'s Twitter Profile Photo

Here's my 3DV talk, in chapters:

1) Intro / NeRF boilerplate.
2) Recent reconstruction work.
3) Recent generative work.
4) Radiance fields as a field.
5) Why generative video has bitter-lessoned 3D.
6) Why generative video hasn't bitter-lessoned 3D.

5 & 6 are my favorites.

Aleksander Holynski (@holynski_)'s Twitter Profile Photo

TL;DR: a simple, yet effective way to enable difficult image generation by distilling the deliberation capabilities of a VLM into an image generator.

Zhengqi Li (@zhengqi_li)'s Twitter Profile Photo

Check out our new work, Self-Forcing! By addressing the training/inference mismatch, Self-Forcing enables real-time streaming video generation on a single GPU while achieving competitive or superior performance compared to SOTA video models that run significantly slower.

Linyi Jin (@jin_linyi)'s Twitter Profile Photo

Heading to Nashville for CVPR. Looking forward to discussing Stereo4D stereo4d.github.io and MegaSaM mega-sam.github.io. Feel free to reach out if you want to chat or connect! #CVPR2025

Daniel Geng (@dangengdg)'s Twitter Profile Photo

Hello! If you like pretty images and videos and want a rec for a CVPR oral session, you should def go to Image/Video Gen, Friday at 9am: I'll be presenting "Motion Prompting," Ryan Burgert will be presenting "Go with the Flow," and Pascal CHANG will be presenting "LookingGlass"

Aleksander Holynski (@holynski_)'s Twitter Profile Photo

This Saturday at CVPR, don't miss Oral Session 3A. Vision all-stars Qianqian Wang, Linyi Jin, and Zhengqi Li are presenting MegaSaM, CUT3R, and Stereo4D. The posters are right after, and the whole crew will be there. It'll be fun. Drop by.

Yiming Dou (@_yimingdou)'s Twitter Profile Photo

Ever wondered how a scene sounds👂 when you interact👋 with it? Introducing our #CVPR2025 work "Hearing Hands: Generating Sounds from Physical Interactions in 3D Scenes" -- we make 3D scene reconstructions audibly interactive! yimingdou.com/hearing_hands/

Ayush Shrivastava (@ayshrv)'s Twitter Profile Photo

Excited to share our CVPR 2025 paper on cross-modal space-time correspondence!

We present a method to match pixels across different modalities (RGB-Depth, RGB-Thermal, Photo-Sketch, and cross-style images) — trained entirely using unpaired data and self-supervision.

Our

Raunaq Bhirangi (@raunaqmb)'s Twitter Profile Photo

Tactile sensing is gaining traction, but slowly. Why? Because integration remains difficult. But what if adding touch sensors to your robot was as easy as hitting “print”? Introducing eFlesh: a 3D-printable, customizable tactile sensor. Shape it. Size it. Print it. 🧶👇