Sherwin Bahmani (@sherwinbahmani) 's Twitter Profile
Sherwin Bahmani

@sherwinbahmani

CS PhD Student @UofT

ID: 628337608

Link: http://sherwinbahmani.github.io | Joined: 06-07-2012 12:24:59

189 Tweets

1.1K Followers

460 Following

David Lindell (@davelindell) 's Twitter Profile Photo

Check out the Toronto Computational Imaging Group at CVPR this week!
- felixtaubner.github.io/cap4d/ (Fri: Oral Sess 2B)
- anaghmalik.com/InvProp/ (Sat: Oral Sess 3A)
- Opportunistic Single-Photon Time of Flight (Sat: Oral Sess 4C)
- snap-research.github.io/ac3d/ (Sun: Poster Sess 5)

Jun Gao (@jungao33210520) 's Twitter Profile Photo

This year, we have 3 papers in CVPR, discussing the connection between 3D and video models:
- GEN3C [Highlight]: 3D grounding for video models
- DiffusionRenderer [Oral]: taming video models for rendering and inverse rendering
- Difix3D+ [Oral]: enhancing NeRF/3DGS with diffusion models

Tianchang Shen (@tianchangs) 's Twitter Profile Photo

📢 GEN3C is now open-sourced, with code released under Apache 2.0 and model weights under the NVIDIA Open Model License! 🚀 Along with it, we're releasing a GUI tool that lets you specify your desired video trajectory in 3D — come play with it and generate your own!

Xuanchi Ren (@xuanchi13) 's Twitter Profile Photo

GEN3C #CVPR2025 is now open-sourced @HuggingFace, with code released under Apache 2.0 and model weights under the NVIDIA Open Model License! Code: github.com/nv-tlabs/GEN3C HuggingFace: huggingface.co/nvidia/GEN3C-C… Feel free to reach out if you want to chat or connect! #CVPR2025

Huan Ling (@huanling6) 's Twitter Profile Photo

We are excited to share Cosmos-Drive-Dreams 🚀 A bold new synthetic data generation (SDG) pipeline powered by world foundation models, designed to synthesize rich, challenging driving scenarios at scale. Models, Code, Dataset, and Toolkit are released. Website:

Zian Wang (@zianwang97) 's Twitter Profile Photo

🚀 DiffusionRenderer is now open-source! Check out the code and model: github.com/nv-tlabs/diffu… We will present at #CVPR2025 this Sunday, June 15: 🗣️ Oral Session 6A: 1:00–2:30 PM CDT, Karl Dean Grand Ballroom 🖼️ Poster: 4:00–6:00 PM CDT, ExHall D (Poster #29)

Felix Taubner (@taubnerfelix) 's Twitter Profile Photo

I will be presenting CAP4D at #CVPR2025! Come check out our poster today at poster board #327 (15:00-16:30) and Friday at poster board #9 (16:00-18:00). Don't miss out on our 📣oral📣 presentation on Friday in ExHall A2 (15:00)!

Daniel Geng (@dangengdg) 's Twitter Profile Photo

Hello! If you like pretty images and videos and want a rec for a CVPR oral session, you should def go to Image/Video Gen, Friday at 9am: I'll be presenting "Motion Prompting", Ryan Burgert will be presenting "Go with the Flow", and Pascal CHANG will be presenting "LookingGlass".

Aleksander Holynski (@holynski_) 's Twitter Profile Photo

This Saturday at CVPR, don't miss Oral Session 3A. Vision all-stars Qianqian Wang, Linyi Jin, Zhengqi Li are presenting MegaSaM, CUT3R, and Stereo4D. The posters are right after, and the whole crew will be there. It'll be fun. Drop by.

Zian Wang (@zianwang97) 's Twitter Profile Photo

🚀 We just open-sourced Cosmos DiffusionRenderer! This major upgrade brings significantly improved video de-lighting and re-lighting—powered by NVIDIA Cosmos and enhanced data curation. Released under Apache 2.0 and Open Model License. Try it out! 🔗 github.com/nv-tlabs/cosmo…

Anagh Malik (@anagh_malik) 's Twitter Profile Photo

📣📣📣 Neural Inverse Rendering from Propagating Light 💡 just won Best Student Paper award at #CVPR!!!

David Lindell (@davelindell) 's Twitter Profile Photo

Honored that our work received the best student paper award at #CVPR2025! This was a really fun and exciting collaboration with Matthew O'Toole, led by amazing students Anagh Malik, Benjamin Attal, and Andrew Xie! Check out the work at anaghmalik.com/InvProp/

Gene Chou (@gene_ch0u) 's Twitter Profile Photo

I'll be presenting our work with Kai Zhang at #cvpr2025. We finetune video models to be 3D consistent without any 3D supervision! Feel free to stop by our poster or reach out to chat: Sunday, Jun 15, 4-6pm ExHall D, poster #168 cvpr.thecvf.com/virtual/2025/p…

Kai He (@kai__he) 's Twitter Profile Photo

🚀 Introducing UniRelight, a general-purpose relighting framework powered by video diffusion models. 🌟UniRelight jointly models the distribution of scene intrinsics and illumination, enabling high-quality relighting and intrinsic decomposition from a single image or video.
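
To make "jointly models the distribution of scene intrinsics and illumination" concrete, here is a toy sketch of one denoiser trained on a stacked target of intrinsic maps plus the relit frame, conditioned on the input image. The channel layout, function name, and model signature are illustrative assumptions for this sketch, not UniRelight's actual API.

```python
# Toy sketch: one diffusion denoiser over [albedo, normals, relit frame],
# conditioned on the observed image. All names/shapes are assumptions.
import torch

def joint_denoise_step(model, input_frame, noisy_target, t):
    """
    input_frame:  (b, 3, h, w) observed image (conditioning signal).
    noisy_target: (b, 3+3+3, h, w) noised stack of [albedo, normals,
                  relit frame]; one network learns their joint
                  distribution, so a single sample yields both an
                  intrinsic decomposition and a relighting.
    t:            (b,) diffusion timesteps.
    """
    x = torch.cat([input_frame, noisy_target], dim=1)  # condition by concat
    return model(x, t)                                  # predicted noise
```

Training one network on the concatenated target is the simplest way to capture correlations between the decomposition and the relit output; sampling it produces both at once.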

Moayed Haji Ali (@moayedhajiali) 's Twitter Profile Photo

Where are the good old progressive diffusion models? 🤔 Breaking generation into multiple resolution scales is a great idea, but complexity (multiple models, a custom diffusion process, etc.) stalled scaling. Our Decomposable Flow Matching packs multi-scale perks into one scalable model.
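
As a rough illustration of what "multi-scale perks in one scalable model" could look like, here is a minimal conditional flow-matching training loss shared across resolution scales. The `model(xt, t, scale_id)` signature, the scale set, and the pooling scheme are assumptions for this sketch, not the paper's method.

```python
# Minimal sketch: conditional flow matching at several resolutions with one
# shared velocity network (a scale id stands in for per-scale models).
import torch
import torch.nn.functional as F

def multi_scale_fm_loss(model, x, scales=(1, 2, 4)):
    """x: (b, c, h, w) clean images. Standard CFM target along the
    straight path x_t = (1-t)*x0 + t*x1 is v* = x1 - x0."""
    loss = 0.0
    for sid, s in enumerate(scales):
        x1 = F.avg_pool2d(x, s) if s > 1 else x      # scale-s data sample
        x0 = torch.randn_like(x1)                    # noise sample
        t = torch.rand(x1.shape[0], 1, 1, 1)         # per-sample time
        xt = (1 - t) * x0 + t * x1                   # point on linear path
        v_target = x1 - x0                           # path velocity
        v_pred = model(xt, t.flatten(), sid)         # one shared network
        loss = loss + F.mse_loss(v_pred, v_target)
    return loss / len(scales)
```

The point of the sketch: a single velocity network with a scale conditioner replaces the stack of per-resolution models and custom diffusion processes that stalled earlier progressive pipelines.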

Robin Hesse (@robinhesse_) 's Twitter Profile Photo

Got a strong XAI paper rejected from ICCV? Submit it to our ICCV eXCV Workshop today; we welcome high-quality work! 🗓️ Submissions open until June 26 AoE. 📄 Got accepted to ICCV? Congrats! Consider our non-proceedings track. #ICCV2025

Zan Gojcic (@zgojcic) 's Twitter Profile Photo

📢📢We have a last-minute internship opening on my team at NVIDIA AI for this summer. If you are interested and have experience with large feedforward reconstruction models or post-training image/video diffusion models, please get in touch!

Ruilong Li (@ruilong_li) 's Twitter Profile Photo

For everyone interested in precise 📷 camera control 📷 in transformers (e.g., video / world models): stop settling for Plücker raymaps -- use camera-aware relative PE in your attention layers, like RoPE (for LLMs) but for cameras! Paper & code: liruilong.cn/prope/

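A minimal sketch of the "RoPE, but for cameras" idea: transform queries and keys per view so attention logits depend only on the relative pose between the two cameras, never on absolute extrinsics. The block-of-4 action, function names, and shapes below are illustrative assumptions; see liruilong.cn/prope/ for the actual formulation.

```python
# Sketch: camera-aware relative PE. Hitting q with inv(P_q)^T and k with P_k
# makes each block score b_q^T (inv(P_q) @ P_k) b_k, a function of the
# *relative* pose only. Names and shapes are assumptions, not the paper's API.
import torch

def act_on_blocks(x, mat):
    """Apply a 4x4 matrix to each 4-dim block of the feature dim.
    x: (tokens, dim) with dim divisible by 4; mat: (4, 4)."""
    t, d = x.shape
    return (x.reshape(t, d // 4, 4) @ mat.T).reshape(t, d)

def camera_relative_scores(q, k, pose_q, pose_k):
    """
    q, k:           (tokens, dim) queries/keys from two views.
    pose_q, pose_k: (4, 4) world-from-camera extrinsics per view.
    Returns attention logits that depend only on inv(pose_q) @ pose_k.
    """
    d = q.shape[-1]
    q_rel = act_on_blocks(q, torch.linalg.inv(pose_q).T)
    k_rel = act_on_blocks(k, pose_k)
    return (q_rel @ k_rel.T) / d ** 0.5
```

Sanity check: with identical poses the relative transform is the identity, and the scores reduce to plain scaled dot-product attention, which is exactly the RoPE-style invariance the tweet is pitching, carried over from 1D token positions to camera geometry.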