Qi Wu (@wilson_over) 's Twitter Profile
Qi Wu

@wilson_over

Applied research scientist working on 3D deep learning at NVIDIA. Views and opinions are my own.

ID: 795664800

linkhttps://wilsoncernwq.github.io/ calendar_today01-09-2012 08:57:15

38 Tweets

158 Followers

341 Following

Zan Gojcic (@zgojcic) 's Twitter Profile Photo

DiffusionRenderer: Unifying forward and inverse rendering with video diffusion models. Stoked that we are finally looking for alternatives to PBR, which requires near-perfect G-buffers that are hard, if not impossible, to obtain in the real world!

NVIDIA AI Developer (@nvidiaaidev) 's Twitter Profile Photo

Spatial AI is increasingly important, and the newest papers from #NVIDIAResearch, 3DGRT and 3DGUT, represent significant advancements in enabling researchers and developers to explore and innovate with 3D Gaussian Splatting techniques. 💎 3DGRT (Gaussian Ray Tracing) ➡️

Zan Gojcic (@zgojcic) 's Twitter Profile Photo

📣 We have released the source code of 3DGRT and 3DGUT in a common 3DGRUT repo under the Apache license! Go try it out and play with our playground app!

Ruilong Li (@ruilong_li) 's Twitter Profile Photo

🌟gsplat🌟 just integrated 3DGUT, which allows training and rendering 3DGS with *distorted* pinhole/fisheye cameras, as well as rolling shutter effects! > Check out this NVIDIA tech blog developer.nvidia.com/blog/revolutio… > Sweepstakes to win a 4090 nvidia.com/en-us/research…
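As a toy illustration of why "distorted" fisheye cameras need special handling (3DGUT's actual camera models live in the 3DGRUT repo; the projection functions below are a simplified sketch, not its API): an ideal pinhole projects a ray at angle θ to image radius f·tan(θ), which blows up toward 90°, while a common equidistant fisheye model maps it linearly to f·θ.

```python
import math

def project_pinhole(theta, f=1.0):
    """Ideal pinhole: image radius grows as f * tan(theta)."""
    return f * math.tan(theta)

def project_fisheye_equidistant(theta, f=1.0):
    """Equidistant fisheye model: image radius grows linearly, f * theta."""
    return f * theta

# Near the optical axis the two models nearly agree; at wide angles
# they diverge sharply, which is why a pinhole assumption breaks down
# for fisheye imagery.
for deg in (5, 30, 60, 85):
    theta = math.radians(deg)
    print(f"{deg:2d} deg  pinhole={project_pinhole(theta):7.3f}  "
          f"fisheye={project_fisheye_equidistant(theta):7.3f}")
```

At 5° the two radii differ by well under 1%; at 85° the pinhole radius is several times the fisheye radius, so a method rasterizing Gaussians under a pinhole model cannot fit wide-angle fisheye captures directly.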

Zan Gojcic (@zgojcic) 's Twitter Profile Photo

📢 3DGUT has also been integrated into gsplat! Check out the NVIDIA AI blog ( developer.nvidia.com/blog/revolutio…) and participate in the sweepstakes for a chance to win an NVIDIA 4090 GPU: nvidia.com/en-us/research… Great job Ruilong Li (amazing start at NVIDIA AI) and Qi Wu!

Zan Gojcic (@zgojcic) 's Twitter Profile Photo

📢 Our team at NVIDIA AI is again looking for Research Scientists and Engineers to help us push the boundaries of Neural Reconstruction and Generation in AV and Robotics simulation! Check out our latest work at zgojcic.github.io, and if interested, reach out directly!

Masha Shugrina (@_shumash) 's Twitter Profile Photo

Curious about 3D Gaussians, simulation, rendering and the latest from #NVIDIA? Come to the NVIDIA Kaolin Library live-coding session at #CVPR2025, powered by a cloud GPU reserved especially for you. Wed, Jun 11, 8-noon. Bring your laptop! tinyurl.com/nv-kaolin-cvpr…

Or Perel (@orperel) 's Twitter Profile Photo

🎙️ Super excited to invite you all to our #CVPR2025 tutorial!! Join us for a hands-on session to learn how to combine rendering and simulation through #kaolin #simplicits #3dgrt from the #NVIDIA Spatial Intelligence Lab. June 11 morning. Invitation page: tinyurl.com/nv-kaolin-cvpr…

Huan Ling (@huanling6) 's Twitter Profile Photo

We are excited to share Cosmos-Drive-Dreams 🚀 A bold new synthetic data generation (SDG) pipeline powered by world foundation models—designed to synthesize rich, challenging driving scenarios at scale. Models, Code, Dataset, and Toolkit are released. Website:

Tri Dao (@tri_dao) 's Twitter Profile Photo

Getting mem-bound kernels to speed-of-light isn't a dark art, it's just about getting a couple of details right. We wrote a tutorial on how to do this, with code you can directly use. Thanks to the new CuTe-DSL, we can hit speed-of-light without a single line of CUDA C++.
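"Speed-of-light" for a memory-bound kernel means achieved DRAM throughput approaching the hardware's peak bandwidth, since such kernels do too little arithmetic per byte to be compute-limited. A minimal sketch of the bookkeeping (the element count, timing, and the ~1000 GB/s peak figure below are hypothetical, not numbers from the tutorial):

```python
def effective_bandwidth_gbs(bytes_read, bytes_written, seconds):
    """Achieved memory throughput in GB/s: total bytes moved over wall time."""
    return (bytes_read + bytes_written) / seconds / 1e9

def speed_of_light_fraction(achieved_gbs, peak_gbs):
    """How close a mem-bound kernel is to the hardware bandwidth ceiling."""
    return achieved_gbs / peak_gbs

# Hypothetical example: an elementwise add y = a + b on 2**28 fp16 elements
# reads two inputs and writes one output, 2 bytes per element each.
n = 2 ** 28
t = 1.8e-3  # assumed measured kernel time in seconds (illustrative)
bw = effective_bandwidth_gbs(bytes_read=2 * n * 2, bytes_written=n * 2, seconds=t)
# Compare against an assumed ~1000 GB/s peak (roughly RTX 4090 class):
frac = speed_of_light_fraction(bw, 1000.0)
```

With these assumed numbers the kernel would sit near 0.9 of peak; the tutorial's point is that closing the remaining gap comes down to a few details (access patterns, vectorized loads, occupancy), not exotic tricks.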

Ruilong Li (@ruilong_li) 's Twitter Profile Photo

For everyone interested in precise 📷camera control📷 in transformers (e.g., video / world models): Stop settling for Plücker raymaps -- use camera-aware relative PE in your attention layers, like RoPE (for LLMs) but for cameras! Paper & code: liruilong.cn/prope/

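The property the tweet analogizes to: plain RoPE rotates feature pairs by position-dependent angles, so attention scores depend only on the *relative* offset between query and key positions, never on absolute position. The camera-aware construction itself is in the linked paper; the sketch below only demonstrates the standard RoPE relative-position property it builds on.

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Rotary position embedding: rotate consecutive feature pairs of the
    1-D vector x (even length d) by position-dependent angles."""
    d = x.shape[0]
    freqs = base ** (-np.arange(0, d, 2) / d)   # one frequency per pair
    angles = pos * freqs
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[0::2], x[1::2]
    out = np.empty_like(x)
    out[0::2] = x1 * cos - x2 * sin
    out[1::2] = x1 * sin + x2 * cos
    return out

rng = np.random.default_rng(0)
q, k = rng.standard_normal(8), rng.standard_normal(8)
# Same relative offset (4), different absolute positions:
s1 = rope(q, 3) @ rope(k, 7)
s2 = rope(q, 10) @ rope(k, 14)
# s1 and s2 agree to numerical precision: the score is relative-only.
```

Replacing integer positions with camera parameters in a construction with this same relative property is the idea the tweet is pointing at.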