Eric Chan (@ericryanchan)'s Twitter Profile
Eric Chan

@ericryanchan

PhD Student, Stanford University

ID: 1334362663657349120

Link: http://ericryanchan.github.io · Joined: 03-12-2020 05:03:59

29 Tweets

351 Followers

86 Following

Marco Monteiro (@monteiroamarco):

Happy π day! How many pies would Eric Chan and I have to eat to get enough images to train pi-GAN on pies? marcoamonteiro.github.io/pi-GAN-website/ #CVPR #GAN #AI

Julien N.P. Martel (@jnpmartel):

Our new work on scaling up neural representations is out! Adaptive coordinate networks (ACORN) can be trained to represent high-resolution imagery (tens of megapixels) in seconds and scale up to gigapixel images and highly detailed 3D models. computationalimaging.org/publications/a… arxiv.org/abs/2105.02788 1/n
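To make "neural representation" concrete: a coordinate network is an MLP fit to map pixel coordinates to colors. A minimal PyTorch sketch of that base idea only; ACORN's actual contribution, the adaptive multiscale block decomposition that makes training fast and gigapixel scales feasible, is omitted here.

```python
import torch
import torch.nn as nn

class CoordinateMLP(nn.Module):
    """Plain MLP mapping (x, y) pixel coordinates to RGB."""
    def __init__(self, hidden=256, layers=4):
        super().__init__()
        dims = [2] + [hidden] * layers + [3]
        blocks = []
        for i in range(len(dims) - 1):
            blocks.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                blocks.append(nn.ReLU())
        self.net = nn.Sequential(*blocks)

    def forward(self, xy):
        return self.net(xy)

H, W = 64, 64
image = torch.rand(H * W, 3)  # stand-in for a real image, flattened to pixels
ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                        torch.linspace(-1, 1, W), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)

model = CoordinateMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    loss = ((model(coords) - image) ** 2).mean()  # per-pixel MSE fit
    opt.zero_grad(); loss.backward(); opt.step()
```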

Vincent Sitzmann (@vincesitzmann):

Introducing "Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering"! vsitzmann.github.io/lfns (w/ video!) LFNs are the first fully implicit neural scene representation with real-time rendering, without post-processing / hybrid data-structures! (1/n)

Koki Nagano (@luminohope):

If you are at #CVPR2022, please come see "EG3D" Friday morning. Talk: 9:30am @ Great Hall A-D. Posters (13a): 10:00am-12:30pm @ Halls B2-C. Code & models released: github.com/NVlabs/eg3d. Joint work with NVIDIA AI and Gordon Wetzstein.

Koki Nagano (@luminohope):

Also, if you come to our poster early, we (Eric Chan, Matthew, Shalini De Mello) will show a live 3D visualizer demo running on a laptop for the first 30 minutes or so, as long as the battery lasts.

Jun Gao (@jungao33210520):

Excited to share our #NeurIPS2022 NVIDIA AI work GET3D, a generative model that directly produces explicit textured 3D meshes with complex topology, rich geometric details, and high-fidelity textures. #3D Project page: nv-tlabs.github.io/GET3D/

Alex Bergman (@alexwbergman):

We are excited to present our work at #NeurIPS22 on Generative Neural Articulated Radiance Fields! In this paper, we build on top of the capabilities of existing 3D GAN architectures and propose the first method for generating full human bodies! (1/7)

Hyunjik Kim (@hyunjik11):

We're organizing an #ICLR2023 workshop titled: 🌟Neural Fields across Fields: Methods and Applications of Implicit Neural Representations🌟 sites.google.com/view/neural-fi… Please submit your cool research (due: 3rd Feb)! Looking forward to seeing you on 05/05/23 @ Kigali, Rwanda 🇷🇼

Koki Nagano (@luminohope):

We also present another paper at @SIGGRAPH 2023 on neural implicit 3D Morphable Models that can be used to create a dynamic 3D avatar from a single in-the-wild image. (Lead author Connor Lin). research.nvidia.com/labs/toronto-a…

Koki Nagano (@luminohope):

AI-driven 3D lifting has huge potential! Choose your favorite AI tool to generate/customize an avatar and bring it to life in 3D – something not possible with 3D scanning-based systems. If you come to @SIGGRAPH, try a live demo as part of Emerging Technologies #SIGGRAPH2023

Vincent Sitzmann (@vincesitzmann):

Introducing “FlowCam: Training Generalizable 3D Radiance Fields w/o Camera Poses via Pixel-Aligned Scene Flow”! We train a generalizable 3D scene representation self-supervised on datasets of raw videos, without any pre-computed camera poses or SfM! cameronosmith.github.io/flowcam 1/n
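The pose-free training hinges on lifting per-pixel correspondences between adjacent frames into 3D and recovering the relative camera pose in closed form, with everything supervised by re-rendering the video. A rough sketch of one such closed-form solve, a weighted Procrustes/Kabsch alignment; this is my paraphrase of the idea, not the authors' code, and `P`, `Q`, `w` stand in for the model's pixel-aligned predictions:

```python
import torch

def weighted_procrustes(P, Q, w):
    """Closed-form rigid transform (R, t) aligning 3D points P to Q, weighted by w."""
    w = w / w.sum()
    mu_p = (w[:, None] * P).sum(0)
    mu_q = (w[:, None] * Q).sum(0)
    H = ((P - mu_p) * w[:, None]).T @ (Q - mu_q)     # weighted cross-covariance
    U, _, Vt = torch.linalg.svd(H)
    sign = torch.sign(torch.linalg.det(Vt.T @ U.T))  # guard against reflections
    D = torch.diag(torch.cat([torch.ones(2), sign.view(1)]))
    R = Vt.T @ D @ U.T
    t = mu_q - R @ mu_p
    return R, t

P = torch.randn(500, 3)                 # surface points lifted from frame t
Q = P + torch.tensor([0.1, 0.0, 0.0])   # stand-in: matched points in frame t+1
w = torch.ones(500)                     # per-point confidence from the flow estimate
R, t = weighted_procrustes(P, Q, w)     # recovers identity rotation, 0.1 translation
```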

Jeong Joon Park (@jjpark3d):

Eric Chan will be presenting our #ICCV2023 work GeNVS at the "Generative AI 1" oral session today (Oct. 4th, 4:30-6:00 PM)! If you're interested in this work or want to chat with us, come visit our poster 190 in Room "Foyer Sud."

Songyou Peng (@songyoupeng):

Given a single image, have you ever dreamed about how it would look if you flew into it? We present DiffDreamer for 3D-consistent long-range scene extrapolation! #ICCV2023 Come chat with Shengqu Cai, Eric Chan, and me at our poster at 10:30 am today in Foyer Sud!

Matthias Niessner (@mattniessner):

Diffusion models are awesome! Check out our survey on Diffusion Models for Visual Computing! We give an introduction to diffusion models and highlight how they are used by state-of-the-art methods in graphics and vision. arxiv.org/abs/2310.07204

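For readers new to the area, the training objective behind most of the surveyed methods fits in a few lines. A minimal sketch of DDPM-style epsilon prediction (an illustrative assumption; schedules, parameterizations, and architectures vary widely across the papers the survey covers):

```python
import torch

def diffusion_loss(model, x0, T=1000):
    """One training step's loss: noise an image, ask the model to predict the noise."""
    betas = torch.linspace(1e-4, 0.02, T)                # linear noise schedule
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)
    t = torch.randint(0, T, (x0.shape[0],))              # random timestep per sample
    noise = torch.randn_like(x0)
    a = alpha_bar[t].view(-1, *([1] * (x0.dim() - 1)))
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * noise       # forward (noising) process
    return ((model(x_t, t) - noise) ** 2).mean()         # epsilon-prediction MSE
```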
Kyle Sargent (@kylesargentai):

I’m really excited to finally share our new paper “ZeroNVS: Zero-shot 360-degree View Synthesis from a Single Real Image.” The paper, webpage, and code are all released! 📖arxiv.org/abs/2310.17994 🌐kylesargent.github.io/zeronvs/ 🖥️github.com/kylesargent/Ze… 🧵is below.

Andrea Tagliasacchi 🇨🇦 (@taiyasaki):

📢📢📢 Thrilled to announce "4D-fy: Text-to-4D Generation Using Hybrid Score Distillation Sampling"! sherwinbahmani.github.io/4dfy Way to start, Sherwin Bahmani 🎉 (PhD student co-advised with David Lindell at #UofT).
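For context, score distillation sampling (SDS, introduced in DreamFusion) optimizes scene parameters theta by nudging renderings x = g(theta) toward images a frozen diffusion model finds likely; going by the title, the "hybrid" variant presumably blends such gradients from more than one pretrained model. The standard SDS gradient is:

```latex
% Standard SDS gradient (DreamFusion); w(t) is a timestep weighting,
% y the text prompt, x_t the noised rendering, \epsilon the injected noise.
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}
  = \mathbb{E}_{t,\epsilon}\!\left[\, w(t)\,
      \big(\hat{\epsilon}_\phi(x_t;\, y, t) - \epsilon\big)\,
      \frac{\partial x}{\partial \theta} \,\right]
```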

Ivan Skorokhodov (@isskoro):

This paper has received significantly less attention than it deserves, so let me shed a bit more light on it and describe why it's so good: 1. It turns out that the classical U-Net image diffusion backbone, which the entire community has been happily building upon during the…

Boyang Deng (@boyang_deng):

Thought about generating realistic 3D urban neighbourhoods from maps, dawn to dusk, rain or shine? Putting heavy snow on the streets of Barcelona? Or making Paris look like NYC? We built a Streetscapes system that does all these. See boyangdeng.com/streetscapes. (Showreel w/ 🔊 ↓)

Eric Chan (@ericryanchan):

Some cool work led by Shengqu Cai showing how you can adapt a diffusion model for image-to-image tasks using data generated by itself. My favorite examples are of “decomposition”, but it’s useful for other image2image tasks like relighting and character-consistent generation too!
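A rough sketch of the general recipe as I read the tweet (not the paper's actual pipeline): let the frozen base model synthesize paired data for the target image-to-image task, then fine-tune a conditional copy on those pairs with the usual denoising loss. `generate_pair` below is a hypothetical placeholder for whatever pairing scheme the task needs, e.g. an image and its relit counterpart:

```python
import torch

def generate_pair(base_model, shape=(4, 3, 64, 64)):
    # Hypothetical placeholder: in practice, sample (source, target) image
    # pairs from the frozen base diffusion model itself.
    return torch.rand(shape), torch.rand(shape)

def add_noise(x0, noise, t, T=1000):
    """Standard forward (noising) process with a linear schedule."""
    alpha_bar = torch.cumprod(1 - torch.linspace(1e-4, 0.02, T), dim=0)
    a = alpha_bar[t].view(-1, 1, 1, 1)
    return a.sqrt() * x0 + (1 - a).sqrt() * noise

def finetune_step(base_model, cond_model, opt):
    with torch.no_grad():
        source, target = generate_pair(base_model)  # self-generated training pair
    t = torch.randint(0, 1000, (target.shape[0],))
    noise = torch.randn_like(target)
    x_t = add_noise(target, noise, t)
    pred = cond_model(x_t, t, source)               # condition on the source image
    loss = ((pred - noise) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss
```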

Ivan Skorokhodov (@isskoro):

In the past 1.5 weeks, two papers from two different research groups have appeared that develop exactly the same (and embarrassingly simple) trick to improve the convergence of image/video diffusion models by 20-100+% (sic!) arxiv.org/abs/2502.14831 arxiv.org/abs/2502.09509
