
Eric Chan
@ericryanchan
PhD Student, Stanford University
ID: 1334362663657349120
http://ericryanchan.github.io 03-12-2020 05:03:59
29 Tweets
351 Followers
86 Following


Our new work on scaling up neural representations is out! Adaptive coordinate networks (ACORN) can be trained to represent high-res imagery (10s of MP) in seconds & scale up to gigapixel images and highly detailed 3D models. computationalimaging.org/publications/a… arxiv.org/abs/2105.02788 1/n


If you are at #CVPR2022, please come see "EG3D" Friday morning! Talk @ Great Hall; posters @ Halls. Code & models released: github.com/NVlabs/eg3d Joint work with NVIDIA AI and Gordon Wetzstein




We're organizing an #ICLR2023 workshop titled "Neural Fields across Fields: Methods and Applications of Implicit Neural Representations" sites.google.com/view/neural-fi… Please submit your cool research (due: 3rd Feb)! Looking forward to seeing you on 05/05/23 @ Kigali, Rwanda

We're also presenting another paper at @SIGGRAPH 2023, on neural implicit 3D morphable models that can be used to create a dynamic 3D avatar from a single in-the-wild image (lead author: Connor Lin). research.nvidia.com/labs/toronto-a…


Introducing "FlowCam: Training Generalizable 3D Radiance Fields w/o Camera Poses via Pixel-Aligned Scene Flow"! We train a generalizable 3D scene representation self-supervised on datasets of raw videos, without any pre-computed camera poses or SfM! cameronosmith.github.io/flowcam 1/n



Diffusion models are awesome! Check out our survey, "Diffusion Models for Visual Computing"! We give an introduction to diffusion models and highlight how they are used by state-of-the-art methods in graphics and vision. arxiv.org/abs/2310.07204


I'm really excited to finally share our new paper "ZeroNVS: Zero-shot 360-degree View Synthesis from a Single Real Image." The paper, webpage, and code are all released! Paper: arxiv.org/abs/2310.17994 Webpage: kylesargent.github.io/zeronvs/ Code: github.com/kylesargent/Ze… Thread is below.

Thrilled to announce "4D-fy: Text-to-4D Generation Using Hybrid Score Distillation Sampling" sherwinbahmani.github.io/4dfy Way to start Sherwin Bahmani's PhD, co-advised with David Lindell at #UofT.


Thought about generating realistic 3D urban neighbourhoods from maps, dawn to dusk, rain or shine? Putting heavy snow on the streets of Barcelona? Or making Paris look like NYC? We built a Streetscapes system that does all of these. See boyangdeng.com/streetscapes. (Showreel at the link.)

Some cool work led by Shengqu Cai showing how you can adapt a diffusion model for image-to-image tasks using data generated by itself. My favorite examples are of "decomposition", but it's useful for other image2image tasks like relighting and character-consistent generation too!

