Connor Lin (@connorzl) 's Twitter Profile
Connor Lin

@connorzl

research scientist @LumaLabsAI | prev. co-founder @apparatelabs, CS PhD @stanford, @SCSatCMU

ID: 1456671053405581312

Website: connorzlin.com | Joined: 05-11-2021 17:13:49

17 Tweets

168 Followers

79 Following

Koki Nagano (@luminohope) 's Twitter Profile Photo

If you are at #CVPR2022, please come see "EG3D" Friday morning. Talk: 9:30am @ Great Hall A-D. Posters (13a): 10:00-12:30pm @ Halls B2-C. Code & models released: github.com/NVlabs/eg3d. Joint work w/ NVIDIA AI and Gordon Wetzstein

Shalini De Mello (@shalinidemello) 's Twitter Profile Photo

If you are at #CVPR2022 in person, check out EG3D. We learn to generate depth images from single images via tri-plane representations, without depth or multiview supervision. Oral on Friday 6/24 at 9:30am; poster and live demo at board 13a at 10:00am. Code: github.com/NVlabs/eg3d

Connor Lin (@connorzl) 's Twitter Profile Photo

Excited to present our #NeurIPS2022 paper today on 3D neural shape editing! (4-6 pm CST, poster #525) NeuForm combines overfitted and generalizable model priors to preserve surface details while generating realistic joint geometry. Paper: openreview.net/forum?id=RnjDF…

Koki Nagano (@luminohope) 's Twitter Profile Photo

We also present another paper at @SIGGRAPH 2023 on neural implicit 3D Morphable Models that can be used to create a dynamic 3D avatar from a single in-the-wild image. (Lead author Connor Lin). research.nvidia.com/labs/toronto-a…

AK (@_akhaliq) 's Twitter Profile Photo

Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization abs: arxiv.org/abs/2305.03043 paper page: huggingface.co/papers/2305.03… project page: research.nvidia.com/labs/toronto-a…

Connor Lin (@connorzl) 's Twitter Profile Photo

Come check out how you can create a photorealistic and animatable 3D face avatar with easily controllable texture from just a single image! ACM SIGGRAPH 2 pm, room 515A

Koki Nagano (@luminohope) 's Twitter Profile Photo

Connor Lin will present "Single-Shot Implicit 3DMM with Consistent Texture Parameterization" at #SIGGRAPH2023 in room 515A during the 2-3:30 session s2023.siggraph.org/presentation/?…

Apparate Labs (@apparatelabs) 's Twitter Profile Photo

Introducing Proteus 0.1, REAL-TIME video generation that brings life to your AI. Proteus can laugh, rap, sing, blink, smile, talk, and more. From a single image! Come meet Proteus on Twitch in real-time. ↓ Sign up for API waitlist: apparate.ai/early-access.h… 1/11

Linqi (Alex) Zhou (@linqi_zhou) 's Twitter Profile Photo

I want to share this amazing project I have been working on for the past few months. I could not have done this without the incredible @willbokuishen, Connor Lin, Gordon Wetzstein, and generous support from our advisors Leo Guibas and Stefano Ermon!

William Shen (@shenbokui) 's Twitter Profile Photo

AI is entering the REAL-TIME era. We are here to give it a visual embodiment. Our journey has just begun. DM is open if you want to talk!

Luma AI (@lumalabsai) 's Twitter Profile Photo

🚀 Introducing the Dream Machine API. Developers can now build and scale creative products with the world's most popular and intuitive video generation model without building complex tools in their apps. Start today lumalabs.ai/dream-machine/… #LumaDreamMachine

Luma AI (@lumalabsai) 's Twitter Profile Photo

Introducing Ray2, a new frontier in video generative models. Scaled to 10x compute, #Ray2 creates realistic videos with natural and coherent motion, unlocking new freedoms of creative expression and visual storytelling. Available now. Learn more: lumalabs.ai/ray

Linqi (Alex) Zhou (@linqi_zhou) 's Twitter Profile Photo

SO excited to finally share my work at Luma! We introduce Inductive Moment Matching, a new generative paradigm that can be trained stably with a single model and single objective from scratch, achieving 1.99 FID on ImageNet-256x256 in 8 steps and 1.98 FID on CIFAR-10 in 2 steps.