Deqing Sun (@deqingsun) 's Twitter Profile
Deqing Sun

@deqingsun

Research Scientist @GoogleDeepmind - #ICCV2025 Program Chair

ID: 2190920784

Link: https://deqings.github.io/ · Joined: 12-11-2013 19:33:30

66 Tweets

845 Followers

193 Following

Michael Black (@michael_j_black) 's Twitter Profile Photo

Build what you need and use what you build. This is a core philosophy of my research. It shifts the focus away from publishing “papers” to what really matters — impact. This thread unpacks why I think this is a successful approach to science. 1/10 Or see: perceiving-systems.blog/en/post/build-…

Michael Black (@michael_j_black) 's Twitter Profile Photo

Should you do research in an AI startup? Does it burn someone else's money and your equity? Or is it the key to success? If you do it, how do you manage it? Drawing on experiences at Xerox PARC, Amazon, Body Labs, & Meshcapade, I try to shed some light: medium.com/@black_51980/s…

Deqing Sun (@deqingsun) 's Twitter Profile Photo

Welcome to our fifth AI for Content Creation (ai4cc.net) workshop at CVPR 2024 on June 17th at Seattle Convention Center — Summit 342! Our late-breaking talks are Sora by Tim Brooks and Genie by Yuge Shi (Jimmy) & Jack Parker-Holder 🔥🔥🔥

Junyi Zhang (@junyi42) 's Twitter Profile Photo

On my way to Seattle ✈️ for my first ever #CVPR! Excited to meet old and new friends. 😄 I'll be presenting our work telling-left-from-right.github.io on Wed. (19th) morning at #284. If you're interested in how a plug-in processor can enhance the Geo-aware SC of SD+DINO, please stop by.

Junyi Zhang (@junyi42) 's Twitter Profile Photo

Excited to share MonST3R! -- a simple way to estimate geometry from unposed videos of dynamic scenes. We achieve competitive results on several downstream tasks (video depth, camera pose) and believe this is a promising step toward feed-forward 4D reconstruction. monst3r-project.github.io

Yonglong Tian (@yonglongt) 's Twitter Profile Photo

Do we still need codebooks/quantization for scalable autoregressive visual generation? No! Thrilled to share our latest work on scaling w/ continuous tokens. We observe power-law scaling behavior on val loss, and obtain SOTA COCO FID and GenEval scores. arxiv.org/abs/2410.13863

Lijie Fan (@lijie_fan) 's Twitter Profile Photo

🚀 Excited to share our latest work Fluid! We've developed a scalable autoregressive text-to-image model without VQ. We trained the model up to 10B parameters, achieving state-of-the-art COCO FID and GenEval scores. 🔥 Check it out: arxiv.org/pdf/2410.13863 🙏 Shout out to

Daniel Geng (@dangengdg) 's Twitter Profile Photo

What happens when you train a video generation model to be conditioned on motion? Turns out you can perform "motion prompting," just like you might prompt an LLM! Doing so enables many different capabilities. Here are a few examples – check out this thread 🧵 for more results!

Dreaming Tulpa 🥓👑 (@dreamingtulpa) 's Twitter Profile Photo

Thanks to AI, so many amazing new possibilities are emerging. Oh, what I would give to be a kid again. Our imagination will truly be the limit.

Deqing Sun (@deqingsun) 's Twitter Profile Photo

A friendly reminder for those submitting to #ICCV2025: *The supplementary materials deadline is the same as the paper deadline* (iccv.thecvf.com/Conferences/20…). Please make sure you are prepared.

#ICCV2025 (@iccvconference) 's Twitter Profile Photo

Friendly reminder: #ICCV2025 paper registration deadline is TODAY March 3rd! ⏰ Remember that paper registration is required before submission. Program chairs won't be able to accommodate late registrations. Please check all dates and deadlines at iccv.thecvf.com/Conferences/20…

#ICCV2025 (@iccvconference) 's Twitter Profile Photo

To help manage server bandwidth, we urge #ICCV2025 authors to upload their supplementary material well in advance of the deadline.

Kosta Derpanis (@csprofkgd) 's Twitter Profile Photo

#KostasThoughts: When communicating with conference organizers, start by assuming good intentions and always keep it respectful. Remember, these are real people, volunteering their time and making personal sacrifices to bring everything together.

#ICCV2025 (@iccvconference) 's Twitter Profile Photo

The #ICCV2025 rebuttal deadline is almost here! Double-check the time and timezone, and be sure to submit on time—no extensions will be granted. Good luck! Important dates: iccv.thecvf.com/Conferences/20…

Nate Gillman @ICLR'25 (@gillmanlab) 's Twitter Profile Photo

Ever wish you could turn your video generator into a controllable physics simulator? We're thrilled to introduce Force Prompting! Animate any image with physical forces and get fine-grained control, without needing any physics simulator or 3D assets at inference. 🧵(1/n)