Qianqian Wang (@qianqianwang5)'s Twitter Profile
Qianqian Wang

@qianqianwang5

Postdoc at UC Berkeley and Visiting Researcher at Google. Former Ph.D. student at Cornell Tech. qianqianwang68.github.io

ID: 1074108664150134784

Joined: 16-12-2018 01:07:16

22 Tweets

1.1K Followers

334 Following

AK (@_akhaliq)'s Twitter Profile Photo

InfiniteNature-Zero: Learning Perpetual View Generation of Natural Scenes from Single Images abs: arxiv.org/abs/2207.11148 project page: infinite-nature-zero.github.io

Zhengqi Li (@zhengqi_li)'s Twitter Profile Photo

Check out our CVPR 2023 Award Candidate paper, DynIBaR! dynibar.github.io DynIBaR takes monocular videos of dynamic scenes and renders novel views in space and time. It addresses limitations of prior dynamic NeRF methods, rendering much higher quality views.

AK (@_akhaliq)'s Twitter Profile Photo

Doppelgangers: Learning to Disambiguate Images of Similar Structures paper page: huggingface.co/papers/2309.02… We consider the visual disambiguation task of determining whether a pair of visually similar images depict the same or distinct 3D surfaces (e.g., the same or opposite sides…

Aleksander Holynski (@holynski_)'s Twitter Profile Photo

.Qianqian Wang's 🎉Best Student Paper🎉 is being presented at #ICCV2023 tomorrow (Friday)! ▶️"Tracking Everything Everywhere All At Once"◀️ w/ Yen-Yu Chang, Ruojin Cai, Zhengqi Li, Bharath Hariharan, Noah Snavely. Friday Afternoon Oral & Poster! Come say hi! omnimotion.github.io

AK (@_akhaliq)'s Twitter Profile Photo

SpatialTracker: Tracking Any 2D Pixels in 3D Space. Recovering dense and long-range pixel motion in videos is a challenging problem. Part of the difficulty arises from the 3D-to-2D projection process, leading to occlusions and discontinuities in the 2D motion domain.
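A minimal sketch of the 3D-to-2D projection issue described above, assuming a standard pinhole camera; the intrinsics K and the sample points are illustrative placeholders, not values from SpatialTracker:

    import numpy as np

    # Illustrative pinhole intrinsics (made up for this sketch).
    K = np.array([[500.0,   0.0, 320.0],
                  [  0.0, 500.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    def project(X):
        # Map a 3D point in camera coordinates to 2D pixels; depth is divided out.
        x = K @ X
        return x[:2] / x[2]

    # Two points at very different depths can land on the same pixel, so the
    # nearer one occludes the farther one and the farther point's 2D track breaks.
    near = np.array([0.0, 0.0, 1.0])
    far  = np.array([0.0, 0.0, 10.0])
    print(project(near), project(far))  # both map to (320.0, 240.0)

The title's point about tracking in 3D space is that, before projection, such points remain distinct.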

Yuxi Xiao (@yuxixiaohenry)'s Twitter Profile Photo

#CVPR If you are interested in how to extract 3D trajectories from monocular video, do not miss our poster tomorrow: Poster #84 (Arch 4A-E). SpatialTracker: Tracking Any 2D Pixels in 3D Space. Web: henry123-boy.github.io/SpaTracker/ Qianqian Wang and I will be there; feel free to come say hi!

Justin Kerr (@justkerrding)'s Twitter Profile Photo

Robot See, Robot Do allows you to teach a robot articulated manipulation with just your hands and a phone! RSRD imitates from 1) an object scan and 2) a human demonstration video, reconstructing 3D motion to plan a robot trajectory. robot-see-robot-do.github.io #CoRL2024 (Oral)

Zhengqi Li (@zhengqi_li)'s Twitter Profile Photo

Introducing MegaSaM! 🎥 Accurate, fast, & robust structure + camera estimation from casual monocular videos of dynamic scenes! MegaSaM outputs camera parameters and consistent video depth, scaling to long videos with unconstrained camera paths and complex scene dynamics!

Jing-Jing Li (@drjingjing2026)'s Twitter Profile Photo

1/3 Today, an anecdote shared by an invited speaker at #NeurIPS2024 left many Chinese scholars, myself included, feeling uncomfortable. As a community, I believe we should take a moment to reflect on why such remarks in public discourse can be offensive and harmful.

Junyi Zhang (@junyi42)'s Twitter Profile Photo

Introducing St4RTrack! 🖖 Simultaneous 4D Reconstruction and Tracking in world coordinates, in a feed-forward manner, just by changing the meaning of two pointmaps! st4rtrack.github.io
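For context on the pointmap idea this hinges on, here is a rough sketch under assumed definitions (not St4RTrack's actual code or interface): a pointmap is an H x W x 3 array holding a 3D point per pixel, and predicting two of them, one read as the frame's own geometry and one read as where the same pixels sit in a shared world frame, gives reconstruction and tracking at once.

    import numpy as np

    def pointmap_from_depth(depth, K):
        # Unproject a depth map into an H x W x 3 pointmap in camera coordinates.
        # Illustrative helper only; names and parameters are assumptions.
        H, W = depth.shape
        u, v = np.meshgrid(np.arange(W), np.arange(H))
        pixels = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)
        rays = pixels @ np.linalg.inv(K).T   # per-pixel ray directions
        return rays * depth[..., None]       # scale by depth -> 3D points

    # Toy example: a flat surface 2 m in front of an assumed camera.
    K = np.array([[100.0,   0.0, 32.0],
                  [  0.0, 100.0, 24.0],
                  [  0.0,   0.0,  1.0]])
    pm_geometry = pointmap_from_depth(np.full((48, 64), 2.0), K)
    print(pm_geometry.shape)  # (48, 64, 3)
    # A second pointmap over the same pixel grid, but with points expressed in a
    # shared world frame, would give every pixel a correspondence target: tracking.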

Zhiwen (Aaron) Fan (@wayneinr)'s Twitter Profile Photo

📢 Call for Papers: End-to-End 3D Learning @ ICCV 2025 Workshop
Advance 3D Representation, Geometry & Generative AI for Robotics, Autonomous Driving, XR, and Science.

🌍 Domain-leading speakers
🏆 Best Paper Award
🗓 Submission Deadline: June 29
🔗 e2e3d.github.io
Zhiwen (Aaron) Fan (@wayneinr)'s Twitter Profile Photo

We're grateful to receive multiple sponsorships (to be announced soon)!
🏆 $1000 Best Paper Award
🏆 $1000 Best Demo Paper Award
🎁 Onsite gifts for attendees!
Join us to shape the future of 3D learning in robotics, autonomous driving, the metaverse, and scientific imaging!
🔗
Yutong Bai (@yutongbai1002)'s Twitter Profile Photo

What would a World Model look like if we start from a real embodied agent acting in the real world? It has to have: 1) a real, physically grounded, and complex action space, not just abstract control signals; 2) diverse, real-life scenarios and activities. Or in short: it has to…