Xuweiyi Chen (@chenxuweiyi) 's Twitter Profile
Xuweiyi Chen

@chenxuweiyi

1st year Ph.D. @ UVA | Research in 3D Computer Vision

ID: 1497059143357636617

Joined 25-02-2022 04:01:36

20 Tweets

67 Followers

96 Following

MichiganAI (@michigan_ai) 's Twitter Profile Photo

💫🏆 Special congratulations to our amazing Prof. Joyce Chai for receiving a 2024 @Umich EECS Outstanding Achievement Award! She was recognized for her sustained excellence and contributions made in teaching, research, and service. cse.engin.umich.edu/stories/2024-e…

Jianing “Jed” Yang (@jed_yang) 's Twitter Profile Photo

🔥 3D-LLMs go brrrr! 🚀 Excited to announce our latest research on scaling 3D-LLM training data to *million-scale* with *dense grounding*. 🌟 Introducing 3D-GRAND: a pioneering dataset featuring 40,087 household scenes paired with 6.2 million densely-grounded 3D-text pairs. 🏠💬

Xuweiyi Chen (@chenxuweiyi) 's Twitter Profile Photo

I’m excited to share our work towards densely grounded 3D-LLMs! Check out our work if you are looking for a million-scale, densely-grounded 3D-text dataset! Happy to collaborate!

Xuejun Zhang (@eva_xuejunzhang) 's Twitter Profile Photo

🎉Happy to share my first paper "Multi-Object Hallucination in Vision-Language Models" in #ALVR at #ACL2024! Do VLMs hallucinate more in multi-object scenes? Shortcuts? Spurious correlations? What factors contribute to their hallucinations? 👀 🔗multi-object-hallucination.github.io 🧵[1/n]

Xuweiyi Chen (@chenxuweiyi) 's Twitter Profile Photo

Congratulations! Please take some time to rest, and I hope we will have more interesting collaborations in the near future. Jianing “Jed” Yang @ CVPR Since it started from an interesting observation during data curation, any new insights while you are at Meta?

Boyang Wang (@boyangwang7) 's Twitter Profile Photo

Very happy to share our new paper, This&That, a dynamic robot video generation model conditioned on language and simple gestures! Moreover, we also propose the Diffusion Video to Action (DiVA) model to transfer generated videos to robot actions in the rollout environment.

Jianing “Jed” Yang (@jed_yang) 's Twitter Profile Photo

⚡️ Excited to announce Fast3R: 3D reconstruction of 1000+ images in a single forward pass! Fast3R achieves 251 FPS at its peak. 🔥 Try the demo with your images or video! 🔗 Website: fast3r-3d.github.io 🎮 Demo: fast3r.ngrok.app #CVPR2025 #3D AI at Meta

Boyang Wang (@boyangwang7) 's Twitter Profile Photo

Thanks AK for sharing! The homepage can be found at: uva-computer-vision-lab.github.io/Frame-In-N-Out/ Thank you to all my coauthors: Xuweiyi Chen, Matheus Gadelha, Zezhou Cheng, Research Work at UVA! Special thanks to Xuweiyi Chen for bridging this opportunity.

Zhenjun Zhao (@zhenjun_zhao) 's Twitter Profile Photo

SAB3R: Semantic-Augmented Backbone in 3D Reconstruction Xuweiyi Chen, Tian XIA, Si.X, Jianing “Jed” Yang @ CVPR, Joyce Chai, Zezhou Cheng tl;dr: MASt3R+distillation->open-vocabulary segmentation+3D reconstruction arxiv.org/abs/2506.02112

Voxel51 (@voxel51) 's Twitter Profile Photo

We’re officially set up and ready to go at #CVPR2025—swing by booth #1417 and say hi to the Voxel51 team. 👋 We’re showcasing powerful tools and workflows designed to help ML and computer vision teams work faster, smarter, and more accurately at scale. Here’s what you’ll see
