Stan Birchfield (@birchfieldstan)'s Twitter Profile
Stan Birchfield

@birchfieldstan

NVIDIA, Univ of Washington, Clemson
computer vision, robotics

ID: 1697052979499786240

Joined: 31-08-2023 01:05:59

16 Tweets

28 Followers

31 Following

Zhenjun Zhao (@zhenjun_zhao)'s Twitter Profile Photo

BundleSDF: Neural 6-DoF Tracking and 3D Reconstruction of Unknown Objects

Bowen Wen, et al.

tl;dr: object-level SLAM, online pose graph optimization + Neural Object Field

#CVPR2023
arxiv.org/pdf/2303.14158…

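The "online pose graph optimization" in the tl;dr can be illustrated with a toy 1D example: poses are graph nodes, relative-motion measurements are edges, and all poses are solved jointly by least squares. This is only a conceptual sketch with invented numbers, not BundleSDF's implementation (which optimizes full SE(3) poses together with a Neural Object Field):

```python
import numpy as np

# Toy 1D pose graph: 4 poses, relative-motion measurements between them.
n = 4
edges = [  # (i, j, measured displacement x_j - x_i); values are hypothetical
    (0, 1, 1.0),
    (1, 2, 1.1),
    (2, 3, 0.9),
    (0, 3, 3.05),  # loop-closure-style constraint spanning the chain
]

# Build a linear least-squares system A x = b, with x_0 anchored at 0.
A = np.zeros((len(edges) + 1, n))
b = np.zeros(len(edges) + 1)
for row, (i, j, z) in enumerate(edges):
    A[row, i], A[row, j], b[row] = -1.0, 1.0, z
A[-1, 0], b[-1] = 1.0, 0.0  # gauge constraint: fix the first pose

x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(x, 3))  # jointly optimized poses; the loop-closure error is
                       # spread across the chain instead of accumulating
```

Note how the slightly inconsistent loop-closure measurement (3.05 vs. the chained 3.0) is reconciled across all edges at once, which is the point of solving the graph jointly rather than chaining frame-to-frame estimates.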
Aritra Ghosh (@iamaritrag)'s Twitter Profile Photo

👉#Nvidia unveils BundleSDF: the novel SOTA in real-time 6-DoF tracking of unknown objects, while simultaneously performing neural 3D reconstruction. Paper, Project & Code ✅6-DoF + 3D reconstruction of unknown objects ✅Cam. movement, objects manipulation & editing ✅Co-design

AK (@_akhaliq)'s Twitter Profile Photo

HANDAL: A Dataset of Real-World Manipulable Object Categories with Pose Annotations, Affordances, and Reconstructions

paper page: huggingface.co/papers/2308.01…

Presents the HANDAL dataset for category-level object pose estimation and affordance prediction. Unlike previous datasets,

NVIDIA AI Developer (@nvidiaaidev)'s Twitter Profile Photo

📣 Introducing FoundationPose, a unified foundation model for 6D object pose estimation and tracking for novel objects from #NVIDIAResearch. See the details below. 🧵 👇

Open Robotics (@openroboticsorg)'s Twitter Profile Photo

NVIDIA has released the source code for their 2024 CVPR paper, "FoundationPose: Unified 6D Pose Estimation and Tracking of Novel Objects." We're really excited to see what the #ROS community can make happen with this code. nvlabs.github.io/FoundationPose/
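A 6D pose, as estimated by systems like FoundationPose, is a rotation plus a translation, commonly packed into a 4×4 rigid transform. A minimal sketch of building and applying one (purely illustrative; `make_pose` and the values are invented and say nothing about FoundationPose's internals):

```python
import numpy as np

def make_pose(rotation, translation):
    """Build a 4x4 SE(3) transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Hypothetical pose: 90-degree rotation about z, then a shift along x.
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
Rz = np.array([[c, -s, 0.0],
               [s,  c, 0.0],
               [0.0, 0.0, 1.0]])
T = make_pose(Rz, [1.0, 0.0, 0.0])

p = np.array([1.0, 0.0, 0.0, 1.0])  # object-frame point, homogeneous coords
p_cam = T @ p                       # same point in the camera frame
print(np.round(p_cam[:3], 6))  # → [1. 1. 0.]
```

The homogeneous 4×4 form is convenient because composing poses and transforming points both reduce to matrix multiplication.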

vannguyen (@vannguyen_ng)'s Twitter Profile Photo

📢 Benchmark for 6D Object Pose Estimation 📢

The BOP Challenge 2024 is now open!

bop.felk.cvut.cz/challenges/bop…

Results to be presented at the R6D workshop at #ECCV2024

Details in comments below👇

Martin Sundermeyer (@ma_sundermeyer)'s Twitter Profile Photo

The new BOP challenge 2024 pushes model-free and generalizable 6D object pose estimation. 🏆 Learn about the current state of 6D object pose estimation and the next challenges by visiting our poster at the mixed reality workshop (10:30-11:15) #CV4MR at #CVPR2024 📷

MrNeRF (@janusch_patas)'s Twitter Profile Photo

#CVPR2024: "NeRFDeformer: NeRF Transformation from a Single View via 3D Scene Flows"

Robots can navigate 3D worlds with NeRFs, but updating them after scene changes is computationally expensive (retraining the whole model!).

Researchers from UIUC and #NVIDIAResearch introduce

Bardienus Duisterhof (@bduisterhof)'s Twitter Profile Photo

Dense tracking of deformable objects can unlock applications in robotics, gen-AI and AR. We present DeformGS (previously MD-Splatting) and release the code and data.  Join us at #WAFR where we will present new real-world results! 👇deformgs.github.io 1/9🧵

Cheng-Chun Hsu (@chengchunhsu)'s Twitter Profile Photo

How can robots learn household tasks from videos using just an iPhone, no robot hardware? Introducing SPOT, an object-centric framework that learns from minimal human demos, capturing the task-related constraints. (1/n)

NVIDIA AI Developer (@nvidiaaidev)'s Twitter Profile Photo

Could #AI revolutionize stereo depth estimation? Utilizing a massive dataset of stereo pairs, FoundationStereo is designed as a foundation model with strong zero-shot generalization and an automatic self-curation pipeline. Details from #NVIDIAResearch ➡️ nvlabs.github.io/FoundationSter…

Bowen Wen (@bowenwen_me)'s Twitter Profile Photo

📢Time to upgrade your depth camera! Introducing **FoundationStereo**, a foundation model for stereo depth estimation in zero-shot (accepted to CVPR 2025 with full scores) [1/n] Code: github.com/NVlabs/Foundat… Website: nvlabs.github.io/FoundationSter… Paper: arxiv.org/abs/2501.09898
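Stereo models such as FoundationStereo predict per-pixel disparity; turning that into metric depth uses the standard two-view relation depth = focal_length × baseline / disparity. A minimal sketch of that conversion (the camera parameters and disparity values below are made up for illustration):

```python
import numpy as np

# Standard stereo geometry: depth = fx * baseline / disparity.
fx = 700.0        # focal length in pixels (hypothetical)
baseline = 0.12   # distance between the two cameras, in meters (hypothetical)

disparity = np.array([[70.0, 35.0],
                      [14.0,  7.0]])  # predicted disparity map, in pixels
valid = disparity > 0                 # guard against division by zero
depth = np.where(valid, fx * baseline / np.maximum(disparity, 1e-6), np.inf)
print(depth)  # depths in meters: [[1.2, 2.4], [6.0, 12.0]]
              # larger disparity -> closer surface
```

The inverse relationship is why stereo depth error grows quadratically with distance: at large range a small disparity error translates into a large depth error.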

Bardienus Duisterhof (@bduisterhof)'s Twitter Profile Photo

Imagine if robots could fill in the blanks in cluttered scenes. ✨ Enter RaySt3R: a single masked RGB-D image in, complete 3D out. It infers depth, object masks, and confidence for novel views, and merges the predictions into a single point cloud. rayst3r.github.io
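The "merge the predictions into a single point cloud" step described above can be sketched as pinhole back-projection plus confidence filtering. This is a conceptual toy, not RaySt3R's code; the intrinsics, confidence maps, and the 0.5 threshold are all invented:

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth map to camera-frame 3D points (pinhole model)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Two hypothetical 2x2 depth predictions from different viewpoints,
# each with a per-pixel confidence map (values made up for illustration).
depth_a = np.full((2, 2), 1.0)
conf_a = np.array([[0.9, 0.2], [0.8, 0.95]])
depth_b = np.full((2, 2), 2.0)
conf_b = np.array([[0.1, 0.85], [0.9, 0.3]])

clouds = []
for depth, conf in [(depth_a, conf_a), (depth_b, conf_b)]:
    pts = backproject(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
    clouds.append(pts[conf.reshape(-1) > 0.5])  # keep confident points only

merged = np.concatenate(clouds)  # single fused point cloud
print(merged.shape)  # → (5, 3)
```

In practice each view's cloud would first be transformed into a shared world frame using the camera poses before concatenation; the confidence filter is what keeps unreliable completions out of the fused cloud.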

NVIDIA DRIVE (@nvidiadrive)'s Twitter Profile Photo

🚗 NVIDIA Research named Autonomous Grand Challenge winner for End-to-End Autonomous Driving at #CVPR2025 in Nashville. Theme of the challenge: Towards Generalizable Embodied Systems. Learn more about NVIDIA Research at CVPR: nvda.ws/45MH4NB