Blake Wulfe (@wulfebw)'s Twitter Profile
Blake Wulfe

@wulfebw

Robotics @ToyotaResearch

ID: 2319603205

Link: https://wulfebw.github.io/ · Joined: 30-01-2014 21:53:07

8 Tweets

34 Followers

27 Following

Corina Gurau (@corina_gurau)'s Twitter Profile Photo

*Deadline Extension* Call for a second round of submissions to the #ICRA2022 workshop on Fresh Perspectives on the Future of Autonomous Driving. Submit by April 25 AoE: icra2022av.org Rowan McAllister Anthony Hu Blake Wulfe Emma Saunders Felipe Codevilla Nick Rhinehart Sergey Zagoruyko

Rowan McAllister (@rowantmc)'s Twitter Profile Photo

How should models be trained? We show that weighting model errors by their effect on downstream control improves accuracy where it matters most (gif) and improves end performance. arxiv.org/abs/2204.13319 youtu.be/OSzh6Jkp5io with Blake Wulfe Jean Mercat Logan Ellis Sergey Levine Adrien Gaidon
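The control-aware weighting idea described in the tweet above can be sketched roughly as follows. This is a minimal illustration, not the paper's actual formulation: the function name and the fixed per-dimension `control_sensitivity` vector are assumptions made here, standing in for a measure of how strongly a downstream controller's output responds to prediction error in each state dimension.

```python
import numpy as np

def control_weighted_loss(pred, target, control_sensitivity):
    """Squared prediction error, with each state dimension weighted
    by how much a downstream controller is affected by errors there."""
    errors = (pred - target) ** 2
    return float(np.sum(control_sensitivity * errors))

# Toy example with two state dimensions; the controller is assumed to be
# twice as sensitive to errors in dimension 0 as in dimension 1.
pred = np.array([1.0, 2.0])
target = np.array([0.5, 2.5])
sensitivity = np.array([2.0, 1.0])

loss = control_weighted_loss(pred, target, sensitivity)
```

Under this weighting, equal-magnitude errors contribute unequally to the loss, so training effort concentrates on the dimensions that matter most for control.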

Rowan McAllister (@rowantmc)'s Twitter Profile Photo

ICRA 2022 AV workshop videos now online: 🚗 keynotes: icra2022av.org/#h.198firfh7fex 🚗 authors: icra2022av.org/#h.r8uf7ehwpntw 🚗 all: youtu.be/qzo61V7G1EM?t=… Thanks to lead organizer Corina Gurau, virtual org Blake Wulfe Anthony Hu Emma Saunders Błażej Osiński, PC members + authors + attendees!

Alexander Khazatsky (@sashakhazatsky)'s Twitter Profile Photo

After two years, it is my pleasure to introduce “DROID: A Large-Scale In-the-Wild Robot Manipulation Dataset” DROID is the most diverse robotic interaction dataset ever released, including 385 hours of data collected across 564 diverse scenes in real-world households and offices

Karl Pertsch (@karlpertsch)'s Twitter Profile Photo

Access to *diverse* training data is a major bottleneck in robot learning. We're releasing DROID, a large-scale in-the-wild manipulation dataset: 76k trajectories, 500+ scenes, multi-view stereo, language annotations, and more. Check it out & download today! 💻: droid-dataset.github.io

Stephen Tian (@stephentian_)'s Twitter Profile Photo

Learned visuomotor robot policies are sensitive to observation viewpoint shifts, which happen all the time. Can visual priors from large-scale data help? Introducing VISTA: using zero-shot novel view synthesis models for view-robust policy learning! #CoRL2024 🧵👇

Kyle Hatch (@khatch441)'s Twitter Profile Photo

Image/video models can transfer knowledge from Internet data to robot agents by generating goal images. But what happens when images have harmful visual artifacts? We present GHIL-Glue, a method to align image/video models and low-level policies. ghil-glue.github.io

Ryan Marten (@ryanmart3n)'s Twitter Profile Photo

Announcing OpenThinker3-7B, the new SOTA open-data 7B reasoning model: improving over DeepSeek-R1-Distill-Qwen-7B by 33% on average over code, science, and math evals. We also release our dataset, OpenThoughts3-1.2M, which is the best open reasoning dataset across all data

Jenn Grannen (@jenngrannen)'s Twitter Profile Photo

Meet ProVox: a proactive robot teammate that gets you 🤖❤️‍🔥 ProVox models your goals and expectations before a task starts — enabling personalized, proactive help for smoother, more natural collaboration. All powered by LLM commonsense. Recently accepted at IEEE RAS R-AL! 🧵1/7