
Lin Shao
@linshaonju
Assistant Professor in Robotics @NUS | Ph.D. @Stanford | Opinions are my own
ID: 3035319432
https://linsats.github.io/ 22-02-2015 01:28:54
259 Tweets
2.2K Followers
2.2K Following

If you are still around, I will be speaking at today's RSS Workshop on Learning Dexterous Manipulation on some cool work we are doing at Berkeley AI Research! Room 323 at 4:45pm Robotics: Science and Systems

Congratulations to Leonidas Guibas and colleagues for winning the SIGGRAPH 2023 Test of Time award for their paper “Functional Maps: A Flexible Representation of Maps Between Shapes”! dl.acm.org/doi/10.1145/21… people.csail.mit.edu/jsolomon/asset…

The Conference on Robot Learning website and call for papers are now live! corl.org The deadline is June 6! Looking forward to seeing everyone in Munich this November! #CoRL2024

🚀RobotEra & Tsinghua University launch Humanoid-Gym! An open-source sim2real RL framework for humanoid robots. This codebase is verified on RobotEra's humanoid robot XBot (XiaoXing)! Project page: sites.google.com/view/humanoid-… GitHub: github.com/roboterax/huma… Jianyu Chen Yen-Jen Wang

Achieving bimanual dexterity with RL + Sim2Real! toruowo.github.io/bimanual-twist/ TLDR - We train two robot hands to twist bottle lids using deep RL followed by sim-to-real. A single policy trained with simple simulated bottles can generalize to drastically different real-world objects.
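
The one-line recipe in this announcement (randomize the simulated bottle every episode so a single policy covers wildly different real objects) is easy to sketch. Below is a minimal, hypothetical illustration, not the authors' code: the bottle parameters, their ranges, the linear policy, and the evolution-strategies loop (standing in for deep RL so the example runs with NumPy alone) are all assumptions.

```python
# Minimal sketch of domain randomization for sim-to-real (NOT the paper's code).
# Idea from the tweet: train one policy on simple simulated bottles with
# randomized properties so it generalizes to very different real objects.
# Parameter names, ranges, the reward, and the optimizer are all made up.
import numpy as np

rng = np.random.default_rng(0)

def sample_bottle():
    """Randomize the simulated bottle each episode (hypothetical ranges)."""
    return {
        "lid_radius_m": rng.uniform(0.01, 0.05),
        "lid_friction": rng.uniform(0.2, 1.2),
        "thread_pitch": rng.uniform(0.001, 0.004),
    }

def rollout(policy_params, bottle):
    """Toy proxy for a simulated episode: reward is higher when the commanded
    twist rate matches what this particular bottle needs (a stand-in for
    stepping a real physics simulator)."""
    obs = np.array([bottle["lid_radius_m"], bottle["lid_friction"]])
    target = bottle["thread_pitch"] / bottle["lid_radius_m"]
    action = float(obs @ policy_params)        # linear policy
    return -(action - target) ** 2             # negative squared error

def train(iters=200, pop=32, sigma=0.05, lr=0.1):
    """Simple evolution-strategies loop; each perturbation is scored on a
    *different* randomized bottle, which is what pushes the single policy
    to work across the whole distribution."""
    theta = np.zeros(2)
    for _ in range(iters):
        noise = rng.normal(size=(pop, 2))
        rewards = np.array(
            [rollout(theta + sigma * n, sample_bottle()) for n in noise]
        )
        adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        theta += lr / (pop * sigma) * noise.T @ adv
    return theta

policy = train()
# Stand-in for sim-to-real: score the policy on a bottle outside the ranges.
unseen = {"lid_radius_m": 0.08, "lid_friction": 1.5, "thread_pitch": 0.006}
print("reward on unseen bottle:", rollout(policy, unseen))
```

The real system adds dexterous hand dynamics, contact, and a deep policy; the only point of the sketch is that randomizing the object during training is what buys the generalization claimed in the tweet.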

Current 3D generative models are slow and low-quality. We present GRM, a large-scale model that reconstructs 3D Gaussians in 0.1s and generates high-quality 3D assets from text or single images in a few seconds. justimyhxu.github.io/projects/grm/ Demo: huggingface.co/spaces/GRM-dem… 1/4
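
For readers unfamiliar with the output format: "3D Gaussians" here refers to the Gaussian-splatting representation, a large set of oriented, colored, semi-transparent ellipsoids. The sketch below shows one common way to lay out such a set; the field names, sizes, and the random stand-in for the model's forward pass are assumptions following standard 3D Gaussian splatting, not code from GRM.

```python
# Hypothetical layout of a 3D Gaussian set, the kind of output a feed-forward
# reconstructor predicts (names/sizes follow common Gaussian-splatting
# conventions, not the GRM codebase).
from dataclasses import dataclass
import numpy as np

@dataclass
class GaussianSet:
    means: np.ndarray      # (N, 3) Gaussian centers in world space
    scales: np.ndarray     # (N, 3) per-axis extents
    rotations: np.ndarray  # (N, 4) unit quaternions for orientation
    colors: np.ndarray     # (N, 3) RGB (real systems often use SH coefficients)
    opacities: np.ndarray  # (N, 1) values in [0, 1]

def random_gaussians(n=1024, seed=0) -> GaussianSet:
    """Stand-in for a single forward pass mapping an image to N Gaussians."""
    rng = np.random.default_rng(seed)
    q = rng.normal(size=(n, 4))
    q /= np.linalg.norm(q, axis=1, keepdims=True)  # normalize quaternions
    return GaussianSet(
        means=rng.uniform(-1.0, 1.0, size=(n, 3)),
        scales=rng.uniform(0.01, 0.1, size=(n, 3)),
        rotations=q,
        colors=rng.uniform(0.0, 1.0, size=(n, 3)),
        opacities=rng.uniform(0.0, 1.0, size=(n, 1)),
    )

g = random_gaussians()
print(g.means.shape, g.rotations.shape)  # (1024, 3) (1024, 4)
```

Predicting these fields in a single forward pass, rather than optimizing them per scene, is what makes the sub-second reconstruction claimed in the tweet possible.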

"Why don't we have better robots yet?": just posted on the TED Talks home page under Newest Talks (3rd row from the top) with links to PiE Robotics and Forbes article on art by Ben Wolff @creativecellist @Berkeley_AI UC Berkeley Berkeley Engineering TED.com

Check out our paper: arxiv.org/abs/2403.19460 Code: github.com/HeegerGao/RiEM… Awesome collaboration with Zhengrong Xue, Shuying Deng, Tianhai Liang, Siqi Yang, Lin Shao, and Huazhe Harry Xu.

We are excited to share our #CoRL2024 paper (oral) on "Learning Quadruped Locomotion Using Differentiable Simulation", done in collaboration with Sangbae Kim at the Massachusetts Institute of Technology (MIT). We present a new way to learn to walk in minutes without parallelization, outperforming PPO in sample efficiency!
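
The contrast with PPO hinted at here is worth unpacking: a differentiable simulator exposes analytic gradients of the objective through the dynamics, so a controller can be improved with plain first-order updates instead of the high-variance sampled policy gradients that make PPO rely on massive parallel rollouts. Below is a minimal, purely illustrative stand-in, not the paper's method: a 1-D point mass tracking a target velocity, with the reverse pass through the dynamics written by hand; every detail is an assumption.

```python
# Toy illustration of first-order learning through a differentiable simulator
# (NOT the paper's method). A 1-D point mass stands in for the quadruped;
# gradients of the loss flow back through the dynamics, no sampling needed.
import numpy as np

DT, HORIZON, TARGET_V, CTRL_COST = 0.02, 100, 1.0, 0.01

def loss_and_grad(u):
    """Forward-simulate, then run a hand-written reverse pass through the
    dynamics (an autodiff framework would normally do this part)."""
    v = np.empty(HORIZON + 1)
    v[0] = 0.0
    for t in range(HORIZON):                 # differentiable dynamics
        v[t + 1] = v[t] + DT * u[t]
    loss = 0.5 * np.sum((v[1:] - TARGET_V) ** 2) + 0.5 * CTRL_COST * np.sum(u ** 2)

    grad = CTRL_COST * u                     # gradient of the control cost
    dL_dv = 0.0                              # adjoint carried backward in time
    for t in reversed(range(HORIZON)):
        dL_dv += v[t + 1] - TARGET_V         # tracking-loss term at this step
        grad[t] += DT * dL_dv                # through v[t+1] = v[t] + DT*u[t]
    return loss, grad

u = np.zeros(HORIZON)                        # open-loop actions to optimize
init_loss, _ = loss_and_grad(u)
for _ in range(500):                         # plain gradient descent
    loss, grad = loss_and_grad(u)
    u -= 0.5 * grad
print(f"loss: {init_loss:.2f} -> {loss:.2f}")
```

A real legged robot adds contact, a neural policy, and careful gradient handling, but the sample-efficiency argument is the same: each simulated trajectory yields an exact gradient rather than a noisy estimate.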
