Chris Paxton (@chris_j_paxton)'s Twitter Profile
Chris Paxton

@chris_j_paxton

Mostly posting about robots.

currently AI @agilityrobotics

prev embodied AI @hellorobotinc @AIatMeta, @NVIDIAAI. All views my own.

cpaxton.bsky.social

ID: 2244121213

Link: http://cpaxton.github.io
Joined: 13-12-2013 16:04:47

10K Tweets

15K Followers

2K Following

Jack (@jacklouisp)'s Twitter Profile Photo

OpenArm is launching V.10 of their open-sourced humanoid robot arm in 1 month. It should cost $2.5k to manufacture. About the system:
- 7 degrees of freedom
- Open-source HW, controls, simulation, and data capture
- Payload 1.9-3.5 kg with a 760 mm reach
Super excited to see

Chris Paxton (@chris_j_paxton)'s Twitter Profile Photo

Cool work on neural motion planning. In a lot of ways it's a much more challenging problem than vision-language models, because you have to be able to come up with paths between any pair of configurations. A good general-purpose tool.
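
To make the "any pair of configurations" point concrete, here is a minimal sketch of what a learned planner's interface can look like: a network that takes start and goal joint configurations plus a scene encoding and regresses a sequence of waypoints. The class name, dimensions, and architecture below are illustrative assumptions, not the method from the work being quoted.

```python
import torch
import torch.nn as nn


class NeuralMotionPlanner(nn.Module):
    """Regresses a joint-space path between an arbitrary start/goal pair."""

    def __init__(self, dof=7, scene_dim=128, n_waypoints=16, hidden=256):
        super().__init__()
        self.dof = dof
        self.n_waypoints = n_waypoints
        self.net = nn.Sequential(
            nn.Linear(2 * dof + scene_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_waypoints * dof),
        )

    def forward(self, q_start, q_goal, scene):
        # q_start, q_goal: (B, dof) joint configurations; scene: (B, scene_dim).
        x = torch.cat([q_start, q_goal, scene], dim=-1)
        deltas = self.net(x).view(-1, self.n_waypoints, self.dof)
        # Predict offsets from a straight-line interpolation, masked to zero at
        # the ends, so the output always connects q_start to q_goal exactly.
        alphas = torch.linspace(0, 1, self.n_waypoints, device=x.device).view(1, -1, 1)
        straight = (1 - alphas) * q_start.unsqueeze(1) + alphas * q_goal.unsqueeze(1)
        return straight + torch.sin(torch.pi * alphas) * deltas


# Usage: path = NeuralMotionPlanner()(q0, q1, scene_feat)  # (B, 16, 7) waypoints
```

Predicting residuals on top of a straight-line interpolation is just one way to guarantee the path hits the requested endpoints; it is a design choice assumed here for the sketch.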

Ted Xiao (@xiao_ted)'s Twitter Profile Photo

Very impressed with how far robotics + AI has come along! There were a ton of cool hackathon projects pushing on hardware and autonomy for new manipulation skills at the global LeRobot Hackathon this weekend. Just 5 years ago, these would have been impossible to do in 36 hours.

Nicolas Keller (@nicolas_keller)'s Twitter Profile Photo

Ted Xiao LeRobot Remi Cadene Michael Cho - Rbt/Acc BitRobot Network Jason Ma Dhruv Shah Chris Paxton Vitaly Bulatov 💯! I was absolutely astonished by what people were able to put together over a weekend at our Munich LeRobot hackathon! And the energy & enthusiasm was amazing - I thought we were ambitious & driven when we started Franka Robotics years ago, but the new generation just goes HAM!

Chuang Gan (@gan_chuang)'s Twitter Profile Photo

🤖Can world models quickly adapt to new environments with just a few interactions? Introducing AdaWorld 🌍 — a new approach to learning world models conditioned on continuous latent actions extracted from videos via self-supervision! It enables rapid adaptation, efficient
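
The tweet is cut off, but the recipe it names (continuous latent actions extracted from video via self-supervision, then used to condition a world model) can be sketched roughly as follows. Everything here, names and dimensions included, is an illustrative assumption rather than the AdaWorld implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LatentActionWorldModel(nn.Module):
    """Forward model conditioned on latent actions inferred from frame pairs."""

    def __init__(self, obs_dim=512, action_dim=16, hidden=512):
        super().__init__()
        # Inverse-dynamics-style encoder: which latent "action" explains z_t -> z_next?
        self.action_encoder = nn.Sequential(
            nn.Linear(2 * obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )
        # Dynamics head: predict the next frame embedding given the latent action.
        self.dynamics = nn.Sequential(
            nn.Linear(obs_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, obs_dim),
        )

    def forward(self, z_t, z_next):
        action = self.action_encoder(torch.cat([z_t, z_next], dim=-1))
        z_pred = self.dynamics(torch.cat([z_t, action], dim=-1))
        return z_pred, action


def world_model_loss(model, z_t, z_next):
    # Self-supervised objective: no action labels, only consecutive frame embeddings.
    z_pred, _ = model(z_t, z_next)
    return F.mse_loss(z_pred, z_next)
```

Because the action space is learned from video alone, adapting to a new environment amounts to fitting this latent interface to a few interactions rather than relearning dynamics from scratch, which is the adaptation story the tweet points at.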

Nathan Lambert (@natolambert)'s Twitter Profile Photo

A common trend across recent research in using reinforcement learning to train reasoning models is that the clipping operation within a trust region (core to PPO, adopted by GRPO) is squashing rare tokens that are key to clever behaviors like verification or backtracking. The
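
The quoted tweet is truncated, but the mechanism it describes is the standard clipped surrogate used by PPO and inherited by GRPO. A minimal per-token version, with illustrative variable names, is sketched below; the key point is that once a token's probability ratio leaves the trust region, the min with the clipped branch removes its gradient.

```python
import torch


def clipped_surrogate(logp_new, logp_old, advantages, eps=0.2):
    """Per-token clipped objective in the PPO/GRPO style (to be maximized)."""
    ratio = torch.exp(logp_new - logp_old)          # pi_new(token) / pi_old(token)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # When the ratio leaves [1 - eps, 1 + eps], the min selects the clipped
    # branch, whose gradient w.r.t. the policy is zero: the update to that
    # token is squashed. That is the effect described above for rare tokens
    # the optimizer would otherwise move sharply.
    return torch.min(unclipped, clipped).mean()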

Xiaolong Wang (@xiaolonw)'s Twitter Profile Photo

Congratulations to Jiarui Xu, Jiteng Mu, Ruihan Yang, and Yinbo Chen on their graduation! I am excited for their future journeys in industry: Jiarui -> OpenAI, Jiteng -> Adobe, Ruihan -> Amazon, Yinbo -> OpenAI

Zixuan Chen (@c___eric417)'s Twitter Profile Photo

🚀Introducing GMT — a general motion tracking framework that enables high-fidelity motion tracking on humanoid robots by training a single policy from large, unstructured human motion datasets. 🤖A step toward general humanoid controllers. Project Website:
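
The announcement does not include implementation details, but a single general tracking policy of this kind is typically conditioned on the robot's proprioception plus a short window of upcoming reference-motion frames. The sketch below is a generic, assumed interface with made-up dimensions, not the GMT code.

```python
import torch
import torch.nn as nn


class MotionTrackingPolicy(nn.Module):
    """One policy that tracks arbitrary reference motions on a humanoid."""

    def __init__(self, proprio_dim=69, ref_dim=45, horizon=4, n_joints=23, hidden=512):
        super().__init__()
        # Inputs: the robot's own state plus a short window of upcoming
        # reference-motion frames; output: joint targets for a low-level PD loop.
        self.net = nn.Sequential(
            nn.Linear(proprio_dim + horizon * ref_dim, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ELU(),
            nn.Linear(hidden, n_joints),
        )

    def forward(self, proprio, ref_window):
        # proprio: (B, proprio_dim); ref_window: (B, horizon, ref_dim).
        x = torch.cat([proprio, ref_window.flatten(start_dim=-2)], dim=-1)
        return self.net(x)
```

Training one such network across a large, unstructured motion dataset, rather than one policy per clip, is what makes it a step toward a general humanoid controller.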

Chris Paxton (@chris_j_paxton)'s Twitter Profile Photo

I like seeing time-lapse videos of "robots doing work" from e2e models, like this. In the end we want our robots to run for long periods of time -- that's the whole point of them.

Pete Florence (@peteflorence)'s Twitter Profile Photo

Last Spring I took off from Google DeepMind, and I've been heads-down building since with an amazing team. Excited to share more today -- introducing Generalist. It's felt to me for a couple years, since we started bringing multimodal LLMs into robotics, that a subset of the

Simon Kalouche (@simonkalouche)'s Twitter Profile Photo

You don't need humanoids to do dexterous manipulation. Robots are limited by intelligence, not by the fact that they're not shaped like humans.

C Zhang (@chongzitazhang)'s Twitter Profile Photo

Some companies still call such platforms humanoids. My intuition is the same: morphology can set the upper bound on capability, but intelligence should not come from morphology. Good methodologies (solution-level ones for planning and control) have always been insensitive to morphology.

Animesh Garg (@animesh_garg)'s Twitter Profile Photo

It was only a matter of time until we had legs with wheels on biped systems! Also a dual battery system for self-swaps. And it is not just CGI, they have built it! Well done Hexagon AB. Pretty remarkable how fast the hardware is iterating in this space! Links in 🧵

Ville 🤖 (@villekuosmanen)'s Twitter Profile Photo

How do AI robots represent their knowledge of the world? Inspired by Anthropic’s Golden Gate Claude work, I trained a Sparse Auto-encoder to extract thousands of interpretable features from the “mind” of an ACT robot model, and visualise them in a Hugging Face space.
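
For context on the mechanics: a sparse autoencoder in this style is a wide, overcomplete autoencoder trained to reconstruct activations pulled from an intermediate layer of the model, with an L1 penalty that keeps most features off. The sketch below is a generic version with illustrative dimensions, not the code behind the linked Hugging Face space.

```python
import torch.nn as nn
import torch.nn.functional as F


class SparseAutoencoder(nn.Module):
    """Overcomplete autoencoder trained on a model's internal activations."""

    def __init__(self, act_dim=512, n_features=8192):
        super().__init__()
        self.encoder = nn.Linear(act_dim, n_features)
        self.decoder = nn.Linear(n_features, act_dim)

    def forward(self, acts):
        features = F.relu(self.encoder(acts))  # sparse feature activations
        recon = self.decoder(features)
        return recon, features


def sae_loss(recon, acts, features, l1_coeff=1e-3):
    # Reconstruction keeps the dictionary faithful to the activations; the L1
    # term pushes most features to zero so each one tends to fire on a narrow,
    # human-interpretable concept.
    return F.mse_loss(recon, acts) + l1_coeff * features.abs().mean()
```

The interpretable "features" are then just the dimensions of the encoder output, which can be visualised by finding the inputs that most strongly activate each one.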
