Ji Woong Kim (@jwbkim)'s Twitter Profile
Ji Woong Kim

@jwbkim

Postdoc at @JohnsHopkins working on AI / robotics

ID: 1574414136947388418

Link: https://sites.google.com/view/jkimrobot/home · Joined: 26-09-2022 15:02:51

42 Tweets

303 Followers

93 Following

Rohan Paul (@rohanpaul_ai)'s Twitter Profile Photo

WOW 🤯 🤯 Da Vinci robot learning surgical manipulation tasks via imitation learning. The researchers successfully executed three fundamental surgical tasks: tissue manipulation, needle handling, and knot-tying.

Samuel Schmidgall (@srschmidgall)'s Twitter Profile Photo

Excited to share that our paper "Evaluation and mitigation of cognitive biases in medical language models" was published in npj Digital Medicine. In this work, we ask the question: can LLMs be easily fooled with simple cognitive bias-inducing prompts during patient diagnosis? To

Tony Z. Zhao (@tonyzzhao)'s Twitter Profile Photo

The most insane gait I've seen on a humanoid. Walking with locked knees is much more energy efficient as the motors don't need to be engaged all the time. Public info on EngineAI: team of 36; raised ~14M USD; investment from SenseTime and Hefei province; founded Oct 2023.

Sergey Levine (@svlevine)'s Twitter Profile Photo

RL in the real world presents some big challenges, but also some really big opportunities. In our new work, HIL-SERL, Charles Xu, Jeffrey Wu, and Jianlan Luo show that real-world RL can learn a huge range of precise and robust tasks, and perform them much faster than imitation.

Ken Goldberg (@ken_goldberg)'s Twitter Profile Photo

Gary Guthart (CEO of Intuitive) and I propose the phrase “Augmented Dexterity” to describe systems where surgical subtasks are controlled by a robot under the close supervision of a human surgeon (vs. "supervised autonomy" which can be intimidating). Go Bears! UC Berkeley

Chelsea Finn (@chelseabfinn)'s Twitter Profile Photo

At Conference on Robot Learning, I’m giving a talk on π₀ post-training, i.e. how we got the robot to fold laundry. Tomorrow/Saturday at 1:45 pm at the WCBM workshop: wcbm-workshop.github.io

Sergey Levine (@svlevine)'s Twitter Profile Photo

Lots of memorable quotes from Jitendra Malik at CoRL; the most significant one, of course, is: "I believe that Physical Intelligence is essential to AI" :) I did warn you, Jitendra, that out-of-context quotes are fair game. Some liberties taken wrt capitalization.

Karol Hausman (@hausman_k)'s Twitter Profile Photo

One way to test the robustness of a policy is to mess with the robot. Some of the π₀ corrections are pretty impressive: messing with the robot by throwing another shirt.

Yuejiang Liu (@liu_yuejiang)'s Twitter Profile Photo

Robot learning has been following in the footsteps of LLMs in many aspects, but one element stands apart: action chunking. Why is it crucial? Any limitations? How can it be improved? Introducing our paper "Bidirectional Decoding: Improving Action Chunking via Closed-Loop Resampling" 🧵
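Action chunking, as referenced in the tweet above, has a policy predict a short sequence of future actions in one call and execute them open-loop before re-querying. Below is a minimal sketch of that control loop; `policy` and `env_step` are hypothetical stand-ins for a real policy and environment, and this illustrates only the generic chunking idea, not the paper's bidirectional decoding method:

```python
import numpy as np

def rollout_with_chunking(policy, env_step, obs, total_steps):
    """Roll out a chunking policy.

    `policy(obs)` is assumed to return an array of shape
    (chunk_len, action_dim); `env_step(action)` applies one action and
    returns the next observation. Both names are illustrative placeholders.
    """
    t = 0
    while t < total_steps:
        chunk = policy(obs)                    # one policy call predicts many actions
        for action in chunk[: total_steps - t]:
            obs = env_step(action)             # execute the chunk open-loop
            t += 1
    return obs
```

The appeal is fewer (expensive) policy queries and temporally smoother behavior; the cost is that the robot cannot react to new observations until the chunk runs out, which is the tension the paper targets.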

Karol Hausman (@hausman_k)'s Twitter Profile Photo

A little bit of history of the laundry folding task at Physical Intelligence. We started the company in March. By May we had our first relatively simple shirt-folding policy. It starts from a simple state: t-shirt flat on the table. Yes, it's Chelsea Finn proudly evaluating her policy 🦾

Samuel Schmidgall (@srschmidgall)'s Twitter Profile Photo

🚀🔬 Introducing Agent Laboratory: an assistant for automating machine learning research. Agent Laboratory takes your research ideas and outputs a research paper and code repository, allowing you to allocate more effort toward ideation rather than low-level coding and writing 🧵

AgileX Robotics (@agilexrobotics)'s Twitter Profile Photo

Simplify data collection with a low-cost, hand-held parallel-jaw gripper: #Pika from AgileX Robotics. Pika Gripper: ±1.5mm accuracy, dual-camera system. Pika Sense: lightweight (550g). Supports ROS1/2. Perfect for researchers and developers driving innovation. #Robotics #EmbodiedAI #ROS

Karl Pertsch (@karlpertsch)'s Twitter Profile Photo

Excited to release FAST, our new robot action tokenizer! 🤖 Some highlights:
- Simple autoregressive VLAs match diffusion VLA performance
- Trains up to 5x faster
- Works on all robot datasets we tested
- First VLAs that work out-of-the-box in new environments! 🧵/
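For context on what an "action tokenizer" does: autoregressive VLAs need continuous robot actions converted into discrete tokens. FAST's actual compression-based scheme is not reproduced here; the sketch below shows the common simple baseline it improves on, which maps each continuous action dimension to one of N uniform bins (the bounds and bin count are illustrative assumptions):

```python
import numpy as np

def tokenize_actions(actions, low=-1.0, high=1.0, n_bins=256):
    """Per-dimension uniform binning: map each continuous action value in
    [low, high] to a discrete token id in [0, n_bins). A generic baseline
    for illustration, not FAST's tokenization scheme."""
    clipped = np.clip(actions, low, high)
    return ((clipped - low) / (high - low) * (n_bins - 1)).round().astype(int)

def detokenize_actions(ids, low=-1.0, high=1.0, n_bins=256):
    """Invert the binning: token ids back to approximate continuous actions."""
    return low + ids / (n_bins - 1) * (high - low)
```

Round-tripping through the tokenizer loses at most half a bin width per dimension, which is the discretization error a finer or learned tokenization tries to spend more wisely.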