Michigan SLED Lab (@sled_ai)'s Twitter Profile

Situated Language and Embodied Dialogue (SLED) research lab at @michigan_AI, led by Joyce Chai.

Website: https://sled.eecs.umich.edu/ · Joined: 30-05-2023

103 Tweets · 283 Followers · 114 Following

Hua Shen✨ (@huashen218) 's Twitter Profile Photo

🚀 Are you passionate about #Alignment Research? Exciting news! Join us at the ICLR 2025 Workshop on 👫<>🤖Bidirectional Human-AI Alignment (April 27 or 28, Singapore). We're inviting researchers in AI, HCI, NLP, Speech, Vision, Social Science, and beyond domains to submit their…

Xuejun Zhang (@eva_xuejunzhang) 's Twitter Profile Photo

I am in Vancouver this week🇨🇦! I’ll present our work at #NeurIPS2024 this Friday, Dec 13, 4:30–7:30 pm at East Exhibition Hall A-C #4303. Feel free to drop by and chat if you are interested. I’m also actively looking for PhD opportunities for Fall 2025. Would love to connect with…

Shane Storks, PhD (@shanestorks) 's Twitter Profile Photo

How well can VLMs detect and explain humans' procedural mistakes, like in cooking or assembly?
🧑‍🍳🧑‍🔧

My new pre-print with Itamar Bar-Yossef, Yayuan Li, Brian Zheyuan Zhang, Jason Corso, and Joyce Chai (Michigan SLED Lab, MichiganAI, Computer Science and Engineering at Michigan) dives into this!

arxiv.org/pdf/2412.11927
Martin Ziqiao Ma (@ziqiao_ma) 's Twitter Profile Photo

I'm excited to join the #ACL Mentorship program as a mentor! Your ideas can help make mentorship more impactful. Let’s plan together!🚀 forms.gle/dURA4QUANH3pBx…

Jian Wang (@jwanglvy) 's Twitter Profile Photo

🤖 LLM agents can code — but can they tutor others to complete coding?

🚀 We are excited to introduce our latest work, "Training Turn-by-Turn Verifiers for Dialogue Tutoring Agents: The Curious Case of LLMs as Your Coding Tutors."

📄 arXiv: arxiv.org/abs/2502.13311
Martin Ziqiao Ma (@ziqiao_ma) 's Twitter Profile Photo

Meet VEGGIE🥦 (Adobe Research)! VEGGIE is a video generative model trained solely with diffusion loss, designed for both video concept grounding and instruction-based editing. It effectively handles diverse video concept editing tasks by leveraging pixel-level grounded training in a…

Yash Bhalgat (@ysbhalgat) 's Twitter Profile Photo

Excited to announce the 1st Workshop on 3D-LLM/VLA at #CVPR2025! 🚀

Topics: 3D-VLA models, LLM agents for 3D scene understanding, Robotic control with language.

📢 Call for papers: Deadline – April 20, 2025

🌐Details: 3d-llm-vla.github.io

#llm #3d #Robotics #ai
Hua Shen✨ (@huashen218) 's Twitter Profile Photo

🚀 BiAlign 🚀 Thrilled to announce that 72 papers have been accepted to our Bidirectional Human-AI Alignment workshop at #ICLR2025 & #CHI SIG! 🎉 Huge thanks to 80+ reviewers for their invaluable contributions! 🙏 ICLR Conference · ACM CHI Conference

📜 Accepted Papers: Check them out:
Hua Shen✨ (@huashen218) 's Twitter Profile Photo

🚀 #ICLR2025 & #CHI2025 are just around the corner — and we’re excited to welcome you to our Bidirectional 👫Human-AI🤖 Alignment events!

🏅 "Golden Sponsors"🏅
A heartfelt thank you to our two generous Golden Sponsors: 🌟Prolific and 🌟Layer 6 AI of TD Bank! Their…
Martin Ziqiao Ma (@ziqiao_ma) 's Twitter Profile Photo

I won’t be attending #ICLR2025 in person since #NAACL2025 follows right after, but here are a few things I’m excited about. 

🔹 ICLR Oral Paper:
Do Vision-Language Models Represent Space and How?
🗓 Oral:  Apr 25, 3:42–3:54 a.m. (Session 4C)
🗓 Poster: Thu, Apr 24, 10 p.m.–12:30…
Martin Ziqiao Ma (@ziqiao_ma) 's Twitter Profile Photo

Vision-Language Models (VLMs) can describe the environment, but can they refer within it? Our findings reveal a critical gap: VLMs fall short of pragmatic optimality. We identify 3 key failures of pragmatic competence in referring expression generation with VLMs: (1) cannot…

Freda Shi (@fredahshi) 's Twitter Profile Photo

On my way to NAACL✈️! If you're also there and interested in grounding, don't miss our tutorial on "Learning Language through Grounding"! 
Mark your calendar: May 3rd, 14:00-17:30, Ballroom A. 

Another exciting collaboration with Martin Ziqiao Ma, Jiayuan Mao, Parisa Kordjamshidi, and Michigan SLED Lab!
Martin Ziqiao Ma (@ziqiao_ma) 's Twitter Profile Photo

Just arrived at ABQ for #NAACL2025! I haven’t attended an NLP conference in person since ACL 2023, so I’m excited to catch up with friends! Here’s where you can catch me ⬇️

🔹 NAACL Tutorial: Learning Language through Grounding
🗓 May 3, 14:00–17:30 (Ballroom A)
Website:

Martin Ziqiao Ma (@ziqiao_ma) 's Twitter Profile Photo

Can we scale 4D pretraining to learn general space-time representations that reconstruct an object from a few views at any time to any view at any other time? Introducing 4D-LRM: a Large Space-Time Reconstruction Model that ... 🔹 Predicts 4D Gaussian primitives directly from…

Martin Ziqiao Ma (@ziqiao_ma) 's Twitter Profile Photo

Thrilled to share that VEGGIE is accepted to #ICCV2025! 🎉 Check out the full thread by Shoubin Yu for details. Funny enough — it’s been 6 years since I came to the US, and this might be my first time setting foot in Hawaii. 🌴

Martin Ziqiao Ma (@ziqiao_ma) 's Twitter Profile Photo

Our study on pragmatic generation is accepted to #COLM2025! Missed the first COLM last year (no suitable ongoing project at the time😅). Heard it’s a great place to connect with LM folks; excited to finally join for round two.

Martin Ziqiao Ma (@ziqiao_ma) 's Twitter Profile Photo

📣 Excited to announce SpaVLE: #NeurIPS2025 Workshop on Space in Vision, Language, and Embodied AI! 

👉 …vision-language-embodied-ai.github.io

🦾Co-organized with an incredible team → Freda Shi · Jiayuan Mao · Jiafei Duan · Manling Li · David Hsu · Parisa Kordjamshidi

🌌 Why Space & SpaVLE?
We…