Tianyi Zhang (@tianyiz2022) 's Twitter Profile
Tianyi Zhang

@tianyiz2022

Deep sea autonomous vehicle and machine vision @CarnegieMellon | @TJU1895 @UMich alumni

ID: 1205552274887626753

Link: http://tyz1030.github.io | Joined: 13-12-2019 18:17:42

14 Tweets

147 Followers

158 Following

Maani Ghaffari (@ghaffarimaani) 's Twitter Profile Photo

We're hiring for multiple tenure-track faculty positions at Michigan at the Asst Prof level, with special interest in marine robotics and autonomy. The search is broad, and we consider all applicants from different disciplines. Contact me if you have any questions.

Yafei Hu (@yafeihucmu) 's Twitter Profile Photo

Check out our new survey paper: Toward General-Purpose Robots via Foundation Models: A Survey and Meta-Analysis! arXiv: arxiv.org/abs/2312.08782 Website: robotics-fm-survey.github.io

Tianyi Zhang (@tianyiz2022) 's Twitter Profile Photo

[RAL+ICRA'24] A principled approach for underwater true color correction, jointly learning 3D geometry and the physics of light-water interaction. code: github.com/tyz1030/neural… arxiv: arxiv.org/abs/2304.03384

Andrew Owens (@andrewhowens) 's Twitter Profile Photo

We generate spectrograms that look like natural images by composing together the score functions of audio and visual diffusion networks. This was a fun project to be a part of! w/ Ziyang Chen and Daniel Geng Site: ificl.github.io/images-that-so… Paper: arxiv.org/abs/2405.12221
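The idea of composing score functions from two diffusion models can be sketched in a few lines: at each denoising step, the update direction is a weighted sum of the two models' score estimates, so samples are steered toward regions both models consider likely. The `score_audio` and `score_visual` functions below are toy placeholders standing in for the real networks, and the annealed-Langevin loop is a minimal illustration, not the project's actual sampler.

```python
import numpy as np

def score_audio(x, sigma):
    # Placeholder score estimate from a hypothetical audio diffusion model.
    return -x / (sigma**2 + 1.0)

def score_visual(x, sigma):
    # Placeholder score estimate from a hypothetical visual diffusion model.
    return -(x - 0.5) / (sigma**2 + 1.0)

def composed_score(x, sigma, w_audio=0.5, w_visual=0.5):
    # Composition: a weighted sum of the two score functions, so the
    # sample is pulled toward data both models assign high likelihood.
    return w_audio * score_audio(x, sigma) + w_visual * score_visual(x, sigma)

# Annealed Langevin-style sampling on a toy 64x64 "spectrogram".
rng = np.random.default_rng(0)
x = rng.normal(size=(64, 64))            # start from pure noise
for sigma in [1.0, 0.5, 0.1]:            # decreasing noise levels
    step = 0.5 * sigma**2
    noise = rng.normal(size=x.shape)
    x = x + step * composed_score(x, sigma) + np.sqrt(step) * noise
```

With equal weights the sample ends up a compromise between the two models' preferences; shifting the weights trades off how image-like versus how audio-like the resulting spectrogram is.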

Daniel Geng (@dangengdg) 's Twitter Profile Photo

What happens when you train a video generation model to be conditioned on motion? Turns out you can perform "motion prompting," just like you might prompt an LLM! Doing so enables many different capabilities. Here’s a few examples – check out this thread 🧵 for more results!

Haomeng Zhang (@haomengz99) 's Twitter Profile Photo

We are excited to share our #NeurIPS2024 work "Multi-Object 3D Grounding with Dynamic Modules and Language Informed Spatial Attention" (D-LISA). Please drop by our poster session on Dec 11th 11 a.m. to 2 p.m. (Poster #1501) and learn from 楊喬諳 on Multi-Object 3D Grounding!

Zhongyu Li (@zhongyuli4) 's Twitter Profile Photo

Command humanoids *directly* with natural language? Introducing LangWBC, a generative, end-to-end policy that turns natural language into real-world whole-body humanoid control! 💬→🦿Smooth, robust, surprisingly intuitive! See more 👉 LangWBC.github.io #RSS2025

Tianyi Zhang (@tianyiz2022) 's Twitter Profile Photo

The elegance of a single camera and unlimited field-of-view. I see a roboneck not just for humanoids, but also for wheeled robots running on rugged terrain, subsea robots that cannot roll and pitch, ….

Lili (@lchen915) 's Twitter Profile Photo

Self-Questioning Language Models: LLMs that learn to generate their own questions and answers via asymmetric self-play RL. There is no external training data – the only input is a single prompt specifying the topic.
