Ruohan Gao (@ruohangao1)'s Twitter Profile
Ruohan Gao

@ruohangao1

Assistant Professor @umdcs. Ph.D. @UTCompSci and PostDoc @StanfordAILab. I teach machines to see👀, hear👂, feel🖐️.

ID: 1283223973254164485

Link: https://ruohangao.github.io/
Joined: 15-07-2020 02:17:04

134 Tweets

1.1K Followers

460 Following

Fei-Fei Li (@drfeifei)'s Twitter Profile Photo

Excited to share that my book, The Worlds I See, will be published on Nov 7, 2023 from Flatiron Books & Melinda French Gates' Moment of Lift Books. I believe AI can help people & I hope you’ll come along on the journey. Preorder momentoflift.com/the-worlds-i-s… #MomentofLiftBooks #WorldsISee 1/

Changan Chen (@changan_vr)'s Twitter Profile Photo

📢Calling for papers at the second AV4D Workshop #ICCV2023! This workshop shares recent progress in audio-visual learning, covering both traditional topics and emerging topics such as embodied AI, AI generation, and the metaverse. Submissions due Aug 25. See more at av4d.org!

Hao Su (@haosu_twitr)'s Twitter Profile Photo

📢Thrilled to announce sudoAI, founded by a group of leading AI talents and me! 🚀 We are dedicated to revolutionizing the digital & physical realms by crafting interactive AI-generated 3D environments! Join our 3D Gen AI model waitlist today! 👉sudo.ai

Ruohan Zhang (@ruohanzhang76)'s Twitter Profile Photo

Introducing our new work at the Conference on Robot Learning (CoRL) 2023, a novel brain-robot interface system: NOIR (Neural Signal Operated Intelligent Robots). Website: noir-corl.github.io Paper: arxiv.org/abs/2311.01454 🧠🤖

Ruohan Gao (@ruohangao1)'s Twitter Profile Photo

Really enjoyed reading the book by my postdoc advisor Fei-Fei Li. Fei-Fei is not only my enduring mentor but also the best role model for my 8-month-old daughter and for young kids in the generations to come, inspiring them to actively embrace the transformative era of AI! 🦾 🧭

Jia-Bin Huang (@jbhuang0604)'s Twitter Profile Photo

Grad school application season! We will be recruiting highly motivated PhD students for our lab! We have so many vibrant research groups in the UMD Department of Computer Science working on cutting-edge CV/ML research! Don't miss out on the opportunity! 🤩

Wenqi Jia (@wenqi_jia)'s Twitter Profile Photo

🗣️Looking beyond interpreting actions that directly involve the camera wearer, how can egocentric audio-visual signals help us understand the natural social behaviors among all conversation partners and ultimately enhance our daily communication? #CVPR24 Project: vjwq.github.io/AV-CONV/

Abhinav Shrivastava (@abhi2610)'s Twitter Profile Photo

Call for Papers: #INRV2024 Workshop on Implicit Neural Representation for Vision @ #CVPR2024! Topics: compression, representation using INRs for images, audio, video & more! Deadline: 3/31. Submit now! Website: inrv.github.io Submission link: shorturl.at/vzBR8

Hal Daumé III (@haldaume3)'s Twitter Profile Photo

“What would an AI research & education mission that crossed the whole campus look like?” This is the question posed to me & a working group by Univ. of Maryland's Provost Rice in summer '23. Now we’re announcing the answer: #AIMaryland, the AI Interdisciplinary Institute at Maryland. >

Phillip Isola (@phillip_isola)'s Twitter Profile Photo

Our computer vision textbook is released! Foundations of Computer Vision with Antonio Torralba and Bill Freeman mitpress.mit.edu/9780262048972/… It’s been in the works for >10 years. Covers everything from linear filters and camera optics to diffusion models and radiance fields. 1/4

Mason Wang (@masonlongwang)'s Twitter Profile Photo

We can capture & reconstruct 3D visual environments, but what about sound? Our CVPR paper "Hearing Anything Anywhere" takes on the task of capturing the spatial acoustic characteristics of a room, allowing us to create immersive auditory experiences. Demo (Use Headphones 🎧):

Wenqi Jia (@wenqi_jia)'s Twitter Profile Photo

Final day of #CVPR2024 and let's face it, engaging with multiple people simultaneously is TIRING!!! Before you're completely worn out, stop by our poster at #218 this afternoon and see how we use a social graph to capture interpersonal conversation dynamics!

Andrew Owens (@andrewhowens)'s Twitter Profile Photo

In case you were wondering what’s going on with the back of the #CVPR2024 T-shirt: it’s a hybrid image made by Aaron Inbum Park and Daniel Geng! When you look at it up close, you’ll just see the Seattle skyline, but when you view it from a distance, the text “CVPR” should appear.
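For readers unfamiliar with hybrid images, the effect comes from combining the low spatial frequencies of one image (which dominate when viewed from a distance) with the high frequencies of another (which dominate up close). Below is a minimal sketch of that general idea; the file names and Gaussian cutoff are placeholder assumptions, not the actual pipeline used for the T-shirt design.

```python
# Minimal hybrid-image sketch (assumed inputs, not the authors' actual T-shirt pipeline):
# the low-pass component dominates from afar, the high-pass component dominates up close.
import cv2
import numpy as np

# Placeholder files: grayscale "CVPR" text and a Seattle skyline photo of the same size.
text = cv2.imread("cvpr_text.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
skyline = cv2.imread("skyline.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

sigma = 8.0  # assumed Gaussian cutoff; larger values push more detail into the "far" view
low = cv2.GaussianBlur(text, (0, 0), sigma)                # low frequencies: visible from a distance
high = skyline - cv2.GaussianBlur(skyline, (0, 0), sigma)  # high frequencies: visible up close

hybrid = np.clip(low + high, 0, 255).astype(np.uint8)
cv2.imwrite("hybrid.png", hybrid)
```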

Kelin Yu (@colinyu14116982)'s Twitter Profile Photo

Introducing MimicTouch, our new paper accepted at #CoRL2024 (and winner of the Best Paper Award at the #NIPS2024 TouchProcessing Workshop). MimicTouch learns tactile-only policies (no visual feedback) for contact-rich manipulation directly from human hand demonstrations. (1/6)

Ruohan Gao (@ruohangao1)'s Twitter Profile Photo

Thank you for the kind words, Brendan Iribe! It’s truly inspiring to be part of a department housed in a building made possible by your generosity and vision. I’m so excited to contribute to advancing Vision ML with brilliant colleagues and students here at UMD!