Sangmin Lee (@sangminlee777)'s Twitter Profile
Sangmin Lee

@sangminlee777

Postdoc at UIUC, working on multimodal AI

ID: 1730676533269082112

Link: http://sites.google.com/view/sangmin-lee
Joined: 01-12-2023 19:53:46

27 Tweets

58 Followers

41 Following

Xu Cao (@irohxu)'s Twitter Profile Photo

🎉 Excited to share one of our #CVPR2024 papers on multimodal LLMs for building the next generation of autonomous driving systems and HD maps!

GitHub Repo: github.com/LLVM-AD/MAPLM
Previous Survey Repo: github.com/IrohXu/Awesome…

Zixuan Huang (@zixuan_huang)'s Twitter Profile Photo

Our low-resolution-trained point diffusion model generates high-resolution outputs -- performance continuously improves as inference resolution increases! Thanks AK for featuring our work!

Sangmin Lee (@sangminlee777)'s Twitter Profile Photo

Our paper on multimodal social interactions will be presented as an Oral at #CVPR2024 today!
📢 Oral Presentation: 1pm Thursday, Summit Flex Hall C
📌 Poster Session: 5:15-6:45pm Thursday, Arch 4A-E, Poster #459

Bolin Lai (@bryanislucky)'s Twitter Profile Photo

While learning new skills, have you ever felt tired of reading a verbose manual or annoyed by unclear instructions? Check out our #ECCV2024 work on generating egocentric (first-person) visual guidance tailored to the user's situation! [1/7]

Page: bolinlai.github.io/Lego_EgoActGen/

Bolin Lai (@bryanislucky)'s Twitter Profile Photo

Excited to introduce our new #ECCV2024 work on audio-visual egocentric (first-person) gaze anticipation.

We propose a Contrastive Spatial-Temporal Separable (CSTS) fusion approach, the first model for multimodal egocentric gaze modeling. [1/7]

Page: bolinlai.github.io/CSTS-EgoGazeAn…

Anh Thai (@ngailapdi)'s Twitter Profile Photo

#ECCV2024 Large 2D visual foundation models like DINOv2 and Stable Diffusion have shown impressive capabilities in understanding object semantics. How can we leverage these features for 3D object part segmentation?

Paper: arxiv.org/abs/2407.09648

Bolin Lai (@bryanislucky)'s Twitter Profile Photo

Our ECCV paper has been selected for an oral presentation! Thanks to all co-authors (Miao, Xiaoliang Dai, Lawrence Chen, Guan Pang, James Matthew Rehg) for your awesome contributions. Our dataset and code have been released.

Project: bolinlai.github.io/Lego_EgoActGen/
Code: github.com/BolinLai/LEGO

Leena Mathur (@lmathur_)'s Twitter Profile Photo

In a few weeks at #ECCV2024, we will have the 3rd edition of the Artificial Social Intelligence Workshop!

The workshop will take place on September 29 in Milan 🇮🇹, with an interactive hybrid option available as well.

sites.google.com/andrew.cmu.edu…

Leena Mathur (@lmathur_)'s Twitter Profile Photo

It has been a great experience working with the organizing team for this workshop! We welcome your participation at #ECCV2024! 🇮🇹

Fellow organizers: Evonne Ng, Fiona Ryan, Sangmin Lee, Paul Liang, Gül Varol, Shiry Ginosar, Hanbyul (Han) Joo, Justine Cassell, James Matthew Rehg, LP Morency

Leena Mathur (@lmathur_)'s Twitter Profile Photo

Our workshop will start in a few hours!
> #ECCV2024 9/29 AM workshop
> Suite 2, Allianz MiCo 🇮🇹
> Zoom info on our website (QR code below)

Looking forward to the discussion today and learning from our keynote speakers!

sites.google.com/andrew.cmu.edu…

James Matthew Rehg (@rehgjim)'s Twitter Profile Photo

Super excited to be in Milan for #ECCV2024. I have an opening for a Postdoc in my lab at UIUC, in the areas of egocentric computer vision and multimodal foundation models. Message me here or by email, or find me at our 3 posters (and an oral :-)

Bolin Lai (@bryanislucky)'s Twitter Profile Photo

Our paper was nominated as a Best Paper Finalist at #ECCV2024. I sincerely thank all co-authors. Our work was also covered by the Georgia Tech School of Interactive Computing. My advisor James Matthew Rehg will present it on Oct 2 at 1:30pm in the Oral 4B Session, and at 4:30pm at Poster #240.

Fiona Ryan (@fionakryan)'s Twitter Profile Photo

Introducing Gaze-LLE, a new model for gaze target estimation built on top of a frozen visual foundation model! Gaze-LLE achieves SOTA results on multiple benchmarks while learning minimal parameters, and shows strong generalization.

Paper: arxiv.org/abs/2412.09586

Max Xu (@maxxu05)'s Twitter Profile Photo

My paper "RelCon: Relative Contrastive Learning for a Motion Foundation Model for Wearable Data", from my Apple internship, has been accepted at #ICLR2025! 🎉 We introduce the first IMU foundation model, unlocking generalization across motion tasks. 🏃‍♀️📊

arxiv.org/abs/2411.18822

Bolin Lai (@bryanislucky)'s Twitter Profile Photo

📢 #CVPR2025 Introducing InstaManip, a novel multimodal autoregressive model for few-shot image editing.

🎯 InstaManip can learn a new image-editing operation from textual and visual guidance via in-context learning, and apply it to new query images. [1/8]

bolinlai.github.io/projects/Insta…

Leena Mathur (@lmathur_)'s Twitter Profile Photo

Excited to announce the Artificial Social Intelligence Workshop @ ICCV 2025 #ICCV2025

Join us in October to discuss the science of social intelligence and algorithms to advance socially intelligent AI! Discussion will focus on reasoning, multimodality, and embodiment.

Leena Mathur (@lmathur_)'s Twitter Profile Photo

Future AI systems interacting with humans will need to perform social reasoning that is grounded in behavioral cues and external knowledge.

We introduce Social Genome to study and advance this form of reasoning in models!

New paper w/ Marian Qian, Paul Liang, & LP Morency!