Wenqi Jia (@wenqi_jia)'s Twitter Profile
Wenqi Jia

@wenqi_jia

TYC➡️HK➡️ATL /
Computer Vision @ GeorgiaTech /
🇨🇳🇭🇰🇲🇴🇺🇸🇰🇭🇸🇬🇲🇾🇦🇺🇫🇷🇱🇺🇩🇰🇳🇴🇮🇸🇸🇪🇫🇮🇵🇱🇩🇪🇧🇪🇪🇸🇮🇱🇶🇦🇨🇦

ID: 705797226023383040

Link: https://vjwq.github.io/
Joined: 04-03-2016 16:49:06

31 Tweets

119 Followers

264 Following

Bolin Lai (@bryanislucky)'s Twitter Profile Photo

Our paper was awarded the Best Student Paper Prize at BMVC 2022🎉 Thanks to my advisor James Matthew Rehg and all co-authors, Miao and Fiona Ryan. We have now released our data, code, and pretrained weights on GitHub (github.com/BolinLai/GLC), as well as a video demo on the project page.

The Academy (@theacademy)'s Twitter Profile Photo

Of all the universes, we live in the one where Michelle Yeoh makes history as the first Asian woman to win the Best Actress Oscar—love that for us! #Oscars #Oscars95

Fiona Ryan (@fionakryan)'s Twitter Profile Photo

Introducing our #CVPR2023 paper Egocentric Auditory Attention Localization in Conversations arxiv.org/abs/2303.16024 w/ Hao Jiang, Abhinav Shukla, James Matthew Rehg, and Vamsi Ithapu.

We introduce the task of localizing selective auditory attention targets from egocentric video & multichannel audio.
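
For readers unfamiliar with the task setup, here is a toy PyTorch baseline for it: score K candidate speakers from per-candidate visual features plus an encoding of the multichannel audio. The architecture is purely an assumption for illustration, not the paper's model:

```python
import torch
import torch.nn as nn

class AuditoryAttentionLocalizer(nn.Module):
    """Toy baseline for the task in the tweet: given egocentric visual
    features for candidate speakers and multichannel audio, score which
    speaker the camera wearer is attending to. Illustrative only."""
    def __init__(self, vis_dim=512, aud_channels=4, aud_dim=128, hidden=256):
        super().__init__()
        self.aud_enc = nn.Sequential(  # encode a multichannel spectrogram
            nn.Conv2d(aud_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, aud_dim),
        )
        self.scorer = nn.Sequential(   # fuse per-candidate visual + audio cues
            nn.Linear(vis_dim + aud_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1),
        )

    def forward(self, cand_feats, audio):
        # cand_feats: (B, K, vis_dim) features for K candidate speakers
        # audio:      (B, C, F, T) multichannel spectrogram
        a = self.aud_enc(audio)                                  # (B, aud_dim)
        a = a.unsqueeze(1).expand(-1, cand_feats.size(1), -1)    # (B, K, aud_dim)
        logits = self.scorer(torch.cat([cand_feats, a], dim=-1)).squeeze(-1)
        return logits  # (B, K): attended-speaker scores
```
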
James Matthew Rehg (@rehgjim)'s Twitter Profile Photo

Delighted to share an overview of my lab's eight #cvpr 2024 papers. Thanks to my amazing postdoc Sangmin Lee for spearheading the effort, and to our invaluable collaborators AI at Meta, Reality Labs at Meta, Stability AI, and Toyota USA. See you in Seattle! @IllinoisCS The Grainger College of Engineering

Max Xu (@maxxu05)'s Twitter Profile Photo

#ICLR2024 How can we choose meaningful positive pairs for time-series contrastive learning? What about motif similarity?

REBAR uses a learned measure that captures motif similarity and achieves SOTA performance.

Arxiv: arxiv.org/abs/2311.00519
Github: github.com/maxxu05/rebar
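
The tweet names the core mechanism (a learned motif-similarity measure that selects positive pairs) without showing it. Below is a minimal PyTorch sketch of that select-then-contrast pattern, where `encoder` and `sim_model` are assumed stand-in callables rather than the released REBAR code:

```python
import torch
import torch.nn.functional as F

def contrastive_loss_with_learned_positives(encoder, sim_model, anchor,
                                            candidates, temperature=0.1):
    """InfoNCE-style loss where each anchor's positive is the candidate
    window rated most similar by a learned measure (sketch of the idea).

    anchor:     (B, C, T) anchor windows
    candidates: (B, K, C, T) candidate windows from the same recording
    sim_model:  callable (anchor, candidate) -> (B,) similarity scores
    """
    B, K = candidates.shape[:2]
    with torch.no_grad():
        # Rate every candidate with the learned similarity measure;
        # the highest-scoring one becomes the positive.
        scores = torch.stack(
            [sim_model(anchor, candidates[:, k]) for k in range(K)], dim=1
        )                                   # (B, K)
        pos_idx = scores.argmax(dim=1)      # (B,)

    z_a = F.normalize(encoder(anchor), dim=-1)                    # (B, D)
    z_c = F.normalize(encoder(candidates.flatten(0, 1)), dim=-1)  # (B*K, D)
    z_c = z_c.view(B, K, -1)

    logits = torch.einsum("bd,bkd->bk", z_a, z_c) / temperature   # (B, K)
    # The selected candidate is the positive; the rest act as negatives.
    return F.cross_entropy(logits, pos_idx)
```
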
Wenqi Jia (@wenqi_jia)'s Twitter Profile Photo

Happy to be back in Seattle for #CVPR2024! I will be presenting my av-conv paper tomorrow at the EgoVis Workshop (10.20-11.20, Summit 428) and the Sight and Sound Workshop (13.00-14.00, Summit 326). Feel free to drop by my poster/talk. See you there!

Bolin Lai (@bryanislucky)'s Twitter Profile Photo

While learning new skills, have you ever grown tired of reading verbose manuals or been annoyed by unclear instructions? Check out our #ECCV2024 work on generating egocentric (first-person) visual guidance tailored to the user's situation! [1/7] Page: bolinlai.github.io/Lego_EgoActGen/

Bolin Lai (@bryanislucky)'s Twitter Profile Photo

Excited to introduce our new #ECCV2024 work on audio-visual egocentric (first-person) gaze anticipation.

We propose the Contrastive Spatial-Temporal Separable (CSTS) fusion approach, the first model for multi-modal egocentric gaze modeling. [1/7]

Page: bolinlai.github.io/CSTS-EgoGazeAn…
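
As a rough illustration of what "spatial-temporal separable" fusion can mean, here is a minimal PyTorch sketch in which audio interacts with visual tokens along the spatial axis and the temporal axis in two separate attention steps. Shapes and layer choices are assumptions for illustration, not the authors' CSTS implementation:

```python
import torch
import torch.nn as nn

class SeparableAVFusion(nn.Module):
    """Illustrative spatial-temporal separable audio-visual fusion: audio
    is fused with visual tokens per frame (spatial step), then tokens
    attend across time per location (temporal step). Not the CSTS code."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, vis, aud):
        # vis: (B, T, H*W, D) visual tokens; aud: (B, T, D) per-frame audio.
        B, T, S, D = vis.shape
        # Spatial step: within each frame, visual tokens attend to audio.
        v = vis.reshape(B * T, S, D)
        a = aud.reshape(B * T, 1, D)
        v_sp, _ = self.spatial_attn(v, a, a)
        v = (v + v_sp).reshape(B, T, S, D)
        # Temporal step: at each spatial location, attend across time.
        v_t = v.permute(0, 2, 1, 3).reshape(B * S, T, D)
        v_tp, _ = self.temporal_attn(v_t, v_t, v_t)
        v = (v_t + v_tp).reshape(B, S, T, D).permute(0, 2, 1, 3)
        return v  # (B, T, H*W, D) fused features for gaze anticipation
```
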
Anh Thai (@ngailapdi)'s Twitter Profile Photo

#ECCV2024 Large 2D visual foundation models like DINOv2 and Stable Diffusion have shown impressive capabilities in understanding object semantics. How can we leverage these features for 3D object part segmentation?

Paper: arxiv.org/abs/2407.09648
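
A common recipe for reusing 2D foundation features in 3D is to project the shape into multiple views, extract per-pixel features with a frozen 2D model, and aggregate them back onto the points. The sketch below illustrates that recipe; the `views` structure and `feat_extractor` interface are illustrative assumptions, not the paper's method:

```python
import torch

def lift_2d_features_to_points(points, views, feat_extractor):
    """Average frozen 2D features (e.g., DINOv2) over the views in which
    each 3D point is visible. Interfaces here are assumed stand-ins."""
    N = points.shape[0]
    D = feat_extractor.feature_dim
    acc = torch.zeros(N, D)
    cnt = torch.zeros(N, 1)
    for view in views:
        feats = feat_extractor(view["image"])     # (H, W, D) per-pixel features
        uv, visible = view["project"](points)     # pixel coords + visibility mask
        u, v = uv[visible, 0].long(), uv[visible, 1].long()
        acc[visible] += feats[v, u]               # gather features at projections
        cnt[visible] += 1
    point_feats = acc / cnt.clamp(min=1)          # (N, D) view-averaged features
    # Cluster or classify point_feats to obtain part segments.
    return point_feats
```
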
Max Xu (@maxxu05)'s Twitter Profile Photo

My paper RelCon: Relative Contrastive Learning for a Motion Foundation Model for Wearable Data, from my Apple internship, has been accepted at #ICLR2025! 🎉 We introduce the first IMU foundation model, unlocking generalization across motion tasks. 🏃‍♀️📊 arxiv.org/abs/2411.18822

Bolin Lai (@bryanislucky)'s Twitter Profile Photo

📢#CVPR2025 Introducing InstaManip, a novel multimodal autoregressive model for few-shot image editing. 🎯InstaManip can learn a new image editing operation from textual and visual guidance via in-context learning, and apply it to new query images. [1/8] bolinlai.github.io/projects/Insta…
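
To make the in-context-learning framing concrete, here is a hypothetical sketch of how exemplar (before, after) image pairs and a text instruction could be packed into one autoregressive token sequence; the `tokenizer` interface is an assumed stand-in, not InstaManip's actual API:

```python
import torch

def build_incontext_sequence(tokenizer, exemplars, instruction, query_image):
    """Few-shot in-context conditioning for an autoregressive image editor:
    the model sees the instruction, then (before, after) exemplar pairs,
    then the query image, and decodes the edited query as a continuation.
    The tokenizer methods below are illustrative assumptions."""
    seq = [tokenizer.text(instruction)]            # textual guidance
    for src, edited in exemplars:                  # visual guidance
        seq += [tokenizer.image(src), tokenizer.image(edited)]
    seq += [tokenizer.image(query_image)]          # new query image
    return torch.cat(seq, dim=0)  # model autoregressively decodes the edit
```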

Yiming Dou (@_yimingdou)'s Twitter Profile Photo

Ever wondered how a scene sounds👂 when you interact👋 with it? Introducing our #CVPR2025 work "Hearing Hands: Generating Sounds from Physical Interactions in 3D Scenes" -- we make 3D scene reconstructions audibly interactive! yimingdou.com/hearing_hands/