Xingyu Bruce Liu (@liu_xingyu) 's Twitter Profile
Xingyu Bruce Liu

@liu_xingyu

PhD student @UCLAengineering; Previously @Google @Meta @Snap @cmuhcii @UTokyo_News_en. Research #HCI, #AI, #AR, #a11y. Also make music 🎸

ID: 1186171916090068992

Link: https://liubruce.me · Joined: 21-10-2019 06:46:53

44 Tweets

380 Followers

362 Following

Ruofei Du (@duruofei) 's Twitter Profile Photo

What if #AI could assist you w/ relevant realtime visuals in #AR during conversations? Check out Project #VisualCaptions: duruofei.com/projects/augme… Kudos to Xingyu Bruce Liu, Vlad, Xiuxiu, Alex Olwal, Xiang 'Anthony' Chen, and Peggy Chi for the wonderful teamwork! ACM SIGCHI #CHI2023 #GPT3 #LLM

Ruofei Du (@duruofei) 's Twitter Profile Photo

Are you tired of jittery captions in Google Meet, Zoom, Microsoft Teams, #FaceTime, Skype? Check out our latest research in text stability in live captions at ACM SIGCHI #CHI2023! #TextStability paper available at duruofei.com/projects/augme… #accessibility #a11y #HCI #UX #AR

Ruofei Du (@duruofei) 's Twitter Profile Photo

#VisualCaptions CHI'23 is #opensource at github.com/google/archat! Feel free to star+fork! A big kudos to Xingyu Bruce Liu and the #ARChat + #Rapsai teams for an amazing year of dedication and hard work! You may need to apply for API keys to run, but we verified it :) #AR ACM CHI Conference ACM UIST

Google AI (@googleai) 's Twitter Profile Photo

What if #AI could show context-relevant visuals in online meetings? #VisualCaptions in #ARChat is an interactive system to augment communication w/ interactive visuals suggested by #LLM + a search →goo.gle/42yYgBi Fork & star our code & dataset →github.com/google/archat

UCLA Samueli Engineering (@uclaengineering) 's Twitter Profile Photo

Congrats to our 11 UCLA doctoral students from computer science UCLA CS Department, bioengineering UCLA Bioengineering, and electrical and computer engineering UCLA Electrical and Computer Engineering who have been awarded Amazon fellowships by Amazon Science to conduct #AI research. sciencehub.ucla.edu/2023-amazon-fe…

Ruofei Du (@duruofei) 's Twitter Profile Photo

Hearty congratulations to Xun, Xingyu Bruce Liu, Siyou Pei (on job market), and the team on the #CHI2024 papers in #XR & #AI! Can't wait to finalize the revision and share the latest invention with the world! Thanks Purdue Mechanical Engineering UCLA Computer Science UCLA Electrical and Computer Engineering for the AMAZING collaboration with Google AI!

Xingyu Bruce Liu (@liu_xingyu) 's Twitter Profile Photo

🤔 Ever missed an important call in a noisy place? Our recent work Human I/O introduces a unified approach that rethinks Situational Impairments as the availability of human input/output channels, e.g. hands 👋, vision 👀, hearing 👂, and speaking 👄. Preprint: arxiv.org/pdf/2403.04008

Jiahao ā€œNickā€ Li (@_jiahaoli) 's Twitter Profile Photo

💡 Ever missed a breathtaking view while driving? Tired of juggling apps to perform digital actions? #CHI2024 Meet OmniActions: an egocentric, multimodal AI agent in #PervasiveAR that predicts your next move during real-world interaction. Preprint: arxiv.org/abs/2405.03901

Google AI (@googleai) 's Twitter Profile Photo

Human I/O is a unified approach that uses egocentric vision, multimodal sensing, and LLM reasoning to detect situational impairments and assess a user’s ability to interact with their hands, vision, hearing or speech in a given situation. Learn more at: goo.gle/3KJEP1T

Ruofei Du (@duruofei) 's Twitter Profile Photo

Check out our latest blog post on #HumanIO, which introduces a unified approach that rethinks Situational Impairments as the availability of human input/output channels, e.g. hands 👋, vision 👀, hearing 👂, and speaking 👄. #AR #accessibility #LLM #a11y #XR #metaverse