
Paul Liang
@pliang279
Assistant Professor MIT @medialab @MITEECS @nlp_mit || PhD from CMU @mldcmu @LTIatCMU || Foundations of multisensory AI to enhance the human experience.
ID: 630270019
https://pliang279.github.io/
Joined: 08-07-2012 14:46:16
3.3K Tweets
7.7K Followers
857 Following



Thrilled to announce that I will be joining UT Austin Computer Science as an assistant professor in fall 2026! I will continue working on language models, data challenges, learning paradigms, & AI for innovation. Looking forward to teaming up with new students & colleagues! 🤠🤘




I'm excited to share that I'll be joining the University of Maryland as an Assistant Professor in Computer Science, where I'll be launching the Resilient AI and Grounded Sensing Lab. The RAGS Lab will build AI that works in chaotic environments. If you would like to partner, please DM me!


Thanks to Tanishq Mathew Abraham, Ph.D., for posting about our recent work! We're excited to introduce QoQ-Med, a multimodal medical foundation model that jointly reasons across medical images, videos, time series (ECG), and clinical text. Beyond the model itself, we developed a novel training


Future AI systems interacting with humans will need to perform social reasoning that is grounded in behavioral cues and external knowledge. We introduce Social Genome to study and advance this form of reasoning in models! New paper w/ Marian Qian, Paul Liang, & LP Morency!


I am very excited about David's line of work in developing generalist multimodal clinical foundation models. CLIMB (which will be presented at ICML 2025) github.com/DDVD233/climb is a large-scale benchmark comprising 4.51 million patient samples totaling 19.01 terabytes

Lots of interest in AI reasoning, but most use cases involve structured inputs (text) with automatic and objective verifiers (e.g. coding, math). Leena Mathur's latest work takes an ambitious step towards social reasoning in AI, a task where inputs are highly multimodal (verbal and

Led by Prof. Paul Liang, the Multisensory Intelligence group at the MIT Media Lab studies the foundations of multisensory artificial intelligence to create human-AI symbiosis across scales and sensory mediums. The group’s members draw upon their multidisciplinary backgrounds to






