Paul Liang (@pliang279)'s Twitter Profile
Paul Liang

@pliang279

Assistant Professor MIT @medialab @MITEECS @nlp_mit || PhD from CMU @mldcmu @LTIatCMU || Foundations of multisensory AI to enhance the human experience.

ID: 630270019

Link: https://pliang279.github.io/ · Joined: 08-07-2012 14:46:16

3.3K Tweets

7.7K Followers

857 Following

MIT Media Lab (@medialab)'s Twitter Profile Photo

30+ years of Media Lab students, alumni, and postdocs at CHI 2025 in Yokohama! Photo courtesy of Professor Pattie Maes. #chi2025
Leena Mathur (@lmathur_)'s Twitter Profile Photo

Excited to announce the Artificial Social Intelligence Workshop @ ICCV 2025 #ICCV2025

Join us in October to discuss the science of social intelligence and algorithms to advance socially-intelligent AI! Discussion will focus on reasoning, multimodality, and embodiment.
Yizhong Wang (@yizhongwyz)'s Twitter Profile Photo

Thrilled to announce that I will be joining UT Austin Computer Science as an assistant professor in fall 2026!

I will continue working on language models, data challenges, learning paradigms, & AI for innovation. Looking forward to teaming up with new students & colleagues! 🤠🤘
Zanë (zbucinca@hci.social) (@zanabucinca)'s Twitter Profile Photo

Thrilled to share that I’ve successfully defended my PhD dissertation and I will be joining MIT as an Assistant Professor starting Fall 2026, with a shared appointment between Sloan and EECS! I will be recruiting PhD students this upcoming cycle. Consider applying to MIT EECS!

Ritwik Gupta 🇺🇦 (@ritwik_g)'s Twitter Profile Photo

I'm excited to share that I’ll be joining Univ. of Maryland as an Assistant Professor in Computer Science, where I’ll be launching the Resilient AI and Grounded Sensing Lab. The RAGS Lab will build AI that works in chaotic environments. If you would like to partner, please DM me!

Tanishq Mathew Abraham, Ph.D. (@iscienceluvr)'s Twitter Profile Photo

QoQ-Med: Building Multimodal Clinical Foundation Models with Domain-Aware GRPO Training

"we introduce QoQ-Med-7B/32B, the first open generalist clinical foundation model that jointly reasons across medical images, time-series signals, and text reports. QoQ-Med is trained with …"
dvd@dvd.chat (@ddvd233)'s Twitter Profile Photo

Thanks Tanishq Mathew Abraham, Ph.D. for posting about our recent work!

We're excited to introduce QoQ-Med, a multimodal medical foundation model that jointly reasons across medical images, videos, time series (ECG), and clinical texts. Beyond the model itself, we developed a novel training …
Leena Mathur (@lmathur_)'s Twitter Profile Photo

Future AI systems interacting with humans will need to perform social reasoning that is grounded in behavioral cues and external knowledge.

We introduce Social Genome to study and advance this form of reasoning in models!

New paper w/ Marian Qian, Paul Liang, & LP Morency!
Paul Liang (@pliang279)'s Twitter Profile Photo

I am very excited about David's (dvd@dvd.chat) line of work in developing generalist multimodal clinical foundation models. CLIMB (which will be presented at ICML 2025) github.com/DDVD233/climb is a large-scale benchmark comprising 4.51 million patient samples totaling 19.01 terabytes …

Paul Liang (@pliang279)'s Twitter Profile Photo

Lots of interest in AI reasoning, but most use cases involve structured inputs (text) with automatic and objective verifiers (e.g. coding, math). Leena Mathur's latest work takes an ambitious step towards social reasoning in AI, a task where inputs are highly multimodal (verbal and …

MIT Media Lab (@medialab)'s Twitter Profile Photo

Led by Prof. Paul Liang, the Multisensory Intelligence group at the MIT Media Lab studies the foundations of multisensory artificial intelligence to create human-AI symbiosis across scales and sensory mediums. The group's members draw upon their multidisciplinary backgrounds to …

Paul Liang (@pliang279)'s Twitter Profile Photo

Despite much progress in AI, the ability for AI to 'smell' like humans remains elusive. Smell AIs 🤖👃 can be used for allergen sensing (e.g., peanuts or gluten in food), hormone detection for health, safety & environmental monitoring, quality control in manufacturing, and more.

Megan Tjandrasuwita (@mmtjandrasuwita)'s Twitter Profile Photo

Most problems have clear-cut instructions: solve for x, find the next number, choose the right answer. Puzzlehunts don’t. They demand creativity and lateral thinking. We introduce PuzzleWorld: a new benchmark of puzzlehunt problems challenging models to think creatively.

elvis (@omarsar0)'s Twitter Profile Photo

This paper is impressive!

It introduces a clever way of keeping memory use constant regardless of task length.

Great use of RL for AI agents to efficiently use memory and reasoning.

Here are my full notes:
harsh (@harshsikka)'s Twitter Profile Photo

Incredibly excited to announce the release of MultiNet v0.2 - a major update to our comprehensive open-source benchmark suite for evaluating Multimodal Models on Action tasks.

Read on for several paper announcements, details on the evaluation harness and platform, and more!
Paul Liang (@pliang279)'s Twitter Profile Photo

check out our growing open-source contribution MultiNet v0.2 - a comprehensive open-source benchmark for training and evaluating multimodal vision-language-action models on agentic and embodied tasks. think multimodal robotics and AI agent platforms - but with all data …

Yutong Bai (@yutongbai1002)'s Twitter Profile Photo

What would a World Model look like if we start from a real embodied agent acting in the real world?

It has to have:
1) A real, physically grounded and complex action space—not just abstract control signals.
2) Diverse, real-life scenarios and activities.

Or in short: It has to …

Lily Chen (@lilyychenn)'s Twitter Profile Photo

Are we fact-checking medical claims the right way? 🩺🤔

Probably not. In our study, even experts struggled to verify Reddit health claims using end-to-end systems.

We show why—and argue fact-checking should be a dialogue, with patients in the loop

arxiv.org/abs/2506.20876

🧵1/