Minqian Liu (@minqian_liu)'s Twitter Profile
Minqian Liu

@minqian_liu

PhD student @VT_CS | Previous Research Intern at Microsoft and AWS AI | he/him

ID: 901640778115457024

Link: http://mqianliu.github.io · Joined: 27-08-2017 03:01:27

89 Tweets

446 Followers

675 Following

Kai Zhang (@drogokhal4)'s Twitter Profile Photo

Knowledge Conflict has been accepted at #ICLR24 as a #Spotlight! We added more results on 5 open-source and 3 closed-source LLMs and found that high receptiveness & confirmation bias exist widely! Check it out: openreview.net/forum?id=auKAU…

Barry Menglong Yao (@barry_yao0)'s Twitter Profile Photo

Our entity linking work has been accepted at #EACL2024. Check out our work: Ameli: Enhancing Multimodal Entity Linking with Fine-Grained Attributes (arxiv.org/pdf/2305.14725…). Congratulations to all collaborators! The dataset, code, and checkpoints will be released soon.

AK (@_akhaliq)'s Twitter Profile Photo

Vision-Flan: Scaling Human-Labeled Tasks in Visual Instruction Tuning

Despite vision-language models' (VLMs) remarkable capabilities as versatile visual assistants, two substantial challenges persist within the existing VLM frameworks: (1) lacking task diversity in pretraining

Ying Shen (@yingshen_ys)'s Twitter Profile Photo

🚀 Excited to introduce my internship work at Apple MLR: Many-to-many Image Generation with Auto-regressive Diffusion Models (arxiv.org/abs/2404.03109). Exploring the paradigm for domain-general multi-image to multi-image generation.

Xingyu Fu (@xingyufu2)'s Twitter Profile Photo

Can Text-to-Image models understand common sense? 🤔 Can they generate images that fit everyday common sense? 🤔 tldr; NO, they are far less intelligent than us 💁🏻‍♀️ Introducing Commonsense-T2I 💡 zeyofu.github.io/CommonsenseT2I/, a novel evaluation and benchmark designed to measure

Ruohao Guo (@guooctavia)'s Twitter Profile Photo

Ever wondered if style lexicons still play a role in the era of LLMs? 🤔 We tested 13 established and 63 novel language styles across different LLMs. 🧠✨ It turns out lexicons are still crucial for style understanding! But how can we better leverage this lexical knowledge? Our

Tu Vu (@tuvllms)'s Twitter Profile Photo

📢✨ I am recruiting 1-2 PhD students at Virginia Tech this cycle. If you are interested in efficient model development (including model merging, parameter-efficient fine-tuning & transfer learning), instruction tuning, advanced reasoning, LLMs-as-judges, etc., please apply!!

Minqian Liu (@minqian_liu)'s Twitter Profile Photo

🔥Thrilled to announce ReFocus, our latest work led by Xingyu Fu that teaches MLLMs to generate “visual thoughts” 🧠 via visual editing on tables and charts 📊 to improve reasoning. Huge thanks to all the amazing co-authors! 🙌

Xingyu Fu (@xingyufu2)'s Twitter Profile Photo

ReFocus 🔍: Visual Reasoning for Tables and Charts with Edits

Happy to share that ReFocus has been accepted at #ICML2025. We've open-sourced the code and training data: zeyofu.github.io/ReFocus/. ReFocus enables multimodal LMs to better reason over Tables and Charts with visual edits. It also provides

Yang Chen (@ychennlp)'s Twitter Profile Photo

📢 We conducted a systematic study to demystify the synergy between SFT and RL for reasoning models. The result? We trained a 7B model, AceReason-Nemotron-1.1, that significantly improves over version 1.0 on math and coding benchmarks. ✅ AIME2025 (math): 53.6% -> 64.8% ✅ LiveCodeBench
