Matteo Farina (@farinamatteoo)'s Twitter Profile
Matteo Farina

@farinamatteoo

🇮🇹 AI/ML PhD Student @ the University of Trento (@UniTrento) - Currently RI @ the Tübingen AI Center (@uni_tue) 🇩🇪

ID: 1669910181830877184

Link: http://farinamatteo.github.io · Joined: 17-06-2023 03:30:25

27 Tweets

125 Followers

149 Following

Nicola Strisciuglio (@nicstrisc)'s Twitter Profile Photo

I am happy about the collaboration with the group of Elisa Ricci, with a paper led by Davide Berasi (based on his brilliant MSc thesis) and co-authors Matteo Farina and Massimiliano Mancini! 📰 "Not Only Text: Exploring Compositionality of Visual Representations in Vision-Language Models"

Paolo Rota (@paolorotaphd)'s Twitter Profile Photo

🚨 CVPR main conference decisions are out. Unlucky? Do not despair! The MULA workshop deadline is March 12, 2025. If you want to experience Nashville and present your work, don't miss this opportunity! 🎸📅 #CVPR2025 More info here 👇🏻 mula-workshop.github.io

Matteo Farina (@farinamatteoo)'s Twitter Profile Photo

Definitely a bit late on this one, but 2/2 papers accepted at #CVPR2025 (one of which is a highlight ✨). I promised myself I'd post a thread about them the day I can get up as soon as my alarm rings... probably gonna take a while, stay tuned.

Vishaal Udandarao (@vishaal_urao)'s Twitter Profile Photo

🚀New Paper! arxiv.org/abs/2504.07086 Everyone’s celebrating rapid progress in math reasoning with RL/SFT. But how real is this progress? We re-evaluated recently released popular reasoning models—and found reported gains often vanish under rigorous testing!! 👀 🧵👇

Alessandro Conti (@altndrr)'s Twitter Profile Photo

What if we stopped treating image classification like a multiple-choice quiz... and just asked the model: "What's in this image?" Our paper on open-world classification with LMMs got into #ICCV2025! 🎉🌺 Let's talk failures, insights, and flipping mistakes 👇

Vishaal Udandarao (@vishaal_urao)'s Twitter Profile Photo

🚀New Paper arxiv.org/abs/2510.20860 We conduct a systematic data-centric study of speech-language pretraining to improve end-to-end spoken QA! 🎙️🤖 Using our data-centric insights, we pretrain a 3.8B SpeechLM (called SpeLangy) that outperforms models 3x larger! 🧵👇

Adhiraj Ghosh (@adhiraj_ghosh98)'s Twitter Profile Photo

🚨Current data curation produces static datasets and relies on model-based filters that induce many biases. Can we fix this? We propose ✨CABS✨, a flexible concept-aware online batch curation method that improves CLIP pretraining! arxiv.org/abs/2511.20643 🧵👇

Vishaal Udandarao (@vishaal_urao)'s Twitter Profile Photo

🚀New Paper arxiv.org/abs/2511.20643 Most data curation for contrastive VLMs is offline and concept-agnostic! We introduce concept-aware online batch sampling (CABS), enabling flexible and targeted curation and yielding ⬆️boosts over IID sampling (+7-9% on ImageNet and retrieval)! 🧵👇
