Mark Ibrahim (@marksibrahim)'s Twitter Profile
Mark Ibrahim

@marksibrahim

Researching the dark arts of deep learning at Meta's FAIR (Fundamental AI Research) Lab

ID: 982708309

http://markibrahim.me/ · Joined 01-12-2012 14:54:11

52 Tweets

187 Followers

1.1K Following

Mohammad Pezeshki (@mpezeshki91)

🔗 Visit our GitHub repository for further insights: github.com/facebookresear… Special thanks to my amazing collaborators: Diane Bouchacourt, Mark Ibrahim, Nicolas Ballas, Pascal Vincent, and David Lopez-Paz

Vlad Sobal (@vlad_is_ai)

Representation learning is often done by considering samples to be either identical (same class, positive pairs) or not, with no middle ground. We propose 𝕏-CLR to learn from soft inter-sample relationships, and get better accuracy & improved robustness.
arxiv.org/abs/2407.18134
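
To make the "soft inter-sample relationships" idea concrete: instead of a one-hot positive/negative target per sample, the contrastive targets form a soft similarity distribution over the batch. The sketch below is an illustrative PyTorch rendering of that general idea, not the paper's actual 𝕏-CLR code; the function name, temperature, and the random target graph are stand-ins for details given in arxiv.org/abs/2407.18134.

```python
# Illustrative sketch of a soft ("X-sample") contrastive loss, assuming PyTorch.
# Not the paper's implementation: names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def soft_contrastive_loss(embeddings, target_sim, temperature=0.1):
    """Cross-entropy between the model's similarity distribution over the batch
    and a soft target distribution, instead of one-hot positive/negative labels."""
    z = F.normalize(embeddings, dim=-1)
    eye = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    logits = z @ z.T / temperature                       # model similarities
    logits = logits.masked_fill(eye, -1e9)               # ignore self-similarity
    target = target_sim.masked_fill(eye, 0.0)
    target = target / target.sum(dim=-1, keepdim=True)   # rows sum to 1
    log_p = F.log_softmax(logits, dim=-1)
    return -(target * log_p).sum(dim=-1).mean()

# Example: a batch of 8 embeddings with a symmetric soft similarity graph
# (in the paper's setting, such a graph could come from e.g. caption similarity).
emb = torch.randn(8, 128, requires_grad=True)
sim = torch.rand(8, 8)
sim = (sim + sim.T) / 2
loss = soft_contrastive_loss(emb, sim)
loss.backward()
```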
Yann LeCun (@ylecun)

A soft similarity graph improves contrastive learning for image recognition. By Vlad Sobal and a cast of characters from Meta FAIR, NYU, and Brown.

AK (@_akhaliq)

Meta announces UniBench

Visual Reasoning Requires Rethinking Vision-Language Beyond Scaling

discuss: huggingface.co/papers/2408.04…

Significant research efforts have been made to scale and improve vision-language model (VLM) training approaches. Yet, with an ever-growing number of…
AI at Meta (@aiatmeta)

New research from Meta FAIR: UniBench is a unified implementation of 50+ VLM benchmarks spanning a comprehensive range of carefully categorized capabilities, from object recognition to spatial awareness, counting, and much more.

Research paper ➡️ go.fb.me/fa97z9
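
The "unified implementation" framing amounts to putting many heterogeneous benchmarks behind one evaluation interface, each tagged by the capability it probes. The sketch below is purely illustrative of that pattern and is NOT the real UniBench API (see the linked repo and paper for that); every name and number here is hypothetical.

```python
# Illustrative sketch only: NOT the real UniBench API. Shows the general idea of
# wrapping many VLM benchmarks behind one interface, tagged by capability.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Benchmark:
    name: str
    capability: str                      # e.g. "object recognition", "counting"
    evaluate: Callable[[object], float]  # model -> accuracy in [0, 1]

def run_suite(model, benchmarks: List[Benchmark]) -> Dict[str, float]:
    """Run every benchmark and report mean accuracy per capability."""
    per_capability: Dict[str, List[float]] = {}
    for bench in benchmarks:
        score = bench.evaluate(model)
        per_capability.setdefault(bench.capability, []).append(score)
    return {cap: sum(s) / len(s) for cap, s in per_capability.items()}

# Hypothetical usage with stand-in evaluators and made-up scores:
suite = [
    Benchmark("imagenet-style", "object recognition", lambda m: 0.72),
    Benchmark("count-objects", "counting", lambda m: 0.31),
    Benchmark("spatial-relations", "spatial awareness", lambda m: 0.45),
]
print(run_suite(model=None, benchmarks=suite))
```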
Haider Al-Tahan (@haidertahan)

🚀 Excited to share our work at #NeurIPS2024! We show how billion-parameter VLMs lose to a two-layer MLP on MNIST.

Come by our poster presentation at West Ballroom A-D #5211, today from 4:30–7:30 PM PST.

A 🧵:
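
For context on the baseline being referenced: a two-layer MLP on MNIST is a few lines of standard PyTorch/torchvision. The sketch below is an assumed, illustrative version of such a baseline; layer sizes and hyperparameters are our choices, not necessarily the paper's.

```python
# Minimal two-layer MLP baseline on MNIST (illustrative, assuming PyTorch/torchvision).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

mlp = nn.Sequential(          # two layers: 784 -> 256 hidden -> 10 classes
    nn.Flatten(),
    nn.Linear(28 * 28, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

train = datasets.MNIST("data", train=True, download=True, transform=transforms.ToTensor())
loader = DataLoader(train, batch_size=128, shuffle=True)
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)

for epoch in range(3):        # a few epochs typically reach well over 90% accuracy
    for images, labels in loader:
        loss = nn.functional.cross_entropy(mlp(images), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
```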
Quentin Garrido (@garridoq_)

The last paper of my PhD is finally out! Introducing
"Intuitive physics understanding emerges from self-supervised pretraining on natural videos"

We show that, without any prior, V-JEPA, a self-supervised video model, develops an understanding of intuitive physics!
Mark Ibrahim (@marksibrahim)

Join us as a PhD research intern at FAIR with Polina Kirichenko and Kamalika Chaudhuri, starting this summer or fall, with a focus on open science into multimodal models, agents, and beyond! Email [email protected] with the title [Prospective Intern 2025] and attach your CV if interested!

Polina Kirichenko (@polkirichenko)

Join us at #CVPR2025 Demographic Diversity in Computer Vision workshop tomorrow!
📅 Wednesday, June 11, 9am-6pm
📍 room 213 (main session) + Hall D (poster sessions), the Music City Center
We have an amazing lineup of speakers and panelists! Can't wait to meet you all there :)