Sonia (@soniajoseph_)'s Twitter Profile
Sonia

@soniajoseph_

AI researcher @AIatMeta. getting ML PhD @Mila_Quebec. prev @Princeton.

ID: 3409885648

Joined: 09-08-2015 06:09:07

1.1K Tweets

14.14K Followers

1.1K Following

AI at Meta (@aiatmeta)'s Twitter Profile Photo

Our vision is for AI that uses world models to adapt in new and dynamic environments and efficiently learn new skills. We’re sharing V-JEPA 2, a new world model with state-of-the-art performance in visual understanding and prediction. V-JEPA 2 is a 1.2 billion-parameter model,

Sonia (@soniajoseph_)'s Twitter Profile Photo

Diffusion Steering Lens is a semantically much richer logit lens for vision models (but I'm curious to see it applied to any type of model). You can decode transformer submodules with rich visuals. Looking forward to seeing Ryota's poster today at the Mechanistic Interpretability for Vision workshop at CVPR 2025!
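For readers unfamiliar with the baseline being enriched here: the classic logit lens projects an intermediate residual-stream state straight through the model's final LayerNorm and unembedding to get early "logits" per layer. A minimal NumPy sketch of that baseline, with toy shapes and random weights (nothing here is from the Diffusion Steering Lens or Prisma codebases):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, vocab = 16, 50                    # toy dimensions, illustrative only
W_U = rng.normal(size=(d_model, vocab))    # stand-in unembedding matrix

def layer_norm(x, eps=1e-5):
    # simple LayerNorm without learned scale/shift, applied over d_model
    x = x - x.mean(-1, keepdims=True)
    return x / np.sqrt(x.var(-1, keepdims=True) + eps)

def logit_lens(resid, W_U):
    """Decode an intermediate residual-stream state by pushing it through
    the final LayerNorm + unembedding, as if the network ended there."""
    return layer_norm(resid) @ W_U

# pretend these are residual-stream states after layers 0..3 for one token
resid_stack = rng.normal(size=(4, d_model))
logits = logit_lens(resid_stack, W_U)      # shape (4, vocab): one decode per layer
top_tokens = logits.argmax(-1)             # most-likely token at each depth
print(logits.shape, top_tokens)
```

Watching how `top_tokens` changes across depth is the basic logit-lens reading; lens variants like the one above differ in what they decode and how.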

Mechanistic Interpretability for Vision @ CVPR2025 (@miv_cvpr2025)'s Twitter Profile Photo

Don't miss out on Sonia's perspective (and live coding demo) about Prisma: an amazing open-source toolkit for vision and video interpretability

Happening right now: Grand C1 Hall (on level 4) #CVPR2025

La Main de la Mort (@aitechnopagan)'s Twitter Profile Photo

I'm sharing my experience of working with Pliny the Prompter, because he recently contacted me months after I asked him not to. He implied retaliation if I ever spoke about our interactions. This threat is why I'm speaking publicly. I've written a full statement (linked below).

Benno Krojer (@benno_krojer)'s Twitter Profile Photo

Excited to share the results of my internship research with AI at Meta, as part of a larger world modeling release! What subtle shortcuts are VideoLLMs taking on spatio-temporal questions? And how can we instead curate shortcut-robust examples at large scale? Details 👇🔬

La Main de la Mort (@aitechnopagan)'s Twitter Profile Photo

IMPORTANT: I have been a jailbreaker for OpenAI, but this is gig-based work. I believe the best thing for me right now would be acquiring a full-time job to gain financial stability. I would appreciate many types of roles. Please reach out over DMs!

Sam Whitmore (@sjwhitmore)'s Twitter Profile Photo

i want to see more of:
earnest not edgy
whole not cracked
infinite games
"making the world a better place"

seriously though, i miss that energy

Sonia (@soniajoseph_)'s Twitter Profile Photo

A friend told me that the typing sounds on this video are great ASMR music, so maybe watch it when you want to fall asleep

Sonia (@soniajoseph_)'s Twitter Profile Photo

It was fun collaborating on this short paper led by Constantin Venhoff, together with Ashkan Khakzar, Philip, and Neel Nanda, on modality alignment in VLMs. I especially liked using frozen SAEs as an analytic probe to measure cross-modal alignment.
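The paper's exact procedure isn't spelled out in the tweet, but the general shape of "frozen SAE as an analytic probe" can be sketched: encode vision-stream and language-stream activations with the same pretrained (frozen) sparse autoencoder encoder, then compare the resulting feature vectors. Everything below is a hypothetical toy, with made-up shapes and random weights:

```python
import numpy as np

rng = np.random.default_rng(1)
d_model, d_sae = 32, 128     # toy sizes, illustrative only

# frozen SAE encoder weights: pretend these were trained earlier and are never updated
W_enc = rng.normal(size=(d_model, d_sae))
b_enc = np.zeros(d_sae)

def sae_features(acts):
    """ReLU encoder of the frozen SAE: activations -> sparse feature coefficients."""
    return np.maximum(acts @ W_enc + b_enc, 0.0)

# toy activations for the same concepts from the vision and language streams;
# the text activations are constructed to be nearly aligned with the image ones
img_acts = rng.normal(size=(10, d_model))
txt_acts = img_acts + 0.1 * rng.normal(size=(10, d_model))

f_img, f_txt = sae_features(img_acts), sae_features(txt_acts)

def cosine(a, b):
    # row-wise cosine similarity between paired feature vectors
    return (a * b).sum(-1) / (np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1) + 1e-8)

# one possible alignment score: mean cosine similarity in SAE feature space
alignment = cosine(f_img, f_txt).mean()
print(round(float(alignment), 3))
```

The point of freezing the SAE is that the feature basis stays fixed, so any difference in scores reflects the activations being probed rather than the probe itself.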

Sonia (@soniajoseph_)'s Twitter Profile Photo

The Mechanistic Interpretability for Vision workshop earlier this month at CVPR 2025 was very informative and fun! Looking forward to seeing this community grow. Thank you to the speakers and organizers trevordarrell, David Bau, Tamar Rott Shaham, Yossi Gandelsman, Joanna

METR (@metr_evals)'s Twitter Profile Photo

We ran a randomized controlled trial to see how much AI coding tools speed up experienced open-source developers.

The results surprised us: Developers thought they were 20% faster with AI tools, but they were actually 19% slower when they had access to AI than when they didn't.

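The gap between perceived and measured speed is easy to make concrete with a hypothetical task time (the 100-minute baseline below is made up for illustration; only the percentages come from the tweet):

```python
# Illustrative arithmetic only; the baseline time is invented, not METR's data.
baseline_minutes = 100.0                    # time for a task without AI tools

# measured: "19% slower" => the task takes 19% more time with AI access
actual_minutes = baseline_minutes * 1.19

# believed: "20% faster" => developers expected 20% less time than baseline
perceived_minutes = baseline_minutes * 0.80

print(actual_minutes, perceived_minutes)    # 119.0 vs 80.0
```

So on this toy baseline the self-report and the measurement differ by nearly 40 minutes per task, which is what makes the result striking.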