Pierre Fernandez (@pierrefdz)'s Twitter Profile
Pierre Fernandez

@pierrefdz

Researcher (Meta, FAIR Paris) • Working on AI, watermarking and data protection • ex. @Inria, @Polytechnique, @UnivParisSaclay (MVA)

ID: 1325092743073443840

Website: https://pierrefdz.github.io/ • Joined: 07-11-2020 15:08:36

164 Tweets

511 Followers

246 Following

TimDarcet (@timdarcet):

Want strong SSL, but not the complexity of DINOv2?

CAPI: Cluster and Predict Latent Patches for Improved Masked Image Modeling.
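
For intuition, here is a minimal, self-contained sketch of the "cluster latent patches, then predict their cluster assignments" idea that the title describes. The random features, codebook size, and masking ratio below are stand-in assumptions for illustration, not CAPI's actual training recipe.

```python
# Minimal sketch of "cluster latent patches, then predict their cluster assignments"
# for masked image modeling. Illustrative only -- all names and sizes are assumptions.
import torch
import torch.nn.functional as F

B, N, D = 8, 196, 768        # batch, patches per image, feature dim
NUM_CLUSTERS = 4096          # size of the latent "vocabulary" (assumed)

# 1) Teacher features for every patch (in practice these come from a teacher encoder).
teacher_feats = torch.randn(B, N, D)

# 2) Cluster the teacher patch features; a single k-means-style assignment against a
#    fixed codebook stands in for the online clustering a real method would use.
centroids = torch.randn(NUM_CLUSTERS, D)
dists = torch.cdist(teacher_feats.reshape(-1, D), centroids)    # (B*N, K)
targets = dists.argmin(dim=-1).reshape(B, N)                    # cluster id per patch

# 3) The student must predict the cluster id of the masked patches
#    (logits would come from the student encoder plus a linear head).
mask = torch.rand(B, N) < 0.6                                   # 60% of patches masked
student_logits = torch.randn(B, N, NUM_CLUSTERS, requires_grad=True)

loss = F.cross_entropy(student_logits[mask], targets[mask])
loss.backward()
print(f"masked-patch prediction loss: {loss.item():.3f}")
```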
Ayush Sekhari (on Job Market) (@ayush_sekhari):

Very excited to share our recent work on structural watermarking of Large Language Models; joint work with the incredible Adam Block (at Microsoft Research) and Sasha Rakhlin (MIT). Our method, named Gaussmarks, offers a straightforward approach to watermarking language models.
Quentin Garrido (@garridoq_):

The last paper of my PhD is finally out! Introducing
"Intuitive physics understanding emerges from self-supervised pretraining on natural videos"

We show that without any prior, V-JEPA --a self-supervised video model-- develops an understanding of intuitive physics!
Wassim (Wes) Bouaziz (@_vassim):

Want to know if an ML model was trained on your dataset with 1 API call? See you in conferences 🙌

Excited to share that our paper Data Taggants for image data was accepted at ICLR 2025 🎉
Our follow-up on audio data, was accepted at ICASSP 2025! 🎉
Check out the details below 👇
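
The tweet does not spell out the mechanism, so as a rough illustration of how a one-call, black-box "was this model trained on my data?" check can be framed, here is a generic test of whether a model's top-k predictions on secret key inputs hit planted target labels more often than chance. The key construction, k, and threshold are assumptions; this is not the Data Taggants protocol itself.

```python
# Generic ownership check: query the suspect model once on a batch of secret key
# inputs and test whether its top-k predictions hit the planted target labels more
# often than chance would allow. Illustrative framing only, not the paper's method.
from scipy.stats import binomtest

def ownership_test(topk_preds, planted_labels, num_classes, k=10, alpha=1e-3):
    """topk_preds[i]: the model's top-k labels for key input i (from one API call)."""
    hits = sum(label in preds for preds, label in zip(topk_preds, planted_labels))
    n = len(planted_labels)
    p_chance = k / num_classes        # top-k hit probability for an unrelated model
    pval = binomtest(hits, n, p_chance, alternative="greater").pvalue
    return hits, pval, pval < alpha

# Example: 100 keys, a 1000-class model, 37 top-10 hits -> overwhelming evidence.
print(ownership_test([[3]] * 37 + [[0]] * 63, [3] * 100, num_classes=1000))
```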
Pierre Chambon (@pierrechambon6):

Does your LLM truly comprehend the complexity of the code it generates? 🥰
 
Introducing our new non-saturated (for at least the coming week? 😉) benchmark:
 
✨BigO(Bench)✨ - Can LLMs Generate Code with Controlled Time and Space Complexity?
 
Check out the details below! 👇
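
As a rough illustration of what "controlled time complexity" means in practice, the snippet below times a candidate function on growing inputs and fits the growth exponent in log-log space. This is a generic sanity check under assumed input sizes, not BigO(Bench)'s actual evaluation framework.

```python
# Toy illustration of empirically checking a candidate's time complexity:
# time it on growing inputs and fit the growth exponent in log-log space.
import time
import numpy as np

def fitted_exponent(fn, sizes):
    times = []
    for n in sizes:
        data = list(range(n))
        t0 = time.perf_counter()
        fn(data)
        times.append(time.perf_counter() - t0)
    # Slope of log(time) vs log(n) approximates the polynomial degree of the runtime.
    slope, _ = np.polyfit(np.log(sizes), np.log(times), 1)
    return slope

quadratic = lambda xs: sum(x * y for x in xs for y in xs)   # an O(n^2) candidate
print(f"fitted exponent: {fitted_exponent(quadratic, [250, 500, 1000, 2000]):.2f}")
```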
Axel Darmouni (@adarmouni):

Complexity-related generation sets the bar high for code-related models

🧵 📖 Read of the day, season 3, day 27: « Big(O)Bench: Can LLMs generate code with controlled time and space complexity? », by Pierre Chambon, Baptiste Rozière, Benoît Sagot and Gabriel Synnaeve from AI at Meta

We know that
Krunoslav Lehman Pavasovic (@krunolehman):

1/ Happy to share my first accepted paper as a PhD student at Meta and École normale supérieure | PSL, which I will present at ICLR 2026:

📚 Our work proposes difFOCI, a novel rank-based objective for ✨better feature learning✨

In collab with David Lopez-Paz, Giulio Biroli and Levent Sagun!

Furong Huang (@furongh):

Gave my keynote today at the GenAI Watermarking Workshop @ #ICLR2025! 🎤

🌊 Introduced WAVES — our benchmark for stress-testing image watermark robustness.
🕵️‍♀️ First time revealing details of our NeurIPS 2024 comp, Erasing the Invisible — including which watermarks we used and
Kunhao Zheng @ ICLR 2025 (@kunhaoz):

🚨 Your RL only improves 𝗽𝗮𝘀𝘀@𝟭, not 𝗽𝗮𝘀𝘀@𝗸? 🚨

That’s not a bug — it’s a 𝗳𝗲𝗮𝘁𝘂𝗿𝗲 𝗼𝗳 𝘁𝗵𝗲 𝗼𝗯𝗷𝗲𝗰𝘁𝗶𝘃𝗲 you’re optimizing.

You get what you optimize for. If you want better pass@k, you need to optimize for pass@k at training time.

🧵 How?
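
For reference, the quantity in question: the standard unbiased pass@k estimator of Chen et al. (2021), computed from n sampled solutions of which c pass the tests. The thread's point is that a pass@1-style objective does not optimize this quantity; the snippet only shows what the metric measures, not the paper's training objective.

```python
# Standard unbiased pass@k estimator: given n sampled solutions of which c pass,
# estimate P(at least one of k samples passes) = 1 - C(n-c, k) / C(n, k).
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Computed stably as a running product instead of raw binomial coefficients."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# 200 samples per problem, 30 pass: pass@1 is only 15%, but pass@10 is much higher.
print(pass_at_k(200, 30, 1))   # ~0.15
print(pass_at_k(200, 30, 10))  # ~0.81
```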
m_ric (@aymericroucher):

We're launching Computer Use in smolagents! 🥳 -> As vision models become more capable, they can power complex agentic workflows. Especially Qwen-VL models, which support built-in grounding, i.e. the ability to locate any element in an image by its coordinates, thus to

Sven Gowal (@sgowal):

My team at Google DeepMind is hiring. If you are passionate about engineering and real-world impact, with experience in robust ML, provenance of synthetic media, or data trustworthiness, consider applying: job-boards.greenhouse.io/deepmind/jobs/…

Joan Serrà (@serrjoa):

New paper on evaluating watermarking algorithms for audio and music!

We uncover a number of robustness issues, and especially question the future of such algorithms under current and emerging neural codecs (including, but not limited to, generative ones).

Links below ⬇️
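
To make the shape of such an evaluation concrete, here is a toy, self-contained sketch: embed a spread-spectrum-style mark, round-trip the audio through a crude stand-in for a lossy codec, and watch the detection score drop. The watermark, the "codec", and the scoring are all toy assumptions for illustration, not the paper's algorithms or the neural codecs it studies.

```python
# Toy robustness test: embed a simple spread-spectrum watermark, push the audio
# through a crude stand-in for a lossy codec, and see how the detector degrades.
import numpy as np

rng = np.random.default_rng(0)
sr, seconds = 16000, 2
audio = 0.1 * rng.standard_normal(sr * seconds)       # stand-in for a music clip
key = rng.standard_normal(audio.shape)                 # secret pseudo-random carrier
marked = audio + 0.01 * key                            # spread-spectrum style embedding

def fake_codec(x, keep_fraction=0.05):
    """Crude lossy 'codec': keep only the largest-magnitude spectral coefficients."""
    spec = np.fft.rfft(x)
    cutoff = np.quantile(np.abs(spec), 1 - keep_fraction)
    spec[np.abs(spec) < cutoff] = 0.0
    return np.fft.irfft(spec, n=len(x))

def detection_score(x, key):
    """Normalized correlation with the secret carrier, scaled to a z-score."""
    return float(x @ key / (np.linalg.norm(x) * np.linalg.norm(key)) * np.sqrt(len(x)))

print("clean audio:        ", detection_score(audio, key))
print("watermarked:        ", detection_score(marked, key))
print("watermarked + codec:", detection_score(fake_codec(marked), key))
```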
Federico Baldassarre (@baldassarrefe):

DINOv2 meets text at #CVPR 2025! Why choose between high-quality DINO features and CLIP-style vision-language alignment? Pick both with dino.txt 🦖📖

We align frozen DINOv2 features with text captions, obtaining both image-level and patch-level alignment at a minimal cost. [1/N]
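
A minimal sketch of the general recipe this belongs to, assuming a CLIP-style contrastive loss on pooled features: the vision backbone stays frozen and only small projection heads are trained. The dimensions and temperature below are assumptions, and the paper's patch-level alignment is not shown.

```python
# Minimal sketch of aligning a *frozen* vision encoder with text via a CLIP-style
# contrastive loss. The encoders are random stand-ins; sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

B, D_IMG, D_TXT, D_SHARED = 32, 768, 512, 256

image_feats = torch.randn(B, D_IMG)   # frozen DINOv2-style pooled features
text_feats = torch.randn(B, D_TXT)    # caption embeddings from a text encoder

# Only small projection heads are trained; the vision backbone stays frozen.
img_proj = nn.Linear(D_IMG, D_SHARED)
txt_proj = nn.Linear(D_TXT, D_SHARED)

z_img = F.normalize(img_proj(image_feats), dim=-1)
z_txt = F.normalize(txt_proj(text_feats), dim=-1)

logits = z_img @ z_txt.t() / 0.07                 # cosine similarities / temperature
labels = torch.arange(B)                          # matching pairs sit on the diagonal
loss = (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2
loss.backward()
print(f"contrastive alignment loss: {loss.item():.3f}")
```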
Mathurin Videau (@mathuvu_):

We present an Autoregressive U-Net that incorporates tokenization inside the model, pooling raw bytes into words then word-groups. AU-Net focuses most of its compute on building latent vectors that correspond to larger units of meaning.
Joint work with Badr Youbi Idrissi 1/8
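
A tiny illustration of the "pool raw bytes into words" step described above, assuming whitespace-delimited words and mean pooling. This only illustrates the pooling idea; it is not the AU-Net architecture itself, which builds and uses this hierarchy inside an autoregressive U-Net.

```python
# Embed each UTF-8 byte, then mean-pool byte vectors within whitespace-delimited
# spans to get one latent vector per word. Purely illustrative.
import torch
import torch.nn as nn

text = "autoregressive U-Net pools bytes into words"
byte_ids = torch.tensor(list(text.encode("utf-8")))             # raw byte sequence
byte_emb = nn.Embedding(256, 64)(byte_ids)                      # one vector per byte

# A new word starts after each space (byte 0x20).
boundaries = [0] + [i + 1 for i, b in enumerate(byte_ids.tolist()) if b == 0x20]
boundaries.append(len(byte_ids))

word_vectors = torch.stack([
    byte_emb[start:end].mean(dim=0)                             # pool the span's bytes
    for start, end in zip(boundaries[:-1], boundaries[1:])
])
print(word_vectors.shape)   # (number of words, 64)
```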
Nikola Jovanović @ ICLR 🇸🇬 (@ni_jovanovic):

There's a lot of work now on LLM watermarking. But can we extend this to transformers trained for autoregressive image generation? Yes, but it's not straightforward 🧵(1/10)

Wassim (Wes) Bouaziz (@_vassim):

🚨New AI Security paper alert: Winter Soldier 🥶🚨
In our latest paper, we show:
- how to backdoor an LM _without_ training it on the backdoor behavior
- how to use that to detect whether a black-box LM has been trained on your protected data

Yes, indirect data poisoning is real and powerful!