Monika Wysoczańska (@mkwysoczanska)'s Twitter Profile
Monika Wysoczańska

@mkwysoczanska

PhD Student in Multimodal Learning at @WUT_edu

ID: 1597623426327478276

Link: http://wysoczanska.github.io · Joined: 29-11-2022 16:08:23

65 Tweets

232 Followers

133 Following

valeo.ai (@valeoai)'s Twitter Profile Photo

📢We introduce the ScaLR models (code+checkpoints) for LiDAR perception distilled from vision foundation models
tl;dr: don’t neglect the choice of teacher, student, and pretraining datasets -> their impact is probably more important than the distillation method #CVPR2024
🧵
[1/8]
The Nobel Prize (@nobelprize)'s Twitter Profile Photo

Today we remember one of the world's greatest scientists: Marie Skłodowska Curie, who died #OTD in 1934. Curie was the first person to be awarded the #NobelPrize twice and is still the only person to receive the prize in two different scientific fields.

Dmytro Mishkin 🇺🇦 (@ducha_aiki)'s Twitter Profile Photo

No Train, all Gain: Self-Supervised Gradients Improve Deep Frozen Representations
Walter Simoncini, Spyros Gidaris, Andrei Bursuc, Yuki M. Asano
tl;dr: gradients from an augmentation-supervised objective at test time, concatenated with your features, improve your descriptor
arxiv.org/abs/2407.10964
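The recipe in the tweet above (compute a self-supervised gradient at test time and concatenate it with the frozen features) can be sketched in toy form. Everything below is illustrative: the linear projection head, the simple augmentation-consistency loss, and the normalization are stand-ins for the paper's actual objective and architecture.

```python
import numpy as np

def ssl_gradient_descriptor(feat_a, feat_b, W):
    """Toy test-time self-supervised gradient feature.

    feat_a, feat_b: frozen-backbone features of two augmented views, shape (D,).
    W: weights of a stand-in linear projection head, shape (K, D).
    Loss (illustrative): L = ||W a - W b||^2, an augmentation-consistency term.
    Returns the base feature concatenated with the flattened loss gradient.
    """
    diff = W @ feat_a - W @ feat_b                     # (K,) projected view gap
    grad_W = 2.0 * np.outer(diff, feat_a - feat_b)     # analytic dL/dW, shape (K, D)
    grad_vec = grad_W.ravel()
    grad_vec = grad_vec / (np.linalg.norm(grad_vec) + 1e-8)  # normalize gradient part
    return np.concatenate([feat_a, grad_vec])          # feature ++ gradient

rng = np.random.default_rng(0)
D, K = 16, 4
W = rng.standard_normal((K, D))
a = rng.standard_normal(D)
b = a + 0.1 * rng.standard_normal(D)   # a second, slightly perturbed "view"
desc = ssl_gradient_descriptor(a, b, W)
print(desc.shape)  # (80,) -> D + K*D dimensions
```

The enlarged descriptor can then be used wherever the original frozen feature was (retrieval, k-NN probing), which is the setting the tweet summarizes.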
David Picard (@david_picard)'s Twitter Profile Photo

Let me improve the conference decision process very easily: discussion stops at 8pm, reviewers can update their scores until 9pm, average scores are computed at 9:01, thresholded at 9:02, decisions are out at 9:03. Everybody wins 2 months, for about the same results.
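The pipeline proposed in the tweet (average the review scores, then threshold) is simple enough to sketch. The paper IDs, scores, and threshold value below are made up for illustration.

```python
# Toy sketch of the "average at 9:01, threshold at 9:02" decision step.
reviews = {
    "paper_A": [6, 7, 5],   # hypothetical reviewer scores
    "paper_B": [3, 4, 4],
}
THRESHOLD = 5.0  # hypothetical acceptance bar

decisions = {
    pid: ("accept" if sum(scores) / len(scores) >= THRESHOLD else "reject")
    for pid, scores in reviews.items()
}
print(decisions)  # {'paper_A': 'accept', 'paper_B': 'reject'}
```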
Andrei Bursuc (@abursuc)'s Twitter Profile Photo

Thank you all for stopping by our poster. It was really fun!
Check out our CLIP-DINOiser code and paper here wysoczanska.github.io/CLIP_DINOiser/
#eccv2024
This was such a fun collaboration with Monika Wysoczańska, Oriane Siméoni, Michaël Ramamonjisoa, Tomasz Trzcinski, and Patrick Pérez
Fabio Cermelli (@fcdl94)'s Twitter Profile Photo

On my way back to the office, time to recap. What’s the most interesting idea you found at #ECCV2024? For me it was CLIP-DINOiser (ecva.net/papers/eccv_20…) by Monika Wysoczańska, Oriane Siméoni, Andrei Bursuc, and colleagues. Cheap training, standing on previous networks, nice results!

Andrei Bursuc (@abursuc)'s Twitter Profile Photo

🚨Hear! Hear! We have a few MSc research internship openings at valeo.ai for 2025 on computer vision & machine learning (yeah AI). You can find the openings in the link below along with the achievements of our amazing previous interns: valeoai.github.io/interns/ Join us!

David Picard (@david_picard)'s Twitter Profile Photo

🍏New preprint alert! PoM: Efficient Image and Video Generation with the Polynomial Mixer arxiv.org/abs/2411.12663 This is my latest "summer project", but it was so big I had to call in reinforcements (tx Nicolas DUFOUR) TL;DR Transformers are for boomers, welcome to the future 🧵👇

Nicolas DUFOUR (@nico_dufour)'s Twitter Profile Photo

🌍 Guessing where an image was taken is a hard and often ambiguous problem. Introducing diffusion-based geolocation: we predict global locations by refining random guesses into trajectories across the Earth's surface! 🗺️ Paper, code, and demo: nicolas-dufour.github.io/plonk
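The idea in the tweet above (refine a random guess into a trajectory over the Earth's surface) can be illustrated with a toy reverse process. The `denoiser` callable, the step schedule, and the stand-in prediction below are all hypothetical; the actual method trains a diffusion model over locations.

```python
import numpy as np

def refine_guess(denoiser, steps=10, rng=None):
    """Toy diffusion-style geolocation refinement.

    Starts from a uniformly random (lat, lon) guess and repeatedly moves
    toward the denoiser's current prediction of the clean location,
    recording the whole trajectory across the globe.
    """
    rng = rng or np.random.default_rng(0)
    # Random initial guess: latitude in [-90, 90], longitude in [-180, 180].
    x = rng.uniform([-90.0, -180.0], [90.0, 180.0])
    trajectory = [x.copy()]
    for t in range(steps, 0, -1):
        x0_hat = denoiser(x, t)        # predicted clean location at step t
        x = x + (x0_hat - x) / t       # one reverse step toward the prediction
        trajectory.append(x.copy())
    return np.array(trajectory)

# Stand-in "denoiser" that always predicts Paris, for illustration only.
paris = np.array([48.8566, 2.3522])
traj = refine_guess(lambda x, t: paris)
print(np.allclose(traj[-1], paris))  # True: the trajectory ends at the prediction
```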

World of Statistics (@stats_feed)'s Twitter Profile Photo

🏴󠁧󠁢󠁥󠁮󠁧󠁿 ENGLISH: Eat, eats, ate, eaten, eating 🇵🇱 POLISH: Jeść, zjeść, jadać, zjadać, jem, zjem, jadam, zjadam, jesz, zjesz, jadasz, zjadasz, je, zje, jada, zjada, jemy, zjemy, jadamy, zjadamy, jecie, zjecie, jadacie, zjadacie, jedzą, zjedzą, jadają, zjadają, jadłem, jadłaś, zjadłem,

WiCV (@wicvworkshop)'s Twitter Profile Photo

🚨Calling Sponsors!🚨
🤝Support the Women in Computer Vision Workshop at #CVPR2025 in Nashville!🎉
Help us empower women in #ComputerVision through mentorship, travel awards, and networking🌟
Interested? 📩Contact us: wicvcvpr2025@googlegroups.com
#Womeninscience #DiversityInTech
Andrei Bursuc (@abursuc)'s Twitter Profile Photo

The valeo.ai team is presenting a few exciting works at ICLR this year on masked generative transformers, adaptation of VLMs, self-supervised representation learning, and neural solvers. #iclr2025 Check them out 👇

Sophia Sirko-Galouchenko (@sophia_sirko)'s Twitter Profile Photo

1/n 🚀New paper out - accepted at #ICCV2025!

Introducing DIP: unsupervised post-training that enhances dense features in pretrained ViTs for dense in-context scene understanding

Below: Low-shot in-context semantic segmentation examples. DIP features outperform DINOv2!
valeo.ai (@valeoai)'s Twitter Profile Photo

Check out our MOCA self-supervised learning approach unifying the learning principles of both the discriminative & masked image modelling paradigms. After a non-linear path, MOCA has been accepted at #TMLR and was presented in the TMLR poster session at #ICLR2025