Video & Image Sense Lab (VIS Lab) (@vislab_uva)

Computer Vision research group at @UvA_Amsterdam directed by Cees Snoek (@cgmsnoek)

ID: 1788473121948712960

Link: https://ivi.fnwi.uva.nl/vislab/ · Joined: 09-05-2024 07:36:56

55 Tweets

42 Followers

29 Following

Sarah Rastegar (@rastegar_sarah)

🚀 Excited to announce that our paper "SelEx: Self-Expertise in Fine-Grained Generalized Category Discovery" has been accepted to ECCV 2024! 🎉

Special thanks to my incredible coauthor mrz.salehi and my amazing supervisors Yuki, Hazel Doughty, and Cees Snoek 🙏.
UvA AMLab (@amlabuva)

We are hiring a postdoc! Come work with us in the booming AI ecosystem of beautiful Amsterdam on generative AI and/or uncertainty quantification 🤗

🎇 vacatures.uva.nl/UvA/job/Postdo…
Riccardo Valperga (@rvalperga)

I will be in Montréal until December for my internship with ServiceNow. I will be working on causal discovery from time-series. Get in touch if you are around and want to chat about some **apprentissage profond** (deep learning).
Efstratios Gavves (@egavves)

📢📢📢 PhD vacancy alert 📢📢📢 We open several PhD positions supervised by myself and Georgios on #Robot Learning and #Dynamics! If you have strong #ML and/or #Robotics experience and want to dive into the next big thing in #AI, apply! Please share! linkedin.com/jobs/view/4021…

Yuki (@y_m_asano)

First time organising a Tutorial with an amazing team and I am very excited 🎉! The topic is learning from videos, which I think will be the new 'Big' paradigm for new vision foundation models. Come to learn, chat, and discuss at the European Conference on Computer Vision (#ECCV2024)!

Intelligent Data Engineering Lab (@inde_lab_ams)

We're happy to be at #ECCV2024 this week thanks to our cooperation with the Video & Image Sense Lab (VIS Lab). Check out Melika's work at the Instance-Level Recognition Workshop and the Self-Supervised Learning workshop.

Sarah Rastegar (@rastegar_sarah)

🚀 Excited to present our work on Self-Expertise at #ECCV2024 in Milan!

Join us at poster #89 on Friday, Oct 4 at 10:30 AM to see how self-expertise outperforms self-supervision in tackling unknown data in open-world settings! 🌍 #SelfSupervision #GeneralizedCategoryDiscovery
mrz.salehi (@mrzsalehi)

🚀 Excited to present SIGMA at the European Conference on Computer Vision (#ECCV2024)! 🎉 We upgrade VideoMAE with Sinkhorn-Knopp on patch-level embeddings, pushing reconstruction towards more semantic features. With Michael Dorkenwald. Let's connect at today's poster session at 4:30 PM, poster number 256, or send us a DM.
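For readers unfamiliar with Sinkhorn-Knopp, the sketch below shows the generic algorithm as commonly used for balanced soft cluster assignment in self-supervised learning (SwAV-style). It illustrates the technique named in the tweet, not the SIGMA code itself, and all function and variable names are placeholders.

```python
import torch

def sinkhorn_knopp(scores: torch.Tensor, eps: float = 0.05, n_iters: int = 3) -> torch.Tensor:
    """Generic Sinkhorn-Knopp normalization (illustrative sketch, not SIGMA code).
    Turns a (batch x prototypes) score matrix into soft assignments that are
    approximately balanced, so every prototype is used equally across the batch."""
    Q = torch.exp(scores / eps).T        # prototypes x batch
    Q /= Q.sum()                         # normalize to a joint distribution
    K, B = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(dim=1, keepdim=True)  # rows: spread batch mass evenly over prototypes
        Q /= K
        Q /= Q.sum(dim=0, keepdim=True)  # columns: normalize each sample's assignment
        Q /= B
    return (Q * B).T                     # batch x prototypes, each row sums to 1

# Hypothetical usage: assignments = sinkhorn_knopp(patch_embeddings @ prototypes.T)
```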

Sarah Rastegar (@rastegar_sarah)

Stop by today and discuss our European Conference on Computer Vision (#ECCV2024) paper (SelEx) with me, Hazel Doughty, and Cees Snoek! 🎉 We present self-expertise, an alternative to self-supervision for learning from unlabelled data with fine-grained distinctions and unknown categories. 📍 Poster #89 🕥 10:30 AM

Yuki (@y_m_asano)

Excited to announce that today I'm starting my new position at Technische Universität Nürnberg as a full Professor 🎉. I thank everyone who has helped me to get to this point, you're all the best! Our lab is called FunAI Lab, where we strive to put the fun into fundamental research. 😎 Let's go!

David M. Knigge (@davidmknigge)

🇨🇦 Deeeelighted to share that this work got into #neurips2024. Many thanks to my dear friend and co-author David Wessels, as well as the rest of the team. Solving PDEs in continuous space-time with Neural Fields on cool geometries while respecting their inherent symmetries! 💫💫

Michael Dorkenwald (@mdorkenw)

The Self-Supervised Learning: What is Next? workshop at the European Conference on Computer Vision (#ECCV2024) had a great turnout with excellent talks. Slides of most talks are available at sslwin.org (soon all 🤞). Thanks to all attendees, speakers, and co-organizers for making it a fantastic event!
Yuki (@y_m_asano)

Today, we're introducing TVBench! 📹💬 Video-language evaluation is crucial, but are we doing it right? We find that current benchmarks fall short in testing temporal understanding. 🧵👇

Michael Dorkenwald (@mdorkenw)

📢 Announcing TVBench: Temporal Video-Language Benchmark 📺 We reveal that widely used video-language benchmarks, such as MVBench, fall short in testing temporal understanding, and propose an alternative, TVBench: huggingface.co/datasets/FunAI…

Pascal Mettes (@pascalmettes)

All vision-language models should have hyperbolic embeddings. Vision and language are incredibly hierarchical in nature! See below our latest work on hyperbolic vision-language models that exploit visual compositions through entailment:
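To make the hierarchy claim concrete: in hyperbolic space, distances grow exponentially toward the boundary, so general concepts can sit near the origin and specific ones near the edge. Below is a minimal sketch of the textbook geodesic distance on the Poincaré ball, the most common hyperbolic model; it is generic background, not the paper's implementation, and the entailment scoring is only hinted at in a comment.

```python
import torch

def poincare_distance(x: torch.Tensor, y: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Geodesic distance on the Poincare ball (textbook formula, not the
    paper's code). Inputs are embeddings with norm < 1."""
    sq_x = (x * x).sum(-1).clamp(max=1 - eps)   # squared norms, kept inside the ball
    sq_y = (y * y).sum(-1).clamp(max=1 - eps)
    sq_diff = ((x - y) ** 2).sum(-1)
    arg = 1 + 2 * sq_diff / ((1 - sq_x) * (1 - sq_y))
    return torch.acosh(arg.clamp(min=1 + eps))  # acosh is defined for arg >= 1

# Entailment-style scoring (hypothetical illustration): check whether a specific
# embedding (e.g. an image of a composition) lies inside the cone rooted at a
# more general embedding (e.g. its caption), penalizing hierarchy violations.
```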

Aritra Bhowmik (@aritrabhowmik6)

🚀 Excited to share LynX! 🦁 🔑 A new method for visual grounding using a Dual Mixture of Experts: LynX enables pretrained VLMs to continuously learn grounding while retaining their image-language capabilities. 📄 Check out the full paper: arxiv.org/pdf/2410.10491
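For readers who have not met the Mixture-of-Experts pattern the tweet names, here is a minimal, generic MoE layer: a learned gate softly routes each input across several expert MLPs. This is background for the general idea under assumed names; it does not reproduce LynX's dual-expert design.

```python
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    """Minimal mixture-of-experts layer (generic illustration, not LynX).
    A learned gate produces soft routing weights over expert MLPs."""
    def __init__(self, dim: int, n_experts: int = 4):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 2 * dim), nn.GELU(), nn.Linear(2 * dim, dim))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = self.gate(x).softmax(dim=-1)                # (..., n_experts)
        outs = torch.stack([e(x) for e in self.experts], -1)  # (..., dim, n_experts)
        return (outs * weights.unsqueeze(-2)).sum(-1)         # weighted expert mix

# Hypothetical usage: layer = TinyMoE(dim=768); y = layer(torch.randn(4, 768))
```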

Cees Snoek (@cgmsnoek)

📢📢 Beyond Model Adaptation at Test Time: A Survey by Zehao Xiao. TL;DR: we provide a comprehensive and systematic review of test-time adaptation, covering more than 400 recent papers 💯💯💯💯 🤩 #CVPR2025 #ICLR2025
arxiv.org/abs/2411.03687
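To give a flavor of what the surveyed methods do, here is a minimal test-time adaptation step in the spirit of entropy minimization (e.g. Tent, Wang et al., ICLR 2021), one family such surveys cover. It is a generic sketch under assumed names, not code from the survey, and it assumes a BatchNorm-based classifier.

```python
import torch
import torch.nn as nn

def tta_entropy_step(model: nn.Module, x: torch.Tensor, lr: float = 1e-3) -> torch.Tensor:
    """One entropy-minimization adaptation step on an unlabeled test batch
    (generic sketch in the spirit of Tent, not code from the survey)."""
    model.train()  # let normalization layers use current batch statistics
    # Common TTA choice: update only normalization-layer affine parameters.
    params = [p for m in model.modules() if isinstance(m, nn.BatchNorm2d)
              for p in (m.weight, m.bias) if p is not None]
    opt = torch.optim.SGD(params, lr=lr)
    probs = model(x).softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    opt.zero_grad()
    entropy.backward()        # more confident predictions => lower entropy
    opt.step()
    return model(x).detach()  # predictions after one adaptation step
```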