Alexander Hermans (@pandoro_o) 's Twitter Profile
Alexander Hermans

@pandoro_o

Postdoc at the @RWTHVisionLab, as well as the @laim_uka

ID: 133746817

Joined: 16-04-2010 14:34:55

158 Tweets

112 Followers

246 Following

Alexander Hermans (@pandoro_o) 's Twitter Profile Photo

I have no sympathy for the massive support of RWE's interests by Armin Laschet and the NRW state government. The Hambacher Forst needs a political solution now! #StopptdenWahnsinn #kohlefrei wwf.de/stoppt-das

Lucas Beyer (bl16) (@giffmana) 's Twitter Profile Photo

Many lessons for self-supervised representation learning: arxiv.org/abs/1901.09005. The best case halves the gap between previous SOTA and supervised training on ImageNet! Code: github.com/google/revisit…. Work with Alexander Kolesnikov and Xiaohua Zhai at Google Brain Zürich Google AI.

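The linked paper revisits self-supervised pretext-task learning across architectures. As a loose illustration of one classic pretext task in that space (rotation prediction, not necessarily the paper's best-performing setup), here is a minimal PyTorch sketch; the backbone and the batch are placeholders.

```python
# Minimal rotation-prediction pretext task (illustrative only; not the exact
# setup of arxiv.org/abs/1901.09005). Assumes torch and torchvision are available.
import torch
import torch.nn as nn
import torchvision


def rotate_batch(images: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Rotate each image by a random multiple of 90 degrees; return images and labels (0-3)."""
    labels = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, k=int(k), dims=(1, 2))
                           for img, k in zip(images, labels)])
    return rotated, labels


# Backbone with a 4-way rotation head; the trunk features are what you would
# later reuse as the learned representation.
backbone = torchvision.models.resnet50(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 4)

optimizer = torch.optim.SGD(backbone.parameters(), lr=0.1, momentum=0.9)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)           # stand-in for an unlabeled batch
rotated, labels = rotate_batch(images)
loss = criterion(backbone(rotated), labels)    # predict which rotation was applied
loss.backward()
optimizer.step()
```
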
AK (@_akhaliq) 's Twitter Profile Photo

Fine-Tuning Image-Conditional Diffusion Models is Easier than You Think

discuss: huggingface.co/papers/2409.11…

Recent work showed that large diffusion models can be reused as highly precise monocular depth estimators by casting depth estimation as an image-conditional image
Karim Abou Zeid (@kacodes) 's Twitter Profile Photo

Check out our work on fine-tuning of image-conditional diffusion models for depth and normal estimation.

Widely used diffusion models can be improved with single-step inference and task-specific fine-tuning, allowing us to gain better accuracy while being 200x faster!⚡

🧵(1/6)
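
A rough sketch of the single-step idea described in this thread, under my own assumptions: the image-conditioned denoiser is evaluated once instead of being run through an iterative sampling loop, and the resulting predictor is fine-tuned end-to-end against ground-truth depth. The module names, shapes, and loss below are placeholders, not the paper's actual code.

```python
# Sketch of single-step, end-to-end fine-tuning for diffusion-style depth
# estimation (illustrative; all modules are stand-ins, not the paper's code).
import torch
import torch.nn as nn

latent_dim, image_dim = 16, 3

# Placeholder stand-ins for a pretrained latent encoder/decoder and the denoiser.
encoder = nn.Conv2d(image_dim, latent_dim, kernel_size=3, padding=1)
denoiser = nn.Conv2d(2 * latent_dim, latent_dim, kernel_size=3, padding=1)
decoder = nn.Conv2d(latent_dim, 1, kernel_size=3, padding=1)


def predict_depth_single_step(image: torch.Tensor) -> torch.Tensor:
    """One denoiser evaluation instead of an iterative sampling loop."""
    cond = encoder(image)                       # image-conditioning latents
    noise = torch.zeros_like(cond)              # deterministic stand-in for the noise input
    depth_latent = denoiser(torch.cat([cond, noise], dim=1))
    return decoder(depth_latent)                # decoded depth map


# End-to-end fine-tuning step against ground-truth depth.
params = list(encoder.parameters()) + list(denoiser.parameters()) + list(decoder.parameters())
optimizer = torch.optim.AdamW(params, lr=3e-5)

image = torch.randn(2, image_dim, 64, 64)       # stand-in batch
gt_depth = torch.rand(2, 1, 64, 64)
loss = nn.functional.l1_loss(predict_depth_single_step(image), gt_depth)
loss.backward()
optimizer.step()
```
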
Tommie Kerssies (@tommiekerssies) 's Twitter Profile Photo

Image segmentation doesn’t have to be rocket science. 🚀
Why build a rocket engine full of bolted-on subsystems when one elegant unit does the job? 💡
That’s what we did for segmentation.
✅ Meet the Encoder-only Mask Transformer (EoMT): tue-mps.github.io/eomt (CVPR 2025)
(1/6)
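
As a toy sketch of what an encoder-only mask-transformer head can look like (my assumption about the general mask-classification recipe, not the official EoMT implementation; see tue-mps.github.io/eomt for that): learnable query tokens are processed jointly with the patch tokens by a plain encoder, classes come from the queries, and masks come from query-patch similarity.

```python
# Toy encoder-only mask-classification sketch (an assumption about the general
# idea, not the official EoMT code).
import torch
import torch.nn as nn

dim, num_queries, num_classes, num_patches = 64, 8, 21, 196

# A plain transformer encoder processes patch tokens and learnable queries together.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
    num_layers=2,
)
queries = nn.Parameter(torch.randn(1, num_queries, dim))
class_head = nn.Linear(dim, num_classes)

patch_tokens = torch.randn(2, num_patches, dim)            # stand-in for ViT patch embeddings
tokens = torch.cat([queries.expand(2, -1, -1), patch_tokens], dim=1)
tokens = encoder(tokens)

query_out, patch_out = tokens[:, :num_queries], tokens[:, num_queries:]
class_logits = class_head(query_out)                               # (B, Q, num_classes)
mask_logits = torch.einsum("bqd,bpd->bqp", query_out, patch_out)   # per-query mask over patches
```
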
Kadir Yılmaz (@kadiryilmaz_cv) 's Twitter Profile Photo

I'll be presenting "DINO in the Room (DITR)", the winning method of the ScanNet++ 3D semantic segmentation challenge, tomorrow at CVPR at 10 a.m. in Room 211.
Project page: visualcomputinginstitute.github.io/DITR/
Alexander Hermans (@pandoro_o) 's Twitter Profile Photo

Woohoo, spent 4 hours on a #presentation everyone told me not to put too much effort into x) Finished at 3 AM :x Lots of #animations though

Alexander Hermans (@pandoro_o) 's Twitter Profile Photo

Thinking all your experiments were wrong... 2 days prior to the submission #deadline! Haven't had such a bad feeling in a long time!