Fida Mohammad (@fmthoker)'s Twitter Profile
Fida Mohammad

@fmthoker

PhD student at University of Amsterdam | Deep learning for Computer Vision

ID: 988447344

Website: https://fmthoker.github.io/ | Joined: 04-12-2012 09:59:37

32 Tweets

107 Followers

180 Following

Cees Snoek (@cgmsnoek)'s Twitter Profile Photo

Are you passionate about #artificial #intelligence, #computer #vision and #health? We are seeking six #PhD candidates for our new AIM lab on AI for Medical Imaging in #Amsterdam: uva.nl/en/content/vac…

Cees Snoek (@cgmsnoek)'s Twitter Profile Photo

ACM #Multimedia 2021: Skeleton-Contrastive 3D Action Representation Learning w/ Fida Mohammad, Hazel Doughty: arxiv.org/abs/2108.03656 We learn invariances to multiple #skeleton representations and introduce various skeleton augmentations via noise contrastive estimation 1/n

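For readers unfamiliar with noise contrastive estimation, here is a minimal NumPy sketch of an InfoNCE-style loss for a single anchor. The function name, shapes, and temperature are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style noise contrastive loss for one anchor embedding.

    anchor, positive: (d,) L2-normalised embeddings of two views of the
    same sequence (e.g. two different skeleton representations).
    negatives: (n, d) embeddings of other sequences in the batch.
    """
    pos = np.dot(anchor, positive) / temperature
    neg = negatives @ anchor / temperature
    logits = np.concatenate([[pos], neg])
    # Cross-entropy with the positive pair as the target class,
    # computed via a numerically stable log-sum-exp.
    m = logits.max()
    return float(-pos + m + np.log(np.sum(np.exp(logits - m))))
```

Minimising a loss of this form pulls the two views of the same sequence together while pushing other sequences away, which is the invariance-learning mechanism the tweet refers to.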
Cees Snoek (@cgmsnoek)'s Twitter Profile Photo

#ICCV2021 camera-ready: "Social Fabric: Tubelet Compositions for Video Relation Detection" w/ Shuo Chen, Zenglin Shi, Pascal Mettes arxiv.org/abs/2108.08363 TL;DR We classify and detect predicates for pairs of object tubelets a priori, rather than a posteriori. 1/n

Hazel Doughty (@doughty_hazel)'s Twitter Profile Photo

Excited to share our #ECCV2022 work "How Severe is Benchmark-Sensitivity in Video Self-Supervised Learning?" w/ Fida Mohammad, Piyush Bagad and Cees Snoek. We investigate if video self-supervised methods generalize beyond current benchmarks. Details: bpiyush.github.io/SEVERE-website/ 🧵1/5

Kosta Derpanis (@csprofkgd)'s Twitter Profile Photo

Check out our EXCITING lineup of invited speakers for the #CVPR2023 2nd Workshop on Learning with Limited Labelled Data for Image and Video Understanding 🤗 sites.google.com/view/l3d-ivu-2…

#ICCV2025 (@iccvconference)'s Twitter Profile Photo

The list of paper IDs for accepted #ICCV2023 papers is now available at the following link: drive.google.com/file/d/1t0XHej…

Hazel Doughty (@doughty_hazel)'s Twitter Profile Photo

Happy our paper 'Tubelet-Contrastive Self-Supervision for Video-Efficient Generalization' was accepted to #ICCV2023. Congrats Fida Mohammad! Preprint: arxiv.org/abs/2303.11003

mrz.salehi (@mrzsalehi)'s Twitter Profile Photo

How to do research: people.csail.mit.edu/billf/publicat…

Summary:
1. Slow down to speed up.
2. Find out why something does not work, as thoroughly as possible.
3. Hard work is necessary.
4. Collaborate. A small piece of a big cake > a big piece of a small cake.
5. Communicate effectively.

#icvss23

Hazel Doughty (@doughty_hazel)'s Twitter Profile Photo

Next Thursday at #ICCV2023, we'll present our work on self-supervised learning for motion-focused video representations. Work w/ Fida Mohammad and Cees Snoek. We learn similarities between videos with identical local motion dynamics but an otherwise different appearance. 1/6

Michael Dorkenwald (@mdorkenw)'s Twitter Profile Photo

📢SIGMA: Sinkhorn-Guided Masked Video Modeling got accepted to the European Conference on Computer Vision #ECCV2024 TL;DR: Instead of using pixel targets in Video Masked Modeling, we reconstruct jointly trained features using Sinkhorn guidance, achieving SOTA. 📝Project page: quva-lab.github.io/SIGMA/ 🌐Paper:

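For context, the Sinkhorn-Knopp iteration turns a feature-to-prototype similarity matrix into a near doubly-stochastic assignment that can serve as a balanced reconstruction target. A minimal NumPy sketch; the shapes, iteration count, and epsilon are illustrative choices, not SIGMA's actual implementation:

```python
import numpy as np

def sinkhorn(scores, n_iters=3, eps=0.05):
    """Sinkhorn-Knopp normalisation of a similarity matrix into a
    near doubly-stochastic assignment, as used for clustering-style
    targets in self-supervised learning.

    scores: (B, K) similarities between B features and K prototypes.
    Returns Q with rows summing to 1/B and columns close to 1/K.
    """
    # Subtract the max before exponentiating for numerical stability.
    Q = np.exp((scores - scores.max()) / eps)
    Q /= Q.sum()
    B, K = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(axis=0, keepdims=True); Q /= K  # normalise columns
        Q /= Q.sum(axis=1, keepdims=True); Q /= B  # normalise rows
    return Q
```

Using balanced assignments like these as targets avoids the degenerate solution where every feature collapses onto a single prototype.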
Hazel Doughty (@doughty_hazel)'s Twitter Profile Photo

Our #ACCV2024 Oral "LocoMotion: Learning Motion-Focused Video-Language Representations" w/ Fida Mohammad and Cees Snoek is now on ArXiv arxiv.org/abs/2410.12018 We remove the spatial focus of video-language representations and instead train representations to have a motion focus.

Hasan Hammoud (@hammh0a)'s Twitter Profile Photo

Just released "DiffCLIP", extending Differential Attention proposed by Tianzhu Ye to CLIP models - replacing both visual & text encoder attention with the differential attention mechanism! TL;DR: Consistent improvements across all tasks with only 0.003% extra parameters!

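As background, differential attention computes two independent softmax attention maps and subtracts one from the other, so attention noise common to both maps cancels. A single-head NumPy sketch; the weight shapes and fixed lambda are illustrative simplifications (the original mechanism learns lambda per layer), not DiffCLIP's code:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def differential_attention(x, Wq1, Wk1, Wq2, Wk2, Wv, lam=0.5):
    """Single-head differential attention over a token sequence x.

    x: (n, d_model) tokens; Wq*, Wk*: (d_model, d_head) projections;
    Wv: (d_model, d_out) value projection; lam: subtraction weight.
    """
    d = Wq1.shape[1]
    a1 = softmax((x @ Wq1) @ (x @ Wk1).T / np.sqrt(d))  # first attention map
    a2 = softmax((x @ Wq2) @ (x @ Wk2).T / np.sqrt(d))  # second attention map
    # Subtracting the second map attenuates attention noise shared by
    # both, analogous to a differential amplifier cancelling common-mode noise.
    return (a1 - lam * a2) @ (x @ Wv)
```

DiffCLIP's claimed 0.003% parameter overhead is consistent with this structure: only the extra query/key projections and the lambda scalars are added on top of a standard attention block.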
Omar Abdullah (@omarabdullah)'s Twitter Profile Photo

We have had 4 deaths, not 6/7 as some channels/websites are insisting & all injured are stable 'n out of danger. Even 4 is 4 too many 2/3