Samuel Lavoie (@lavoiems)'s Twitter Profile
Samuel Lavoie

@lavoiems

PhD candidate @Mila_quebec, @UMontreal. Ex: FAIR @AIatMeta.
Learning representations and minimizing free energy.

ID: 2683977224

Joined: 27-07-2014 03:04:26

125 Tweets

629 Followers

514 Following

Adriana Romero-Soriano (@adri_romsor)'s Twitter Profile Photo

We're looking for a postdoc to work with us at FAIR Montreal, AI at Meta. Interested in building generative visual models of the world and leveraging them to train downstream ML models? Apply: metacareers.com/jobs/376087892… cc: Melissa Hall, Reyhane Askari, Jakob Verbeek, Michal Drozdzal

Yikang Shen (@yikang_shen)'s Twitter Profile Photo

Stick-Breaking Attention: out-of-the-box length extrapolation, thanks to removing the position embedding; better performance than Softmax+RoPE on almost every task; a similarly efficient implementation to Flash Attention. Do we still need Softmax+RoPE for language models?

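To make the mechanism above concrete, here is a minimal, unoptimized PyTorch sketch of the stick-breaking rule, written from the description in the tweet (a naive O(T^2) reference, not the authors' Flash-Attention-style kernel): each query walks over earlier tokens from nearest to farthest, each token keeps a sigmoid-gated fraction of the remaining attention "stick", and no position embedding is used.

import torch
import torch.nn.functional as F

def stick_breaking_attention(q, k, v):
    # Naive O(T^2) reference for a single head. q, k, v: (T, d) tensors, causal.
    # Weight of past token j for query i:
    #   beta[i, j] = sigmoid(q_i . k_j / sqrt(d))
    #   A[i, j]    = beta[i, j] * prod_{j < m < i} (1 - beta[i, m])
    # Closer tokens claim their share of the stick first, which gives a built-in
    # recency bias without position embeddings. Rows of A need not sum to 1
    # (any leftover stick simply goes unused).
    T, d = q.shape
    logits = (q @ k.T) / d ** 0.5                               # (T, T)
    causal = torch.tril(torch.ones(T, T), diagonal=-1).bool()   # allow only j < i

    beta = torch.sigmoid(logits)
    log_rest = F.logsigmoid(-logits)                   # log(1 - beta)
    log_rest = log_rest.masked_fill(~causal, 0.0)      # keep only valid m < i terms

    # suffix[i, j] = sum of log(1 - beta[i, m]) over m > j, i.e. over j < m < i.
    rev = torch.flip(log_rest, dims=[-1])
    suffix = torch.flip(torch.cumsum(rev, dim=-1), dims=[-1]) - log_rest

    attn = (beta * suffix.exp()).masked_fill(~causal, 0.0)
    return attn @ v                                    # (T, d); row 0 is zeros

# Tiny smoke test with random tensors.
T, d = 8, 16
q, k, v = (torch.randn(T, d) for _ in range(3))
print(stick_breaking_attention(q, k, v).shape)  # torch.Size([8, 16])
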
Pietro Astolfi (@piovrasca)'s Twitter Profile Photo

As I'm heading out of Vancouver, I'm also wrapping up my postdoc at FAIR. It was a fun week with my friends/collabs and I made many great connections. But, hey! I'm still looking for full-time positions. Reach out if you work on multimodal generation/understanding #NeurIPS2024

malkin1729 (@felineautomaton)'s Twitter Profile Photo

Happy to share our latest work on #diffusion models without data: building theoretical bridges between existing methods, analysing their continuous-time asymptotics, and showing some cool practical implications. arxiv.org/abs/2501.06148 #MachineLearning 1/9

Samuel Lavoie (@lavoiems)'s Twitter Profile Photo

The code and model weights for this paper are finally open! Despite being a little late in releasing them, I hope you find them useful!
Code: github.com/facebookresear…
Models:
- (ViT-G): huggingface.co/lavoies/llip-v…
- (ViT-B): huggingface.co/lavoies/llip-v…
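If you just want the checkpoints, something like the snippet below should work with the standard huggingface_hub client. The repo IDs and filenames in the links above are truncated, so the names here are placeholders to adapt to the actual Hugging Face model pages.

# Placeholder names: the huggingface.co/lavoies/llip-v… links above are truncated,
# so check the model pages for the real repo_id and filename before running.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="lavoies/llip-vit-b",   # hypothetical repo id
    filename="checkpoint.pt",       # hypothetical filename
)
print("Llip ViT-B checkpoint downloaded to", ckpt_path)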

Michael Noukhovitch, gonna be @ICLR 2025 (@mnoukhov)'s Twitter Profile Photo

We're releasing a cool paper! DLCs are image tokens that enable better diffusion modelling. For now, we show this is the right representation. But in the future, this can allow LLMs to "speak in images"🤯to enable visual reasoning and more powerful text-image generalization. ⬇️