Filip Radenovic (@filipradenovic)'s Twitter Profile
Filip Radenovic

@filipradenovic

Research Scientist @MetaAI

ID: 1004491099010785280

Link: http://filipradenovic.github.io · Joined: 06-06-2018 22:31:55

50 Tweets

242 Followers

185 Following

Filip Radenovic (@filipradenovic)

So proud to have grown up with the guys behind this game! As somebody whose favorite games are Soulsborne, I can't wait to play it! Three developers, just wow!

AI at Meta (@aiatmeta)

We're excited to help organize the XAI4CV workshop at #CVPR2023 — submission deadline is just two weeks away! Want to present your paper or demo? More information in the 🧵 below.

Montenegrin AI Association (@aisocietyme)

With great pleasure, we invite everyone interested in artificial intelligence to our seminar, "AI bez filtera - uvod u svijet vještačke inteligencije" ("AI without filters - an introduction to the world of artificial intelligence"). 🗓 17.03.2023 (Friday) ⏰ 14:00 📍 Rectorate Building, University of Montenegro. Registration: lnkd.in/dR78apzA See you there!

Sunnie S. Y. Kim (@sunniesuhyoung)

#CVPR2023 workshop proceedings are out! See openaccess.thecvf.com/CVPR2023_works… for the 17 🔥 papers accepted to the XAI4CV workshop xai4cv.github.io/workshop_cvpr23

Sunnie S. Y. Kim (@sunniesuhyoung)

This coming Monday, June 19th, join us at the 2nd Explainable AI for CV (XAI4CV) Workshop at #CVPR2023! We have an amazing program scheduled 🤩

All materials and info at xai4cv.github.io
Devi Parikh (@deviparikh)

For those who caught the keynote at Connect -- meet Emu! :) For those who didn't, go check it out for lots of cool GenAI stuff -- image generation (based on Emu), sticker generation, AIs (assistant, personalities), more -- all in your pocket across WhatsApp, IG, Messenger, etc.

AI at Meta (@aiatmeta)

🆕 Emu: Enhancing Image Generation Models Using Photogenic Needles in a Haystack ➡️ bit.ly/3PVhE8r

This research was an important piece of our new generative image features such as AI stickers and upcoming image editing features in Instagram like Restyle and Backdrop.
AK (@_akhaliq)

Emu: Enhancing Image Generation Models Using Photogenic Needles in a Haystack

paper page: huggingface.co/papers/2309.15…

Training text-to-image models with web scale image-text pairs enables the generation of a wide range of visual concepts from text. However, these pre-trained models
AK (@_akhaliq)

Context Diffusion: In-Context Aware Image Generation

paper page: huggingface.co/papers/2312.03…

propose Context Diffusion, a diffusion-based framework that enables image generation models to learn from visual examples presented in context. Recent work tackles such in-context learning
Ivona Najdenkoska (@ivonajdenkoska)

🥳 Excited to share the project I worked on at Meta: Context Diffusion - an in-context aware image generation model 🌐 Project page: ivonajdenkoska.github.io/contextdiffusi… 🔗ArXiv: arxiv.org/abs/2312.03584 🧵⏭️
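
The tweets above only give a one-line framing of the method: a diffusion model that conditions on visual examples shown in context, alongside the text prompt. Purely to illustrate that framing, here is a minimal, hypothetical sketch of such a conditioning interface; the class name, dimensions, and pooling choices are assumptions for illustration and are not taken from the Context Diffusion paper or its code.

```python
# Illustrative sketch of conditioning a diffusion denoiser on in-context
# visual examples in addition to a text prompt. A generic assumption about
# the setup, NOT the Context Diffusion paper's actual architecture.
import torch
import torch.nn as nn

class ContextConditionedDenoiser(nn.Module):
    def __init__(self, latent_dim: int = 64, cond_dim: int = 768):
        super().__init__()
        # Fuse text and context-image embeddings into one conditioning vector.
        self.fuse = nn.Sequential(
            nn.Linear(2 * cond_dim, cond_dim),
            nn.SiLU(),
        )
        # Toy denoising backbone: predicts the noise given the noisy latent,
        # the timestep, and the fused conditioning signal.
        self.backbone = nn.Sequential(
            nn.Linear(latent_dim + cond_dim + 1, 512),
            nn.SiLU(),
            nn.Linear(512, latent_dim),
        )

    def forward(self, noisy_latent, t, text_emb, context_img_embs):
        # context_img_embs: (batch, num_examples, cond_dim); mean-pool the examples.
        ctx = context_img_embs.mean(dim=1)
        cond = self.fuse(torch.cat([text_emb, ctx], dim=-1))
        t = t.float().unsqueeze(-1) / 1000.0  # normalized timestep
        return self.backbone(torch.cat([noisy_latent, cond, t], dim=-1))

# Usage sketch with made-up shapes:
model = ContextConditionedDenoiser()
noise_pred = model(
    torch.randn(2, 64),            # noisy latents
    torch.randint(0, 1000, (2,)),  # diffusion timesteps
    torch.randn(2, 768),           # text prompt embedding
    torch.randn(2, 3, 768),        # embeddings of 3 in-context example images
)
```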

AI at Meta (@aiatmeta)

Introducing Meta Llama 3: the most capable openly available LLM to date. Today we’re releasing 8B & 70B models that deliver on new capabilities such as improved reasoning and set a new state-of-the-art for models of their sizes. Today's release includes the first two Llama 3

Filip Radenovic (@filipradenovic)

Happy to be part of the Llama 3 team.

Check out the current capabilities and released models at llama.meta.com/llama3

Or play with the model at meta.ai

Multimodal (and more) capabilities coming in the next few months.
Rohit Girdhar (@_rohitgirdhar_)

Excited to share Llama 3.1, which brings multimodal capabilities to your favorite open source LLM using simple, post-trained adapters! Great experience building w/ our incredible multimodal team, and especially my partners in crime for all things video, Mannat Singh and Filip Radenovic!

Mannat Singh (@mannat_singh)

Llama 3.1 is out! Through adapters we've made it multimodal, supporting images, videos, and speech! Was a fun journey adding video understanding capabilities with Rohit Girdhar, Filip Radenovic, Ishan Misra and the whole MM team! P.S. MM models are WIP (not part of the release).
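
Neither tweet describes the adapter in detail, but the general recipe they allude to (keep the pretrained LLM frozen and post-train a small module that maps vision-encoder features into the LLM's token-embedding space) can be sketched. This is a hedged, hypothetical illustration: the class name `VisionAdapter`, the dimensions, and the query-pooling design are assumptions, not the actual Llama 3.1 implementation.

```python
# Hypothetical sketch of a post-trained multimodal adapter, NOT the actual
# Llama 3.1 implementation. Assumes a frozen LLM with embedding dimension
# `llm_dim` and a frozen vision encoder producing `vision_dim` patch features.
import torch
import torch.nn as nn

class VisionAdapter(nn.Module):
    """Projects frozen vision-encoder features into the LLM embedding space."""
    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096, num_tokens: int = 32):
        super().__init__()
        self.num_tokens = num_tokens
        # Small trainable MLP; the LLM and vision encoder stay frozen.
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )
        # Learned queries that pool a variable number of patch features
        # into a fixed number of "visual tokens".
        self.queries = nn.Parameter(torch.randn(num_tokens, vision_dim))
        self.attn = nn.MultiheadAttention(vision_dim, num_heads=8, batch_first=True)

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        # patch_features: (batch, num_patches, vision_dim)
        batch = patch_features.shape[0]
        q = self.queries.unsqueeze(0).expand(batch, -1, -1)
        pooled, _ = self.attn(q, patch_features, patch_features)
        # (batch, num_tokens, llm_dim): these would be prepended to the text
        # token embeddings before running the frozen LLM.
        return self.proj(pooled)

# Usage sketch: visual tokens to concatenate with text token embeddings.
adapter = VisionAdapter()
fake_patches = torch.randn(2, 256, 1024)   # e.g. ViT patch features
visual_tokens = adapter(fake_patches)       # (2, 32, 4096)
```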