Yedid Hoshen (@yhoshen)'s Twitter Profile
Yedid Hoshen

@yhoshen

Associate Prof., CS @ HUJI, Visiting Faculty @ Google

ID: 2933259979

Joined: 17-12-2014 08:56:13

43 Tweets

100 Followers

72 Following

Daniel Winter (@_daniel_winter_) 's Twitter Profile Photo

ObjectDrop is accepted to #ECCV2024! 🥳 In this work from Google AI we tackle photorealistic object removal and insertion. Congrats to the team: Matan Cohen, Shlomi Fruchter, Yael Pritch, Alex Rav-Acha, Yedid Hoshen. Check out our project page: objectdrop.github.io

Nataniel Ruiz (@natanielruizg) 's Twitter Profile Photo

With friends at Google we announce 💜 Magic Insert 💜 - a generative AI method that allows you to drag-and-drop a subject into an image with a vastly different style, achieving a style-harmonized and realistic insertion of the subject (Thread 🧵)
web: magicinsert.github.io

Damian Borth (@damianborth) 's Twitter Profile Photo

🚀 ICLR 2026 notification, 🚀 #CVPR2025 rebuttal, 🚀 ICML Conference submission - nothing is as fun as submitting a paper to our 🔥 Weight Space Learning Workshop 🔥 at ICLR 2026

Interested? Have a look at: weight-space-learning.github.io

Konstantin Schürholt, Giorgos Bouritsas (@gbouritsas@sigmoid.social), Eliahu Horwitz,

Eliahu Horwitz | @ ICLR2025 (@eliahuhorwitz) 's Twitter Profile Photo

⏳Final hours to submit to the 1st workshop on Weight Space Learning at #ICLR2025 @ICLR_conf❗

🚀Explore learning directly from model weights!

🌐 weight-space-learning.github.io
⏰ Submit by Feb 10 2025 AoE

Eliahu Horwitz | @ ICLR2025 (@eliahuhorwitz) 's Twitter Profile Photo

With so many models, one is left asking: how do I find the relevant model for me?🔍🕵‍♂

Here's our stab at tackling this question👀
First attempt: horwitz.ai/probex
And fresh off the press is a very exciting direction led by Jonathan Kahana: jonkahana.github.io/probelog/

Jonathan Kahana (@jonkahana) 's Twitter Profile Photo

🚨 New Paper Alert on Zero-Shot Model Search 🚨
We present an efficient and accurate method to search for and understand what models can do, based on their weights, without access to any metadata!
Project Page 🌐: jonkahana.github.io/probelog/
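To give a rough picture of what "searching models by their weights, without metadata" can mean, here is a toy sketch that ranks candidate models against a query model by cosine similarity of their flattened weight vectors. This is NOT the method from the paper above (which uses its own probing-based representation); the model names and random weights are entirely made up for illustration.

```python
import numpy as np

# Toy, metadata-free model search: represent each model by a flattened
# weight vector and rank candidates by cosine similarity to a query.
# All names/weights below are hypothetical placeholders.
rng = np.random.default_rng(0)
models = {name: rng.standard_normal(256) for name in ["m1", "m2", "m3"]}

# Query: a lightly perturbed copy of m1 (e.g. a near-duplicate fine-tune).
query = models["m1"] + 0.01 * rng.standard_normal(256)

def cosine(a, b):
    """Cosine similarity between two weight vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank all candidate models by similarity to the query.
ranking = sorted(models, key=lambda n: cosine(models[n], query), reverse=True)
print(ranking[0])  # the near-duplicate "m1" ranks first
```

Raw weight similarity like this only catches near-duplicates; the appeal of the probing direction is recognizing what a model *does* even when its weights differ substantially.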

Eliahu Horwitz | @ ICLR2025 (@eliahuhorwitz) 's Twitter Profile Photo

Daniel van Strien Hugging Face This is awesome! But what about the 700k+ models with no model cards? Here's our stab at tackling this problem for models with no metadata 👀
Our #CVPR2025 paper: horwitz.ai/probex
And a fresh and exciting direction led by Jonathan Kahana: jonkahana.github.io/probelog/

ℏεsam (@hesamation) 's Twitter Profile Photo

this paper is a visual art!

the atlas of 🤗Hugging Face models studies 63,000 models published on Hugging Face and they share some fun numbers:

> 96% of vision models are ≤1 node from foundation models, but 5% of NLP models reach ≥5 node depths
> only 0.15% of vision models

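The "node depth" statistics above come from viewing models as a family tree: each fine-tuned model points to the model it was derived from, and depth is the number of hops back to the foundation model at the root. A minimal sketch of that computation, with a hypothetical parent map (the actual atlas reconstructs these edges at scale):

```python
# Hypothetical fine-tuning family tree: each model maps to its parent,
# with None marking a foundation model (a root of the tree).
parent = {
    "bert-base": None,                        # foundation model, depth 0
    "bert-finetuned-sst2": "bert-base",       # depth 1
    "bert-sst2-distilled": "bert-finetuned-sst2",  # depth 2
}

def depth(model: str) -> int:
    """Number of fine-tuning hops from this model up to its foundation model."""
    d = 0
    while parent[model] is not None:
        model = parent[model]
        d += 1
    return d

print(depth("bert-base"))            # 0
print(depth("bert-sst2-distilled"))  # 2
```

In these terms, the quoted numbers say most vision models sit at depth ≤1 (direct fine-tunes of a foundation model), while a noticeable fraction of NLP models sit at depth ≥5 (long chains of successive fine-tunes).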
Eliahu Horwitz | @ ICLR2025 (@eliahuhorwitz) 's Twitter Profile Photo

🚨 Update Alert! 🚨
• ⚡ Much faster loading (seconds, not minutes!)
• ✨ Fresh layout for Large NLP Atlas
• 📖 Clear explanations of all features

Stay tuned, more improvements coming soon!

Eliahu Horwitz | @ ICLR2025 (@eliahuhorwitz) 's Twitter Profile Photo

What if models could be the data🤔 Find out at ICLR 2026 #ICLR2025
Join the 1st workshop on Model Weights as a New Data Modality. We're training networks on model weights for a wide variety of tasks. Featuring an amazing lineup of papers & speakers🚀
🗓️Sunday 9-17
📍Topaz 220-225

Eliahu Horwitz | @ ICLR2025 (@eliahuhorwitz) 's Twitter Profile Photo

Andrej Karpathy Thanks for the inspiring talk (as always!). I'm the author of the Model Atlas. I'm delighted you liked our work, seeing the figure in your slides felt like an "achievement unlocked"🙌Would really appreciate a link to our work in your slides/tweet arxiv.org/abs/2503.10633
