Ondrej Bohdal (@obohdal)'s Twitter Profile
Ondrej Bohdal

@obohdal

Machine learning researcher at Samsung Research @samsungresearch. Previously @InfAtEd @EdiDataScience @turinginst @AmazonScience

ID: 1337349175537774592

Link: https://ondrejbohdal.github.io/ · Joined: 11-12-2020 10:51:15

54 Tweets

266 Followers

308 Following

Ondrej Bohdal (@obohdal):

I'm really happy to share the news that Meta-Calibration has been accepted to TMLR! Meta-Calibration uses meta-learning as a new way to optimise for uncertainty calibration of neural networks. I've had a very positive experience with TMLR and certainly recommend submitting there!

Ruchika Chavhan (@chavhan_ruchika):

🎉 Super excited to share that I'll be attending #ICCV2023 in Paris 🇫🇷 to present my paper - Quality Diversity for Visual Pre-training - a pre-training strategy inspired by Quality Diversity Optimization in robotics 🤖 With Henry Gouk, Da Li, and Timothy Hospedales

Ondrej Bohdal (@obohdal):

Interested in how to improve the fairness of large vision models? Learn more in our FairTune paper that was recently accepted to #ICLR!

Ondrej Bohdal (@obohdal):

Vision-language models are highly capable yet prone to generating unsafe content. To help with this challenge, we introduce the VLGuard safety fine-tuning dataset ✨, together with two strategies for how to utilise it ✅. Learn more at ➡️ ys-zong.github.io/VLGuard/

Ondrej Bohdal (@obohdal):

Curious about how to better evaluate in-context learning in multimodal #LLMs? We introduce VL-ICL Bench to enable rigorous evaluation of MLLMs' ability to learn from a few examples ✨. Details at ys-zong.github.io/VL-ICL

Ondrej Bohdal (@obohdal):

Noise can be helpful for improving the generalisation and uncertainty calibration of neural networks - but how can it be used effectively in different scenarios? Find out in our recent paper that was accepted to #TMLR!

Yongshuo Zong (@yongshuozong):

VLGuard is accepted to #ICML2024! Check out our strong baseline for 🛡️safeguarding🛡️ VLLMs: ys-zong.github.io/VLGuard/

Raman Dutt (@ramandutt4):

Finally arrived in Vienna to present FairTune at ICLR 2024. A dream come true ✨ Also co-organizing the ML-Collective social on the 8th (12:45-2:15 CEST) with Rosanne Liu, RahimEntezari, Olga Saukh, and muhtasham. Do join us! DM for discussions around PEFT, diffusion, medical imaging, etc.

Raman Dutt (@ramandutt4):

🚨 MemControl: Mitigating Memorization in Medical Diffusion Models via Automated Parameter Selection. A new strategy to mitigate memorization in diffusion models. Arxiv: arxiv.org/abs/2405.19458 Work done with Pedro P. Sanchez, Ondrej Bohdal, Sotirios (Sotos) Tsaftaris, and Timothy Hospedales at the UKRI Centre for Doctoral Training in Biomedical AI 🧵👇

Ondrej Bohdal (@obohdal):

Career update: I'm excited to share the news that I've recently joined Samsung Research! 🎉 I'll be primarily doing research on large language models. Looking forward to catching up with friends in London 🇬🇧 🙌 and also meeting new people here!

Raman Dutt (@ramandutt4):

Looking to reduce memorization WHILE improving image quality in diffusion models? Delighted to share our work "𝐌𝐞𝐦𝐂𝐨𝐧𝐭𝐫𝐨𝐥", now accepted at WACV '25. We show strong results for medical image generation and also establish an initial benchmark! More 👇

Ondrej Bohdal (@obohdal):

🚀Excited to share our latest work 𝗟𝗼𝗥𝗔.𝗿𝗮𝗿: an efficient method to merge LoRAs for personalized content and style image generation! 🖼️✨

Ondrej Bohdal (@obohdal):

Our benchmark for evaluating in-context learning of multimodal LLMs has been accepted to ICLR'25! 🎉 Check out the project page for more details: ys-zong.github.io/VL-ICL/ 📄

Yongshuo Zong (@yongshuozong):

I'll be at #ICLR2025 next week to present VL-ICL, our benchmark for multimodal in-context learning. Find me at the poster session - happy to chat about all kinds of stuff on multimodal LLMs and more. DM/email is welcome!