Ronnie Clark (@ronnieclark__)'s Twitter Profile
Ronnie Clark

@ronnieclark__

Associate Professor @UniofOxford, Prev: Dyson Robotics Lab @imperialcollege. Computer Vision, Graphics, Machine Learning. pixl.cs.ox.ac.uk

ID: 3323459854

Link: http://www.ronnie-clark.co.uk · Joined: 13-06-2015 18:09:06

499 Tweets

3.3K Followers

1.1K Following

Andrew Davison (@ajddavison)'s Twitter Profile Photo

Jan Czarnowski's (@czarnowski) DeepFactors (RA-L 2020) built code-based depth ideas into a unified SLAM system where a network's mono-depth prediction is a factor in a standard probabilistic factor graph. With Tristan Laidlow and Ronnie Clark from the Dyson Robotics Lab. Open source! 46/n x.com/czarnowskij/st…
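As a toy illustration of what "mono-depth prediction as a factor" means (a minimal sketch, not the DeepFactors implementation; the function name and noise values are assumptions): each factor contributes a weighted quadratic cost on the same inverse-depth variable, and the MAP estimate fuses them.

```python
# Toy sketch (not DeepFactors): fusing a network mono-depth prediction
# with a geometric measurement as two factors in a least-squares problem
# over a single inverse depth value. All names/values are illustrative.
import numpy as np

def fuse_depth_factors(d_geom, sigma_geom, d_net, sigma_net):
    """Solve argmin_d (d - d_geom)^2/sigma_geom^2 + (d - d_net)^2/sigma_net^2.

    d_geom: inverse depth implied by multi-view geometry (a reprojection factor)
    d_net:  inverse depth predicted by the network (the learned prior factor)
    sigmas: standard deviations encoding how much we trust each factor
    """
    w_geom = 1.0 / sigma_geom**2
    w_net = 1.0 / sigma_net**2
    # Closed-form minimum of a sum of two quadratic factors:
    d_map = (w_geom * d_geom + w_net * d_net) / (w_geom + w_net)
    sigma_map = np.sqrt(1.0 / (w_geom + w_net))
    return d_map, sigma_map

d, s = fuse_depth_factors(d_geom=0.52, sigma_geom=0.10, d_net=0.48, sigma_net=0.05)
print(f"fused inverse depth: {d:.3f} +/- {s:.3f}")
```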

Ronnie Clark (@ronnieclark__)'s Twitter Profile Photo

To me the astounding thing here is that it shows the 2D diffusion model must have learned consistent 3D structure and lighting internally… and this is a very neat way of extracting that 3D info!

Ronnie Clark (@ronnieclark__)'s Twitter Profile Photo

Ever wondered how 3D vision can help us understand the weather and climate? Check out our new work on reconstructing 3D cloud fields. We test our model using data from Raspberry Pi cameras observing a 10 km² area of the sky. Project: cloud-field.github.io w/ Jacob Lin and Ed Gryspeerdt
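For intuition on the underlying geometry, here is a generic two-view triangulation sketch (illustrative only, not the cloud-field pipeline; the camera matrices and point below are made-up assumptions) showing how a 3D point, e.g. a spot on a cloud base, can be recovered from two calibrated ground cameras.

```python
# Generic two-view (DLT) triangulation sketch; all numbers are assumptions.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear triangulation of one point from two camera views."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenise

# Two cameras 1 km apart looking up (toy projection matrices):
K = np.array([[1000.0, 0, 640], [0, 1000.0, 480], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1000.0], [0.0], [0.0]])])
X_true = np.array([200.0, 100.0, 2000.0])         # a point ~2 km overhead
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))                # ~ [200, 100, 2000]
```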

Christian Schroeder de Witt (@casdewitt)'s Twitter Profile Photo

Excited to announce my student Sumeet's (@sumeetrm) recent work on Multi-Agent LLM Training is now on arXiv! What if a generator, critic, and refinement model could be jointly trained to improve their reasoning performance? ✨🧵 MALT is an early advance leveraging synthetic data generation and credit assignment strategies for post-training specialized models solving problems together.

Sumeet Motwani (@sumeetrm)'s Twitter Profile Photo

Excited to announce our work on Multi-Agent LLM Training!🚨 MALT is an early advance leveraging synthetic data generation and credit assignment strategies for post-training specialized models solving problems together. Check out the first project from my PhD🧵✨

Ronnie Clark (@ronnieclark__)'s Twitter Profile Photo

If you’re interested in LLM reasoning, check out our new paper on multi-agent LLM training (MALT), where we specialise a base model to create a generator (G), verifier (V), and refiner (R) that work together to solve problems. Paper: arxiv.org/pdf/2412.01928 Led by Sumeet Motwani
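Based only on the G/V/R description in the tweet above, a minimal sketch of the generate → verify → refine inference flow might look like this. The `llm` helper and the prompts are hypothetical placeholders, not the paper's code; MALT's actual contribution is in how the three specialised models are post-trained.

```python
# Minimal sketch of a generator -> verifier -> refiner loop (assumed flow,
# not the MALT training code). `llm(role_prompt, text)` is a placeholder
# for calling the three specialised models.
def llm(system_prompt: str, user_text: str) -> str:
    raise NotImplementedError("plug in your model client here")

def malt_inference(question: str, max_rounds: int = 2) -> str:
    answer = llm("You are the generator (G). Solve the problem step by step.",
                 question)
    for _ in range(max_rounds):
        critique = llm("You are the verifier (V). Find errors in this solution.",
                       f"Question: {question}\nSolution: {answer}")
        if "no errors" in critique.lower():
            break  # verifier is satisfied; stop refining
        answer = llm("You are the refiner (R). Fix the solution using the critique.",
                     f"Question: {question}\nSolution: {answer}\nCritique: {critique}")
    return answer
```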

Ronnie Clark (@ronnieclark__)'s Twitter Profile Photo

We trained a computer vision agent that, given a prompt + image, can choose the best specialist model to solve a task, or even chain together models for more complex tasks. #CVPR2025, w/ Microsoft AI. See more: arxiv.org/pdf/2412.09612
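As a rough illustration of the routing idea (not the paper's agent, which learns these decisions): a toy router picks specialist models from a registry and chains them. The registry, keyword rules, and function names below are all made-up assumptions.

```python
# Toy sketch of routing a (prompt, image) to specialist vision models and
# chaining them; all names and rules here are illustrative assumptions.
from typing import Any, Callable

def detect(x: Any) -> Any:
    raise NotImplementedError("plug in an object detector")

def segment(x: Any) -> Any:
    raise NotImplementedError("plug in a segmentation model")

def caption(x: Any) -> Any:
    raise NotImplementedError("plug in an image captioner")

SPECIALISTS: dict[str, Callable] = {
    "detect": detect, "segment": segment, "caption": caption,
}

def route(prompt: str) -> list[str]:
    """Naive keyword router; the trained agent learns this choice instead."""
    plan = [name for name, keys in [
        ("detect", ("where", "find", "locate")),
        ("segment", ("mask", "outline", "segment")),
    ] if any(k in prompt.lower() for k in keys)]
    return plan or ["caption"]  # fall back to a single captioning step

def run(prompt: str, image: Any) -> Any:
    result = image
    for step in route(prompt):  # chain specialists for composite tasks
        result = SPECIALISTS[step](result)
    return result
```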

Jerred Chen (@jerredchen)'s Twitter Profile Photo

Motion blur typically breaks SLAM/SfM algorithms - but what if blur was actually the key to super-robust motion estimation? In our new work, Image as an IMU, Ronnie Clark and I demonstrate exactly how a single motion-blurred image can be used to our advantage. 🧵1/9
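For a back-of-envelope sense of why a blur streak encodes motion (this is not the method from the paper, just the basic small-angle relation, with all numbers assumed): a streak of b pixels, at focal length f pixels and exposure time T, implies roughly atan(b/f) radians of rotation during the exposure.

```python
# Back-of-envelope: blur streak length -> mean angular velocity.
# Illustrative relation only, not the "Image as an IMU" method;
# every value below is an assumption.
import math

b_pixels = 20.0      # measured blur streak length (assumed)
f_pixels = 1200.0    # focal length in pixels (assumed)
T_exposure = 0.02    # shutter time in seconds (assumed)

angle = math.atan2(b_pixels, f_pixels)   # radians swept during exposure
omega = angle / T_exposure               # mean angular velocity, rad/s
print(f"~{omega:.2f} rad/s ({math.degrees(omega):.1f} deg/s)")
```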

(a)bram (@abramschonfeldt)'s Twitter Profile Photo

🚨 New preprint on arXiv from OxWearables and PIXL @ Oxford! Can vision-language models (VLMs) help automatically annotate physical activity in large real-world wearable datasets (⌚️+📷, 🇬🇧 + 🇨🇳)? 📄 arxiv.org/abs/2505.03374 🧵1/7
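A minimal sketch of what such an annotation setup might look like (an illustration under assumptions, not the paper's code): prompt a VLM with a wearable-camera frame and a fixed label set. The `vlm` client and the labels are hypothetical placeholders, not the study's taxonomy.

```python
# Illustrative sketch: ask a VLM to label the wearer's activity from a
# body-worn camera frame. `vlm` and ACTIVITY_LABELS are assumptions.
ACTIVITY_LABELS = ["sitting", "walking", "cycling", "household chores"]

def vlm(image_path: str, prompt: str) -> str:
    raise NotImplementedError("plug in your VLM client here")

def annotate_frame(image_path: str) -> str:
    prompt = (
        "This image is from a body-worn camera. "
        f"Which activity best describes the wearer? Options: {ACTIVITY_LABELS}. "
        "Answer with one option only."
    )
    return vlm(image_path, prompt)
```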
