Pablo Montalvo (@m_olbap) 's Twitter Profile
Pablo Montalvo

@m_olbap

ML Engineer @HuggingFace. Previously ML R&D @ Rakuten. Computer vision and NLP mixer, ex-physicist. Dice thrower, dreamer, learner. He/him. Usually friendly :)

ID: 1210486663740346368

Joined: 27-12-2019 09:05:02

114 Tweets

831 Followers

347 Following

Omar Sanseviero (@osanseviero) 's Twitter Profile Photo

Aaaand 🥁 we just shipped PaliGemma 2! New open vision models - Your friendly sizes 3B, 10B, 28B - 3 different resolutions (224x224, 448x448, and 896x896) - Easily fine-tunable models Models: hf.co/collections/go… Google blog: developers.googleblog.com/en/introducing…
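
As a quick illustration of what "easily fine-tunable" looks like in practice, here is a minimal, hedged sketch of loading one of the released checkpoints with transformers. The model id and the "caption en" prompt prefix are assumptions based on the announced 3B / 224x224 variant; check the linked collection for the exact names.

```python
# Minimal sketch: load a PaliGemma 2 checkpoint and caption an image.
# The model id below is an assumption for the 3B, 224px variant.
import torch
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma2-3b-pt-224"  # hypothetical id, see the collection
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)

image = Image.open("path/to/image.jpg").convert("RGB")  # placeholder image path

# PaliGemma-style prompting uses a short task prefix; exact prompt handling
# may differ slightly across transformers versions.
inputs = processor(text="caption en", images=image, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(output[0], skip_special_tokens=True))
```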

Physical Intelligence (@physical_int) 's Twitter Profile Photo

Many of you asked for code & weights for π₀, we are happy to announce that we are releasing π₀ and pre-trained checkpoints in our new openpi repository! We tested the model on a few public robots, and we include code for you to fine-tune it yourself.

Dana Aubakirova (@daubakirovaa) 's Twitter Profile Photo

Let's goooo! We’ve ported robot foundation models to Hugging Face LeRobot! 🎉 Meet π0 & π0-FAST, developed by Physical Intelligence Vision-Language-Action (VLA) models for generalist robotic intelligence. 🦾
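
For context, a hedged sketch of what using the ported policy from LeRobot might look like. The import path, checkpoint id, and observation keys below are assumptions based on the announcement, not a confirmed API; consult the LeRobot docs for the real names.

```python
# Hedged sketch: load the ported pi0 policy via LeRobot and query one action.
# Module path and checkpoint id are assumptions.
import torch
from lerobot.common.policies.pi0.modeling_pi0 import PI0Policy  # assumed path

policy = PI0Policy.from_pretrained("lerobot/pi0")  # assumed checkpoint id
policy.eval()

# LeRobot policies take a batch of observations keyed by feature name; the
# camera, state, and task keys here are illustrative and robot-dependent.
observation = {
    "observation.images.top": torch.rand(1, 3, 224, 224),
    "observation.state": torch.rand(1, 14),
    "task": ["pick up the cube"],
}
with torch.no_grad():
    action = policy.select_action(observation)
print(action.shape)
```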

Chelsea Finn (@chelseabfinn) 's Twitter Profile Photo

Our first open-source release at Pi 🤖 - π₀ and π₀-FAST model weights - code for model, on-robot inference, & fine-tuning - fine-tuned checkpoints that can be used out-of-the-box on ALOHA and DROID platforms github.com/Physical-Intel… Excited to see how people use it!

Arthur Zucker (@art_zucker) 's Twitter Profile Photo

A quick update on the future of the `transformers` library! In order to provide a source of truth for all models, we are working with the rest of the ecosystem to make the modeling code the standard. A joint effort with vLLM, LlamaCPP, SGLang, Mlx, Qwen, Glm, Unsloth, Axolotl,

Lysandre (@lysandrejik) 's Twitter Profile Photo

The Transformers library is undergoing its largest pivot to date 🙌 It now cements its role as the central model definition, irrespective of the backend and runner. One ground truth to bring more reliability across the ecosystem. Why is this important?
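
To make the "central model definition" idea concrete, here is a small sketch of the same Hub checkpoint being loaded through transformers (the source of truth for the architecture) and then served through a backend such as vLLM. The checkpoint id is just a small example model, not something named in the tweet.

```python
# One model definition, multiple backends: the checkpoint's architecture is
# defined by its transformers modeling code; backends like vLLM consume the
# same weights for optimized inference.
from transformers import AutoModelForCausalLM, AutoTokenizer
from vllm import LLM, SamplingParams

checkpoint = "Qwen/Qwen2.5-0.5B-Instruct"  # example checkpoint, chosen for size

# 1) Load through transformers, the reference model definition.
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# 2) Load the very same checkpoint through vLLM for serving.
llm = LLM(model=checkpoint)
outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=16))
print(outputs[0].outputs[0].text)
```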

Pablo Montalvo (@m_olbap) 's Twitter Profile Photo

Ever wondered how models actually see an image? Been playing with some visualizations of patch extraction, token layouts, how they affect predictions too. Planning a short visual deep dive comparing how different models process images. Would love thoughts before I go on.
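
A minimal sketch of the kind of visualization described here: resize an image to a fixed resolution, cut it into ViT-style square patches, and render the grid of patch "tokens". The 224px resolution and 16px patch size are illustrative defaults, not tied to any particular model.

```python
# Visualize how an image is cut into square patches before being turned into
# tokens. Resolution and patch size are example values.
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

image_size, patch_size = 224, 16  # typical ViT-style settings
image = Image.open("path/to/image.jpg").convert("RGB").resize((image_size, image_size))
pixels = np.asarray(image)

n = image_size // patch_size  # 14 patches per side -> 196 patch tokens
fig, axes = plt.subplots(n, n, figsize=(6, 6))
for row in range(n):
    for col in range(n):
        patch = pixels[row * patch_size:(row + 1) * patch_size,
                       col * patch_size:(col + 1) * patch_size]
        axes[row, col].imshow(patch)
        axes[row, col].axis("off")
fig.suptitle(f"{n * n} patch tokens ({patch_size}x{patch_size} px each)")
plt.show()
```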