Adam Golinski (@adam_golinski)'s Twitter Profile
Adam Golinski

@adam_golinski

ML research @Apple, prev @OxCSML @InfAtEd, part of @MLinPL & @polonium_org 🇵🇱, sometimes funny

ID: 2922540907

Website: http://adamgol.me/ · Joined: 08-12-2014 09:04:27

796 Tweets

2.2K Followers

3.3K Following

Enrico Fini (@donkeyshot21) 's Twitter Profile Photo

We release AIMv2, the second iteration of the AIM family of large autoregressive vision encoders. This time we bring multimodality into the game 🔥

Paper: arxiv.org/abs/2411.14402
Repo: github.com/apple/ml-aim
Model Gallery: huggingface.co/collections/ap…
Eugene Ndiaye (@eugene_ndiaye) 's Twitter Profile Photo

MLSS is coming to Senegal 🇸🇳 in 2025! 🌍
📍 AIMS Mbour, Senegal
📅 June 23 - July 4, 2025
An international summer school to explore, collaborate, and deepen your understanding of machine learning in a unique and welcoming environment.
Details: mlss-senegal.github.io

Pau Rodríguez (@prlz77) 's Twitter Profile Photo

Thrilled to share the latest work from our team at Apple, where we achieve interpretable and fine-grained control of LLMs and diffusion models via Activation Transport 🔥
📄 arxiv.org/abs/2410.23054
🛠️ github.com/apple/ml-act
1/9 🧵

Shuangfei Zhai (@zhaisf) 's Twitter Profile Photo

We attempted to make Normalizing Flows work really well, and we are happy to report our findings in paper arxiv.org/pdf/2412.06329, and code github.com/apple/ml-tarfl…. [1/n]

Andreas Kirsch 🇺🇦 (@blackhc) 's Twitter Profile Photo

Have you wondered why I've posted all these nice plots and animations? 🤔

Well, the slides for my lectures on (Bayesian) Active Learning, Information Theory, and Uncertainty are online now! They cover quite a bit from basic information theory to some recent papers 🥳
Richard Ngo (@richardmcngo) 's Twitter Profile Photo

I recently gave a short talk at the International Workshop on Reimagining Democracy. The first half focused on feeling the AGI. The second half briefly outlined a new research direction I'm very excited about: leveraging AI to build unprecedentedly trustworthy institutions.

Arwen Bradley (@arwenbradley) 's Twitter Profile Photo

When does composition of diffusion models “work”? Prior work (Du et al., 2023; Liu et al., 2022) has shown that composition via linear score combination can sometimes compose concepts like “dog” and “oil painting”, but why? Does it always work? arxiv.org/abs/2502.04549

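The linear score combination that tweet asks about is easy to sketch: summing the scores of two densities gives the score of their (unnormalized) product, so Langevin dynamics on the summed score samples from that product. A minimal 1-D sketch with Gaussians of my own choosing (an illustration of the general mechanism, not the paper's setup):

```python
import numpy as np

# Score (∇x log p) of a 1-D Gaussian N(mu, sigma^2).
def gaussian_score(x, mu, sigma):
    return (mu - x) / sigma**2

# Linear score combination: the sum of two scores is the score of the
# (unnormalized) *product* of the two densities.
def combined_score(x):
    return gaussian_score(x, -2.0, 1.0) + gaussian_score(x, 2.0, 1.0)

# Unadjusted Langevin sampling driven by the combined score.
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)
step = 0.05
for _ in range(2_000):
    x = x + step * combined_score(x) + np.sqrt(2 * step) * rng.standard_normal(x.size)

# The product of N(-2, 1) and N(2, 1) is proportional to N(0, 0.5),
# so the samples should concentrate around 0 with variance near 0.5.
print(round(float(x.mean()), 1))
```

For Gaussians the product density is tractable, which makes the sanity check easy; the paper's question is precisely when this kind of composition remains meaningful for learned diffusion scores.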
Dan Busbridge (@danbusbridge) 's Twitter Profile Photo

Reading "Distilling the Knowledge in a Neural Network" left me fascinated and wondering: "If I want a small, capable model, should I distill from a more powerful model, or train from scratch?" Our distillation scaling law shows, well, it's complicated... 🧵 arxiv.org/abs/2502.08606

Rin Metcalf Susa (@rinmetcalfsusa) 's Twitter Profile Photo

🚀 We're hiring an ML Researcher! 🚀
If you're an expert in LLM alignment & personalization and want to work on a world-class research team, apply here 👉 lnkd.in/gU9yeivi
Know someone who’d be a great fit? Tag them!
#MachineLearning #AI #Apple

Aayush Karan (@aakaran31) 's Twitter Profile Photo

Can machine learning models predict their own errors 🤯 ?

In a new preprint w/ Apple collaborators Aravind Gollakota, Parikshit Gopalan, Charlotte Peale, and Udi Wieder, we present a theory of loss prediction and show an equivalence with algorithmic fairness!

A thread (1/n):
ML in PL (@mlinpl) 's Twitter Profile Photo

We are happy to welcome our next speaker to MLSS 2025!

🎤 Regina Barzilay is a School of Engineering Distinguished Professor of AI & Health in the Department of Computer Science and the AI Faculty Lead at MIT Jameel Clinic. She develops machine learning methods for drug
Martin Klissarov (@martinklissarov) 's Twitter Profile Photo

Here is an RL perspective on understanding LLMs for decision making. Are LLMs best used as policies, rewards, or transition functions? How do you fine-tune them? Can LLMs explore and exploit? 🧵 Join us down this rabbit hole... (ICLR 2025 paper, done at ML Research)

Teresa Huang (@teresanhuang) 's Twitter Profile Photo

Is the mystery behind the performance of Mamba🐍  keeping you awake at night? We got you covered! Our ICML2025 paper demystifies input selectivity in Mamba from the lens of approximation power, long-term memory, and associative recall capacity.
arxiv.org/abs/2506.11891
masani (@mohammadhamani) 's Twitter Profile Photo

Why does RL struggle with tasks requiring long reasoning chains? Because “bumping into” a correct solution becomes exponentially less likely as the number of reasoning steps grows. We propose an adaptive backtracking algorithm: AdaBack. 1/n
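The "exponentially less likely" claim is easy to check with back-of-the-envelope numbers (mine, not the paper's): if each step of a chain is sampled correctly with probability p, an unguided policy completes a k-step chain with probability p^k, which collapses quickly even for a fairly reliable per-step success rate.

```python
# Toy illustration (hypothetical numbers): chance of sampling an entire
# correct k-step reasoning chain by luck, assuming each step succeeds
# independently with probability p.
p = 0.9
for k in (5, 20, 50):
    print(k, round(p**k, 4))
# 5 0.5905
# 20 0.1216
# 50 0.0052
```

At 50 steps the reward signal is effectively never reached, which is the sparsity problem an adaptive-backtracking scheme like AdaBack is aimed at.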

Freddie Bickford Smith (@fbickfordsmith) 's Twitter Profile Photo

There’s a lot of confusion around uncertainty in machine learning.

We argue the "aleatoric vs epistemic" view has contributed to this and present a rigorous alternative.

#ICML2025 with Jannik Kossen @eleanortrollope Mark van der Wilk @adamefoster Tom Rainforth

1/5
Michael Kirchhof (@mkirchhof_) 's Twitter Profile Photo

Can LLMs access and describe their own internal distributions? With my colleagues at Apple, I invite you to take a leap forward and make LLM uncertainty quantification what it can be.
📄 arxiv.org/abs/2505.20295
💻 github.com/apple/ml-selfr…
🧵1/9

Michael Kirchhof (@mkirchhof_) 's Twitter Profile Photo

I'll present my view on the future of uncertainties in LLMs and vision models at ICML Conference, in panel discussions, posters, and workshops. Reach out if you wanna chat :)
Here's everything from me and other folks at Apple: machinelearning.apple.com/updates/apple-…
Andrea Santilli (@teelinsan) 's Twitter Profile Photo

Uncertainty quantification (UQ) is key for safe, reliable LLMs... but are we evaluating it correctly?

🚨 Our ACL2025 paper finds a hidden flaw: if both UQ methods and correctness metrics are biased by the same factor (e.g., response length), evaluations get systematically skewed
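A toy simulation of that failure mode (my own construction, not the paper's experiments): let response length drive both the "correctness" label and the uncertainty score, while the score carries no genuine error signal. A naive correlation-style evaluation still reports the UQ method as informative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Response length acts as a shared confounder.
length = rng.standard_normal(n)

# "Correctness" judged by a length-biased metric: longer → more often "correct".
correct = rng.random(n) < 1 / (1 + np.exp(-length))

# A UQ score that is pure length bias plus noise: longer → lower uncertainty.
# Crucially, it knows nothing about correctness beyond length.
uq_score = -length + rng.standard_normal(n)

# Correlation between confidence (-uq_score) and correctness: clearly
# positive, purely through the shared length bias.
signal = np.corrcoef(-uq_score, correct.astype(float))[0, 1]
print(signal > 0.2)
```

Because both sides of the evaluation inherit the same bias, the apparent signal survives any amount of data; deconfounding the correctness metric (or the score) is the only fix, which is the point the paper makes.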