
Angelos Katharopoulos
@angeloskath
Machine Learning Research @Apple. Previously PhD student at @idiap_ch and @EPFL. Interested in all things machine learnable
ID: 874169451356278784
https://angeloskath.github.io/ 12-06-2017 07:40:13
293 Tweets
2.2K Followers
260 Following






The latest mlx-lm supports AWQ (activation-aware weight quantization), thanks to Alex Barron! Use it to make 4-bit quantized models that are nearly as good as full-precision:
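A minimal sketch of the basic 4-bit quantization flow with mlx-lm's convert API, assuming the example repo id below; the AWQ recipe layers a calibration pass on top of this flow via its own entry point in recent mlx-lm releases, so check the mlx-lm docs for the exact AWQ command:

```python
# Sketch: quantize a model to 4 bits with mlx-lm's convert API.
# The Hugging Face repo id is only an example (assumption); AWQ adds a
# calibration step on top of this basic flow via a dedicated entry point.
from mlx_lm import convert

convert(
    "Qwen/Qwen3-4B",          # example Hugging Face repo id (assumption)
    mlx_path="qwen3-4b-4bit",  # local output directory
    quantize=True,
    q_bits=4,                  # 4-bit weights
    q_group_size=64,           # mlx-lm's default quantization group size
)
```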


Qwen3 and the Qwen3 MoEs are already supported in the latest mlx-lm, thanks to Prince Canuma and Gökdeniz Gülmez:

pip install -U mlx-lm

Awesome that Qwen ships a model for every device:
- iPhone: 0.6B, 4B
- MacBook: 8B, 30B, 3B/30B MoE
- M2, M3 Ultra: 22B/235B MoE
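A minimal sketch of loading one of these Qwen3 checkpoints and generating with mlx-lm after the pip install above; the mlx-community repo id is an assumption, so substitute whichever converted model you want:

```python
# Sketch: load an MLX-converted Qwen3 model and generate text with mlx-lm.
# The repo id is an assumption; any MLX-converted Qwen3 checkpoint works.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen3-4B-4bit")
prompt = "Explain activation-aware weight quantization in one sentence."
text = generate(model, tokenizer, prompt=prompt, max_tokens=128)
print(text)
```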






Better together 🤝 LM Studio's new mlx-engine architecture is an effort to unify the brilliant work of Awni Hannun and Angelos Katharopoulos (mlx-lm), Prince Canuma (mlx-vlm), and community contributors. This pattern is designed to be extended, and we have a good first issue on the repo! 🍎




Exciting MLX updates at #WWDC25: a new webpage and two sessions for Python and Swift devs!
- New MLX webpage: mlx-framework.org
- Getting started with MLX by Awni Hannun: developer.apple.com/videos/play/ww…
- Explore LLMs on Apple silicon by Angelos Katharopoulos: developer.apple.com/videos/play/ww…

Honestly, if you want to get up to speed quickly on using LLMs locally with MLX, the #WWDC25 session by Angelos Katharopoulos has it all:


