Shashank Prasanna
@shshnkp
AI/ML Evangelist (@apple). I talk/write/teach/build AI/ML. Recreational runner 🏃‍♂️. Passionate about AI. Math. Physics. GPU. My own opinions ↓
ID: 17752403
https://shashankprasanna.com
Joined 30-11-2008 10:37:43
282 Tweets
1.1K Followers
173 Following
Latest mlx + mlx-lm have much faster prompt processing speeds for MoEs, thanks to some magic from Angelos Katharopoulos.
pip install -U mlx-lm
DeepSeek v3 → up to 2x faster
Mixtral → up to 3.5x faster
Llama 4 → up to 2x faster
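For anyone who wants to try the faster MoE path, here's a minimal sketch using the mlx-lm Python API; the mlx-community repo name is an assumption, swap in whichever MoE checkpoint you actually use:

```python
# Minimal sketch: generating with a MoE model via mlx-lm's Python API.
# The model repo below is an assumption; any MLX-converted MoE checkpoint works.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Mixtral-8x7B-Instruct-v0.1-4bit")  # assumed repo name

prompt = "Explain mixture-of-experts models in two sentences."
response = generate(model, tokenizer, prompt=prompt, verbose=True)  # prompt processing benefits from the new kernels
print(response)
```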
Exciting MLX updates at #WWDC25: a new webpage and 2 sessions for Python and Swift devs!
New MLX webpage: mlx-framework.org
Getting started with MLX, by Awni Hannun: developer.apple.com/videos/play/ww…
Explore LLMs on Apple silicon, by Angelos Katharopoulos: developer.apple.com/videos/play/ww…
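As a taste of the basics the getting-started session covers, here's a tiny MLX sketch of my own (not taken from the session): arrays live in unified memory and computation is lazy until you evaluate.

```python
# Minimal MLX example: lazy arrays on unified memory.
import mlx.core as mx

a = mx.array([1.0, 2.0, 3.0])
b = mx.exp(a) + a   # builds a lazy computation graph, nothing runs yet
mx.eval(b)          # forces evaluation on the default device
print(b)
```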
We have two awesome new videos on MLX at #WWDC25 this year:
- Learn all about MLX.
- Learn all about running LLMs locally with MLX.
Angelos Katharopoulos, Shashank Prasanna, myself, and others worked super hard to make these. Check them out, and I hope you find them useful!
A lot of care went into this session by Angelos Katharopoulos. You can do so much with a single one-line command using the mlx-lm CLI:
(1) quantizing: `mlx_lm.convert`
(2) finetuning: `mlx_lm.lora`
(3) fusing and uploading to Hugging Face: `mlx_lm.fuse`
Entire notebook: github.com/ml-explore/mlx…
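A rough sketch of the quantize step from Python, assuming the `convert` helper that mlx-lm exports; the repo names are placeholders, and the CLI lines in the comments are only examples, so check the mlx-lm docs for exact arguments:

```python
# Sketch: quantize a Hugging Face model for MLX using mlx-lm's convert helper.
# Repo names are placeholders.
from mlx_lm import convert

convert(
    "mistralai/Mistral-7B-Instruct-v0.3",       # source Hugging Face repo (placeholder)
    quantize=True,                               # 4-bit quantization by default
    upload_repo="my-username/Mistral-7B-4bit",   # optional: push the converted model (placeholder)
)

# The finetune and fuse steps are typically run from the CLI, e.g.:
#   mlx_lm.lora --model <path-or-repo> --train --data <data-dir>
#   mlx_lm.fuse --model <path-or-repo> --upload-repo <hf-repo>
```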
MLX 🤝 GPUs! MLX now has an NVIDIA CUDA backend!