Awni Hannun (@awnihannun) 's Twitter Profile
Awni Hannun

@awnihannun

Machine learning research @apple

ID: 245262377

Website: https://awnihannun.com/ · Joined: 31-01-2011 08:05:27

3.3K Tweets

35.35K Followers

286 Following

mzba (@limzba) 's Twitter Profile Photo

Finally published my first on-device LLM iOS app powered by MLX. apps.apple.com/app/textmates/… One year ago there were barely any small LLMs that could be useful, but now a 0.6B model with RAG can already do what we used to rely on cloud providers for. I feel so lucky to
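A minimal sketch of the retrieval step behind such a RAG setup, in pure Python with toy hand-written embeddings; the document store, vectors, and helper names are all hypothetical, and a real app would embed text with a small on-device model:

```python
from math import sqrt

# Toy document store with made-up 3-dimensional embeddings.
DOCS = {
    "reset password": [0.9, 0.1, 0.0],
    "billing cycle":  [0.1, 0.8, 0.2],
    "export data":    [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, k=1):
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(DOCS, key=lambda d: cosine(DOCS[d], query_vec), reverse=True)
    return ranked[:k]

def build_prompt(query_text, query_vec):
    """Prepend retrieved context so a small model can answer grounded."""
    context = "\n".join(retrieve(query_vec, k=1))
    return f"Context:\n{context}\n\nQuestion: {query_text}"
```

The point of the pattern is that the 0.6B model never has to memorize the answers; it only has to read the retrieved snippet placed ahead of the question.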

Shashank Prasanna (@shshnkp) 's Twitter Profile Photo

A lot of care went into this session by Angelos Katharopoulos. You can do so much with a single-line command using the MLX LM CLI: (1) quantizing: `mlx_lm.convert` (2) fine-tuning: `mlx_lm.lora` (3) fusing and uploading to Hugging Face: `mlx_lm.fuse`. Entire notebook: github.com/ml-explore/mlx…
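Sketched as commands, the three steps might look like the following; the model name, data path, and repo are placeholders, and the exact flags may vary by `mlx_lm` version, so check each command's `--help` for the options your install supports:

```shell
# (1) Quantize a Hugging Face model into MLX format (-q enables quantization)
mlx_lm.convert --hf-path Qwen/Qwen2.5-0.5B-Instruct -q

# (2) Fine-tune with LoRA on a local dataset (data path is a placeholder)
mlx_lm.lora --model Qwen/Qwen2.5-0.5B-Instruct --train --data ./data

# (3) Fuse the trained adapters into the base weights and upload the result
mlx_lm.fuse --model Qwen/Qwen2.5-0.5B-Instruct \
  --adapter-path adapters --upload-repo my-user/my-fused-model
```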

Ivan Fioravanti ᯅ (@ivanfioravanti) 's Twitter Profile Photo

Achieving 90.5% accuracy on a classification task with Qwen 0.6B after just 12 minutes of fine-tuning with MLX means you have found great hyperparams!

Cheng (@zcbenz) 's Twitter Profile Photo

Last year, when I decided to get involved in ML, I faced two choices: buy some NVIDIA hardware, or a Mac with 128GB of RAM. I chose the Mac because it has more uses, and there was MLX, enabling me to do real machine learning. NVIDIA hardware was considered because of PyTorch,

mzba (@limzba) 's Twitter Profile Photo

I quickly created a Swift clone of tiny agents that can directly load an MLX model to make MCP calls. The biggest issue with this kind of agent-based workflow on MLX is still prompt-processing speed; I think it would only be useful for background task processing for now.
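A rough sketch of the tool-calling loop such an agent runs, written in Python rather than Swift and with a hard-coded tool registry instead of real MCP discovery; the JSON call format and all names here are hypothetical (actual MCP uses JSON-RPC messages between client and server):

```python
import json

# Hypothetical tool registry; a real MCP client would discover tools
# from the server rather than hard-coding them.
TOOLS = {
    "add": lambda args: args["a"] + args["b"],
}

def parse_tool_call(text):
    """Treat a reply that is a JSON object with a 'tool' key as a call."""
    try:
        obj = json.loads(text)
        return obj if "tool" in obj else None
    except (ValueError, TypeError):
        return None

def run_agent(model_step, user_msg, max_turns=4):
    """Drive a tool-calling loop: ask the model, execute any tool call it
    emits, feed the result back, and return its final plain-text answer."""
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(max_turns):
        reply = model_step(messages)      # one LLM call (stubbed in tests)
        call = parse_tool_call(reply)
        if call is None:
            return reply                  # plain answer: we're done
        result = TOOLS[call["tool"]](call["args"])
        messages.append({"role": "tool", "content": str(result)})
    return "max turns reached"
```

Each turn re-sends the growing message list as a prompt, which is exactly why prefill speed dominates the cost of agent loops on-device.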

Molly Cantillon (@mollycantillon) 's Twitter Profile Photo

Everything will be local. Yesterday I gave a talk about Real-World Applications of MLX and built a fast on-device semantic search index over the Apple WWDC 2025 docs. Open-sourced the code for anyone curious! github.com/mcantillon21/l…
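The core of such a semantic index can be sketched in a few lines of pure Python: normalize the document embeddings once at build time, then each query reduces to dot products. The function names and vectors here are illustrative, not taken from the released code:

```python
from math import sqrt

def build_index(doc_vectors):
    """Normalize embeddings once so queries reduce to dot products."""
    index = {}
    for doc_id, vec in doc_vectors.items():
        norm = sqrt(sum(x * x for x in vec)) or 1.0
        index[doc_id] = [x / norm for x in vec]
    return index

def search(index, query_vec, k=3):
    """Return (doc_id, score) pairs sorted by cosine similarity."""
    norm = sqrt(sum(x * x for x in query_vec)) or 1.0
    q = [x / norm for x in query_vec]
    scored = [(doc_id, sum(a * b for a, b in zip(vec, q)))
              for doc_id, vec in index.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:k]
```

A real index over the WWDC docs would swap the toy lists for embeddings from an on-device model and a vectorized dot product, but the ranking logic is the same.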

davepoon (@davepoon) 's Twitter Profile Photo

Inspired by the incredible real-time AI vision demos and ngxson's live SmolVLM + llama.cpp project, I wanted to see what was possible on Apple Silicon! 🍏 Super excited to share my SmolVLM Real-Time Webcam Demo with MLX-VLM! 😙

Filip Strand (@filipstrand) 's Twitter Profile Photo

Excited to finally release v0.8.0 of MFLUX! `uv tool install --upgrade mflux` github.com/filipstrand/mf… This release is packed! It has been a few weeks in the making and builds on top of the features from 0.7.0, so let's go through each major feature below: #MLX #Apple #FLUX

Ivan Fioravanti ᯅ (@ivanfioravanti) 's Twitter Profile Photo

Instructions to run Open WebUI in Apple Container in a single screenshot, with LM Studio for MLX and ollama.

Main differences (simpler than Docker!):
- no port mapping: each VM gets its own IP address
- no named volumes
- host is reachable from containers at 192.168.64.1
Qwen (@alibaba_qwen) 's Twitter Profile Photo

🚀 Excited to launch Qwen3 models in MLX format today!

Now available in 4 quantization levels: 4bit, 6bit, 8bit, and BF16 — Optimized for MLX framework. 

👉 Try it now!

Hugging Face: huggingface.co/collections/Qw…
ModelScope: modelscope.cn/collections/Qw…
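As a toy illustration of what a 4-bit (or 6-bit, 8-bit) level means, here is a minimal affine group-quantization sketch in pure Python; MLX's actual scheme differs in details such as group size, scale storage, and bit packing:

```python
def quantize_group(values, bits=4):
    """Affine-quantize one group of floats to `bits`-bit integers,
    storing a per-group scale and zero point (the group minimum)."""
    lo, hi = min(values), max(values)
    levels = (1 << bits) - 1              # 15 levels for 4-bit
    scale = (hi - lo) / levels or 1.0     # guard against a constant group
    q = [round((v - lo) / scale) for v in values]
    return q, scale, lo

def dequantize_group(q, scale, zero):
    """Reconstruct approximate floats from the stored integers."""
    return [zero + qi * scale for qi in q]
```

Each weight shrinks from 16 bits (BF16) to 4, at the cost of a small per-group reconstruction error bounded by half the scale, which is why more levels (6-bit, 8-bit) trade size for fidelity.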
Junyang Lin (@justinlin610) 's Twitter Profile Photo

A small update, but it really took us quite a long time, as there are so many models and we even needed a Mac Studio for the testing. Hope they are helpful for MLX users!