Faisal Amir (@urmauur)'s Twitter Profile
Faisal Amir

@urmauur

UI Engineer - Making pretty things do useful stuff

ID: 256966153

http://www.urmauur.com

Joined: 24-02-2011 12:49:32

14.14K Tweets

333 Followers

300 Following

luis (@lgaa201)'s Twitter Profile Photo

πŸ‘‹ Jan works wonderfully, performs better, and deservedly sits at the top on GitHub. The results shown are with the CPU's integrated GPU turned on versus off.

πŸ‘‹ Jan (@jandotai)'s Twitter Profile Photo

Open-source models can do Deep Research too. This video shows a full research report created by Jan-nano. To try it:
- Get Jan-nano from Jan Hub
- In Settings, turn on MCP Servers and Serper API
- Paste your Serper API key
Your deep research assistant is ready.
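To make the setup concrete, here is a minimal sketch of the kind of web-search call the Serper key unlocks. Jan's MCP integration performs this for you; the endpoint, header, and response fields below follow Serper's public API and are worth verifying against their docs.

```python
# Minimal sketch: what the Serper API key enables under the hood.
# Jan's MCP integration handles this for you; shown here only to
# illustrate the kind of web-search call the assistant makes.
import json
import urllib.request

SERPER_API_KEY = "YOUR_SERPER_API_KEY"  # paste the same key you give Jan

def serper_search(query: str) -> dict:
    """POST a search query to Serper and return the JSON result."""
    req = urllib.request.Request(
        "https://google.serper.dev/search",
        data=json.dumps({"q": query}).encode("utf-8"),
        headers={"X-API-KEY": SERPER_API_KEY, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

results = serper_search("open-source deep research models")
for hit in results.get("organic", [])[:3]:
    print(hit.get("title"), "->", hit.get("link"))
```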

Lysandre (@lysandrejik)'s Twitter Profile Photo

BOOOM! transformers now has a baked-in HTTP server w/ an OpenAI-spec-compatible API. Launch it with `transformers serve` and connect your favorite apps. Here I'm running πŸ‘‹ Jan with local transformers and hot-swappable models. There is preliminary tool-call support as well!
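For anyone wanting to try this, a minimal sketch of connecting an OpenAI-style client to the local server. The base URL and model id are assumptions: check the `transformers serve` startup output for the actual address, and substitute any model available on your machine.

```python
# Sketch of pointing an OpenAI-style client at `transformers serve`.
# The base_url and model id are assumptions: check the server's startup
# log for the actual address, and use any model you have locally.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed default; see the serve command's output
    api_key="unused",  # local server; a placeholder key is typically fine
)

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-0.5B-Instruct",  # hot-swappable: any local model id
    messages=[{"role": "user", "content": "Say hello from a local model."}],
)
print(response.choices[0].message.content)
```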

Victor M (@victormustar)'s Twitter Profile Photo

what a time to be alive: a local 4B model (Jan-nano) + Web Search tool is already amazing to get the info you need with no overhead πŸš€

Alan Dao (@alandao_ai)'s Twitter Profile Photo

I'm glad to announce our latest research: 🍎 Lucy. Lucy is a model trained specifically to optimize machine-generated task vectors. Let's say you have a reasoning model where everything happens inside <think>... etc ...</think>. Essentially, the moment the <think> tag ends,
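As a rough illustration of the pattern described here, a small sketch that separates the hidden reasoning inside <think>...</think> from the final answer that follows it. The tag names come from the tweet; everything else is hypothetical.

```python
# Hedged sketch of the pattern the tweet describes: with reasoning models,
# everything before the closing </think> tag is internal deliberation, and
# the visible answer is whatever follows it. Tag names follow the tweet.
def split_reasoning(output: str) -> tuple[str, str]:
    """Split model output into (hidden reasoning, final answer)."""
    marker = "</think>"
    head, sep, tail = output.partition(marker)
    if not sep:  # no closing tag: treat the whole output as the answer
        return "", output.strip()
    reasoning = head.replace("<think>", "", 1).strip()
    return reasoning, tail.strip()

sample = "<think>The user greets me; respond politely.</think>Hello!"
thinking, answer = split_reasoning(sample)
print("reasoning:", thinking)
print("answer:", answer)
```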

Jerry Howell (@j3rryh0well)'s Twitter Profile Photo

OK, so: Node.js MCP servers do NOT work on LM Studio, but I was able to get the memory/knowledge graph I use working with πŸ‘‹ Jan. Only certain LLMs have tool use enabled on Jan, though. Qwen 3 4B is pretty fast, but the add-on also adds to inference time. TLDR: If you want to
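For context, this is the JSON shape commonly used to register a Node.js MCP server with a client, emitted here from Python. The package name @modelcontextprotocol/server-memory and the config layout follow the usual MCP convention; Jan's own settings UI may expect something slightly different.

```python
# Hedged sketch: the common JSON shape for registering a Node.js MCP server.
# The package name and config layout follow the usual MCP convention;
# Jan's own settings UI may differ.
import json

config = {
    "mcpServers": {
        "memory": {
            "command": "npx",  # Node.js launcher, hence the LM Studio/Jan difference
            "args": ["-y", "@modelcontextprotocol/server-memory"],
        }
    }
}

print(json.dumps(config, indent=2))
```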

πŸ‘‹ Jan (@jandotai)'s Twitter Profile Photo

Jan v0.6.6 is out: Jan now runs fully on llama.cpp.
- Cortex is gone, local models now run on Georgi Gerganov's llama.cpp
- Toggle between llama.cpp builds
- Hugging Face added as a model provider
- Hub enhanced
- Images from MCPs render inline in chat
Update Jan or grab the latest.

πŸ‘‹ Jan (@jandotai)'s Twitter Profile Photo

Introducing Jan-v1: a 4B model for web search, an open-source alternative to Perplexity Pro. In our evals, Jan-v1 delivers 91% SimpleQA accuracy, slightly outperforming Perplexity Pro while running fully locally. Use cases:
- Web search
- Deep Research
Built on the new version

Ivan Fioravanti α―… (@ivanfioravanti)'s Twitter Profile Photo

Jan-V1-4B on Jan using llama.cpp GGUF, perfect! I tested 4-bit and 8-bit; response quality is amazing on this engine with both quantizations! Here's a quick video. Link to the MLX models below. Enjoy this amazing small thinking LLM!
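For a sense of why both quantizations fit comfortably on local hardware, a back-of-envelope estimate of weight memory for a 4B-parameter model. Real GGUF schemes store scales alongside weights, so actual files run somewhat larger, and KV cache is ignored entirely here.

```python
# Back-of-envelope GGUF weight-memory estimate for a 4B-parameter model.
# Real quant schemes (e.g. Q4_K_M) carry scales/zeros, so actual files run
# somewhat larger; this ignores KV cache and runtime overhead entirely.
PARAMS = 4e9  # 4B parameters

def weight_gb(bits_per_weight: float) -> float:
    return PARAMS * bits_per_weight / 8 / 1e9  # bits -> bytes -> GB (decimal)

for bits in (16, 8, 4):
    print(f"{bits:>2}-bit: ~{weight_gb(bits):.1f} GB of weights")
# 16-bit: ~8.0 GB, 8-bit: ~4.0 GB, 4-bit: ~2.0 GB
```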

πŸ‘‹ Jan (@jandotai)'s Twitter Profile Photo

Jan v0.6.8 is out: Jan is more stable now!
- Stability fixes for model loading
- Better llama.cpp: clearer errors & backend suggestions
- Faster MoE models with CPU offload
- Search private Hugging Face models
- Custom Jinja templates
Update your Jan or download the latest.
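On the custom Jinja templates point: a chat template turns a list of chat messages into the single prompt string a model consumes. Below is a hedged sketch using the jinja2 library; the template itself is purely illustrative, not Jan's or any model's actual template.

```python
# Hedged sketch of what a custom Jinja chat template does: it renders a
# list of chat messages into one prompt string. The template below is
# illustrative only, not Jan's or any model's actual template.
from jinja2 import Template

template = Template(
    "{% for m in messages %}"
    "<|{{ m.role }}|>\n{{ m.content }}\n"
    "{% endfor %}"
    "<|assistant|>\n"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's new in Jan v0.6.8?"},
]
print(template.render(messages=messages))
```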