
Ashwin Mathur
@awinml
NLP Engineer | MSc. Data Science | Open-Source Contributor
ID: 1621149684852228102
https://github.com/awinml
02-02-2023 14:13:35
7 Tweets
8 Followers
172 Following


llama.cpp is one of the first tools that come to mind when running LLMs locally. It uses the efficient quantized GGUF format, dramatically reducing memory requirements and accelerating inference without a GPU! 🏎️ Thanks to Ashwin Mathur, Haystack now has a llama.cpp integration that
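As a rough illustration of what local GGUF inference looks like, here is a minimal sketch using llama-cpp-python; the model path is a placeholder and assumes a quantized GGUF checkpoint has already been downloaded.

```python
# Minimal sketch: run a quantized GGUF model locally on CPU with llama-cpp-python.
# The model path is a placeholder; any quantized GGUF checkpoint works.
from llama_cpp import Llama

# Load the quantized model; n_ctx sets the context window size.
llm = Llama(model_path="models/model.Q4_K_M.gguf", n_ctx=2048, verbose=False)

# Generate a short completion entirely on CPU.
output = llm("Q: What is llama.cpp used for? A:", max_tokens=64, stop=["Q:"])
print(output["choices"][0]["text"])
```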


With the llama.cpp integration for Haystack (kudos to Ashwin Mathur), you can use local LLMs in your pipelines. Just follow these steps👇 1️⃣ Install llama.cpp for your preferred compute backend: github.com/abetlen/llama-… 2️⃣ Install the llama-cpp-haystack integration: haystack.deepset.ai/integrations/l…
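The sketch below shows how those steps might come together in a Haystack 2.x pipeline. It assumes `pip install llama-cpp-python llama-cpp-haystack` has been run for a CPU backend; the GGUF path, context size, and generation settings are placeholders, and parameter names may differ slightly between integration versions.

```python
# Minimal sketch: use a local GGUF model inside a Haystack pipeline
# via the llama-cpp-haystack integration.
from haystack import Pipeline
from haystack.components.builders import PromptBuilder
from haystack_integrations.components.generators.llama_cpp import LlamaCppGenerator

# Simple prompt template -> local LLM pipeline.
prompt_builder = PromptBuilder(template="Answer briefly: {{ question }}")
generator = LlamaCppGenerator(
    model="models/model.Q4_K_M.gguf",  # placeholder GGUF checkpoint
    n_ctx=2048,
    generation_kwargs={"max_tokens": 128, "temperature": 0.1},
)

pipeline = Pipeline()
pipeline.add_component("prompt_builder", prompt_builder)
pipeline.add_component("llm", generator)
pipeline.connect("prompt_builder.prompt", "llm.prompt")

result = pipeline.run({"prompt_builder": {"question": "What is GGUF?"}})
print(result["llm"]["replies"][0])
```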



Voyage text embeddings and rerankers are now available in Haystack, an open-source AI framework by @deepset_AI! 🚀 To get started, check out the Voyage integration page on Haystack: haystack.deepset.ai/integrations/v… Shout out to Ashwin Mathur for laying the foundation for this integration!
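A minimal sketch of the embedder side of that integration is shown below. It assumes `pip install voyage-embedders-haystack` and a VOYAGE_API_KEY environment variable; the model name is a placeholder, and the exact parameter names may vary by integration version.

```python
# Minimal sketch: embed a query and documents with the Voyage AI
# embedders for Haystack (voyage-embedders-haystack).
from haystack import Document
from haystack_integrations.components.embedders.voyage_embedders import (
    VoyageDocumentEmbedder,
    VoyageTextEmbedder,
)

# Embed a query string, e.g. for embedding-based retrieval.
text_embedder = VoyageTextEmbedder(model="voyage-3")  # placeholder model name
query_embedding = text_embedder.run("What is Haystack?")["embedding"]

# Embed documents before writing them to a document store.
doc_embedder = VoyageDocumentEmbedder(model="voyage-3")
docs = doc_embedder.run(
    [Document(content="Haystack is an open-source AI framework.")]
)["documents"]

print(len(query_embedding), docs[0].embedding[:5])
```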
