Lamini (@laminiai) 's Twitter Profile
Lamini

@laminiai

The LLM tuning & inference platform for enterprises. Factual LLMs. Deployed anywhere.

ID: 1646454418118938624

https://lamini.ai · Joined 13-04-2023 10:05:50

308 Tweets

6.6K Followers

10 Following

Lamini (@laminiai) 's Twitter Profile Photo

Our new DeepLearning.AI course on Improving Accuracy of LLM Applications is live! If you are short on time but curious about fine-tuning LLMs, this is the course for you!

Andrew Ng (@andrewyng) 's Twitter Profile Photo

Learn a development pattern to systematically improve the accuracy and reliability of LLM applications in our new short course, Improving Accuracy of LLM Applications, built in partnership with Lamini and Meta, and taught by Lamini’s CEO Sharon Zhou, and Meta’s Senior

Sharon Zhou (@realsharonzhou) 's Twitter Profile Photo

We at Meta & Lamini & DeepLearning.AI have built a course for you! 🚀 Improving Accuracy of LLM Applications: deeplearning.ai/short-courses/… ◽️ Reliable agents & apps are here - it's just a matter of effort. ◽️ The effort is now a ton lower because of... LLMs themselves! ◽️ Evals

Supermicro (@supermicro_smci) 's Twitter Profile Photo

🎉We're excited to share how Lamini, a leading AI cloud service provider, is leveraging our Supermicro GPU Servers to support their large-scale #LLM tuning and generative #AI models. Read our Success Story: hubs.la/Q02LtN5D0 #GenAI #datacenter #GPU #supermicro

AI at Meta (@aiatmeta) 's Twitter Profile Photo

🆕 New course on DeepLearning.AI: Improving Accuracy of LLM Applications ➡️ go.fb.me/zfwvd8 Created in collaboration with DL, Meta & Lamini, this free course covers topics like evaluation frameworks, instruction & memory fine-tuning, LoRA + training data generation.

Lamini (@laminiai) 's Twitter Profile Photo

Like many startups, our tech is possible because of access to open source LLMs. Sharon Zhou Matt White @starlordxie and Christopher Nguyen ⽗ recently discussed the importance of an open ecosystem and implications of SB 1047. Thanks to AI at Meta and Cerebral Valley for

Lamini (@laminiai) 's Twitter Profile Photo

Vertical vs. horizontal AI use cases? GitHub Copilot started vertical and crossed over into horizontal applications. Low latency + accuracy were key! Thanks for the great discussion Gajen Kandiah and Hitachi! youtube.com/watch?v=4Wn-rE…

Lamini (@laminiai) 's Twitter Profile Photo

Go from AI novice to fine-tuning wiz with our Improving Accuracy of LLM Applications course with DeepLearning.AI + Amit Sangani. Here's one student's experience getting to 96% accuracy on factual data in just 3 iterations. lamini.ai/blog/llm-accur…
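The iterative pattern behind that result can be sketched generically: score the model on factual questions, collect the failures, and feed them back as fine-tuning data. Everything below is illustrative; `finetune()` is a hypothetical stand-in, not Lamini's actual SDK call.

```python
# A minimal sketch of an eval-driven tuning loop, assuming `model` is a
# callable that answers questions and exposes a hypothetical finetune()
# method returning an updated model.

def evaluate(model, eval_set):
    """Return accuracy and the (question, expected) pairs the model missed."""
    failures = []
    for question, expected in eval_set:
        answer = model(question)
        if answer.strip() != expected.strip():
            failures.append((question, expected))
    accuracy = 1 - len(failures) / len(eval_set)
    return accuracy, failures

def tune_until_accurate(model, train, eval_set, target=0.96, max_iters=3):
    """Iterate: evaluate, add failed facts to the training set, re-tune."""
    accuracy, failures = evaluate(model, eval_set)
    for _ in range(max_iters):
        if accuracy >= target:
            break
        train = train + failures          # fold failures back into training data
        model = model.finetune(train)     # hypothetical fine-tuning call
        accuracy, failures = evaluate(model, eval_set)
    return model, accuracy
```

The point of the loop is that the eval set, not intuition, decides what goes into the next round of tuning, which is why a few iterations can move accuracy so far.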

Lamini (@laminiai) 's Twitter Profile Photo

LLM inference frameworks have hit the “memory wall”, a hardware-imposed speed limit on memory-bound code. Is it possible to tear down the memory wall? Greg Diamos explains how it works in his new technical blog post. lamini.ai/blog/evaluate-…
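The memory wall is easy to see with a back-of-envelope calculation: during decode, every generated token streams all model weights from memory once, so throughput is capped by bandwidth divided by model size. The figures below are illustrative assumptions, not numbers from the blog post.

```python
# Back-of-envelope illustration of the "memory wall" for LLM decode.
# Model sizes and bandwidth figures here are assumptions for the sketch.

def decode_tokens_per_sec(params_billion, bytes_per_param, hbm_bandwidth_gb_s):
    """Upper bound on single-stream decode speed for a memory-bound LLM.

    Each generated token must read every weight from memory once, so
    throughput <= memory bandwidth / model size in bytes.
    """
    model_size_gb = params_billion * bytes_per_param
    return hbm_bandwidth_gb_s / model_size_gb

# Example: a 7B-parameter model in fp16 (2 bytes/param) on a GPU with
# roughly 2000 GB/s of memory bandwidth.
bound = decode_tokens_per_sec(7, 2, 2000)
print(f"~{bound:.0f} tokens/s per stream, regardless of how fast the ALUs are")
```

This is why quantization (fewer bytes per parameter) and batching (amortizing each weight read across many streams) are the standard levers against the wall.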

Lamini (@laminiai) 's Twitter Profile Photo

🎉🎉🎉 Excited to announce our new pay-as-you-go offering, Lamini On-Demand. Get $300 in free credit to run your tuning and inference jobs on our high-performance GPU cluster. Happy tuning! lamini.ai/blog/lamini-on…

Lamini (@laminiai) 's Twitter Profile Photo

.Sharon Zhou recently spoke at Aurecon's #ExemplarForum2024 on high-ROI use cases for LLMs and overcoming key challenges in AI deployment, including poor model quality, hallucinations, costs, and security. Watch the video here: youtube.com/watch?v=gLXT4l…

Lamini (@laminiai) 's Twitter Profile Photo

🙌 Our new Enterprise Guide to Fine-Tuning is out! If you can't get above 40-50% accuracy with RAG, fine-tuning might be the answer. Learn the basics of fine-tuning and specific applications and use cases. bit.ly/495Q7c9

Sharon Zhou (@realsharonzhou) 's Twitter Profile Photo

I'm so excited to launch Lamini’s Classifier Agent Toolkit, aka. CAT! 🚀🐱 CAT hunts & tags the important signals 🐭 in a vast amount of data — so devs can easily create agentic classifiers. ❌ Manual data labeling ❌ Large, slow general LLM calls that can only handle 20-30

Lamini (@laminiai) 's Twitter Profile Photo

🎁 Our new Classifier Agent Toolkit (CAT 🐱) is here! No more extensive manual data labeling or heavy ML systems. 😻 Build classifier agents that can quickly categorize large volumes of data at 95%+ accuracy / 400k token throughput in under 2 seconds. Watch the demo and get

Lamini (@laminiai) 's Twitter Profile Photo

Have you seen our Classifier Agent Toolkit 😺 demo yet? Learn how to use our SDK to build a highly accurate Classifier Agent for a customer service chatbot. The agent categorizes customer interactions by intent so it can respond appropriately. You can run multiple evaluations

Lamini (@laminiai) 's Twitter Profile Photo

🙌Introducing Memory RAG—a simpler approach to RAG that leverages embed-time compute to create more intelligent, validated data representations. Build mini-agents with a simple prompt. Get the paper: hubs.la/Q0333d5c0

Lamini (@laminiai) 's Twitter Profile Photo

Join us for a webinar on building Text-to-SQL BI agents. We’ll show how to finetune any open LLM to reach 90%+ accuracy. Register now bit.ly/41qIycU 🎯 Build high-accuracy Text-to-SQL BI agents 📅 March 20, 2025 🕘 10:00 - 10:45 AM PT

Lamini (@laminiai) 's Twitter Profile Photo

🎯 Aiming for 90%+ accuracy on your Text-to-SQL agent, but can't get past 50%? With our proven methodology, our customers have cracked the code and hit 9s of accuracy! We're spilling the tea 🍵 in our upcoming webinar. Bring your toughest Text-to-SQL questions—we’ve got answers!