MK1 Technologies (@mkoneai)'s Twitter Profile
MK1 Technologies

@mkoneai

We're building the engines to power the AI economy. Follow us to stay tuned!

ID: 1711520811285348352

Link: https://mk1.ai · Joined: 09-10-2023 23:15:48

11 Tweets

312 Followers

32 Following

MK1 Technologies (@mkoneai):

Launching Flywheel, MK1's enterprise LLM inference engine. Now out of closed beta and ready to make AI go brrr! You can try MK1 Flywheel now on AWS SageMaker, or reach out if you're ready to deploy at scale. Learn more in our blog post: mkone.ai/blog/mk1-flywh…

MK1 Technologies (@mkoneai):

Run MK1 Flywheel on Modal for the fastest LLM inference 🚀 Experience faster responses and lower costs with just a few lines of code. Get started with free credits and instant GPU access. Learn more on our blog: mk1.ai/blog/modal

MK1 Technologies (@mkoneai):

What’s the fastest you can run LLM inference on NVIDIA hardware? Hint: It’s with MK1 Flywheel. Get started for free on Modal today!

MK1 Technologies (@mkoneai):

Watch 32 AMD MI210 cards 🔥 on TensorWave's GPU cloud. Summarizing 82 articles / sec using the MK1 Flywheel inference engine, hitting max TDP!!! Can't wait for 80 MI300x cards per node. What will you run? Learn how to get the most from your GPUs at MK1 Technologies and TensorWave

Paul Merolla (@paul_merolla):

LLMs like GPT4o often struggle with complex queries on large documents. But why? Our new benchmark spotlights a key deficiency—comprehension—that worsens as context grows. The good news: We show how our custom LLM CAMeL can augment a frontier model, vastly improving
