Prime Intellect (@primeintellect)'s Twitter Profile
Prime Intellect

@primeintellect

Find compute. Train models. Co-own intelligence.

discord.gg/primeintellect

ID: 1274287728167915522

Link: https://www.primeintellect.ai/ · Joined: 20-06-2020 10:27:30

1.1K Tweets

31.31K Followers

4.4K Following

Prime Intellect (@primeintellect):

We are excited to share a preview of our peer-to-peer decentralized inference stack, engineered for consumer GPUs and high-latency networks, plus a research roadmap to scale it into a planetary-scale decentralized inference engine.

Prime Intellect (@primeintellect):

We will integrate this work into our protocol to power our next synthetic data run, onboarding the largest amount of compute yet. Next milestone: SYNTHETIC-2, a large-scale reasoning corpus powered entirely by community GPUs. primeintellect.ai/blog/inference

Johannes Hagemann (@johannes_hage):

we have a few open-source infra releases dropping this week. starting today with Mika Senghaas's great work on decentralized pipeline-parallel inference.
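The core idea behind pipeline-parallel inference is to split a model's layers into contiguous stages and hand the activation from one stage to the next, so that each peer only needs to hold its own slice of the weights. The sketch below illustrates the general technique with toy layers; the stage partitioning, names, and "layers" here are illustrative assumptions, not Prime Intellect's actual API.

```python
# Hypothetical sketch of pipeline-parallel inference: the model's layers are
# split into contiguous stages, each hosted by a different peer, and a forward
# pass hands the activation from stage to stage.

def make_layer(weight):
    # A toy "layer" that scales the activation; stands in for a transformer block.
    return lambda x: [weight * v for v in x]

class PipelineStage:
    def __init__(self, layers):
        self.layers = layers  # the contiguous slice of the model this peer owns

    def forward(self, activation):
        for layer in self.layers:
            activation = layer(activation)
        return activation

def partition(layers, num_stages):
    # Evenly split the layer list into num_stages contiguous slices.
    per_stage, rem = divmod(len(layers), num_stages)
    stages, start = [], 0
    for i in range(num_stages):
        size = per_stage + (1 if i < rem else 0)
        stages.append(PipelineStage(layers[start:start + size]))
        start += size
    return stages

# An 8-layer toy model spread across 4 peers; in a real deployment each stage
# runs on a different machine and activations cross the network between them.
model = [make_layer(2.0)] * 8
stages = partition(model, 4)

activation = [1.0, 1.0]
for stage in stages:
    activation = stage.forward(activation)  # peer-to-peer hand-off

print(activation)  # each element scaled by 2.0 eight times -> 256.0
```

In the decentralized setting, the hand-off between `stages` is the interesting part: each transfer crosses a potentially slow internet link, which is why latency-hiding matters so much.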

Mika Senghaas (@mikasenghaas):

wrote a little something on our learnings from decentralizing inference and open-sourced 3 research codebases. tl;dr optimizing inference under decentralized constraints is worthwhile, non-trivial, and far from solved. excited to be building this with the team! more soon, when we…

Vincent Weisser (@vincentweisser):

We're taking a big step towards truly decentralized inference — unlocking consumer GPUs and already outperforming traditional approaches that stall in high-latency settings. Unlike other p2p inference engines (e.g., Petals, Exo), our stack uniquely leverages vLLM's advanced…

Justus Mattern (@matternjustus):

Very excited to soon release SYNTHETIC-2, partially powered by consumer-grade GPUs. Very confident that what we've planned for this dataset will be incredibly useful for the open-source community

samsja (@samsja19):

We wrote an extensive blog post on large-scale pipelined inference and released a vLLM integration to connect any machines over the internet to serve a model. Will be the foundation of our next SYNTHETIC-2 run (and later allow consumer GPUs to join RL runs)
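Why pipelined inference helps over the open internet can be seen with a back-of-the-envelope latency model: if requests traverse the stages one at a time, every network hop is paid serially, whereas streaming micro-batches keeps all stages busy and overlaps transfer with compute. The formulas and numbers below are an illustrative simplification of the general technique, not measurements from Prime Intellect's system.

```python
# Toy cost model for pipelining over high-latency links. With S stages,
# per-stage compute time c, inter-stage latency l, and B micro-batches:
#   sequential: B * S * (c + l)                  (each batch traverses alone)
#   pipelined:  S * (c + l) + (B - 1) * max(c, l)  (pipe fills once, then one
#                                                   result per steady-state step)

def sequential_time(batches, stages, compute, latency):
    return batches * stages * (compute + latency)

def pipelined_time(batches, stages, compute, latency):
    fill = stages * (compute + latency)              # first batch fills the pipe
    steady = (batches - 1) * max(compute, latency)   # then batches overlap
    return fill + steady

# Illustrative numbers: 4 stages, 10 ms compute per stage,
# 100 ms network latency per hop, 16 micro-batches.
seq = sequential_time(16, 4, 10, 100)
pipe = pipelined_time(16, 4, 10, 100)
print(seq, pipe)  # 7040 vs 1940 ms: pipelining hides most of the latency
```

Under these assumed numbers the bottleneck shifts from total hops (B·S of them) to the slowest single resource (the 100 ms link), which is exactly the regime where high-latency consumer connections stop stalling the pipeline.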

Arnie Ramesh (@arnie_hacker):

Thanks to compute from Prime Intellect, each of our teams will have access to a dedicated H200. Train whatever you want, GPUs will go brrr. Happening next weekend in Zurich🇨🇭, at the mimic x Loki Robotics x OpenAI Hack

Prime Intellect (@primeintellect):

We did it — the first decentralized RL training of a 32B model is complete! Full open-source release is coming in ~1 week, including: checkpoints, data and a detailed technical report.
