templar (@tplr_ai)'s Twitter Profile
templar

@tplr_ai

incenτivised inτerneτ-wide τraining

ID: 1896776987806740481

Website: http://www.tplr.ai
Joined: 04-03-2025 04:17:46

150 Tweets

1.1K Followers

7 Following

Grok (@grok)

BτComander templar Scaling an AI model from 1.2 billion to 8 billion parameters is like expanding a football team from a small local squad to a world-class club. A bigger team can cover more ground, execute complex plays, and adapt better, just as more parameters help an AI tackle tougher tasks and

Synapz (@synapz_org)

🚨 templar Templar SN3 just kicked off its next act: training an 8B parameter model — fully decentralized, fully permissionless.

No datacenters. No gatekeepers. Just swarm intelligence, mining the future.

Not sure what this means? I explain:

The Basement Rebellion 👇
Teng Yan - Championing Crypto AI (@0xprismatic)

Just released a detailed deep dive on decentralized training. We cover a lot in there, but a quick brain dump while my thoughts are fresh:

So much has happened in the past 3 months and it's hard not to get excited
- Nous Research pre-trained a 15B model in a distributed fashion
Lucas 🛡️ (@onchainlu)

templar (sn3 on bittensor) also falls into the data-parallel bucket and is already live. in their latest run, they're using a permissionless network to train an 8B parameter model with refined incentive structures to improve training coordination.
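(For context on "data-parallel": below is a minimal sketch of the idea, assuming PyTorch. Each worker computes gradients on its own shard of data, and the gradients are averaged before a shared update. The worker count, toy model, and averaging loop are hypothetical stand-ins, not Templar's actual incentive or coordination protocol.)

```python
# Minimal data-parallel sketch: per-worker gradients on local shards,
# averaged before one shared update. Illustrative only.
import torch
import torch.nn as nn

model = nn.Linear(16, 1)           # toy stand-in for the 8B model
workers = 4                        # toy stand-in for permissionless miners

def worker_grads(model, x, y):
    """One worker's gradients on its local shard of data."""
    loss = nn.functional.mse_loss(model(x), y)
    return torch.autograd.grad(loss, list(model.parameters()))

# Each worker sees a different shard of the training data.
shards = [(torch.randn(8, 16), torch.randn(8, 1)) for _ in range(workers)]
per_worker = [worker_grads(model, x, y) for x, y in shards]

# Average gradients across workers (the "all-reduce" step), then update.
with torch.no_grad():
    for param, *grads in zip(model.parameters(), *per_worker):
        param -= 1e-2 * torch.stack(grads).mean(dim=0)
```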

The Church of Rao (@churchofrao)

🕊 The Church of Rao awakens on X. 

Where #Bittensor's open networks become sacred, and code becomes communion. Here, we chronicle the continuous creation of decentralized intelligence—one commit, one consensus, one connection at a time. 

In Bittensor's permissionless arena,
The Church of Rao (@churchofrao)

0/🧵 Weekly OpenDev Chronicle📋 

This week's development updates from the June 10, 2025 OpenDev community call—Yuma3 stabilization, Uniswap V3 integration progress, enhanced network security measures, and community contribution opportunities. 🔥
Distributed State (@diststateandme)

_τao_moonwalker_ templar Hats off to the miners. Honestly, these guys are the SEALs of bittensor. We put them through the Gauntlet, and they always rise to the challenge.

Rayon Labs (@rayon_labs)

The approach:  

Templar (SN3) → Base model pretraining 
Gradients (SN56) → Instruct fine-tuning  

Proof of concept results: 
Templar 3B (mid-training) → Gradients instruct tuning → benchmarks rising across the board  

Plot twist: we're just getting started.
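(Rough illustration of that two-stage recipe, assuming PyTorch: stage 1 pretrains a base model on raw text, stage 2 continues training the same weights on instruction-style data. The tiny model, random batches, and step counts are hypothetical stand-ins, not Templar's or Gradients' actual pipelines.)

```python
# Two-stage sketch: pretrain on raw text, then fine-tune the same
# weights on instruction data. Toy model and data; illustrative only.
import torch
import torch.nn as nn

vocab, dim = 1000, 64
model = nn.Sequential(nn.Embedding(vocab, dim), nn.Linear(dim, vocab))
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

def step(tokens, targets):
    """One next-token prediction step, shared by both stages."""
    logits = model(tokens)
    loss = nn.functional.cross_entropy(logits.view(-1, vocab), targets.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Stage 1 (Templar's role in the thread): pretrain on raw text.
for _ in range(100):
    seq = torch.randint(0, vocab, (8, 33))   # stand-in corpus batch
    step(seq[:, :-1], seq[:, 1:])

# Stage 2 (Gradients' role): instruct fine-tuning, same objective
# but on curated prompt/response-formatted batches.
for _ in range(20):
    seq = torch.randint(0, vocab, (8, 33))   # stand-in instruct batch
    step(seq[:, :-1], seq[:, 1:])
```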