Hrishi (@hrishioa) 's Twitter Profile
Hrishi

@hrishioa

Building artificially intelligent bridges at Southbridge, prev-CTO Greywing (YC W21).

Chop wood carry water.

ID: 1548645654

Link: https://olickel.com · Joined: 26-06-2013 17:14:03

3.3K Tweets

10.1K Followers

2.2K Following

Jay (@jay_s6a) 's Twitter Profile Photo

You: Gemini-CLI doesn't support other models and ratelimits you down to flash.

Me: Literally tell Claude Code to clone the gemini repo, then modify the code to never switch back from pro + add Claude models in.
Minu (@minune29) 's Twitter Profile Photo

"frog chilling on southbridge chip" 

FLUX.1 [dev]: no idea what a southbridge is, but at every iteration, this frog will be vibing 

Love all the frogs
Hrishi (@hrishioa) 's Twitter Profile Photo

Vibe-added Sonnet and Opus to gemini-cli, so I could watch Gemini use Claude to work on the Gemini codebase that it's running from. Also added logging to JSONL (like claude-trace), a live model switcher, and exact token counting.

This power is too great for any of us to have

Hrishi (@hrishioa) 's Twitter Profile Photo

Completely fair point. Problems with subagents (related to Brooks's Law), I'd say, are:

1. Learnings from one part can't easily be passed to the next one, so you have a bunch of agents repeating the same problem.
2. Metaprompting (figuring out the right amount of context for a

Hrishi (@hrishioa) 's Twitter Profile Photo

We now have a more apples-to-apples comparison of Claude Code and Gemini CLI. Here's Gemini CLI (with Sonnet 4) vs Claude Code (with and without subagents) fixing the same bug from the same prompt in the gemini-cli codebase: notion.so/southbridge-re…

Hrishi (@hrishioa) 's Twitter Profile Photo

Chat, did you know you can proxy Claude Code requests through the Cloudflare gateway? Which means you can:

✅ Keep all of your traces and data
✅ Get your own analytics and costs
✅ Cache those haiku requests so CC is noticeably faster

ANTHROPIC_BASE_URL, thank me later

Hrishi (@hrishioa) 's Twitter Profile Photo

Super simple:

1. Make a Cloudflare gateway at developers.cloudflare.com/ai-gateway/get…
2. Get the account id and gateway id
3. ANTHROPIC_BASE_URL=gateway.ai.cloudflare.com/v1/<account-id>/<gateway-id>/anthropic claude
4. Look at charts and drink wine
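A minimal sketch of that setup. `<account-id>` and `<gateway-id>` are placeholders you'd copy from your own Cloudflare AI Gateway dashboard; the `/anthropic` suffix selects the Anthropic provider route on the gateway.

```shell
# Placeholders: substitute your own account id and gateway id from the
# Cloudflare dashboard before running.
export ANTHROPIC_BASE_URL="https://gateway.ai.cloudflare.com/v1/<account-id>/<gateway-id>/anthropic"

# Claude Code reads the override from the environment, so every request
# (including the small haiku calls) now flows through the gateway,
# where it can be logged, analyzed, and cached.
claude
```

Exporting the variable in your shell profile makes the routing permanent; setting it inline (`ANTHROPIC_BASE_URL=… claude`) scopes it to a single session.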

Jay (@jay_s6a) 's Twitter Profile Photo

Inside you are two wolves:

One believes benchmarks reflect reality
One has seen CustomerID used as a boolean

The second wolf is correct and should probably seek therapy

Hrishi (@hrishioa) 's Twitter Profile Photo

Merging frontier models to improve ARC-AGI - the work from Sakana AI is interesting, because of the questions it raises on sampling and MCTS-based methods for me.

LLMs, given enough tries, do better. Groups of LLMs do even better (though Gemini+o4-mini+r1) do only a few
ᴅᴀɴɪᴇʟ ᴍɪᴇssʟᴇʀ (@danielmiessler) 's Twitter Profile Photo

The most important skill for talking to AI is not prompting or context engineering.

It's clear thinking. It's articulating intent.

Without that, prompting won't matter. And you won't even know the right context to gather.

You'll be optimizing in the wrong direction.
Mario Zechner (@badlogicgames) 's Twitter Profile Photo

Don't build MCP servers. Build CLI tools with a --llm flag the LLM can invoke to get an LLM-compatible description of what the tool does and how to use it.

Benefit: you don't have a gazillion MCP server tools in your context. You pull in just the tools you need ad-hoc.
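A minimal sketch of the pattern. `wordcount` and its description format are hypothetical (the tweet doesn't prescribe one); the point is just that `--llm` makes the tool self-describing on demand instead of doing its normal work.

```shell
# Hypothetical CLI following the --llm convention: invoked with --llm it
# prints an LLM-readable description of itself; otherwise it runs normally.
wordcount() {
  if [ "$1" = "--llm" ]; then
    # Emit a short machine-readable usage blurb for the model.
    printf '%s\n' \
      'tool: wordcount' \
      'description: count the words in a file' \
      'usage: wordcount <path>'
    return 0
  fi
  # Normal operation: print the word count of the given file.
  wc -w < "$1"
}
```

An agent can then run `wordcount --llm` ad-hoc to learn the tool when it needs it, instead of carrying every tool's schema in its context for the whole session, as an MCP server registration would.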

Hrishi (@hrishioa) 's Twitter Profile Photo

Things are going to change - faster than we think, with massive downstream effects. blog.cloudflare.com/introducing-pa…