Valery Bugakov (@valerybugakov) 's Twitter Profile
Valery Bugakov

@valerybugakov

šŸ„ā€ā™‚ļøšŸ‚šŸ€ Software engineer at @sourcegraph

ID: 404265797

Joined: 03-11-2011 17:37:46

504 Tweets

202 Followers

2.2K Following

Jamie Kyle is on šŸ¦‹ (@buildsghost) 's Twitter Profile Photo

Highly recommend using @sourcegraph’s free GitHub search to find real world code examples Today I wanted to find some code related to undocumented Electron extensions to navigator.getUserMedia() — completely ungoogleable stuff, but I found dozens of examples on SourceGraph

Beyang (@beyang) 's Twitter Profile Photo

We* hacked together a version of Cody that runs as a Slackbot and answers questions about your company's docs. So far, it's proven quite handy. Would others like to try it out?

*By "we" I mean Valery Bugakov; you can see his work here: sourcegraph.com/github.com/sou…
Beyang (@beyang) 's Twitter Profile Photo

Cody now has a mechanism for pulling in context from *outside* the codebase!

Introducing OpenCtx, a protocol for providing relevant technical context to humans and AI. This builds on Sourcegraph's foundation as the world's best code search and connects our code graph to entities
Quinn Slack (@sqs) 's Twitter Profile Photo

Big autocomplete improvements in Cody:
• 350ms reduction in p75 latency
• 58% increase in accepted chars per user per day

Shipping now! Get it at cody.dev.

swyx (@swyx) 's Twitter Profile Photo

trying out Claude 4 Sonnet + Quinn Slack's new Amp — Research Preview and... i think i just felt the agi

this was the result of "turn my scripts into a multitenant @railway app with billing" 🤯
Thorsten Ball (@thorstenball) 's Twitter Profile Photo

Heeeere we go! Man, I've been waiting for this. The Valery Bugakov, Tom, and Hitesh Sagtani team have done a fantastic job: it's super fast, the custom model is better, and the contextual "understanding" is better.

Tom (@umpox) 's Twitter Profile Photo

While Amp — Research Preview is great for large, agentic coding sessions… sometimes you need to make surgical edits to the code yourself. We've enabled more fine-grained suggestions in Amp Tab, so you can now Tab through a refactor and view each change carefully

Hitesh Sagtani (@sagtanihitesh) 's Twitter Profile Photo

Auto-imports: Amp Tab vs. Cursor Tab

Amp Tab uses your project's LSP, so imports are accurate and added inline, keeping you focused and in your flow as you type. Cursor Tab suggests the wrong imports and scrolls to the top.

Left (dark) is Amp Tab; right (light) is Cursor Tab.
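The LSP-backed auto-import flow described above can be sketched as a standard `textDocument/codeAction` request. This is a minimal illustration of the protocol shape, not Amp's actual wire traffic; the file URI, position, and diagnostic text below are invented placeholders.

```python
# Sketch: how an editor might ask a language server for an auto-import
# quickfix via the Language Server Protocol. The method name and params
# shape follow the LSP spec's textDocument/codeAction request; the URI,
# range, and diagnostic message are made-up examples.

def code_action_request(uri, line, char, missing_symbol, request_id=1):
    """Build a JSON-RPC textDocument/codeAction request asking for
    quickfixes (e.g. add-import) at the span of an unresolved name."""
    rng = {
        "start": {"line": line, "character": char},
        "end": {"line": line, "character": char + len(missing_symbol)},
    }
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "textDocument/codeAction",
        "params": {
            "textDocument": {"uri": uri},
            "range": rng,
            "context": {
                # The diagnostic the server reported for the missing name;
                # servers typically key their add-import fix off of it.
                "diagnostics": [
                    {"range": rng,
                     "message": f"cannot find name '{missing_symbol}'"}
                ],
                "only": ["quickfix"],
            },
        },
    }
```

The server replies with a list of code actions whose workspace edits insert the import at the right place in the file, which is what lets the suggestion land inline without the editor jumping to the top.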

Amp — Research Preview (@ampcode) 's Twitter Profile Photo

In this episode, Beyang and Thorsten discuss strategies for effective agentic coding, including the 101 of how it's different from coding with chat LLMs, the key constraint of the context window, how and where subagents can help, and the new oracle subagent which combines

Hitesh Sagtani (@sagtanihitesh) 's Twitter Profile Photo

Amp Tab is now 30% faster

We worked with Baseten to switch to TensorRT-LLM and KV caching with speculative decoding.

The new lookahead decoding also reduces draft tokens and compute per iteration by using improved n-gram candidate selection and variable-length speculations.
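The n-gram candidate selection mentioned above can be sketched in the spirit of prompt-lookup speculative decoding: draft tokens are copied from an earlier position in the context where the most recent n-gram already occurred, and the draft length varies with whether a match is found. The function and parameters here are illustrative assumptions, not Amp's actual implementation.

```python
# Hedged sketch of n-gram draft selection for speculative decoding.
# Idea: if the last few tokens repeat an earlier span of the context,
# the tokens that followed that span are a cheap draft for the verifier
# model to accept or reject in one pass.

def ngram_draft(tokens, max_ngram=3, max_draft=5):
    """Propose draft tokens by matching the longest recent n-gram
    against earlier occurrences in `tokens`.

    Returns a possibly empty list; variable-length drafts mean we only
    spend verifier compute when a plausible continuation exists.
    """
    for n in range(max_ngram, 0, -1):          # prefer longer matches
        if len(tokens) < n + 1:
            continue
        tail = tokens[-n:]                     # the n-gram just produced
        # scan earlier positions, most recent first
        for i in range(len(tokens) - n - 1, -1, -1):
            if tokens[i:i + n] == tail:
                start = i + n
                return tokens[start:start + max_draft]
    return []                                  # no match: decode normally
```

For example, with word tokens `["the", "cat", "sat", "on", "the"]`, the trailing `"the"` matches position 0, so the draft is `["cat", "sat", "on", "the"]`; the verifier then accepts whatever prefix of that draft it agrees with.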