It might be controversial, but it feels like we’re in a moment where we’re actively choosing what deserves artisanal quality in software, and what doesn’t.
If you’re building something that thousands of other developers will use, I still think the quality bar should be high.
And it’s been genuinely fun building this with OpenCode. For reference: I built Contextrie, a memory framework for AI agents, to manage input sources (files, chat, DBs, records…), assess their relevance for each request, and then compose the right context for each agentic task.
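To make that pipeline concrete, here is a minimal sketch of the flow just described (sources in, relevance scoring, context out). All names and the keyword-overlap scoring are my own illustration, not Contextrie's actual API:

```typescript
// Hypothetical types; Contextrie's real interfaces may differ.
type Source = {
  id: string;
  kind: "file" | "chat" | "db" | "record";
  content: string;
};

// Naive relevance score: fraction of a source's words that appear in the request.
function relevance(source: Source, request: string): number {
  const requestWords = new Set(request.toLowerCase().split(/\s+/));
  const sourceWords = source.content.toLowerCase().split(/\s+/);
  const hits = sourceWords.filter((w) => requestWords.has(w));
  return hits.length / Math.max(1, sourceWords.length);
}

// Compose a context for one agentic task: score, filter, rank, and take the top few.
function composeContext(sources: Source[], request: string, maxSources = 3): string {
  return sources
    .map((s) => ({ s, score: relevance(s, request) }))
    .filter((x) => x.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, maxSources)
    .map((x) => x.s.content)
    .join("\n---\n");
}
```

A real implementation would of course use embeddings or something smarter than word overlap, but the shape of the pipeline is the point.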
At the time, I didn’t have a clear vision of how I wanted to build it. So step one was writing a strong README.md with all the ideas I had in mind.
Step two was a strong CONTRIBUTING.md. I pointed both AGENTS.md and claude.md (I was using Claude at the time too) at it. Simple, but honestly, I think a file like that serves agents and human contributors alike (a conversation for another day).
Next, I asked OpenCode something like: “I want to design the ingestor types. I want to keep it composable. It should … it should …” Then I told it: “Ask me many questions about the library architecture: patterns, types, conventions. And at every step, update the README once we agree on something.”
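For flavor, here is the kind of composable ingestor shape that conversation circled around. Everything here is a hypothetical sketch of mine, not the design OpenCode and I actually landed on:

```typescript
// A hypothetical ingestor: turns a raw input into context chunks.
interface Ingestor<T> {
  ingest(raw: T): string[];
}

// Combinator: compose several ingestors over the same input type
// by concatenating their chunks.
function combine<T>(...ingestors: Ingestor<T>[]): Ingestor<T> {
  return { ingest: (raw) => ingestors.flatMap((i) => i.ingest(raw)) };
}

// Two toy ingestors over plain text.
const lineIngestor: Ingestor<string> = {
  ingest: (raw) => raw.split("\n"),
};
const headerIngestor: Ingestor<string> = {
  ingest: (raw) => raw.split("\n").filter((l) => l.startsWith("#")),
};

// Composition stays an Ingestor, so pipelines nest freely.
const docIngestor = combine(lineIngestor, headerIngestor);
```

The appeal of this style is that a composed ingestor is itself an ingestor, so you can keep stacking them without any special-case plumbing.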
That process was a blast. I think it produced a better outcome than if I had just coded it myself, and it was easily 10× faster.
I expect this pattern to become more common. It’s definitely something I’ll do more often, especially for “infrastructure” work like this.
Everyone is coining names, so let’s call this agentic pair programming?
Nah, Peter Steinberger’s Agentic Engineering is a much better name.
Get involved
If you’re trying agentic workflows in a real codebase, I’d love to hear which patterns worked and which failed.
- Follow the code on GitHub: https://github.com/feuersteiner/contextrie
- Share a use case or bug as an issue: https://github.com/feuersteiner/contextrie/issues
- Join the discussion on Discord: https://discord.gg/ayX9hm4D