Raising An Agent
Quinn and Thorsten are back! It's been a while since they published a Raising An Agent episode, and in this one they discuss how everything seems to have changed again with Gemini 3 and Opus 4.5, and what comes after: the assistant is dead, long live the factory.
Raising An Agent
In this episode of Raising an Agent, Beyang and Camden dive into how the Amp team evaluates models for agentic coding. They break down why tool calling is the key differentiator, what went wrong with Gemini Pro, and why open models like K2 and Qwen are promising but not ready as main drivers. They share first impressions of GPT-5, explore the idea of alloying models, and explain why qualitative “vibe checks” often matter more than benchmarks. If you want to understand how Amp thinks about model selection, subagents, and the future of coding with agents, this episode has you covered.
Raising An Agent
In this episode, Beyang and Thorsten discuss strategies for effective agentic coding, including the 101 of how it's different from coding with chat LLMs, the key constraint of the context window, how and where subagents can help, and the new oracle subagent which combines multiple LLMs.

00:53 Intros
03:35 How coding with agents is very different from coding with prior AI tools that use chat LLMs
10:46 Example of an agentic coding run to fix a simple issue
14:28 Example of debugging an issue with an MCP server
22:05 Example of unifying two build scripts that share logic
25:24 How context window...
Raising An Agent
In this episode, Quinn and Thorsten discuss Claude 4, sub-agents, background agents, and they share "hot tips" for agentic coding.
Raising An Agent
In this episode, Beyang interviews Thorsten and Quinn to unpack what has happened in the world of Amp in the last five weeks: how predictions played out, how working with agents shaped how they write code, how agents are and will influence model development, and, of course, all the things that have been shipped in Amp.
Raising An Agent
Thorsten and Quinn talk about the future of programming and whether code will still be as valuable in the future, how the GitHub contribution graph may already be worthless, how LLMs can free us from the tyranny of input boxes, and how conversations with an agent might be a better record of how a code change came to be than git commits. They also share where agentic coding works and where it simply doesn't.
Raising An Agent
Quinn and Thorsten start by sharing how code reviews are still very much needed when using AI to code and how that changes the overall flow you're in when coding with an agent. They also talk about a very important question they face: how important is code search, in its current form, in the age of AI agents?
Raising An Agent
Thorsten and Quinn talk about how different agentic programming is from normal programming and how the mindset has to adapt to it. One thing they discuss is that having a higher-level architectural understanding is still very important, so that the agent can fill in the blanks. They also talk about how, surprisingly, the models are really, really good when they have inputs that a human would normally get. Most importantly, they share the realization that subscription-based pricing might make bad agentic products.
Raising An Agent
In the first episode of Raising an Agent, Quinn and Thorsten kick things off by sharing a lot of wow-moments they experienced after getting the agent at the heart of Amp into a working state. They talk about how little is actually needed to create an agent and how magical the results are when you give a model the right tools, unlimited tokens, and feedback. That might be the biggest surprise: how many previous assumptions feel outdated when you see an agent explore a codebase on its own.