Listened to Raising An Agent: Episode 8
In this episode of Raising an Agent, Beyang and Camden dive into how the Amp team evaluates models for agentic coding. They break down why tool calling is the key differentiator, what went wrong with Gemini Pro, and why open models like K2 and Qwen are promising but not ready as main drivers. They share first impressions of GPT-5, explore the idea of alloying models, and explain why qualitative “vibe checks” often matter more than benchmarks. If you want to understand how Amp thinks about model selection, subagents, and the future of coding with agents, this episode has you covered.

This post was filed under listens.
