Benchmarking AI Agents on Full-Stack Coding

28 Mar 2025 • 33 min • EN

In this episode, a16z General Partner Martin Casado sits down with Sujay Jayakar, co-founder and Chief Scientist at Convex, to talk about his team's latest work benchmarking AI agents on full-stack coding tasks. From designing Fullstack-Bench to the quirks of agent behavior, the two dig into what's actually hard about autonomous software development, and why robust evals, along with guardrails like type safety, matter more than ever.

They also get tactical: Which models perform best for real-world app building? How should developers think about trajectory management and variance across runs? And what changes when you treat your toolchain like part of the prompt? Whether you're a hobbyist developer or building the next generation of AI-powered devtools, Sujay's systems-level insights are not to be missed.

Drawing from Sujay's work developing Fullstack-Bench, they cover:

- Why full-stack coding is still a frontier task for autonomous agents
- How type safety and other "guardrails" can significantly reduce variance and failure
- What makes a good eval, and why evals might matter more than clever prompts
- How different models perform on real-world app-building tasks (and what to watch out for)
- Why your toolchain might be the most underrated part of the prompt
- What all of this means for devs, from hobbyists to infra teams building with AI in the loop

Learn more: Introducing Fullstack-Bench

Follow everyone on X: Sujay Jayakar, Martin Casado

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.

From "AI + a16z"

