The New Stack Podcast
The New Stack Podcast is all about the developers, software engineers and operations people who build at-scale architectures that change the way we develop and deploy software. For more content from The New Stack, subscribe on YouTube at: https://www.youtube.com/c/TheNewStack
At the PyTorch Conference 2025 in San Francisco, Luca Antiga — CTO of Lightning AI and head of the PyTorch Foundation’s Technical Advisory Council — discussed the evolution and influence of PyTorch. Originally designed to be “Pythonic” and researcher-friendly, Antiga emphasized that PyTorch has remained central across m
Harness co-founder Jyoti Bansal highlights a growing issue in software development: while AI tools help generate more code, they often create bottlenecks further along the pipeline, especially in testing, deployment, and compliance. Since its 2017 launch, Harness has aimed to streamline these stages using AI and machine learning
The agentic AI space faces challenges around secure, governed connectivity between agents, tools, large language models, and microservices. To address this, Solo.io developed two open-source projects: Kagent and Agentgateway. While Kagent, donated to the Cloud Native Computing Foundation, helps scale AI agents, it lack
David Cramer, founder and chief product officer of Sentry, remains skeptical about generative AI's current ability to replace human engineers, particularly in software production. While he acknowledges AI tools aren't yet reliable enough for full autonomy—especially in tasks like patch generation—he sees value in using
Cursor, the AI code editor, recently integrated with Linear, a project management tool, enabling developers to assign tasks directly to Cursor's background coding agent within Linear. The collaboration felt natural, as Cursor already used Linear internally. Linear's new agent-specific API played a key role in enabling
In this episode of The New Stack Agents, ServiceNow CTO and co-founder Pat Casey discusses why the company runs 90% of its workloads—including AI infrastructure—on its own physical servers rather than the public cloud. ServiceNow maintains GPU hubs across global data centers, enabling efficient, low-latency AI operations