
The New Stack Podcast
The New Stack
Top 10 The New Stack Podcast Episodes
Goodpods has curated a list of the 10 best The New Stack Podcast episodes, ranked by the number of listens and likes each episode has garnered from our listeners. If you are listening to The New Stack Podcast for the first time, there's no better place to start than with one of these standout episodes. If you are a fan of the show, vote for your favorite The New Stack Podcast episode by adding your comments to the episode page.

Google AI Infrastructure PM On New TPUs, Liquid Cooling and More
The New Stack Podcast
05/13/25 • 19 min
At Google Cloud Next '25, the company introduced Ironwood, its most advanced custom Tensor Processing Unit (TPU) to date. With 9,216 chips per pod delivering 42.5 exaflops of compute power, Ironwood doubles the performance per watt compared to its predecessor. Senior product manager Chelsie Czop explained that designing TPUs involves balancing power, thermal constraints, and interconnectivity.
Google's long-term investment in liquid cooling, now in its fourth generation, plays a key role in managing the heat generated by these powerful chips. Czop highlighted the incremental design improvements made visible through changes in the data center setup, such as liquid cooling pipe placements. Customers often ask whether to use TPUs or GPUs, but the answer depends on their specific workloads and infrastructure. Some, like Moloco, have seen a 10x performance boost by moving directly from CPUs to TPUs. However, many still use both TPUs and GPUs. As models evolve faster than hardware, Google relies on collaborations with teams like DeepMind to anticipate future needs.
Learn more from The New Stack about the latest AI infrastructure insights from Google Cloud:
Google Cloud Therapist on Bringing AI to Cloud Native Infrastructure
A2A, MCP, Kafka and Flink: The New Stack for AI Agents
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

Kong’s AI Gateway Aims to Make Building with AI Easier
The New Stack Podcast
04/03/25 • 21 min
AI applications are evolving beyond chatbots into more complex and transformative solutions, according to Marco Palladino, CTO and co-founder of Kong. In a recent episode of The New Stack Makers, he discussed the rise of AI agents, which act as "virtual employees" to enhance organizational efficiency. For instance, AI can now function as a product manager for APIs—analyzing documentation, detecting inaccuracies, and making corrections.
However, reliance on AI agents brings security risks, such as data leakage and governance challenges. Organizations need observability and safeguards, but developers often resist implementing these requirements manually. As GenAI adoption matures, teams seek ways to accelerate development without rebuilding security measures repeatedly.
To address these challenges, Kong introduced AI Gateway, an open-source plugin for its API Gateway. AI Gateway supports multiple AI models across providers like AWS, Microsoft, and Google, offering developers a universal API to integrate AI securely and efficiently. It also features automated retrieval-augmented generation (RAG) pipelines to minimize hallucinations.
Palladino emphasized the need for consistent security in AI infrastructure, ensuring developers can focus on innovation while leveraging built-in protections.
Learn more from The New Stack about Kong’s AI Gateway:
Kong: New ‘AI-Infused’ Features for API Management, Dev Tools
From Zero to a Terraform Provider for Kong in 120 Hours
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

Linux xz and the Great Flaws in Open Source
The New Stack Podcast
06/27/24 • 12 min
The Linux xz utils backdoor exploit, discussed in an interview at the Open Source Summit 2024 on The New Stack Makers with John Kjell, director of open source at TestifySec, highlights critical vulnerabilities in the open-source ecosystem. This exploit involved a maintainer of the Linux xz utils project adding malicious code to a new release, discovered by a Microsoft engineer. This breach demonstrates the high trust placed in maintainers and how this trust can be exploited. Kjell explains that the backdoor allowed remote code execution or unauthorized server access through SSH connections.
The exploit reveals a significant flaw: the human element in open source. Maintainers, often under pressure from company executives to quickly address vulnerabilities and updates, can become targets for social engineering. Attackers built trust within the community by contributing to projects over time, eventually gaining maintainer status and inserting malicious code. This scenario underscores the economic pressures on open source, where maintainers work unpaid and face demands from large organizations, exposing the fragility of the open-source supply chain. Despite these challenges, the community's resilience is also evident in their rapid response to such threats.
Learn more from The New Stack about the Linux xz utils backdoor:
Linux xz Backdoor Damage Could Be Greater Than Feared
Unzipping the XZ Backdoor and Its Lessons for Open Source
The Linux xz Backdoor Episode: An Open Source Mystery
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

What’s the Future of Distributed Ledgers?
The New Stack Podcast
07/02/24 • 23 min
Blockchain technology continues to drive innovation despite declining hype, with distributed ledger technologies (DLTs) offering secure, decentralized digital asset transactions. In an On the Road episode of The New Stack Makers recorded at Open Source Summit North America, Andrew Aitken of Hedera and Dr. Leemon Baird of Swirlds Labs discussed DLTs with Alex Williams.
Baird highlighted the Hashgraph Consensus Algorithm, an efficient, secure distributed consensus mechanism he created, leveraging a hashgraph data structure and gossip protocol for rapid, robust transaction sharing among network nodes. This algorithm, which has been open source under the Apache 2.0 license for nine months, aims to maintain decentralization by involving 32 global organizations in its governance. Aitken emphasized building an ecosystem of DLT contributors, adhering to open source best practices, and developing cross-chain applications and more wallets to enhance exchange capabilities. This collaborative approach seeks to ensure transparency in both governance and software development. For more insights into DLT’s 2.0 era, listen to the full episode.
Learn more from The New Stack about distributed ledger technologies (DLTs):
IOTA Distributed Ledger: Beyond Blockchain for Supply Chains
Why I Changed My Mind About Blockchain
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

Why Are So Many Developers Out of Work in 2024?
The New Stack Podcast
12/12/24 • 21 min
The tech industry faces a paradox: despite high demand for skills, many developers and engineers are unemployed. At KubeCon + CloudNativeCon North America in Salt Lake City, Utah, Andela and the Cloud Native Computing Foundation (CNCF) announced an initiative to train 20,000 technologists in cloud native computing over the next decade. Ross O'Neill, Senior Program Manager at Andela, and Chris Aniszczyk, CNCF’s CTO, highlighted the lack of Kubernetes-certified professionals in regions like Africa and emphasized the need for global inclusivity to make cloud native technology ubiquitous.
Andela, operating in over 135 countries and founded in Nigeria, views this program as a continuation of its mission to upskill African talent, aligning with its partnerships with tech giants like Google, AWS, and Nvidia. This initiative also addresses the increasing employer demand for Kubernetes and modern cloud skills, reflecting a broader skills mismatch in the tech workforce.
Aniszczyk noted that companies urgently seek expertise in cloud native infrastructure, observability, and platform engineering. The partnership aims to bridge these gaps, offering opportunities to meet evolving global tech needs.
Learn more from The New Stack about developer talent, skills and needs:
Top Developer Skills for AI and Cloud Jobs
5 Software Development Skills AI Will Render Obsolete
Cloud Native Skill Gaps are Killing Your Gains
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

Your AI Coding Buddy Is Always Available at 2 a.m.
The New Stack Podcast
05/15/25 • 20 min
Aja Hammerly, director of developer relations at Google, sees AI as the always-available coding partner developers have long wished for—especially in those late-night bursts of inspiration. In a conversation with Alex Williams at Google Cloud Next, she described AI-assisted coding as akin to having a virtual pair programmer who can fill in gaps and offer real-time support.
Hammerly urges developers to start their AI journey with tools that assist in code writing and explanation before moving into more complex AI agents. She distinguishes two types of DevEx AI: using AI to build apps and using it to eliminate developer toil. For Hammerly, this includes letting AI handle frontend work while she focuses on backend logic. The newly launched Firebase Studio exemplifies this dual approach, offering an AI-enhanced IDE with flexible tools like prototyping, code completion, and automation. Her advice? Developers should explore how AI fits into their unique workflow—because development, at its core, is deeply personal and individual.
Learn more from The New Stack about the latest AI insights with Google Cloud:
Google AI Coding Tool Now Free, With 90x Copilot’s Output
Gemini 2.5 Pro: Google’s Coding Genius Gets an Upgrade
Q&A: How Google Itself Uses Its Gemini Large Language Model
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

How Heroku Is ‘Re-Platforming’ Its Platform
The New Stack Podcast
04/24/25 • 18 min
Heroku has been undergoing a major transformation, re-platforming its entire Platform as a Service (PaaS) offering over the past year and a half. This ambitious effort, dubbed “Fir,” will soon reach general availability. According to Betty Junod, CMO and SVP at Heroku (owned by Salesforce), the overhaul includes a shift to Kubernetes and OCI standards, reinforcing Heroku’s commitment to open source.
The platform now features Heroku Cloud Native Buildpacks, which let developers create container images without Dockerfiles. Originally built on Ruby on Rails and predating Docker and AWS, Heroku now supports eight programming languages. The company has also deepened its open source engagement by becoming a platinum member of the Cloud Native Computing Foundation (CNCF), contributing to projects like OpenTelemetry. Additionally, Heroku has open sourced its Twelve-Factor App methodology, inviting the community to help modernize it to address evolving needs such as secrets management and workload identity. This signals a broader effort to align Heroku’s future with the cloud native ecosystem.
Learn more from The New Stack about Heroku's approach to Platform-as-a-Service:
Return to PaaS: Building the Platform of Our Dreams
Heroku Moved Twelve-Factor Apps to Open Source. What’s Next?
How Heroku Is Positioned To Help Ops Engineers in the GenAI Era
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

Platform Engineering Rules, now with AI
The New Stack Podcast
10/24/24 • 25 min
Platform engineering will be a key focus at KubeCon this year, with a special emphasis on AI platforms. Priyanka Sharma, executive director of the Linux Foundation, highlighted the convergence of platform engineering and AI during an interview on The New Stack Makers with Adobe’s Joseph Sandoval. KubeCon will feature talks from experts like Chen Goldberg of CoreWeave and Aparna Sinha of CapitalOne, showcasing how AI workloads will transform platform operations.
Sandoval emphasized the growing maturity of platform engineering over the past two to three years, now centered on addressing user needs. He also discussed Adobe's collaboration on CNOE, an open-source initiative for internal developer platforms. The intersection of platform engineering, Kubernetes, cloud-native technologies, and AI raises questions about scaling infrastructure management with AI, potentially improving efficiency and reducing toil for roles like SRE and DevOps. Sharma noted that reference architectures, long requested by the CNCF community, will be highlighted at the event, guiding users without dictating solutions.
Learn more from The New Stack about Kubernetes:
Cloud Native Networking as Kubernetes Starts Its Second Decade
Primer: How Kubernetes Came to Be, What It Is, and Why You Should Care
How Cloud Foundry Has Evolved With Kubernetes
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

OpenSearch: What’s Next for the Search and Analytics Suite?
The New Stack Podcast
04/10/25 • 20 min
OpenSearch has evolved significantly since its 2021 launch, recently reaching a major milestone with its move to the Linux Foundation. This shift from company-led to foundation-based governance has accelerated community contributions and enterprise adoption, as discussed by NetApp’s Amanda Katona in a New Stack Makers episode recorded at KubeCon + CloudNativeCon Europe. NetApp, an early adopter of OpenSearch following Elasticsearch’s licensing change, now offers managed services on the platform and contributes actively to its development.
Katona emphasized how neutral governance under the Linux Foundation has lowered barriers to enterprise contribution, noting a 56% increase in downloads since the transition and growing interest from developers. OpenSearch 3.0, featuring a Lucene 10 upgrade, promises faster search capabilities—especially relevant as data volumes surge. NetApp’s ongoing investments include work on machine learning plugins and developer training resources.
Katona sees the Linux Foundation’s involvement as key to OpenSearch’s long-term success, offering vendor-neutral governance and reassuring users seeking openness, performance, and scalability in data search and analytics.
Learn more from The New Stack about OpenSearch:
Report: OpenSearch Bests ElasticSearch at Vector Modeling
AWS Transfers OpenSearch to the Linux Foundation
OpenSearch: How the Project Went From Fork to Foundation
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

Container Security and AI: A Talk with Chainguard's Founder
The New Stack Podcast
04/22/25 • 20 min
In this episode of The New Stack Makers, recorded at KubeCon + CloudNativeCon Europe, Alex Williams speaks with Ville Aikas, Chainguard founder and early Kubernetes contributor. They reflect on the evolution of container security, particularly how early assumptions—like trusting that users would validate container images—proved problematic. Aikas recalls the lack of secure defaults, such as allowing containers to run as root, stemming from the team’s internal Google perspective, which led to unrealistic expectations about external security practices.
The Kubernetes community has since made strides with governance policies, secure defaults, and standard practices like avoiding long-lived credentials and supporting federated authentication. Aikas founded Chainguard to address the need for trusted, minimal, and verifiable container images—offering zero-CVE images, transparent toolchains, and full SBOMs. This security-first philosophy now extends to virtual machines and Java dependencies via Chainguard Libraries.
The discussion also highlights the rising concerns around AI/ML security in Kubernetes, including complex model dependencies, GPU integrations, and potential attack vectors—prompting Chainguard’s move toward locked-down AI images.
Learn more from The New Stack about container security and AI:
Chainguard Takes Aim At Vulnerable Java Libraries
Clean Container Images: A Supply Chain Security Revolution
Revolutionizing Offensive Security: A New Era With Agentic AI
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
FAQ
How many episodes does The New Stack Podcast have?
The New Stack Podcast currently has 357 episodes available.
What topics does The New Stack Podcast cover?
The podcast covers news, open source, tech, DevOps, tech news, Kubernetes, software development, podcasts, technology, and developers.
What is the most popular episode on The New Stack Podcast?
The episode title 'Who’s Keeping the Python Ecosystem Safe?' is the most popular.
What is the average episode length on The New Stack Podcast?
The average episode length on The New Stack Podcast is 27 minutes.
How often are episodes of The New Stack Podcast released?
Episodes of The New Stack Podcast are typically released every 5 days, 20 hours.
When was the first episode of The New Stack Podcast?
The first episode of The New Stack Podcast was released on Sep 4, 2020.