AI CyberSecurity Podcast

Kaizenteq Team

AI Cybersecurity simplified for CISOs and CyberSecurity Professionals.

Top 10 AI CyberSecurity Podcast Episodes

Goodpods has curated a list of the 10 best AI CyberSecurity Podcast episodes, ranked by the number of listens and likes each episode has garnered from our listeners. If you are listening to the AI CyberSecurity Podcast for the first time, there's no better place to start than with one of these standout episodes. If you are a fan of the show, vote for your favorite AI CyberSecurity Podcast episode by adding your comments to the episode page.

AI CyberSecurity Podcast - How AI is changing Detection Engineering & SOC Operations?

02/07/25 • 57 min

AI is revolutionizing many things, but how does it impact detection engineering and SOC teams? In this episode, we sit down with Dylan Williams, a cybersecurity practitioner with nearly a decade of experience in blue team operations and detection engineering. We discuss how AI is reshaping threat detection and response, the future role of detection engineers in an AI-driven world, whether AI can reduce false positives and speed up investigations, the difference between automation and agentic AI in security, and practical AI tools you can use right now in detection and response.

Questions asked:

(00:00) Introduction

(02:01) A bit about Dylan Williams

(04:05) Keeping up with AI advancements

(06:24) Detection with and without AI

(08:11) Would AI reduce the number of false positives?

(10:28) Does AI help identify what is a signal?

(14:18) The maturity of the current detection landscape

(17:01) Agentic AI vs Automation in Detection Engineering

(19:35) How prompt engineering is evolving with newer models?

(25:52) How AI is impacting Detection Engineering today?

(36:23) LLM Models become the detector

(42:03) What will be the future of detection?

(47:58) What can detection engineers practically do with AI today?

(52:57) Favourite AI Tool and Final thoughts on Detection Engineering

Resources spoken about during the episode:

exa.ai - The search engine for AI

Building effective agents (Anthropic's blog on different architecture and design patterns for agents) - https://www.anthropic.com/research/building-effective-agents

Introducing Ambient Agents (LangChain's blog on Ambient Agents) - https://blog.langchain.dev/introducing-ambient-agents/

Jared Atkinson's Blog on Capability Abstraction - https://posts.specterops.io/capability-abstraction-fbeaeeb26384

LangGraph Studio - https://studio.langchain.com/

n8n - https://n8n.io/

Flowise - https://flowiseai.com/

CrewAI - https://www.crewai.com/

AI CyberSecurity Podcast - AI Code Generation - Security Risks and Opportunities

08/02/24 • 70 min

How much can we really trust AI-generated code over human-generated code today? How does AI-generated code compare to human-generated code in 2024? Caleb and Ashish spoke to Guy Podjarny, Founder and CEO at Tessl, about the evolving world of AI-generated code and the current state and future trajectory of AI in software development. They discuss the reliability of AI-generated code compared to human-generated code, the potential security risks, and the necessary precautions organizations must take to safeguard their systems.

Guy has also recently launched his own podcast with Simon Maple called The AI Native Dev, which you can check out if you are interested in hearing more about the AI Native development space.

Questions asked:

(00:00) Introduction

(02:36) What is AI Generated Code?

(03:45) Should we trust AI Generated Code?

(14:34) The current usage of AI in Code Generation

(18:27) Securing AI Generated Code

(23:44) Reality of Securing AI Generated Code Today

(30:22) The evolution of Security Testing

(37:36) Where to start with AI Security today?

(50:18) Evolution of the broader cybersecurity industry with AI

(54:03) The Positives of AI for Cybersecurity

(01:00:48) The startup Landscape around AI

(01:03:16) The future of AppSec

(01:05:53) The future of security with AI


What is the current state and future potential of AI Security? This special episode was recorded LIVE at BSidesSF (that's why it's a little noisy), as we were amongst all the exciting action. Clint Gibler, Caleb Sima and Ashish Rajan sat down to talk about practical uses of AI today, how AI will transform security operations, whether AI can be trusted to manage permissions, and the importance of understanding AI's limitations and strengths.

Questions asked:

(00:00) Introduction

(02:24) A bit about Clint Gibler

(03:10) What's top of mind with AI Security?

(04:13) tldr of Clint's BSidesSF Talk

(08:33) AI Summarisation of Technical Content

(09:47) Clint’s favourite part of the talk - Fuzzing

(15:30) Questions Clint got about his talk

(17:11) Human oversight and AI

(25:04) Perfection getting in the way of good

(30:15) AI on the engineering side

(36:31) Predictions for AI Security

Resources from this conversation:

Caleb's Keynote at BSides SF

Clint's Newsletter

AI CyberSecurity Podcast - What is AI Native Security?

10/23/24 • 27 min

In this episode of the AI Cybersecurity Podcast, Caleb and Ashish sat down with Vijay Bolina, Chief Information Security Officer at Google DeepMind, to explore the evolving world of AI security. Vijay shared his unique perspective on the intersection of machine learning and cybersecurity, explaining how organizations like Google DeepMind are building robust, secure AI systems.

We dive into critical topics such as AI native security, the privacy risks posed by foundation models, and the complex challenges of protecting sensitive user data in the era of generative AI. Vijay also sheds light on the importance of embedding trust and safety measures directly into AI models, and how enterprises can safeguard their AI systems.

Questions asked:

(00:00) Introduction

(01:39) A bit about Vijay

(03:32) DeepMind and Gemini

(04:38) Training data for models

(06:27) Who can build an AI Foundation Model?

(08:14) What is AI Native Security?

(12:09) Does the response time change for AI Security?

(17:03) What should enterprise security teams be thinking about?

(20:54) Shared fate with Cloud Service Providers for AI

(25:53) Final Thoughts and Predictions

AI CyberSecurity Podcast - Types of Artificial Intelligence | AI Explained

11/16/23 • 30 min

To understand what role AI will play in the world of cybersecurity, it's important to understand the technology behind it. Caleb and Ashish are levelling up the playing field and laying the foundations with AI primers for cybersecurity in season 1 of the AI CyberSecurity Podcast.

What was discussed:

(00:00) Introduction

(02:36) Learning about AI/ML

(08:00) Acronyms of AI

(10:49) AGI - Artificial General Intelligence

(11:29) Three states of AGI

(13:48) AI/ML in Security Products

(17:03) Different kinds of learning

(21:51) What's hot in the AI section!

AI CyberSecurity Podcast - How AI can be used in Cybersecurity Operations?

04/12/24 • 44 min

How can AI change a Security Analyst's workflow? Ashish and Caleb caught up with Ely Kahn, VP of Product at SentinelOne, to discuss the revolutionary impact of generative AI on cybersecurity. Ely spoke about the challenges and solutions in integrating AI into cybersecurity operations, highlighting how it can simplify complex processes and empower junior to mid-tier analysts.

Questions asked:

(00:00) Introduction

(03:27) A bit about Ely Kahn

(04:29) Current State of AI in Cybersecurity

(06:45) How AI could impact Cybersecurity User Workflow?

(08:37) What are some of the concerns with such a model?

(14:22) How does it compare to an analyst not using this model?

(21:41) What's stopping models from going into autopilot?

(30:14) The reasoning for using multiple LLMs

(34:24) ChatGPT vs Anthropic vs Mistral

You can discover more about SentinelOne's Purple AI here!

AI CyberSecurity Podcast - The Evolution of Pentesting with AI

04/04/24 • 53 min

How is AI transforming traditional approaches to offensive security, pentesting, security posture management, security assessment, and even code security? Caleb and Ashish spoke to Rob Ragan, Principal Technology Strategist at Bishop Fox about how AI is being implemented in the world of offensive security and what the right way is to threat model an LLM.

Questions asked:

(00:00) Introductions

(02:12) A bit about Rob Ragan

(03:33) AI in Security Assessment and Pentesting

(09:15) How is AI impacting pentesting?

(14:50) Where to start with AI implementation in Offensive Security?

(18:19) AI and Static Code Analysis

(21:57) Key components of LLM pentesting

(24:37) Testing what's inside a functional model?

(29:37) What's the right way to threat model an LLM?

(33:52) Current State of Security Frameworks for LLMs

(43:04) Is AI changing how Red Teamers operate?

(44:46) A bit about Claude 3

(52:23) Where can you connect with Rob

Resources spoken about in this episode:

https://www.pentestmuse.ai/

https://github.com/AbstractEngine/pentest-muse-cli

https://docs.garak.ai/garak/

https://github.com/Azure/PyRIT

https://bishopfox.github.io/llm-testing-findings/

https://www.microsoft.com/en-us/research/project/autogen/

AI CyberSecurity Podcast - AI's role in Security Operation Automation

03/18/24 • 51 min

What is the current reality of AI automation in cybersecurity? Caleb and Ashish spoke to Edward Wu, founder and CEO of Dropzone AI, about the current capabilities and limitations of AI technologies, particularly large language models (LLMs), in the cybersecurity domain. From the challenges of achieving true automation to the nuanced process of training AI systems for cyber defense, Edward, Caleb and Ashish shared their insights into the complexities of implementing AI, the importance of precision in AI prompt engineering, the critical role of reference data in AI performance, and how cybersecurity professionals can leverage AI to amplify their defense capabilities without expanding their teams.

Questions asked:

(00:00) Introduction

(05:22) A bit about Edward Wu

(08:31) What is an LLM?

(11:36) Why have we not seen enterprise-ready automation in cybersecurity?

(14:37) Distilling the AI noise in the vendor landscape

(18:02) Solving challenges with using AI in enterprise internally

(21:35) How to deal with GenAI Hallucinations?

(27:03) Protecting customer data from a RAG perspective

(29:12) Protecting your own data from being used to train models

(34:47) What skillset is required in a team to build its own cybersecurity LLMs?

(38:50) Learn how to prompt engineer effectively

AI CyberSecurity Podcast - Where is the Balance Between AI Innovation and Security?

02/23/24 • 31 min

There is a complex interplay between innovation and security in the age of GenAI. As the digital landscape evolves at an unprecedented pace, Daniel, Caleb and Ashish share their insights on the challenges and opportunities that come with integrating AI into cybersecurity strategies.

Caleb challenges the current trajectory of safety mechanisms in technology and how overregulation may inhibit innovation and the advancement of AI's capabilities. Daniel Miessler, on the other hand, emphasizes the necessity of accepting technological inevitabilities and adapting to live in a world shaped by AI. Together, they explore the potential overreach in AI safety measures and discuss how companies can navigate the fine line between fostering innovation and ensuring security.

Questions asked:

(00:00) Introduction

(03:19) Maintaining Balance of Innovation and Security

(06:21) Uncensored LLM Models

(09:32) Key Considerations for Internal LLM Models

(12:23) Balance between Security and Innovation with GenAI

(16:03) Enterprise risk with GenAI

(25:53) How to address enterprise risk with GenAI?

(28:12) Threat Modelling LLM Models


Dive deep into the world of AI agent communication with this episode. Join hosts Caleb Sima and Ashish Rajan as they break down the crucial protocols enabling AI agents to interact and perform tasks: Model Context Protocol (MCP) and Agent-to-Agent (A2A).

Discover what MCP and A2A are, why they're essential for unlocking AI's potential beyond simple chatbots, and how they allow AI to gain "hands and feet" to interact with systems like your desktop, browsers, or enterprise tools like Jira. The hosts explore practical use cases, the underlying technical architecture involving clients and servers, and the significant security implications, including remote execution risks, authentication challenges, and the need for robust authorization and privilege management.

The discussion also covers Google's entry with the A2A protocol, comparing and contrasting it with Anthropic's MCP, and debating whether they are complementary or competing standards. Learn about the potential "AI-ification" of services, the likely emergence of MCP firewalls, and predictions for the future of AI interaction, such as AI DNS.

If you're working with AI, managing cybersecurity in the age of AI, or simply curious about how AI agents communicate and the associated security considerations, this episode provides critical insights and context.
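The client/server architecture discussed in this episode is easiest to see in the wire format: MCP is built on JSON-RPC 2.0, where a client (the AI agent's host) sends a method call such as `tools/call` and a server returns a matching result. The sketch below is illustrative only, not an implementation of the protocol; the tool name `jira_create_issue` and the in-process "server" registry are hypothetical stand-ins for the kind of enterprise integration (e.g. Jira) the hosts mention.

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize a client -> server JSON-RPC 2.0 request asking the server
    to execute a named tool with the given arguments."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

def handle_request(raw: str, registry: dict) -> str:
    """Minimal server-side dispatch: look up the requested tool, run it,
    and wrap its output in a JSON-RPC response with the same id."""
    req = json.loads(raw)
    tool = registry[req["params"]["name"]]
    result = tool(**req["params"]["arguments"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# A toy "server" exposing one hypothetical tool to the agent.
registry = {
    "jira_create_issue": lambda title: {"issue": "PROJ-1", "title": title},
}

request = make_tool_call(1, "jira_create_issue", {"title": "Rotate leaked API key"})
response = json.loads(handle_request(request, registry))
print(response["result"]["issue"])  # → PROJ-1
```

This shape also makes the episode's security concerns concrete: because `tools/call` triggers real execution on the server, the authentication, authorization, and privilege-management questions the hosts raise apply to every message crossing this boundary.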

Questions asked:

(00:00) Introduction: AI Agents & Communication Protocols

(02:06) What is MCP (Model Context Protocol)? Defining AI Agent Communication

(05:54) MCP & Agentic Workflows: Enabling AI Actions & Use Cases

(09:14) Why MCP Matters: Use Cases & The Need for AI Integration

(14:27) MCP Security Risks: Remote Execution, Authentication & Vulnerabilities

(19:01) Google's A2A vs Anthropic's MCP: Protocol Comparison & Debate

(31:37) Future-Proofing Security: MCP & A2A Impact on Security Roadmaps

(38:00) MCP vs A2A: Predicting the Dominant AI Protocol

(44:36) The Future of AI Communication: MCP Firewalls, AI DNS & Beyond

(47:45) Real-World MCP/A2A: Adoption Hurdles & Practical Examples



FAQ

How many episodes does AI CyberSecurity Podcast have?

AI CyberSecurity Podcast currently has 26 episodes available.

What topics does AI CyberSecurity Podcast cover?

The podcast is about Podcasts and Technology.

What is the most popular episode on AI CyberSecurity Podcast?

The episode title 'The Evolution of Pentesting with AI' is the most popular.

What is the average episode length on AI CyberSecurity Podcast?

The average episode length on AI CyberSecurity Podcast is 47 minutes.

How often are episodes of AI CyberSecurity Podcast released?

Episodes of AI CyberSecurity Podcast are typically released every 19 days.

When was the first episode of AI CyberSecurity Podcast?

The first episode of AI CyberSecurity Podcast was released on Oct 9, 2023.
