AI Code Generation - Security Risks and Opportunities

08/02/24 • 70 min

AI CyberSecurity Podcast

How much can we really trust AI-generated code over human-generated code today? How does AI-generated code compare to human-generated code in 2024? Caleb and Ashish spoke to Guy Podjarny, Founder and CEO at Tessl, about the evolving world of AI-generated code and the current state and future trajectory of AI in software development. They discuss the reliability of AI-generated code compared to human-generated code, the potential security risks, and the precautions organizations must take to safeguard their systems.

Guy has also recently launched his own podcast with Simon Maple called The AI Native Dev, which you can check out if you are interested in hearing more about the AI Native development space.

Questions asked:

(00:00) Introduction

(02:36) What is AI Generated Code?

(03:45) Should we trust AI Generated Code?

(14:34) The current usage of AI in Code Generation

(18:27) Securing AI Generated Code

(23:44) Reality of Securing AI Generated Code Today

(30:22) The evolution of Security Testing

(37:36) Where to start with AI Security today?

(50:18) Evolution of the broader cybersecurity industry with AI

(54:03) The Positives of AI for Cybersecurity

(01:00:48) The startup Landscape around AI

(01:03:16) The future of AppSec

(01:05:53) The future of security with AI

Previous Episode

Exploring Top AI Security Frameworks

Which AI Security Framework is right for you? As AI gains momentum, we are starting to see quite a few frameworks appear, but the question is: which one should you start with, and can AI help you decide? Caleb and Ashish tackle this challenge head-on, comparing three major AI security frameworks: Databricks, NIST, and OWASP Top 10. They break down the key components of each framework, discuss practical implementation strategies, and provide actionable insights for CISOs and security leaders. They may have had some help along the way.

Questions asked:

(00:00) Introduction

(02:54) Databricks AI Security Framework (DASF)

(06:38) Top 3 things from DASF by Claude 3

(07:32) Top 3 things from DASF by ChatGPT

(08:46) DASF Use Case Scenario

(11:01) Thoughts on DASF

(13:18) OWASP Top 10 for LLM Models

(20:12) Google's Secure AI Framework (SAIF)

(21:31) NIST AI Risk Management Framework

(25:18) Claude 3 summarises NIST RMF for a 5-year-old

(28:00) ChatGPT compares NIST RMF and NIST CSF

(28:48) How do the frameworks compare?

(36:46) Summary of all the frameworks

Resources from this episode:

Databricks AI Security Framework (DASF)

OWASP Top 10 for LLM

NIST AI Risk Management Framework

Google Secure AI Framework

Next Episode

Our insights from Google's AI Misuse Report

In this episode of the AI Cybersecurity Podcast, we dive deep into the latest findings from Google's DeepMind report on the misuse of generative AI. Hosts Ashish and Caleb explore over 200 real-world cases of AI misuse across critical sectors like healthcare, education, and public services. They discuss how AI tools are being used to create deepfakes, fake content, and more, often with minimal technical expertise. They analyze these threats from a CISO's perspective but also include an intriguing comparison between human analysis and AI-generated insights using tools like ChatGPT and Anthropic's Claude. From the rise of AI-powered impersonation to the manipulation of public opinion, this episode uncovers the real dangers posed by generative AI in today’s world.

Questions asked:

(00:00) Introduction

(03:39) Generative Multimodal Artificial Intelligence

(09:16) Introduction to the report

(17:07) Enterprise Compromise of GenAI systems

(20:23) Gen AI Systems Compromise

(27:11) Human vs Machine

Resources spoken about during the episode:

Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data
