
😬 GitHub Copilot: AI Security Breach Exposes Private Repos

03/07/25 • 10 min

4 Listeners

Kabir's Tech Dives

GitHub's Copilot suffered a security breach in which it leaked sensitive data from repositories that were once public but later made private. Researchers found that Copilot retained this information even after the repositories went private, affecting more than 16,000 organizations. Microsoft initially classified the issue as low severity, drawing criticism for its handling of user privacy. The model could regurgitate sensitive data, such as API keys and proprietary code, potentially introducing leaked secrets into other projects. Experts recommend immediately rotating any keys or credentials that were ever exposed in a public repository. The incident highlights a growing concern in AI security: the risk of models training on public data that later becomes private.
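Rotating credentials starts with knowing what was exposed. As a minimal, hypothetical sketch (the pattern set and the `scan_text` helper are invented for illustration; real scanners such as gitleaks or trufflehog cover far more formats and scan full git history), a regex pass over repository text can flag a few well-known key shapes:

```python
import re

# Illustrative patterns only; real credential formats are more varied.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"),
}

def scan_text(text):
    """Return (pattern_name, matched_string) pairs for any hits found."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits
```

Anything such a scan turns up in a repo that was ever public should be treated as compromised and rotated, regardless of whether the repo is private now.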

Send us a text

Support the show

Podcast:
https://kabir.buzzsprout.com
YouTube:
https://www.youtube.com/@kabirtechdives
Please subscribe and share.



Previous Episode


📉 2024 Zero-Click Search Study: Google (US vs. EU)

4 Recommendations

SparkToro's 2024 Zero-Click Search Study analyzes Google search behavior in the US and EU using clickstream data from Datos. The study reveals that a majority of searches end without a click to the open web, with a significant portion of clicks directed to Google-owned properties. The report highlights that for every 1,000 searches, only around 360-374 clicks lead to external websites. Despite concerns about Google's dominance and the impact of AI Overviews, the study indicates Google's search market share remains strong, though traffic to the open web is decreasing. The research suggests that EU regulations may have had a small impact on curbing Google's self-preferencing compared to the US. Ultimately, the study underscores the challenges for web publishers in gaining traffic from Google search.
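To put the headline figure in perspective, the quoted numbers translate directly into an open-web click rate (a back-of-envelope calculation using only the figures above):

```python
searches = 1_000
external_clicks = (360, 374)  # reported range of clicks to external websites

# Share of searches that produce a click to the open web, and its complement.
low, high = (clicks / searches for clicks in external_clicks)
print(f"Open-web click rate: {low:.1%}-{high:.1%}")
print(f"Searches ending with no external click: {1 - high:.1%}-{1 - low:.1%}")
```

In other words, by the study's numbers, roughly two out of every three Google searches never send a click to the open web.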


Next Episode


Diffusion LLMs: A Paradigm Shift in Text Generation

4 Recommendations

Diffusion Large Language Models generate an entire response at once, using a technique inspired by text-to-image generation. This approach, developed by Inception Labs, is claimed to be 10 times faster and 10 times cheaper than traditional autoregressive models, which generate one token at a time. Instead of emitting tokens left to right, diffusion models refine a rough, almost nonsensical draft into a coherent answer through iterative steps. The resulting speed, over a thousand tokens per second on standard NVIDIA H100 chips, drastically reduces waiting times and leaves room for more test-time compute. This breakthrough not only accelerates coding workflows but also enables more advanced reasoning, error correction, and controllable generation, opening new possibilities for AI agents, edge applications, and other use cases. AI experts such as Andrej Karpathy note that diffusion models may exhibit a unique "psychology," with new strengths, weaknesses, and behaviors.
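As a toy illustration (not Inception Labs' actual model; the function and its names are invented for this sketch), the contrast with left-to-right generation can be mimicked by starting from a fully masked draft and resolving a random subset of positions at each refinement step:

```python
import random

def diffusion_denoise_sketch(target, steps=3, seed=0):
    """Toy sketch of diffusion-style generation: begin with an all-masked
    sequence and reveal a share of positions per step, refining the whole
    draft in parallel rather than emitting one token after another."""
    rng = random.Random(seed)
    tokens = ["<mask>"] * len(target)
    hidden = list(range(len(target)))
    trajectory = [list(tokens)]          # record each refinement stage
    for step in range(steps):
        # Reveal an equal share of the remaining masked positions.
        k = max(1, len(hidden) // (steps - step))
        for i in rng.sample(hidden, min(k, len(hidden))):
            tokens[i] = target[i]
            hidden.remove(i)
        trajectory.append(list(tokens))
    for i in hidden:                     # resolve any leftovers
        tokens[i] = target[i]
    return trajectory, tokens
```

Printing the trajectory shows the draft sharpening everywhere at once, which is the intuition behind the parallelism (and speed) the episode describes; a real diffusion LLM predicts the revealed tokens with a neural network rather than copying a known target.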

