The Shifting Privacy Left Podcast

Debra J. Farber (Shifting Privacy Left)

Shifting Privacy Left features lively discussions on the need for organizations to embed privacy by design into the UX/UI, architecture, engineering / DevOps and the overall product development processes BEFORE code or products are ever shipped. Each Tuesday, we publish a new episode that features interviews with privacy engineers, technologists, researchers, ethicists, innovators, market makers, and industry thought leaders. We dive deeply into this subject and unpack the exciting elements of emerging technologies and tech stacks that are driving privacy innovation; strategies and tactics that win trust; privacy pitfalls to avoid; privacy tech issues ripped from the headlines; and other juicy topics of interest.


Top 10 The Shifting Privacy Left Podcast Episodes

Goodpods has curated a list of the 10 best The Shifting Privacy Left Podcast episodes, ranked by the number of listens and likes each episode has garnered from our listeners. If you are listening to The Shifting Privacy Left Podcast for the first time, there's no better place to start than with one of these standout episodes. If you are a fan of the show, vote for your favorite The Shifting Privacy Left Podcast episode by adding your comments to the episode page.

The Shifting Privacy Left Podcast - S2E26: "Building Ethical Machines" with Reid Blackman, PhD (Virtue Consultants)

09/05/23 • 51 min

This week, I welcome philosopher, author, & AI ethics expert, Reid Blackman, Ph.D., to discuss Ethical AI. Reid authored the book, "Ethical Machines," and is the CEO & Founder of Virtue Consultants, a digital ethical risk consultancy. His extensive background in philosophy & ethics, coupled with his engagement with orgs like AWS, U.S. Bank, the FBI, & NASA, offers a unique perspective on the challenges & misconceptions surrounding AI ethics.
In our conversation, we discuss 'passive privacy' & 'active privacy' and the need for individuals to exercise control over their data. Reid explains how the quest to train data for ML/AI can lead to privacy violations, particularly for BigTech companies. We touch on many concepts in the AI space including: automated decision making vs. keeping "humans in the loop;" combating AI ethics fatigue; and advice for technical staff involved in AI product development. Reid stresses the importance of protecting privacy, educating users, & deciding whether to utilize external APIs or on-prem servers.
We end by highlighting his HBR article - "Generative AI-xiety" - and discuss the 4 primary areas of ethical concern for LLMs:

  1. the hallucination problem;
  2. the deliberation problem;
  3. the sleazy salesperson problem; &
  4. the problem of shared responsibility

Topics Covered:

  • What motivated Reid to write his book, "Ethical Machines"
  • The key differences between 'active privacy' & 'passive privacy'
  • Why engineering incentives to collect more data to train AI models, especially in big tech, pose challenges to data minimization
  • The importance of aligning privacy agendas with business priorities
  • Why what companies infer about people can be a privacy violation; what engineers should know about 'input privacy' when training AI models; and how that affects the output of inferred data
  • Automated decision making: when it's necessary to have a 'human in the loop'
  • Approaches for mitigating 'AI ethics fatigue'
  • The need to back up a company's stated 'values' with actions; and why there should always be 3-7 guardrails put in place for each stated value
  • The differences between 'Responsible AI' & 'Ethical AI,' and why companies seem reluctant to talk about ethics
  • Reid's article, "Generative AI-xiety," & the 4 main risks related to generative AI
  • Reid's advice for technical staff building products & services that leverage LLMs

Resources Mentioned:

Guest Info:


Privado.ai
Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.
Shifting Privacy Left Media
Where privacy engineers gather, share, & learn
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Copyright © 2022 - 2024 Principled LLC. All rights reserved.


This week, we welcome Lipika Ramaswamy, Senior Applied Scientist at Gretel AI, a privacy tech company that makes it simple to generate anonymized and safe synthetic data via APIs. Previously, Lipika worked as a Data Scientist at LeapYear Technologies and was a Machine Learning Researcher at Harvard University's Privacy Tools Project.

Lipika’s interest in both machine learning and privacy comes from her love of math and things that can be defined with equations. That interest was piqued in grad school, when she accidentally walked into a classroom holding a lecture on Applying Differential Privacy for Data Science. The intersection of data with the privacy guarantees we have available today has kept her hooked ever since.

---------
Thank you to our sponsor, Privado, the developer-friendly privacy platform
---------

There's a lot to unpack when it comes to synthetic data & privacy guarantees, and she takes listeners on a deep dive into these compelling topics. Lipika finds it elegant that privacy assurances like differential privacy revolve around math and statistics at their core. Essentially, she loves building things with 'usable privacy' & security that people can easily use. We also delve into the metrics tracked in the Gretel Synthetic Data Report, which assesses both the 'statistical integrity' & 'privacy levels' of a customer's training data.
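
To make the math-at-the-core point concrete: the classic mechanism behind many differential privacy guarantees is Laplace noise calibrated to a query's sensitivity. The sketch below is a generic Python illustration, not Gretel's implementation; the function name and parameter choices are ours.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    # Noise scale is sensitivity / epsilon: a smaller epsilon buys a
    # stronger privacy guarantee at the cost of a noisier answer.
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: privately release a count. Adding or removing one person's
# record changes a count by at most 1, so its sensitivity is 1.
ages = [34, 29, 41, 52, 38]
noisy_count = laplace_mechanism(len(ages), sensitivity=1.0, epsilon=0.5)
print(f"noisy count: {noisy_count:.2f}")
```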

Topics Covered:

  • The definition of 'synthetic data,' & good use cases
  • The process of creating synthetic data
  • How to ensure that synthetic data is 'privacy-preserving'
  • Privacy problems that may arise from overtraining ML models
  • When to use synthetic data rather than techniques like tokenization, anonymization, or aggregation
  • Examples of good use cases vs poor use cases for using synthetic data
  • Common misperceptions around synthetic data
  • Gretel.ai's approach to 'privacy assurance,' including a focus on 'privacy filters,' which prevent some privacy harms in LLM output (see the sketch after this list)
  • How to plug into the 'synthetic data' community
  • Who bears the responsibility for educating the public about new technology like LLMs and potential harms
  • Highlights from Gretel.ai's Synthesize 2023 conference
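
One common flavor of the 'privacy filters' mentioned above drops synthetic records that land too close to any real training record, reducing the chance of leaking memorized rows. This is a toy sketch of that general idea, not Gretel's actual filter; the distance metric and threshold are assumptions.

```python
import numpy as np

def similarity_filter(synthetic: np.ndarray, training: np.ndarray,
                      min_distance: float) -> np.ndarray:
    # Keep only synthetic rows whose nearest real training row is at
    # least min_distance away (Euclidean distance here, as an example).
    kept = [row for row in synthetic
            if np.min(np.linalg.norm(training - row, axis=1)) >= min_distance]
    return np.array(kept)

rng = np.random.default_rng(1)
train = rng.normal(size=(100, 4))   # stand-in for real training data
synth = rng.normal(size=(20, 4))    # stand-in for generated records
safe = similarity_filter(synth, train, min_distance=0.5)
print(f"kept {len(safe)} of {len(synth)} synthetic records")
```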

Resources Mentioned:

Guest Info:


In this conversation with Markus Lampinen, Co-founder and CEO at Prifina, a personal data platform, we discuss meaty topics like: Prifina’s approach to building privacy-respecting apps for consumer wearable sensors; LLMs (large language models) like ChatGPT; and why we should consider training our own personal AIs.
Markus shares his entrepreneurial journey in the privacy world and why he calls himself “the biggest data nerd you’ll find.” It started with tracking his own data, like his eating habits, activity, sleep, and stress, and then he built his company around that interest. His curiosity about what can be gleaned from one's own data made him wonder how that data could also improve your life or the lives of your customers.


We discuss how to approach building a privacy-first platform to unlock the value and use of IoT / sensor data. It began with the concept of individual ownership: who should actually benefit from the data that we generate? Markus says it should be individuals themselves.
Prifina boasts a strong community of 30,000 developers who align around common interests - liberty, equality & data - and who build and test prototypes that gather and utilize data in service of individuals, as opposed to corporate entities. The aim is to empower individuals, companies & developers to build apps that re-purpose individuals' own sensor data to gain privacy-enabled insights.

---------
Listen to the episode on Apple Podcasts, Spotify, iHeartRadio, or on your favorite podcast platform.
---------
Topics Covered:

  • Enabling true, consumer-grade 'data portability' with personal data clouds (a 'bring your own data' approach)
  • Use cases to illustrate the problems Prifina is solving with sensors
  • What large language models (LLMs) are, how chatbots are trained on them, and why they're so hot right now
  • The dangers of using LLMs, with emphasis on privacy harms
  • How to benefit from our own data with personal AIs
  • Advice to data scientists, researchers and developers regarding how to architect for ethical uses of LLMs
  • Who's responsible for educating the public about LLMs, chatbots, and their potential harms & limitations

Resources Mentioned:

Guest Info:

  • Follow Markus on

The Shifting Privacy Left Podcast - S1E9: Funding Web3 Privacy & Recent Web3 Trust Fails with Jim Nasr

12/20/22 • 58 min

This week, I continue my conversation with Jim Nasr, CEO of Acoer, about privacy and using distributed ledger technology (DLT). We discuss his work leading The HBAR Foundation's Privacy Market Development Fund and the trends he sees across grant applicants. We also chat about the collapse of FTX and the ripple effect it’s had on the crypto space.


Jim tells us about the types of innovations The HBAR Foundation seeks to fund; why privacy & security usability is an imperative; and use cases for decentralized identifiers (DIDs) and new "DID methods" like PKH. We also discuss FTX's collapse and how to provide real transparency and data regulation in DLT technology.


Topics Covered:

  • The HBAR Foundation’s search for projects to fund that enhance privacy usability
  • Exciting privacy use cases Jim has seen using Hedera's DLT, including those that enable high-value, privacy-preserving transactions
  • What went wrong with FTX and what we can learn from its collapse
  • How decentralized identity can enable the next iteration of web privacy
  • The tech behind MetaMask's Snap software that allows anyone to safely extend the capabilities of their wallet

Resources Mentioned:

Jim Nasr’s Info:

Follow the SPL Show:

The Shifting Privacy Left Podcast - S2E8: Leveraging Federated Learning for Input Privacy with Victor Platt

02/28/23 • 41 min

Victor Platt is a Senior AI Security and Privacy Strategist who previously served as Head of Security and Privacy for privacy tech company Integrate.ai. Victor was formerly a founding member of the Risk AI Team at Omnia AI, Deloitte's artificial intelligence practice in Canada. He joins today to discuss privacy enhancing technologies (PETs) that are shaping industries around the world, with a focus on federated learning.


Victor views PETs as functional requirements and says they shouldn't be buried in your design document as nonfunctional obligations. In his work, he has found key gaps where organizations were only doing “security for security’s sake.” Instead, he believes organizations should be thinking about privacy and security at the forefront. Not only that, we should all be getting excited about it, because we all have a stake in privacy.

With federated learning, you have the tools available to train ML models on large data sets with precision at scale, without risking user privacy. In this conversation, Victor demystifies what federated learning is, describes the two different types (at the edge and across data silos), and explains how it works and how it compares to traditional machine learning. We then dive deep into how an organization knows when to use federated learning, with specific advice for developers and data scientists as they implement it in their organizations.
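
For readers who want to see the moving parts, here is a minimal federated-averaging (FedAvg-style) round in Python with NumPy. It is a generic sketch under simplifying assumptions (a linear model, two toy clients), not Integrate.ai's implementation: the point is that raw data stays on each client and only model weights travel.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    # One client's contribution: a few gradient steps of linear
    # regression on its private data. (X, y) never leave the client.
    w = weights.copy()
    for _ in range(epochs):
        w -= lr * (2 * X.T @ (X @ w - y) / len(y))
    return w

def federated_average(client_weights, client_sizes):
    # Server step: average client models, weighted by dataset size.
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(2)]
global_w = np.zeros(3)

for _ in range(10):  # each round: broadcast, train locally, aggregate
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])
print(global_w)
```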

Topics Covered:

  • What 'federated learning' is and how it compares to traditional machine learning
  • When an organization should use vertical federated learning vs horizontal federated learning, or instead a hybrid version
  • A key challenge in 'transfer learning': knowing whether two data sets are related to each other, and techniques to overcome this, like 'private set intersection' (see the sketch after this list)
  • How the future of technology will be underpinned by a 'constellation of PETs'
  • The distinction between 'input privacy' vs. 'output privacy'
  • Different kinds of federated learning with use case examples
  • Where the responsibility for adding PETs lies within an organization
  • The key barriers to adopting federated learning and other PETs within different industries and use cases
  • How to move the needle on data privacy when it comes to legislation and regulation
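
For the 'private set intersection' idea flagged in the topics above, this simplified sketch shows two parties learning which identifiers they share without exchanging raw values. It is our illustration only: production PSI protocols rely on OPRFs or Diffie-Hellman-style blinding, and the shared-salt hashing used here is vulnerable to brute force on low-entropy identifiers.

```python
import hashlib

def blind(values, shared_salt: bytes):
    # Each party hashes its identifiers with an agreed salt, so only
    # digests (not raw values) are ever exchanged.
    return {hashlib.sha256(shared_salt + v.encode()).hexdigest(): v
            for v in values}

salt = b"agreed-out-of-band"
party_a = blind(["alice@x.com", "bob@y.com", "carol@z.com"], salt)
party_b = blind(["bob@y.com", "dave@w.com"], salt)

# Exchange hashed sets, intersect locally, and map back to own records.
overlap = party_a.keys() & party_b.keys()
print([party_a[h] for h in overlap])  # ['bob@y.com']
```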

Resources Mentioned:

Guest Info:

Follow the SPL Show:

The Shifting Privacy Left Podcast - S1E1: "Guardians of the Metaverse" with Kavya Pearlman (XRSI)

10/25/22 • 56 min

Welcome to the first episode of Shifting Privacy Left. To kick off the show, I’m joined by Kavya Pearlman, Exec Director of The eXtended Reality Safety Initiative (XRSI) to discuss current challenges associated with extended reality (XR), the XRSI Privacy & Safety Framework, and the importance of embedding privacy into today’s technology.

In our conversation, Kavya describes her vision for bridging the gap between government & technologists. While consulting for Facebook back in 2016, she witnessed first-hand the impacts on society when technology risks are ignored or misunderstood. As XR technology develops, there’s a dire need for human-centered safeguarding and designing for privacy & ethics.
We also discuss what it’s been like to create standards while the XR industry is still evolving, and why it’s crucial to influence standards at the foundational code-level. Kavya also shares her advice for builders of immersive products (developers, architects, designers, engineers, etc.) and what she urges regulators to consider when making laws for web3 tech.


Topics Covered:

  • The story behind XRSI, its mission & overview of key programs.
  • The differences between "XR" & the "metaverse."
  • XRSI's definitions for new subsets of "personal data" w/in immersive experiences: biometrically-inferred data & psychographically-inferred data.
  • Safety, privacy & ethical implications of XR data collection & use.
  • Kavya explains the importance of the human in the loop.

Check out XRSI:

Guest Info (Kavya Pearlman):


This week’s guests are Mathew Mytka and Alja Isakovič, Co-Founders of Tethix, a company that builds products that embed ethics into the fabric of your organization. We discuss Mat and Alja’s core mission to bring ethical tech to the world, and Tethix’s services that work with your Agile development processes. You’ll learn about Tethix’s solution to address 'The Intent to Action Gap,' and what Elemental Ethics can provide organizations beyond other ethics frameworks. We discuss ways to become a proactive Responsible Firekeeper, rather than remaining a reactive Firefighter, and how ETHOS, Tethix’s suite of apps, can help organizations embody and embed ethics into everyday practice.

Topics Covered:

  • What inspired Mat & Alja to co-found Tethix and the company's core mission
  • What the 'Intent to Action Gap' is and how Tethix addresses it
  • Overview of Tethix's Elemental Ethics framework, and how it empowers product development teams to close the 'Intent to Action Gap' and move orgs from a state of 'Agile Firefighting' to 'Responsible Firekeeping'
  • Why Agile is an insufficient process for embedding ethics into software and product development; and how you can turn to Elemental Ethics and Responsible Firekeeping to embed 'Ethics-by-Design' into your Agile workflows
  • The definition of 'Responsible Firekeeping' and its benefits; and how Responsible Firekeeping transitions Agile teams from a reactive posture to a proactive one
  • Why you should choose Elemental Ethics over conventional ethics frameworks
  • Tethix's suite of apps called ETHOS: The Ethical Tension and Health Operating System apps, which help teams embed ethics into their collaboration tech stack (e.g., JIRA, Slack, Figma, Zoom, etc.)
  • How you can become a Responsible Firekeeper
  • The level of effort required to implement Elemental Ethics & Responsible Firekeeping into Product Development based on org size and level of maturity
  • Alja's contribution to ResponsibleTech.Work, an open source Responsible Product Development Framework; the core elements of the Framework; and why we need it
  • Where to learn more about Responsible Firekeeping

Resources Mentioned:

Guest Info:


In this week's episode, I am joined by Heidi Saas, a privacy lawyer with a reputation for advocating for products and services built with privacy by design and against the abuse of personal data. In our conversation, she dives into recent FTC enforcement actions, analyzing five FTC actions and some enforcement sweeps by Colorado & Connecticut.
Heidi shares her insights on the effect of the FTC enforcement actions and what privacy engineers need to know, emphasizing the need for data management practices to be transparent, accountable, and based on affirmative consent. We cover the role of privacy engineers in ensuring compliance with data privacy laws; why 'browsing data' is 'sensitive data;' the challenges companies face regarding data deletion; and the need for clear consent mechanisms, especially with the collection and use of location data. We also discuss the need to audit the privacy posture of products and services - which includes a requirement to document who made certain decisions - and how to prioritize risk analysis to proactively address risks to privacy.
Topics Covered:

  • Heidi’s journey into privacy law and advocacy for privacy by design and default
  • How the FTC brings enforcement actions, the effect of their settlements, and why privacy engineers should pay closer attention
  • Case 1: FTC v. InMarket Media - Heidi explains the implication of the decision: where data that are linked to a mobile advertising identifier (MAID) or an individual's home are not considered de-identified
  • Case 2: FTC v. X-Mode Social / OutLogic - Heidi explains the implication of the decision, focused on: affirmative express consent for location data collection; definition of a 'data product assessment' and audit programs; and data retention & deletion requirements
  • Case 3: FTC v. Avast - Heidi explains the implication of the decision: 'browsing data' is considered 'sensitive data'
  • Case 4: The People (CA) v. DoorDash - Heidi explains the implications of the decision, based on CalOPPA: where companies that share personal data with one another as part of a 'marketing cooperative' are, in fact, selling data
  • Heidi discusses recent State Enforcement Sweeps for privacy, specifically in Colorado and Connecticut, and clarity around breach reporting timelines
  • The need to prioritize independent third-party audits for privacy
  • Case 5: FTC v. Kroger - Heidi explains why the FTC's blocking of Kroger's merger with Albertsons was based on both antitrust and privacy harms, given the sheer amount of personal data that they process
  • Tools and resources for keeping up with FTC cases and connecting with your privacy community

Guest Info:

The Shifting Privacy Left Podcast - S1E8: Leveraging Distributed Ledgers for Privacy Assurance with Jim Nasr

12/13/22 • 52 min

Today, I am joined by Jim Nasr, CEO of Acoer. I had the pleasure of collaborating with Jim on several projects during my 6-month stint as Privacy Strategist for Hedera. Jim joins me today to discuss the use of distributed ledger tech (DLT) to provide computational trust for real-time applications. Jim and I speak about the development of secure, privacy-preserving, and traceable technologies, which can gain adoption via open protocols and usable interfaces.


In part one of this two-episode conversation, Jim explains Acoer's approach to building DLT-enabled software and its initial application to healthcare and clinical trials. Jim shares his background and experience in tech, both academic and professional: his work as an entrepreneur in software development; his roles in large-scale tech companies and in government at the CDC; and how he enjoyed “getting his hands dirty” in public health to bring automated trust and accountability to the space. At Acoer, Jim continues his previous work building open technologies by leveraging DLT and building interfaces with usable privacy and security.

In this conversation, Jim also covers the security and privacy approaches that Acoer takes to ensure that its products work as advertised and that its clients' machinery is never compromised.


Topics Covered:

  • How Acoer designs and builds its tech as components to be absorbed & consumed by other machines
  • How using DLT reduces the need for intermediaries
  • Acoer's approach to building decentralized apps & why it chose to build on hashgraph tech instead of blockchain
  • Benefits gained from DLT's "data stamping" to computationally prove transactions & to assist during data leakages, compliance issues, or demonstrations of privacy assurance (a sketch of the idea follows this list)
  • How you can use NFTs to represent individuals' consents via RightsHash
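
To make the "data stamping" benefit above concrete, here is a generic hash-based sketch in Python. It is our own illustration, not Acoer's or Hedera's actual API: only the digest would be published to a ledger, so a record's integrity can later be proven without the ledger ever seeing the underlying data.

```python
import hashlib
import json
import time

def stamp(record: dict) -> dict:
    # Canonicalize the record and hash it; the digest (not the data)
    # is what gets written to the public ledger.
    digest = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return {"sha256": digest, "stamped_at": time.time()}

def verify(record: dict, public_stamp: dict) -> bool:
    # Anyone holding the record can re-hash it and compare against the
    # published digest to prove it hasn't been altered.
    recomputed = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return recomputed == public_stamp["sha256"]

trial_event = {"patient": "pseudonym-71", "event": "dose_administered"}
s = stamp(trial_event)           # publish s["sha256"] to the ledger
assert verify(trial_event, s)    # later: demonstrate integrity
```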

Resources Mentioned:

Jim Nasr's Info:


In this episode, I interview Mert Can Boyar, Director of Privacy Innovation Lab at Bilgi University and Founder of privacy tech company, Verilogy. Mert walks us through his creative approach to educating on core privacy engineering concepts, particularly through the lens of storytelling, visual art & music. He also shares his vision & mission behind his passion project, “The Hitchhiker’s Guide to Privacy Engineering."


Mert tells his "origin story" and dives into how he ended up in privacy and data protection. He highlights the thread of art & entrepreneurship throughout his career, which has taken him from musician to lawyer to start-up founder, and now educator.

Privacy Innovation Lab is a multi-stakeholder hub for privacy innovation. Mert highlights exciting projects that his students are working on, including an assessment tool to help practitioners build fair & lawful AI models and new tech in the self-sovereign identity (SSI) space.

While working at the lab, Mert came up with a “creative privacy" strategy, which he uses to inspire young minds about privacy engineering. In this episode, he takes us behind-the-scenes of his comic book project that’s meant to educate people who want to understand how modern software and data processing technologies function.


Topics Covered:

  • What initially sparked Mert’s interest in data and privacy protection
  • How Mert uses his multifaceted & creative skillsets to bridge knowledge gaps between privacy law & engineering
  • Verilogy’s open source database tool that automates and streamlines the work that Mert was doing as a privacy lawyer
  • Fascinating projects underway at Privacy Innovation Lab
  • What Mert hopes to achieve with The Hitchhiker’s Guide to Privacy Engineering

Resources Mentioned:

Guest Info:



FAQ

How many episodes does The Shifting Privacy Left Podcast have?

The Shifting Privacy Left Podcast currently has 63 episodes available.

What topics does The Shifting Privacy Left Podcast cover?

The podcast is about Security, Tech, Architecture, Entrepreneurship, Devops, Design, Podcasts, Technology, Business, Innovation, Privacy, Data Science, Ethics and Engineering.

What is the most popular episode on The Shifting Privacy Left Podcast?

The episode title 'S2E17 - Noise in the Machine: How to Assess, Design & Deploy 'Differential Privacy' with Damien Desfontaines (Tumult Labs)' is the most popular.

What is the average episode length on The Shifting Privacy Left Podcast?

The average episode length on The Shifting Privacy Left Podcast is 51 minutes.

How often are episodes of The Shifting Privacy Left Podcast released?

Episodes of The Shifting Privacy Left Podcast are typically released every 7 days.

When was the first episode of The Shifting Privacy Left Podcast?

The first episode of The Shifting Privacy Left Podcast was released on Oct 25, 2022.
