AI, Government, and the Future

Corner Alliance

Welcome to AI, Government, and the Future, a podcast by Corner Alliance. We explore the intersection of artificial intelligence, government, and the future. Join us as we dive into the latest AI advancements, government policies, and innovative strategies to shape the future of our society. Whether you're a policy maker, venture capitalist, academic, or industry leader, this podcast will provide valuable insights and thought-provoking discussions to help you navigate the evolving landscape of AI and its impact on government. Tune in to AI, Government, and the Future to stay ahead in this transformative era.
Top 10 AI, Government, and the Future Episodes

Goodpods has curated a list of the 10 best AI, Government, and the Future episodes, ranked by the number of listens and likes each episode has garnered from our listeners. If you are listening to AI, Government, and the Future for the first time, there's no better place to start than with one of these standout episodes. If you are a fan of the show, vote for your favorite AI, Government, and the Future episode by adding your comments to the episode page.

In this thought-provoking episode of AI, Government, and the Future, host Marc Leh engages in an insightful conversation with Dr. Eva-Marie Muller-Stuler, Partner at Ernst & Young and leader of the Data & AI practice for the Middle East and North Africa. Drawing from her extensive experience in AI governance and data science, Dr. Muller-Stuler explores the critical intersection of AI, ethics, and democracy. She emphasizes the importance of ethical considerations in AI development, stressing the need for transparency, fairness, and accountability in AI systems used by governments and businesses alike.
The discussion delves into the evolving landscape of AI adoption in government sectors, highlighting both the opportunities and challenges. Dr. Muller-Stuler shares valuable insights on how governments can leverage AI to enhance services while safeguarding against potential risks such as bias and misinformation. She offers practical advice for policymakers and government leaders on implementing responsible AI frameworks, underscoring the importance of continuous learning and adaptation in the face of rapidly evolving AI technologies.

Throughout the conversation, Dr. Muller-Stuler addresses the critical role of workforce development in preparing for an AI-driven future. She discusses the need for upskilling and reskilling initiatives, as well as the importance of fostering critical thinking skills in the age of AI. The episode provides a comprehensive look at the transformative potential of AI in governance, the ethical considerations that must guide its implementation, and the steps needed to ensure that AI enhances rather than undermines democratic processes.

If you enjoyed this episode, make sure to subscribe, rate, and review on Apple Podcasts, Spotify, and Google Podcasts; instructions on how to do this are here.
Connect with the host and guest here:

Tune in here:

Previous guests include: Jesse Anglen of Rapid Innovation, Maya Sherman of the Embassy of Israel in India, Alex Wirth of Quorum, Mfon Apkan of Methodist University, Dr. Eva-Marie Muller-Stuler of Ernst & Young, and Christophe Foulon
Check out our Top 3 episodes:

If you are interested in joining AI, Government, and the Future as a guest, please complete this form: fame.so/cai-guest
AI, Government, and the Future is handcrafted by our friends over at: fame.so
Patricia Cogswell brings her extensive experience in homeland and national security to discuss the critical role of AI in addressing mission-critical challenges. With over 28 years of experience spanning the White House, Department of Homeland Security, and Department of Justice, she offers unique insights into the practical implementation of AI solutions in security contexts.
Throughout the conversation, Patricia emphasizes the importance of starting with clear problem statements and use cases before implementing AI solutions. She shares compelling examples from her work in biometric systems and border security, highlighting how public-private partnerships have driven innovation while maintaining security standards. The discussion explores how agencies can balance rapid technological advancement with careful oversight and regulation.

Patricia also addresses crucial considerations around privacy, bias, and civil liberties in AI implementation. She advocates for a measured approach to AI adoption that combines innovation with robust governance frameworks, emphasizing the need for human oversight and clear communication with the public about AI capabilities and limitations.
If you enjoyed this episode, make sure to subscribe, rate, and review on Apple Podcasts, Spotify, and YouTube Podcasts; instructions on how to do this are here.
Episode Resources:

Mina Narayanan, a Research Analyst of AI Assessment at the Center for Security and Emerging Technology, joins this episode of AI, Government, and the Future to discuss the challenges of assessing AI systems and managing their risk. They explore the evolving landscape of AI assessment and the need for standards and testing to address bias and risks. Mina also touches on the role of industry, funding, and coordination between branches of government.
Mina specializes in developing procedural tools and evaluating methods for making AI systems safe and beneficial. She has contributed significantly to the creation of responsible AI frameworks, as well as to the setting of AI standards and testing procedures. Currently, Mina also serves as a Technology Specialist at Analytics for Advocacy and has prior experience as a Program Analyst at the U.S. Department of State.
If you enjoyed this episode, make sure to subscribe, rate, and review on Apple Podcasts, Spotify, and Google Podcasts; instructions on how to do this are here.
Tune in here:
Connect with the host and guests here:

Max Romanik and Marc Leh, Principal Consultants at Corner Alliance, join this episode of AI, Government, and the Future by Alan Pentz to explore the exciting role of AI in government. They discuss how AI is being integrated into various sectors, such as R&D and homeland security, to identify potential threats and improve efficiency, the government's role in supporting high-risk research and standardizing technology, the potential for government data sets in training AI models, and the potential impact of AI on the consulting industry.
Max brings a wealth of expertise as a versatile problem solver with a broad leadership background spanning technology innovation, cybersecurity, healthcare, legal frameworks, policy formation, crisis response, financial oversight, and entrepreneurial management. He is also the Founder of ShieldMyfiles, a pioneering cybersecurity startup focused on safeguarding files against unauthorized access, malicious threats, and espionage.
Marc offers a diverse and powerful skill set, including proficiency in Data Science, Adobe Illustrator, Digidesign Pro Tools, brand strategy, and graphic design. His journey with Corner Alliance began in 2013 as an Associate Consultant, progressing to his current role. Some of his previous experiences include the role of Consultant at Wilshire Consulting and Campus CEO at Meetor.
If you enjoyed this episode, make sure to subscribe, rate, and review on Apple Podcasts, Spotify, and Google Podcasts; instructions on how to do this are here.
Tune in here:
Connect with the host and guests here:

Willem Koenders brings his extensive experience in data strategy and governance to explore the evolving landscape of AI implementation across organizations. Drawing from his background at major financial institutions and his current role at ZS, Willem discusses how companies can effectively bridge the gap between data management and AI innovation.
Throughout the conversation, Willem emphasizes the importance of foundational data capabilities in enabling successful AI initiatives. He highlights how organizations often spend 60-70% of their time wrangling data, and proposes practical solutions for streamlining these processes while maintaining robust governance frameworks.

The discussion also delves into international perspectives on data privacy and governance, examining how cultural differences impact AI implementation across regions. Willem shares valuable insights on building sustainable governance models that can adapt to rapidly evolving AI technologies while maintaining essential controls and oversight.

If you enjoyed this episode, make sure to subscribe, rate, and review on Apple Podcasts, Spotify, and YouTube Podcasts; instructions on how to do this are here.
Connect with the host and guest here:

David Danks, Professor of Data Science and Philosophy at the University of California, San Diego, joins this episode of AI, Government, and the Future by Alan Pentz to delve into the intricacies of AI regulation. From the challenges faced by the federal government to the potential impact on innovation, they explore the need for smarter governance and a nuanced approach to balancing risks and benefits. David also shares his perspective on AI ethics, ethical interoperability, and the shortage of AI talent.
David also serves as an affiliate faculty member in UCSD's Department of Computer Science & Engineering and is the owner of Danks Consulting. His significant contributions to the field have earned him seats on several prestigious advisory boards, including the National AI Advisory Committee (NAIAC), the Special Competitive Studies Project (SCSP), and the Partnership to Advance Responsible Technology (PART), among others.
His awards include the James S. McDonnell Foundation Scholar Award (2008) and the esteemed Andrew Carnegie Fellowship (2017). Beyond his outstanding academic achievements, Professor Danks is an accomplished author. His notable works include "Unifying the Mind: Cognitive Representations as Graphical Models" and "Building Theories: Heuristics and Hypotheses in Sciences" (featured in Studies in Applied Philosophy, Epistemology, and Rational Ethics, Vol. 41). Furthermore, he has written a thought-provoking thesis titled "Finding Trust and Understanding in Autonomous Technologies."
If you enjoyed this episode, make sure to subscribe, rate, and review on Apple Podcasts, Spotify, and Google Podcasts; instructions on how to do this are here.
Tune in here:
Connect with the hosts and guests here:

Alex Fink, founder of OtherWeb and co-founder of Swarmer, joins Max Romanik to explore the rapidly evolving landscape of AI-generated content and its implications for society, government, and regulation. Drawing from his experiences in computer vision, news curation, and military technology, Alex offers a unique perspective on the challenges and opportunities presented by AI.

The conversation covers a wide range of topics, including:
  • The current state of AI-generated content and its differences from human-created material
  • Challenges in detecting and moderating AI-generated misinformation
  • The potential impact of AI on journalism and the need for new business models
  • Innovative ideas for using AI in democracy, including personalized AI avatars for representation
  • The intersection of AI and copyright law, and potential need for new intellectual property frameworks
  • Balancing innovation with responsible AI deployment and civil liberties protection
  • The role of government in shaping AI development through incentives rather than heavy-handed regulation
Alex emphasizes the importance of treating AI as a tool and maintaining human responsibility in its deployment. He advocates for a nuanced approach to AI governance that focuses on creating the right incentives for beneficial development while avoiding overly restrictive regulations that could stifle innovation.

Throughout the discussion, Alex provides thought-provoking insights on how AI might reshape our information ecosystem, democratic processes, and regulatory frameworks. He concludes with a call for individuals to be mindful of the information they consume in an AI-driven world, likening it to the importance of healthy eating habits.

If you enjoyed this episode, make sure to subscribe, rate, and review on Apple Podcasts, Spotify, and Google Podcasts; instructions on how to do this are here.
In this thought-provoking episode, Jonathan Gillham, the visionary entrepreneur behind Originality.ai, a cutting-edge platform that helps content creators and publishers verify the authenticity of their work, shares his insights on the importance of identifying AI-generated content. As advanced language models reshape the content landscape, Gillham discusses the societal implications, particularly in mitigating misinformation during election years.

Gillham delves into the role of AI detection in academia, where ensuring the originality of student assignments is crucial for maintaining educational integrity. He also explores the publishing industry's concerns about AI-generated content and the potential impact on search engine rankings.

Drawing from his experience, Gillham offers strategies for businesses and government agencies to future-proof their content creation processes against AI-related penalties or ranking issues. He emphasizes the need for unique information gain and novel insights to stand out in a world where AI can generate derivative content at scale.

Additionally, Gillham addresses regulatory considerations, suggesting a focus on areas where societal harm can be significant, such as AI-generated images, videos, and audio. He also discusses the potential for regulations around AI-generated reviews and user-generated content platforms.

Throughout the conversation, Gillham provides practical advice for regulators and policymakers navigating the challenges posed by AI, emphasizing the importance of hands-on experience with these technologies to make informed decisions.

If you enjoyed this episode, make sure to subscribe, rate, and review on Apple Podcasts, Spotify, and Google Podcasts; instructions on how to do this are here.

Tune in here:

Connect with the host and guests here:
In this episode of AI, Government, and the Future, host Marc Leh engages in an insightful conversation with Eric Wengrowski, co-founder and CEO of Steg AI, a digital watermarking startup using deep learning to authenticate digital media. Eric shares his journey from studying electrical and computer engineering to becoming passionate about computer vision and AI, which led to the founding of Steg AI.

The discussion delves into the pressing need for digital asset authentication in an age where deepfakes and AI-generated content are becoming increasingly sophisticated. Eric explains Steg AI's unique approach to forensic watermarking, which uses imperceptible changes to digital assets to provide robust, tamper-evident content credentials. This technology has wide-ranging applications, from protecting creative works to ensuring the authenticity of government communications.

Eric highlights the potential of their technology in combating misinformation, securing election integrity, and enhancing trust in government-citizen communication. He also discusses Steg AI's involvement in developing standards for AI-generated content authentication and their collaboration with entities like the Singaporean IMDA and the U.S. National Security Council.

The conversation touches on the balance between innovation and responsible AI deployment, the importance of data quality in AI training, and the role of AI in preventing information leaks. Eric offers insights into upcoming Steg AI products, including DeepFake Shield, designed to protect digital assets against unauthorized AI-driven manipulations.

Throughout the episode, Eric emphasizes the need for a holistic approach to digital security, combining AI-powered tools with traditional cybersecurity practices. He concludes by inviting camera manufacturers and software developers to partner with Steg AI to integrate watermarked content credentials at the point of capture, potentially revolutionizing digital asset authentication.

Eric Wengrowski holds a Ph.D. from Rutgers University, where he focused on computer vision and photographic steganography.

If you enjoyed this episode, make sure to subscribe, rate, and review on Apple Podcasts, Spotify, and Google Podcasts; instructions on how to do this are here.

Tune in here:

Connect with the host and guests here:
AI, Government, and the Future - The Future of AI in Maritime Warfare with Zac Staples of Fathom5

02/20/25 • 48 min

In this compelling conversation, Zac Staples shares his unique perspective on the intersection of artificial intelligence and maritime defense, shaped by his extensive naval career and current role leading Fathom5. He discusses the critical need to enhance existing military systems through AI integration, rather than completely replacing them.
The discussion delves into the practical challenges of implementing AI in defense systems, including the importance of developing a tactical Platform as a Service (PaaS) and the need for robust testing frameworks. Zac emphasizes the significance of data engineering and the value of focusing on operator acuity enhancement before tackling more complex AI applications in combat systems.

Throughout the episode, Zac articulates a balanced approach to AI adoption in defense, highlighting both the opportunities for enhanced capabilities and the importance of careful, methodical implementation. He shares insights on the "hedge strategy" approach to military technology adoption and the critical role of industrial optimization in maintaining strategic advantage.

If you enjoyed this episode, make sure to subscribe, rate, and review it on Apple Podcasts, Spotify, and YouTube Podcasts; instructions on how to do this are here.

Episode Highlights:
  • [08:49] - Digital Modernization as Strategic Advantage in Defense: The U.S. cannot win a numbers game in ship production against China, but can gain advantage through superior digital capabilities. Staples explains that the focus should be on making existing systems exponentially more capable through AI and digital modernization, rather than trying to match manufacturing output. This approach allows for creating a "digital warfighting ecosystem" that could overwhelm traditional forces through superior coordination and capability. For defense strategists and policymakers, this means prioritizing investments in digital transformation of existing platforms over building more traditional assets. The strategy requires rethinking how we measure military capability, moving from counting platforms to assessing networked effectiveness.
  • [20:33] - Leveraging DOD's Testing Framework for AI Safety: Staples advocates using the Department of Defense's established Operational Test & Evaluation (OT&E) frameworks as a model for ensuring AI safety and effectiveness. The military's rigorous testing protocols, developed for complex weapons systems, provide a ready-made framework for evaluating AI systems. This approach offers a structured way to assess AI safety and performance before deployment, something the commercial sector lacks. For policymakers and technology leaders, this means adapting existing military testing frameworks rather than creating entirely new oversight mechanisms. The framework can help bridge the gap between innovation and safety in AI development.
  • [31:43] - "Safe Sandbox" Approach to AI Development: Rather than immediately deploying AI in critical combat situations, Staples recommends starting with lower-risk applications like predictive maintenance. By focusing first on scenarios where AI failures result in mechanical rather than human casualties, organizations can develop expertise and testing protocols safely. This approach allows for learning and refinement of AI systems while minimizing potential harm. For defense technologists and policymakers, this provides a clear pathway for responsible AI integration that builds public trust. The strategy enables rapid development while maintaining appropriate safety margins.
  • [45:04] - Tactical Platform as a Service (PaaS) Priority: Staples identifies the critical need for developing a tactical PaaS capability that enables AI deployment at the tactical edge. This infrastructure would allow for continuous AI learning and updating on forward-deployed platforms without requiring constant connection to cloud services. For defense technology leaders and policymakers, this represents a crucial investment priority that could determine future military effectiveness. The development of this capability requires rethinking how we deploy and update AI systems in contested environments, making it a key focus area for defense modernization efforts.
Episode Resources:

Tune in here: