
Artificial Intelligence and Bias

05/06/21 • 55 min

FedSoc Forums
It is hard to find a discussion of artificial intelligence (AI) these days that does not include concerns about AI systems' potential bias against racial minorities and other identity groups. Facial recognition, lending, and bail determinations are just a few of the domains in which this issue arises. Laws are being proposed and even enacted to address these concerns. But is this problem properly understood? If it's real, do we need new laws beyond the anti-discrimination laws that already govern human decision makers, hiring exams, and the like?
Unlike some humans, AI models have no malevolent biases or intention to discriminate. Are they superior to human decision-making in that sense? Nonetheless, it is well established that AI systems can have a disparate impact on various identity groups. Because AI learns by detecting correlations and other patterns in a real-world dataset, are disparate impacts inevitable, short of requiring AI systems to produce proportionate results? Would prohibiting certain kinds of correlations degrade the accuracy of AI models? For example, in a bail determination system, would an AI model that learns that men are more likely to be repeat offenders produce less accurate results if it were prohibited from taking gender into account?
Featuring:
-- Stewart Baker, Partner, Steptoe & Johnson LLP
-- Nicholas Weaver, Researcher, International Computer Science Institute and Lecturer, UC Berkeley
-- Moderator: Curt Levey, President, Committee for Justice
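The accuracy question the description raises can be made concrete with a toy sketch. Everything below is invented for illustration: the dataset, the counts, and the majority-vote "model" are hypothetical and bear no relation to any real bail system. The sketch simply shows how, when an attribute is correlated with the outcome, a simple predictor that is barred from conditioning on it can fit the same data less accurately.

```python
from collections import Counter

# Hypothetical rows: (gender, prior_offense, reoffended). All counts are
# invented so that gender carries signal beyond prior offenses alone.
rows = (
    [("M", 1, 1)] * 3 + [("M", 1, 0)] * 1 +
    [("F", 1, 1)] * 1 + [("F", 1, 0)] * 3 +
    [("M", 0, 1)] * 1 + [("M", 0, 0)] * 3 +
    [("F", 0, 0)] * 4
)

def accuracy(rows, feature_indices):
    """Majority-vote predictor conditioned on the chosen features,
    scored on the same data (training accuracy, for illustration only)."""
    groups = {}
    for row in rows:
        key = tuple(row[i] for i in feature_indices)
        groups.setdefault(key, Counter())[row[-1]] += 1
    correct = 0
    for row in rows:
        counts = groups[tuple(row[i] for i in feature_indices)]
        # Predict the majority label; ties default to 0 (no reoffense).
        pred = 1 if counts[1] > counts[0] else 0
        correct += (pred == row[-1])
    return correct / len(rows)

with_gender = accuracy(rows, (0, 1))   # conditions on gender and priors
without_gender = accuracy(rows, (1,))  # conditions on priors only
print(with_gender, without_gender)     # 0.8125 vs 0.6875 on this toy data
```

On this contrived data the restricted predictor is less accurate, which illustrates the trade-off the panel debates; whether the same holds for a real system, and whether the accuracy loss is acceptable, is precisely the policy question at issue.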

Previous Episode


Courthouse Steps Oral Argument Teleforum: Terry v. United States

Thirteen years ago, Tarahrick Terry was charged with possession with intent to distribute 3.9 grams of cocaine base, otherwise known as crack cocaine. He pled guilty and was sentenced under 21 U.S.C. § 841(b)(1)(C), which set a range of 0 to 30 years. Terry received a sixteen-year term of imprisonment followed by six months of supervised release.
Congress passed comprehensive criminal justice reform twice in the years that followed: the Fair Sentencing Act (2010) and the First Step Act (2018), which modified the application of the Fair Sentencing Act. Terry appealed his sentence, arguing his offense was a “covered offense” under Section 404 of the First Step Act. The district court denied relief, and the Eleventh Circuit affirmed.
On May 4, 2021, the Supreme Court will hear oral argument taking up the questions whether Terry’s offense was a “covered offense” under Section 404 of the First Step Act and whether he is entitled to relief.
Featuring:
Vikrant Reddy, Senior Research Fellow, Charles Koch Institute
Teleforum calls are open to all dues-paying members of the Federalist Society. To become a member, sign up on our website. As a member, you should receive email announcements of upcoming Teleforum calls, which contain the conference call phone number. If you are not receiving those email announcements, please contact us at 202-822-8138.

Next Episode


Litigation Update: Johnson & Johnson v. Ingham

Johnson & Johnson v. Ingham is a pending petition before the U.S. Supreme Court. It involves several important legal issues, specifically: (1) whether a court must assess if consolidating multiple plaintiffs for a single trial violates Due Process, or whether it can presume that jury instructions always cure both jury confusion and prejudice to the defendant; (2) whether a punitive-damages award violates Due Process when it far exceeds a substantial compensatory-damages award, and whether the ratio of punitive to compensatory damages for jointly and severally liable defendants is calculated by assuming that each defendant will pay the entire compensatory award; and (3) whether the “arise out of or relate to” requirement for specific personal jurisdiction can be met by merely showing a “link” in the chain of causation, as the Court of Appeals of Missouri held, or whether a heightened showing of relatedness is required, as Ford Motor Company argued in Ford Motor Co. v. Montana Eighth Judicial District Court.
Attorney John Reeves, who filed an amicus brief for petitioners on behalf of the Missouri Organization of Defense Lawyers, will discuss the case and its implications.
Featuring:
-- John Reeves, Founder and Member, Reeves Law LLC
