Pause and Pondr
WHY DOES AI BIAS MATTER?
Machine-learning tools are being adopted across industries and sectors, but they can be built on insufficient data and carry biases that reinforce inequalities.
DID YOU KNOW?
One in five human resource professionals surveyed is concerned that AI in HR could perpetuate or even increase biases in hiring and talent development.
Source: 2018 IBM and UNLEASH HR professional survey
WHAT YOU’LL LEARN
Identify some of the ethical dilemmas of artificial intelligence (AI) and automated algorithms
Appraise the use of automated algorithms in hiring employees
What is algorithmic bias?
If human decision-making is prone to errors and personal bias, then using automated algorithms can remove human subjectivity and lead to fairer outcomes — at least, that’s the idea. But in recent years, some artificial intelligence systems have been shown to reproduce and even exacerbate social biases, a phenomenon called algorithmic bias.
One area of concern is hiring technology. In 2018, Amazon scrapped its artificial intelligence recruiting tool after it was discovered that the tool favored male candidates over female candidates. And in 2021, recruiting technology company HireVue, whose clients include Hilton and Unilever, announced that it would stop using its facial analysis AI, which examined candidates’ facial expressions during interviews to help determine employability, after mounting criticism that the system could arbitrarily discriminate against non-native English speakers, people with disabilities, or simply nervous job applicants.
How is machine-learning bias affecting other sectors?
Computer science researcher Joy Buolamwini and a team at the MIT Media Lab found in a 2018 study that facial recognition technology misidentified darker-skinned females at error rates of up to 34.7%, compared with a maximum of 0.8% for lighter-skinned males, raising concerns over its use by law enforcement and other sectors. A separate 2019 study of 189 algorithms by the National Institute of Standards and Technology, a federal agency, found that the technology is consistently less accurate at identifying women of color.
In healthcare, algorithmic bias can have serious consequences for people of color. In 2019, researchers discovered that an algorithm used by many U.S. health providers privileged white patients over sicker Black patients because it used past health-care costs as a proxy for medical need; the researchers worked with the company to correct the issue. Algorithmic bias can also appear in skin-cancer detection tools when the underlying image data comes mostly from lighter-skinned patients.
What are some ways to tackle algorithmic bias?
One of the most commonly cited causes of algorithmic bias is a lack of inclusive data for training machine-learning models, which teach themselves by finding patterns in large datasets and reproducing the outcomes those patterns represent. But oftentimes, minorities are underrepresented in that historical data.
So when developers are creating products and tools with machine-learning capabilities, they sometimes simply don’t have access to comprehensive data, according to Sharona Hoffman, a professor of law and bioethics at Case Western Reserve University. There are also the vital steps of testing and validating the model, which can be overlooked. “Anyone using AI should look to make sure that it’s actually promoting people’s welfare rather than the opposite, that there aren’t problems of bias and fairness,” says Hoffman.
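To make that concrete, here is a minimal Python sketch using entirely synthetic, hypothetical data (the groups, features, and numbers are illustrative, not drawn from any real hiring system). A model trained mostly on one group’s examples learns that group’s pattern, and only evaluating each group separately — the kind of testing and validation Hoffman describes — reveals the gap:

```python
# Minimal sketch: underrepresentation in training data can skew error rates
# by group, and a per-group ("disaggregated") evaluation catches it.
# All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic applicants: two features, with a label rule that differs
    slightly between groups via `shift`."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

# Group A dominates the training data; group B is underrepresented
# and follows a somewhat different pattern.
Xa, ya = make_group(2000, shift=0.2)
Xb, yb = make_group(100, shift=-0.8)

model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Disaggregated evaluation: score each group separately, not just overall.
Xa_test, ya_test = make_group(1000, shift=0.2)
Xb_test, yb_test = make_group(1000, shift=-0.8)
print("accuracy, group A:", model.score(Xa_test, ya_test))
print("accuracy, group B:", model.score(Xb_test, yb_test))
# The model has mostly learned group A's pattern, so group B scores worse.
```

A single overall accuracy number would hide this disparity, because the well-represented group dominates the average — which is why experts urge validating models on each affected group.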
Improving data quality is only part of the solution. Experts say that developers also need to examine the lack of representation among those who build algorithms. Manish Raghavan, a doctoral candidate in Cornell University’s department of computer science who studies machine-learning bias, suggests that “certain groups of people who have more experience being subject to those biases will be better placed to recognize them before they actually manifest in the real world.”
Finally, there’s the issue of accountability around how algorithms are designed and deployed. In 2019, members of the U.S. Congress introduced the Algorithmic Accountability Act, which would empower the Federal Trade Commission to require that companies continually assess their technology for fairness, bias, and privacy issues. The bill did not pass, but it garnered broad support and could be reintroduced in the near future. For now, the FTC can enforce algorithmic fairness under general consumer protection laws.
“The concrete things that I think are going to be useful in the near term are primarily about legislation,” says Raghavan. “How sensitive companies are going to be to these definitions of bias depends heavily on whether they are legally incentivized to care about it.”
Pondr This
What was your most recent interaction with AI (e.g. automated customer service, targeted advertising, facial recognition)?
Have you ever been screened for a job by AI? If not, how would you feel about being screened?
What would it mean for AI to be fair and unbiased?
FOR LEADERS
Has the issue of algorithmic bias crossed your mind?
What are the pros and cons of depending on AI for decision-making?
Who should be responsible for identifying bias in AI systems within an organization? How do you think bias can be mitigated?
Explore The Stories
The role of chief diversity officers: A toolkit
Algorithmic bias sometimes built into hiring decisions
-
Jackie Noack is a freelance video and audio producer based in Boston. She was an associate TV and podcast producer at Christopher Kimball’s Milk Street, and served as a field producer and translator for the HBO documentary, “Clínica de Migrantes”. She double majored in international film and French at Tufts University. She is Peruvian-American and is natively fluent in Spanish.
AI ethics is a human values issue, not a tech problem
-
Krysta Rayford is an audio producer and voice actor based in Minneapolis, MN. A graduate of the University of Wisconsin-Madison, Krysta also performs as K.Raydio, a musician whose work has been featured internationally on VH1, BBC Radio and Okayplayer. Krysta was a two-time featured performer at Soundset Music Festival in 2014 and 2019, one of the largest hip-hop festivals in the United States. In 2020, Krysta was named faculty at MacPhail Center for Music in the new Electronic Music and Recording Arts (EMRA) department.
In addition to her work as a musician, Krysta is a Digital Media Producer and Program Host for iPondr. Her background in audio production is rooted in her work as Voiceover Talent in Audio Description narration. Her credits include: Empire (FOX), A Black Lady Sketch Show (HBO), Independent Lens (PBS), Barbie’s Dreamhouse Adventures (Netflix) and more.
Topic in Review
We examined the use of AI technologies in hiring decisions, the criminal justice system, and healthcare — and possible solutions for advancing equity and justice in the design, governance and use of technology.
Continue Your Journey
The Netflix documentary “Coded Bias” centers around MIT researcher Joy Buolamwini’s discovery that facial recognition technology does not see dark-skinned faces accurately, exploring how the use of such software for surveillance can violate civil rights. It’s been called the “most important film about AI you can watch today.”
One of the other subjects in the film is Meredith Broussard, journalist, software developer and author of “Artificial Unintelligence: How Computers Misunderstand the World,” who has said that algorithmic bias is “the civil rights issue of our time.” She notes that one of the issues lies in “technochauvinism,” or the belief that technology is always the best solution in the name of progress.
As Buolamwini’s work has pointed out, it matters when algorithms are trained on data that is skewed. As she recently told NPR, “I like to say the past dwells within our algorithms. You don't have to have a sexist hiring manager in front of you. Now you have a black box that's serving as the gatekeeper. But what it's learning are the patterns of what success has looked like in the past. So if we're defining success by how it's looked like in the past … this is where we run into problems.”
Buolamwini also founded a nonprofit, the Algorithmic Justice League, which hosts educational workshops, offers system audits for companies, and provides resources for the movement toward equitable and accountable AI.