Hey readers,
From recruiting algorithms that favor men’s résumés over women’s to mortgage algorithms that discriminate against Latinx and Black borrowers, artificial intelligence systems can reflect human biases and lead to unfair outcomes.
Lots of big players in the tech space will tell you they’re aware of the problems with AI and that they care about making AI that’s fair and unbiased. Google, Microsoft, the Department of Defense, and many others have released value statements signaling their commitment to ethical AI.
In 2019, Google also tried to create an ethics review board. It lasted all of one week, crumbling in part due to controversy surrounding certain board members. But even if every member had been unimpeachable, the board would almost certainly have failed: it was set up to meet only four times a year and didn’t have the power to nix Google projects it deemed irresponsible.
Nowadays, critics use the term “ethics washing” to describe companies’ attempts to appear as though they care about making their AI-powered products ethical without actually doing the real work needed to achieve fair, unbiased outcomes.
And here’s the thing: Achieving fairness and freedom from bias in AI is really, really hard. That’s because of a daunting reality that AI companies and their critics tend to elide: Everybody wants AI to be “ethical.” But there is no consensus on what that actually means.
This is why, according to a new report by philosophers at Northeastern University, before a company can claim to be prioritizing fairness, it first has to decide which type of fairness it cares most about. In other words, step one is to specify the “content” of fairness. Step two is to figure out how to operationalize that value in concrete, measurable ways.
“We’re currently in a crisis period where we lack the ethical capacity to solve this problem,” John Basl, one of the report authors, told me.
What do we even mean by unbiased AI?
Let’s dive a little deeper into the problem of AI bias and why it’s so hard to pin down.
One form of AI bias that’s rightly gotten a lot of attention is the kind that shows up in facial recognition systems. These systems are pretty good at identifying white people but notoriously bad at recognizing Black people. That can lead to very offensive consequences — like when Google’s image-recognition system labeled Black people as “gorillas” in 2015.
So some critics have argued for the need to debias facial recognition systems. But it’s not so simple. Given that this tech is used in police surveillance, which disproportionately targets people of color, maybe we don’t exactly want it to get great at identifying Black people.
As Zoé Samudzi wrote in the Daily Beast, “In a country where crime prevention already associates blackness with inherent criminality, it is not social progress to make Black people equally visible to software that will inevitably be further weaponized against us.”
In other words, ensuring that an AI system works just as well on everyone does not mean it works just as well for everyone. So we need to differentiate between technical debiasing and debiasing that reduces disparate harm in the real world — and acknowledge that if the latter is what we actually care about, we should maybe just not use facial recognition technology, at least not for police surveillance.
The multiple meanings of “fairness” in AI
Fairness is commonly touted as a critical value in AI ethics conversations. But there’s no one thing that constitutes fairness. In fact, fairness can mean a few different things — and those things are sometimes in tension with each other.
Let’s say your job is to give out loans. You use an algorithm to help you figure out whom you should loan money to, based on a prediction about how likely they are to repay. It uses their FICO score to make the prediction. Most people with a FICO score above 600 get a loan.
One conception of fairness, procedural fairness, says that an algorithm is fair if the procedure it uses to make decisions is fair: it judges everyone based on the same relevant facts (like their payment history), and given the same facts, everyone gets the same treatment (regardless of individual traits like race). By that measure, your algorithm is doing fine.
But let’s say members of one racial group are statistically way more likely to have a FICO score above 600 and members of another are way less likely — a disparity that can have its roots in societal and policy inequities.
Another conception of fairness, distributive fairness, says that an algorithm is fair if it leads to fair outcomes. By this measure, your algorithm is failing, because it has a disparate impact on one racial group.
Now, you can address this by giving different groups differential treatment. For one group, you make the FICO score cutoff 600, while for another, it’s 500. You adjust your process to preserve distributive fairness, but at the cost of procedural fairness.
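To make that tension concrete, here’s a minimal Python sketch of the loan example. The applicant data, group labels, and exact cutoffs are all hypothetical, invented purely for illustration; the point is that each notion of fairness is satisfied exactly where the other fails.

```python
# A minimal sketch of the loan example above. The applicant data, group
# labels, and cutoffs are all hypothetical, invented for illustration.

applicants = [
    # (group, FICO score)
    ("A", 640), ("A", 610), ("A", 590), ("A", 700),
    ("B", 560), ("B", 530), ("B", 610), ("B", 480),
]

def procedural_decision(group, score):
    # Procedural fairness: one rule for everyone, regardless of group.
    return score > 600

def distributive_decision(group, score):
    # Distributive fairness: group-specific cutoffs chosen so approval
    # rates come out equal. The procedure now differs by group.
    cutoff = {"A": 600, "B": 500}[group]
    return score > cutoff

def approval_rates(decide):
    # Fraction of each group approved under a given decision rule.
    rates = {}
    for group in ("A", "B"):
        scores = [s for g, s in applicants if g == group]
        rates[group] = sum(decide(group, s) for s in scores) / len(scores)
    return rates

print(approval_rates(procedural_decision))    # {'A': 0.75, 'B': 0.25}
print(approval_rates(distributive_decision))  # {'A': 0.75, 'B': 0.75}
```

On this toy data, you can’t have it both ways: equalizing outcomes requires unequal procedures, and a single uniform procedure produces unequal outcomes.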
How to operationalize a value like fairness
After you decide on the content of a value — say, you choose distributive fairness — you have to figure out how to operationalize it in concrete, measurable ways.
In the case of the loan example, action items might include encouraging applications from diverse communities, auditing to see what percentage of applications from different groups are getting approved, offering explanations when applicants are denied loans, and tracking what percentage of applicants who reapply get approved.
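Of those action items, the auditing step is the easiest to sketch in code. Here’s a hypothetical example of the kind of check an auditor might run: compare approval rates across groups and flag large gaps. The four-fifths threshold below is a common heuristic borrowed from US employment-discrimination guidance, used here only as an illustration, not something the report prescribes.

```python
# A hypothetical disparate-impact audit: compare each group's loan
# approval rate against the highest group's rate and flag any group
# falling below a chosen ratio of it. The 0.8 ("four-fifths") threshold
# is an illustrative heuristic, not a lending rule.

def audit_approval_rates(decisions, threshold=0.8):
    """decisions: a list of (group, was_approved) pairs."""
    totals, approvals = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)

    rates = {g: approvals[g] / totals[g] for g in totals}
    best = max(rates.values())
    flagged = {g: r for g, r in rates.items() if r < threshold * best}
    return rates, flagged

rates, flagged = audit_approval_rates([
    ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False),
])
print(rates)    # roughly {'A': 0.67, 'B': 0.33}
print(flagged)  # group B is flagged: its rate is below 80% of A's
```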
Right now, there’s a policy void: no regulation specifies how companies should handle all these complexities, and the space is something of a Wild West.
“It doesn’t look like we’re going to get the regulatory requirements anytime soon,” Basl, the report’s co-author, told me. “So we really do have to fight this battle on multiple fronts.”
—Sigal Samuel, @SigalSamuel
The worst horrors of factory farming could soon be phased out in Europe
Europe is on track to ban cages for farm animals as soon as 2027. The US could take much longer. Read more.

How 4 companies control the beef industry
Corporate consolidation is making it impossible for cattle ranchers to stay afloat. Read more.