
Why We Need to Care About AI Sensitivity

Technology reflects, not erases, our biases.


It’s a truth that always bears repeating: computers only know what we program them to know. This basic fact has fueled the false idea that the value-neutral language of code removes the all-too-human quality of prejudice from the equation. After all, we think of humans as fallible thinkers, prone to unfair judgments and biased decision-making. Machines should, in theory, be better — shouldn’t they?

We’re accustomed to hearing the (misconceived) story by now: brilliant software is revolutionizing our world, making it a more equitable and colorful place. Yet the widely held belief that AI decisions are less susceptible to prejudice is demonstrably false. While it’s true that a program can assess raw data without making the snap judgments that humans do, that data often encodes the biased decisions of the past and present, made in a real world where discrimination is a constant reality for marginalized people. In an industry heavily invested in its image as a world-changer, it has become sadly apparent that many of our “revolutionary” tech tools sustain discrimination rather than eliminate it.

At their essence, algorithms are sets of procedural instructions. Making sense of unfathomably large data sets is time-consuming at best for a human, so we leave the heavy lifting to these programs. Most of the innovative software services we use today rely on AI — from Netflix suggestions to the autocorrect in our smartphones. Such automated problem-solving is inherent to today’s technology.

It’s important, then, to take a closer look at just how these miracles happen. Even the most rudimentary software applications depend on many lines of code (often numbering well into the millions), and it’s within the algorithms buried in those indecipherable-to-most lines of code that the world of artificial intelligence and machine learning takes form.

A legacy of inequality

There are too many examples of “intelligent” software failing to give equal credence to users from marginalized groups. A facial recognition algorithm that’s mostly tested on white faces will falter at recognizing Black ones. A smart assistant trained to recognize human speech fails to attune properly to women’s voices. These shouldn’t be read as mere quirks to be corrected with software patches, but as a call for the tech industry to take a good, long look at its practices.
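The skewed-training-data failure mode behind these examples can be sketched in a few lines of Python. Everything here is invented for illustration: the one-dimensional “feature,” the group values, and the tolerance threshold are assumptions, not a real recognition system.

```python
# Minimal sketch: a toy "recognizer" trained mostly on one group.
# All feature values and thresholds below are illustrative assumptions.

def centroid(values):
    """Average of the training samples: the model's one 'typical' example."""
    return sum(values) / len(values)

# Synthetic 1-D feature for two demographic groups in the training set.
group_a_train = [0.1, 0.2, 0.15, 0.12, 0.18, 0.11, 0.19, 0.14, 0.16]  # 9 samples
group_b_train = [0.8]                                                  # 1 sample

# The model learns a single centroid from ALL the (imbalanced) training data.
model_centroid = centroid(group_a_train + group_b_train)

def recognizes(feature, tolerance=0.2):
    """Accept an input if it is close enough to the learned centroid."""
    return abs(feature - model_centroid) <= tolerance

# At deployment, typical members of both groups present themselves:
a_ok = recognizes(0.15)  # typical group A member
b_ok = recognizes(0.8)   # typical group B member
print(a_ok, b_ok)        # prints: True False
```

No line of this code mentions either group by name, yet the underrepresented group is rejected every time: the bias lives entirely in the composition of the training data, which is the article’s point.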

The ramifications are much graver than misidentification in consumer tech. Prison sentencing, home loan allocation, auto insurance rates and more have historically shown discriminatory patterns, and many such decisions are now being handed over to AI. As algorithms take on a greater role in these processes, both governmental and private, it becomes apparent that the prejudices of yesterday and today show no signs of slowing after the high-tech handover. With the tech firms that built these systems showing little interest in correcting the injustice, it’s left to other powerful players to right the wrongs created by algorithmic bias.

Change from above

As with other hot-button issues in tech responsibility, our neighbors across the Atlantic are taking a promising lead in this regard. Governmental leaders in France and the UK have been particularly outspoken about the need to ensure more equitable practices in the technology that’s steering our lives. Such signals from the top make for a hopeful future, but much work remains to be done.

Here in the US, my hometown of New York City has initiated a full audit of the use of algorithms in city business, the first such project in the nation. Established in May 2018, the Automated Decision Systems Task Force will present its findings in a full report at the end of this year. Made up of scientists, professors, advocates and other experts, the group may well put a halt to many of the algorithmic processes that have been shown to discriminate.

Fight for the future

While this may be read as too little, too late, the fact is that for all the progress that’s been made, the tech titans have thus far only scratched the surface of AI’s potential. The use of “intelligent” tech will continue to grow, so it’s absolutely vital that action happen now. The damage already done can’t be undone, but through conscious action we can prevent tomorrow’s marginalized people from being hurt even more by an expansion of algorithm-driven consumer and governmental applications.

New York’s program will hopefully serve as a call to action not only for other municipalities, but for programmers and their employers to stay conscious of the limitations of smart algorithms. Major initiatives to increase diversity in the science and tech industry also promise to fight discrimination at the root with a more diverse workforce that understands firsthand the historic prejudices that continue to afflict the tech world and the greater society surrounding it. As technology itself evolves, there’s no reason to believe that the industry that creates it can’t progress simultaneously.

The Thrive Global Community welcomes voices from many spheres. We publish pieces written by outside contributors with a wide range of opinions, which don’t necessarily reflect our own. Learn more or join us as a community member!