
Undoing the Racism Embedded in AI

The fight against social discrimination now extends to the way AI is created

The Thrive Global Community welcomes voices from many spheres on our open platform. We publish pieces as written by outside contributors with a wide range of opinions, which don’t necessarily reflect our own. Community stories are not commissioned by our editorial team and must meet our guidelines prior to being published.

It is generally agreed that the development of AI has made many aspects of everyday life much easier: searching through databases, replacing human labour with robots on factory production lines, even financial forecasting in some cases. Yet although humans have overcome many of their physical and intellectual limitations in recent decades, one problem remains deeply rooted in our society.

Racism is but one of those persisting social issues. In theory, slavery has been abolished and people of different races can work and live as equals; attitudes towards the roles of men and women in society have likewise changed over the last century. But countless studies have shown that people have managed to "give racism a new face": AI is often built in a way that skews the odds in favour of white men.

Our team, CleverMinds, is building yet another facial recognition AI. The difference is that ours will be free of such bias: in front of our technology, all people have equal rights and obligations.

This kind of approach also pays off financially. Consider hiring: among the applicants for a job, there may be candidates of colour who are more experienced than their white counterparts. Using an AI that automatically rejects them and shortlists only white people (possible, for example, when applicants are required to submit a photograph) is both unethical and impractical, since a poorly prepared candidate may get the job and contribute to the hiring company's financial losses down the line.

Another relevant example is the "genderising" of specific jobs. Everyone knows that accepting a website's cookie policy leads to advertisements spread across the page, more or less relevant to the user. Few people stop to think that the AIs proposing those advertisements are themselves biased: unemployed men are more likely to see ads from companies that require physical strength from candidates (such as construction firms), whereas unemployed women are more likely to see ads seeking a cleaner.
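The hiring example above can be sketched in code. This is a minimal, hypothetical illustration of "blind" screening, not CleverMinds' actual system: the field names, the list of protected attributes, and the toy scoring rule are all assumptions made for the sake of the example. The idea is simply that attributes which could reveal race or gender are stripped out before any ranking happens, so they can never influence the result.

```python
# Hypothetical "blind" CV screening sketch. Field names, the protected
# list, and the scoring rule are illustrative assumptions only.

PROTECTED_FIELDS = {"name", "photo", "gender", "ethnicity", "date_of_birth"}

def blind(cv: dict) -> dict:
    """Return a copy of the CV with protected fields removed."""
    return {k: v for k, v in cv.items() if k not in PROTECTED_FIELDS}

def score(cv: dict) -> float:
    """Toy relevance score: experience plus number of matched skills."""
    return cv.get("years_experience", 0) + 2 * len(cv.get("skills", []))

applicants = [
    {"name": "A", "photo": "a.jpg", "years_experience": 7,
     "skills": ["python", "sql"]},
    {"name": "B", "photo": "b.jpg", "years_experience": 3,
     "skills": ["python"]},
]

# Strip protected fields first, then rank: the scoring function
# never even sees a name or photograph.
ranked = sorted((blind(cv) for cv in applicants), key=score, reverse=True)
print(ranked[0]["years_experience"])  # → 7
```

The design point is the ordering: anonymisation happens before scoring, so bias on protected attributes is impossible by construction rather than merely discouraged.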

In conclusion, our team is going to develop a project with a dual role. Not only will it leave no space for racism in today's society, it will also turn that fairness into profit for client companies, since the AI will select people based on criteria that actually matter for the purpose it was built to serve.
