Although it is supposed to be neutral and objective, artificial intelligence (AI) seems in some cases to reinforce inequality and discrimination. Lucia Flores Echaiz examined this issue as part of her Master’s degree in Law and Society at UQAM and her internship at UNESCO.
“While AI is supposed to be neutral and objective, several situations show on the contrary that it can deepen inequalities and discriminate through the biases built into it. This is what we call algorithmic discrimination,” stresses Lucia. “As part of my Master’s degree, I asked myself how the law can respond to these new issues, and how it can protect us from these new forms of discrimination. In Canada, as elsewhere, there is a real void in the legal literature on this issue.”
Lucia applied her research during a five-month internship in 2019 at UNESCO in Paris. She collaborated on the report Steering AI and Advanced ICTs for Knowledge Societies, for which she wrote a chapter on the gender biases of artificial intelligence.
“I have always been very interested in human rights and freedoms, and in equality. For my research, I was immediately drawn to the influence and shortcomings of artificial intelligence, as well as the discrimination that can arise from its use.” – Lucia Flores Echaiz
Women’s CVs rated lower
Among the examples of bias identified, Lucia cites from the UNESCO report the case of Amazon’s CV-sorting software. Developed by the American firm and used between 2014 and 2017, this software was meant to make it easier for human resources to sort the thousands of applications sent to the group. Lucia explains: “This software gave each CV a rating, ranging from one to five stars. However, Amazon noticed that this tool rated male applications higher than female ones, particularly for technical positions. When a CV contained terms related to feminism or women’s rights, or even simply if the candidate was a woman, that profile automatically received a lower rating.” The problem originated in the fact that the software’s AI ranked CVs based on a database of applications accepted and rejected by Amazon between 2004 and 2014… in which male CVs were already rated highest. “The algorithm itself learned to discriminate by reproducing past mistakes,” notes Lucia.
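To make that mechanism concrete, here is a minimal, hypothetical sketch (the résumé texts and labels are invented, and this is not Amazon’s actual system, whose code is not public): a toy classifier trained on historically biased accept/reject decisions learns a negative weight for a gendered term, so two CVs that differ only by that term receive different scores.

```python
# Illustrative sketch only: a text classifier trained on historically
# biased hiring labels learns to penalize gendered terms.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical "historical" decisions: résumés mentioning women-associated
# terms were disproportionately rejected in the training period.
resumes = [
    "software engineer java python",                             # accepted
    "software engineer distributed systems",                     # accepted
    "captain of women's chess club software engineer python",    # rejected
    "women's coding society lead java developer",                # rejected
]
labels = [1, 1, 0, 0]  # 1 = accepted, 0 = rejected

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, labels)

# Two new CVs that differ only in one gendered term:
cv_neutral = "java developer python chess club"
cv_gendered = "java developer python women's chess club"
for cv in (cv_neutral, cv_gendered):
    score = model.predict_proba(vectorizer.transform([cv]))[0, 1]
    print(f"{score:.2f}  {cv}")
# The gendered CV gets a lower "acceptance" score: the model has learned
# the historical bias, not any job-relevant skill.
```

The point is that the model never needs an explicit gender field: a term correlated with past rejections is enough for it to reproduce the old pattern.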
Better job offers for men
The student points to other examples of bias, particularly in targeted advertising. “The personalization of online content has developed a lot, and some companies record our data to better target their ads. It is through these mechanisms that men can receive more interesting and better-paid job offers than women.” Lucia points out, however, that this targeting can be deliberate on the part of companies, which apply settings to show an offer only to a certain age group, or only to men or women.
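The deliberate targeting Lucia describes needs no machine learning at all. Here is a minimal sketch, with invented type and field names, of how an advertiser’s audience settings alone can keep a job ad from ever reaching women:

```python
# Hypothetical audience filter: the field names are invented, but the logic
# mirrors the demographic targeting settings ad platforms expose.
from dataclasses import dataclass

@dataclass
class User:
    age: int
    gender: str

@dataclass
class AdCampaign:
    title: str
    min_age: int
    max_age: int
    genders: set  # demographic filter chosen by the advertiser

def eligible(user: User, ad: AdCampaign) -> bool:
    """Return True if the ad's targeting settings allow showing it to this user."""
    return ad.min_age <= user.age <= ad.max_age and user.gender in ad.genders

senior_role = AdCampaign("Senior engineer, high salary", 25, 45, {"male"})
print(eligible(User(30, "male"), senior_role))    # True
print(eligible(User(30, "female"), senior_role))  # False: never sees the offer
```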
“The question of algorithmic discrimination deserves a lot of attention,” warns Lucia. “Beyond the technical aspect, this form of discrimination is a legal problem. We need to learn how to deal with this issue, and to approach it from a sociological and philosophical angle as well.”
She also stresses that artificial intelligence will always need human oversight, from the design of databases to the interpretation of its choices, including its machine-learning methods. “The UNESCO report suggests, among other things, that states should ensure that their legislation protects their citizens from algorithmic discrimination, and that companies should consider such discrimination when developing and evaluating their software,” concludes Lucia.