The implementation of machine learning tools across vast sectors of our daily lives, from our homes to our workplaces, has given rise to concerns that A.I. may solidify existing prejudices into seemingly infallible lines of code.
The troubling connection between A.I. and bias stretches back to the darkest periods of history; whilst new technologies boast advanced facial recognition and data-collection software, earlier uses of comparable technology are associated with genocides such as the Holocaust. The algorithmic use of phrenology to determine race bears resemblance to Nazi procedures for identifying Jews through stereotypical lists of “Jewish physiognomy”. And one could argue that data collection and categorisation can aid the prejudicial treatment of certain groups, not unlike the tabulation machines utilised to identify and track political prisoners in the mid-20th century.
In data sets involving human beings, A.I. has shown itself to adopt learned biases toward religion, gender, and race. The danger of allowing machine learning tools with embedded biases to go unregulated led Microsoft Research’s Kate Crawford to describe A.I. as a “fascist’s dream”. The use of A.I. in targeted discrimination has invited criticism from human rights experts such as Tendayi Achiume who, in a report delivered to the Human Rights Council, expressed concerns about the use of A.I. in facial recognition software: “As recent moves to ban facial recognition technologies in some parts of the world show – in some cases, the discriminatory effect of digital technologies will require their outright prohibition.” The social impact of prejudice enacted through machine learning has, however, already been felt under authoritarian rule in undemocratic states; a patent filed in 2018 by Huawei and the Chinese Academy of Sciences claims that its facial recognition algorithms can identify individuals’ physical traits, and the patent specifically makes note of Uighur identification, lending credence to the human rights concerns surrounding facial recognition software.
However, bias in A.I. is not limited to targeted use by governments; machine learning algorithms operate within a spontaneous and quasi-independent framework. Maria Temming of Science News illustrated this by noting that “It’s often unclear, even to the algorithm’s creator, how or why the algorithm ends up using data the way it does to make decisions.” The opaque element of machine learning lies in its ability to form patterns out of innumerable data sets; why these patterns are so often congruent with existing human prejudices is unclear. Machine learning tools created in recent years by prominent technology companies have repeatedly displayed existing prejudices. Microsoft’s A.I. chatbot “Tay” was shut down just sixteen hours after its launch when the bot began posting anti-Semitic, racist, and inflammatory content, having no mechanism to distinguish offensive from non-offensive data.
Moreover, a report published in 2019 at Northeastern University showed that Facebook’s algorithm discriminates on the basis of race, distinguishing between the advertisements targeted to white users and those targeted to ethnic minorities. A subsequent M.I.T. review quantified the report’s findings, showing that, on average, low-paid, blue-collar jobs were advertised to ethnic minorities at more than twice the rate at which they were shown to white users. In this case, the root cause of the bias is that the A.I. has been developed to fulfil optimisation and efficiency objectives which are quantifiably measured; these include ad views, engagement with ads, and the number of ad clicks, to name a few.
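To make that mechanism concrete, the toy Python sketch below, which uses entirely synthetic data and invented group labels and is in no way Facebook’s actual system, shows how a model optimised solely for predicted click-through rate will faithfully reproduce a historical targeting skew, because the skew itself is what makes the predictions accurate.

```python
# Illustrative sketch only: a toy simulation of how optimising purely for
# predicted click-through rate reproduces historical targeting patterns.
# All data, group labels, and figures here are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "historical" data: group A was historically shown blue-collar ads
# more often, so its recorded clicks on them are inflated by past targeting.
group = rng.integers(0, 2, size=n)              # 0 = group A, 1 = group B
ad_is_blue_collar = rng.integers(0, 2, size=n)  # which kind of ad was shown
base_interest = rng.random(n)                   # genuine interest, group-independent
historical_skew = 0.3 * (group == 0) * ad_is_blue_collar
clicked = (base_interest + historical_skew + rng.normal(0, 0.1, n)) > 0.75

# The optimiser only sees features and clicks; it has no notion of fairness.
X = np.column_stack([group, ad_is_blue_collar, group * ad_is_blue_collar])
model = LogisticRegression().fit(X, clicked)

# Predicted click-through for a blue-collar ad, by group: the model "learns"
# to prefer delivering such ads to group A, mirroring the historical pattern.
for g in (0, 1):
    ctr = model.predict_proba([[g, 1, g]])[0, 1]
    print(f"group {'A' if g == 0 else 'B'}: predicted CTR for a blue-collar ad = {ctr:.2f}")
```

Nothing in the objective penalises the resulting disparity; as far as the optimiser is concerned, routing blue-collar advertisements to the historically targeted group is simply the efficient choice.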
There is no mechanism by which A.I. can recognise historic patterns of discrimination; in fact, one of the posited ways in which A.I. learns bias is through misaligned training models which reflect past discrimination, and for which there is no corrective, since machine learning tools learn from patterns and seek to emulate the most efficient and effective ones. A frequently noted example of A.I. algorithms entrenching historic biases is the St. George’s Medical School admissions scandal of the 1980s, in which the university implemented an algorithm to assess written applications. Although the move was driven by a desire to make the application process fairer by excluding the potential for unconscious bias, the algorithm in fact classified applicants as Caucasian or non-Caucasian based on their name and place of birth and struck as many as fifteen points from the scores of the latter, hindering those students’ chances of proceeding to the interview stage. In this particular case there was adequate redress in the law: the matter was brought before the U.K. Commission for Racial Equality, whose inquiry established the scale of the discrimination the algorithm had enacted. This was a clear instance of training data reflecting existing prejudices, which is what makes the codification of human bias into technology especially insidious. Had a review not been brought forward, the algorithm would have continued conducting biased application reviews and hindering the progress of students from non-Caucasian ethnic backgrounds owing to pre-existing prejudices.
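A hypothetical reconstruction shows how easily such a rule becomes executable code. The sketch below is not the actual St. George’s program, which dates from the 1980s and was never published; the birthplace heuristic is an invented placeholder, and only the fifteen-point deduction is taken from the account above.

```python
# Hypothetical reconstruction of the kind of hard-coded rule described above.
# The real program's logic is not public; this placeholder exists only to show
# how a historical prejudice becomes a silent, repeatable scoring rule.
MAX_DEDUCTION = 15  # "as many as fifteen points", per the account above

def screening_score(base_score: int, birthplace: str) -> int:
    """Return the adjusted score a rule of this kind would produce.

    The real program also used the applicant's name; this stand-in checks only
    the place of birth, which is enough to illustrate the mechanism.
    """
    inferred_non_caucasian = birthplace.strip().lower() not in {"united kingdom", "ireland"}
    if inferred_non_caucasian:
        return base_score - MAX_DEDUCTION  # applied silently, before any interview
    return base_score

# Two applications of identical merit receive different scores:
print(screening_score(70, "United Kingdom"))  # 70
print(screening_score(70, "Mumbai"))          # 55
```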
While A.I. appears to be taking every industry by storm, its indiscriminate categorisation of data has the potential to devolve into prejudicial practices which can go unnoticed, or even be encouraged, by its users. The social impact of emergent technologies, therefore, cannot be ignored. At present, some eighty-eight per cent of companies worldwide use some form of A.I. or machine learning in their hiring processes or for HR generally, rendering the repercussions of misaligned algorithms and prejudicial machine learning tools dire for those who are susceptible to discrimination on the basis of gender, race, or religion. Whether the cause of pattern discrimination lies in training data that reflects past prejudices or in imbalanced data sets, the human cost of the wrongful use of machine learning tools is severe.