Unequal risks: the moral maze of artificial intelligence
Artificial intelligence technologies have expanded rapidly in the last decade, despite mounting criticism that AI worsens social inequalities through its ‘white guy problem’.
The last decade has seen a rapid expansion in the use of AI technologies across a variety of industries, including but not limited to healthcare, the prison system and online gaming. Worldwide revenues for the AI market are predicted to exceed $500 billion by 2024, led by companies including Dell and Huawei.
This growth has been echoed by a surge of representations of AI in popular media; Westworld, Blade Runner 2049 and Ex Machina all present futures in which AI can, to varying degrees, be bent to the whims of humanity. Often overlooked in these imaginings, however, is the very real and present impact AI is having on the industries it touches: a slew of social biases are frequently imported into AI systems and algorithms from their creators or from the data fed into them. This has formed the basis of a series of attacks on AI. Perhaps its most famous critic, Kate Crawford, described AI as having a ‘white guy problem’, posing a serious risk of deepening racial inequality wherever it is used.
The most publicised example of the failures of AI came when an algorithm widely used to assign health treatments in the US was found to have been systematically discriminating against black people, based on pre-existing data sets created by human clinicians who had assigned black patients lower risk scores than equally sick white patients. The NHS, which recently extended funding for AI-powered diagnostic technology for coronary heart disease, has come under criticism for not paying sufficient attention to the potential of AI to worsen existing health gaps.
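The mechanism behind such failures can be illustrated with a toy sketch. All names and numbers below are hypothetical, not taken from the actual system; the point is simply that a model trained on risk labels that encode human bias will reproduce that bias in its own scores.

```python
# Toy sketch: a model trained on biased risk labels reproduces the bias.
# All data below is hypothetical and illustrative only.

def train_mean_risk(records):
    """'Train' by averaging the labelled risk for each severity level."""
    totals, counts = {}, {}
    for severity, label in records:
        totals[severity] = totals.get(severity, 0.0) + label
        counts[severity] = counts.get(severity, 0) + 1
    return {s: totals[s] / counts[s] for s in totals}

# Four equally sick patients (same clinical severity), but historical
# labels from human assessors gave one group systematically lower scores.
history = [
    ("high", 0.9), ("high", 0.9),  # group given accurate risk labels
    ("high", 0.5), ("high", 0.5),  # equally sick group, under-scored
]

model = train_mean_risk(history)

# The learned score for "high" severity is dragged below the true 0.9,
# so future high-severity patients are under-prioritised, even though
# group membership never appears as a feature in the model.
print(model["high"])  # 0.7
```

Note that nothing in the code mentions race: the discrimination rides in entirely on the historical labels, which is why such bias can survive the removal of any explicit demographic feature.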
An AI system used across the US to score inmates by their risk of committing future violent crimes faced similar issues, systematically assigning black inmates far higher ratings than their white counterparts, even when those inmates had committed only minor, non-violent offences. Despite being ‘remarkably unreliable’ in predicting violent crime, this score was then used in parole and courtroom hearings, influencing outcomes and further entrenching the racial bias of the justice system.
This trend repeats elsewhere; Crawford lists several more examples of racial bias in AI, including in photo recognition technology, in online job recommendations and in decisions about where to offer Amazon’s same-day delivery service.
The question remains: what could, and should, be done to amend this?
Yet similar technologies have also been used to fix racial bias. In healthcare, whilst implicit bias based on race is common, AI has also been used to supplement or correct the judgement of a clinician. In a 2021 article for Forbes (https://www.forbes.com/sites/robertpearl/2021/02/16/how-ai-can-remedy-racial-disparities-in-healthcare/), Pearl details how black women are less likely than white women to be offered breast reconstruction after a mastectomy; in a ‘medical culture that falsely believes all patients are treated equally’, properly created and utilised AI could be a force for equality.
In addition, the benefits of AI are not to be sniffed at. An AI system developed by DeepMind Technologies, a subsidiary of the Alphabet group, has proven able to detect breast cancer more consistently and accurately than human experts. This progress has been accelerated by the Covid-19 pandemic; ‘necessity is the mother of invention’, and telehealth apps have surged in usage. In some cases, racial bias in these technologies can be detected and eliminated, but in others it undoubtedly goes unnoticed, and it is in these cases that the question remains: is the worsening of racial and other inequalities an appropriate price to pay for benefits such as improved disease diagnosis? And who gets to decide?
The world is changing at a breathtaking rate, and the onslaught of new innovations in the field of AI seems unlikely to slow, despite warnings about both the risks it poses in the far future and the damage it is causing in the present. With it comes a set of questions whose answers are neither clear nor obvious; they are, as Morisse puts it, ‘philosophy with a deadline’. Decisions on the future of AI are being made, often on a case-by-case basis, and will continue to be made, each holding massive ramifications for the direction of technology.