Ethics


Whenever AI interacts with society, ethical choices are being made, whether or not the decisions are conscious, or made by humans or machines. There are many avenues through which we must consider the ethics of AI deployment in our social, economic, political, and natural worlds. This page gives only the very briefest highlights of some of the main avenues and is by no means an exhaustive list.


AI vision.

We know AI influences our online shopping, digital dating, music choices, Google search hits, and what we see on TikTok. It’s already used extensively for facial recognition and surveillance by many governments, giving them unprecedented power over their citizens. Often the tech is being used beyond its current abilities. In 2019 MIT researcher Joy Buolamwini discovered that she had to wear a white mask to get a facial recognition service to see her face at all. The AI model didn’t recognise dark-skinned faces.

MIT researcher Joy Buolamwini found that facial recognition software could only see her when she put a white mask on. Image still from the movie Coded Bias.

Some governments have experimented with using facial surveillance in school classrooms to make sure students pay attention. Obviously, that opens a path to a huge range of problems. Police around the world have wrongfully arrested and gaoled people based on incorrect facial recognition outputs. In January 2020 a black man named Robert Williams was wrongfully arrested and detained based on a blurry and inaccurate surveillance photo (Castelvecchi, Nature 2020).

Surveillance technology is also being used on a large scale in China to detect, track, and control ethnic minorities such as the Uyghurs (Van Noorden, Nature 2020). Government use of facial recognition technologies poses significant ethical problems and threats to liberal democracies (Smith & Miller, 2021). All advanced countries are grappling with the impact of this new AI-driven technology in 2021.


AI power.

Whether AI technologies are used for ‘good’ or ‘bad’, one thing is clear: those that hold the most advanced AI hold the power. On an individual level, the deployment of AI can have enormous power over the lives of people forced to interact with it. The question of power and AI is one of the most important questions we must ask, and there are many topics of power to be discussed within this context (Crawford, 2021; Bartoletti, 2020; Zuboff, 2018).

Much of the power of AI comes from the decision-making capabilities we hand AI models. AI is used to make decisions about who gets the best jobs, where water and resources are distributed, what areas are mined, what chemicals should be developed, and which swinging voters to target in an election.

AI-powered technologies enable organisations and actors to target people’s minds in a way never before possible: this technique, known as psychographic targeting, was the foundation of Cambridge Analytica. The Cambridge Analytica scandal saw millions of Facebook accounts used to help sway the 2016 US presidential election. A key employee of Cambridge Analytica, Brittany Kaiser, has called the technique “weapons grade communications” (The Great Hack, 2019).

Brittany Kaiser, a former employee of Cambridge Analytica, discusses how the (AI-powered) methodology of targeting voters in the 2016 US election through Facebook was “weapons grade technology” (at 1 minute in this clip). This testimony was given in the British House of Commons in April 2018.

As AI reaches into more aspects of our lives, more questions of power arise. During the pandemic, AI technology has been used to help develop vaccines, model how to manage outbreaks, and build other COVID-fighting tools. Much of the work that AI has done has been invaluable. However, as we race to track the movement of the disease, we must also consider the cost to our privacy (Nature Machine Intelligence, April 2021).


Case studies

Below are four very brief overviews of cases where AI has caused ethical concerns in our societies. There are in fact many hundreds of documented case studies. Below these case studies I have linked four books I recommend, though there are many more excellent books on the subject.

Case study 1/4 Increased incarceration of black people.

In the USA, algorithms make predictions about people up for parole and release from gaol, pronouncing who is likely to reoffend and who is likely to “go straight”. These predictions are used by judges to make decisions about people’s lives. In 2016 an investigation by ProPublica showed these algorithms to be racially biased: black defendants were roughly twice as likely as white defendants to be incorrectly labelled as higher risk of reoffending.

Fugett was rated low risk after being arrested with cocaine and marijuana. He was arrested three times on drug charges after that. Image: ProPublica.

The evidence indicates significant in-built bias (Chouldechova, 2017) in the software, likely due to sociological flaws in the risk assessment frameworks (Werth, 2019). The datapoints used come primarily from heavily loaded survey questions defendants have to answer about their family, friends, and values.

PREDICTION FAILS DIFFERENTLY FOR BLACK DEFENDANTS
                                               White    African American
Labeled Higher Risk, But Didn’t Re-Offend      23.5%    44.9%
Labeled Lower Risk, Yet Did Re-Offend          47.7%    28.0%
Source: ProPublica analysis of data from Broward County, Fla.
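
To make concrete how ProPublica quantified this disparity, here is a minimal sketch of the two error rates shown in the table above, computed per group. The records in the sketch are invented purely for illustration and are not ProPublica’s data.

```python
# Minimal sketch of the disparity ProPublica measured: for each group, how often
# defendants who did not reoffend were labelled "higher risk" (false positives),
# and how often defendants who did reoffend were labelled "lower risk"
# (false negatives). The records below are invented purely for illustration.
records = [
    # (group, labelled_higher_risk, reoffended)
    ("white", True,  True), ("white", False, True), ("white", False, True),
    ("white", True,  False), ("white", False, False),
    ("black", True,  False), ("black", True,  False), ("black", True,  True),
    ("black", False, True), ("black", False, False),
]

def error_rates(group):
    rows = [r for r in records if r[0] == group]
    did_not_reoffend = [r for r in rows if not r[2]]
    did_reoffend = [r for r in rows if r[2]]
    false_positive_rate = sum(r[1] for r in did_not_reoffend) / len(did_not_reoffend)
    false_negative_rate = sum(not r[1] for r in did_reoffend) / len(did_reoffend)
    return false_positive_rate, false_negative_rate

for group in ("white", "black"):
    fpr, fnr = error_rates(group)
    print(f"{group}: higher risk but didn't reoffend {fpr:.0%}, "
          f"lower risk yet did reoffend {fnr:.0%}")
```

When these two rates differ sharply between groups, as they did in the Broward County data, the model’s mistakes fall more heavily on one group than the other even if its overall accuracy looks similar.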


Case study 2/4 Higher grades to wealthier suburbs.

Another example is the UK government’s use of an algorithm to award grades to graduating high school students during the COVID pandemic in 2020. Due to lockdowns, many students couldn’t sit exams, so teacher-assessed results were adjusted against each school’s historical performance, which correlates strongly with postcode. The result was that students living in wealthier postcodes received higher adjusted grades than those in poorer areas, and these incorrectly adjusted results decided which universities school leavers were able to get into.
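
As an illustration of how this kind of statistical moderation can go wrong, the sketch below forces a teacher-assessed ranking to fit a school’s historical grade distribution. It is a deliberately simplified, hypothetical rule, not the actual Ofqual algorithm, and the student names and distributions are invented.

```python
# Stylised illustration (not Ofqual's actual algorithm): students are ranked by
# teacher assessment, then grades are reassigned to match the school's
# historical grade distribution, regardless of individual ability.
def moderate(students_ranked_best_first, historical_distribution):
    """historical_distribution: list of (grade, count) pairs from best to worst."""
    adjusted = {}
    i = 0
    for grade, count in historical_distribution:
        for student in students_ranked_best_first[i:i + count]:
            adjusted[student] = grade
        i += count
    return adjusted

# A historically low-performing school rarely awarded A grades...
low_performing_school_history = [("A", 1), ("B", 2), ("C", 3)]
students = ["Asha", "Ben", "Chloe", "Dev", "Ella", "Finn"]  # ranked best first
print(moderate(students, low_performing_school_history))
# Even the school's second-best student cannot receive an A here,
# while a similar student at a historically high-performing school could.
```

The unfairness comes from the moderation rule itself: individual achievement is capped by the past performance of the school, which in practice tracks the wealth of the surrounding area.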

LONDON, ENGLAND – AUGUST 16: A-level students protest over the results fiasco at Parliament Square on August 16, 2020 in London, England. (Photo by Guy Smallman/Getty Images). https://unherd.com/2020/08/how-ofqual-failed-the-algorithm-test/

Case study 3/4 Unfair health care distribution.

A third example is an AI model produced to prioritise health care services in the US. Due to poor research design, the algorithm more often recommended relatively healthy white patients than sicker people of colour for extra doctor visits, nursing care, and in-home care services (Ledford, Nature 2019).

The algorithm wasn’t coded to be racist; rather, a flawed data collection design used previous health care expenditure to predict future health care requirements. It was never considered that poorer people have less money to spend on health care, which is quite different to their actual health care needs. The same flaw has been found in multiple AI health care models.
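
A minimal sketch of that proxy problem: if patients are ranked by predicted cost (here simply their historical spending) instead of by underlying need, a group with equal need but less access to care is pushed down the priority list. All figures are invented for illustration.

```python
# Sketch of the proxy problem: predicting cost instead of need.
# Two groups have the same underlying health need, but group B has had
# less access to care and therefore lower historical spending.
# All figures are invented for illustration.
patients = [
    {"group": "A", "need": 8, "past_spend": 8000},
    {"group": "A", "need": 5, "past_spend": 5000},
    {"group": "B", "need": 8, "past_spend": 4000},  # same need, half the spend
    {"group": "B", "need": 5, "past_spend": 2500},
]

# A "model" that ranks patients for an extra-care programme by predicted cost,
# which here is just their historical spending.
ranked_by_cost = sorted(patients, key=lambda p: p["past_spend"], reverse=True)
ranked_by_need = sorted(patients, key=lambda p: p["need"], reverse=True)

print("Priority by predicted cost:", [(p["group"], p["need"]) for p in ranked_by_cost])
print("Priority by actual need:  ", [(p["group"], p["need"]) for p in ranked_by_need])
# Ranking by cost puts every group A patient ahead of the equally sick group B patients.
```

Nothing in the code mentions race; the bias enters entirely through the choice of spending as a stand-in for need.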

A biased algorithm favoured white people to receive more care than black people. MIT Tech Review, Oct 2019.
Black people with complex medical needs were less likely than equally ill white people to be referred to programmes that provide more personalized care. Nature Oct 2019

There are numerous case studies of the ethical questions that arise when we include AI in our health care decision making. AI is so powerful a tool in helping us cure diseases and better manage the health of individuals and societies that we cannot begin to think of not using it. We just need to think carefully every time we do use AI to help with human health.


Case study 4/4 A new underclass of workers.

Most AI is decidedly unintelligent; it often needs humans to show it the way. This is especially true of supervised machine learning, where humans are critical to the learning loop, tagging large amounts of data to supervise the learning.

Human-in-the-loop AI training is usually shaped by financial considerations. Low-cost human-supervised AI training has created a new underclass of workers called “ghost workers”. Ghost workers do the grunt work for machines by labelling data: this is a dog, this is a cat, this is a pineapple, and so on. As in the sweatshops of the clothing and electronics industries, the pay is poor (often a cent or two a click) and the conditions are bad.

This photo shows low-paid workers in India tagging medical images of polyps to train AI diagnostic systems. Some workers in this new industry earn pennies per hour tagging images of extreme violence and child abuse for Big Tech and large social media companies. Image: The New York Times, April 2019.

Ghost workers are frequently exposed to highly disturbing images, including violence against women, children, and animals. Their resulting PTSD and other health impacts are rarely addressed. The term was coined by Mary Gray and Siddharth Suri in 2019. The ethics of human-powered training and supervision are a significant issue; indeed, one of the appeals of pursuing more advanced AI is that it would not rely on this low-paid labour.


For further case study analysis I recommend the following books:


AI & Climate

Apart from the decisions AI is allowed to make that impact our climate and natural world, there are other factors to consider. In brief, these can be viewed in two streams: what AI takes out of the Earth and what AI puts back into the environment. In both cases we are seeing heavy environmental costs associated with AI technologies.

Crawford (2021) has written eloquently about the natural resources that AI demands, such as lithium and rare-earth minerals. The race around the world to secure the minerals required by AI technologies is not only causing heavy environmental impact, it is also shifting power, affecting geopolitics, and perpetuating conflict-mineral human rights abuses.

Children mining cobalt minerals in the Democratic Republic of Congo. Image: CBS News, March 2018.

There is also the cost of what these technologies put back into the environmental system, both in waste products and in the carbon footprint they create.

World’s largest toxic lake, filled with refuse from emerging technologies, located in Baotou, Inner Mongolia, China. Image: BBC 2015.
The toxic lake was said to be 5.5 miles wide in 2015, made entirely of dangerous sludge. Image: Business Insider, May 2015.

Information on the lake is scarce. It was reported in 2020 (Khaltar, Oct 2020) that a river diversion in Mongolia, made to feed the mining of rare earths near this lake, has removed critical water from the Gobi area.

As our AI models grow in size, so too does their carbon footprint. GPT-3, a large language model I study, was trained on a really big dataset – the Internet! One training run of this model produced an estimated 552 metric tons of CO2. That’s the equivalent of driving a car to the moon and back.
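
To give a sense of how such figures are estimated, the sketch below uses the common approach of multiplying hardware power draw, training time, data-centre overhead (PUE), and the grid’s carbon intensity. Every number in it is an assumption for illustration, not a measured GPT-3 figure.

```python
# Rough estimate of training emissions: energy drawn by the accelerators,
# scaled up by data-centre overhead (PUE), times the grid's carbon intensity.
# All numbers below are illustrative assumptions, not measured GPT-3 figures.

gpu_count = 1000            # accelerators used for the training run (assumed)
gpu_power_kw = 0.3          # average draw per accelerator in kW (assumed)
training_hours = 24 * 30    # one month of continuous training (assumed)
pue = 1.1                   # data-centre power usage effectiveness (assumed)
grid_kg_co2_per_kwh = 0.43  # carbon intensity of the electricity grid (assumed)

energy_kwh = gpu_count * gpu_power_kw * training_hours * pue
co2_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000

print(f"Estimated energy: {energy_kwh:,.0f} kWh")
print(f"Estimated emissions: {co2_tonnes:,.0f} tonnes CO2e")
```

Because the carbon intensity of the grid varies enormously between regions, the same training run can produce very different emissions depending on where, and when, it is run.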

The chart below indicates the relative carbon footprint of training an AI model in 2019.

Relative CO2 outputs in 2019. Image: City AM, May 2021.

The conversation around the CO2 costs of very large AI models, in particular large language models, has gained many voices in the last three years. The high cost of very large AI models has been dubbed ‘Red AI’ (Schwartz et al., 2019), as quoted in Nature below:

“For a linear gain in performance, an exponentially larger model is required, which can come in the form of increasing the amount of training data or the number of experiments, thus escalating computational costs, and therefore carbon emissions.”

Dhar, Nature, August 2020

The true cost of large AI models is not just in the training runs; it is in the carbon footprint of the whole infrastructure that supports the development and running of the models: the rare earths, the supply chains, and other energy-hungry systems. Unfortunately, as most large AI models are privately owned, full transparency of these costs is rare.

These considerations are very important in the ethics of large language models, and there is significant work underway on creating greener AI.


Explosion of AI Ethics

The field of AI research first developed in earnest in the 1950s, with the ethics of AI not far behind in the 1960s; more recently there has been an explosion of interest in the ethics of AI. Floridi & Cowls (2019) examined a wide range of developing ethical AI principles and condensed 47 principles from numerous papers down to five key principles. The authors note that these principles stem from western liberal democracies.

Five key principles of AI ethics: beneficence, non-maleficence, autonomy, justice, and explicability. Image: Floridi & Cowls, 2019.

Another systematic review, published by Hagendorff (2020), compared 22 guidelines and their efficacy, noting that in general the world was failing to adhere to them. Further, most AI ethics guidelines are targeted at industry and government policy, and very few cover critical aspects of academic research.

Most governments now have an AI ethics framework, and international organisations, such as the OECD and the United Nations, have developed guidelines in collaboration with numerous nations.

The eight Australian AI ethics principles published by the Department of Industry, Science, Energy and Resources are:

  1. Human, societal and environmental wellbeing: AI systems should benefit individuals, society and the environment.
  2. Human-centred values: AI systems should respect human rights, diversity, and the autonomy of individuals.
  3. Fairness: AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.
  4. Privacy protection and security: AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.
  5. Reliability and safety: AI systems should reliably operate in accordance with their intended purpose.
  6. Transparency and explainability: There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI, and can find out when an AI system is engaging with them.
  7. Contestability: When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system.
  8. Accountability: People responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.

Australia’s AI Ethics Principles

These days, the field of AI ethics is so large and diverse that it is not possible to cover all the approaches in this short overview. What is common amongst voices in the field is the view that the need to address the ethics of these rapidly emerging and growing technologies is urgent and critical.


Ethics frameworks

Technomoral Virtue Ethics

Virtue ethics is rooted in ideas of what makes a good life. I have discussed this old philosophical question with GPT-3 quite a bit and will share those discussions on my blog. That we have been discussing the nature and importance of virtue ethics for thousands of years indicates that it is a fundamental question for societies. Virtue ethics was a primary topic of discussion for Plato, Aristotle, Socrates, Confucius, and Mencius, and the philosophies of Buddhism and Hinduism have also been explored to understand virtue ethics in a modern-day context.

Virtue ethics is one of the three main classes of normative ethics – the other two being deontology (rules, duties, responsibilities) and consequentialism (consequences of your actions). Both deontology and consequentialism have been extensively applied to AI and some good and important work has come from those endeavours. Below is one example shown in an interview about Kant and the design of AI.

“What could the 18th century German philosopher Immanuel Kant have to contribute to the design of artificial intelligence? Quite a lot, according to the University of Southampton’s Dr Andrew Stephenson.”

Whilst I acknowledge the value and importance of these paths of exploration, I am drawn to the path of virtue ethics when discussing large language models. This is in part because I believe it aligns better with ideas of value pluralism, which are a key part of my research into language models. Additionally, I am drawn to virtue ethics because I believe it promotes the responsibility of the individual rather than outsourcing that responsibility to laws, guidelines, and frameworks. A prominent scholar in the field of virtue ethics and AI is Shannon Vallor, who developed a framework of twelve technomoral virtues (shown below).

My work is focussed on how to use language models as mirrors to better understand our own standpoints in a virtue ethics context. The hope that AI will help us live our best lives is deeply entwined with our understanding of, and drive for, eudaimonia.

Value Pluralism

Moral and value pluralism is the idea that there can be conflicting beliefs that are each worthy of respect. This becomes especially important when we look at the global nature of emerging technologies and how they reflect (or fail to reflect) a diverse range of beliefs and values. Unlike virtue ethics, which is a part of normative ethics, value pluralism is a meta-ethic.

Philosophers Isaiah Berlin and James Fitzjames Stephen are credited with establishing the field. Value pluralism fits well with our increasingly diverse and inclusive global village. It is a more ‘messy’ and challenging approach, given that value pluralism doesn’t lend itself to defined rules that can be coded into a machine. However, we should not be dissuaded from the correct path simply because it is the more difficult one.

Pragmatism

The philosophy of pragmatism is closely aligned with semiotics and meaning, making it a highly useful ethical tool for language models. Transformer AI and language models encode and decode meaning in words using attention weighting; drawing on an already established framework of how meaning works in speech and text, such as pragmatism, is undoubtedly useful.
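
As a concrete illustration of the attention weighting mentioned above, here is a minimal sketch of scaled dot-product attention in NumPy. It is the generic textbook formulation with toy random vectors, not the code of any particular language model.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Generic scaled dot-product attention: each output position is a
    weighted mix of the value vectors, with weights derived from how
    strongly its query matches every key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                            # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V, weights

# Toy example: 3 token positions with 4-dimensional representations.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
output, attn = scaled_dot_product_attention(Q, K, V)
print(attn.round(2))   # each row sums to 1: how much each position "attends" to the others
```

In this sense, the meaning a model assigns to a word is always relational, weighted by the surrounding context, which is what makes context-sensitive accounts of meaning such a natural fit.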

Memetics

Memetics, the study of cultural ideas as replicating units, was originally proposed by Richard Dawkins, who based the idea on gene evolution. Whilst I believe Dawkins perhaps tried to over-fit his ideas to the social world, I think the essence of the idea is worthy of revival in the context of large language AI models. Unfortunately, the original ideas were taken into areas that were inappropriate; nevertheless, I think there are some good seeds of ideas to salvage and reshape for the modern age. When these ideas were being created and explored we had nothing like the AI models we have today, and it would have been difficult for those 20th-century scholars to conceive of how memetics might work in the context of a reflexive AI model.

