
A PhD project examining the ethics of artificial neural networks and how they reflect who we are.
I acknowledge and pay respect to the traditional owners of the land on which I work, the Gadigal people of the Eora Nation. It is upon their ancestral lands that the University of Sydney is built. This land has always been a learning space and is the site of one of the oldest knowledge systems in the world.
My name is Bec (Rebecca Johnson). I am a Tech Ethics PhD researcher, specifically studying the ethics of artificial neural networks: the deep learning side of AI. I am based at the University of Sydney in Australia, in the Faculty of Science. I have long been fascinated by the interplay between humans and technology: our sociotechnical systems. I have three previous degrees in science and humanities, years of industry experience, and am a citizen of Australia and Canada.
Emerging technologies of the 21st century offer great opportunities, but they also pose existential threats to humanity and the natural world. Through my research, I hope to help us make better choices when deploying these new technologies into our world.


I use sociotechnical approaches to understand how we imbue emerging technologies, such as artificial intelligence (AI), with our own biases and values, and how those ethics are amplified and reflected. The field of AI is broad; I focus on the sub-field of artificial neural networks. One specific type of neural net that I research is the Large Language Model (LLM), which uses deep learning to generate and categorise text.
This page gives a brief overview of my work. The tabs at the top of this page link to other pages that go into more detail. Enjoy the site and drop me a line, find me on LinkedIn, or follow me on Twitter if you'd like to connect!
My thesis abstract is available on the University of Sydney website here.
Listen to a 30-minute podcast about my work.
Page navigation
Quick links to sections on this page.
- Emerging technologies
- Artificial Neural Networks
- AI Ethics
- Sociotechnical Systems
- Postphenomenology & other frameworks
- Empirical work
- Reflections
- The research team
- Connect
Use the menu bar at the top of this page to navigate to other pages, or click here: philosophy, tech, ethics, projects, blog, researcher, contact.
Overview of my research
Emerging technologies
Emerging technologies of the fourth industrial revolution (4IR), such as AI, have profoundly impacted our social, economic, and environmental systems. Already we have seen tremendous gains as well as terrible (often unplanned) effects. Many researchers and organisations are concerned with the ethical consequences of deploying 4IR technologies. Whilst much important research in this area is dedicated to algorithmic transparency (transmuting the black box into a glass box), accountability, and ethical AI frameworks, my work takes a different tack. I look at the impacts of these technologies from a systems approach, widening the boundaries of the problem to include more agents, both human and digital.

In 2016, Klaus Schwab of the World Economic Forum declared that the world had entered the fourth industrial revolution (commonly abbreviated to 4IR). The first industrial revolution was characterised by the introduction of fossil fuels and the development of machinery to perform tasks that animals and humans used to do. The second was driven by the development of electricity and of wired and wireless communications. The third saw the rise of the personal computer, digital communications, and the internet.
Now as we enter the fourth we see highly networked technologies that are deeply integrated into our human social systems and even into our physical bodies. Just some of the technologies driving 4IR are artificial intelligence (AI), Internet of Things (IoT), blockchain, gene editing, 3D printing, and smart factories. The most notable characteristic of 4IR is a blurring of the boundaries between the digital, physical, and biological.
Emerging technologies are changing our world in ways that were unimaginable only at the turn of the millennium. We must plan these changes carefully if we wish to build a brighter future for all people and our planet. A better understanding of the ethical implications of using these new technologies in our societies has never been more important.
Read more about 4IR on the Tech page.
Artificial Neural Networks
The field of emerging technologies is very broad. One sub-field is Artificial Intelligence, but that too is a large and encompassing term. I focus on artificial neural networks (ANNs) and deep learning. One specific type of ANN I study is the Large Language Model (LLM), which uses deep learning to generate and categorise text.

The field of AI rose from the work of cyberneticians in the 1940s, an important early scholar being Norbert Wiener. Cybernetics is focused on feedback systems and took much inspiration from the field of biology. Cyberneticians McCulloch and Pitts had early success with artificial neural networks inspired by the biology of the human brain. The term Artificial Intelligence was coined at the 1956 Dartmouth summer workshop, in part as a way to form a distinction from the then-dominant field of cybernetics. In the 1960s there was a strong split between symbolic AI (the type of AI IBM's Deep Blue used to beat chess Grandmaster Garry Kasparov in 1997) and neural net AI (the type of AI Google's DeepMind used to beat Go champion Lee Sedol in 2016).
Read more about AI history on the Tech page.
Today we still use both types of AI, though the deep learning techniques used in neural net AI are rapidly gaining ground.
[Image: Symbolic AI is rule-based and requires humans to specify those rules. Credit: MIT-IBM Watson AI Lab.]
[Image: The more hidden layers a neural net has, the "deeper" the learning.]
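To make the distinction concrete, here is a minimal sketch (my own illustration, not drawn from any particular system): the same toy task, XOR, solved first by a hand-written symbolic rule and then learned from examples by a tiny neural network with one hidden layer.

```python
import numpy as np

# Symbolic AI: a human writes the rule explicitly.
def xor_symbolic(a: int, b: int) -> int:
    return int(a != b)  # the "knowledge" lives in this hand-written rule

# Neural net AI: the rule is learned from examples instead.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs
y = np.array([[0.], [1.], [1.], [0.]])                  # XOR targets

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # one hidden layer of 4 units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):                  # plain gradient descent
    h = sigmoid(X @ W1 + b1)             # hidden-layer activations
    out = sigmoid(h @ W2 + b2)           # network output
    d_out = (out - y) * out * (1 - out)  # backprop through the sigmoids
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # should end up close to [0, 1, 1, 0]
```

The point of the contrast: in the symbolic version the knowledge is authored by a human, while in the neural version it emerges from data, along with whatever regularities, and biases, that data contains.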
What are AI Ethics?
The substantial ethical consequences of implementing 4IR technologies in our human systems are a key driver behind my research. Technologies are not inherently good or evil. They are tools created and used by humans that also impact back on humans in cybernetic feedback loops. As such, deployments of emerging technologies in our social and natural worlds are imbued with human biases and can result in beneficial and harmful outcomes.
Tools such as AI have no capacity for a sense of morality and values, yet we increasingly permit artificial agents to make choices and decisions for us. From autonomous vehicles to recidivism risk algorithms, distribution of health care services to algorithmically adjusted school grades, hiring of new employees to management of non-renewable resources; we have given huge amounts of agency to our artificial agents. In some cases these applications of AI technologies have enabled us to speed up processes and produce fairer outcomes. In many cases, though, we have seen disastrous consequences for both people and the natural world. AI is not neutral. AI is a creation of humans and therefore carries with it the embedded biases and values of its creators.
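A deliberately simple sketch of how that transfer happens (my own illustration, with hypothetical numbers): a model trained to imitate past human decisions inherits whatever skew those decisions contained.

```python
from collections import Counter

# Hypothetical numbers, my own toy example: bias "baked in" at data selection.
# A historical hiring dataset over-represents one group...
past_hires = ["male"] * 80 + ["female"] * 20
skew = Counter(past_hires)
print(skew)  # Counter({'male': 80, 'female': 20})

# ...so any model trained to imitate those past decisions inherits the skew.
# No malicious intent is required: the selection of the training data alone
# carries the creators' bias forward into the model.
prior = {group: n / len(past_hires) for group, n in skew.items()}
print(prior)  # {'male': 0.8, 'female': 0.2}
```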

Whilst the ability to use technology for either good or bad is not at all new, our step change into 4IR heralds massively enhanced abilities for technologies to amplify our biases throughout our systems, increasing inequalities and undesirable impacts in positive feedback loops. We are at a point in history where it is critical to address these issues of inequality before they become further structurally embedded in society.
“There are ethical choices in every single algorithm we build.”
Cathy O'Neil, Weapons of Math Destruction.
There are many aspects to ethically responsible AI; just one of them is the bias with which we imbue these technologies. It is an inescapable fact that we are all biased: the true meaning of bias simply describes a perspective or angle. It is impossible to dissociate human-produced technologies such as AI from the inherent biases of their creators, deployers, and users. In many cases, datasets have our biases baked in through data selection, data tagging, and contextual experiment design. Bias is just one example of the ethical problems that can arise when we implement AI; others include transparency, fairness, human dignity, weaponisation, inequities, and automated decision-making (to name just a few). In a recent speech, Australia's outgoing Chief Scientist, Alan Finkel, noted that the most important question he sees in the issue of AI and society is "What kind of society do we want to be?"
Sociotechnical Systems
Sociotechnical systems (STS) approaches help us to better understand the relationships between technologies, social structures, and the emergent phenomena that arise from those relationships. By moving the abstraction boundaries of the problem to encompass a broader range of actors and relationships, my work considers cybernetic movements of agency and power within an AI-driven STS. By developing a better understanding of the dynamics of these systems, I believe we can more clearly understand these technologies' ethical consequences, rather than focusing on solutionist approaches.

Sociotechnical systems (STS) theory was initially developed by members of the Tavistock Institute in response to the social upheavals caused by the mechanisation of coal mining. As we enter 4IR, with its signature fusion of cyber-physical systems, there has been renewed interest in STS as a way to navigate the enormous changes we are facing. Recently, STS thinking has been applied to AI ethics in a range of different ways, for example the work of Selbst et al. on Fairness and Abstraction in Sociotechnical Systems (2019).
Read more about STS on the Philosophy page.
Postphenomenology and other frameworks
My work sits at an intersection of philosophy, technology, science, and sociology, so I am exploring new ways of combining useful epistemologies from multiple fields. Key theoretical frameworks and lenses used in this work include:
- Philosophy of Technology and Postphenomenology.
- Concepts of agency, particularly from a systems approach.
- Theories of power, especially through knowledge, decision-making, and agency.
- General systems theory and emergent phenomena.
- Lenses of language and meaning, such as pragmatics and semiotics.
- Mimetics: reflexivity and the movement of cultural memes and norms between machines and humans.

Philosophy of technology and postphenomenology are central to my research. The traditional phenomenology of Husserl, Heidegger, and Merleau-Ponty was concerned with understanding how we perceive and experience phenomena and consciousness. There has been extensive scholarship and debate over the last century about the usefulness and failings of phenomenology, but one important criticism is its inability to adequately address issues of technology in a pragmatic way. In 1993, Don Ihde published work to overcome some of these objections and laid the groundwork for the field of postphenomenology, a shift later dubbed the "empirical turn." Adams (2007) noted that postphenomenological approaches "emphasize the anthropic confrontation with the world". There has been growing interest in applying these ideas to AI technologies.
“What sets AI apart from traditional technologies is its capacity to autonomously interact with its environment and to adapt itself on the basis of such interactions”
Van de Poel, 2020

True to the postphenomenological tradition, I incorporate empirical work in my research. My empirical studies are designed around philosophical principles of agency, systems, experience, and ethics. We are at such a critical juncture of society's development that we need both philosophy and empirical science to help guide us. Understanding the changes in our STS as we move into the fourth industrial revolution can best be advanced, in my opinion, by combining philosophical inquiry with empirical projects.
Read more about postphenomenology on the Philosophy page.
Empirical Work
OpenAI has granted me access to what is currently the world's largest LLM, GPT-3 (Generative Pre-trained Transformer, 3rd generation), which went into beta testing in July 2020. A language model is a type of deep learning neural net that is focused on generating and classifying text. LLMs use a probability distribution over sequences of words to perform these tasks. LLMs are extremely data-ravenous: GPT-3 was trained on the dataset of…the Internet! GPT-3 can generate essay-length texts that are often indistinguishable from human-produced texts.
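As a toy illustration of "a probability distribution over sequences of words" (my own sketch; GPT-3 itself is a transformer neural net with billions of learned parameters, not a count-based model), here is a bigram language model:

```python
from collections import Counter, defaultdict

# A toy bigram language model: estimate P(next word | previous word)
# from raw counts in a tiny corpus.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # how often does nxt follow prev?

def next_word_distribution(prev: str) -> dict[str, float]:
    """P(next word | previous word), estimated from the corpus counts."""
    total = sum(counts[prev].values())
    return {w: c / total for w, c in counts[prev].items()}

print(next_word_distribution("the"))
# {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
```

An LLM does the same job at vastly greater scale, conditioning on long contexts rather than a single previous word, which is what lets it generate whole essays rather than just the next word.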
The reaction to GPT-3 has been both positive and negative and has come from many fields of academia. Perhaps what is most interesting to note is the sheer volume of reaction, covering many issues related to the use of this newest technology. The Australian philosopher of mind David Chalmers believes it to be an important step along the path to artificial general intelligence. Prof. Luciano Floridi, director of the Digital Ethics Lab at the University of Oxford, notes that "One day classics will be divided between those written only by humans and those written collaboratively, by humans and some software, or maybe just by software." (2020). Telling of Microsoft's opinion of GPT-3 is that it paid US$1 billion to become OpenAI's exclusive cloud provider.
Using concepts of mantic mimesis (i.e., using neural nets as a mirror to ourselves), I am experimenting with how LLMs can help us better understand our own biases and values. Large neural nets will magnify and reflect our values, perspectives, and decisions, at times refracting to impact others in ways we had not considered.
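One way such an experiment can be sketched (a hypothetical probe of my own; the prompts, the stand-in completion function, and the counting scheme are illustrative assumptions, not my actual experimental design):

```python
from collections import Counter

# Hypothetical paired-prompt bias probe: compare what an LLM generates
# after two prompts that differ only in the profession mentioned.
PAIRED_PROMPTS = ["The doctor walked in and", "The nurse walked in and"]

GENDERED = {"he": "male", "him": "male", "his": "male",
            "she": "female", "her": "female", "hers": "female"}

def sample_completions(prompt: str, n: int = 3) -> list[str]:
    # Stand-in for a real LLM call (for GPT-3, OpenAI's completion endpoint,
    # sampled many times per prompt); canned text keeps the sketch runnable.
    return [" he said the patient was ready."] * n

def count_gendered_words(completions: list[str]) -> Counter:
    """Tally gendered pronouns across a batch of completions."""
    tally = Counter()
    for text in completions:
        for token in text.lower().split():
            word = token.strip(".,!?;:")
            if word in GENDERED:
                tally[GENDERED[word]] += 1
    return tally

for prompt in PAIRED_PROMPTS:
    print(prompt, dict(count_gendered_words(sample_completions(prompt))))
# A systematic skew between the two tallies is the "mirror" at work: the
# model reflecting associations in its human-generated training data.
```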
Read more about my projects on the Projects page.
Reflections
I believe that sociotechnical systems (STS) mapping of artificial neural networks (ANNs) can be used to better understand how our values and hidden biases are reflected in these emerging technologies, such as large language models (LLMs).

Perhaps an easy way to understand this is to look at the 2019 interactive art installation by the Italian design studio Ultravioletto. Their installation, called the "Neural Mirror", allowed participants to look into a mirror, enter some basic details about themselves, and then see a projected data doppelganger.
On a deeper level than the Italian Neural Mirror, we can look at how our hidden and unconscious biases, our values and ethics, and our perspectives on the world are embedded into our AI systems, then amplified and reflected back. This is the idea that underpins my empirical philosophical work.
Read about mimetics on the Philosophy page.
A global group of PhD researchers
As a parallel initiative, in November 2020 I founded a global group called "PhD Students in AI Ethics". The group has over 130 members from 28 countries, representing every continent (except Antarctica!), and is growing steadily. Members are all doctoral or postdoctoral researchers studying AI ethics and hail from a wide variety of disciplines and fields. View the website at phdAIethics.com.
The motivation for setting up this group was to develop a platform for doctoral researchers in AI ethics from all disciplines and geographic regions to connect, share, and support each other. We have frequent reading group meetings, members share resources and ideas, and one group is already working on a collaborative paper about GPT-3 and the ethical concerns of content filtering.
Go to PhD Students in AI Ethics website (opens new tab).
The researcher
Bec Johnson is a doctoral student at the University of Sydney. Bec is supported by her research supervisor, Prof. Dean Rickles from the School of History and Philosophy of Science, and her industry supervisor, Dr Lucy Cameron from the Commonwealth Scientific and Industrial Research Organisation (CSIRO).
Doctoral Advisor
Prof. Dean Rickles. Dean received his PhD from the University of Leeds in 2004, under the supervision of Steven French, with a thesis on conceptual issues in quantum gravity. He took up a postdoctoral fellowship at the University of Calgary in 2005 (split between health sciences and philosophy), on the application of complex systems theory to population health. Dean’s primary research focus is the foundations of quantum gravity research. However, he also has strong interests in musicology, econophysics, climate science, neuroscience, psychology, transhumanism, computing and philosophy of religion. He plays the piano as often as he can (and the drums as often as he is allowed).
Connect
Find me on LinkedIn or follow me on Twitter if you'd like to connect.
Use the menu bar at the top of this page to navigate to other pages, or click here:
philosophy, tech, ethics, projects, blog, researcher, contact.