Home

I acknowledge and pay respect to the traditional owners of the land on which I work: the Gadigal people of the Eora Nation. It is upon their ancestral lands that the University of Sydney is built. This land has always been a learning space and is the site of one of the oldest knowledge systems in the world.

PhD research – Sociotechnical Generative AI: Ethics, bias, and value pluralism.


My name is Bec (Rebecca Johnson). I am a Generative AI Ethicist and PhD researcher, and I examine how our human values are reflected by GenAI technologies. I am based in the Faculty of Science at The University of Sydney in Australia. I use sociotechnical systems theories and moral philosophy to understand how our biases and worldviews are entangled with generative AI systems and reflected in their outputs.

My research partner is Dogtoral candidate Jackson the Boxer, who specialises in ball, beach, and being ridiculously good looking.

I hold a Bachelor of Science, a Bachelor of Communications, and a Master's by Research, and I spent a year at Google Research in the Ethical AI department. I was listed among the “100 Brilliant Women in AI Ethics” by Lighthouse3 in 2020, and later that year I founded the global group PhD Students in AI Ethics. I am a Managing Editor of the AI Ethics Journal and serve on the Editorial Board of the Springer journal AI and Ethics. I have received scholarships from Stanford and MIT to attend AI events. In April 2023, I convened ChatLLM23, the largest Australian conference to date on the ethics of GenAI. I convened a follow-up event in July, ChatRegs23, bringing experts together for a think-tank workshop on AI policy and regulation. I have taught Master's and undergraduate units, lectures, and tutorials at the University of Sydney since 2016, in the Faculties of Science and Arts and in the Business School. Links to many of my talks and media mentions can be found on the Media+talks page of this site.

I have been researching Generative AI (GenAI) since 2019 and technology ethics since 2015. I aim to help us adopt GenAI in a responsible and ethical manner, one that is inclusive of a diversity of people and cultures and that respects value pluralism in our technologies. Through my work, I seek to provide a deeper understanding of how human values are embedded, reflected, and co-constructed in GenAI systems.


Overview of my research

Generative Artificial Intelligence (GenAI) models reflect the human values that guide their development, evaluation, and use. It is well known that these models can produce harmful stereotypes and other socially irresponsible outputs, and scholars all over the world have been working hard to address these issues. However, even when the outputs appear ‘good’ to our eyes, they may still reflect Western morals, obscuring the values of other parts of the world and of marginalised groups.

My research investigates effective methods for analysing and addressing the embedding of values, biases, and normative assumptions within these GenAI sociotechnical systems. I employ tools from the social sciences, philosophy, and cybernetics to achieve this. I apply these frameworks to concrete aspects of GenAI systems, including evaluations of the models, fine-tuning by feedback, and the interplay of user prompts with underlying model embeddings.

These slides work best as accompaniments to my talks. If you would like to engage me to speak at your event or organisation about any of the topics touched on in these slides, or about other aspects of the ethics and risks of generative AI, please send me a message via the Connect form.

In my PhD work, I ask questions such as:

  • What methods can be employed to effectively analyse the embedding of values, biases, and normative assumptions within generative AI sociotechnical systems?
  • How can we assess the impact of embedded bias on generated outputs?
  • How can we improve evaluations of GenAI in a value-pluralist context?
  • How can we design evaluations aligned to human datasets instead of metrics that represent the normative assumptions of the designers?
  • What tools can we use to better understand how humans and machines co-construct language and meaning?

We live in a value-pluralist world: we must strive to maintain that rich human diversity and avoid powerful technologies reifying dominant worldviews at the expense of marginalised voices. Simultaneously, we want to uphold universal human values such as freedom, social progress, human rights and dignity, and preserving our natural environment. How can we align AI with values that are themselves in tension? As we rapidly integrate GenAI technologies into many aspects of our lives and industries, the value alignment problem in AI has never been more challenging or critical.


Fields of research I draw on in my work

Ethics of Generative AI

The field of AI ethics is an interdisciplinary domain that examines the moral and societal implications of artificial intelligence technologies. As AI systems, particularly generative AI models, continue to advance and proliferate across various sectors, it becomes imperative to address the ethical challenges that arise from their deployment. Generative AI (GenAI) refers to a class of AI models capable of generating novel content, such as images, text, music, or even entire virtual environments. These models hold immense potential for creative applications, but they also raise significant ethical considerations and can have a profound impact on society.

It is important to remember that GenAI is an extension of humans, as is all technology. It is created and designed by humans, owned and deployed by humans, and trained and fine-tuned by humans; GenAI sits within existing human structures such as political economies, cultural environments, laws and policies, and existing human social biases.

AI ethics places a strong emphasis on putting humans at the centre of AI development and deployment, and it encompasses a variety of considerations, including fairness, transparency, privacy, and accountability. Its central focus is to ensure that AI technologies serve the best interests of humanity and align with human values, rights, and well-being in a contextually appropriate manner.


Sociotechnical Systems

Sociotechnical systems-based approaches help us better understand the relationships between technologies, social structures, people, and the emergent phenomena that arise from those relationships. Sociotechnical systems theory was initially developed by members of the Tavistock Institute in response to the social upheavals caused by the mechanisation of coal extraction. The shift to mechanised extraction caused a huge social impact through job losses, an impact that had not been planned for. Sociotechnical maps of human-machine systems can highlight problematic areas of a system before they manifest in the real world.

I use a lot of sociotechnical mapping in my work to check my assumptions and my own biases and to perform validity checks. I see a future where GenAI not only holds a mirror to our biases but also forces us to think about how our technologies impact humanity from systems-based perspectives. The image above shows an adaptation I have made of mid-twentieth-century work to apply these concepts to evaluations of GenAI.
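As a purely illustrative sketch, a sociotechnical map can be treated as a directed graph of actors and artefacts, with edges meaning “influences”. The entities and the check below are hypothetical placeholders, not taken from my actual mapping work; the point is only that, once a system is mapped, simple structural queries can surface parts of it that are affected but have no channel to shape it.

```python
# A toy sociotechnical map of a GenAI system as a directed graph.
# Nodes are actors/artefacts; an edge A -> B means "A influences B".
# All entities here are hypothetical placeholders for illustration.
influences = {
    "developers": ["model"],
    "training data": ["model"],
    "model": ["users", "affected communities"],
    "users": ["model"],            # prompts and feedback flow back in
    "regulators": ["developers"],
    "affected communities": [],    # impacted, but with no edge back in
}

def unheard_nodes(graph: dict[str, list[str]]) -> set[str]:
    """Return nodes with no outgoing edges: parts of the system that
    are influenced by it but have no channel to influence it."""
    return {node for node, targets in graph.items() if not targets}

print(unheard_nodes(influences))  # {'affected communities'}
```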


MaSH – Machines, Society, and Humans

The concept of Human-in-the-loop (HITL) plays a vital role in responsible AI, encompassing approaches that ensure machine behaviours align with human values. However, the term is often used ambiguously, referring to both individual human input and larger human datasets. To enhance clarity, we can distinguish HITL for individual human input from Society-in-the-loop (SITL) for system inputs representing the values of larger communities. Additionally, a newer category, Machine-in-the-loop (MITL), should be introduced to encompass techniques that employ generated content from one GenAI model to train or test another model.

Analysing the relationships between these three learning loops offers valuable insights into how values and morals are embedded and reflected in GenAI sociotechnical systems. Using this framework, we can more effectively map the interdependencies among the loops, identifying potential normative biases that may permeate the system.
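As a minimal sketch of how this framework might be operationalised (all names and the pipeline below are hypothetical, not a published implementation), the three loops can be modelled as distinct feedback sources so that the balance of individual, societal, and machine signal in a training pipeline is made explicit:

```python
from dataclasses import dataclass
from enum import Enum, auto

class LoopType(Enum):
    """The three MaSH learning loops."""
    HITL = auto()  # Human-in-the-loop: input from an individual person
    SITL = auto()  # Society-in-the-loop: inputs representing community values
    MITL = auto()  # Machine-in-the-loop: one model's outputs training/testing another

@dataclass
class FeedbackSource:
    """A single source of training or evaluation signal for a GenAI model."""
    name: str
    loop: LoopType

def loop_balance(sources: list[FeedbackSource]) -> dict[LoopType, int]:
    """Count how many feedback sources fall into each loop, exposing
    the mix of individual, societal, and machine-generated signal."""
    counts = {loop: 0 for loop in LoopType}
    for source in sources:
        counts[source.loop] += 1
    return counts

# A hypothetical fine-tuning pipeline audited by loop type.
pipeline = [
    FeedbackSource("individual annotator preference ratings", LoopType.HITL),
    FeedbackSource("aggregated survey of community norms", LoopType.SITL),
    FeedbackSource("synthetic preference pairs from another model", LoopType.MITL),
]
print(loop_balance(pipeline))
```

Mapping a real pipeline this way makes interdependencies explicit: for example, an MITL source that was itself trained mostly on HITL data inherits that data's normative assumptions.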


Philosophy of tech

Discussions of the philosophy of technology date back to the Ancient Greeks, and the field is going through a strong revival right now. One aspect of this field is postphenomenology, which provides valuable insights into the intersection of human experience and technological artifacts such as AI. Traditional phenomenology (from scholars such as Husserl and Heidegger) focused on comprehending how we perceive and experience phenomena and consciousness. While this philosophical tradition has sparked extensive scholarship, it is limited in its applicability to technological issues. To address these limitations, the field of postphenomenology emerged, championed by Don Ihde in 1993. This shift, known as the “empirical turn,” sought to reorient phenomenology towards studying technology’s role in human existence.

Postphenomenology recognises the importance of empirical investigations of concrete encounters between humans and technology, allowing for a more practical and contextual understanding of how technological artifacts such as GenAI influence our perception and cognition.

Some theorists of postphenomenology are Don Ihde, Ibo van de Poel, Peter-Paul Verbeek, Robert Rosenberger, Martin Ritter, and Suzi Adams.

As Adams (2007) observed, postphenomenological approaches emphasise the “anthropic confrontation with the world,” underscoring the significance of the human-technology relationship. This approach acknowledges that technologies are not merely neutral tools but are intertwined with human experiences, values, and cultural contexts. By investigating the ways in which technology shapes and mediates human existence, postphenomenology offers a nuanced framework for comprehending the intricate entanglements between humans and the technological world.


Respect across disciplines and the ages

As discussions of the ethics of GenAI have taken hold, particularly in the Machine Learning and Computer Science communities, it is important to remember that many of these ideas have been deeply researched and discussed for years, decades, centuries, and millennia. Too often, I see scholars from the ML and CS communities try to reinvent the wheel, as if AI were the first technology humans have had to grapple with. Acknowledging and building on existing philosophical, social science, and humanities work would make us more effective and more efficient in developing ethical GenAI.

I deeply and passionately believe in the critical need for cross-disciplinary collaboration in developing the ethics of GenAI. We are facing very complex issues that computer scientists and tech-company developers alone cannot possibly resolve. The solutionism and exceptionalism that characterise some Silicon Valley leaders and SV-based GenAI developers belong to the past; these traits are inadequate for the task at hand. Whilst much respect should be given to those who have dedicated their education, research, and work to understanding and creating machines, that respect must be returned to those who have thought deeply about the social and philosophical implications of technology and humans. We are, after all, in this together; best we work together to navigate humanity through to the next phase.


Connect