Overview

Improving PhD skills for work in the fourth industrial revolution: A sociotechnical approach to ethical AI and research education.

Doctoral researcher Bec Johnson is exploring how sociotechnical approaches to ethically responsible AI can help PhD candidates future-proof their research and work skills for the fourth industrial era.

The fourth industrial revolution (4IR) heralds a rapid influx of new technologies into our work and social systems, bringing widespread disruption to how we interact with technologies and how that interaction shapes our environment. The increasing deployment of AI technologies in our fourth industrial era requires a wider understanding of how AI may be applied to doctoral research, whether now or retrospectively. Doctoral researchers should be supported to improve their understanding of how their work may intersect with AI technologies so they can ethically future-proof their research outcomes.




What is ethical AI?

There are substantial ethical consequences when 4IR technologies are implemented in our human systems. Highly developed human qualities such as ethical acuity, empathy, self-agency, and global consciousness are vital for providing leadership into the 4IR we hope for. Technologies are not inherently good or evil; they are tools created and used by humans, and as such their deployment is imbued with human biases and can be put to both helpful and destructive uses. Tools such as AI have no capacity for morality or values; it is how we create and use these tools that can result in ethical problems in our societies and natural environments.

The ethics of AI is a rapidly growing field. We need AI ethicists, but we also need to improve the understanding of the ethics of AI amongst leaders and researchers.

Whilst the ability to use technology for good or ill is nothing new, our step change into 4IR brings massively enhanced capacity for technologies to amplify our biases across our systems, increasing inequalities and undesirable impacts in positive feedback loops. We are at a point in history where it is critical to address these issues before they become further structurally embedded in society.

“There are ethical choices in every single algorithm we build.”

Cathy O’Neil, author of Weapons of Math Destruction.
From a 2017 episode of the Teaching in Higher Ed podcast.

There are many aspects to ethically responsible AI; just one of them is the bias we imbue these technologies with. It is an inescapable fact that we are all biased. When using powerful 4IR technologies in social contexts, we need to identify and understand our biases more clearly for better ethical outcomes. It is impossible to dissociate human-produced technologies such as AI from the inherent biases of their creators, deployers, and users. In many cases datasets have our bias baked in through data selection, data tagging, and contextual experiment design. Bias is just one example of the ethical problems that can arise when we implement AI; others include transparency, fairness, human dignity, weaponisation, inequities, and automated decision-making (to name just a few). In a recent speech, Australia’s outgoing Chief Scientist, Alan Finkel, noted that the most important question he sees in the issue of AI and society is: “What kind of society do we want to be?”
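To make the idea of bias "baked in" to data concrete, the minimal Python sketch below checks whether outcome labels in a dataset are distributed evenly across a demographic group before that data is used to train a model. The records, field names, and threshold are hypothetical and purely illustrative; this is not a method or dataset from this project.

```python
# Minimal sketch: surfacing a simple selection/label imbalance in a dataset
# before it is used to train a model. Records, field names, and the
# threshold are hypothetical and illustrative only.

from collections import defaultdict

records = [
    {"group": "A", "label": 1},
    {"group": "A", "label": 1},
    {"group": "A", "label": 0},
    {"group": "B", "label": 0},
    {"group": "B", "label": 0},
    {"group": "B", "label": 1},
]

counts = defaultdict(lambda: {"total": 0, "positive": 0})
for r in records:
    counts[r["group"]]["total"] += 1
    counts[r["group"]]["positive"] += r["label"]

rates = {g: c["positive"] / c["total"] for g, c in counts.items()}
print("Positive-label rate per group:", rates)

# A large gap between group rates is a prompt to ask whether the data
# selection or tagging process, rather than the world, produced the difference.
if max(rates.values()) - min(rates.values()) > 0.2:  # illustrative threshold
    print("Warning: label rates differ markedly across groups; review data selection and tagging.")
```

A check like this does not decide whether a dataset is fair, but it makes the researcher's own selection and tagging choices visible early, which is the kind of ethical reflexivity this project is concerned with.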


What is the Fourth Industrial Revolution?

In 2016 Klaus Schwab of the World Economic Forum declared that the world had entered the fourth industrial revolution (commonly abbreviated to 4IR). The first industrial revolution was characterised by the introduction of fossil fuels and the development of machinery to perform tasks that animals and humans used to do. The second was driven by the development of electricity and wired and wireless communications. The third saw the rise of the personal computer, digital communications, and the internet. Now, as we enter the fourth, we see highly networked technologies that are deeply integrated into our human social systems and even into our physical bodies. Just some of the technologies driving 4IR are artificial intelligence (AI), the Internet of Things (IoT), blockchain, gene editing, 3D printing, and smart factories.

Smart factories rely on many integrated 4IR technologies networked together.

As with previous industrial revolutions, the skills most sought after by employers are changing dramatically. Systems-thinking skills and learning-how-to-learn capabilities are crucial for adept navigation of a future in which the only certainty is uncertainty. Creativity, collaborative competence, and emotional intelligence are now amongst the attributes most desired by Industry 4.0 employers. Alongside these changing skills is a call for liberal capabilities, such as ethics, to be given more emphasis in preparing future leaders for 4IR.

4IR technologies, genome editing for example, have attracted much discussion about their ethical implications.

What are Sociotechnical Systems?

The ubiquity of new technologies in our social environments signals substantial changes in our sociotechnical systems. Sociotechnical systems (STS) theory examines how human and technology agents interact with each other and with societal and organisational structures. The field was initially developed by members of the Tavistock Institute in response to the social upheavals caused by the mechanisation of coal extraction. As we pass into 4IR, with its signature fusion of cyber-physical systems, there has been renewed interest in STS as a way to navigate the enormous changes we are facing.

Sociotechnical systems relate to the interaction of people and technology in organisational design.

Sociotechnical systems (STS) thinking has evolved over the years from organisational design for settings where a new technology is to be implemented, to using STS in the IT development of technologies (Davis et al., 2014). More recently, STS thinking has been applied to the ethics of AI; an excellent example is Selbst et al.’s work on fairness and abstraction in sociotechnical systems (2019). Recent work discusses how STS thinking must adapt to AI technologies: “What sets AI apart from traditional technologies is its capacity to autonomously interact with its environment and to adapt itself on the basis of such interactions” (Van de Poel, 2020). Using STS approaches to understand the ethics of AI enables us to map the relationships between agents (human and non-human) in a system and delve into how the agentic enactment of each drives change in the sociotechnical systems we live in.
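As a concrete illustration of what "mapping the relationships between agents" can mean, the short Python sketch below models a hypothetical research workplace as a small directed graph of human and non-human agents. The agent names and relations are invented for illustration and are not data or findings from this project.

```python
# Minimal sketch: representing a sociotechnical system as a directed graph
# of human and non-human agents. All agents and relations below are
# hypothetical examples.

agents = {
    "doctoral_researcher": "human",
    "supervisor": "human",
    "ai_screening_tool": "technology",
    "dataset": "technology",
    "ethics_committee": "organisation",
}

# (source, relation, target) triples describing agentic interactions
relations = [
    ("doctoral_researcher", "curates", "dataset"),
    ("ai_screening_tool", "learns_from", "dataset"),
    ("ai_screening_tool", "recommends_to", "supervisor"),
    ("supervisor", "advises", "doctoral_researcher"),
    ("ethics_committee", "reviews", "ai_screening_tool"),
]

def influences_on(agent):
    """Return the agents whose actions directly act on the given agent."""
    return [(src, rel) for src, rel, tgt in relations if tgt == agent]

# Tracing influences helps surface where a non-human agent shapes human decisions.
for src, rel in influences_on("supervisor"):
    print(f"{src} ({agents[src]}) --{rel}--> supervisor")
```

Even a toy map like this makes visible where technological agents sit in the chain of human decisions, which is the starting point for asking where ethical responsibility lies in the system.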


Why focus on doctoral education?

How universities should plan for 4IR shifts has been critically discussed across the literature (e.g. Gleason, 2018; Aoun, 2019; Molla & Cuthbert, 2019). Higher education institutions are where many doctoral students are incubating new ideas for 4IR, developing research outputs that may later be investigated with AI technologies, and establishing lifelong research qualities. Facilitating the acquisition of 4IR skills is an obvious requirement of our universities and a direction strongly supported by government. What may be more vital is developing an ethical understanding of 4IR and AI technologies in our future researchers.

Many doctoral graduates are likely to see their work interact with AI technologies either now or in the future. Having a better understanding of the ethics of AI would benefit many graduates in their future employment.

As many doctoral graduates go on to leadership roles in research-based jobs, an enhanced understanding of how their work intersects with 4IR technologies, such as AI, in an ethically responsible way is a valuable skill. This project addresses that need. It is expected that, by helping PhD students future-proof their work to be ethically sustainable should future technologies intersect with their research, they will also develop some of the most highly sought 4IR employability skills.

To create robust, ethically responsible AI-human systems in research workplaces, we should facilitate appropriate skill development in a wide range of doctoral candidates, not just computer science students. This leads to the question: how can we better prepare doctoral students with an understanding of how their research may be explored with ethically responsible AI technologies?


Research Questions

Question 1:

What are the most critical 4IR skills for doctoral students in the 2020s?

Anticipated Outcome: A defensible 4IR-ready graduate qualities list.

  • Sub question: Do university strategies adequately address 4IR-ready qualities in doctoral students? Anticipated Outcome: Clearly identified gaps.
  • Sub question: How has the COVID-19 pandemic affected the ability of doctoral students to develop additional 4IR skills? Anticipated Outcome: A better understanding of the impact of the pandemic environment on research students’ capacity to develop additional 4IR skills.

Question 2:

How can we better prepare doctoral students to future-proof their research for ethically responsible AI technologies?

Anticipated outcome: A better understanding of what methods in research education can help doctoral students future-proof their work and develop a vital 4IR skill.

  • Sub question: What educational interventions based on STS approaches to ethical AI can improve doctoral preparedness for 4IR? Anticipated Outcome: A tested understanding of paths universities may take to better equip doctoral graduates with ethically-aware 4IR skills.

Methods

This project is focused on sociotechnical approaches to ethically responsible AI in the fourth industrial revolution. The work is situated in the doctoral research education environment.

The impacts of the COVID-19 pandemic on doctoral students are periodically assessed to understand how such a significant disruptor is affecting their ability to prepare for 4IR work readiness.


Impact

The anticipated impact of this project is that it will contribute to an understanding of how universities can better equip graduates of research degrees to be successful and ethical leaders in the fourth industrial revolution. The end goal is to develop a tested STS approach that doctoral candidates can use to assess their work in the context of ethical AI, and to compare it with other current educational approaches to identify which may be most suitable for disciplinary uptake.


Primary project site

The University of Sydney, Australia

Bec Johnson is a doctoral student at the University of Sydney. Bec is supported by a strong team including her research supervisor, Prof. Ross Coleman, Director of Graduate Research; auxiliary supervisor Prof. Dean Rickles from the School of History and Philosophy of Science; and industry supervisor Dr Lucy Cameron from the Commonwealth Scientific and Industrial Research Organisation (CSIRO). More details on the Team page.

Bec on a scholarship visit to MIT.

The work is initially being conducted at the University of Sydney until travel opens up again. Additional universities in New Zealand, the UK, Canada, and possibly the USA will be selected for further data collection.


Preparing for uncertainty

The problem of how to prepare for the unprecedented changes of 4IR affects everyone, as well as the planet itself. Universities have an important role to play in addressing this challenge. In a world in which the only certainty is uncertainty, and where our poor ethical choices can be magnified at vast scales of social impact, our future PhD graduates should provide a valuable contribution to navigating the current revolution.

There are many uncertainties in our fourth industrial era; becoming comfortable with uncertainty is essential. Sociotechnical approaches help navigate system uncertainties.


Image Credits

Royalty-free stock photo ID: 605118809
Royalty-free stock photo ID: 1819281113
Royalty-free stock photo ID: 732027613
Royalty-free stock photo ID: 605118809
Royalty-free stock illustration ID: 550915870
Royalty-free stock photo ID: 1084798340
Royalty-free stock illustration ID: 1500955604