Socio-Technical Systems (STS).
As we pass into 4IR with its signature fusion of cyber-physical systems, there has been a renewed interest in the field of STS as a way to navigate the enormous changes we are facing (Pasmore, Winby, Mohrman, & Vanasse, 2019). Sociotechnical systems (STS) theory examines how human and technology agents interact with each other and with societal and organisational structures (Walker, Stanton, Salmon, & Jenkins, 2008). The way these aspects – technologies, humans, organisational structures – interact with one another can be described by examining their agency within the system.
The field of STS was initially developed in response to social upheavals caused by the mechanisation of coal extraction (Emery & Trist, 1965; Trist & Bamforth, 1951). This foundational work originated at the Tavistock Institute in London (Baxter & Sommerville, 2011).
In 4IR, an understanding of how we create, impact, and are impacted by our sociotechnical systems has become more relevant than at any other time in history. The field of cyber-physical systems – smart, networked systems that integrate with humans – is enjoying vibrant discussion right now (Kant, 2016) and presents perhaps the most socially visible face of STS theory. Other examples of 4IR-STS include the ways we and our societies interact with the Internet of Things (IoT), facial recognition technologies, and biometric innovations.
The work on STS approaches to ethical AI is not yet as extensive as some more popularly trending solutions (as far as I can see right now, anyway), but there are some excellent papers being published on the subject. Selbst et al. (2019) identify five abstraction traps to which the system of deploying ethical AI is subject:
- The Framing Trap
- The Portability Trap
- The Formalism Trap
- The Ripple Effect Trap
- The Solutionism Trap
All of these ‘traps’ are interrelated; as in any sociotechnical system, they do not operate in a linear order but exert relational impact on one another. A quick highlight from the paper is ‘The Framing Trap’, which discusses the need to move beyond algorithmic and data framings to a social context frame. The authors recommend we “draw the boundaries of abstraction to include people and social systems . . . institutional environments, decision-making systems and regulatory systems” (p. 9). The framing trap is an obvious consideration, but one that is easily overlooked in a doctoral paradigm with an intensive focus on reductionist techniques.
Each of the traps described provides an entry point for deeper discussion of the need for STS approaches to improving the ethical quality of AI. Each could also be adopted into an adaptable framework for doctoral researchers to use as a tool when assessing the potential ethical impact of their work, should AI technologies be applied to it in the future.
What is not mentioned in the Selbst et al. paper is education. Whilst it may be implicit that education in these approaches would be useful, I believe that education could be more deeply integrated into the proposed framework. Learning how to learn about a system, and about one’s place and agency within that system, will itself have an impact on the system. To address this gap, I propose that an adapted framework of organisational learning, or n-loop learning, could be used to enhance 4IR skill development in this area.
The authors do discuss an external “Ripple Effect Trap”, but I think an internal ripple – the movement of agentic power – should also be explored. An internal ripple effect might describe how agents in the sociotechnical system do (or do not) develop an enhanced meta-awareness of the system and their relational place within it. Developing this kind of meta-awareness would depend on a degree of self-education, though that could be nested within graduate research education approaches using a cybernetic framework.