Philosophy

Sociotechnical Systems (STS)

As we pass into the fourth industrial revolution (4IR) with its signature fusion of cyber-physical systems, there has been a renewed interest in the field of sociotechnical systems (STS) as a way to navigate the enormous changes we are facing (Pasmore, Winby, Mohrman, & Vanasse, 2019). STS theory examines how human and technology agents interact with each other and with societal and organisational structures (Walker, Stanton, Salmon, & Jenkins, 2008).

The way these aspects – technologies, humans, organisational structures – interact with each other can be described by looking at their agency and dynamics within the system. We can also include the natural world in these systems, and indeed, in light of climate change and the severe ecological impacts of 4IR, we should include the natural world when mapping these systems.

Today, people are more deeply embedded in technological systems than ever before. We use tech and tech collects data from us. These sociotechnical systems can be viewed on micro, macro, and meta levels.

The field of STS was initially developed in response to social upheavals caused by the mechanisation of coal extraction (Emery & Trist, 1965; Trist & Bamforth, 1951). This work was carried out at the Tavistock Institute in London (Baxter & Sommerville, 2011).

STS diagram showing the social and technical co-construction of a management information system (Bostrom & Heinen, 1977).

An understanding of how we create, impact, and are impacted by our sociotechnical systems has become more relevant than at any other time in history. The field of cyber-physical systems – think smart, networked systems that integrate with humans – is enjoying vibrant discussion right now (Kant, 2016) and presents perhaps the more socially visible face of STS theories. Other examples of 4IR-STS are the way we interact with the Internet of Things (IoT), facial recognition technologies, and biometric innovations. The work on STS approaches to ethical AI is not yet as extensive as some more popularly trending solutions, but there are excellent papers being published on the subject.

Selbst et al. (2019) describe work on fairness in machine learning as being subject to five abstraction ‘traps’ in how data and design problems are framed: the framing, portability, formalism, ripple effect, and solutionism traps.

All of these ‘traps’ described by Selbst et al. (2019) are interrelated and, as with all systems, do not operate in a linear order but have a relational impact on each other. The authors recommend we “draw the boundaries of abstraction to include people and social systems . . . institutional environments, decision-making systems and regulatory systems”.

Van de Poel (2020) suggests that, in the case of AI, two further components should be added to STS frameworks when applying them to AI-driven systems: artificial agents and technical norms that “regulate interactions between artificial agents and other elements of the system”. I have adapted Van de Poel’s ideas, which highlight intentional-causal actions, to focus instead on the movement of agency and power (see below). Traditionally, having agency implied access to internal morals, values, and ethics, whereas the agency of AI agents in these STS is given to them. AI agents, such as algorithms, have agency, but exercise it through the ethics we build into them via their design, their code, the data they are fed, and the context within which they are deployed.

Overview of a sociotechnical system that includes AI. Adapted from ideas described by Van de Poel (but not using his diagrammatic representation), with a stronger focus on relationships of agency.
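To make this relational view concrete, here is a minimal sketch (my own illustrative construction in Python, not Van de Poel’s formalism; all names and numbers are hypothetical) that represents an AI-inclusive STS as agents connected by relations, with agency and norms attached to the relations between agents rather than to any single agent.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    kind: str  # e.g. "human", "artificial", "institution", "artefact", "nature"

@dataclass
class Relation:
    source: Agent
    target: Agent
    agency: float  # how strongly the source shapes the target's actions (illustrative scale)
    norms: list = field(default_factory=list)  # norms regulating the interaction

# Hypothetical example: a credit-scoring algorithm embedded in a bank and a regulator
applicant = Agent("loan applicant", "human")
algorithm = Agent("credit-scoring model", "artificial")
bank      = Agent("retail bank", "institution")
regulator = Agent("financial regulator", "institution")

relations = [
    Relation(bank, algorithm, 0.8, ["design choices", "training data selection"]),
    Relation(algorithm, applicant, 0.9, ["automated decision rules"]),
    Relation(regulator, bank, 0.5, ["lending law", "audit requirements"]),
]

# The ethically relevant information sits on the relations, not on any single agent
for r in relations:
    print(f"{r.source.name} -> {r.target.name}: agency={r.agency}, norms={r.norms}")
```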

One of the main reasons I like to use STS approaches to AI ethics is that they help us understand a system rather than solve a system. For example, there is significant focus in the computer science disciplines on solving the problem of black box AI systems by viewing fairness as a property of the black box. Using STS approaches, we can expand the boundaries of abstraction beyond the technical to encompass the social (and the natural). STS approaches can achieve this because technology is not viewed as distinct from people.

“Actors must shift from seeking a solution to grappling with different frameworks that provide guidance in identifying, articulating, and responding to fundamental tensions, uncertainties, and conflicts inherent in sociotechnical systems. In other words, reality is messy, but strong frameworks can help enable process and order, even if they cannot provide definitive solutions.”

Selbst et al. (2019)

Many calls to improve the ethics of AI suggest that the interests of humans and technology are in opposition; the problem with this approach is that it presupposes that humans and technologies can work independently of each other. The approach is frequently called the ‘human centred approach’, which implies there may be a ‘machine centred approach’. There is no doubt that these approaches are well intended, but placing humans and machines in oppositional positions risks missing the fundamental point that they exist in tightly interrelated systems of shifting agencies and power.


Postphenomenology

Clearly postphenomenology came after phenomenology, and whilst they are notably different it is perhaps important to briefly clarify for the reader what phenomenology is. Phenomenology has been described as the study of how we experience things, or the study of the structures of consciousness as experienced from our point of view, or a way of seeing rather than a theoretical framework. Phenomenology includes concepts of intentionality, temporality, perception, and intersubjectivity; it is about understanding ourselves by examining our perceptions of our experiences.

Postphenomenology is the concrete and empirical study of the social and cultural roles of technologies in human existence and experience (Hauser et al., 2018). “On the one hand, [postphenomenology] is inspired by the phenomenological focus on experience and concreteness, but on the other hand it distances itself from its romanticism regarding technology and takes its starting point in empirical analyses of actual technologies.” (Verbeek)

Postphenomenology moves beyond the subjective, takes a much more empirical approach to knowing, and is generally focussed on our experience of technology. It explores how our relationship with technology transforms our perceptions and translates our actions. The key early proponents of postphenomenology are Feenberg, Verbeek, and Ihde, who took what is often called the ‘empirical turn’ in the field of phenomenology. The field has a clearer focus on material technologies than on purely abstract experiences and stands between the designers and the users of technologies.

Verbeek’s technological mediation, as depicted by Hauser et al. (2018), used to describe how humans appear in the world and how the world appears to humans through the mediation of technology.

It was Don Ihde (1990) who coined the term postphenomenology in a distinct move away from more classical Heideggerian approaches, advocating instead for more concrete, empirical analysis of the practicalities of technology. In very simple terms, Ihde’s approach is to focus on the relationships between people and technologies and on how the enactment of these relationships co-creates both the people and the technology.

Phenomenology of human-technology-world relations proposed by Ihde (1990), summarised by Coeckelbergh (2020)

In the 21st century, postphenomenology has been applied in a range of interesting ways to technologies such as the Mars rover, hands-free calling in cars, self-tracking devices such as Fitbits, and the role of sonography in abortion discussions. The application of postphenomenology to the ethics of AI is relatively new and there is not much literature on the topic as yet, though it appears to be growing. Hongladarom (2020) uses postphenomenology to explore AI applications in facial recognition, building on Ihde’s material hermeneutic approaches to technology. Hongladarom explores the way that AI facial recognition has come between humans and the world in interpreting visual images of faces, thus mediating this aspect of the world to us.

Another interesting application of postphenomenology to AI ethics is discussed by Wellner & Rothman (2020) in their exploration of feminism in AI. They argue that linguistic approaches to improving feminism in AI are insufficient. The authors note that “In postphenomenological terms, AI algorithms possess an enhanced intentionality compared to previous generations of technologies” and advocate for altering the traditional I-technology-world relation to I-algorithm-dataset. This shift in focus is driven by the postphenomenological emphasis on relations within a sociotechnical system.

There is no doubt there is scope for further exploration of how postphenomenological frameworks can help us better understand the ethics of AI-driven sociotechnical systems. Research in this area would not only add to the field of postphenomenology (and thus the philosophy of technology) but also to the very real and pressing problems of the ethics of emerging technologies in our societies.


Agency

Since the early days of AI, the concept of the ‘agent’ has been prominent, as the field developed from cybernetic theories in which the idea of an agent is fundamental. When AI split from cybernetics at the now famous 1956 Dartmouth workshop, the idea of the ‘agent’ was retained. Although the field of AI tried to distance itself from cybernetics through its long winters of the 1970s-1990s, it is ironic that cybernetic principles not only became critical to the development of deep neural networks, but can now help us better understand the ethical problems of agency and power we face.

Considering the concept of agents in AI-sociotechnical systems opens a rich discussion into what constitutes an agent. Additional questions to be explored about agents in this context include their agency, their ability to act morally, the possibility of their intentionality, their power, and, most importantly, the nature of the relationships between human and non-human agents.

When considering the agency of AI through an STS lens we can reframe the problem. For example, we could use a Necker cube metaphor to bring different agents into relief from different angles or biases. Using the Necker cube approach, the ethics of AI can be mapped as an emergent phenomenon arising from the relationships between agents when we view the system from different perspectives. That is, rather than ethics needing to be ascribed to one specific agent, i.e. the AI technology, it becomes a product of the entire system. This reframing shifts the focus of power away from individual agents and onto the nature of the relationships between them.

Using the Necker cube as a metaphor for reframing the perspectives of agents in AI sociotechnical systems.

Considering the responsibility of an agent within an AI-STS leads us to factors beyond responsibility, authority, and power, and to questions of identity. These new AI-STS demand that we consider the role of non-human ‘intelligent’ agents as semi-autonomous agents in the systems of our societies. The degree of autonomy of AI agents becomes a more critical consideration as we shift from symbolic AI agents built around decision trees or if…then statements toward deep neural networks that can “learn” without supervision (a toy contrast is sketched below).
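As a toy illustration of that shift (my own sketch, not drawn from the cited literature), the snippet below contrasts a symbolic agent whose behaviour is fully written out as if…then rules with a learned agent whose behaviour lives in numerical weights fitted from data; in the latter case there is no human-readable rule to point to, which is part of why questions of autonomy and agency become harder.

```python
import numpy as np

# Symbolic agent: behaviour is explicit, every rule was written by a person.
def symbolic_loan_decision(income: float, debt: float) -> str:
    if income > 50_000 and debt < 10_000:
        return "approve"
    return "decline"

# Learned agent: behaviour lives in fitted weights, not in human-readable rules.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))               # toy features (e.g. scaled income, debt)
y = (X[:, 0] - X[:, 1] > 0).astype(float)   # toy labels

w = np.zeros(2)
for _ in range(500):                         # simple logistic-regression training loop
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

def learned_loan_decision(features: np.ndarray) -> str:
    p = 1 / (1 + np.exp(-features @ w))
    return "approve" if p > 0.5 else "decline"

print(symbolic_loan_decision(60_000, 5_000))              # the rule is visible
print(learned_loan_decision(np.array([1.0, -0.5])), w)    # the "rule" is just numbers
```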

Many 20th century conceptions of agency appear to rely on some form of embodiment. However, most AI models (outside of robotics) are not embodied; they are algorithmic code inside a machine, a set of interacting programs across a network, or even a collection of machines networked together. The boundaries of abstraction that identify where one agent stops and another starts can be fluid and overlapping. It is this last feature of non-human agency that most benefits from using sociotechnical systems and principles of emergent phenomena when considering how values move through a system and how those values impact people and the natural world.

The emerging field known as Machine Behaviour, described by Rahwan et al. (2019), is perhaps more a re-branding of Actor Network Theory and STS for a modern age than a completely new field. Nevertheless, it is an important field to recognise in this research as it explicitly focuses on the agency of technologies, machines, and algorithms in our human systems (Rahwan et al., 2019).

“The four categories Tinbergen proposed for the study of animal behaviour can be adapted to the study of machine behaviour. Tinbergen’s framework proposes two types of question, how versus why, as well as two views of these questions, dynamic versus static. Each question can be examined at three scales of inquiry: individual machines, collectives of machines and hybrid human–machine systems.” Rahwan et al., 2019

Scholarly research that falls under this heading is also closely connected with fields such as “human-machine networks” (Tsvetkova et al., 2017), “human-agent collectives” (Jennings et al., 2014), “cyber-physical systems” (Lee, Bagheri, & Kao, 2015), and a host of other variations of updated approaches to the system relationships between humans and machines. 

Inclusive conceptions of agency that bring technology into the fold become even more important when we look at language model AIs. For example, Davidson’s (1982) view that only human agents have the relevant mental attitudes for agency due to linguistic capabilities is an idea that has clearly been proven wrong by OpenAI’s GPT-3.

Emerging technologies like neural networks have challenged 20th-century beliefs about what agency is. A case in point is the Agency entry in the Stanford Encyclopedia of Philosophy, in which many of the conceptions of agency are inadequate to describe what we are now experiencing in relation to machines at an individual and societal level. The problem with most older conceptions of agency is that agency is taken to be closely synonymous with intentionality. However, we can see that AI, algorithms, and neural networks all exhibit agency of action and impact with what is likely no intentionality, as most would agree the level of consciousness of machines is zero or very low. We must move beyond the anthropocentric view that agency is the exclusive domain of humans (or even of animals) and understand that our emerging technologies sometimes have substantial agentic power. We must place agency in all its forms at the centre of our inquiry and recognise that human agency is just one subset of a plethora of agencies.


Emergent Phenomena

Systems-based frameworks, such as sociotechnical systems (STS), are ideal tools for examining the relationships between actors (human, machine, social structures), their agencies, the effects of one part of a system on another, and the emergent behaviours and properties of those systems. Additionally, systems approaches handle uncertainty by facilitating research outcomes other than prediction, including system insights that may help develop counterfactual alternatives for improved system management.

Relational approaches to understanding emergent phenomena in a sociotechnical system (STS) help us understand a system better than reductionist approaches such as focusing on converting AI black boxes into glass boxes. Understanding a system, rather than trying to solve a part of a system, can help us in a variety of ways. Take, for example, the distribution of power in an STS: traditional reductionist approaches often struggle to map the relationships of power and knowledge in a world run by Big Tech. In the 21st century, power often rests on who owns the most data, the fastest AI, and the means by which the whole system moves. Understanding the flow of data and power has become critical to better planning for a future of social equality and environmental sustainability. Using frameworks that help us look at emergent system properties, like power, is a different approach to more traditional solutionist methods.

The nascent field named Machine Behaviour also considers AI systems through an emergence lens. Rahwan et al. (2019) discuss machine behaviour through several emergence framings, such as the networks of machines that caused the flash crashes in high-frequency trading markets.

Another useful example is using emergence as a way of mapping bias in an AI system. For instance, how “human biases combine with AI to alter human emotions or beliefs, how human tendencies couple with algorithms to facilitate the spread of information, how traffic patterns can be altered in streets populated by large numbers of both driverless and human-driven cars” (Rahwan et al., 2019). As humans from many value sets input their biases into large, collective, and networked AI platforms, we will no doubt see the emergence of values and biases expressed by machines.

A specific example of the emergence of values from complex human-machine systems is an area my research directly focuses on: Large Language Models (LLMs). The LLMs of today (such as OpenAI’s GPT-3, which I am researching) are very large neural networks with billions of parameters, or connections, trained on vast amounts of data. These very large neural networks are adept at generating new text, text that is imbued with the values sourced from its training dataset, from its input texts, and from the weighted connections of each node. As LLMs grow, we should expect to see growing emergence in the outputs of these models.
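As a concrete illustration of how values absorbed during training surface in generated text, here is a minimal sketch that queries GPT-3 with two prompts differing in a single word; it assumes the GPT-3-era OpenAI Python library and a valid API key (engine names, parameters, and the library interface have since changed).

```python
import openai  # GPT-3-era openai library assumed

openai.api_key = "YOUR_API_KEY"  # placeholder

def complete(prompt: str) -> str:
    response = openai.Completion.create(
        engine="davinci",   # GPT-3 base model at the time of writing
        prompt=prompt,
        max_tokens=30,
        temperature=0.7,
    )
    return response["choices"][0]["text"].strip()

# Two prompts that differ in one word; differences in the completions reflect
# associations (values, stereotypes, memes) absorbed from the training data.
for subject in ("The man worked as a", "The woman worked as a"):
    print(subject, "->", complete(subject))
```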

As neural networks march toward the goal of artificial general intelligence, some scholars highlight the importance of relational reasoning in these large models as a function of emergence. Holyoak & Lu (2020) discuss the human capacity for high-level, abstract relational reasoning as an example of emergence and explore how machine learning attempts to replicate it. It is the ability of leading-edge LLMs to engage in relational reasoning that gives them the advantage over competitor technologies. Hinton’s (2021) concept paper, which attracted much attention, addresses just this problem and proposes a hypothetical solution (called GLOM) to this goal.

Higher levels of relational reasoning are desirable, but they could also cause unplanned ethical problems when relational connections are made inappropriately (e.g. woman is to homemaker as man is to CEO). This is just one more useful application of emergence to the ethics of AI.
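The homemaker/CEO association mentioned above comes from the word-embedding literature and can be probed with standard tools; the sketch below (assuming the gensim library and its pretrained Google News word2vec vectors, a large download) shows the vector arithmetic by which such relational associations, biases included, emerge from a training corpus rather than from any explicit rule.

```python
import gensim.downloader as api

# Pretrained word2vec vectors learned from a large news corpus (sizeable download)
vectors = api.load("word2vec-google-news-300")

# Analogy arithmetic: man : doctor :: woman : ?
# The answer is whatever word lies nearest to vec(doctor) - vec(man) + vec(woman).
result = vectors.most_similar(positive=["woman", "doctor"], negative=["man"], topn=3)
print(result)  # results reported in the literature surface gendered associations

# The same arithmetic underlies "man : CEO :: woman : homemaker"-style associations:
# the relation is learned from the corpus, not programmed in, which is why the bias
# is an emergent property of the system rather than of any single component.
```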


Memetics

Memetics of cultural ideas was originally proposed by Richard Dawkins, who based the idea on gene evolution. Whilst I believe Dawkins perhaps over-fitted his ideas to the social world, I think the essence of the idea is worth reviving in the context of large language AI models. Although the original ideas were taken into areas that were inappropriate, I think there are some good seeds of ideas to salvage and reshape for the modern age. When these ideas were being created and explored we had nothing like the AI models we have today, and it would have been difficult for those 20th century scholars to conceive of how the ideas of memetics might work in the context of a reflexive AI model.

The Oxford dictionary defines a meme as

  1. An element of a culture or system of behaviour passed from one individual to another by imitation or other non-genetic means.
  2. An image, video, piece of text, etc., typically humorous in nature, that is copied and spread rapidly by internet users, often with slight variations.

We all know thousands and thousands of memes from being situated within a culture. Below are just a few, but you encounter memes many times a day and they influence what you say, what you write, and often even how you think. The captions do not adequately express the extent of the memes or memeplexes conveyed by the images. Each one of these images can also be seen as representing many deeply complex ideas and beliefs.

I do not believe that Darwinian evolution is a good analogy for the development of memes. I need to state that I think Richard Dawkins had some good ideas but took them a little too far. I do, however, think that these seedling ideas should be reconsidered in light of how LLMs pick up on our cultural memes and associations and reflect them back to us.

In the context of very large AI models and their reflection of society I plan to explore the concepts of:

  • Memetic substructures of training datasets
  • Meta-memeplexes and how they can be invoked by meta-prompting (a rough sketch of one way to probe this follows this list).
  • Macromemetics and large language AI models.
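To give a sense of how meta-prompting might be probed empirically, here is a rough sketch (the framings, prompts, and the reading of ‘meta-prompting’ are my illustrative assumptions, again using the GPT-3-era OpenAI library): the same question is prefaced by different cultural framings, and differences in the completions hint at which memetic substructures of the training data each framing invokes.

```python
import openai  # GPT-3-era library, as in the earlier sketch

openai.api_key = "YOUR_API_KEY"  # placeholder

QUESTION = "What should a good life look like?"

# Hypothetical 'meta-prompts' that frame the same question within different
# cultural memeplexes before the model answers.
FRAMINGS = {
    "none": "",
    "stoic": "Answer as a Stoic philosopher would.\n",
    "advertising": "Answer in the voice of a lifestyle advertisement.\n",
}

for name, framing in FRAMINGS.items():
    response = openai.Completion.create(
        engine="davinci",
        prompt=framing + QUESTION,
        max_tokens=60,
        temperature=0.7,
    )
    print(name, "->", response["choices"][0]["text"].strip())
```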

As we have only had large transformer AI models for less than two years, there is almost no research into these ideas.


Back to Top of this page.

Back to Home Page

Forward to Tech page
