Through a Dark Glass, Clearly: AI Ethics Through a Sociotechnical Lens
My work in tech ethics uses sociotechnical approaches to understand how we imbue emerging technologies, such as artificial intelligence (AI), with our own biases and values, and how those values are then amplified and reflected back at us. The field of emerging technologies is broad, as is AI; I focus on the sub-field of artificial neural networks. One specific type of neural network that I research is the Large Language Model (LLM), which uses deep learning to generate and categorise text.
Emerging technologies of the fourth industrial revolution, such as AI, will profoundly impact our social, economic, and environmental systems. As with earlier industrial revolutions, these new technologies are driving significant shifts: we have already seen tremendous gains as well as terrible (often unplanned) consequences. Many researchers and organisations are concerned with the ethical consequences of deploying these technologies. Whilst much important research is dedicated to algorithmic transparency (transmuting the black box into a glass box), accountability, and ethical AI frameworks, my work takes a different tack.
I use sociotechnical systems (STS) approaches to better understand the relationships between technologies, social structures, and the emergent phenomena that arise from those relationships. By moving the abstraction boundaries of the problem to encompass a broader range of actors and relationships, I consider cybernetic movements of agency and power within an AI-driven STS. By developing a better understanding of our AI-STS, I believe we can see these technologies' ethical consequences more clearly, rather than defaulting to solutionist approaches.
The primary philosophical approaches I employ include postphenomenology, theories of agency, and the study of emergent phenomena in large complex systems, and I combine these with empirical work designed around ethical and philosophical principles. OpenAI has granted me access to what is currently the world's largest LLM, GPT-3, launched in mid-2020. Using the concept of mantic mimesis (i.e., using neural nets as a mirror to ourselves), I am experimenting with how LLMs can help us better understand our own biases and values. Large neural nets magnify and reflect our values, perspectives, and decisions, at times refracting them to impact others in ways we had not considered. We must manage these emerging technologies carefully. We cannot be complacent about an AI black-box society, and we must make every effort to see through these dark glasses more clearly.