Your brain and predictive processing

Originally posted on LinkedIn in June 2023 in response to a post by Martin Ciupa.

Predictive processing is the idea that our brains are constantly predicting what is about to happen around us and using those predictions to construct models of reality. My colleagues in my research group at Uni Sydney (especially Dean Rickles and Jules Rankin) spend a lot of time thinking about this deeply, and I have learned most of my understanding of the framework from them. They are philosophers of physics, time, quantum gravity, perception, and related fields.

Listening to them, I have often thought about how much of what they describe fits with some of my deeper thoughts on generative AI. Sitting at a buzzy Sydney cafe on a Sunday morning, I am going to try to communicate some of these ideas through my thumbs on my cell phone, so please forgive typos and brevity. OK…

Point 1: I see GenAI as representing semantic hyperspaces in a sociotechnical system. Through training data, fine-tuning, prompt injection, and interpretation of outputs, we co-construct these spaces with machines. I see biases and preferences represented in these spaces as clumps, clusters, or basins of attraction.
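
To make the basin-of-attraction picture a little more concrete, here is a minimal sketch (mine, not anything from the original post): made-up 2-D points stand in for text embeddings in a latent semantic space, and scikit-learn's k-means recovers the clumps as cluster centres that nearby points get pulled towards.

```python
# Toy sketch only: hypothetical 2-D points standing in for text embeddings
# in a latent semantic space. Three blobs play the role of "preference"
# basins; k-means recovers their centres, which act like attractors.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

blob_a = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(50, 2))
blob_b = rng.normal(loc=[3.0, 3.0], scale=0.3, size=(50, 2))
blob_c = rng.normal(loc=[0.0, 3.0], scale=0.3, size=(50, 2))
embeddings = np.vstack([blob_a, blob_b, blob_c])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(embeddings)
print("cluster centres:\n", km.cluster_centers_)
print("first few assignments:", km.labels_[:10])
```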

Point 2: These are globules of potentiality that we nudge when we prompt. Prompt injection and model hypersensitivity to prompt permutation are not bugs but features of the system. Our interaction collapses some possibilities and strengthens others.

Point 3: Models lack agency (and curiosity) except for what we loan them. When we interact with them, it is similar to how observation collapses a quantum state (as in the double-slit experiment). Models are predicting, but they require some agentic force to do so.

Point 4: On a much more simplified level, machines are using their world view, the morphology of their latent semantic space, to predict. It is a world view we initially set up when we gave them our own: collectively, as a hive mind, through training data, and individually as annotators and prompters. They take that base morphology and try to build on it. Just like us, they are trying to predict “reality” and construct their world (see the toy sketch below).
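
To illustrate the "base morphology from training data" idea in the simplest possible terms, here is a toy bigram predictor, a hypothetical sketch rather than how any real LLM works: the transition counts are the model's entire world view, and prediction is just extending that learned structure one word at a time.

```python
# Toy sketch: a bigram "world view" learned from a tiny made-up corpus.
# The counts are the model's whole morphology; prediction just builds on it.
from collections import Counter, defaultdict

corpus = "i am sitting at a table . i am drinking a coffee . i am at a cafe ."
words = corpus.split()

# Count word -> next-word transitions (the learned base morphology).
transitions = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    transitions[current][following] += 1

def predict_next(word):
    """Return the most frequent next word seen after `word` in training."""
    counts = transitions.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("i"))   # -> "am" (seen three times)
print(predict_next("am"))  # -> "sitting" (ties broken by first occurrence)
```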

Point 5: There are many arguments for humans experiencing reality as a controlled hallucination. It seems reasonable to assume that models are replicating this function on a more simplified level. Everything is a hallucination; it is just that some things we tend to agree on more, i.e. basic objective things like the fact that I am sitting at a table, and some things we agree on at an intersubjective level, i.e. that I should pay for my coffee before I leave, because that is a kind of social contract we have about not doing a dine-and-dash.

Point 6: Intersubjective realities include values and morals. While some are mostly shared globally and many are shared at a community level, it is likely that no two people have exactly the same world-view model: there are as many latent semantic spaces as there are humans on the planet.

So maybe we can use this framework of predictive processing to reverse-engineer the world views, the ethical and moral models, that shape those latent spaces in models and in the broader sociotechnical systems of humans and machines: moral models that we agree on outside the machine, human to human, as the best world view for the use case and context in which a model is deployed.

Anyway, my coffee cup is empty now, so it is time to move through the world hallucinating one foot in front of the other till I reach the dog park.