Title: A Characterisation of processing loops in AI and biological neural networks and its implications for understanding Consciousness
tbd
Any computational system is limited in the complexity that it can handle within a single computational step. For embodied agents, this appears as a limit on the environmental complexity that they can sufficiently model and respond to within a single "time step" (citation needed). For more complex problems, multiple processing steps are required in order to determine the next physical action. Such multiple processing steps may entail, for example, further analysis of the environment in order to better model its state, or action planning over multiple iterations.
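As an illustration of this trade-off, the following sketch (all names and the complexity measure are hypothetical, not drawn from any cited work) shows an agent whose per-time-step computation is bounded: simple observations yield an action immediately, while more complex ones consume several time steps of internal analysis before a physical action is emitted.

```python
class BoundedAgent:
    def __init__(self, budget_per_step=1):
        self.budget_per_step = budget_per_step  # computation allowed per time step
        self.pending_work = 0                   # remaining internal analysis/planning

    def step(self, observation_complexity: int) -> str:
        """Called once per environment time step; returns a physical action."""
        if self.pending_work == 0:
            # Work needed scales with the complexity of the observation.
            self.pending_work = observation_complexity
        # Spend this step's bounded computational budget.
        self.pending_work -= min(self.budget_per_step, self.pending_work)
        if self.pending_work > 0:
            return "hold"   # decision not yet ready: take a conservative default action
        return "act"        # decision reached within this step

agent = BoundedAgent()
print([agent.step(observation_complexity=3) for _ in range(3)])  # ['hold', 'hold', 'act']
```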
In biology, this provides an option for evolutionary pressures to trade off between a more complex brain and a simpler one that takes longer to make decisions.
According to the good regulator theorem (citation needed), and as the most capable AI models today demonstrate (citations needed), an agent operating within a complex environment must model that environment. Additionally, if an embodied agent has complex actions, it must model its body (citation needed).
Agents that incorporate multi-step processing have a second kind of action: one that changes their internal state only, without affecting their physical state. In humans, we call this "thought". Agents with such non-physical actions must thus additionally model their non-physical selves.
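This distinction can be made concrete with a minimal sketch (the structure and all names are illustrative assumptions): physical actions are emitted to the environment, whereas thought actions modify only the agent's internal state, which is why the agent's self-model must extend beyond its body.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    world_model: dict = field(default_factory=dict)  # model of the environment
    body_model: dict = field(default_factory=dict)   # model of the physical self
    mind_model: dict = field(default_factory=dict)   # model of the non-physical self
    workspace: list = field(default_factory=list)    # mutable internal state

    def act(self, command: str) -> str:
        # A physical action: visible to, and affecting, the environment.
        return command

    def think(self, inference: str) -> None:
        # A thought action: changes internal state only; nothing external moves.
        self.workspace.append(inference)

agent = Agent()
agent.think("that red blob might be an apple")  # internal-only action
agent.act("reach towards the tree")             # externally visible action
```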
tbd: argue why needing a mental-schema also mandates needing direct awareness of thought.
tbd: introduce body schema and mind schema.
tbd: argue for hierarchical architecture.
This paper introduces the concept of a visceral loop as a characterisation of processing within a looping AI or biological system. The visceral loop is so named because it refers to an agent concluding that it experiences consciousness "in a visceral way". It identifies the minimum of three processing-loop iterations required for an agent to reach such a conclusion.
Consider the following sequence of internal mental observations:
- "What's that red blob in the tree? Oh, it's an apple".
- "Oh, those thoughts just came from my mind, and not from the outside world".
- "That's what consciousness is. I am conscious".
We shall now break that sequence down.
Iteration 1:
At the beginning of the first step (or iteration) in the sequence above, we assume that the agent (biological or otherwise) has some a priori knowledge of the concept of consciousness, but has never previously bothered to analyse itself in that respect.
During the first iteration, the agent's processing capabilities produce a non-self-referential inference; in the example above, the inference that the observed red blob is an apple.
As stated within the introductory section, agents with sufficiently complex self-modelling requirements must have direct observation of their own sequence of thought actions. Thus, the act of making the iteration 1 inference becomes available for subsequent processing.
Iteration 2:
During iteration 2, the agent makes a self-referential inference about its prior mental action.
The agent has multiple sense inputs, most of which observe either the physical environment in which it exists or its physical body. The agent's ability to observe its own non-physical actions counts as an additional sense input. During iteration 2, the agent explores its memory of its prior action and produces an inference about that action: specifically, that the action was non-physical and sourced from within the agent's own processing capabilities. In other words, the agent attributes the action to its own mind, as represented by its mind schema.
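A toy sketch of this iteration 2 attribution (the channel-tagging scheme is an assumption introduced purely for illustration): the agent's memory records each event alongside the sense channel it arrived on, and the thought trace is treated as just another channel.

```python
def attribute_source(event: dict) -> str:
    """Attribute a remembered event to environment, body, or mind."""
    channel = event["channel"]
    if channel == "thought-trace":
        return "mind"            # non-physical, internally generated
    if channel in ("proprioception", "touch"):
        return "body"
    return "environment"         # e.g. vision, hearing

prior_action = {"channel": "thought-trace", "content": "it's an apple"}
print(attribute_source(prior_action))  # -> mind: the iteration 2 inference
```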
Iteration 3:
During iteration 3, the agent makes a further inference about the iteration 2 inference: it relates its newly observed capacity for internally sourced thought back to its a priori concept of consciousness, and concludes "That's what consciousness is. I am conscious".
Formally, the three iterations of the visceral loop can be represented using a mathematical notation that highlights the inputs to each application of the agent's processing function, f, and its result:
- Iteration 1: f(inputs) -> x, some result of simple, non-self-referential thought
- Iteration 2: f(x, mind-schema) -> relationship(x : mind-schema)
- Iteration 3: f(relationship(x : mind-schema), mind-schema) -> relationship(mind-schema : mind-schema)
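The notation transcribes almost directly into runnable form. The following sketch (the tuple representations are illustrative assumptions) applies the same processing function f three times, each time feeding the previous result back in alongside the mind schema:

```python
MIND_SCHEMA = "mind-schema"

def f(*inputs):
    """One processing-loop iteration: reduces its inputs to a new inference."""
    if len(inputs) == 1:
        return ("thought", inputs[0])          # iteration 1: simple thought x
    prior, schema = inputs
    if prior[0] == "relationship":
        # Iteration 3: relating an inference about the mind back to the mind
        # itself yields the self-referential conclusion ("I am conscious").
        return ("relationship", schema, schema)
    return ("relationship", prior, schema)     # iteration 2: x sourced from mind

x = f("red blob is an apple")     # -> ('thought', 'red blob is an apple')
r = f(x, MIND_SCHEMA)             # -> relationship(x : mind-schema)
c = f(r, MIND_SCHEMA)             # -> relationship(mind-schema : mind-schema)
print(c)                          # ('relationship', 'mind-schema', 'mind-schema')
```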
tbd
If we regard phenomenal experience as a feeling, and recognise that feelings are just additional data inputs produced through heuristic predictions, then a simulation of a visceral loop, which we intuit to have no phenomenal experience, is computationally indistinguishable from a biological visceral loop that we know to be accompanied by such phenomena.
The visceral loop explains why fMRI studies have shown that we become aware of a decision only after it has been made (citation needed): it takes extra processing cycles to consciously consider the fact that the decision occurred. In short, we can only think about one thing at a time, so the decision itself and the thought about the decision require separate steps.
The visceral loop can be used to explain how someone concludes that they are conscious. It can also be used to classify the kinds of thought that occur within an agent, and the kinds of thought that it is possible for an agent to have. For example, it may be the case that simpler organisms only ever operate with Iteration 1 thought.
tbd
For example, many current artificial reinforcement learning (RL) agents continually choose actions at a fixed rate of one (discrete or continuous) action per time step.
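For concreteness, here is the shape of such a fixed-rate loop (the environment is a stub written for this sketch; real frameworks such as Gymnasium follow an analogous reset/step contract). Exactly one action is emitted per time step, with no provision for extra internal-only processing steps:

```python
import random

class StubEnv:
    """Placeholder environment with a reset/step interface."""
    def reset(self) -> float:
        return 0.0                              # initial observation
    def step(self, action: int):
        obs = random.random()                   # next observation
        reward = 1.0 if action == 1 else 0.0
        done = obs > 0.95                       # episode ends eventually
        return obs, reward, done

env = StubEnv()
obs, done = env.reset(), False
while not done:
    action = 1 if obs > 0.5 else 0              # policy: exactly one action per step
    obs, reward, done = env.step(action)
```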
Copyright © 2023 Malcolm Lett - Licensed under GPL 3.0
Contact: my.name at gmail