Paper on Loops in AI and Consciousness
Title: A Characterisation of processing loops in AI and biological neural networks and its implications for understanding Consciousness
Shows that any sufficiently advanced processing system requires loopy processing, which requires regulation, which requires a self-model, and that this structure can lead to self-referential conclusions about an agent's own mental faculties and their involvement in its own agency.
Any computational system is limited in the complexity that it can handle within a single computational step. For embodied agents, this appears as a limit on the environmental complexity that they can sufficiently model and respond to within a single "time step" (citation needed). For more complex problems, multiple steps of processing are required in order to determine the next physical action. Such multiple processing steps may entail, for example, further analysis of the environment in order to better model its state; or it may entail action planning over multiple iterations.
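As a concrete illustration, the following Python sketch shows an agent whose internal processing is capped per time step, so a decision too complex for a single step is spread across several, with a "wait" (no-op) action emitted while deliberation continues. The toy search task, the budget of 3 units per step, and all names are illustrative assumptions, not from the paper.

```python
# Hypothetical sketch: bounded per-step compute forces multi-step deliberation.

from typing import Optional, List, Tuple

def step(observation: int,
         candidates: List[int],
         best: Optional[int],
         budget: int = 3) -> Tuple[str, List[int], Optional[int]]:
    """Spend at most `budget` units of internal processing in this time step."""
    for _ in range(budget):
        if not candidates:
            return f"act({best})", candidates, best       # decision reached: physical action
        c = candidates.pop()                              # one unit of internal work:
        if best is None or abs(c - observation) < abs(best - observation):
            best = c                                      #   evaluate one candidate plan
    return "wait", candidates, best                       # defer; continue next time step

# Usage: under a small per-step budget, the same decision takes several time steps.
candidates, best, action, steps = list(range(10)), None, "wait", 0
while action == "wait":
    action, candidates, best = step(observation=4, candidates=candidates, best=best)
    steps += 1
print(action, "after", steps, "time steps")   # -> act(4) after 4 time steps
```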
In biology, this provides scope for evolutionary pressures to trade off between a more energy hungry complex brain and a simpler less energy intensive one that takes longer to make some decisions.
An agent that regulates its environment operates within a system described in Fig 1. The environment state S_env changes with some ambient dynamics D_env(t), and the agent performs action A_env against the environment in order to regulate it towards some target. The environment state outcome O_env is influenced by both D_env(t) and A_env.
Fig 1:
S_env + D_env(t) + A_env = O_env
According to the good regulator theorem, if the agent is to regulate the environment state it must be a model of that system (Conant & Ashby, 1970). Furthermore, the efficiency with which the agent regulates its environment depends on the accuracy with which it models the system: errors in the model result in errors in the regulation of the system. For learning agents, those errors must be used to adjust the model. An agent that must regulate its external environment therefore requires the tuple <S_env, A_env, O_env> in order to adjust its model.
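The following Python sketch is a hypothetical illustration of this regulation-and-learning loop: the agent holds a model of how its action affects the environment, uses it to choose A_env, and adjusts the model from each <S_env, A_env, O_env> tuple it observes. The linear dynamics, the gains and the normalised update rule are assumptions made for the example, not part of the paper.

```python
# Hypothetical sketch: regulate S_env towards a target while learning the model
# of the environment from observed <S_env, A_env, O_env> tuples.

import random

target = 10.0        # the state the agent tries to regulate the environment towards
gain_true = 2.0      # real (unknown) effect of one unit of action on the environment
gain_model = 0.5     # the agent's initial, inaccurate model of that effect
s_env = 0.0

for t in range(50):
    d_env = random.gauss(0.0, 0.1)                        # ambient dynamics D_env(t)
    a_env = (target - s_env) / gain_model                 # choose A_env using the model
    o_env = s_env + d_env + gain_true * a_env             # O_env = S_env + D_env(t) + effect of A_env

    predicted = s_env + gain_model * a_env                # what the agent's model expected
    error = o_env - predicted                             # model error appears as regulation error
    gain_model += 0.5 * error * a_env / (a_env**2 + 1.0)  # adjust model from <S_env, A_env, O_env>

    s_env = o_env

print(round(s_env, 2), round(gain_model, 2))              # state near target, model near the true gain
```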
An embodied agent with complex actions requires an additional level of regulation. Not only must it regulate its external environment, it must also regulate its own physical state. This includes both maintaining homeostasis and controlling action. Such an agent thus operates in a system that additionally has body state S_body with ambient dynamics D_body(t). The agent performs action A_body against its body, producing outcome O_body. Summarised as follows:
S_body + D_body(t) + A_body = O_body
The agent's body actions are performed in order to regulate the body towards some target, which is dynamically inferred from its requirements for body homeostasis and for environment action A_env. As with regulation of the environment, the agent requires the tuple <S_body, A_body, O_body> in order to train its model.
Agents that incorporate multi-step processing have an additional kind of action: one that changes their internal data state without affecting their physical state. Importantly, as such non-physical actions may not elicit any change to S_body or S_env, this system also requires regulation. Thus the agent has non-physical state S_mind with ambient dynamics D_mind(t), and performs action A_mind producing outcome O_mind, summarised as follows:
S_mind + D_mind(t) + A_mind = O_mind
The agent's non-physical actions are performed in order to regulate towards some target, which is dynamically inferred from its requirements for environment action A_env, body action A_body, and possibly some form of non-physical homeostasis. As with both the environment and body, the agent must train its model for regulation from the tuple <S_mind, A_mind, O_mind>.
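A purely structural Python sketch of these three regulation levels follows. The proportional-control rule, the way targets are wired together, and the ordering of the levels are illustrative assumptions; the point is only that each level regulates its own state towards a dynamically inferred target and records its own <S, A, O> tuples for model adjustment.

```python
# Hypothetical structural sketch of the three regulation levels (environment, body, mind).

from dataclasses import dataclass, field

@dataclass
class Regulator:
    name: str
    state: float = 0.0
    experience: list = field(default_factory=list)   # <S, A, O> tuples for model learning

    def regulate(self, target: float) -> float:
        s = self.state
        action = 0.5 * (target - s)          # act towards target (simple proportional rule)
        outcome = s + action                 # here: no ambient dynamics D(t), perfect model
        self.experience.append((s, action, outcome))
        self.state = outcome
        return action

env, body, mind = Regulator("environment"), Regulator("body"), Regulator("mind")

env_target = 10.0
for t in range(5):
    a_env = env.regulate(env_target)
    body_target = 1.0 + abs(a_env)           # inferred from homeostasis needs plus A_env
    a_body = body.regulate(body_target)
    mind_target = abs(a_env) + abs(a_body)   # inferred from demands of A_env and A_body
    a_mind = mind.regulate(mind_target)

print(len(env.experience), len(body.experience), len(mind.experience))  # 5 tuples per level
```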
By way of example, consider the case of fluent aphasia, caused by damage to Wernicke's area of the brain. Individuals with fluent aphasia can easily produce speech, but it is typically full of meaningless words and often unnecessarily long-winded. Wernicke's area is associated with language comprehension and, as such, provides a corrective mechanism during speech production in a neurotypical individual (Wernicke's area, n.d.).
This paper introduces the concept of a visceral loop as a characterisation of processing within a looping biological or AI agent. The visceral loop is so named because it refers to an agent concluding that it experiences consciousness "in a visceral way". It identifies the three iterations of a processing loop that, at a minimum, are required for an agent to reach such a conclusion.
Consider the following sequence of internal mental observations:
- "What's that red blob in the tree? Oh, it's an apple".
- "Oh, I those thoughts just came from my mind, and not from the outside world".
- "That's what consciousness is. I am conscious".
Those three observations are produced by (at least) three iterations of a higher-order processing system. However, important distinctions can be drawn between the kinds of data represented as input and result within each of those loop iterations.
The visceral loop characterises those three observations as follows.
Iteration 1:
We assume that the agent (biological or otherwise) has some a priori knowledge of the concept of consciousness, but has never previously analysed itself in that respect.
The first step in the thought sequence above is characterised as Iteration 1, whereby the agent produces an inference that is non-self-referential in terms of its mind schema. In that example the agent draws an inference about the observed red blob being an apple.
As stated in the introductory section, agents with sufficiently complex self-modelling requirements must have direct observation of their own sequence of thought actions. Thus, the Iteration 1 inference becomes available for subsequent processing.
Iteration 2:
During Iteration 2, the agent makes an inference about a prior Iteration 1 mental action, and this inference draws reference to its mind schema. The second step in the thought sequence above is an example of this, whereby the agent realises that it is aware of its own thoughts.
The agent has multiple sense inputs, most of which observe either the physical environment in which it exists, or observe its physical body. The agent's ability to observe its own non-physical actions counts as an additional sense input. During iteration 2, the agent explores its memory of its prior non-physical action, and produces an inference about that action; specifically, i) that the action was non-physical, and ii) that it was sourced from within the agent's own processing capabilities.
The result of Iteration 2 is a relationship between a simple Iteration 1 thought and the agent's mind schema.
Iteration 3:
During Iteration 3, that relationship becomes the input data that is further processed in relation to the mind schema. The result is an inferred self-referential relationship about its own mind schema.
In the third observation in the example above, the agent draws upon its memory of its immediately prior thought, and upon its a priori knowledge about the concept of consciousness. Its immediately prior thought was a relationship between a simple thought and its own mind schema. Its a priori knowledge of consciousness is effectively a set of beliefs about mind schemas in general. The conclusion it draws is a statement of belief about its own mind schema.
A formal definition of the visceral loop shall now be presented.
Let:
- X be the agent's set of beliefs about the external world
- B be the agent's set of beliefs about its own physical body
- M be the agent's set of beliefs about minds and its own mind
- f(..) be the function executed by the agent on the specified inputs in order to draw inferences
- x_i be an inference that results from the execution of f (it may be any output conclusion, decision, action, or intermediate logical step)
Iteration 1:
Given s, some sense input or past state, Iteration 1 inferences are of the following form:
- inference x_1: f(s, X ∪ B) -> x_1
Iteration 2:
- prerequisite: x_1 is present
- prerequisite: x_1 is sourced from 'I' (as indicated through sense labelling)
- prerequisite: ∃ memory of producing x_1 in past thought
- prerequisite: x_1 is selected as focus of attention for processing, i.e. producing: i) fact of presence of x_1, and ii) relationship of x_1 to mind schema M
- inference x_2: f(x_1, M) -> relationship(x_1 -> M)
Iteration 3:
- prerequisite: ∃ some a priori belief about consciousness or experience
- prerequisite: model contains i) fact of presence of x_1, and ii) relationship of x_1 to M
- x_1 and its relationship to M is selected as focus of attention for processing, producing: "I am conscious of t"
- inference x_3: f(x_2, M) = f(relationship(x_1 -> M), M) -> relationship(M -> M)
Formally, the three iterations of the visceral loop can be represented using a mathematical notation that highlights the inputs to the function, and its result:
- Iteration 1: f(inputs) -> x - some result of simple thought
- Iteration 2: f(x, mind-schema) -> relationship(x : mind-schema)
- Iteration 3: f(relationship(x : mind-schema), mind-schema) -> relationship(mind-schema : mind-schema)
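The following Python sketch renders this notation as a stub: the inference function f is a placeholder that returns only the form of each iteration's result, and the example thought is taken from the sequence above. It is a sketch of the notation, not an implementation of the agent's actual processing.

```python
# Hypothetical sketch of the three visceral-loop iterations in the notation above,
# using tagged tuples for inferences and relationships.

MIND_SCHEMA = "M"            # the agent's beliefs about minds and its own mind
WORLD_AND_BODY = "X ∪ B"     # the agent's beliefs about the external world and its body

def f(inp, beliefs):
    """Stub inference function: returns only the *form* of each iteration's result."""
    if beliefs != MIND_SCHEMA:
        return ("x", inp)                         # Iteration 1: f(s, X ∪ B) -> x_1
    if isinstance(inp, tuple) and inp[0] == "relationship":
        return ("relationship", "M", "M")         # Iteration 3: conclusion about own mind schema
    return ("relationship", inp, "M")             # Iteration 2: relationship(x_1 -> M)

x1 = f("red blob in tree", WORLD_AND_BODY)        # simple, non-self-referential inference
x2 = f(x1, MIND_SCHEMA)                           # relates x_1 to the mind schema
x3 = f(x2, MIND_SCHEMA)                           # self-referential: relationship(M -> M)
print(x1, x2, x3, sep="\n")
```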
tbd: argue for hierarchical architecture.
tbd: argue why needing a mental-schema also mandates needing direct awareness of thought.
tbd
If we regard phenomenal experience as a feeling, and recognise that feelings are just additional data inputs produced through heuristic predictions, then we see that a simulation of a visceral loop, which we intuit to have no phenomenal experience, is computationally indistinguishable from a biological visceral loop that we know to be accompanied by such phenomena.
The visceral loop explains why fMRI studies have shown that we become aware of a decision only after it is made: it takes extra processing cycles to consciously consider the fact that the decision has been made. In short, we can only think about one thing at a time, so the decision itself and the thought about the decision require separate steps.
The visceral loop can be used to explain how someone concludes themselves as conscious. It can also be used to classify the kinds of thought that occur within an agent, and the kinds of thought that it's possible for an agent to have. For example, it may be the case that simpler organisms only ever operate with Iteration 1 thought.
tbd
Conant, R. C., and Ashby, W. R. (1970). Every good regulator of a system must be a model of that system. Int. J. Systems Sci., vol. 1, no. 2, pp. 89-97. https://doi.org/10.1080/00207727008920220.
Proske, U., and Gandevia, S. C. (2012). The Proprioceptive Senses: Their Roles in Signaling Body Shape, Body Position and Movement, and Muscle Force. Physiological Reviews, vol. 92, no. 4, pp. 1651-1697. https://doi.org/10.1152/physrev.00048.2011.
Wernicke's area. (n.d.). In Wikipedia. https://en.wikipedia.org/wiki/Wernicke%27s_area.
tbd: also needed for corollary discharge?
For example, many current artificial reinforcement learning (RL) agents continually choose actions at a fixed rate of one (discrete or continuous) action per time step.
Copyright © 2023 Malcolm Lett - Licensed under GPL 3.0
Contact: my.name at gmail