Ken’s Substack

Grounding Consciousness in Embodied, Adaptive Systems

Insights from a Provocative New Paper

This is another podcast generated by Google’s NotebookLM, this time from the paper "Why Is Anything Conscious?" by Bennett et al. When writing my book, Understanding Machine Understanding, I chose to set consciousness aside (though I did include the pro and con arguments in an appendix), taking the position that understanding, at least at some level, does not necessarily require consciousness. I am, however, starting to work on the subject of machine consciousness, including self-awareness, and came across this recent paper. It goes right down the path I had in mind, so discussing it is a good place to start.

I gave the paper to Claude 3 Opus, together with the text of UMU, and asked how the paper fits into the book’s discussion of understanding. Claude came back with:

The paper "Why Is Anything Conscious?" by Bennett et al. presents a theoretical framework for understanding the emergence of consciousness that aligns well with several key ideas explored in your book "Understanding Machine Understanding". Here are a few key points of connection:

1. Embodied and enactive cognition: Both works emphasize the importance of considering consciousness and understanding as arising through the embodied interactions of an organism with its environment. The paper grounds consciousness in the self-organizing, adaptive behavior of biological systems, while your book highlights the need to consider the "humble" lower bodily levels of information processing.

2. Hierarchical learning and representation: The paper describes a multilayered architecture of self-organization, where systems learn hierarchical causal models of themselves and the world. This resonates with the discussion in your book of how machine understanding may involve constructing hierarchical representations, from low-level sensorimotor interactions to higher-order models of self, others, and their interrelations.

3. Phenomenal vs access consciousness: The paper makes the provocative claim that phenomenal consciousness without access consciousness is likely very common in nature, but not the reverse. This connects to the distinction made in your book between phenomenal consciousness (the qualitative, subjective aspect) and access consciousness (the ability to report on and reason about mental contents), and the question of their relationship in machines.

4. Stages of consciousness: The paper outlines a framework for the developmental stages of consciousness as the capacity for learning and adaptation scales up. This is reminiscent of the discussion in your book of potential levels or types of machine consciousness, from basic sentience to self-awareness to theory of mind.

5. Formal models: Both works aim to provide formal mathematical models to ground the science of consciousness and machine understanding. The paper defines key concepts like self-organization, causal learning, and orders of selfhood in precise mathematical terms, similar to your book's goal of developing a rigorous evaluation framework for machine understanding.

While there are some differences in focus and approach, the consciousness framework presented in the paper harmonizes with, and provides additional theoretical support for, many of the key propositions and research directions laid out in your insightful book on the frontiers of machine understanding. The paper's emphasis on grounding consciousness in embodied, adaptive behavior and hierarchical causal learning aligns well with your call to take seriously the "lower levels" of information processing and the importance of interaction and self-modeling for human-like understanding in machines.
