Chapter 6: Expressive Power of a Neo

6.1 Introduction

This chapter evaluates the representational and computational power of the Neo architecture. We show that fixed-structure Neos can emulate any deterministic or stochastic finite-state system, and that evolving Neos can realize any computable dynamical process.

6.2 Deterministic Expressivity

6.2.1 Threshold Logic as a Universal Boolean Substrate

Purpose: Relate Lex to classical threshold logic.

Expectation: Use known results to show that any Boolean function can be computed by a network of threshold units; a single unit captures only the linearly separable functions, so composition is essential.
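
A minimal sketch of the universality argument, assuming nothing about the actual Lex interface (the function names below are illustrative): NAND is a single threshold gate, and NAND alone generates all of Boolean logic, so XOR, which no single threshold unit can compute, falls out of four of them.

```python
# Illustrative only: threshold_unit is a stand-in, not the Lex API.

def threshold_unit(weights, bias, inputs):
    """Classical threshold gate: fire iff the weighted sum crosses zero."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s >= 0 else 0

# NAND is a single threshold unit, and NAND alone is Boolean-universal.
def nand(a, b):
    return threshold_unit([-1, -1], 1.5, [a, b])   # fires unless a = b = 1

# XOR, not linearly separable, built from four NANDs.
def xor(a, b):
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))   # prints the XOR truth table
```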

6.2.2 Finite Deterministic Dynamical Systems

Purpose: Connect recurrent threshold networks to dynamical system representation.

Expectation: Demonstrate that fixed-structure Neos can implement any finite deterministic transition function.
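A sketch of the construction, assuming a one-hot state encoding (the weights below are built by hand for illustration, not produced by the Neo machinery): next-state unit j simply fires when the current state is one that the transition function maps to j.

```python
# One step of a recurrent threshold layer implementing a deterministic
# finite transition function delta over one-hot states. Hand-built weights.

def step(state, delta, n_states):
    # Weight matrix W[j][i] = 1 iff delta(i) = j, with threshold 0.5.
    W = [[1 if delta[i] == j else 0 for i in range(n_states)]
         for j in range(n_states)]
    return [1 if sum(W[j][i] * state[i] for i in range(n_states)) >= 0.5 else 0
            for j in range(n_states)]

delta = {0: 1, 1: 2, 2: 0}           # a 3-state cycle
state = [1, 0, 0]                    # start in state 0, one-hot encoded
for _ in range(4):
    state = step(state, delta, 3)
    print(state)                     # walks 1 -> 2 -> 0 -> 1
```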

6.3 Stochastic Expressivity

6.3.1 Probabilistic Threshold Nodes

Purpose: Formalize Lex with stochastic input as a probabilistic threshold gate.

Expectation: Show how Bernoulli-driven updates produce stochastic transitions.
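A toy sketch of the gate (the parameter names are assumptions, not the Lex signature): feeding one Bernoulli-distributed input through a decisive weight makes the unit fire with exactly that input's probability.

```python
# Illustrative stochastic threshold unit; not the actual Lex update rule.
import random

def stochastic_unit(weights, bias, inputs, p_noise, noise_weight):
    noise = 1 if random.random() < p_noise else 0   # Bernoulli drive
    s = sum(w * x for w, x in zip(weights, inputs)) + noise_weight * noise + bias
    return 1 if s >= 0 else 0

# With deterministic input 0 and a decisive noise weight, the unit fires
# with probability exactly p_noise.
fires = sum(stochastic_unit([1.0], -0.5, [0], p_noise=0.3, noise_weight=1.0)
            for _ in range(100_000))
print(fires / 100_000)   # ~0.3
```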

6.3.2 Representation of Markov and Stochastic Automata

Purpose: Relate Neo networks to finite probabilistic state machines.

Expectation: Show that networks of probabilistic threshold nodes can implement any finite Markov chain or stochastic automaton.
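
A hand-built two-state illustration of the idea rather than the general construction: one Bernoulli coin per state carries that row of the transition matrix, and a threshold unit routes the active state's coin to the next state.

```python
# Two-state Markov chain from Bernoulli-driven threshold units. The wiring
# is an illustration, not the Neo construction itself.
import random

P = [[0.9, 0.1],    # row i: transition probabilities out of state i
     [0.4, 0.6]]

def chain_step(state):                       # state is one-hot: [s0, s1]
    coins = [1 if random.random() < P[i][1] else 0 for i in range(2)]
    # Unit "next state is 1" fires iff the active state's coin came up 1.
    s1 = 1 if state[0] * coins[0] + state[1] * coins[1] >= 1 else 0
    return [1 - s1, s1]

state, visits = [1, 0], [0, 0]
for _ in range(200_000):
    state = chain_step(state)
    visits[state.index(1)] += 1
print([v / sum(visits) for v in visits])     # ~[0.8, 0.2], the stationary law
```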

6.4 Computational Universality

6.4.1 Recurrent Threshold Networks as Universal Computers

Purpose: Connect Neo dynamics to known universality results.

Expectation: Cite Siegelmann and Sontag's result that recurrent networks with rational weights can simulate any Turing machine, implying that universality is attainable with a fixed topology; note that with strictly binary threshold states the same topology is finite-state, so unbounded precision or structure is required.
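
One concrete, fully discrete route to universality, sketched below with hand-chosen weights: rule 110, an elementary cellular automaton proved Turing complete by Cook (2004), is computable cell-by-cell with two threshold layers, so a recurrent threshold layer iterating it inherits universality given an unbounded row of cells.

```python
# Rule 110 from threshold units only; weights chosen by hand, illustrative.

def threshold(s):
    return 1 if s >= 0 else 0

def rule110(l, c, r):
    # Hidden unit: fires iff all three neighbours are on (an AND gate).
    h = threshold(l + c + r - 3)
    # Output: (c OR r) AND NOT h, as a single second-layer threshold unit.
    return threshold(c + r - 2 * h - 1)

row = [0] * 30 + [1]                 # a single live cell, periodic boundary
for _ in range(10):
    print("".join(".#"[x] for x in row))
    row = [rule110(row[i - 1], row[i], row[(i + 1) % len(row)])
           for i in range(len(row))]
```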

6.4.2 Evolving Structure and Open-Ended Growth

Purpose: Argue that mutation primitives allow construction of arbitrary computational graphs.

Expectation: Show that structural evolution enables Neos to approximate any computable function over time.
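
The core of the argument is purely graph-theoretic, sketched below with hypothetical primitive names (add_node and add_edge are stand-ins for whatever the mutation chapter defines): any finite directed graph is reachable in |V| + |E| mutations, after which parameter settings supply the node functions.

```python
# Illustrative only: reaching an arbitrary computational graph by a finite
# sequence of hypothetical "add node" / "add edge" mutations.

def build(target_edges):
    nodes, trace = set(), []
    for u, v in target_edges:
        for n in (u, v):
            if n not in nodes:
                nodes.add(n)
                trace.append(("add_node", n))
        trace.append(("add_edge", u, v))
    return trace

# A small computational graph, reached in |V| + |E| mutations.
for op in build([("a", "xor1"), ("b", "xor1"), ("xor1", "sum"),
                 ("cin", "sum"), ("xor1", "carry"), ("cin", "carry")]):
    print(op)
```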

6.5 Continuous Parameters and Decision Surfaces

6.5.1 Continuous Parameterization

Purpose: Highlight the role of real-valued parameters in sharpening representational capacity.

Expectation: Show that continuous weights and biases let a unit place its decision boundary with arbitrary precision.
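
A one-line illustration of the precision claim (the numbers are purely illustrative): a real-valued bias places the boundary of a single threshold unit exactly where it is needed, so two inputs separated by 1e-7 are still told apart.

```python
# Real-valued parameters give arbitrarily fine boundaries; illustrative.

def unit(w, b, x):
    return 1 if w * x - b >= 0 else 0

x0, x1 = 0.5000000, 0.5000001        # separated by 1e-7
b = (x0 + x1) / 2                    # put the boundary between them
print(unit(1.0, b, x0), unit(1.0, b, x1))   # 0 1
```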

6.5.2 Refinement Through Mutation and In-Life Learning

Purpose: Connect parameter evolution to increasing precision.

Expectation: Describe how parametric adjustments refine decision functions.
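
A minimal sketch of the refinement loop, assuming a simple mutate-and-keep-if-better scheme rather than the actual Neo learning rule: Gaussian perturbations of the bias, retained only when classification error does not increase, drift the boundary toward the target.

```python
# (1+1)-style parameter refinement of a threshold boundary; illustrative.
import random

target = 0.25                          # the boundary we want to express
xs = [random.random() for _ in range(200)]

def error(b):
    # Disagreements between boundary b and the target boundary.
    return sum((x >= b) != (x >= target) for x in xs)

b = 0.9                                # start far from the target
for _ in range(500):
    candidate = b + random.gauss(0, 0.05)   # Gaussian parameter mutation
    if error(candidate) <= error(b):        # keep only non-worsening moves
        b = candidate
print(round(b, 3))                     # settles near 0.25
```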

6.6 Partial Observability and Internal Memory

6.6.1 Perception via Projection

Purpose: Address the fact that Neos only observe projected world states.

Expectation: Explain how internal memory compensates for missing information.
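
A toy episode showing the compensation, with illustrative wiring only: the world reveals a cue at t = 0 and asks for it back at t = 5, so a memoryless policy is at chance while a single self-exciting threshold unit that latches the cue answers perfectly.

```python
# One recurrent bit bridging a gap the observation alone cannot.
import random

def episode(use_memory):
    cue = random.randint(0, 1)
    mem = 0
    for t in range(6):
        obs = cue if t == 0 else 0            # projection hides the cue later
        # Latch: fire if the cue is seen now or was latched before.
        mem = 1 if obs + mem >= 1 else 0
    action = mem if use_memory else obs        # memoryless sees only obs = 0
    return action == cue

for flag in (False, True):
    wins = sum(episode(flag) for _ in range(10_000))
    print(flag, wins / 10_000)   # ~0.5 without memory, 1.0 with it
```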

6.6.2 Representing Predictive and Latent-Variable Models

Purpose: Illustrate how Neos can learn internal structures needed for prediction.

Expectation: Show that recurrent binary states provide sufficient latent capacity.
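
A sketch of the capacity claim, as a stand-in for the Neo mechanism rather than a model of it: k recurrent bits implement a k-step shift register, i.e. 2^k latent contexts, which is exactly the state needed to predict any order-k binary source once each context has been observed.

```python
# k recurrent bits as a shift register predicting an order-k source.
import random

k = 3
rule = {ctx: random.randint(0, 1)               # hidden order-3 source
        for ctx in range(2 ** k)}

ctx = 0                                          # k bits packed in an int
table = {}                                       # learned context -> next bit
correct = total = 0
for t in range(5_000):
    nxt = rule[ctx]
    if ctx in table:                             # predict from latent state
        correct += (table[ctx] == nxt)
        total += 1
    table[ctx] = nxt
    ctx = ((ctx << 1) | nxt) & (2 ** k - 1)      # shift-register update
print(correct / total)                           # approaches 1.0 after warm-up
```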

6.7 Summary of Expressive Power

Purpose: Consolidate expressivity results.

Expectation: Conclude that Neos form an evolving probabilistic recurrent threshold architecture capable of universal deterministic and stochastic computation.
