Chapter 3: Neo's Learnability
This chapter analyzes a central question in Neosis: to what extent can a Neo, whose internal structure is a self-modifying binary graph driven by mutation and energy constraints, develop the ability to learn? Learning is not explicitly built into the Neo architecture. There is no predefined synaptic plasticity rule, no weight-update mechanism, and no gradient-based training. All changes to structure occur only through mutation. Despite this, Neos can acquire two distinct forms of learning. The first is In-Life learning, in which a Neo adjusts its behavior within a single lifetime through state dynamics and small parameter shifts generated by its own circuitry. The second is Lifetime learning, where structural mutations accumulate across long survival periods. We refer to the overall capacity for these behaviors as learnability. In this chapter, we formalize learnability, describe the mechanisms through which both In-Life and Lifetime learning arise, and explain why the Neo substrate is sufficient for the emergence of adaptive behavior.
3.1 Two Modes of Adaptation
A Neo adapts through:
Lifetime Learning (Slow). Structural mutations applied by Evo accumulate over extended survival, progressively reshaping the graph. This determines the long-term evolutionary trajectory of the Neo's computational capacities.
In-Life Learning (Fast). A Neo may adjust its parameters within a single lifetime through small, energy-limited updates computed by its own internal circuits. These parameter shifts do not change the Neo's structure, but they alter how that structure is used.
Learnability encompasses both modes, but In-Life learning is the primary source of rapid adaptation.
3.2 In-Life Learning in Neo
In-Life learning refers to parameter-level adjustments a Neo performs within its lifetime without altering its topology. It operates entirely inside Lio, independent of Evo, and relies only on signals physically available to each node: its own activation, its local inputs, and the global reward. In-Life learning introduces no new node types, no plasticity subgraphs, and no global optimization rule. Instead, each node applies a small local update that statistically reinforces behaviors correlated with positive reward.
3.2.1 Motivation
Digital organisms require a mechanism for within-lifetime adaptation that is (i) local, (ii) energy-limited, (iii) structure-invariant, and (iv) sufficiently stochastic to explore the behavioral space. Biological nervous systems satisfy these properties using simple reward-modulated Hebbian adjustments rather than gradient-based global supervision. Neo adopts the same principle: learning arises from local correlations between input, output, and reward, not from derivatives of a loss function.
To maintain exploration, Neo preserves a stochastic component inside Lex, the node-level transfer mechanism, rather than embedding noise directly into the learning rule. This separation keeps the learning dynamics simple while preserving rich stochastic behavior.
3.2.2 Local State and Required Inputs
Each node maintains two small internal memory elements:
Activation register $a_i$, stored as part of Lex's computation.
Hebbian register $h_i$, indicating whether the node's recent activity contributed positively, negatively, or not at all to the outcome.
Lio receives an extended external input vector
$$\tilde{x}(t) = \big[\, x(t) \;;\; r(t) \,\big],$$
where $r(t)$ is the externally supplied reward. The reward channel is broadcast to all nodes exactly like any other external signal. Each node also stores its incoming inputs for one tick to support local parameter updates.
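To make the per-node state concrete, the sketch below models the two registers and the one-tick input buffer in Python. This is a minimal illustration, not part of the Neo specification: the names NodeState, activation, hebbian, and last_input, and the use of a flat NumPy vector for the parameters, are all assumptions.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class NodeState:
    """Per-node memory assumed by In-Life learning (illustrative names)."""
    w: np.ndarray                          # local parameter vector w_i
    activation: float = 0.0                # activation register a_i, written by Lex
    hebbian: int = 0                       # Hebbian register h_i in {-1, 0, +1}
    last_input: np.ndarray | None = None   # incoming inputs buffered for one tick

def extend_input(x: np.ndarray, reward: float) -> np.ndarray:
    """Append the broadcast reward channel r(t) to the external input vector."""
    return np.concatenate([x, [reward]])
```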
3.2.3 Node-Level In-Life Learning Rule
In-Life learning is expressed as a local update to the parameters of node $i$. After each prediction and reward cycle, the node updates its Hebbian register:
$$
h_i(t) =
\begin{cases}
+1 & \text{if } a_i(t) \neq 0 \text{ and } r(t) > 0,\\
-1 & \text{if } a_i(t) \neq 0 \text{ and } r(t) < 0,\\
\;\;0 & \text{otherwise.}
\end{cases}
$$
If the node was active during a successful tick, it is marked as helpful; if it was active during an unsuccessful tick, it is marked as harmful; otherwise it is neutral for that tick.
Given this flag, the node updates its parameters:
$$w_i \leftarrow w_i + \eta \, h_i(t) \, x_i(t),$$
where $x_i(t)$ is the input vector received by node $i$ at tick $t$, and $\eta$ is a small global In-Life learning constant. The associated metabolic cost is proportional to the magnitude of the update,
$$\Delta E_i = c_{\text{learn}} \, \big\lVert \eta \, h_i(t) \, x_i(t) \big\rVert_1 .$$
All computation is local; no global gradient information or structural change is involved.
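The following is a minimal sketch of one prediction–reward cycle for a single node, continuing the NodeState sketch above. It transcribes the three-way Hebbian flag and the parameter shift directly; the concrete values of ETA and C_LEARN, and the use of an exact nonzero-activation test for "active", are assumptions.

```python
ETA = 1e-3       # small global In-Life learning constant eta (assumed value)
C_LEARN = 0.1    # energy charged per unit of parameter change (assumed value)

def in_life_update(node: NodeState, reward: float) -> float:
    """Apply the local Hebbian update for one tick; return the metabolic cost."""
    if node.last_input is None:
        return 0.0
    # 1. Set the Hebbian register from local activity and the global reward.
    active = node.activation != 0.0
    if active and reward > 0:
        node.hebbian = +1    # helpful this tick
    elif active and reward < 0:
        node.hebbian = -1    # harmful this tick
    else:
        node.hebbian = 0     # neutral this tick
    # 2. Local parameter shift: no gradients, no structural change.
    delta_w = ETA * node.hebbian * node.last_input
    node.w = node.w + delta_w
    # 3. Metabolic cost proportional to the size of the update.
    return C_LEARN * float(np.abs(delta_w).sum())
```

Note that the function touches only the node's own registers, its buffered input, and the broadcast reward, matching the locality requirement above.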
3.2.4 Stochasticity and Lex's Role
Lex preserves the stochastic component originally introduced into node outputs:
$$a_i(t) = f\!\big(w_i^{\top} x_i(t)\big) + \xi_i(t),$$
where $f$ is Lex's transfer function and $\xi_i(t)$ is zero-mean noise. This is the only source of intentional noise in In-Life learning. Learning itself remains deterministic given the stochastic outputs, so updates inherit randomness through $\xi_i(t)$.
Exploration arises because node outputs are stochastic; repeated reward-correlated co-activations produce consistent parameter drift, while uncorrelated fluctuations average out. This yields a statistical direction of improvement without requiring explicit noise in the update rule itself.
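The sketch below, continuing the ones above, shows where the noise enters. Lex's actual transfer rule is defined elsewhere in the specification; here tanh stands in for it, and SIGMA, the Gaussian noise model, and the seeded generator are illustrative assumptions.

```python
SIGMA = 0.05                      # noise scale inside Lex (assumed value)
rng = np.random.default_rng(0)    # seeded for reproducibility

def lex_output(node: NodeState, x: np.ndarray, reward: float) -> float:
    """Compute the node's stochastic output and buffer its inputs for learning."""
    x_ext = extend_input(x, reward)   # the reward channel rides along as input
    node.last_input = x_ext           # stored for one tick (Section 3.2.2)
    pre = float(node.w @ x_ext)       # local weighted sum
    noise = rng.normal(0.0, SIGMA)    # the only intentional randomness, xi_i(t)
    node.activation = float(np.tanh(pre) + noise)  # tanh stands in for Lex's rule
    return node.activation
```

A tick then consists of producing an output and, once the reward is known, applying the deterministic update:

```python
node = NodeState(w=np.zeros(4))   # three external inputs + one reward channel
y = lex_output(node, np.array([0.2, -0.1, 0.5]), reward=0.0)
cost = in_life_update(node, reward=1.0)   # reinforce the tick's co-activation
```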
3.3 Relationship Between Learnability and Survival
In-Life learning enables a Neo to adjust its behavior quickly in response to environmental changes. If these adjustments increase expected energy,
$$\mathbb{E}\!\left[E \mid \text{In-Life learning}\right] \;>\; \mathbb{E}\!\left[E \mid \text{no In-Life learning}\right],$$
then evolutionary pressure will favor structures that support effective local Hebbian updates and stable parameter drift.
Lifetime learning determines what kinds of In-Life learning a Neo can express through its evolved topology, while In-Life learning directly shapes immediate survival within the NeoVerse.
3.4 Summary
A Neo is learnable when a fixed structure supports behavior that adapts to experience. This adaptation arises either through internal dynamical responses or through local Hebbian parameter updates driven by reward correlation. Lifetime learning shapes structure slowly via mutation, while In-Life learning provides rapid, flexible adjustment during a single lifetime through local reward-modulated updates.