Chapter 5: Micro Motifs
5.1 Scope and Objectives
This chapter analyzes the behavior of an individual Neo at the smallest structural scales. We focus on nodes, edges, Lex, stochasticity, energy transitions, mutation effects, and In-Life learning rules, independent of population-level dynamics.
5.2 Minimal Neo Structures
Before analyzing general Neo behavior or introducing simulation-based and macro-level analytical tools, it is essential to understand the smallest possible Neo configurations in full mathematical detail. These minimal structures allow us to introduce the exact formal treatment used throughout this chapter—state evolution, transition matrices, and stationary distributions—while also revealing the fundamental building blocks of Neo dynamics. Even the simplest Neos already display the core mechanisms of computation, stochasticity, memory, and stability that will later reappear in more complex systems.
The goal of this section is twofold. First, we provide concrete numerical examples that make the Lex update rule, noise-driven stochasticity, and state transitions completely explicit. Second, by deriving transition matrices and stationary distributions for each small case, we establish the analytical vocabulary that will be necessary for larger-scale reasoning. These elementary analyses form the conceptual bridge between the micro-level Neo defined earlier and the simulation- or macro-level treatments that follow in later chapters.
A Neo, being a finite-state stochastic dynamical system, evolves according to a Markov process once the input is fixed. At any time step, the internal state determines the distribution of the next state. For small Neos, this process can be expressed exactly through a transition matrix, which is a table containing all probabilities of moving from any current state to any future state in one time step. Formally, if the Neo has $K$ possible internal configurations, the transition matrix is a $K \times K$ matrix with entries
$$P_{ij} = \Pr\big(S_{t+1} = j \mid S_t = i\big),$$
where $S_t$ denotes the internal state at time $t$. The transition matrix completely determines the dynamics of the Neo under fixed external input and fixed parameters.
Once we have the transition matrix, we can study the stationary distribution, which is the long-run probability with which the Neo occupies each state. A stationary distribution is any probability vector $\pi$ satisfying
$$\pi P = \pi, \qquad \sum_i \pi_i = 1.$$
This equation expresses the idea that the distribution does not change over time when the system is already in its long-run regime. If the Markov chain is irreducible (all states eventually communicate) and aperiodic, the stationary distribution is unique and describes the asymptotic behavior of the Neo independent of its initial condition. In simple configurations, the stationary distribution reveals whether the Neo stabilizes on a particular state, oscillates between several states, or continuously explores the entire state space.
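As a computational companion to these definitions, the following minimal Python sketch approximates a stationary distribution by power iteration; the toy two-state matrix is purely illustrative and is not part of any Neo specification.

```python
import numpy as np

def stationary_distribution(P, tol=1e-12, max_iter=10_000):
    """Approximate a stationary distribution of a row-stochastic matrix P
    by repeatedly applying pi <- pi @ P (power iteration)."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)          # start from the uniform distribution
    for _ in range(max_iter):
        nxt = pi @ P
        if np.max(np.abs(nxt - pi)) < tol:
            break
        pi = nxt
    return pi

# Toy 2-state chain: from state 0 move to 1 with prob 0.3, from 1 to 0 with prob 0.1.
P = np.array([[0.7, 0.3],
              [0.1, 0.9]])
print(stationary_distribution(P))     # -> approx [0.25, 0.75]
```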
With these foundations, the next subsection analyzes a two-node recurrent Neo structure in detail, including a numerical example, the complete derivation of transition probabilities, the full transition matrix, and the stationary distribution. Additional minimal Neo cases (zero-node, single-node, and other two-node topologies) are provided in Appendix A.
5.2.1 Two-Node Recurrent Neo
We now analyze a minimal recurrent Neo consisting of two binary nodes that feed back into each other while both receive the same external input bit. This gives the simplest nontrivial closed micro-dynamics that can already exhibit stability, oscillation, and noise-driven switching, and it will later serve as a canonical building block for larger Neos. The construction here is consistent with the general Neo definition given in Chapter 2 of the main document.
At tick $t$, the internal state of this two-node Neo is the binary pair
$$S_t = \big(V_1(t),\ V_2(t)\big) \in \{0, 1\}^2,$$
and the external percept is a single bit
$$u_t \in \{0, 1\}.$$
Node 1 receives $u_t$ and the current state of Node 2, while Node 2 receives $u_t$ and the current state of Node 1. The resulting graph is a mutual-feedback pair, $u_t \to V_1 \leftrightarrow V_2 \leftarrow u_t$.
Node 1's local input vector is
$$x^{(1)}_t = \big(u_t,\ V_2(t)\big),$$
and Node 2's local input vector is
$$x^{(2)}_t = \big(u_t,\ V_1(t)\big).$$
Each node follows the same stochastic threshold rule used in the general Neo model. For node $i$, with weight vector $w^{(i)}$, bias $b_i$, and noise scale $\sigma_i$, we define the deterministic pre-activation
$$z_i(t) = w^{(i)} \cdot x^{(i)}_t + b_i,$$
and the noisy activation
$$a_i(t) = z_i(t) + \sigma_i\,\xi_i(t),$$
where $\xi_i(t) \in \{-1, +1\}$ is a symmetric Bernoulli noise term. The updated node state is
$$V_i(t+1) = H\big(a_i(t)\big),$$
with $H$ the Heaviside step ($H(a) = 1$ for $a \ge 0$ and $H(a) = 0$ otherwise). In the simplified piecewise-probabilistic form we use here, the noise is summarized by a "soft band" of half-width $\sigma_i$ around zero: when $z_i(t)$ is outside this band the node behaves almost deterministically, and when $z_i(t)$ lies inside it, the node fires with probability one half. Concretely, the firing probabilities are
$$\Pr\big(V_i(t+1) = 1\big) = \begin{cases} 1, & z_i(t) \ge \sigma_i, \\ \tfrac12, & -\sigma_i \le z_i(t) < \sigma_i, \\ 0, & z_i(t) < -\sigma_i, \end{cases}$$
where $z_1$ and $z_2$ are evaluated at the corresponding inputs.
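The soft-band rule is easy to implement directly. The sketch below (illustrative Python; the weights, bias, and noise scale are arbitrary values chosen only for the demonstration) samples one node update from the piecewise firing probabilities above.

```python
import numpy as np

rng = np.random.default_rng(0)

def fire_probability(z, sigma):
    """Firing probability under the piecewise-probabilistic ('soft band') rule:
    deterministic outside the band of half-width sigma, a fair coin inside it."""
    if z >= sigma:
        return 1.0
    if z < -sigma:
        return 0.0
    return 0.5

def update_node(w, x, b, sigma):
    """One Lex-style update: pre-activation z = w.x + b, then stochastic thresholding."""
    z = float(np.dot(w, x)) + b
    return int(rng.random() < fire_probability(z, sigma))

# Example: a node whose pre-activation sits inside the noise band fires about half the time.
w, b, sigma = np.array([1.0, -1.0]), 0.0, 0.5
samples = [update_node(w, np.array([1.0, 1.0]), b, sigma) for _ in range(1000)]
print(sum(samples) / len(samples))   # ~0.5
```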
5.2.1.1 Dynamics with Concrete Parameters
To make these abstract rules concrete, we now fix explicit weight vectors $w^{(1)}, w^{(2)}$, biases $b_1, b_2$, and noise scales $\sigma_1, \sigma_2$, and examine how the two-node Neo behaves step by step.
Recall that each node's inputs are ordered as (external input, feedback from the other node), so
$$x^{(1)}_t = \big(u_t,\ V_2(t)\big), \qquad x^{(2)}_t = \big(u_t,\ V_1(t)\big).$$
We consider a fixed external input $u_t = u$ and an initial internal state $S_0 = \big(V_1(0), V_2(0)\big)$.
For this tick, suppose the noise samples are $\xi_1$ and $\xi_2$. Node 1 computes its pre-activation
$$z_1 = w^{(1)} \cdot \big(u,\ V_2(0)\big) + b_1,$$
and then the noisy activation
$$a_1 = z_1 + \sigma_1\,\xi_1.$$
Since $a_1 \ge 0$ for the chosen values (the positive noise sample pushes the activation over the threshold), Node 1 switches to the ON state,
$$V_1(1) = 1.$$
Node 2, with the same input but a different feedback term, computes
$$z_2 = w^{(2)} \cdot \big(u,\ V_1(0)\big) + b_2,$$
and
$$a_2 = z_2 + \sigma_2\,\xi_2.$$
Again $a_2 \ge 0$, so Node 2 is also ON:
$$V_2(1) = 1.$$
In this example the feedback and shared input quickly drive the system to the joint ON state
$$S_1 = (1, 1),$$
with randomness playing a decisive role only for Node 1 through the positive noise spike. As we now show, once the parameters are fixed we can summarize all such updates by a four-state Markov chain and compute its stationary distribution.
5.2.1.2 Computing Transition Probabilities
We now fix the external input permanently to a constant value $u$ for all $t$. Under this assumption, the system becomes a homogeneous Markov chain on the four states
$$\{00,\ 01,\ 10,\ 11\},$$
which we abbreviate by writing $V_1$ first and $V_2$ second. For any current state $s = (V_1, V_2)$ we set
$$x^{(1)} = (u,\ V_2), \qquad x^{(2)} = (u,\ V_1),$$
and compute the deterministic parts
$$z_1(s) = w^{(1)} \cdot x^{(1)} + b_1, \qquad z_2(s) = w^{(2)} \cdot x^{(2)} + b_2.$$
For our chosen parameters, $z_2(s)$ lies at or above the noise band for every state $s$, while $z_1(s)$ lies below the band when $V_2 = 0$ and inside the band when $V_2 = 1$.
With the soft-band rule, writing $p_1(s) = \Pr(V_1' = 1 \mid s)$ and $p_2(s) = \Pr(V_2' = 1 \mid s)$, we see that
$$p_2(s) = 1 \ \text{for all } s, \qquad p_1(s) = \begin{cases} 0, & V_2 = 0, \\ \tfrac12, & V_2 = 1. \end{cases}$$
Thus, for this specific two-node Neo under constant input, Node 2 is effectively deterministic and always fires, while Node 1 is deterministic (it stays OFF) when $V_2 = 0$ and probabilistic when $V_2 = 1$.
Given $p_1(s)$ and $p_2(s)$, the joint transition probability factorizes because the local noises are independent:
$$\Pr\big(S_{t+1} = (v_1, v_2) \mid S_t = s\big) = \Pr\big(V_1' = v_1 \mid s\big)\,\Pr\big(V_2' = v_2 \mid s\big).$$
Using the usual Bernoulli splitting, this expands to
$$\Pr\big(S_{t+1} = (v_1, v_2) \mid S_t = s\big) = p_1(s)^{v_1}\big(1 - p_1(s)\big)^{1 - v_1}\; p_2(s)^{v_2}\big(1 - p_2(s)\big)^{1 - v_2}.$$
With the numeric values for $p_1(s)$ and $p_2(s)$, we can now write the full transition matrix.
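The same factorization can be carried out mechanically. The following sketch (illustrative Python) assembles the full 4×4 matrix from the node-wise probabilities $p_1(s)$ and $p_2(s)$ read off above; it reproduces the matrix written out in the next subsection.

```python
import numpy as np

# Node-wise firing probabilities p1(s), p2(s) for each current state s = (V1, V2),
# as read off from the analysis above for the chosen parameters and constant input.
states = [(0, 0), (0, 1), (1, 0), (1, 1)]
p1 = {(0, 0): 0.0, (0, 1): 0.5, (1, 0): 0.0, (1, 1): 0.5}   # Node 1
p2 = {s: 1.0 for s in states}                                 # Node 2 always fires

# Joint transition probabilities factorize because the local noises are independent.
P = np.zeros((4, 4))
for i, s in enumerate(states):
    for j, (v1, v2) in enumerate(states):
        P[i, j] = (p1[s] if v1 else 1 - p1[s]) * (p2[s] if v2 else 1 - p2[s])

print(P)                 # matches the matrix derived in the next subsection
print(P.sum(axis=1))     # each row sums to 1
```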
5.2.1.3 The 4×4 Transition Matrix
We order the states as $00, 01, 10, 11$. For each row, we plug the appropriate $p_1(s)$ and $p_2(s)$ into the expressions above.
From 00 we have $p_1(00) = 0$, $p_2(00) = 1$, thus $V_1' = 0$ and $V_2' = 1$ with certainty. The next state is deterministically $01$:
$$\Pr(00 \to 01) = 1.$$
From 01 we have $p_1(01) = \tfrac12$, $p_2(01) = 1$, so $V_1'$ is a fair coin and $V_2' = 1$. Hence
$$\Pr(01 \to 01) = \tfrac12, \qquad \Pr(01 \to 11) = \tfrac12,$$
and transitions to $00$ and $10$ have probability zero.
From 10 we have $p_1(10) = 0$, $p_2(10) = 1$, hence $V_1' = 0$ and $V_2' = 1$. Again the next state is deterministically $01$:
$$\Pr(10 \to 01) = 1,$$
all other outcomes zero.
From 11 we have $p_1(11) = \tfrac12$, $p_2(11) = 1$, so $V_1'$ is a fair coin and $V_2' = 1$. This is analogous to the $01$ case:
$$\Pr(11 \to 01) = \tfrac12, \qquad \Pr(11 \to 11) = \tfrac12,$$
with all other transitions zero.
Collecting these results, the transition matrix in the state order $(00, 01, 10, 11)$ is
$$P = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & \tfrac12 & 0 & \tfrac12 \\ 0 & 1 & 0 & 0 \\ 0 & \tfrac12 & 0 & \tfrac12 \end{pmatrix}.$$
States $00$ and $10$ are transient: both flow deterministically into $01$ and can never be revisited. The long-run behavior is confined to the two-state subsystem $\{01, 11\}$.
5.2.1.4 Stationary Distribution for Fixed Input
The stationary distribution for this Markov chain satisfies
$$\pi P = \pi,$$
where $\pi = \big(\pi_{00},\ \pi_{01},\ \pi_{10},\ \pi_{11}\big)$ describes the long-run equilibrium. Because the first and third columns of $P$ are identically zero, any stationary distribution must have
$$\pi_{00} = \pi_{10} = 0.$$
Hence the stationary mass lives entirely on $01$ and $11$. Let
$$\pi_{01} = p, \qquad \pi_{11} = q, \qquad p + q = 1.$$
We only need to enforce the stationarity condition on these two states. For state $11$ we have
$$q = \tfrac12\,p + \tfrac12\,q = \tfrac12(p + q) = \tfrac12.$$
This immediately gives $q = \tfrac12$. The probability of state $01$ then follows from normalization:
$$p = 1 - q = \tfrac12.$$
Thus the unique stationary distribution for this two-node Neo under constant input is
$$\pi = \big(0,\ \tfrac12,\ 0,\ \tfrac12\big)$$
in the order $(00, 01, 10, 11)$.
In other words, in the long run the Neo spends half of its time in the partially active configuration $01$ and half of its time fully active in $11$. The OFF–OFF state $00$ and the mixed state $10$ are only visited transiently, on the way into this two-state attractor. Even this minimal recurrent Neo therefore exhibits a nontrivial equilibrium structure: sustained stochastic switching between a "semi-on" and a "fully-on" configuration, shaped jointly by feedback and local noise. This equilibrium behavior will later connect directly to macro-level notions such as stationary distributions over larger Neo populations and the emergence of stable computational motifs.
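A quick simulation confirms this equilibrium. The sketch below (illustrative Python) runs the four-state chain from the OFF–OFF state and compares the empirical occupancy of each state with the stationary distribution $(0, \tfrac12, 0, \tfrac12)$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Transition matrix of the two-node recurrent Neo (state order 00, 01, 10, 11).
P = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.5]])

# Simulate a long trajectory and measure how often each state is visited.
state, counts, T = 0, np.zeros(4), 100_000
for _ in range(T):
    state = rng.choice(4, p=P[state])
    counts[state] += 1

print(counts / T)   # approx [0, 0.5, 0, 0.5], matching the stationary distribution
```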
Although this explicit transition-matrix analysis is useful for understanding the micro-dynamics of very small Neos, it rapidly becomes impractical for larger topologies. A Neo with only ten binary nodes already has a state space of size $2^{10} = 1024$, and the full transition matrix has $2^{10} \times 2^{10} \approx 10^{6}$ entries; every additional node doubles the state space, so exact enumeration quickly becomes infeasible. In fact, determining stationary distributions or attractors in general recurrent Boolean systems is NP-hard, even without stochasticity. When noise, weighted inputs, and structural asymmetries are included—as they necessarily are in realistic Neos—the combinatorial explosion becomes even more severe. For this reason, while the analysis above is valuable for intuition, we cannot rely on brute-force enumeration for larger structures. In later sections, we therefore shift to more scalable analytical tools such as probabilistic Boolean networks, mean-field approximations, and Ising-model-based energy formulations. These methods allow us to characterize stability, entropy flow, and emergent motifs in larger Neos without computing full exponential-scale transition dynamics.
5.2.2 Scalable Analytical Tools for Larger Neos
This section introduces three analytical frameworks—Probabilistic Boolean Networks, Mean-Field Analysis, and Ising-Model Energy Formulations—that allow us to study Neo dynamics beyond the regime of exact enumeration.
Probabilistic Boolean Networks (PBNs)
A Neo with binary nodes and stochastic activations can be naturally viewed as a probabilistic Boolean network [@shmulevich2002probabilistic]. In the PBN framework, each node updates according to a Boolean function that is chosen probabilistically from a finite family. In Neosis, the stochasticity arises not from switching among Boolean functions but from noise inside each node's activation rule. Nevertheless, the effective update rule can be written in the PBN form
$$\Pr\big(V_i(t+1) = 1 \mid S_t = s,\ u_t = u\big) = p_i(s, u),$$
where the function
$$p_i : \{0,1\}^n \times \{0,1\} \to [0,1]$$
plays the role of a probabilistic Boolean function. For a threshold node with noise amplitude $\sigma_i$, we can express this as
$$p_i(s, u) = \Pr\big(z_i(s, u) + \sigma_i\,\xi_i \ge 0\big),$$
where
$$z_i(s, u) = w^{(i)} \cdot x^{(i)}(s, u) + b_i$$
is the deterministic pre-activation of node $i$ given its local inputs.
In larger Neos, this allows us to treat the global dynamics as a Markov chain specified by the collection of local probabilistic rules $\{p_i\}$. Transition probabilities remain exponentially large in principle, but PBN theory provides tools for analyzing long-term behavior through structure-based reductions, such as dependency graphs and influence measures [@shmulevich2002gene]. These allow us to identify stable motifs, absorbing sets, and highly influential nodes without enumerating the entire state space. This will be essential when analyzing medium-sized Neos (5–20 nodes), where exact methods are impossible but local dependency patterns still convey meaningful structure.
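As an illustration of the PBN viewpoint, the sketch below builds the global $2^n \times 2^n$ Markov matrix from an arbitrary collection of local rules $p_i$; the particular local rule used here is a stand-in for demonstration, not the Neosis Lex rule.

```python
import numpy as np
from itertools import product

def global_transition_matrix(local_rules, n):
    """Build the 2^n x 2^n Markov matrix of a PBN-style system from local rules.
    local_rules(s) must return a length-n vector of firing probabilities p_i(s)."""
    states = list(product([0, 1], repeat=n))
    P = np.zeros((2**n, 2**n))
    for i, s in enumerate(states):
        p = local_rules(s)
        for j, s_next in enumerate(states):
            prob = 1.0
            for v, pv in zip(s_next, p):
                prob *= pv if v else 1 - pv
            P[i, j] = prob
    return P

# Illustrative 3-node example: each node fires with probability that grows with
# the number of currently active nodes (a stand-in for a threshold rule).
def local_rules(s):
    total = sum(s)
    return np.clip(0.2 + 0.3 * total, 0.0, 1.0) * np.ones(len(s))

P = global_transition_matrix(local_rules, 3)
print(P.shape, P.sum(axis=1))   # (8, 8), each row sums to 1
```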
Mean-Field Analysis (MFA)
For even larger Neos, local dependency structures are insufficient, and we instead approximate node variables as weakly correlated random variables [@opper2001advanced]. The mean-field approach replaces the exact binary node values by their expectations. Let
$$m_i(t) = \Pr\big(V_i(t) = 1\big)$$
denote the probability that node $i$ is ON at tick $t$. Under mean-field assumptions, the expected update satisfies
$$m_i(t+1) \approx p_i\big(m(t), u_t\big),$$
where the input vector to $p_i$ is replaced by the vector of marginal activation probabilities $m(t) = \big(m_1(t), \dots, m_n(t)\big)$. For a weighted threshold node, this approximation yields
$$m_i(t+1) \approx \Phi_{\sigma_i}\!\big(w^{(i)} \cdot m^{(i)}(t) + b_i\big),$$
where $\Phi_\sigma$ is a smoothed Heaviside-like transfer function induced by the noise and $m^{(i)}(t)$ collects the marginals of node $i$'s inputs. In the Neosis case with uniform Bernoulli noise, $\Phi_\sigma$ reduces to a clipped linear segment:
$$\Phi_\sigma(z) = \min\!\Big(1,\ \max\!\Big(0,\ \frac{z + \sigma}{2\sigma}\Big)\Big).$$
The mean-field map
$$m(t+1) = F\big(m(t)\big) = \Big(\Phi_{\sigma_1}\big(z_1(m(t))\big), \dots, \Phi_{\sigma_n}\big(z_n(m(t))\big)\Big)$$
defines a low-dimensional dynamical system on $[0,1]^n$. Fixed points of this map approximate stationary distributions of the original high-dimensional Neo. Stability of these fixed points provides insight into whether the Neo sustains persistent activation, converges to quiescence, or supports multiple attractors. While mean-field approximations ignore correlations between nodes, they provide tractable approximations for networks with tens or even hundreds of nodes, especially when connections are dense or exhibit weak pairwise correlations.
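The mean-field map takes only a few lines of code. The sketch below iterates $m \mapsto \Phi_\sigma(Wm + b)$ to a fixed point; the weight matrix, biases, and noise scale are assumptions chosen only to make the example run.

```python
import numpy as np

def phi(z, sigma):
    """Clipped-linear transfer induced by noise spread over [-sigma, sigma]."""
    return np.clip((z + sigma) / (2.0 * sigma), 0.0, 1.0)

def mean_field_fixed_point(W, b, sigma, m0=None, iters=500):
    """Iterate the mean-field map m <- phi(W m + b) until it settles."""
    n = W.shape[0]
    m = np.full(n, 0.5) if m0 is None else np.asarray(m0, dtype=float)
    for _ in range(iters):
        m = phi(W @ m + b, sigma)
    return m

# Illustrative 3-node example (weights, biases, and sigma are assumptions for the sketch).
W = np.array([[0.0, 0.8, 0.2],
              [0.8, 0.0, 0.2],
              [0.5, 0.5, 0.0]])
b = np.array([-0.4, -0.4, -0.3])
print(mean_field_fixed_point(W, b, sigma=1.0))
```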
Ising-Model Energy Formulations
When a Neo's weighted interactions are mostly symmetric—i.e., $w_{ij} \approx w_{ji}$—the dynamics resemble those of an Ising system [@hopfield1982neural; @amit1989modeling]. By mapping node states to spins via
$$s_i = 2V_i - 1 \in \{-1, +1\},$$
we can define an effective energy function
$$E(s) = -\tfrac12 \sum_{i \ne j} J_{ij}\,s_i s_j - \sum_i h_i\,s_i,$$
where the couplings $J_{ij}$ approximate the symmetric part of the weight matrix and the fields $h_i$ reflect biases and input effects. In the presence of stochasticity at each node, the Neo performs a noisy relaxation on this energy landscape. The probability of a configuration under a stationary distribution often takes a Boltzmann-like form
$$\Pr(s) \propto \exp\big(-E(s)/T\big),$$
where the effective temperature $T$ is related to the noise amplitudes $\sigma_i$. This mapping is not exact unless weights are symmetric and noise is small, but even in approximate form it provides a powerful tool for analyzing phase transitions, stable modes, and metastable attractors in large Neos. Energy minima correspond to stable motifs, while the barrier heights determine the switching dynamics induced by noise. In later sections, we will use Ising-style approximations to analyze specific structured Neo topologies such as chains, rings, stars, and locally modular subgraphs.
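For intuition, the sketch below implements the spin energy and a single-spin Glauber (heat-bath) update on a small symmetric ring; the couplings, fields, and temperature are illustrative values, not parameters taken from a particular Neo.

```python
import numpy as np

rng = np.random.default_rng(2)

def energy(s, J, h):
    """Ising-style energy E(s) = -1/2 s^T J s - h . s for spins s in {-1,+1}^n (J has zero diagonal)."""
    return -0.5 * s @ J @ s - h @ s

def glauber_step(s, J, h, T):
    """Flip one randomly chosen spin with the heat-bath (Glauber) probability."""
    i = rng.integers(len(s))
    dE = 2.0 * s[i] * (J[i] @ s + h[i])      # energy change if spin i is flipped
    if rng.random() < 1.0 / (1.0 + np.exp(dE / T)):
        s[i] = -s[i]
    return s

# Illustrative 4-spin ring with symmetric couplings (assumed values for the sketch).
n = 4
J = np.zeros((n, n))
for i in range(n):
    J[i, (i + 1) % n] = J[(i + 1) % n, i] = 1.0
h = np.zeros(n)
s = rng.choice([-1, 1], size=n)
for _ in range(5000):
    s = glauber_step(s, J, h, T=0.5)
print(s, energy(s, J, h))   # at low T the ring settles near an aligned, low-energy state
```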
5.2.3 Canonical Micro-Motifs
Purpose: Identify recurring low-level patterns.
Expectation: Use chains, fan-in, fan-out, and loops as computational building blocks for larger Neos.
5.3 Lex and Local Computation
5.3.1 Lex Dynamics
Purpose: Formalize deterministic and stochastic transitions induced by the Lex rule.
Expectation: Analyze the influence of weights, bias, and the stochastic term on node updates.
5.3.2 Effect of Stochasticity
Purpose: Study how randomness modifies micro-scale behavior.
Expectation: Show variability, exploration, and divergence across identically initialized Neos.
5.3.3 Micro-Level Expressive Capacity
Purpose: Assess the representational power of small fixed structures.
Expectation: Describe the deterministic and stochastic input–output mappings achievable by one- and two-node Neos.
5.4 Energy Trajectories at Micro Scale
5.4.1 Tick-Level Energy Flow
Purpose: Examine energy changes during a single cycle.
Expectation: Detail computation cost, reward acquisition, and the resulting energy update.
5.4.2 Lifetime and Vitality in Simple Structures
Purpose: Quantify survival properties of minimal Neos.
Expectation: Compare deterministic, stochastic, and recurrent motifs in terms of energy trajectories and survival.
5.5 Micro-Level Mutation Experiments
5.5.1 Isolated Mutation Types
Purpose: Analyze the effect of each mutation primitive separately.
Expectation: Show structural and behavioral results for node addition, node removal, edge addition, edge removal, and param mutations.
5.5.2 Mutation Cost and Trade-Offs
Purpose: Relate mutation outcomes to energy budget.
Expectation: Demonstrate scenarios where beneficial mutations fail due to cost and scenarios where small modifications outperform structural changes.
5.5.3 Comparative Mutation Strategies
Purpose: Compare alternative mutation strategies on identical initial conditions.
Expectation: Identify strategies that maximize accuracy, stability, or survival at the micro scale.
5.6 In-Life Learning at the Micro Level
5.6.1 In-Life Learning vs Mutation
Purpose: Clarify conceptual separation between In-Life learning rules and evolutionary mutation.
Expectation: Show why In-Life learning must be pattern-triggered rather than error-driven.
5.6.2 Minimal In-Life Learning Schemes
Purpose: Introduce simple local In-Life learning mechanisms.
Expectation: Propose conditional param adjustments and evaluate their behavior in one- and two-node systems.
5.6.3 Effects of In-Life Learning on Micro Dynamics
Purpose: Analyze situations where In-Life learning helps or harms.
Expectation: Present simulations illustrating successful adaptation versus destabilizing drift.
5.7 Role of Stochasticity in Micro Evolution
5.7.1 Fixed-Structure Stochastic Behavior
Purpose: Understand the influence of noise on stable structures.
Expectation: Demonstrate divergence in predictions and internal states across runs.
5.7.2 Stochasticity as Exploration Under Mutation
Purpose: Show how noise facilitates discovery of structural variations.
Expectation: Illustrate how stochasticity interacts with Evo to produce divergent evolutionary paths.
5.8 Summary of Micro-Level Insights
Purpose: Consolidate micro-scale results.
Expectation: Summarize patterns in structural motifs, mutation tendencies, In-Life learning interactions, and the role of stochasticity.
Appendix A: Additional Minimal Neo Structures
This appendix provides detailed analysis of additional minimal Neo configurations that complement the two-node recurrent Neo presented in Section 5.2.1.
A.1 Zero-Node and Degenerate Cases
A zero-node Neo has no internal state. Formally, its state vector is empty:
$$S_t = (\,).$$
There is only one possible configuration, which we call state 0. No Lex update is applied, because there are no nodes and therefore no activations to compute. The system can only emit a fixed output or some externally defined constant, and it cannot store any information about past inputs.
Because there is only one state, the internal dynamics form a trivial Markov chain with a single state.
Transition matrix
Let the state space be
$$\mathcal{S} = \{0\}.$$
The one-step transition matrix is
$$P = (1),$$
meaning that if the system is in state 0 at time $t$, it remains in state 0 at time $t+1$ with probability 1.
Stationary distribution
The stationary distribution is a row vector
$$\pi = (\pi_0)$$
with normalization
$$\pi_0 = 1.$$
The stationarity condition
$$\pi P = \pi$$
expands to
$$\pi_0 \cdot 1 = \pi_0,$$
which is satisfied for $\pi_0 = 1$. Thus the unique stationary distribution is
$$\pi = (1).$$
This confirms that a zero-node Neo has no representational capacity and its internal dynamics are trivial: there is a single state that is always occupied.
A.2 Single-Node Neo
We now consider the smallest non-degenerate Neo: a single internal state node. This Neo can already pass through deterministic and stochastic regimes and store a single bit of memory. The output is always taken directly from the state node:
$$y_t = s_t.$$
We model Lex for a single node using a linear activation perturbed by symmetric noise. Let the input be fixed to a constant value $x$ so that the resulting Markov chain is time-homogeneous.
The activation is
$$a_t = w_x\,x + w_s\,s_t + b + \sigma\,\xi_t,$$
where:
$w_x$ is the weight from the external input,
$w_s$ is the self-connection weight (memory term),
$b$ is a bias,
$\sigma$ scales the stochastic perturbation,
$\xi_t$ is a Rademacher noise variable with
$$\Pr(\xi_t = +1) = \Pr(\xi_t = -1) = \tfrac12.$$
The state update is
$$s_{t+1} = H(a_t),$$
where $H(a) = 1$ if $a \ge 0$ and $H(a) = 0$ otherwise. The output is simply
$$y_t = s_t.$$
To obtain a concrete Markov chain and show how the probabilities are derived, we fix the input to a constant $x$ and choose specific parameter values.
Numerical Lex specification
Choose numerical values of $w_x$, $w_s$, $b$, $\sigma$, and the fixed input $x$ such that the deterministic activation lies inside the noise band when the node is OFF and at or above the band when the node is ON.
Then the deterministic part of the activation is
$$d(s_t) = w_x\,x + w_s\,s_t + b.$$
So we have:
If $s_t = 0$, then $d(0) = w_x x + b$, which lies inside the noise band $[-\sigma, \sigma)$.
If $s_t = 1$, then $d(1) = w_x x + w_s + b$, which lies at or above $\sigma$.
The full activation for each case is
$$a_t = d(s_t) \pm \sigma,$$
depending on the sign of the noise term $\xi_t$.
We now compute the transition probabilities explicitly.
Computing transition probabilities
We want $\Pr(s_{t+1} = j \mid s_t = i)$ for $i, j \in \{0, 1\}$.
Because $\xi_t = \pm 1$ with equal probability, we can write:
$$\Pr(s_{t+1} = 1 \mid s_t = i) = \tfrac12\,\mathbb{1}\big[d(i) + \sigma \ge 0\big] + \tfrac12\,\mathbb{1}\big[d(i) - \sigma \ge 0\big],$$
where $\mathbb{1}[\cdot]$ is the indicator function.
We apply this to each state.
Case 1: $s_t = 0$.
Here $d(0) = w_x x + b$. The two possible activations are:
$$a_t = d(0) + \sigma \ge 0 \quad\text{and}\quad a_t = d(0) - \sigma < 0.$$
So:
$\mathbb{1}[d(0) + \sigma \ge 0] = 1$,
$\mathbb{1}[d(0) - \sigma \ge 0] = 0$.
Thus:
$$\Pr(s_{t+1} = 1 \mid s_t = 0) = \tfrac12.$$
Similarly,
$$\Pr(s_{t+1} = 0 \mid s_t = 0) = \tfrac12.$$
Case 2: $s_t = 1$.
Here $d(1) = w_x x + w_s + b$. The two possible activations are:
$$a_t = d(1) + \sigma \quad\text{and}\quad a_t = d(1) - \sigma.$$
Both are nonnegative, so:
$\mathbb{1}[d(1) + \sigma \ge 0] = 1$,
$\mathbb{1}[d(1) - \sigma \ge 0] = 1$.
Therefore:
$$\Pr(s_{t+1} = 1 \mid s_t = 1) = 1,$$
and
$$\Pr(s_{t+1} = 0 \mid s_t = 1) = 0.$$
Transition matrix
Order the states as $(0, 1)$. The transition matrix has entries
$$P_{ij} = \Pr(s_{t+1} = j \mid s_t = i).$$
From the probabilities we derived:
From state 0: $\Pr(0 \to 0) = \tfrac12$, $\Pr(0 \to 1) = \tfrac12$.
From state 1: $\Pr(1 \to 0) = 0$, $\Pr(1 \to 1) = 1$.
Thus
$$P = \begin{pmatrix} \tfrac12 & \tfrac12 \\ 0 & 1 \end{pmatrix}.$$
Stationary distribution
Let the stationary distribution be
$$\pi = (\pi_0,\ \pi_1)$$
with
$$\pi_0 + \pi_1 = 1.$$
The stationarity condition is
$$\pi P = \pi.$$
Compute the left-hand side:
$$\pi P = \big(\tfrac12\pi_0,\ \ \tfrac12\pi_0 + \pi_1\big).$$
Setting this equal to $(\pi_0, \pi_1)$ yields:
For the first component:
$$\tfrac12\pi_0 = \pi_0 \;\Longrightarrow\; \pi_0 = 0.$$
Normalization then forces:
$$\pi_1 = 1.$$
So the unique stationary distribution is
$$\pi = (0,\ 1).$$
In words, under this Lex and fixed input, the single-node Neo eventually spends almost all its time in the state $s = 1$: once it flips to 1 it never returns to 0. The stochasticity only affects the transient path from 0 to 1.
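The absorption into state 1 is easy to verify numerically. In the sketch below (illustrative Python), starting from state 0, the mass remaining in state 0 halves at every tick, so the flip time is geometric with mean two ticks.

```python
import numpy as np

# Single-node transition matrix in the state order (0, 1), as derived above.
P = np.array([[0.5, 0.5],
              [0.0, 1.0]])

# Distribution after t steps, starting from state 0: state 1 absorbs all mass.
pi = np.array([1.0, 0.0])
for t in range(1, 11):
    pi = pi @ P
    print(t, pi)          # pi -> (0, 1); the mass left in state 0 halves every tick

# The waiting time before the flip to 1 is geometric with mean 1 / 0.5 = 2 ticks.
```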
A.3 Two-Node Neo
We now consider the simplest interacting Neo: two binary state nodes
$$V_1(t),\ V_2(t) \in \{0, 1\}.$$
We will analyze three basic topologies:
Feedforward: $V_1 \to V_2$.
Parallel: both nodes driven independently by the same input.
Feedback: $V_1 \leftrightarrow V_2$.
In each case, we specify Lex for both nodes using simple numeric parameters and derive the full transition matrix over the four states, followed by the stationary distribution.
We order the joint states as:
$$(0,0),\ (0,1),\ (1,0),\ (1,1).$$
A.3.A Feedforward (V1 → V2)
We first analyze a two-node feedforward motif where $V_1$ depends only on the external input and noise, and $V_2$ depends only on $V_1$ and noise. This already yields richer dynamics than the single-node case.
Diagram: $u \to V_1 \to V_2$.
We again fix the input to a constant value to obtain a time-homogeneous Markov chain. The randomness then comes solely from internal noise.
Lex specification
Node 1 activation:
$$a_1(t) = w_{1u}\,u + b_1 + \sigma_1\,\xi_1(t),$$
with $\xi_1(t) \in \{-1, +1\}$ equiprobable and noise scale $\sigma_1 > 0$.
Node 2 activation:
$$a_2(t) = w_{21}\,V_1(t) + b_2 + \sigma_2\,\xi_2(t),$$
with $\xi_2(t)$ independent of $\xi_1(t)$.
The state updates are
$$V_1(t+1) = H\big(a_1(t)\big), \qquad V_2(t+1) = H\big(a_2(t)\big).$$
We will compute the transition matrix for two fixed inputs: $u = 0$ and $u = 1$.
Case u = 0: computing node-wise probabilities
Set $u = 0$.
For node 1, the deterministic part is:
$$d_1 = w_{1u} \cdot 0 + b_1 = b_1.$$
The two activations are:
$$a_1 = b_1 + \sigma_1 \quad\text{or}\quad a_1 = b_1 - \sigma_1.$$
So:
$\mathbb{1}[b_1 + \sigma_1 \ge 0] = 1$,
$\mathbb{1}[b_1 - \sigma_1 \ge 0] = 0$,
and
$$\Pr\big(V_1(t+1) = 1\big) = \tfrac12.$$
Thus
$$\Pr\big(V_1(t+1) = 0\big) = \tfrac12.$$
For node 2, the deterministic part depends on $V_1$:
$$d_2(V_1) = w_{21}\,V_1 + b_2.$$
If $V_1 = 0$:
$$d_2(0) = b_2,$$
so
$$a_2 = b_2 + \sigma_2 \quad\text{or}\quad a_2 = b_2 - \sigma_2.$$
Hence:
$\mathbb{1}[b_2 + \sigma_2 \ge 0] = 1$,
$\mathbb{1}[b_2 - \sigma_2 \ge 0] = 0$,
and
$$\Pr\big(V_2(t+1) = 1 \mid V_1 = 0\big) = \tfrac12.$$
If $V_1 = 1$:
$$d_2(1) = w_{21} + b_2,$$
so
$$a_2 = w_{21} + b_2 \pm \sigma_2.$$
Both are nonnegative, so
$$\Pr\big(V_2(t+1) = 1 \mid V_1 = 1\big) = 1.$$
In summary (for $u = 0$):
$$\Pr(V_1' = 1) = \tfrac12, \qquad \Pr(V_2' = 1 \mid V_1 = 0) = \tfrac12, \qquad \Pr(V_2' = 1 \mid V_1 = 1) = 1.$$
We now build the 4-state transition matrix.
Transition probabilities for each joint state (u = 0)
Let $s = (V_1, V_2)$ denote the current state. Because $V_1'$ depends only on $u$ and $V_2'$ depends only on $V_1$ and noise, and the noise terms for node 1 and node 2 are independent, we have
$$\Pr\big((V_1', V_2') \mid (V_1, V_2)\big) = \Pr(V_1' \mid u)\,\Pr(V_2' \mid V_1).$$
We compute row by row.
Row 1: current state (0,0).
Here $V_1 = 0$, but note that $V_1'$ does not depend on the current state. We have:
$$\Pr(V_1' = 1) = \tfrac12, \qquad \Pr(V_1' = 0) = \tfrac12.$$
For $V_2'$:
If $V_1 = 0$, then $\Pr(V_2' = 1) = \tfrac12$, $\Pr(V_2' = 0) = \tfrac12$.
Therefore:
$$\Pr\big((0,0) \to s'\big) = \tfrac14 \quad\text{for each } s' \in \{(0,0),\ (0,1),\ (1,0),\ (1,1)\}.$$
The row sums to 1.
Row 2: current state (0,1).
The current value of $V_2$ does not influence $V_1'$ or $V_2'$ (which depends on $V_1$, not $V_2$). So the same reasoning applies:
$$\Pr\big((0,1) \to s'\big) = \tfrac14$$
for each joint next state $s'$. So the second row is identical to the first:
$$\big(\tfrac14,\ \tfrac14,\ \tfrac14,\ \tfrac14\big).$$
Row 3: current state (1,0).
Now $V_1 = 1$. The distribution of $V_1'$ is still independent of the current state and remains
$$\Pr(V_1' = 1) = \tfrac12, \qquad \Pr(V_1' = 0) = \tfrac12.$$
However, the conditional probabilities for $V_2'$ change because they depend on $V_1$, which is 1 here:
$$\Pr(V_2' = 1 \mid V_1 = 1) = 1, \qquad \Pr(V_2' = 0 \mid V_1 = 1) = 0.$$
Thus:
$$\Pr\big((1,0) \to (0,1)\big) = \tfrac12, \qquad \Pr\big((1,0) \to (1,1)\big) = \tfrac12.$$
So the third row is:
$$\big(0,\ \tfrac12,\ 0,\ \tfrac12\big).$$
Row 4: current state (1,1).
Same as row 3, because $V_1 = 1$ and $V_2$ is irrelevant to $V_1'$ and $V_2'$:
$$\big(0,\ \tfrac12,\ 0,\ \tfrac12\big).$$
Transition matrix (u = 0)
Collecting all rows:
$$P_{u=0} = \begin{pmatrix} \tfrac14 & \tfrac14 & \tfrac14 & \tfrac14 \\ \tfrac14 & \tfrac14 & \tfrac14 & \tfrac14 \\ 0 & \tfrac12 & 0 & \tfrac12 \\ 0 & \tfrac12 & 0 & \tfrac12 \end{pmatrix}.$$
Stationary distribution (u = 0)
Let the stationary distribution be
$$\pi = \big(\pi_{(0,0)},\ \pi_{(0,1)},\ \pi_{(1,0)},\ \pi_{(1,1)}\big)$$
with
$$\pi_{(0,0)} + \pi_{(0,1)} + \pi_{(1,0)} + \pi_{(1,1)} = 1.$$
By symmetry of rows (1 and 2 identical; 3 and 4 identical) and columns (columns 1 and 3 identical; columns 2 and 4 identical), it is natural to look for a solution with
$$\pi_{(0,0)} = \pi_{(1,0)} = \alpha, \qquad \pi_{(0,1)} = \pi_{(1,1)} = \beta.$$
We now impose stationarity on, say, $\pi_{(0,0)}$:
$$\pi_{(0,0)} = \tfrac14\,\pi_{(0,0)} + \tfrac14\,\pi_{(0,1)}.$$
Substitute $\alpha$ and $\beta$:
$$\alpha = \tfrac14(\alpha + \beta).$$
Using the constraint $2\alpha + 2\beta = 1$, we solve: $\beta = 3\alpha$ and hence $8\alpha = 1$.
Therefore
$$\alpha = \tfrac18, \qquad \beta = \tfrac38.$$
Thus
$$\pi = \big(\tfrac18,\ \tfrac38,\ \tfrac18,\ \tfrac38\big).$$
Case u = 1
We now set $u = 1$.
For node 1, deterministic part:
$$d_1 = w_{1u} + b_1.$$
Then:
$$a_1 = d_1 \pm \sigma_1.$$
Both nonnegative, so
$$\Pr\big(V_1(t+1) = 1\big) = 1.$$
For node 2, deterministic part:
$$d_2(V_1) = w_{21}\,V_1 + b_2,$$
unchanged from the previous case because node 2 does not see the external input.
As before, if $V_1 = 0$ then $\Pr(V_2' = 1) = \tfrac12$, and if $V_1 = 1$ then $\Pr(V_2' = 1) = 1$. However, under $u = 1$, $V_1(t+1)$ is always 1, so after one step the system will behave as if $V_1 = 1$ at all future times.
To keep the Markov chain defined in terms of the current state, we again use:
$$\Pr\big((V_1', V_2') \mid (V_1, V_2)\big) = \Pr(V_1' \mid u)\,\Pr(V_2' \mid V_1).$$
But now $\Pr(V_1' = 1) = 1$ and $\Pr(V_1' = 0) = 0$, so transitions can only go to states with $V_1' = 1$ (i.e., states 3 or 4).
Carrying through the same logic as before yields:
$$P_{u=1} = \begin{pmatrix} 0 & 0 & \tfrac12 & \tfrac12 \\ 0 & 0 & \tfrac12 & \tfrac12 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
The last state (1,1) is absorbing, and all other states have a nonzero probability of eventually reaching it. Thus the unique stationary distribution is:
$$\pi = (0,\ 0,\ 0,\ 1).$$
A.3.B Parallel (U → V1 and U → V2)
Now we consider a parallel topology where both nodes receive the same input but do not directly interact. This is the smallest multi-bit output that can still be treated as two independent single-node Neos.
Diagram: $u \to V_1$ and $u \to V_2$, with no edge between the two nodes.
Assume both nodes obey the same single-node Lex rule used in Section A.2, and that their noise processes are independent. For a fixed input , each node has the same transition probabilities as the single-node case, and the joint transition matrix is the Kronecker product of the single-node matrix with itself.
For simplicity, consider $u = 0$ with the single-node parameters chosen so that the deterministic pre-activation lies inside the noise band regardless of the current state.
Then each node at each step is an independent Bernoulli(0.5) random variable, independent of its previous state. The joint distribution over the pair is uniform over the four states.
Thus for any current state,
$$\Pr\big((V_1', V_2') = (v_1, v_2)\big) = \tfrac14$$
for each of the four configurations.
The transition matrix is therefore:
$$P = \begin{pmatrix} \tfrac14 & \tfrac14 & \tfrac14 & \tfrac14 \\ \tfrac14 & \tfrac14 & \tfrac14 & \tfrac14 \\ \tfrac14 & \tfrac14 & \tfrac14 & \tfrac14 \\ \tfrac14 & \tfrac14 & \tfrac14 & \tfrac14 \end{pmatrix}.$$
The stationary distribution must satisfy $\pi P = \pi$. Because every row is identical and uniform, the unique normalized solution is
$$\pi = \big(\tfrac14,\ \tfrac14,\ \tfrac14,\ \tfrac14\big).$$
The parallel 2-node Neo under symmetric stochastic input is therefore a uniform random generator over the four joint states.
For $u = 1$ and our earlier single-node parameters, each node deterministically goes to state 1. Hence (1,1) is absorbing, and the stationary distribution is again
$$\pi = (0,\ 0,\ 0,\ 1).$$
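Because the two nodes are independent, the joint chain really is the Kronecker product of the single-node chain with itself, which the following sketch (illustrative Python) checks numerically.

```python
import numpy as np

# Single-node transition matrix under the "coin flip" parameter choice used above:
# each node fires with probability 1/2 regardless of its current state.
P1 = np.array([[0.5, 0.5],
               [0.5, 0.5]])

# Independent parallel nodes: the joint matrix is the Kronecker product,
# with states ordered (0,0), (0,1), (1,0), (1,1).
P_joint = np.kron(P1, P1)
print(P_joint)                       # every entry is 0.25
print(P_joint.sum(axis=1))           # rows sum to 1

# With the earlier single-node matrix [[0.5, 0.5], [0, 1]] instead, np.kron gives a
# joint chain whose only absorbing state is (1, 1).
print(np.kron(np.array([[0.5, 0.5], [0.0, 1.0]]),
              np.array([[0.5, 0.5], [0.0, 1.0]])))
```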
A.3.C Feedback (V1 ↔ V2)
Finally, we consider a feedback topology where each node depends on the other's previous state. This is the smallest genuine recurrent Neo and can already exhibit nontrivial stochastic dynamics, including concentration on particular states.
Diagram: $V_1 \leftrightarrow V_2$, with the external input $u$ feeding both nodes.
We again fix the input to a constant value $u$ and choose explicit parameters.
Lex specification
Node 1:
$$a_1(t) = w_{1u}\,u + w_{12}\,V_2(t) + b_1 + \sigma_1\,\xi_1(t).$$
Node 2:
$$a_2(t) = w_{2u}\,u + w_{21}\,V_1(t) + b_2 + \sigma_2\,\xi_2(t).$$
Both noise variables take values $\pm 1$ with probability 0.5 and are independent.
Updates:
$$V_1(t+1) = H\big(a_1(t)\big), \qquad V_2(t+1) = H\big(a_2(t)\big).$$
We again consider two cases: $u = 0$ and $u = 1$.
Case u = 0
Set $u = 0$. The deterministic parts are:
$$d_1(V_2) = w_{12}\,V_2 + b_1, \qquad d_2(V_1) = w_{21}\,V_1 + b_2.$$
We compute node-wise probabilities.
Node 1:
If $V_2 = 0$:
$$d_1(0) = b_1,$$
so the two activations $d_1(0) \pm \sigma_1$ straddle zero.
Thus
$$\Pr\big(V_1(t+1) = 1 \mid V_2 = 0\big) = \tfrac12.$$
If $V_2 = 1$:
$$d_1(1) = w_{12} + b_1,$$
so the two activations $d_1(1) \pm \sigma_1$ again straddle zero.
Again,
$$\Pr\big(V_1(t+1) = 1 \mid V_2 = 1\big) = \tfrac12.$$
So for $u = 0$, node 1 is symmetric:
$$\Pr(V_1' = 1) = \tfrac12 \quad\text{regardless of } V_2.$$
Node 2:
If $V_1 = 0$:
$$d_2(0) = b_2, \qquad a_2 = b_2 \pm \sigma_2.$$
So
$$\Pr\big(V_2(t+1) = 1 \mid V_1 = 0\big) = \tfrac12.$$
If $V_1 = 1$:
$$d_2(1) = w_{21} + b_2, \qquad a_2 = d_2(1) \pm \sigma_2.$$
Both nonnegative, so
$$\Pr\big(V_2(t+1) = 1 \mid V_1 = 1\big) = 1.$$
We now build the transition matrix.
State (0,0): here $V_1 = 0$, $V_2 = 0$.
Node1: $\Pr(V_1' = 1) = \tfrac12$, $\Pr(V_1' = 0) = \tfrac12$.
Node2: $\Pr(V_2' = 1) = \tfrac12$, $\Pr(V_2' = 0) = \tfrac12$.
Joint transitions:
$$\Pr\big((0,0) \to s'\big) = \tfrac14 \quad\text{for each } s' \in \{(0,0),\ (0,1),\ (1,0),\ (1,1)\}.$$
State (0,1): $V_1 = 0$, $V_2 = 1$.
Node1: $\Pr(V_1' = 1) = \tfrac12$, $\Pr(V_1' = 0) = \tfrac12$.
Node2: $\Pr(V_2' = 1) = \tfrac12$, $\Pr(V_2' = 0) = \tfrac12$.
Thus the row is identical to state (0,0):
$$\big(\tfrac14,\ \tfrac14,\ \tfrac14,\ \tfrac14\big).$$
State (1,0): $V_1 = 1$, $V_2 = 0$.
Node1: $\Pr(V_1' = 1) = \tfrac12$.
Node2: $\Pr(V_2' = 1) = 1$, $\Pr(V_2' = 0) = 0$.
So:
$$\big(0,\ \tfrac12,\ 0,\ \tfrac12\big).$$
State (1,1): $V_1 = 1$, $V_2 = 1$.
Node1: $\Pr(V_1' = 1) = \tfrac12$.
Node2: $\Pr(V_2' = 1) = 1$.
So again:
$$\big(0,\ \tfrac12,\ 0,\ \tfrac12\big).$$
Collecting these rows, we obtain:
$$P_{u=0} = \begin{pmatrix} \tfrac14 & \tfrac14 & \tfrac14 & \tfrac14 \\ \tfrac14 & \tfrac14 & \tfrac14 & \tfrac14 \\ 0 & \tfrac12 & 0 & \tfrac12 \\ 0 & \tfrac12 & 0 & \tfrac12 \end{pmatrix}.$$
This matrix is identical to the feedforward case with $u = 0$, so the stationary distribution is again:
$$\pi = \big(\tfrac18,\ \tfrac38,\ \tfrac18,\ \tfrac38\big).$$
Case u = 1 (feedback)
With $u = 1$, the deterministic parts change, and the Markov chain becomes more strongly biased toward the state (1,1). A similar step-by-step analysis (omitted here to avoid redundancy) shows that every row places positive probability on (1,1) and that both nodes fire with probability one once the system reaches it.
The state (1,1) is absorbing and is reachable from every other state under the dynamics, so the unique stationary distribution is
$$\pi = (0,\ 0,\ 0,\ 1).$$
This two-node feedback Neo therefore converges, under this Lex choice, to a fully "activated" state in the long run.
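As a closing check, the sketch below simulates the feedback Neo directly. The numeric parameters are assumptions chosen only to realize the qualitative band structure used above (biases inside the noise band, $w_{21} + b_2$ above it); with them, the empirical occupancy approaches $(\tfrac18, \tfrac38, \tfrac18, \tfrac38)$.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed parameters for the u = 0 feedback case (illustrative values only):
# b1, b2 and w12 + b1 lie inside the noise band, w21 + b2 lies above it.
w12, w21 = 0.3, 1.5
b1, b2 = 0.0, 0.0
sig1, sig2 = 1.0, 1.0

def step(v1, v2):
    a1 = w12 * v2 + b1 + sig1 * rng.choice([-1, 1])
    a2 = w21 * v1 + b2 + sig2 * rng.choice([-1, 1])
    return int(a1 >= 0), int(a2 >= 0)

counts = np.zeros(4)
v1, v2 = 0, 0
for _ in range(200_000):
    v1, v2 = step(v1, v2)
    counts[2 * v1 + v2] += 1     # state order (0,0), (0,1), (1,0), (1,1)

print(counts / counts.sum())     # approx [0.125, 0.375, 0.125, 0.375]
```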