Chapter 2: Neosis Axioms and Formal Model
2.1 Primitive Ingredients and State
Neosis is defined on top of a small collection of primitive ingredients that will be reused throughout the chapter. In this section, the goal is not to describe the full internal structure of a Neo, but to fix the basic objects and types—time, binary state, energy, and continuous parameters—that later sections will assemble into a complete formal model.
2.1.1 Time
All dynamics unfold in discrete time. We index ticks by $t \in \{0, 1, 2, \dots\}$, with $t = 0$ denoting the initial configuration of the system. Each application of the update rules (perception, internal computation, reward, and mutation) advances the system from tick $t$ to tick $t+1$. Throughout the chapter, we will describe the behavior of Neos and the NeoVerse by specifying how relevant quantities change as a function of this tick index.
2.1.2 Binary State Substrate
The underlying state substrate of Neosis is binary. We write $b \in \{0,1\}$ for individual bits, and $\{0,1\}^n$ for length-$n$ bit vectors. At any tick $t$, the internal memory of a Neo, its perceptual input, and its output will all be represented as elements of $\{0,1\}^n$ for some finite $n$. The dimensionality $n$ is not fixed once and for all: it may change over time as the Neo gains or loses nodes through structural mutation. This choice keeps the local state space simple, while still allowing the overall system to grow in representational capacity.
2.1.3 Energy (Nex)
Each Neo maintains an energy budget, called Nex, which constrains its computation and evolution. At tick $t$, the energy of a given Neo is denoted by
$$E_t \in \mathbb{R}.$$
Running computations and performing structural mutations both consume energy, while successful prediction of the NeoVerse yields energy in the form of rewards (Sparks). All such costs and rewards are measured in the same units as Nex, so that energy evolves by simple additive updates of the form
$$E_{t+1} = E_t + \text{rewards}_t - \text{costs}_t.$$
Once $E_t$ reaches zero, the Neo becomes inert: it can no longer perform internal computation or apply mutations, and its trajectory effectively terminates.
2.1.4 Continuous Parameters and Discrete Structure
A central modeling choice in Neosis is to separate structure from parameters. The structure of a Neo—its set of nodes, edges, and connectivity pattern—will be discrete and graph-like. However, each structural unit carries continuous parameters. For node $i$, we write
$$\theta_i \in \mathbb{R}^{d_i},$$
where $d_i$ is the parameter dimensionality associated with that node. These parameters control the local computation performed at the node (for example, weights, thresholds, or other coefficients), and they may change over time through learning mechanisms or mutation.
This separation between discrete structure (which nodes exist and how they are connected) and continuous parameters (how each node computes) is deliberate. It allows Neos to evolve by changing their topology in a combinatorial way, while still supporting rich, smooth families of local computations at each node. Later sections will make this distinction explicit when we define the internal graph of a Neo and the node-local update functions.
2.1.5 Global State at Tick $t$
At each tick $t$, we conceptually distinguish between the internal state of a Neo and the state of the surrounding world. We write
$$V_t \in \mathcal{V}$$
for the (possibly high-dimensional) state of the NeoVerse at tick $t$, and
$$N_t = (\mathrm{Lio}_t, \mathrm{Evo}_t, E_t)$$
for the complete internal state of a single Neo at the same tick, including its binary memory, graph structure, parameters, and energy. In this chapter we will focus on formalizing $N_t$; the NeoVerse state will be treated abstractly and will be accessed only through a projection function introduced in Section 2.2.
2.2 The NeoVerse and Perception
Neos do not exist in isolation. They operate inside an external world, called the NeoVerse, whose dynamics generate the signals that Neos attempt to predict. In this section we keep the NeoVerse deliberately abstract. The aim is not to model the entire environment in detail, but to specify how it interfaces with a Neo through perception.
At each tick $t$, the NeoVerse has a state
$$V_t \in \mathcal{V},$$
which may be arbitrarily complex and high-dimensional. We do not constrain how $V_t$ evolves over time; it may follow a deterministic or stochastic rule, and it may or may not depend on the past behavior of Neos. For the purposes of this chapter, it is sufficient to regard $(V_t)_{t \ge 0}$ as an exogenous process that generates the raw conditions under which Neos must operate.
A Neo does not have direct access to $V_t$. Instead, it perceives only a projection of the NeoVerse through a perceptual interface. Formally, we introduce a projection function
$$P_t : \mathcal{V} \to \{0,1\}^{m_t}$$
and define the perceptual input at tick $t$ as
$$x_t = P_t(V_t).$$
The dimensionality $m_t$ represents the number of binary channels the Neo can currently observe. This dimensionality is not fixed: as the Neo gains or loses input nodes through structural mutation, its perceptual capacity can change, and the corresponding projection $P_t$ can be updated to match.
In the simplest cases, $x_t$ may consist of a single bit ($m_t = 1$), expressing a minimal signal about the NeoVerse. More generally, $x_t$ can be a vector of bits encoding multiple aspects of $V_t$. The exact semantics of each bit are not specified at this level; they depend on the particular environment and experimental setup. What matters for the formal model is that all percepts are binary vectors, and that perception is always mediated by some projection from $\mathcal{V}$ into the Neo's current input space.
This view makes the Neo's situation explicitly partially observable. The Neo must form internal representations and predictions on the basis of $x_t$ rather than on the full underlying state $V_t$. In later sections, we will define the output of a Neo as a prediction about future percepts $x_{t+1}$, and we will use the accuracy of these predictions to determine the Neo's energy gain or loss.
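To make the interface concrete, the following minimal Python sketch shows one possible projection for a toy NeoVerse whose state is a single integer; the function name `project` and the choice of exposing low-order bits are illustrative assumptions, not part of the formal model.

```python
from typing import List

def project(world_state: int, m: int) -> List[int]:
    """One possible P_t: expose the m low-order bits of an
    integer-valued world state as the percept x_t."""
    return [(world_state >> j) & 1 for j in range(m)]

# A Neo with m_t = 4 binary channels observing V_t = 13:
x_t = project(13, 4)   # -> [1, 0, 1, 1]
```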
2.3 The Neo: Internal Structure
We now turn from the external NeoVerse to the internal organization of a Neo. At a high level, the state of a single Neo at tick $t$ consists of two coupled subsystems:
Lio, the Learner, which carries the Neo’s computational graph, internal memory, parameters, and input–output interface.
Evo, the Evolver, which controls how the structure and parameters of Lio change over time.
For the purposes of this chapter, we focus on specifying the static structure of these subsystems at a given tick $t$. The dynamics that update them from $t$ to $t+1$ will be introduced in later sections.
We write the overall internal state of a Neo at tick $t$ as
$$N_t = (\mathrm{Lio}_t, \mathrm{Evo}_t, E_t),$$
where $E_t$ is the energy (Nex) introduced in Section 2.1.
2.3.1 Lio as an Evolving Binary Graph
Lio contains all components directly involved in perception, internal computation, and prediction. At tick $t$, we represent it as
$$\mathrm{Lio}_t = (s_t, \mathcal{E}_t, \Theta_t, x_t, O_t).$$
The vector $s_t$ is the internal binary state (or memory) of the Neo:
$$s_t \in \{0,1\}^{n_t},$$
where $n_t$ is the number of internal nodes at tick $t$. Each coordinate $s_i^{(t)}$ corresponds to the state of a single node. We will use indices
$$i \in \{1, \dots, n_t\}$$
to refer to nodes, so there is no separate symbol for the node set.
The graph structure is captured by
$$\mathcal{E}_t \subseteq \{1, \dots, n_t\} \times \{1, \dots, n_t\},$$
where $\mathcal{E}_t$ is the set of directed edges between nodes. We do not impose any topological restriction: $\mathcal{E}_t$ may describe a feedforward, recurrent, or cyclic graph. This flexibility allows the Neo to evolve arbitrary computational motifs, including those that resemble neural networks, finite-state machines, or more complex dynamical systems.
Each node $i$ is associated with a continuous parameter vector
$$\theta_i^{(t)} \in \mathbb{R}^{d_i},$$
which determines how that node processes its inputs. We collect all node parameters at tick $t$ into
$$\Theta_t = \big(\theta_1^{(t)}, \dots, \theta_{n_t}^{(t)}\big).$$
These parameters will be used in Section 2.4 to define the node-local update rules that map incoming binary signals to new node states.
The interface between Lio and the NeoVerse is given by the input vector
$$x_t \in \{0,1\}^{m_t}.$$
The input $x_t$ is the percept at tick $t$ defined by the projection in Section 2.2. The output $y_t$ is defined as a direct readout of the internal state at the output indices:
$$y_t = \big(s_i^{(t)}\big)_{i \in O_t},$$
where $O_t \subseteq \{1, \dots, n_t\}$ is the set of output node indices stored in $\mathrm{Lio}_t$. Since $y_t$ is always a direct readout of $s_t$ at the indices $O_t$, it carries no independent state beyond what is already encoded in $s_t$ and $O_t$. The dimensions $m_t$ and $k_t = |O_t|$ may change over time as the Neo gains or loses input and output nodes through structural mutation.
In summary, Lio at tick $t$ is an evolving binary graph with continuous parameters, equipped with a binary input interface. The pair $(\mathcal{E}_t, \Theta_t)$ specifies what computational structure exists, while $s_t$ specifies the current binary activity flowing through that structure. The output $y_t$ is derived from $s_t$ via the output index set $O_t$.
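As a concrete, entirely illustrative rendering of this tuple, one might represent $\mathrm{Lio}_t$ as a small data structure; the field names below are assumptions made for the sketch, not prescribed notation.

```python
from dataclasses import dataclass
from typing import Dict, List, Set, Tuple

@dataclass
class Lio:
    s: List[int]                   # s_t in {0,1}^{n_t}: internal binary state
    edges: Set[Tuple[int, int]]    # E_t: directed edges (j, i), node j feeds node i
    theta: Dict[int, List[float]]  # Theta_t: node index -> parameter vector
    x: List[int]                   # x_t in {0,1}^{m_t}: current percept
    outputs: List[int]             # O_t: output node indices

    def y(self) -> List[int]:
        """y_t: direct readout of the internal state at the output indices."""
        return [self.s[i] for i in self.outputs]
```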
2.3.2 Evo as a Meta-Level Mutation Controller
Evo operates at a meta level: it does not directly process percepts from the NeoVerse, but instead governs how Lio’s structure and parameters change over time. At tick $t$, we keep Evo abstract and write
$$\mathrm{Evo}_t = (\zeta_t, \pi_t),$$
where $\zeta_t$ denotes any internal variables Evo maintains (for example, mutation rates or exploration preferences), and $\pi_t$ denotes a mutation policy.
Conceptually, the mutation policy is a rule that can inspect the current state of the Neo and propose structural or parametric changes to Lio. In later sections, these changes will be formalized as mutation primitives (adding or removing nodes and edges, or perturbing parameters) with associated energy costs. For the present chapter, it is enough to note that Evo:
has access to $\mathrm{Lio}_t$ and $E_t$,
can decide which mutations to attempt at each tick, and
must respect the available energy when doing so.
This separation between Lio (which computes and predicts) and Evo (which decides how Lio itself should change) is central to Neosis. It mirrors the distinction, in biological systems, between fast neural dynamics and slower evolutionary or developmental processes that shape the underlying circuitry.
2.4 Node-Level Computation
2.4.1 Node Inputs
At tick $t$, the internal state of the Neo is
$$s_t \in \{0,1\}^{n_t}$$
and the current percept is
$$x_t \in \{0,1\}^{m_t}.$$
The directed edge set $\mathcal{E}_t$ specifies how internal nodes read from one another. To allow nodes to also depend on perceptual inputs, we conceptually extend the set of possible inputs by treating components of $x_t$ as additional sources.
For each node index $i \in \{1, \dots, n_t\}$, we define a finite index set
$$I_i^{(t)},$$
which lists the internal coordinates (indices into $s_t$) and input coordinates (indices into $x_t$) that feed into node $i$ at tick $t$. From this set we form an input vector
$$u_i^{(t)} \in \{0,1\}^{k_i}$$
by collecting the corresponding bits from $s_t$ and $x_t$ in a fixed order, where $k_i = |I_i^{(t)}|$. As before, $I_i^{(t)}$ may change over time as edges or inputs are added or removed.
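A small sketch of this construction, assuming each entry of $I_i^{(t)}$ is tagged with its source ('s' for internal, 'x' for perceptual); the tagging convention is an implementation assumption, not part of the model.

```python
from typing import List, Tuple

def gather(I: List[Tuple[str, int]], s: List[int], x: List[int]) -> List[int]:
    """Assemble u_i by reading the bits of s_t and x_t listed in I_i,
    in the fixed order given by the index set."""
    return [s[j] if src == 's' else x[j] for src, j in I]

# Example: node i reads internal bits 0 and 2, plus input bit 1.
u_i = gather([('s', 0), ('s', 2), ('x', 1)], s=[1, 0, 1], x=[0, 1])  # -> [1, 1, 1]
```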
In addition to these deterministic inputs, each node also receives a stochastic binary input
$$r_i^{(t)} \sim \mathrm{Bernoulli}(\rho_i),$$
where $\rho_i \in [0,1]$ is a per-node noise bias parameter stored in the node's parameter vector. This random bit allows local computations to be intrinsically stochastic even when $s_t$ and $x_t$ are fixed. The parameter $\rho_i$ controls the bias of the stochastic input, enabling nodes to evolve different levels of intrinsic randomness.
Each node also receives a binary gate input $g_i^{(t)} \in \{0,1\}$, which may be drawn from $x_t$ (as a perceptual input) or from $s_t$ (as feedback from another internal node). The gate bit controls whether the node updates from its other inputs or maintains its previous state, as described in the update rule below.
2.4.2 Parametric Local Update Rule
Each node $i$ carries a continuous parameter vector
$$\theta_i \in \mathbb{R}^{d_i},$$
which we interpret as a concatenation of weights, noise parameters, and a bias:
$$\theta_i = \big(w_i,\; c_i,\; \rho_i,\; b_i\big),$$
where $w_i \in \mathbb{R}^{k_i}$ is the weight vector over the deterministic inputs, $c_i \in \mathbb{R}$ is the weight on the stochastic bit, $\rho_i \in [0,1]$ is the noise bias, and $b_i \in \mathbb{R}$ is the bias term.
The parameter $\rho_i$ controls the bias of the stochastic input $r_i^{(t)}$, allowing each node to have adjustable stochasticity. When $\rho_i = 0.5$, the noise is unbiased; values closer to 0 or 1 produce more deterministic behavior.
Given the binary input vector $u_i^{(t)}$, the stochastic bit $r_i^{(t)}$, and the gate bit $g_i^{(t)}$, the node update rule is:
If $g_i^{(t)} = 0$, the node copies its previous state:
$$s_i^{(t+1)} = s_i^{(t)}.$$
If $g_i^{(t)} = 1$, the node updates from its other inputs. The node first computes a real-valued activation
$$a_i^{(t)} = w_i^\top u_i^{(t)} + c_i\, r_i^{(t)} + b_i,$$
and then applies a threshold to obtain the new binary state:
$$s_i^{(t+1)} = H\big(a_i^{(t)}\big),$$
with the Heaviside step function
$$H(a) = \begin{cases} 1 & \text{if } a \ge 0,\\ 0 & \text{if } a < 0.\end{cases}$$
This gate mechanism allows nodes to explicitly freeze their state when $g_i^{(t)} = 0$, while enabling normal computation when $g_i^{(t)} = 1$. The gate bit is treated as part of the node's input set, so it may be wired from perceptual inputs or from other internal nodes via the edge set $\mathcal{E}_t$.
This definition preserves the properties we want:
Locality: each update depends only on $u_i^{(t)}$, $r_i^{(t)}$, $g_i^{(t)}$, and $\theta_i$.
Binary state: outputs stay in $\{0,1\}$.
Adjustable stochasticity: the per-node parameter $\rho_i$ controls the bias of $r_i^{(t)}$, allowing evolution to tune the level of intrinsic randomness at each node.
Explicit freeze control: the gate bit $g_i^{(t)}$ provides direct control over whether a node updates or maintains its previous state, enabling richer temporal dynamics.
Structural robustness: when $I_i^{(t)}$ changes, we only resize $w_i$ and the construction of $u_i^{(t)}$; $c_i$, $\rho_i$, and $b_i$ remain single scalars.
Snapshot semantics remain as before: all nodes read $s_t$, $x_t$, and their own $r_i^{(t)}$ at the beginning of tick $t$, then update in parallel to produce $s_{t+1}$.
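The following runnable sketch implements this local rule with snapshot semantics: each node reads the frozen tick-$t$ quantities, and the new state vector is assembled in one parallel pass. The parameter packing `(w, c, rho, b)` mirrors the decomposition above; all function names are illustrative.

```python
import random
from typing import Dict, List, Tuple

def heaviside(a: float) -> int:
    return 1 if a >= 0.0 else 0

def update_node(u: List[int], g: int, s_prev: int,
                w: List[float], c: float, rho: float, b: float,
                rng: random.Random) -> int:
    """One tick of a single node: freeze when g == 0, else threshold."""
    if g == 0:
        return s_prev                       # gate closed: copy previous state
    r = 1 if rng.random() < rho else 0      # r_i ~ Bernoulli(rho_i)
    a = sum(wj * uj for wj, uj in zip(w, u)) + c * r + b
    return heaviside(a)

def step(s: List[int], u: Dict[int, List[int]], g: Dict[int, int],
         params: Dict[int, Tuple[List[float], float, float, float]],
         rng: random.Random) -> List[int]:
    """Snapshot semantics: every node reads tick-t data (the precomputed
    u_i and g_i), and s_{t+1} is built in a single pass."""
    return [update_node(u[i], g[i], s[i], *params[i], rng)
            for i in range(len(s))]
```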
2.5 Mutation Primitives and Structural Updates
A defining property of a Neo is that its internal structure is not fixed. Both the topology of its computational graph and the interpretation of its outputs may change over time through discrete mutation events. These mutations are proposed by Evo's mutation policy and applied during the Mutation Phase of each Cycle, subject to available energy.
We introduce a unified set of mutation primitives:
$$\mathcal{M} = \{\texttt{node},\ \texttt{edge},\ \texttt{param},\ \texttt{output}\},$$
where each primitive includes multiple subtypes (addition, removal, or reassignment) defined below. Each mutation type $m \in \mathcal{M}$ has an associated energy cost $c_m$.
All mutations operate locally on the tuple
$$(s_t, \mathcal{E}_t, \Theta_t, O_t)$$
and produce an updated structure consistent with the rules of the Neo's internal graph.
2.5.1 Node Mutation
Node mutations modify the number of internal nodes. A node mutation consists of either adding a new node or removing an existing one.
$+$node
A node-addition mutation introduces a new internal node and increases the dimensionality of the state vector from $n_t$ to $n_t + 1$. Formally,
$$s_t \in \{0,1\}^{n_t} \;\longmapsto\; s_t' \in \{0,1\}^{n_t + 1},$$
where $s_{n_t+1}'$ is initialized to 0 and the new parameter vector $\theta_{n_t+1}$ is drawn from an initialization distribution over $\mathbb{R}^{d}$.
Optionally, Evo may introduce new edges involving the new node:
$$\mathcal{E}_t \;\longmapsto\; \mathcal{E}_t \cup \Delta\mathcal{E},$$
where $\Delta\mathcal{E}$ consists of directed edges into or out of node $n_t + 1$. All index sets and parameter vectors are resized accordingly.
$-$node
A node-removal mutation selects an index $i \in \{1, \dots, n_t\}$ and deletes it. The updated dimensionality becomes $n_t - 1$. All edges incident to $i$ are removed:
$$\mathcal{E}_t \;\longmapsto\; \mathcal{E}_t \setminus \{(j,k) \in \mathcal{E}_t : j = i \text{ or } k = i\}.$$
The corresponding state coordinate $s_i^{(t)}$ and parameter vector $\theta_i$ are removed, and remaining node indices are re-labeled to maintain a contiguous index set. If $i \in O_t$, it is also removed from the output set.
Node removal may disconnect the graph; the result is still considered valid.
2.5.2 Edge Mutation
Edge mutations change information flow by adding or removing directed edges.
$+$edge
Select a pair $(j, i)$ with $(j, i) \notin \mathcal{E}_t$. The edge is added:
$$\mathcal{E}_{t+1} = \mathcal{E}_t \cup \{(j, i)\}.$$
This increases the input dimensionality $k_i$ of node $i$ by one, requiring expansion of its weight vector $w_i$ by appending a new weight drawn from an initialization distribution.
$-$edge
Select an existing edge $(j, i) \in \mathcal{E}_t$ and delete it:
$$\mathcal{E}_{t+1} = \mathcal{E}_t \setminus \{(j, i)\}.$$
The corresponding coordinate is removed from $w_i$, decreasing its input dimensionality.
2.5.3 Parameter Perturbation
A parameter-perturbation mutation updates the continuous parameters of a single node without altering the graph structure. For a selected node $i$:
$$\theta_i \;\longmapsto\; \theta_i + \delta,$$
where $\delta$ is drawn from a zero-mean perturbation distribution on $\mathbb{R}^{d_i}$. All other nodes and edges remain unchanged.
This primitive enables exploration of local computational behaviors.
2.5.4 Output Mutation
Output mutations allow the Neo to change which internal nodes contribute to its prediction vector $y_t$. The output index set at tick $t$ is
$$O_t \subseteq \{1, \dots, n_t\}.$$
We introduce two subtypes.
$+$output
Select a node index $i$ with $i \notin O_t$ and add it to the output set:
$$O_{t+1} = O_t \cup \{i\}.$$
This increases the output dimensionality $k_t = |O_t|$ by one.
$-$output
Select $i \in O_t$ and remove it:
$$O_{t+1} = O_t \setminus \{i\},$$
reducing the output dimensionality $k_t$ by one.
Output mutations allow the Neo to evolve its prediction interface, enabling specialization, pruning, and reallocation of computational resources.
Together, the unified mutation set $\mathcal{M}$ provides a minimal but expressive basis for evolving both the topology and computation of Lio. By associating each primitive with an energy cost $c_m$ and constraining mutations to be affordable at tick $t$, Evo must balance exploration against the Neo's available energy, embedding evolutionary pressure directly into the organism's survival dynamics.
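As an illustration, the sketch below implements two of these primitives against a Lio-like structure (such as the dataclass sketched in Section 2.3.1). The initialization and perturbation distributions, small-scale Gaussians, are arbitrary choices made for the example, as is the assumption that weights sit at the front of each parameter vector.

```python
import random

def add_edge(lio, j: int, i: int, rng: random.Random) -> None:
    """+edge: insert (j, i) into E_t and extend node i's weight vector
    with a freshly initialized weight (assuming weights are stored at
    the front of theta[i])."""
    if (j, i) not in lio.edges:
        lio.edges.add((j, i))
        lio.theta[i].insert(0, rng.gauss(0.0, 0.1))

def perturb_params(lio, i: int, sigma: float, rng: random.Random) -> None:
    """param: add zero-mean Gaussian noise to every entry of theta_i."""
    lio.theta[i] = [p + rng.gauss(0.0, sigma) for p in lio.theta[i]]
```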
2.6 The Cycle: Operational Semantics
We now describe how a Neo evolves from tick $t$ to tick $t+1$. The Cycle specifies the order in which perception, internal computation, reward, energy update, and mutation occur. All quantities are understood to be conditioned on the current internal state
$$N_t = (\mathrm{Lio}_t, \mathrm{Evo}_t, E_t)$$
and the external world state $V_t$.
For readability, we keep the description at a single-Neo level; in later chapters, populations of Neos will be handled by applying the same rules to each individual.
2.6.1 Perception
At the beginning of tick $t$, the Neo perceives the NeoVerse through the projection function introduced in Section 2.2. The world is in state $V_t$, and the percept is
$$x_t = P_t(V_t).$$
This value is written into Lio's input component, so that $\mathrm{Lio}_t$ carries $x_t$, with $m_t$ matching the current projection of the NeoVerse.
2.6.2 Internal Computation and Output
Given $s_t$, $x_t$, the edge set $\mathcal{E}_t$, and parameters $\Theta_t$, Lio updates its internal state and produces an output.
For each node index $i \in \{1, \dots, n_t\}$:
Construct the input index set $I_i^{(t)}$ and the corresponding binary vector
$$u_i^{(t)} \in \{0,1\}^{k_i}$$
by reading from $s_t$ and $x_t$.
Read the gate bit $g_i^{(t)}$ from the node's inputs (which may come from $x_t$ or $s_t$ via the edge set $\mathcal{E}_t$).
Sample a stochastic bit using the node's noise bias parameter:
$$r_i^{(t)} \sim \mathrm{Bernoulli}(\rho_i),$$
where $\rho_i$ is stored in $\theta_i$.
Update the node's binary state using the local rule:
If $g_i^{(t)} = 0$, set $s_i^{(t+1)} = s_i^{(t)}$ (freeze).
If $g_i^{(t)} = 1$, compute the activation using the node's parameters:
$$a_i^{(t)} = w_i^\top u_i^{(t)} + c_i\, r_i^{(t)} + b_i,$$
and set
$$s_i^{(t+1)} = H\big(a_i^{(t)}\big),$$
where $H$ is the Heaviside step function.
We adopt snapshot semantics: all nodes read $s_t$ and $x_t$ and their own $r_i^{(t)}$ at the start of tick $t$, and all updates to $s_t$ are conceptually applied in parallel.
The output vector is defined as a direct readout of the internal state at the output indices:
$$y_t = \big(s_i^{(t+1)}\big)_{i \in O_t}.$$
Thus at tick $t$, the Neo produces a prediction $y_t$ based on its internal state and the current percept, while its internal memory is updated to $s_{t+1}$ for use at the next tick.
2.6.3 Running Cost and Energy Deduction
Executing the internal computation incurs a running cost that depends on the size of the Neo’s active structure. We introduce a cost function
$$c_{\mathrm{run}}(n_t, m_t) \ge 0,$$
which may, for example, depend on the number of internal nodes and input bits. A simple choice is
$$c_{\mathrm{run}}(n_t, m_t) = \alpha + \beta\, n_t + \gamma\, m_t,$$
with non-negative constants $\alpha, \beta, \gamma$. The energy after this deduction is
$$E_t' = E_t - c_{\mathrm{run}}(n_t, m_t).$$
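For instance, with illustrative values $\alpha = 0.01$, $\beta = 0.001$, and $\gamma = 0.001$, a Neo with $n_t = 20$ internal nodes and $m_t = 4$ input bits pays $0.01 + 0.02 + 0.004 = 0.034$ units of Nex per tick, and must therefore earn at least this much in Sparks on average to avoid a negative energy drift.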
If $E_t' \le 0$, the Neo has exhausted its energy and becomes inert; its trajectory terminates, and no further computation or mutation occurs.
2.6.4 Reward (Spark) and Energy Update
After Lio has produced $y_t$ and updated its internal state, the NeoVerse advances to the next tick. The world transitions to $V_{t+1}$ according to its own dynamics, and the Neo receives a new percept
$$x_{t+1} = P_{t+1}(V_{t+1}).$$
The quality of the Neo’s prediction is assessed by a reward function
$$R : \{0,1\}^{k_t} \times \{0,1\}^{m_{t+1}} \to \mathbb{R},$$
which compares $y_t$ to $x_{t+1}$. We write the resulting reward (Spark) as
$$R_t = R(y_t, x_{t+1}).$$
The Neo’s energy is then updated to
$$E_t'' = E_t' + R_t.$$
The specific form of $R$ can vary with the environment; in many examples it will reward accurate prediction of selected components of $x_{t+1}$ and penalize systematic errors. For the formal model, it is enough to assume that $R$ is well-defined and can be evaluated from $y_t$ and $x_{t+1}$.
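One simple example of such a reward function, offered as an illustration rather than the prescribed choice, scores each predicted bit against the corresponding new percept bit:

```python
def spark(y: list, x_next: list) -> float:
    """Example R(y_t, x_{t+1}): +1 per correct bit, -1 per error,
    over the channels the output and percept have in common."""
    return sum(1.0 if yi == xi else -1.0 for yi, xi in zip(y, x_next))
```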
2.6.5 Mutation Phase
If $E_t'' > 0$, Evo may attempt to modify Lio’s structure or parameters. At tick $t$, Evo’s policy $\pi_t$ can inspect the current internal state and energy
$$\big(\mathrm{Lio}_t,\ \zeta_t,\ E_t''\big)$$
and select a (possibly empty) finite sequence of mutation primitives
$$\mu_1, \mu_2, \dots, \mu_J \in \mathcal{M}$$
to be applied to $\mathrm{Lio}_t$.
Each mutation type $m \in \mathcal{M}$ has an associated non-negative energy cost
$$c_m \ge 0.$$
Let
$$C = \sum_{j=1}^{J} c_{\mu_j}$$
be the total cost of the proposed mutations. Evo can only apply mutations up to the available energy. Formally, the sequence of mutations is truncated, if necessary, at the largest prefix $J^\star \le J$ that satisfies
$$\sum_{j=1}^{J^\star} c_{\mu_j} \le E_t''.$$
The truncated prefix is then applied in order, yielding updated structural and parametric components (which we still denote by $s_{t+1}$, $\mathcal{E}_{t+1}$, and $\Theta_{t+1}$ for simplicity). All bookkeeping on the index sets $I_i$ and $O_t$ required to maintain consistency with the new structure is treated as part of the mutation operation.
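Operationally, the largest affordable prefix can be found greedily, as in this sketch; `proposals` pairs each mutation (encoded as a zero-argument callable) with its cost $c_m$, an encoding assumed only for the example.

```python
from typing import Callable, List, Tuple

def apply_affordable(proposals: List[Tuple[Callable[[], None], float]],
                     energy: float) -> float:
    """Apply the largest prefix of proposed mutations whose cumulative
    cost fits within the energy budget; return the remaining energy."""
    for mutate, cost in proposals:
        if cost > energy:
            break          # truncation: this and all later proposals are dropped
        mutate()
        energy -= cost
    return energy
```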
The final energy after mutation is
$$E_{t+1} = E_t'' - \sum_{j=1}^{J^\star} c_{\mu_j},$$
and the Neo's internal state at tick $t+1$ is
$$N_{t+1} = \big(\mathrm{Lio}_{t+1}, \mathrm{Evo}_{t+1}, E_{t+1}\big),$$
where $x_{t+1}$ will be derived from $V_{t+1}$ via $P_{t+1}$ at the next computation step.
2.6.6 Summary of One Cycle
Putting the pieces together, one full Cycle from tick $t$ to tick $t+1$ consists of:
Perception: observe $x_t = P_t(V_t)$.
Computation: update $s_t \to s_{t+1}$ and produce $y_t$ via local node rules.
Running cost: deduct $c_{\mathrm{run}}(n_t, m_t)$ to obtain $E_t'$.
World update and reward: compute $x_{t+1}$ and reward $R_t = R(y_t, x_{t+1})$, yielding energy $E_t'' = E_t' + R_t$.
Mutation (optional): Evo selects and applies affordable mutations from $\mathcal{M}$, updating $\mathrm{Lio}_t$ to $\mathrm{Lio}_{t+1}$ and reducing energy to $E_{t+1}$.
Termination check: if $E_{t+1} \le 0$, the Neo becomes inert; otherwise, the Cycle repeats.
This operational definition provides a complete, minimal description of how a single Neo interacts with the NeoVerse, computes, earns or loses energy, and modifies its own structure over time. In the next section, we introduce a performance measure that summarizes how efficiently a Neo converts structure and energy into predictive success.
2.7 Performance Measures
The formal model of Neosis defines a complete energy trajectory
$$E_0, E_1, E_2, \dots$$
for each Neo interacting with a given NeoVerse. This trajectory already combines prediction rewards and structural costs, so we do not introduce an additional ratio of "reward over cost." Instead, we summarize performance with a primary long-term measure—Survivability—that captures the probability of sustained survival, along with two transient measures that describe short-term behavior.
2.7.1 Survivability
The Survivability of a Neo is defined as the probability that it maintains positive energy indefinitely in a given NeoVerse. Formally, for a Neo with initial energy $E_0 > 0$ and energy trajectory $(E_t)_{t \ge 0}$, we define
$$\mathrm{Surv} = \Pr\big[E_t > 0 \ \text{for all } t \ge 0\big].$$
This measure captures the long-term viability of a Neo's structure and parameters in its environment. A Neo with high survivability has evolved a configuration that, on average, maintains a positive energy drift over time, allowing it to persist indefinitely despite stochastic fluctuations. In contrast, a Neo with low survivability is doomed to eventual extinction, even if it may survive for extended periods due to favorable short-term fluctuations.
Survivability depends on the interplay between the Neo's predictive accuracy, its structural costs, and the statistics of the NeoVerse. In later chapters, we will show how survivability can be analyzed through the energy drift and variance, revealing critical phase transitions between certain extinction and possible long-term survival.
2.7.2 Transient Measures: Lifetime and Vitality
While survivability captures long-term prospects, two transient measures provide insight into short-term performance:
Lifetime $T$: A Neo is considered alive at tick $t$ if its energy is strictly positive, $E_t > 0$. Once its energy reaches zero, it becomes inert and can no longer compute or mutate. We define the lifetime as
$$T = \sup\{t \ge 0 : E_t > 0\},$$
the last tick at which the Neo is still alive. Lifetime measures how long a Neo persists in a single run, but it is a transient quantity: even a Neo with zero survivability may achieve a long lifetime in a particular trajectory due to favorable noise realizations.
Vitality: We define the Vitality of a Neo as the maximum energy it attains over its lifetime:
$$\mathrm{Vit} = \max_{0 \le t \le T} E_t.$$
Vitality quantifies how energetically "alive" a Neo becomes during its existence, reflecting its ability to accumulate energy reserves. Like lifetime, vitality is transient: it describes a single trajectory and does not directly predict long-term survival.
In most analyses, we will use survivability as the primary performance measure for assessing the long-term viability of Neo configurations. The transient measures provide complementary information about short-term behavior and can be useful for understanding individual trajectories, but they do not capture the fundamental question of whether a Neo can persist indefinitely in its environment.
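Since Survivability is defined over an infinite horizon, in practice it is estimated by Monte Carlo over a finite horizon. The self-contained sketch below does this for a stylized energy trajectory modeled as a biased random walk; the drift-plus-noise walk is a stand-in for reward-minus-cost dynamics, and all numeric values are illustrative.

```python
import random
from typing import List, Tuple

def energy_trajectory(E0: float, drift: float, noise: float,
                      t_max: int, rng: random.Random) -> List[float]:
    """One trajectory E_0, E_1, ... until inertness or the horizon t_max."""
    traj = [E0]
    while len(traj) <= t_max and traj[-1] > 0:
        traj.append(traj[-1] + drift + rng.gauss(0.0, noise))
    return traj

def estimate_measures(E0: float = 5.0, drift: float = 0.05, noise: float = 1.0,
                      t_max: int = 10_000, runs: int = 1_000,
                      seed: int = 0) -> Tuple[float, float, float]:
    """Finite-horizon estimates of Survivability, mean Lifetime, mean Vitality."""
    rng = random.Random(seed)
    survived, lifetime_sum, vitality_sum = 0, 0.0, 0.0
    for _ in range(runs):
        traj = energy_trajectory(E0, drift, noise, t_max, rng)
        alive = traj[-1] > 0
        survived += int(alive)
        lifetime_sum += t_max if alive else len(traj) - 2  # last alive tick
        vitality_sum += max(traj)
    return survived / runs, lifetime_sum / runs, vitality_sum / runs
```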
2.8 Rationale for the Neo Structure
The formal model above makes a specific set of design choices: a Neo is an evolving directed graph over binary node states, with continuous local parameters, one stochastic bit per node, and an explicit separation between fast computation (Lio) and slower structural change (Evo). In this section we briefly justify these choices and relate them to both artificial neural networks and biological synapses.
2.8.1 Relation to Neurons and Synapses
At the level of a single node, the update rule
$$s_i^{(t+1)} = \begin{cases} s_i^{(t)} & \text{if } g_i^{(t)} = 0,\\[2pt] H\big(w_i^\top u_i^{(t)} + c_i\, r_i^{(t)} + b_i\big) & \text{if } g_i^{(t)} = 1, \end{cases}$$
is deliberately close to a threshold neuron: it combines a weighted sum of inputs with a bias and then applies a nonlinearity, with an explicit gate mechanism that allows nodes to freeze their state. The directed edges play the role of synapses, determining which nodes can influence which others, and the continuous parameters $\theta_i$ determine the strength and sign of those influences, as well as the bias of the stochastic input.
The key differences from a standard artificial neuron are:
Binary internal state: node outputs live in $\{0,1\}$, making the local state space as simple as possible while still allowing rich global dynamics through the network.
Evolving topology: the edge set $\mathcal{E}_t$ is not fixed. Nodes and edges can be added or removed, unlike conventional ANNs where the graph is chosen once and trained only in weight space.
Explicit energy accounting: each run and mutation is charged against $E_t$, tying “synaptic complexity” and structural changes directly to survival.
This makes each node loosely analogous to a neuron with a discrete firing state and continuously tunable synaptic efficacy, while Evo provides a separate mechanism more reminiscent of developmental or evolutionary processes acting on circuitry over longer timescales.
2.8.2 Why Stochasticity at Each Node?
The inclusion of a stochastic bit per node is intentional rather than cosmetic. Even with binary inputs fixed, the activation
$$a_i^{(t)} = w_i^\top u_i^{(t)} + c_i\, r_i^{(t)} + b_i$$
can change from tick to tick through $r_i^{(t)}$, and thus the output can fluctuate. The per-node parameter $\rho_i$ controls the bias of this stochastic input, allowing evolution to tune the level of intrinsic randomness at each node independently.
This local randomness serves several purposes:
Exploration in parameter and structure space: stochastic node outputs can cause different sequences of rewards under the same environment, which in turn biases Evo's choices of mutations. This provides an intrinsic exploration mechanism without needing an additional external noise process at the level of Evo.
Symmetry breaking: in purely deterministic systems, structurally identical Neos placed in identical environments would follow identical trajectories. The per-node stochasticity allows initially identical Neos to diverge, supporting richer population-level dynamics without complicating the deterministic part of the update rule.
Modeling stochastic environments: many NeoVerses are inherently noisy. Allowing internal computations to incorporate randomness makes it easier for Neos to represent and approximate stochastic mappings from past percepts to future outcomes, rather than being restricted to deterministic input–output relationships.
Adjustable noise levels: the parameter $\rho_i$ enables nodes to evolve different stochasticity profiles. A node with $\rho_i = 0.5$ provides balanced exploration, while $\rho_i$ near 0 or 1 produces more deterministic behavior, allowing the Neo to balance exploration and exploitation at the node level.
Crucially, the stochasticity is added in the simplest possible way: a single Bernoulli bit enters linearly with weight $c_i$. This keeps the local rule analytically tractable while still providing a source of randomness that can be up- or down-weighted by evolution (through changes in $c_i$) and bias-tuned through the parameter $\rho_i$.
2.8.3 Minimality and Extensibility
The overall structure of a Neo is chosen to be minimal but extensible:
Minimal substrate: all observable states (internal, input, output) are binary, and all structure is encoded in a finite directed graph $\mathcal{E}_t$ and parameter set $\Theta_t$. This keeps the state space simple and makes it easy to reason about limits such as small-Neo behavior or single-node dynamics.
Continuous parameters with discrete structure: separating discrete topology from continuous parameters allows us to treat structural mutations ($+$node, $-$node, $+$edge, $-$edge) and local parametric changes (param) within a single framework. Conventional ANNs appear as a special case where $\mathcal{E}_t$ is fixed and only parameter updates are allowed.
Clean energy coupling: by charging both running cost and mutation cost directly to $E_t$, every aspect of the Neo’s complexity—depth, width, connectivity, and rate of structural change—becomes subject to selection pressure through Survivability, Lifetime, and Vitality. There is no separate, ad hoc regularizer.
Straightforward generalizations: the current node rule is threshold-based, but replacing $H$ by another nonlinearity, or allowing continuous-valued node states, requires only local modifications to Section 2.4. The rest of the framework (structure, energy, mutation, Cycle) remains unchanged.
In summary, the chosen Neo structure sits deliberately between biological inspiration and mathematical simplicity. It is close enough to a network of stochastic threshold neurons with evolving synapses to be cognitively meaningful, yet minimal enough to support precise analysis of survival, evolution, and emergent computation in the subsequent chapters.
2.9 Neo and Other Computational Models
To situate Neo within the broader landscape of computational models, we compare it with several established frameworks: recurrent neural networks, Hopfield networks, McCulloch–Pitts threshold networks, spiking neural networks, and probabilistic finite automata. These comparisons highlight both the mathematical connections and the distinctive features that make Neo a unique computational object.
2.9.1 Neo and Recurrent Neural Networks
RNNs provide continuous-state dynamics of the form
$$h_{t+1} = \phi(W h_t + U x_t + b),$$
with a smooth nonlinearity $\phi$ (tanh, ReLU). Neo shares with RNNs a dependence on recurrent structure and a parallel update pattern, but differs fundamentally in its discrete state space and nondifferentiable threshold nonlinearity.
Replacing $\phi$ with a Heaviside function and allowing intrinsic noise turns the Neo update into a discrete analogue of an RNN cell:
$$s_i^{(t+1)} = H\big(w_i^\top u_i^{(t)} + c_i\, r_i^{(t)} + b_i\big),$$
except that Neo's rule includes the freeze gate $g_i^{(t)}$, which forces exact state persistence when $g_i^{(t)} = 0$, something ordinary RNNs do not structurally encode except via learned sigmoid gates in LSTM/GRU architectures. The presence of a tunable perturbation $c_i\, r_i^{(t)}$ shifts the Neo closer to a stochastic recurrent automaton rather than a differentiable dynamical system.
2.9.2 Neo and Hopfield Networks
A classical Hopfield update is
$$s_i^{(t+1)} = H\Big(\sum_j W_{ij}\, s_j^{(t)} - \vartheta_i\Big),$$
operating under symmetric weights $W_{ij} = W_{ji}$ to guarantee an energy-minimization principle. Neo resembles Hopfield units in its binary thresholding, yet diverges sharply through directed connectivity, stochastic activations, external inputs, and freeze gating. If one removes stochasticity ($c_i = 0$), removes freeze, and enforces weight symmetry, the Neo update collapses toward Hopfield-like behavior. But with $c_i \neq 0$ and arbitrary directed edges (Section 2.4.1), Neo is no longer confined to gradient descent on a Lyapunov energy; its transitions instead form a probabilistic threshold dynamical system unconstrained by symmetry or convergence guarantees.
2.9.3 Neo and McCulloch–Pitts Threshold Networks
The closest mathematical ancestor of Neo is the McCulloch–Pitts neuron:
$$y = H\Big(\sum_j w_j\, u_j - \vartheta\Big).$$
Neo extends this rule in two orthogonal directions. First, the stochastic term $c_i\, r_i^{(t)}$ introduces controlled randomness into the activation function, keeping the local rule analytically simple but probabilistically expressive. Second, the freeze gate turns the update into a conditional assignment:
$$s_i^{(t+1)} = \begin{cases} s_i^{(t)} & \text{if } g_i^{(t)} = 0,\\[2pt] H\big(a_i^{(t)}\big) & \text{if } g_i^{(t)} = 1, \end{cases}$$
which makes Neo nodes capable of behaving like latches or memory elements independent of the weighted sum. Unlike fixed McCulloch–Pitts networks, Neo's connectivity evolves (Section 2.5), so the set of inputs $I_i^{(t)}$ is a dynamic quantity. This combination yields a threshold unit that is both structurally fluid and stochastically parameterized.
2.9.4 Neo and Spiking Neural Networks
A typical spiking neuron satisfies
$$v_i^{(t+1)} = \lambda\, v_i^{(t)} + \sum_j w_{ij}\, s_j^{(t)}, \qquad s_i^{(t+1)} = \mathbf{1}\big[v_i^{(t+1)} \ge \vartheta\big],$$
where $v_i$ is a continuous membrane potential. Superficially, both SNNs and Neos emit binary spikes, but their internal mechanisms differ: SNNs rely on temporal integration and threshold crossing, whereas Neo updates instantaneously with no membrane accumulation. Noise in SNNs often appears as probabilistic spike generation conditioned on $v_i$; Neo's stochasticity is structurally simpler—the noise bit enters linearly through $c_i\, r_i^{(t)}$.
The freeze gate has no analogue in standard SNNs, which lack native state-holding operators. Thus Neo achieves a spike-like binary output through a fundamentally different, purely threshold-based local rule with explicitly programmable persistence.
2.9.5 Neo and Probabilistic Finite Automata
A probabilistic finite automaton (PFA) uses a transition kernel
$$\Pr\big[s_{t+1} = s' \mid s_t = s,\; x_t = x\big],$$
defining state changes as conditional probabilities. When one considers the global Neo state vector $s_t \in \{0,1\}^{n_t}$, the Neo update induces exactly such a kernel: for each node,
$$\Pr\big[s_i^{(t+1)} = 1 \mid u_i^{(t)},\, g_i^{(t)}\big],$$
where the freeze case $g_i^{(t)} = 0$ deterministically preserves the previous state. Because $r_i^{(t)} \in \{0,1\}$, when $g_i^{(t)} = 1$ these probabilities take closed analytic form:
$$\Pr\big[s_i^{(t+1)} = 1\big] = \rho_i\, H\big(w_i^\top u_i^{(t)} + c_i + b_i\big) + (1 - \rho_i)\, H\big(w_i^\top u_i^{(t)} + b_i\big).$$
The Neo therefore behaves precisely as a parametric probabilistic finite-state machine, where weights define implicit transition probabilities and freeze introduces deterministic self-loops. Unlike classical PFA transitions, these probabilities depend smoothly on the weight vector and bias, giving Neo both the interpretability of automata and the expressiveness of parameterized nonlinear models.
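A direct transcription of this closed form, assuming the update branch $g_i^{(t)} = 1$, is shown below; marginalizing the Bernoulli bit yields a two-term mixture of Heaviside evaluations. The function name is illustrative.

```python
from typing import List

def p_fire(w: List[float], u: List[int], c: float, rho: float, b: float) -> float:
    """P[s_i^{t+1} = 1 | u_i, g_i = 1]: mixture over the noise bit r."""
    base = sum(wj * uj for wj, uj in zip(w, u)) + b
    H = lambda a: 1.0 if a >= 0.0 else 0.0
    return rho * H(base + c) + (1.0 - rho) * H(base)
```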
2.9.6 Synthesis
Across these comparisons, Neo emerges not as a variant of any single established computational model but as a synthesis of their core mathematical motifs. The update rule retains the threshold simplicity of McCulloch–Pitts neurons while admitting the stochastic richness of probabilistic automata. It mirrors RNN recurrence but without continuous states or differentiability, and it resembles spiking networks in its discreteness without adopting their temporal membrane dynamics. The freeze gate, in particular, introduces a structural control mechanism absent from all these systems, giving Neo a formal ability to preserve state independent of ongoing computation.
This combination of binary substrate, stochastic thresholding, dynamic graph structure, and programmable persistence distinguishes Neo as a computational object whose nearest relatives lie in the intersection of threshold networks and probabilistic automata.