An underexplored representation in mathematics. A single expression that secretly holds many values: write four numbers, describe eight; write eight, describe a hundred and twenty-eight. Once you see it, you start noticing places it might be useful.
Every ± splits a value in two. Chain them, and the count doubles each time. The notation stays compact while the set it represents grows exponentially.
5
5 ± 2
5 ± 2 ± 1
5 ± 2 ± 1 ± 0.5
Build an expression, fiddle with the numbers, watch the branching fan out. The tree on the right shows every value the notation represents.
A single expression with n copies of ± represents 2ⁿ values, but you only ever write down n + 1 numbers. Short notation, big set.
Every ± doubles the size of the underlying set.
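The doubling is easy to make concrete. A minimal sketch in Python (function name mine, nothing official): expand centre ± s₁ ± s₂ ± … into its full set of values by trying every sign combination.

```python
from itertools import product

def duplex_values(centre, spreads):
    """Expand a duplex number centre +/- s1 +/- s2 +/- ... into its 2^n values."""
    return sorted(
        centre + sum(sign * s for sign, s in zip(signs, spreads))
        for signs in product((+1, -1), repeat=len(spreads))
    )

print(duplex_values(5, [2]))          # [3, 7]
print(duplex_values(5, [2, 1]))       # [2, 4, 6, 8]
print(duplex_values(5, [2, 1, 0.5]))  # 8 values
```

Each extra spread doubles the length of the list: two, four, eight values from one, two, three ± signs.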
Any duplex number corresponds to a symmetric distribution of values, and two classical statistics fall out of the notation directly, without expanding the set.
The centre term. Everything else branches symmetrically around it, so it's always the mean.
Sum of squared spreads. Each ± adds its own squared contribution, independent and Pythagorean.
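Both statistics can be read off the notation and checked against a brute-force expansion of all 2ⁿ values. A quick sketch (names mine):

```python
from itertools import product

def duplex_stats(centre, spreads):
    """Mean and variance read straight off the notation:
    mean = centre, variance = sum of squared spreads."""
    return centre, sum(s * s for s in spreads)

def brute_force_stats(centre, spreads):
    """The same statistics by expanding all 2^n values."""
    vals = [centre + sum(sign * s for sign, s in zip(signs, spreads))
            for signs in product((+1, -1), repeat=len(spreads))]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return mean, var

print(duplex_stats(5, [2, 1]))       # (5, 5)
print(brute_force_stats(5, [2, 1]))  # (5.0, 5.0)
```

For 5 ± 2 ± 1 the expanded set is {2, 4, 6, 8}: mean 5, population variance (9 + 1 + 1 + 9)/4 = 5 = 2² + 1², exactly as the notation says.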
The reverse direction is more interesting: any symmetric distribution over a finite, power-of-two set of values can be written as a duplex number.
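The reverse direction can be sketched as a recursive decomposition (function name mine; it assumes the upper half of the sorted set is an exact translate of the lower half, which is what duplex symmetry requires):

```python
def to_duplex(values):
    """Write a power-of-two set of values as (centre, [spreads]),
    i.e. centre +/- s1 +/- s2 +/- ..., largest spread first.
    Raises ValueError if the set is not duplex-symmetric."""
    values = sorted(values)
    if len(values) == 1:
        return values[0], []
    half = len(values) // 2
    lo, hi = values[:half], values[half:]
    shift = hi[0] - lo[0]
    if any(h - l != shift for l, h in zip(lo, hi)):
        raise ValueError("not a duplex-symmetric set")
    # Fold the two halves onto their midline and recurse.
    centre, spreads = to_duplex([l + shift / 2 for l in lo])
    return centre, [shift / 2] + spreads

print(to_duplex([2, 4, 6, 8]))  # (5.0, [2.0, 1.0]), i.e. 5 +/- 2 +/- 1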
In ordinary arithmetic, "1 = 0" is just false; you can't write it without the whole system catching fire. But if a duplex number represents its values simultaneously rather than as alternatives, the expression 0.5 ± 0.5 is a number that is somehow both 0 and 1 at the same time.
The same 0.5 ± 0.5 answers a puzzle that mathematicians have argued over since 1703: what is the sum of 1 − 1 + 1 − 1 + 1 − 1 + …? Stop after an even number of terms and you get 0. Stop after an odd number and you get 1. Different summation methods give different answers, and the Cesàro sum (the limit of the running averages of the partial sums) lands on ½. The duplex number encodes all three answers in one expression: centre ½, reaching both 0 and 1.
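The three answers are easy to replay numerically. A small sketch (function name mine): generate the partial sums of Grandi's series and average them.

```python
def partial_sums(n):
    """First n partial sums of Grandi's series 1 - 1 + 1 - 1 + ..."""
    s, out = 0, []
    for k in range(n):
        s += (-1) ** k
        out.append(s)
    return out

ps = partial_sums(10)
print(ps)                 # [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
print(sum(ps) / len(ps))  # 0.5 -- the Cesaro average over an even window
```

The partial sums only ever touch 0 and 1, and their average sits at ½: the three ingredients of the duplex 0.5 ± 0.5.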
A concrete place where the ± notation earns its keep: finding eigenvalues. Type numbers into the matrix and watch the eigenvalues compute themselves as a duplex number.
Write the main diagonal as the duplex number a ± b and the anti-diagonal as c ± d. When the spreads match (b = d), the eigenvalues are just a ± c. Otherwise: a ± √(b² + c² − d²). Both eigenvalues sit in a single duplex expression.
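The formula is easy to sanity-check in code. A minimal sketch (function name mine; it assumes b² + c² ≥ d², i.e. real eigenvalues):

```python
import math

def duplex_eigenvalues(p, q, r, s):
    """Eigenvalues of [[p, q], [r, s]] as a duplex (centre, spread).

    Rewrite the main diagonal as a +/- b and the anti-diagonal as
    c +/- d; the eigenvalues are then a +/- sqrt(b^2 + c^2 - d^2).
    Assumes b^2 + c^2 >= d^2 (real eigenvalues).
    """
    a, b = (p + s) / 2, (p - s) / 2
    c, d = (q + r) / 2, (q - r) / 2
    return a, math.sqrt(b * b + c * c - d * d)

# [[2, 1], [1, 2]] has eigenvalues 1 and 3, i.e. the duplex 2 +/- 1.
print(duplex_eigenvalues(2, 1, 1, 2))  # (2.0, 1.0)
```

Note that c² − d² = qr, so this is the usual trace/determinant formula (p + s)/2 ± √(((p − s)/2)² + qr) in duplex clothing.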
The equation y = mx + (b ± d) describes a whole family of parallel lines in a single expression. Drag the sliders to see how a duplex coefficient becomes a pattern of parallel lines.
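In code the expansion is one line (function name mine): a single ± in the intercept splits the equation into two lines with the same slope.

```python
def line_family(m, b, d):
    """Expand y = m*x + (b +/- d) into (slope, intercept) pairs."""
    return [(m, b + d), (m, b - d)]

print(line_family(2, 1, 0.5))  # [(2, 1.5), (2, 0.5)] -- same slope, two intercepts
```

Chaining more ± terms in the intercept would double the family again at each step, just as with any other duplex.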
Applications where holding multiple values as a single identity is the whole point.
A digital logic line is simultaneously 0 and 1 during verification: 0.5 ± 0.5. Propagate a single duplex input through an arithmetic circuit and check every combination at once, by shared identity rather than enumeration.
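A sketch of the idea, with explicit enumeration standing in for the shared identity the duplex would carry implicitly (the full adder is my choice of example circuit, not from the text):

```python
from itertools import product

DUPLEX_BIT = (0, 1)  # the two branches of 0.5 +/- 0.5

def check_full_adder():
    """Verify a full adder on every branch of three duplex-bit inputs."""
    for a, b, cin in product(DUPLEX_BIT, repeat=3):
        s = a ^ b ^ cin                         # sum bit
        cout = (a & b) | (a & cin) | (b & cin)  # carry-out bit
        assert 2 * cout + s == a + b + cin      # matches integer addition
    return True

print(check_full_adder())  # True
```

Three duplex inputs cover all 2³ = 8 input combinations; a duplex-native evaluator would carry those branches through the gates as one object instead of looping.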
Any time you want two or more real values to share a single algebraic identity: exploratory maths, teaching modular-arithmetic-like ideas, or building DSLs where "these are the same thing" is a first-class concept.
Results that naturally come in symmetric pairs (or powers of two) can be written as a single duplex rather than a set. Eigenvalues of 2×2 matrices, roots of even-degree polynomials, symmetric solutions to equations.
Building conceptual intuition for superposition without the Hilbert-space machinery. A duplex number holds multiple classical values as one identity, acting as a teaching bridge to quantum states, and possibly a lightweight notation for reasoning about classical analogues of quantum algorithms.
Running a model on a duplex input evaluates every branch at once under a single identity. For symbolic reasoning, search over discrete choices, or batched inference where the branches represent alternative continuations, the notation could give a compact way to carry multiplicity through a computation.
The simultaneity framing is young, and the applications above are suggestions rather than finished stories. If the idea of "values sharing an identity" clicks with something you're working on, that's worth following.
The closest established techniques are interval arithmetic and affine arithmetic (x₀ + x₁ε₁ + x₂ε₂ + … with each εᵢ ∈ [−1, 1]). Both track uncertainty: the ± represents a range or noise term, and the goal is to approximate where a quantity might be.
Duplex numbers under the simultaneity reading work differently. The ± is a discrete branching that produces an exact finite set of values, all held as a single identity. "0.5 ± 0.5 represents 0 and 1" is not saying the value is uncertain; it's saying both values share one algebraic identity and travel together through arithmetic.
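The contrast fits in a couple of lines (function names mine): the same centre and spread, read two different ways.

```python
def interval_reading(centre, spread):
    """Interval arithmetic: +/- names a continuous range of possibilities."""
    return (centre - spread, centre + spread)

def duplex_reading(centre, spread):
    """Duplex reading: +/- names an exact two-element set, one identity."""
    return {centre - spread, centre + spread}

print(interval_reading(0.5, 0.5))  # (0.0, 1.0) -- every value in between counts
print(duplex_reading(0.5, 0.5))    # the exact set {0.0, 1.0}, nothing in between
```

The interval says "somewhere between 0 and 1"; the duplex says "exactly 0 and exactly 1, travelling together".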
That puts duplex numbers closer to equivalence classes in modular arithmetic (where 3 and 8 are the same, mod 5), multi-sheeted functions in complex analysis, or the superposition intuition from quantum mechanics, than to any interval-style framework. As far as I know, no established part of mathematics treats symmetric branching sets as arithmetic objects with their own algebra and notation. If you know of prior work, please tell me. If you don't, perhaps that's an invitation.