Physical and Philosophical Complexities of Randomness

William Cabell
Oct 18, 2023


The results of a double-slit experiment showing the interference patterns generated by single electrons due to their wavelike behavior.¹

In 1926, Albert Einstein wrote a letter to his fellow physicist Max Born stating: “quantum mechanics … says a lot, but does not really bring us any closer to the secret of the ‘Old One.’ I, at any rate, am convinced that He is not playing at dice.”² Einstein rejected the suggestion — a suggestion that follows naturally from the mathematics of quantum mechanics — that chance had any causal influence on our universe. His unwillingness to accept the new paradigm of a random world left him increasingly at odds with professional physicists, and it often appears in histories as an example of the stubborn conservatism of old age. But such an interpretation is hardly fair to Einstein: the existence (or non-existence) of randomness carries deep philosophical consequences that cannot be lightly cast aside. And as is typical in the domain of philosophy, we cannot expect a simple resolution to such a question. Yet randomness is ubiquitous in modern science, technology, and mathematics. Despite the philosophical quandary randomness presents, we have found ways to work with it practically.

Randomness in Theory

Philosophers have wrestled with the idea of randomness for at least as long as written records of philosophy have existed. The common idea of randomness presupposes indeterminism. Of course, many philosophers do not accept indeterminism. Plato, as well as the ancient Greek Stoics, posited theories of universal causal determinism: everything has a cause, and nothing is random. Adherents of the Abrahamic family of religions worship an omniscient God; what does indeterminism mean in this context? We inevitably arrive at the paradoxes of omniscience and free will. Many solutions to both paradoxes have been proposed over centuries of theology, but they still elude broadly accepted explanation.

Modern discussion of randomness centers on the implications of quantum mechanics. Quantum mechanics models the behavior of the quanta — the discrete, indecomposable units — of energy that comprise the elementary particles of our universe: photons, electrons, and quarks, among other, more exotic particles. One of quantum mechanics’ seminal results, the Heisenberg uncertainty principle, destroys any notion of determinism in a practical sense. It states that there is a mathematical limit to the precision with which we can simultaneously measure the position and momentum of any object.³ This is a direct and natural result of the way that quantum mechanics models objects: they are not modeled as classical point-masses, but instead are granted some far less intuitive properties. In particular, at a microscopic scale, particles behave like waves in a number of ways.⁴ Wave-particle duality and the uncertainty principle have been repeatedly confirmed by more than a hundred years of experimentation, so whatever pieces of quantum theory are still missing are unlikely to invalidate them.

The uncertainty principle is actually a much more general result than this: it also asserts that there are other pairs of properties that, similarly, cannot both be known to infinite precision. But position and momentum are the fundamental quantities of classical mechanics — the Newtonian theory of motion that, until broken by the behavior of quantum particles, appeared as if it might be able to model the complete universe by a set of wholly deterministic laws. Given the assumptions of classical mechanics and perfectly precise knowledge of the position and momentum of every particle in a system, one can theoretically predict all future states of the system⁵ — even if that system is the entire universe. When the Heisenberg uncertainty principle exposed the invalidity of the assumption that it is possible to gain perfect knowledge of both the position and momentum of a particle, it removed any chance of predicting, with perfect confidence, the future state of any system — this is what I mean when I say it destroyed practical determinism.

Although it may seem like the uncertainty principle upends determinism entirely, this is not necessarily the case. The first defense of determinism against the uncertainty principle is a hidden-variable theory: that there are variables we haven’t yet found that dictate the outcome of any measurement we take. These variables may, due to the laws of quantum mechanics, be physically impossible to observe — “hidden”. But if they exist, then we can still have a deterministic universe in our system of philosophy, and enjoy the relief of living in a determined world, perhaps even free from all moral consequences. Albert Einstein, among others, favored this explanation of quantum mechanics.

But unfortunately for their proponents, hidden-variable theories were hamstrung by the publication of the Bell inequalities beginning in 1964.⁶ These inequalities (the original of which requires only vector calculus to understand⁷) show that the mathematical models of quantum mechanics do not permit local realism. Realism refers to the idea, motivating the formulation of many hidden-variable theories, that there exist real objects behind the uncertainties dictated by quantum mechanics — for most of us, a natural assumption. The principle of locality codifies a similarly intuitive concept: that objects which causally interact with each other must be near each other.⁸ Bell’s inequality shows that these cannot both be true.

Bell’s inequality quenches any hope of determinism as it is understood in the classical tradition. It provides us with two options: give up locality, or give up realism. Perhaps surprisingly, the most popular interpretations retain locality over realism. To a physicist, the speed of light is an experimentally validated universal speed limit, and in the context of physical theories it is far more precise, concrete, and fundamental than any notion of mass or particulate “existence”. Locality thus becomes a more compelling axiom than realism; the resulting ontology is that particles do not exist independently of an observer. Until measurement, all that exists is a probability wave that describes where the particle is most likely to be. And nothing else. Interpretations sharing this viewpoint are commonly referred to collectively as the “Copenhagen interpretation”. Here we have the best candidate for a physical phenomenon that is truly random: in these interpretations, the collapse of the probability wave into a single particle upon observation is mathematically guaranteed to be completely unpredictable.

Still, neither Bell’s inequality nor the laws of quantum mechanics guarantee such randomness. One may prefer to keep realism and do away with locality. There is no conclusive reason to prefer locality, after all. Maybe we would like to argue that there is an omniscient God or some comparable universal substrate. Bohmian mechanics describes such an interpretation. First studied by de Broglie, one of the pioneers of quantum mechanics, Bohmian mechanics holds that actual particles do exist inside the observed probability waves. The unfortunate tradeoff is that the motion of these particles is governed by a universal state function that includes non-local interactions between the particles it models. In other words, it posits an underlying causality that can influence two particles in an interdependent way faster than light could carry a signal between them. In a philosophical context, there are endless interpretations of what could explain such a causality; in a physical context, there are none. Assuming Bohmian mechanics, it is not immediately clear that we should describe quantum measurements as “random” at all, since they merely illuminate the qualities of extant objects that move according to a known set of rules.

There is a third broadly accepted option that, by a trick of logic, relinquishes neither realism nor locality. Bell’s inequality makes an implicit assumption that a measurement must have only a single outcome. But nothing in physics indicates that this is true other than the fact that we have never observed a measurement with multiple outcomes.⁹ What if all possible outcomes of every quantum measurement actually did manifest, in the form of multiple, branching universes? This interpretation requires some weak assumptions about consciousness, but informally, we would exist in only one of the resulting infinitude of universes and so would see only a single measurement outcome. At first, this appears to be a far more outlandish idea than Bohmian mechanics or the Copenhagen interpretation, but is it really more far-fetched than the idea that nothing actually exists until we look at it? Or than the idea that information provably cannot travel faster than light and yet, in only a very slightly distinct sense, actually can? Like those possibilities, branching universes violate no known law of physics. Whether there is randomness in this interpretation depends on the factor generating the branches and the factor determining which branch we follow. There may be randomness here, or once again, there may be God.¹⁰

Randomness in Practice

Like most other problems in philosophy, the existence and character of randomness is likely impossible to pin down without relying on untestable metaphysical assertions. But regardless of whether randomness exists, and regardless of how it precisely works in any pure sense, it is extremely useful to the modern world. So we need to find proxy ideas that we can work with without having to resort to or accept mind-bending concepts of meta-universal substrata.

First, we need to define precisely what we mean by “random”. Let’s say we want to generate a random sequence — a common activity in many technical disciplines. What property do we want from our source of randomness? In most cases, we won’t require our samples to emerge from an eternal chaos of tumultuous indeterminism. As is typical when generalizing something so broad, the answer will depend on the specific application, but the two scenarios we will examine should inform most contexts.

In pure mathematics and statistics, random numbers most commonly appear when sampling a random variable. Alonzo Church gave a clever definition of a mathematically random sequence in 1939¹¹ that is appropriate in the context of statistical sampling. Church’s criteria are difficult to meet and most applications will not require their full power. But we are speaking of generalities and Church’s definition provides a natural¹² test for the randomness of a sequence.

Church defined a random sequence as an infinite sequence (aᵢ), with each aᵢ ∈ {0, 1}, such that:

  1. If f(r) is the number of 1s in the first r terms of the sequence (aᵢ), then the limit of f(r)/r as r approaches infinity exists; call it p, and
  2. If 𝜑 is an effectively calculable function of positive integers, b₁ = 1 and b_(n+1) = 2bₙ + aₙ, cₙ = 𝜑(bₙ), the integers n such that cₙ = 1 form a sequence (nᵢ) in order of increasing magnitude, and g(r) is the number of 1s in the first r terms of the induced subsequence (a_(nᵢ)), then the limit of g(r)/r as r approaches infinity is also p.
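
To make the machinery of the second condition concrete, here is a minimal sketch in Python of how a selection rule 𝜑 carves out the induced subsequence. The function names and the example rule are my own illustration, not Church’s notation:

```python
def frequency_of_ones(prefix):
    """Relative frequency of 1s in a finite prefix: a finite stand-in
    for the limit of f(r)/r in Church's first condition."""
    return sum(prefix) / len(prefix)

def church_subsequence(prefix, phi):
    """Apply Church's second condition: b_1 = 1, b_(n+1) = 2*b_n + a_n
    encodes the bits seen so far as an integer, and phi decides from
    that history alone whether the next bit joins the subsequence."""
    b, selected = 1, []
    for a in prefix:
        if phi(b) == 1:      # c_n = phi(b_n); select a_n when c_n = 1
            selected.append(a)
        b = 2 * b + a        # fold a_n into the history encoding
    return selected

# One admissible selection rule: select a_n exactly when n is odd.
# Since b_n has exactly n bits, the history length can be read off b.
phi_odd_positions = lambda b: 1 if b.bit_length() % 2 == 1 else 0

bits = [1, 0, 1, 1, 0, 0, 1, 0]  # a toy prefix
sub = church_subsequence(bits, phi_odd_positions)
print(frequency_of_ones(bits), frequency_of_ones(sub))
```

A Church-random sequence is one for which every such effectively calculable 𝜑 yields a subsequence with the same limiting frequency p; intuitively, no computable gambling strategy can beat the odds.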

Church develops this definition in a few iterations that serve to explain the complexity of the second condition. A short exercise for the interested reader is to extend this definition to a sequence of elements sampled from any finite set, and then to show the equivalence of the two definitions. A useful corollary property of such a sequence is that it adheres to a fixed probability distribution: in the limiting sense above, each term behaves like a Bernoulli sample with parameter p. The sequence can therefore be transformed into a sampler for any distribution through algorithms like the accept-reject method, and thus provide sufficient randomness for any application in statistics.
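
As an illustration of that last step, here is a minimal accept-reject sketch in Python that turns a stream of fair bits into samples from a bounded density on an interval. The function names, the bit source, and the choice to truncate a half-normal target to [0, 4] are my own simplifications:

```python
import math
import random

def bits_to_uniform(bits, k=32):
    """Consume k fair bits and return a uniform float in [0, 1)."""
    x = 0
    for _ in range(k):
        x = (x << 1) | next(bits)
    return x / (1 << k)

def accept_reject(bits, target_pdf, bound, lo, hi):
    """Sample from target_pdf on [lo, hi]; bound must dominate target_pdf there."""
    while True:
        x = lo + (hi - lo) * bits_to_uniform(bits)  # candidate from a uniform proposal
        u = bound * bits_to_uniform(bits)           # uniform height under the bound
        if u <= target_pdf(x):                      # accept with probability target/bound
            return x

# Example: a standard half-normal density, truncated to [0, 4] for simplicity.
half_normal = lambda x: math.sqrt(2 / math.pi) * math.exp(-x * x / 2)
bit_stream = iter(lambda: random.getrandbits(1), None)  # stand-in for a random bit source
print(accept_reject(bit_stream, half_normal, bound=0.8, lo=0.0, hi=4.0))
```

The only property demanded of the bit stream is the one Church’s definition supplies: a stable limiting frequency of 1s (here p = 1/2) in every effectively selected subsequence, including those the algorithm itself induces.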

Information theory also employs randomly generated values. In particular, they are used as cryptographic keys. Key generation is likely the most common deliberate application of random number generation, since most modern Internet connections are secured by ephemeral keys newly generated with each client-server connection. But this is a very special case of random number generation, and as such we don’t need the full power of Church’s definition. We use randomness for cryptography, but we actually desire only a subordinate property: unpredictability.

The basic information-theoretic requirement for most cryptographic keys is that they must not be known to any eavesdropper. One way to accomplish this is to keep the method of key generation secret. But especially in a practical context, this is an imperfect approach: should an eavesdropper discover the method of key generation, then depending on the method, he may be able to recreate arbitrary keys. He could thereby decrypt traffic associated with any key, whether new traffic or encrypted records of old traffic. A far superior approach is to use a key generation scheme in which, even given full access to the inner workings of the scheme, an eavesdropper cannot predict the keys that will be generated. The most straightforward way to do this is to use a random sequence.

In practice, the purest way to generate a random sequence is to perform quantum measurements. As far as we can tell, these are the physical phenomena most representative of true randomness. But quantum measurements require extremely sensitive, bulky, and expensive equipment. At least today, a quantum measurement device wouldn’t fit inside a smartphone, and so wouldn’t be of much use in universally available consumer cryptography. But Church’s mention of calculability echoes with salience. Cryptography is a practical field, and so “good enough” is good enough. An eavesdropper being unable to practically compute the keys that will be generated is just as good — in practical terms — as an eavesdropper being unable to predict the keys that will be generated. Beyond Church’s formulation of effective calculability or (the equivalent) effective computability, the only requirement is to sufficiently slow any reverse engineering that the eavesdropper may attempt. Because of this, we can rely heavily on chaotic functions whose behavior — sometimes nearly indistinguishably — mimics true randomness. Given limited sources of unpredictability (called entropy in this context, and usually harvested from sources peripheral to the computing device, like mouse movement or accelerometer input) to ensure that the inputs of these chaotic functions aren’t predictable, long pseudorandom sequences can be generated that are sufficiently unpredictable to ensure the practical security of a cryptographic system. It turns out that these pseudorandom sequences resemble truly random sequences closely enough to be quite useful in many more general statistical settings as well.
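
This entropy-seeded pipeline is what operating systems already expose to applications. As a minimal sketch (not a complete key-management scheme), Python’s standard secrets module draws key material from the operating system’s cryptographically secure pseudorandom generator:

```python
import secrets

# Key material drawn from the OS CSPRNG, which stretches peripheral
# entropy (interrupt timings, sensor noise, etc.) through a
# cryptographically strong pseudorandom function.
symmetric_key = secrets.token_bytes(32)   # 256 bits of key material
session_token = secrets.token_hex(16)     # 128 bits, hex-encoded
print(session_token)
```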

Randomness is an endlessly complex philosophical subject. Reasonable people can argue ad infinitum about whether or not it exists and exactly what form it takes. But it can be defined mathematically in terms of the secondary properties we require of it. Quantum mechanics gets us closest to true randomness, but in practice there are shortcuts that can generate high-quality pseudorandom sequences without the fuss of measuring subatomic particles.

Endnotes

  1. A. Tonomura, J. Endo, T. Matsuda, T. Kawasaki, and H. Ezawa (1989). “Demonstration of single-electron buildup of an interference pattern”. American Journal of Physics, 57:117–120.
  2. Albert Einstein to Max Born, 1926. Reprinted in Physics Today 58 (5): 16 (May 2005). https://doi.org/10.1063/1.1995729.
  3. Formally σₓσₚ ≥ ħ/2, where σₓ is the standard deviation of position, σₚ is the standard deviation of momentum, and ħ is the reduced Planck constant (approximately 6.582 × 10⁻¹⁶ eV⋅s).
  4. The YouTube channel 3Blue1Brown has an excellent video explaining why wavelike behavior imparts quantum particles with an uncertainty principle.
  5. At least, one can compute it to any arbitrarily high degree of accuracy — some classical configurations, like the Navier-Stokes equations or the three-body problem, do not admit closed-form solutions.
  6. Loophole-free experimental confirmation of the quantum violation of these inequalities was achieved in 2015 and was the subject of the 2022 Nobel Prize in Physics.
  7. Bell, J. S. (1964). “On the Einstein Podolsky Rosen Paradox”. Physics Physique Физика. 1 (3): 195–200.
  8. More precisely, causation cannot travel faster than the speed of light; any interacting particles must be in each other’s light cones in spacetime.
  9. I wonder if they would give me a Nobel Prize for my ability to demonstrate a measurement with multiple outcomes when I measure a cut on a 2"x4".
  10. God, of course, refers abstractly to a determinative meta-universal substrate (but “God” is much snappier).
  11. Church, Alonzo (1940). “On the Concept of a Random Sequence”. Bulletin of the American Mathematical Society. 46 (2): 130–135.
  12. Though not computationally efficient — the simplest version would require O(2ⁿ) operations for a sequence of length n, and large n will be required for most sequences.

Written by William Cabell

Mathematician, software engineer, cybersecurity professional.