What is Randomness?

post by martinkunev · 2024-09-27T17:49:42.704Z · LW · GW · 2 comments

Contents

  Empirical and Logical Uncertainty
  How does randomness relate to non-determinism?
  Indexical Uncertainty
  Sources of Randomness
  What is Probability?
    Probability and Prior Information
    Decisions Under Uncertainty
  Interesting Questions
2 comments

epistemic status: my intuition after reading and watching a bunch of stuff; no new information

 

You take a die in your hand. If you throw it, the result will be what people usually call a random number. Let's say you get 2. What do we mean when we say that this number is random? To answer this question, I will try to give you some intuition about the concept of randomness.

[Image: various types of dice]

When a person says an outcome of a process (such as the number 2) is random, they mean that they are uncertain what the outcome is. The die is not a magical object that produces random numbers. The die is an object we can use in many situations, each of which can be described as "throwing a die", and those situations can lead to different numbers. Humans are usually incapable of predicting how a die will land - they are uncertain about it. The process of throwing a 6-sided die can produce any of the integers between 1 and 6. In any particular throw, an observer may be uncertain which side is on top. If the observer sees the die after it stops, the uncertainty disappears. The number is no longer random to that observer but may still be random to another observer who hasn't seen the die.

For simpler processes such as a coin toss, some humans have learned to predict the outcome. For those humans, there is no uncertainty about how the coin will land.

Uncertainty is a property experienced by a cognitive system (such as a human or an artificial intelligence). We can distinguish three types of uncertainty.

Empirical and Logical Uncertainty

Alice takes a die, throws it inside a box, peeks, and closes the box before Bob has time to peek inside. How the die landed is determined by the laws of physics. If Bob knew the dimensions of the die and the box, the initial position of the die, the precise movement of Alice's hand and so forth, he would be able to put this information into a computer and calculate how the die landed. Since Bob doesn't have this information, he is uncertain how the die landed. No matter how hard he thinks, he will still be uncertain. The only way for him to find out what number the die landed on is to perform some experiment, such as opening the box and counting the dots. We say that Bob has empirical uncertainty about the outcome of the throw. At the same time, Alice was able to peek and see how the die landed, so she has no empirical uncertainty.

Empirical uncertainty is uncertainty due to lack of information. An observer cannot possibly know the outcome of a process because they are missing information about the world.

Alice has just finished calculating the 100th decimal digit of τ and is asking Bob what that digit is. Bob knows a few digits: τ = 6.28... but has no idea what the 100th digit is. He can write a program on his computer to tell him the answer, but Alice is impatient and demands an answer now. Bob has all the necessary information to find the answer but is uncertain because he hasn't done the necessary cognitive processing to reach it. His actions at this moment are limited by his logical uncertainty.
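For illustration, here is how Bob's program might look (a minimal sketch of my own, using the mpmath library; the post doesn't specify an implementation):

```python
from mpmath import mp

# Ask for more working precision than the 100 digits we need.
mp.dps = 110

tau = 2 * mp.pi              # mpmath evaluates pi to the current precision
digits = mp.nstr(tau, 105)   # decimal string "6.2831853071795864769..."

# digits[0] is '6' and digits[1] is '.', so the n-th decimal digit
# sits at index n + 1; the 100th decimal digit is at index 101.
print(digits[101])
```

Running it dissolves the uncertainty - the answer was already implied by what Bob knew, he just hadn't computed it.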

Logical uncertainty is uncertainty stemming from limits in cognitive capacity. The cognitive system can in principle resolve the uncertainty but this would require using computational resources and spending time.

The term logical omniscience is used to refer to a theoretical cognitive system which can instantly resolve all logical uncertainty - it can execute any well-defined computation as fast as necessary.

[Image: digits of tau]

How does randomness relate to non-determinism?

Intuitively, we say that a process is non-deterministic if it can produce different results when repeated.

This brings up the question - what does it mean to repeat a process? Precisely repeating a coin flip would mean putting the coin at the exact same position on our hand and making the exact same movement as before. Everything else, such as all the surrounding air molecules and Jupiter, needs to be at the same position and have the same velocity. The set of parameters that need to be kept the same encompasses the entire light cone of causal influence, so this cannot be done in practice. Thus, when talking about repetition, we assume some of those parameters don't matter and ignore them. Strictly speaking, in classical physics throwing dice or tossing coins is deterministic. These processes only appear non-deterministic when we ignore the parameters that determine them. We can only talk about repeating a process in some idealized model.

The question of non-determinism in quantum mechanics is still somewhat contentious. Imagine a stream of linearly polarized photons, all with the same polarization. We place a filter on the path of the photons. It turns out that we can find an orientation such that about half the photons pass. According to our understanding of physics, those photons are identical. Yet some of them pass through the filter while others do not. A number of experiments fail to show anything in the universe that determines whether a particular photon passes or not. It seems that all observers would be uncertain about which photons will pass. Is this a manifestation of non-determinism?
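To make the numbers concrete, here is a small simulation of the experiment (a sketch under the standard textbook model, in which a photon polarized at angle θ to the filter axis passes with probability cos²θ; the code and names are mine):

```python
import math
import random

# In the standard quantum description, a photon whose polarization makes
# angle theta with the filter axis passes with probability cos^2(theta).
def photon_passes(theta: float) -> bool:
    return random.random() < math.cos(theta) ** 2

# With the filter at 45 degrees, cos^2(pi/4) = 0.5: about half should pass.
n = 100_000
passed = sum(photon_passes(math.pi / 4) for _ in range(n))
print(passed / n)   # close to 0.5
```

Note that the simulation has to fake the quantum outcome with a pseudo-random generator - classical code can reproduce the statistics but not the mechanism.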

There are many experiments in quantum mechanics pointing to such apparent non-determinism and physicists are still debating how to interpret that. Some physicists proposed an interpretation of quantum mechanics (the Copenhagen interpretation) in which there is fundamental randomness and the universe is non-deterministic.

Albert Einstein was convinced the universe is deterministic and famously said "God does not play dice with the universe". Einstein, Podolsky and Rosen came up with a thought experiment which shows what follows if the physical laws governing whether a photon passes are non-deterministic. This thought experiment, known as the EPR paradox, shows that the non-determinism of quantum mechanics is in conflict with a large portion of our understanding of physics - such as causality, and the speed of light being a fundamental limit. There are many philosophical reasons [LW · GW] to reject this non-determinism.

An alternative explanation for the apparent randomness is the idea of local hidden variables. In the context of the photon experiment, it was proposed that the photons are not identical - the description of each photon includes some hidden variables we cannot measure but which determine whether it passes through the filter. John Stewart Bell derived a set of inequalities which follow from the existence of local hidden variables. Later experiments showed that these inequalities are violated. This led to the rejection of the hypothesis of local hidden variables.

Hugh Everett proposed what is today the most popular alternative explanation. He advocated that if we simply follow the laws of quantum mechanics and treat the observer as part of the universe, the non-determinism disappears. Without going into technical details, we can think of each photon as being in a particular state that causes it to interact with the filter in a particular way. The state can be described as a superposition of "going to pass" and "not going to pass". When a person observes whether a photon passes, they also interact with the experimental setup. After that, the state of the system can be described as a superposition of "passed and the observer saw it pass" and "didn't pass and the observer saw it not pass". This state is something we have trouble imagining because we are part of the system and only observe part of it.

One way to think of the Everett interpretation [? · GW] is that there are multiple "versions" of the observer - one for which the photon passed and one for which it didn't. Both those "versions" of the observer exist but never interact again - we say that the observers are in separate branches of the universe. In this interpretation the universe is deterministic - one way to put it is that both outcomes happen but no single observer can see them both.

All physical observations can be explained deterministically and the deterministic explanation is by far the simplest one. As far as we can tell, non-determinism is a product of human thought and does not reflect anything we can find in the universe. It is present in human ontologies but does not correspond to anything in the territory [? · GW]. The only way we ever get non-determinism is by postulating that it exists. It is not even clear what a non-deterministic universe would look like. If one day it turns out we need non-determinism to explain some physical phenomenon, we will need to rethink most of physics.

Indexical Uncertainty

Two identical robots are placed in two identical rooms (room 0 and room 1). Each robot knows that an identical replica of itself is located in the other room but doesn't know which room it itself is in. As each robot is turned on, there is no mystery about what the world is like - there is a robot in room 0 and a robot in room 1. However, each robot is uncertain about its own location in that world.

[Image: identical robots in two rooms]

Indexical uncertainty is uncertainty from not knowing your own location in the world. An observer knows it is one of multiple identical observers but cannot identify which one.

The apparent randomness in the Everett interpretation of quantum mechanics is due to indexical uncertainty. Before we do the experiment with the photon, there is no uncertainty - we know the state after the experiment will be a superposition of "passed and I saw it pass" and "didn't pass and I saw it not pass". When the experiment is done but the observer hasn't yet seen the result, there are two "versions" of the observer, each with indexical uncertainty as to which branch they are in. Once the observer sees the result, the uncertainty is resolved (much like a robot going out to see the room number).

This "splitting" of one universe into two branches is part of what earned quantum mechanics its reputation of being hard to understand. Since the observer is part of the universe, it also "splits" into two observers. There is no one version which is more true than the other. It makes no sense to ask "What will the observer observe once the photon is emitted toward the filter?" because the observer is not well-defined. After the experiment, there would be two "versions" of the observer that cannot interact with each other - one which saw the photon pass, and one which saw the photon not pass.

Sources of Randomness

Being unpredictable is essential in many domains - here are just a few. In cryptography, randomness is used for verifying message source and integrity, ensuring a third party cannot read private messages, etc. In game theory, randomness is essential for improving the outcome when competing and ensuring fairness when cooperating. In communication, randomness is used to break symmetries (when two parties need to behave differently). For content generation, randomness makes it possible to produce text, images and other types of data which do not repeat. Randomness is also essential in science in order to exclude extraneous factors that could influence experimental results.

Randomness can come from any of the three types of uncertainty we described. Hardware random number generators use unpredictable physical processes to produce random numbers to be used by a computer.
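In practice a programmer rarely talks to the hardware directly; the operating system collects such entropy and exposes it, for example through Python's standard secrets module (a sketch, and not the only way to do this):

```python
import secrets

# The secrets module draws from the OS randomness source, which is
# seeded by hard-to-predict physical events (timing jitter, dedicated
# hardware generators, etc.).
key = secrets.token_bytes(32)    # 32 bytes suitable for cryptographic keys
roll = secrets.randbelow(6) + 1  # an unpredictable die throw, 1..6
print(key.hex(), roll)
```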

Quantum effects are a source of randomness (depending on which interpretation of quantum mechanics turns out to be correct, this is indexical or empirical uncertainty). Assuming the physical laws we know are accurate, one cannot even in principle predict the outcome of quantum physical systems with certainty.

Some physical systems are very sensitive to their initial conditions. A tiny amount of disturbance from the surroundings (noise) can have a big effect on the behavior of the system. The difference in initial conditions is amplified over time (often exponentially) and the state of the system becomes practically impossible to predict after a short time. We call such physical systems chaotic. A simple example is the double pendulum. In chaotic systems the randomness stems from empirical uncertainty. Since there is no way to make a measurement with infinite precision, and any imprecision is amplified quickly, these systems are inherently unpredictable.
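The logistic map is an even simpler chaotic system and makes the amplification easy to see (my example; the post uses the double pendulum):

```python
# One step of the logistic map with r = 4, a standard chaotic regime.
def logistic(x: float) -> float:
    return 4.0 * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-12   # two initial conditions differing by 10^-12
for step in range(1, 51):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  diff={abs(a - b):.2e}")

# After roughly 40 steps the difference is of order 1: the tiny initial
# measurement error has grown until prediction is useless.
```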

[Image: double pendulums diverging]

Alternatively, computers can use pseudo-random number generators. These generators use mathematical calculations to produce numbers which appear random due to logical uncertainty. Often, the generator uses some unpredictable value (random seed) such as the current time as an initial value for the calculations. It is also possible to calculate successive digits of a mathematical constant to use as pseudo-random numbers.
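As an illustration, here is a linear congruential generator, one of the oldest pseudo-random schemes (a minimal sketch; the constants are the well-known ones from Numerical Recipes, and seeding from the clock is my choice for the example):

```python
import time

# Linear congruential generator: state -> (a * state + c) mod m.
# Entirely deterministic: anyone who learns the seed can reproduce
# every "random" number it will ever output.
class LCG:
    def __init__(self, seed: int):
        self.m = 2 ** 32
        self.a = 1664525         # multiplier (Numerical Recipes)
        self.c = 1013904223      # increment (Numerical Recipes)
        self.state = seed % self.m

    def next(self) -> int:
        self.state = (self.a * self.state + self.c) % self.m
        return self.state

# Seed with the current time - unpredictable enough for casual use.
rng = LCG(time.time_ns())
print([rng.next() % 6 + 1 for _ in range(5)])   # five die throws
```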

Pseudo-random number generators are considered insecure because with enough computational power one can often predict which number will be generated. Thus, they should only be used in non-adversarial settings (where nobody is actively trying to predict the number). This excludes applications such as cryptography. Pseudo-random number generators have the advantage that by repeatedly using the same seed one can exactly reproduce a run of a program.
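The reproducibility point is easy to demonstrate, here with Python's built-in random module:

```python
import random

rng1 = random.Random(42)   # same seed...
rng2 = random.Random(42)
assert [rng1.random() for _ in range(5)] == [rng2.random() for _ in range(5)]
# ...same sequence: a run of a program can be replayed exactly.
```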

[Video: an interesting video about chaos]

What is Probability?

Probability is a tool for logical reasoning under uncertainty. The concept of probability follows naturally from the assumption that uncertain events have plausibilities which can be compared (the exact details of this assumption are formalized in Cox's theorem). The laws of probability can be derived from this assumption.
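Concretely, the laws derived this way are the familiar product and sum rules, stated here for reference (X denotes the prior information):

```latex
P(A \land B \mid X) = P(A \mid X) \, P(B \mid A, X)  % product rule
P(A \mid X) + P(\lnot A \mid X) = 1                  % sum rule
```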

For an in-depth technical understanding of probability, I recommend Probability Theory: The Logic of Science.

Probability and Prior Information

We can only talk about probability with respect to some prior information - if I write a number between 1 and 100 on a piece of paper, you can only assign probability to each number being written by assuming some prior information (such as "all numbers are equally likely"). I would assign probabilities differently because I have different prior information (namely, I know which number I wrote so I'll assign probability 1 to it and 0 to all other numbers). In other words, a cognitive system assigns probabilities based on the uncertainty it has. It does not make sense to talk about some objective "true" probability.

When we acquire new information, we take it into account by adjusting the probabilities we assign to various uncertain events. This process of using evidence to update probabilities is called Bayesian reasoning.
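Here is a toy example of such an update (all numbers are made up for illustration: a die suspected of being loaded toward 6):

```python
# Two hypotheses: the die is fair, or it is loaded so that 6 comes up
# half the time. The prior and the bias are assumptions for the example.
prior = {"fair": 0.9, "loaded": 0.1}
likelihood_of_six = {"fair": 1 / 6, "loaded": 1 / 2}

# We observe a single throw land on 6 and apply Bayes' rule:
# P(H | six) = P(six | H) * P(H) / P(six)
evidence = sum(likelihood_of_six[h] * prior[h] for h in prior)
posterior = {h: likelihood_of_six[h] * prior[h] / evidence for h in prior}
print(posterior)   # the "loaded" hypothesis grows from 0.1 to 0.25
```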

Suppose you throw a die on the table. In full generality, the outcome is the resulting state of the universe. For simplicity, we assume the table remains intact, the die lands on it and stops moving. We approximate the die by a cube and care only about which side of it is facing up. We know that for a particular throw, one specific side will face up and no other side will face up, because the universe is deterministic. In what sense can we talk about the probability of a given side facing up?

Probability is in our model. The model ignores most of the information about the state of the universe (such as the initial orientation of the die or how your hand moved) and only considers which side of the cube is up. In this simplified model, there is uncertainty about the outcome. In fact, there is a symmetry between all outcomes, so we treat them all the same - we assume "the die is fair". Thus, we assign equal probabilities to the outcomes (1/6 for a 6-sided die).
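Under that symmetry assumption, simulated throws converge to those equal assignments (a quick check; the pseudo-random generator stands in for all the ignored details):

```python
from collections import Counter
import random

n = 60_000
throws = Counter(random.randint(1, 6) for _ in range(n))
for side in range(1, 7):
    print(side, throws[side] / n)   # each frequency is close to 1/6
```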

The model is the prior information on which we base our probability assignment. Probability is subjective in the sense that different cognitive systems may assign different probabilities to the same event without any one of them being mistaken. However, for a fixed prior information, there is one correct probability assignment.

Decisions Under Uncertainty

In what sense can a probability assignment be correct? What's wrong with assigning probability 2/3 to the die outcome 5?

Often we don't think in terms of probabilities and can't easily give the probability of an event when asked (e.g. what's the probability that the Riemann hypothesis gets resolved in 2024?). What matters is not what we think or say but how we act. Even if we don't want to assign a probability to an event, our actions speak for themselves [LW · GW] - the choices we make correspond to some implicit probability. Suppose you want to go to some city and you're wondering whether to drive or take the train. There are differences between the two options in terms of cost, time, comfort, risk of accidents, etc. If you decide to drive, then your actions correspond to a probability of an accident low enough that the other factors outweigh it.
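One way to expose the implicit probability is to write the decision down (a sketch; every number is a made-up assumption for illustration):

```python
# Made-up costs, in arbitrary "utility" units, for getting to the city.
COST_TRAIN = 40          # ticket price plus inconvenience
COST_DRIVE = 25          # fuel, tolls, etc.
COST_ACCIDENT = 10_000   # additional cost if an accident happens

def expected_cost_drive(p_accident: float) -> float:
    return COST_DRIVE + p_accident * COST_ACCIDENT

# Driving is preferable only while its expected cost stays below the
# train's; the break-even point reveals the implicit probability.
p_break_even = (COST_TRAIN - COST_DRIVE) / COST_ACCIDENT
print(p_break_even)   # 0.0015 - choosing to drive implies you judge
                      # the accident probability to be below ~0.15%
```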

A cognitive system operating in the real world (a human, a mouse, a robot) is faced with uncertainty all the time. Using probabilities correctly can help the cognitive system achieve its goals - a cognitive system which assigns incorrect probabilities to events (given its prior information) acts suboptimally. This means that it could have achieved better results by assigning the correct probabilities. Not assigning any probability is not an option because probabilities are still implicitly present in the actions chosen.

Interesting Questions

This leaves some interesting questions about probability unanswered:

In what way does some probability assignment help achieve better results than another?

How do probabilities arise in the Everett interpretation of quantum mechanics?

After a series of quantum experiments, there will be a version of the observer who sees a sequence of very unlikely outcomes. How to make sense of that?

I don't think I can answer these questions satisfactorily. If you want to learn more about these topics, you can read about decision theory and the Born rule in Everettian quantum mechanics.

2 comments


comment by ProgramCrafter (programcrafter) · 2024-09-27T23:14:32.224Z · LW(p) · GW(p)

How do probabilities arise in the Everett interpretation of quantum mechanics?

After a series of quantum experiments, there will be a version of the observer who sees a sequence of very unlikely outcomes. How to make sense of that?

Well, in MWI there is the space of worlds, each associated with a certain complex number ("amplitude"). Some worlds can be uniform hydrogen across the whole universe, some contain humans, a certain subspace contains me (I mean, a collection of particles moving in a way introspectively identical to me) writing a LessWrong comment, etc.

It so happens that the larger the magnitude of said complex number is, the more often we see such a world; IIRC, that allows one to prove that the likelihood of seeing any world is proportional to the squared modulus of its amplitude, which is Born's rule.

The worlds space is presumably infinite-dimensional, and also expands over time (though not at an exponential rate, as is widely said, because "branches" must be merging all the time as well as splitting). That means the probability distribution assigns a very low likelihood to pretty much any world... but why do we get any outcomes then?

I'm not attempting to answer the question of why we experience things in the first place (preferring instead to seal it even for myself), but as for why we continue to do so conditional on experiencing something before: because of the "conditional on". Conditional probability is the non-unitary operation over our observations of phase space, retaining some parts while zeroing all others, which are "incompatible with our observations"; also, as its formula is P(A|B) = P(A∩B) / P(B), it can amplify likelihoods for small values of P(B). That doesn't totally fix the issue, but I believe the right thing to do in improbable worlds is to continue updating on evidence and choosing best actions as usual.
(To demonstrate that the issue with small probabilities is not fixed, let's divide the likelihood of any single world by 2^256; here's a 256-bit random string: e697c6dfb32cf132805d38cf85a60c832247449749293054704ad56209d2440e).

Replies from: martinkunev
comment by martinkunev · 2024-09-28T00:27:55.596Z · LW(p) · GW(p)

You can say that probability comes from being calibrated - after many experiments where an event happens with probability 1/2 (e.g. spin up for a particle in state 1/√2 |up> + 1/√2 |down>), you'd probably see that event happen about half the time. The important word here is "probably", which is what we are trying to understand in the first place.

 

I'm imagining the branch where a very unlikely outcome consistently happens (think winning a quantum lottery). Intelligent life in this branch would observe what seems like different physical laws. I just find this unsettling.

The worlds space is presumably infinite-dimensional, and also expands over time

If we take quantum mechanics, we have a quantum wavefunction in an infinite-dimensional Hilbert space which is the tensor product of the Hilbert spaces describing each particle. I'm not sure what you mean by "expands" - we just get decoherence over time. I don't really know quantum field theory so I cannot say how this fits with special relativity. Nobody knows how to reconcile it with general relativity.