
We can do a lot with probability and statistics. If we consider the case of a tossed die, we know that it will result in a six about one time in six if the die is not biased in any way. A die that turns up six one time in six, and each of the other numbers also one time in six, we call a “fair” die.
We know that at any particular throw the chance of a six coming up is one in six, but what if the last six throws have all been sixes? We might become suspicious that the die is not after all a fair one.

The probability of six sixes in a row is one in six to the power of six, or one in 46,656. That’s really not all that improbable if the die is fair. If the die is fair, the probability of a six on the next throw is still one in six, and the stream of sixes does not mean that a non-six is any more probable in the near future.
The “expected value” of the throw of a fair die is 3.5. This means that if you throw the die a large number of times, add up the shown values and divide by the number of throws, the average will be close to three and a half. The larger the number of throws, the closer the measured average is likely to be to 3.5.
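A few lines of Python illustrate both points. This is just a sketch of my own, and the one million throws is an arbitrary choice:

import random

# Six sixes in a row: one chance in 6 to the power of 6
print(6 ** 6)  # 46656

# Estimate the expected value of a fair die by simulation
throws = [random.randint(1, 6) for _ in range(1000000)]
print(sum(throws) / len(throws))  # typically very close to 3.5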

This leads to a paradoxical situation. Suppose that by chance the first 100 throws of a fair die average 3.3. That is, the die has shown more than the expected number of low numbers. Many gamblers erroneously think that the die is more likely to favour the higher numbers in the future, so that the average will get closer to 3.5 over a much larger number of throws. In other words, the future average will favour the higher numbers to offset the lower numbers in the past.
In fact, the “expected value” for the next 999,900 throws is still 3.5, and there is no favouring of the higher numbers at all. (In fact the “expected value” of the next single throw, and of the next 100 throws, is also 3.5.)

If, as is likely, the average for the 999,900 throws is pretty close to 3.5, the average for the 1,000,000 throws is going to be almost indistinguishable from the average for 999,900. The 999,900 throws don’t compensate for the variation in the first 100 throws – they overwhelm them. A fair die, and the Universe, have no memory of the previous throws.
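A short Python sketch of my own makes the point, taking the 3.3 average for the first 100 throws as given:

import random

# Take the premise that the first 100 throws happened to average 3.3
first_hundred_total = 3.3 * 100

# The next 999,900 throws are fair throws with no memory of the first 100
rest = [random.randint(1, 6) for _ in range(999900)]
rest_average = sum(rest) / len(rest)

# The low start is not compensated for, just swamped
overall_average = (first_hundred_total + sum(rest)) / 1000000
print(rest_average, overall_average)  # the two averages are almost indistinguishable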
But hang on a minute. The Universe appears to be deterministic. I believe that it is deterministic, but I’ve argued that elsewhere. How does that square with all the stuff about chance and probability?

Given the shape of the die, its trajectory from the hand to the table, and all the extra little factors like any local draughts, variations in temperature, gravity, the viscosity of the air and so on, it is theoretically possible that, if we knew all the affecting factors and had enough computing power, we could calculate what the die would show on each throw.
It’s much easier of course to toss the die and read the value from the top of the cube, but that doesn’t change anything. If we knew all the details we could theoretically calculate the die value without actually throwing it.

The difficulty is that we cannot know all the minute details of each throw. Maybe the thrower’s hand is slightly wetter than the time before because he/she has wagered more than he/she ought to on the fall of the die.
There are a myriad of small factors which go into a throw and only six possible outcomes. With a fair die and a fair throw, the small factors average out over a large number of throws. We can’t even be sure what factors affect the outcome – for instance, if the die is held with the six on top on each throw, is this likely to affect the result? Probably not.

So while we can argue that when the die is thrown, deterministic laws result in the number that comes up on top of the die, we always rely on probability and statistics to inform us of the result of throwing the die multiple times.
In spite of the seemingly random string of numbers from one to six that throwing the die produces, there appears to be no randomness in the causes of that string of results.

The apparent randomness appears to be the result of variations in the starting conditions, such as how the die is held for throwing and how it hits the table and even the elastic properties of the die and the table.
Of course there may be some effects from the quantum level of the Universe. In the macro world the die shows only one number at a time. In the quantum world a quantum die might show one with 99% probability, two with 0.8%, three with 0.11%… and so on, with the probabilities adding up to 100%. We look at the die in the macro world and see a one, or a two, or a three… but the result is not predictable from the initial conditions.

Over a large number of trials, however, it is very likely that these quantum effects cancel out at the macro level. In perhaps one trial in a very large number, the outcome is not the most likely outcome, and this or similar probabilities apply to all the numbers on the die. The effect is that the quantum effects are averaged out. (Caveat: I’m not a quantum expert, and the above argument may be invalid.)
In other cases, however, where the quantum effects do not cancel out, the results will be unpredictable. One possible case is weather prediction. Weather prediction is a notoriously difficult problem, and weather forecasters are often castigated if they get it wrong.

So is weather prediction inherently impossible because of such quantum-level unpredictability? It’s actually hard to gauge. Certainly weather prediction has improved over the years, so that if you are told by the weatherman to pack a raincoat, then it is advisable to do so.
However, now and then forecasters get it dramatically wrong, but I suspect that that is more to do with our limited understanding of weather systems than with any quantum unpredictability.

The images that I’ve included don’t come from my usual source. I apologise if any should “break”.