Chapter 12 – Randomness

In the ongoing debate between atheists and the religions, a recurring issue is how the universe, life, the diversity of species, and consciousness all came into being. Those on the religious side contrast their easily grasped creation stories with the scientific explanations, which they portray as claiming that everything arose out of some pre-existing random chaos by pure chance. The purpose of this chapter is to look into the concept of randomness, and to illustrate and give some insight into how things that are generally regarded as random, chance or chaotic are actually the result of ordinary natural processes. The subject of chance is also discussed in Chapter 7 Intelligent Design as a Scientific Theory.

Parts of this chapter may be difficult for readers who are not mathematically minded.

 

An Elusive Concept

The term random is sometimes invoked to denigrate such ideas as the theory of evolution. Sometimes it is used as a contrast to the idea of determinism.

In everyday use it is applied to a range of very different kinds of things. The path of an ant exploring a patch of ground, the roll of well-balanced dice, the motion of molecules in a gas, the decay of radioactive nuclei, the third-last digit of the bottom number on consecutive pages of a telephone book: each of these is a distinctly different kind of thing, and each is commonly regarded as random.

Randomness is usually associated with haphazard or erratic behaviour. This implies that something random is not deliberately caused, or not caused by any consistent process. It is also regarded as the opposite of orderliness (orderliness being sometimes different from tidiness, with tidiness meaning “everything neatly in place”, no matter how the assigned places may be distributed). Haphazardness and orderliness are subjective concepts: one person may regard something as orderly but complex while someone else may regard it as haphazard. A brief look at the nature of order might help set the scene for an examination of randomness.

 

Order

In this particular context, order is the recognisable characteristic of patterns. Patterns may be visual, aural or tactile, or occur in sequences of numbers. Their qualities may be symmetry, or repetition of sameness and similarity, or sequential relationship. Examples of sequential relationships are:

  • series of numbers, such as the Fibonacci numbers (where each member except the first two is the sum of the previous two numbers, e.g., 1, 2, 3, 5, 8, 13, 21, …) and arithmetic and geometric progressions (1, 4, 7, 10, 13, 16, 19, 22, …, and 1, 2, 4, 8, 16, 32, 64, …, respectively);
  • a set of Russian dolls, each enclosed in a larger version of itself;
  • the successively smaller versions of the same initial shape in a fractal (such as a Mandelbrot figure);
  • notes in a typical piece of music.

I could have started the Fibonacci series as 0, 1, 1, 2, … and it would have continued as previously shown. The arithmetic progression could have started with any number, and any number could be used as the difference between succeeding members of the series. To be an arithmetic progression, the difference between successive members must remain the same throughout. Rules of any kind can be used for numerical sequences; for example, the difference between successive members could continually increase by a certain amount, as in the sequence 1, 2, 4, 7, 11, 16, 22, …, in which the difference starts at 1 and increases by 1 after each member.
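These rules are simple enough to state as code. Here is a minimal sketch (in Python, with parameter values of my own choosing for illustration) that generates each of the sequences mentioned above; the point is that a short, fixed rule suffices to produce an indefinitely long, perfectly orderly sequence.

```python
# A minimal sketch of the rule-based sequences described above.

def fibonacci(n, a=1, b=2):
    """Fibonacci-style series: each member is the sum of the previous two."""
    out = []
    for _ in range(n):
        out.append(a)
        a, b = b, a + b
    return out

def arithmetic(n, start=1, step=3):
    """Arithmetic progression: a constant difference between members."""
    return [start + i * step for i in range(n)]

def geometric(n, start=1, ratio=2):
    """Geometric progression: a constant ratio between members."""
    return [start * ratio ** i for i in range(n)]

def growing_difference(n, start=1):
    """The difference itself increases by 1 after each member."""
    out, value = [], start
    for step in range(1, n + 1):
        out.append(value)
        value += step
    return out

print(fibonacci(7))           # [1, 2, 3, 5, 8, 13, 21]
print(arithmetic(8))          # [1, 4, 7, 10, 13, 16, 19, 22]
print(geometric(7))           # [1, 2, 4, 8, 16, 32, 64]
print(growing_difference(7))  # [1, 2, 4, 7, 11, 16, 22]
```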

So, as the previous paragraph illustrates, while some patterns are recognisable to almost everyone, in some cases some people may see a pattern where others see no pattern and no order. There may be some formula connecting the members, as in the examples above, but the formula might be so obscure that only someone who knew what it was would recognise the pattern, or be able to discover it.

Does a straight line represent order? Notionally, an infinite number of lines can be drawn between any two points, and a straight line is, by definition, the shortest possible line between two points. A straight line is orderly and not haphazard because each infinitesimal element of it is pointing in the same direction. But many of the other lines that could be drawn between two points would be haphazard. A circle is more orderly than a straight line: it is symmetrical about every straight line that could be drawn through its centre. That is, if a circle is folded along any straight line drawn through its centre, the two halves will perfectly overlap. But several straight lines or several circles could occur in a disorderly or haphazard fashion. This suggests that there can be sections of apparent orderliness within something that is (apparently) not orderly. A disorderly room could be strewn haphazardly with books and toys, each of which is orderly in itself. If we wanted to test whether a specific case of apparent haphazardness were really random, then we would need to distinguish the regularity of any orderly parts from the apparent disorder of the whole.

So the concept of orderliness is not always straightforward. Nor is randomness.

 

Concepts and Examples of Randomness

Examples of Sequences

Since apparently haphazard occurrences occur at “random” intervals of time, or at “random” distances from each other, or with “random” intensities, etc., their behaviour can be represented as sequences of numbers. Examples are:

  • the successive angles of direction that ants take in their search for food, and the distances travelled between each change of direction;
  • the successive numbers scored as dice are continually thrown;
  • the differences in height at some specified age between first, second and third male siblings across families;
  • the time between successive mutations of specified genes;
  • the time between wet days throughout the seasons over several years for some specific location.

A sequence suspected of being random would typically have very many members. It may be a number with an infinite number of digits following a decimal point, such as the square root of 2, where there is no apparent pattern in the digits. (But the infinite sequence of decimals in the result of 1 divided by 3 is clearly not haphazard and not random.) If it is a line, the successive segments might have a uniform direction, as in a straight line, or be thought to have haphazard directions, as in a wriggly or jagged line. There is no reason why a configuration suspected of being random must be “one-dimensional”. It might be two-dimensional, for example consisting of lengths and angles, or indeed a multi-dimensional matrix.

 

Pseudo-Random Sequences and Predictability

There are specially produced sequences of “random numbers”, referred to as pseudo-random, in which there is intended to be no discernible pattern or significance. These are derived either by sampling some apparently random natural process and then processing the samples, or by using special mathematical formulas.
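A minimal sketch of the second method may be helpful. The linear congruential generator below is one of the simplest and oldest such formulas; the constants are the widely published “minimal standard” values attributed to Park and Miller, but the particular choice is incidental. A completely deterministic one-line rule yields output with no obvious pattern, and the same seed always reproduces exactly the same sequence.

```python
# A linear congruential generator: x(n+1) = (A * x(n)) mod M.
# Constants from the Park-Miller "minimal standard" generator.
A = 16807
M = 2**31 - 1  # a Mersenne prime

def lcg(seed, n):
    """Return n pseudo-random numbers in [0, 1), derived deterministically from seed."""
    x = seed
    out = []
    for _ in range(n):
        x = (A * x) % M
        out.append(x / M)
    return out

print(lcg(42, 5))  # no obvious pattern in the output...
print(lcg(42, 5))  # ...yet the identical sequence again: fully reproducible
```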

Pseudo-random sequences are used in applications such as cryptography and the simulation of unknown or unpredictable situations for computer modelling and computer games. Various statistical and other mathematical tests are applied to them before use, to reduce the likelihood that later parts of the sequence will be accurately predicted from knowledge of earlier parts. These tests imply that some sequences may not be “sufficiently random” for their particular purpose. The practical criterion of their usefulness is difficulty of prediction. The theoretical criterion is absence of any analysable characteristic. One would therefore expect that some samples from truly random sequences might fail a specific mathematical test. For example, in a very long binary sequence there might well be a patch of, say, twenty successive 1s or 0s, without that fact preventing the sequence from being random. (Actually, the probability of such a succession occurring is no different from that of any other specified twenty-digit sequence of 1s and 0s. It is just that we can more easily recognise the more regular patterns, and therefore we think they are special.)
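The flavour of such tests can be conveyed with a small example. The sketch below applies one common check, the length of the longest run of identical symbols, to a sequence of bits; a run of twenty identical bits in a thousand would fail the test, even though, as just noted, such a patch is no less probable than any other specified twenty-bit pattern.

```python
import random

def longest_run(bits):
    """Length of the longest run of identical consecutive symbols."""
    best = run = 1
    for prev, cur in zip(bits, bits[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

random.seed(1)
bits = [random.randint(0, 1) for _ in range(1000)]
print(longest_run(bits))  # typically around 10 for 1000 fair bits

# The longest run in n fair bits is expected to be roughly log2(n), so a
# run of 20 in 1000 bits would be grounds for suspicion, though a truly
# random source will occasionally produce exactly that.
```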

The term pseudo-random means having the characteristics of randomness without actually being random. What, then, is the difference between the two? One suggestion is that pseudo-random sequences are deliberately produced (but not necessarily reproducible) while randomness “occurs by itself”. Another is that some pseudo-random sequences can be precisely reproduced at will by repeating the process. This implies that randomness not only occurs by itself, but is also not reproducible. For example, the square root of 2 is an infinitely long number with no apparent orderliness or pattern in the sequence of its digits, so it would be thought to be random. But it is sometimes considered not to be random because the same sequence can be calculated at will. So here are two slants to the concept of randomness: one relates to the internal characteristics of an occurrence or set of occurrences, and the other to how the occurrence came into being. Complete absence of pattern does not necessarily imply the complete absence of any causal process.

 

Chaos

Chaos theory has shown that many things that had seemed to be haphazard or “chaotic” actually have mathematically precise patterns of behaviour. An example is the path of something moving under the control of some types of non-linear systems. (A non-linear system is one in which making changes to some part of the system produces results that appear to have no consistent relationship to the changes that caused them.) An example of a chaotic system is a heavy pendulum with a lighter pendulum hanging from it. Once it is set swinging, its movements appear to be haphazard, but the movements of the individual pendulums and of the system as a whole have complex repetitive patterns. A very large number of observations is usually needed to identify a chaotic process: if you see only a small part of its sequence you are not likely to guess what will happen next.
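The double pendulum is awkward to simulate in a few lines, but the same character shows in the logistic map, one of the simplest systems studied in chaos theory. The sketch below iterates the rule x → r·x·(1 − x) with r = 4 from two starting values differing by one part in a billion; every step is exactly determined, yet the two paths soon bear no resemblance to each other, which is why a short stretch of a chaotic sequence gives no practical power of prediction.

```python
# The logistic map x -> r*x*(1-x): fully deterministic, yet at r = 4 it is
# chaotic and acutely sensitive to the starting value.
def logistic(x0, r=4.0, steps=30):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic(0.300000000)
b = logistic(0.300000001)  # differs by one part in a billion

for step in (0, 10, 20, 30):
    print(step, round(a[step], 6), round(b[step], 6))
# By step 30 the two trajectories have completely diverged.
```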

Some short sequences may be just too trivial for the concepts of randomness and chaos to be meaningfully applied to them. To take an extreme example, is it meaningful to say that the sequence 3, 8 is either random or chaotic – or are there any circumstances where the single number 3 could be? If a single number or very short sequence that is “just pulled out of the air” could be random, then what would the concept of such randomness amount to? Unfortunately there seems to be no other suitable word than random in the English language, but a concept other than randomness would more appropriately apply to such cases. It seems meaningless to say there is no pattern. But how many members must a sequence have before it might be considered to be patternless?

 

“Compressibility”

A different kind of definition of randomness was developed by Gregory Chaitin, a mathematics researcher at IBM’s T. J. Watson Research Center in New York, and also independently by Andrey Kolmogorov, a Russian mathematician. By this definition, a sequence is random if it cannot be expressed using a smaller quantity of information than is needed to present the actual digits. For example, the decimal sequence that results from dividing 1 by 127 repeats in a block of 42 digits, and writing out even one such block takes many more bits of information than the formula 1/127, which takes only a handful of bits to write. Therefore, by this definition, this (infinite but repeating) sequence is not random. And nor is the square root of 2, which was mentioned earlier in this chapter. But if no shorter formula exists for a particular sequence, then the sequence is random. (However, there are two problems with this. The first is that the formula itself, being shorter than any other way of expressing it, should by this definition be regarded as random. The second problem is that if, instead of a decimal scale in which the notation 10 represents ten, we use a scale in which the notation 10 represents one hundred and twenty-seven, then the result of the formula 1/127 would be expressed as 0.1.)
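Returning to the 1/127 example, its compressibility can be demonstrated directly. The sketch below carries out ordinary long division and notes where the remainder first repeats: the decimal expansion of 1/127 cycles in a block of 42 digits, so the whole infinite expansion is captured by those 42 digits plus the instruction “repeat”, and more compactly still by the formula itself.

```python
def repeating_decimal(numerator, denominator):
    """Long division: digits until the expansion starts repeating,
    plus the length of the repeating block."""
    digits, seen, remainder = [], {}, numerator % denominator
    while remainder and remainder not in seen:
        seen[remainder] = len(digits)
        remainder *= 10
        digits.append(remainder // denominator)
        remainder %= denominator
    period = len(digits) - seen[remainder] if remainder else 0
    return digits, period

digits, period = repeating_decimal(1, 127)
print("".join(map(str, digits)))  # 007874015748... (42 digits, then the block repeats)
print(period)                     # 42
```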

Building on the work of Alan Turing, Chaitin has shown that there are procedures that would produce random sequences, but that the randomness of a particular case may be impossible to prove or disprove. From this it can be concluded that, while randomness might imply absolute incalculability, practical incalculability does not necessarily mean randomness. For example, the infinite sequence obtained by calculating the decimal value of 1/127 and then adding 1 to every third and every 100th digit might appear to be incalculable, but the formula for obtaining it is contained in this sentence. The same would apply to the first 200 digits of this sequence.

Chaitin’s definition can be used to declare a trivial case to be random. For example, the sequence 25 requires less information to express than any formula that can produce it, such as 100 divided by 4, or 5 multiplied by 5, or any other possible way of representing it. By the same argument, any single digit, e.g., the number 2, is random. If these examples are to be taken seriously, they seem to give randomness a rather strange meaning. If they are not to be taken seriously, how long must a sequence be for Chaitin’s criterion for randomness to be meaningful? If this implies that only infinitely long sequences are random, then randomness would not be a practically useful concept. Or it could mean that Chaitin’s definition needs some logically justified qualification, i.e., not just one of convenience.

 

Logical Independence

In addition to compression, Chaitin invokes the principle of logical independence, which fits well with the idea of haphazard behaviour. He says, “two chance events are said to be independent if the outcome of one has no bearing on the outcome of the other”. (One might wonder why he included the word chance.) He illustrates this with a notional type of sequence of 1s and 0s, which he considers to be random. The sequence is derived using the number of possible solutions there are to a particular type of equation. One of the parameters of the equation is continually changed in value, starting at 1 and going through all the integers. For each value of this parameter, if there is a finite number of solutions to the equation, a 0 is added to the sequence, and for an infinite number of solutions a 1 is added. Chaitin regards the resultant sequence as random, on the premise that the number of solutions to the equation when the parameter is n has no bearing on the number of solutions when the parameter is n+1. It seems to me that if events are the result of a systematic process they are not independent. Also, I would regard a sequence that could be reproduced by repeating the process that derived it as consistently causal, which may or may not make it random. (Chaitin demonstrates mathematically that the particular sequence generated this way can never be known, even partially, so it is curious that there should be argument about whether it is random.)

 

“Causally Random”

When a gas (in a container) is uniform in temperature, its molecules are usually considered to be in a state of random motion. They are moving and colliding so that their speeds and directions are continually changing. Yet the gas has a demonstrable orderliness. The statistical distributions of velocity and direction of the molecules are stable, and can be readily calculated if the temperature, pressure and type of gas are known. In a gas that is uniform in temperature and pressure, any section of the total volume will display the same statistical distribution as the whole. The condition of the gas is therefore clearly causal. In fact, to the extent that it can be represented by an identifiable statistical distribution, it displays a pattern.
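This stability is easy to exhibit numerically. The sketch below draws molecular speeds for a notional sample of nitrogen at room temperature from the Maxwell-Boltzmann distribution (each velocity component being normally distributed), and shows that a modest subsample reproduces the same average speed as the whole: individually haphazard motion, collectively a fixed pattern.

```python
import math
import random

K_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # temperature, K (roughly room temperature)
M = 4.65e-26        # mass of one N2 molecule, kg

def speed():
    """One molecular speed: each velocity component is Gaussian with
    variance k*T/m, as the Maxwell-Boltzmann distribution prescribes."""
    sigma = math.sqrt(K_B * T / M)
    vx, vy, vz = (random.gauss(0, sigma) for _ in range(3))
    return math.sqrt(vx**2 + vy**2 + vz**2)

random.seed(0)
speeds = [speed() for _ in range(100_000)]

print(round(sum(speeds) / len(speeds)))      # whole sample: ~476 m/s
print(round(sum(speeds[:10_000]) / 10_000))  # a sizeable part: also ~476 m/s
```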

Another phenomenon described as random is the “background noise” in electronic devices such as amplifiers. It also can be attributed to physical causes, in this case the thermal vibrations of the molecules in the metal conductors in the circuits. This noise is sometimes used to produce pseudo-random sequences.

 

Information Content

Another concept developed by Kolmogorov is “point-wise randomness”. Here physical objects have an intrinsic information content, which is the amount of information required to completely describe an object in terms of the individual characteristics and conditions of each particle in it.

(To be more specific, the amount of information that an object contains is the total number of possible quantum states of all the physical components of that object in its present condition. This number has no dimensions, i.e., it is just a number. The “object” could be a specified collection of material, or a particle, or a wave, or a system.

For any change in the condition of an object there would be a corresponding change in its information content. This change in condition would be the result of the object having had a corresponding input or output of mass and/or of energy.

According to the First Law of Thermodynamics, neither mass nor energy can be created or destroyed (although each can be converted to the other according to Einstein’s equation E = mc²). Accordingly, information cannot be created or destroyed (or, more correctly, cannot be increased or reduced).

There is also a different concept by which information consists of symbols that represent details of descriptions. These descriptions can relate to anything that can be thought about. The symbols can be in various physical forms:

  • sound, as in speech, etc.;
  • visual, as in writing or diagrams, etc.;
  • electronic, as in computers and telecommunications systems, etc.;
  • magnetic, as in tapes and cassettes;
  • electrochemical, as in brains;
  • mechanical, as in clockwork, locks and keys, and devices consisting of levers using springs and/or weights and/or gravity that could be very complex or as simple as mousetraps.

In all of these forms, if the symbols are to be useful their significance must be recognised, that is, there must be other information relating the symbols or arrangements of symbols to the ideas they represent.

The handling of information, i.e., storing arrangements of the symbols, transferring them to other places, and performing logical processes on them, can be done using each of the various physical forms. When the symbols are in electronic and/or magnetic media, and coded in digital form, the “information content” or “information capacity” is measured in bits. One bit is “the smallest unit of measurement used to quantify computer data”. When the symbols are coded as either 1s or 0s, a single symbol is measured as one bit. Larger units of information are kilobits (Kb), megabits (Mb) and gigabits (Gb). A series of eight bits is referred to as a byte, and there are corresponding larger units – KB, MB and GB. These units are dimensionless, but the transmission of information is measured in, e.g., megabits per second (Mbps). Using this measuring system, equipment can be designed so as to be able to cater for specific quantities of information.
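As a small worked example of these units: to give each of N equally likely symbols a distinct binary code takes log₂(N) bits, rounded up to a whole number, and transmission times follow directly from the bit count and the line rate. The figures below are illustrative only.

```python
import math

def bits_per_symbol(alphabet_size):
    """Whole bits needed to give every symbol in an alphabet a distinct code."""
    return math.ceil(math.log2(alphabet_size))

print(bits_per_symbol(2))   # 1 bit: two symbols can be coded as 0 and 1
print(bits_per_symbol(26))  # 5 bits suffice for a 26-letter alphabet

# Units: 8 bits = 1 byte; storage is quoted in bytes, transmission in bits/second.
message_bits = 1_000_000 * 8     # a one-megabyte message
line_rate = 10_000_000           # a 10 Mbps link
print(message_bits / line_rate)  # 0.8 seconds to transmit
```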

In Chapter 9 The Hard Problem of Consciousness I defined intelligence as the processing of information. In this definition the two concepts of information described above turn out to be two perspectives of the same thing.)

 

The information-content concept is in contrast to the statistical description given above under “Causally Random”. In this concept, an object becomes “more random” when more information is needed to completely describe it. It gives rise to an expression of the second law of thermodynamics in terms of information content, namely, that the information content of a closed system can never decrease. (A closed system is one that neither gains energy from some external source nor loses energy to some external sink.) This concept of randomness is more appropriately referred to as “algorithmic entropy”. It is a way of expressing the condition of a patternless collection of elements that have internal patterns, which was discussed earlier.

By the criteria of information content and of what I referred to in the previous section as causally random, there can be degrees of randomness. So these concepts are different from the other, “yes or no”, concept of randomness. The “disorder” of a gas in equilibrium (i.e., at uniform temperature) is not one of randomness but of statistical simplicity. But a greater amount of information would be necessary to describe the details of every atom or molecule in a gas than if they were in a liquid or solid, and it is this that makes a gas more “disorderly” than a liquid or solid.

I have another type of problem with Kolmogorov’s concept. More information would be needed to describe a gas at equilibrium at high temperature than an equivalent body of gas confined to the same volume at lower temperature. This is because the average velocities and the range of velocities of the molecules and the excitation of the electrons would be higher in the hotter gas, and so a larger number of bits of information would be needed for their description. So a hotter gas would be “more random” than a cooler one.

 

Absence of Pattern

As discussed under Order, above, a suggested test of randomness is the absence of a discernible pattern. In this sense, pattern means:

  • the continued repetition of a section, which might be extremely long;
  • or a systematic development of the section each time it is repeated, without any irregular intervals separating the repetitions or developments.

It is theoretically possible to check whether repetition is present in any finite sequence. In infinite sequences, however, it can never be possible to observe all of the members unless they have a recognisable repeating pattern. But when no such repetition can be discerned it might not be valid to claim that there was none. The value of π, which runs to an infinite number of decimal places, appears to have no discernible pattern, at least in the first billion or so of its digits that have been calculated. Yet the value of π can be obtained as the sum of an infinite series of diminishing terms, in accordance with a simple sequential formula (one example is shown below). The formula can be described using a finite amount of information, so it clearly has a pattern, and so by this definition of randomness it is not random. The same applies to the exponential e, and to an infinite number of other sequences, e.g., certain square roots, cube roots, etc., and patterns generated by cellular automata. Some mathematicians consider that these and many other sequences, an infinite number of them in fact, are truly random. An example is the sequence of the differences between successive prime numbers. Others can be generated by looking at other aspects of the natural numbers. If these sequences can be represented by an expression of the formulas for generating them, they are, by the criterion of compressibility, not random. But, by starting at, say, the 2000th digit of one of these patternless sequences, it would be quite easy to produce a workable pseudo-random sequence of any required length.
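One such series is the Leibniz formula, π/4 = 1 − 1/3 + 1/5 − 1/7 + …. The sketch below sums it. It converges very slowly, and far better formulas exist, but the point stands: a rule expressible in a couple of lines, i.e., a finite amount of information, specifies every one of π’s apparently patternless digits.

```python
# Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
def leibniz_pi(terms):
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))

print(leibniz_pi(1_000_000))  # 3.14159165... (slowly approaching pi)
```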

 

Complexity

A further suggested criterion of randomness is extreme complexity. However, some extremely complex sequences and phenomena are known to be systematic and causal. Examples are some of the continuous curves studied in chaos theory, and the discrete formations that arise from some cellular automata, as the sketch below illustrates. But the complexity of something is a matter of the opinion of the person considering it, who may or may not have some knowledge or insight into the particular case. (Complexity is discussed also in Chapter 7 Intelligent Design as a Scientific Theory.)
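Cellular automata make the point vividly. The sketch below runs the elementary automaton known as Rule 30, whose entire law of behaviour is the one-line update inside the loop, from a single live cell. The resulting pattern is complex enough that its centre column has been used as a pseudo-random source, yet every cell follows from a trivially simple cause.

```python
# Rule 30: each cell's next state is left XOR (centre OR right).
WIDTH, STEPS = 79, 30
cells = [0] * WIDTH
cells[WIDTH // 2] = 1  # start from a single live cell

for _ in range(STEPS):
    print("".join("#" if c else " " for c in cells))
    cells = [
        cells[(i - 1) % WIDTH] ^ (cells[i] | cells[(i + 1) % WIDTH])
        for i in range(WIDTH)
    ]
```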

 

Types of Phenomena

In an attempt to put randomness into a context, I have tentatively made distinctions between four notional types of phenomena:

  • consistently causal;
  • probabilistic;
  • pure chance;
  • random.

Causal events are the consequence of other events. Being consistently causal implies the (theoretical) possibility of calculation or reliable precise prediction, and, generally, the capability of being reproduced (as distinct from being copied). Consistently causal phenomena may produce systematically repeated patterns, which could then be taken as evidence of their nature. But, as discussed above, the apparent absence of such patterns does not necessarily indicate non-causality or randomness.

Probabilistic implies a type of behaviour observed mainly in sub-atomic particles. The behaviour of individual particles appears to be haphazard, but a consistent probability distribution becomes evident when significantly large numbers of particles are considered as a group. An example is the half-life of radioactive elements. It is known to a precise degree of accuracy how long it takes for a piece of a radioactive element to be reduced to half its mass through radioactive decay. But the time interval between the decays of individual atoms appears to be quite haphazard. The time intervals between successive occurrences of radioactive decay within a mass of suitable material are used in producing pseudo-random number sequences.
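The half-life example can be simulated directly. In the sketch below each atom of a notional isotope decays independently at the same constant rate, so individual survival times are haphazard; yet half the population is reliably gone after each half-life. The numbers are illustrative and belong to no particular element.

```python
import math
import random

HALF_LIFE = 100.0               # seconds (an illustrative figure)
RATE = math.log(2) / HALF_LIFE  # per-second decay rate of each atom

random.seed(0)
# Each atom's survival time is drawn independently: haphazard one by one.
lifetimes = [random.expovariate(RATE) for _ in range(100_000)]

for t in (100, 200, 300):
    survivors = sum(life > t for life in lifetimes)
    print(t, survivors)  # ~50000, ~25000, ~12500: the half-life emerges reliably
```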

With everyday objects, the number of sub-atomic particles is so great that consistent causality is observed. The consistency of the probability distributions might suggest an underlying consistent causality; Einstein expressed this in his statement “God does not play at dice.” In fact, there have been attempts to derive a theory to show an underlying consistency behind quantum behaviour, but to date none has been successful.

It may be speculated that there are types of causality that are not uniformly or probabilistically consistent. When apparent cases of this are detected, it is usual to assume that there is an underlying causal consistency, and efforts are made to discover it. In effect, this implies that there are no “sporadically causal” phenomena, by classifying all potential cases as “unknown causes”. But might there be a category of “partially causal”? If so, what would be the part that is not causal? These speculative concepts about what is real and logical are different from the “self-evident” rules of everyday logic. One set of concepts that are not self-evident applies in quantum theory. Perhaps there might be others that allow a category that is neither causal nor random. If there is really such a thing as free will it would need to be attributed to something like “partial causality”.

Pure Chance implies no practical possibility of either calculation or reliable prediction, but assumes a consistently deterministic world. One example of pure chance is the sequence of the second last digits of the last telephone numbers on pages of a directory. The roll of dice (assumed to be properly balanced and fairly thrown) is pure chance, but if all the prior conditions were precisely known, the result could, in principle, be correctly calculated.

A further example is the unexpected meeting of friends who, unbeknownst to each other, are both visiting a distant town or country for the first time. It might be possible to calculate the probability of this for a specified person or pair of persons if their lifestyles, etc., were known, but no reliable prediction could be made. But, given the number of people in the world and the time they spend travelling, such meetings occur very often. I have actually been present at two such occasions. A trivial example of pure chance coincidence, this time with a higher probability, is two friends having the same birthday, where the probabilities can be readily calculated, as shown below.
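The birthday case is the easiest of these to calculate. For one specified pair of friends the chance of a shared birthday is simply 1/365, but in a group the probability that some pair shares a birthday rises surprisingly quickly, as this sketch shows (leap years ignored).

```python
def shared_birthday_probability(group_size, days=365):
    """Probability that at least two people in the group share a birthday."""
    p_all_distinct = 1.0
    for i in range(group_size):
        p_all_distinct *= (days - i) / days
    return 1 - p_all_distinct

print(round(shared_birthday_probability(2), 4))   # 0.0027, i.e., 1/365
print(round(shared_birthday_probability(23), 4))  # 0.5073: better than an even chance
print(round(shared_birthday_probability(50), 4))  # 0.9704
```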

Some of these examples are isolated occurrences, such as the unexpected meeting of friends. Others are sequential, such as various processes of gambling. The differences show that there are fundamentally different types of pure chance phenomena. Also, pure chance can present short sequences of the kind that I would regard as too trivial to consider for randomness.

In addition, there is a different class of things that are sometimes attributed to chance. In theoretical cosmology and physics several fundamental questions remain completely baffling. Why does space have three dimensions? Why do the “fundamental constants” have their particular measured values? One answer is that an intelligent creator made the universe that way. Another is that there is an undiscovered causal reason, which scientists are trying to unravel. But some theorists say it was a matter of pure chance, implying that although there would have been a cause, the process would have been too complex or obscure for any detailed explanation. Could it have been random?

Randomness implies (for non-trivial cases):

  • absence of consistency (including probabilistic consistency); and
  • practical and theoretical incalculability; and
  • inability to be reproduced except by direct copying.

I think the assumed mutual exclusivity of randomness and causality is implicit in all of these (overlapping) attributes.

This might seem to be an arbitrarily exclusive set of criteria, but it presents a concept intrinsically different from each of the above examples of pure chance. In practice, though, there is often no apparent way of deciding whether a particular (significantly long) sequence is pure chance or random as defined above, or a section of a readily calculated infinite sequence.

But does the definition allow any actual identifiable cases of truly random sequences or phenomena? The definition would rule out all phenomena if we think that the material world is purely causal. But Chaitin claims that there must be an infinite number of irrational numbers that are random. These would meet the above criteria, but we have no sure way of identifying most of them.

However, the existence of irrational numbers is denied in the philosophical idea of finitism, despite the apparent “naturalness” and usefulness of π and the exponential e, both of which are irrational. Finitism is the idea that a “mathematical object” does not “exist” unless it can be constructed from natural numbers in a finite number of steps. There is, however, no accepted philosophical justification that numbers of any kind are anything other than a mental invention. Irrational numbers are defined as numbers that cannot be expressed as a ratio of two whole numbers, i.e., in the form a/b, with b not being zero.

 

Conclusion

If randomness is thought of as applying to something not caused by anything, it may be compared with infinity. Although invoking infinity is very useful in mathematical and other inquiries, it is like a distant light that can never be reached. Until a “real” infinity or a “real” case of non-causal randomness is identified, these concepts are just theoretical constructs. The only examples I know of are the unidentifiable irrationals.

 

If causality is allowed, randomness is merely complete absence of pattern or consistency. Cases of this are the calculable irrational numbers such as the square roots, cube roots, etc., and the patterns made by some types of cellular automata.

What category do pseudo-random sequences belong to? Those produced by an (undisclosed) but known formula are clearly causal, irrespective of how difficult it may be to predict later parts of them using knowledge of earlier parts. If they show no pattern throughout their (usually finite) length I would accept that they could be referred to as causally random.

 

In the case of the molecules of a gas at uniform temperature, if its constituents, temperature and pressure are known, along with either its mass or volume, a statistical distribution of the locations or the velocities of the gas could be calculated. Although the individual atoms or molecules of the gas are constantly moving, sometimes faster and sometimes more slowly, and continually colliding, the situation is consistent in its condition, and could be regarded as orderly. Nevertheless, this condition is usually referred to as disorder.

But it does have one feature that could appear to be random. If some tiny area on the container of the gas could be monitored so as to record each time a gas molecule struck it, the intervals of time between successive strikes would appear to be random and could be used as a pseudo-random generator. But I would regard it as a pure chance sequence. I think the apparent disorder of a gas should not be thought about in terms of randomness but in terms of a different concept, entropy. Entropy expresses the arrangement of the energies of each of the parts of something. It is quantified by an equation known as Boltzmann’s Law, S = k log W, where W is the number of possible microscopic arrangements of the parts. Entropy relates to how the parts of something can change or interact in some way to produce some kind of work, and in the process reduce their ability to produce more work. Examples are a gas expanding and a chemical reaction that gives off heat. Entropy is said to increase as the structure becomes less able to produce more work. In the case of a gas, the molecules are just as “disordered” after expansion, but their average velocities are decreased.

 

I would think that pseudo-random sequences involving the sampling of natural phenomena would be pure chance. However, because of the impossibility of knowing prior conditions, sequences produced by this means are usually unrepeatable in practice. It may be pedantic to deny that this kind of pseudo-random sequence that defies mathematical analysis is truly random. If the sequence were not reproducible, the distinction would seem to be merely a matter of definition. Or perhaps a new category such as patternless would be appropriate for them, and this would maintain their distinction from both random and other pseudo-random sequences.

All of my concrete examples of pure chance occurrences are causal, even though they may appear disorderly. Their relationship to true randomness is roughly similar to the one that inductive truths bear to notional absolute truths. (The category of “partly causal” referred to above is just my tentative speculation. The origin of the fundamental numbers of physics is too speculative for them to be called either random or causal at this stage of our knowledge. A Supernaturalist might think they were carefully chosen, as per the anthropic principle, and therefore causal. But my concept of the supernatural would allow other, non-material, possibilities.)

Pure-chance phenomena can have a range of characteristics, from apparent randomness to apparent orderliness. This implies that the very convenient concept of order refers to what is individually perceived by people rather than something absolute. And the concept of randomness is continually invoked with quite imprecise intentions and criteria.

In fact, nothing in this discussion will alter the very convenient popular use of the word random to refer to either something apparently haphazard or the result of a whim.

Throughout this book I have tried to make all my arguments as “sound” as possible. But my logic has continually come up against concepts that don’t seem to belong to the material world, although in general usage they are often treated as if they do. Examples are infinity, absoluteness, perfection, nothingness, randomness and timelessness. This chapter has discussed one of them and the others have been commented on as part of the discussion of other topics. Some of them are paradoxical concepts. Some of them have been attributed to the supreme supernatural entity, God. I have continually tried to distinguish any possible supernatural entity from things that are part of the material world. So perhaps randomness is not material but could be thought of as the supernatural counterpart of the material concept of causality. Perhaps all of these concepts that don’t seem to belong to the material world are supernatural counterparts of material concepts, in addition to the supernatural characteristics suggested in Chapter 4 The Nature of the Supernatural. Perhaps some physicists who are very partial to symmetry may see them as part of the new physics mentioned in the latter part of that chapter.