Throughout this book I have tried to use arguments that are consistent with the processes of logic, assuming that an argument that is logical should be more reliable than one that is not. But I have avoided the assumption that trying to be logical always produces a conclusion that is completely reliable. Logic is generally acknowledged to be an essential tool in science and technology. It is continually useful, but not always successful, in resolving difficulties in many other aspects of life.

But there are traditions that regard logic as intrinsically misleading, and therefore to be treated with caution or even completely rejected. In some religions the way to truth is always through faith not through (other people’s) logic or evidence. This raises the question of how the believers first obtained their faith, and how reliable the process was.

In Zen Buddhism it is intuition and not faith or reason that leads to enlightenment. Learning how to develop the proper intuition involves having to confront logical paradoxes or inconsequentialities and see the “truth” behind them. I think there is no truth or enlightenment behind such statements – which are known as koans – beyond the fact that words can have meanings or connotations or implications that are not usually noticed. After meditating on a koan someone may suddenly see its truth, although it may not be possible to express in words what that truth is. How useful or reliable such enlightenment may be is a matter of conjecture. To me, the logic of a koan is like the logic of the distorted images painted by Picasso or Francis Bacon. (A well-known example of a koan is:

*two hands clap and there is a sound: what is the sound of one hand?*.)

So when and to what extent can we trust logic? This chapter discusses the processes of logic, the selection of a logical process to use in a particular case, and the reliability of the inputs in the application of logic. Any examination of this kind must use logical reasoning. So if the inquiry finds that logic is indeed reliable, will this be conclusive or just a self-congratulatory exercise? If it is rigorous it should at least show that logic is consistent and reliable within its own limitations. If the inquiry finds that logic is unreliable we might have a paradox. If the inquiry finds that logic is usually but not completely reliable, it may still help to show where there may be weaknesses in particular cases, and how much confidence can be given to the conclusions.

Logic is, by intention, a rigorous system for reaching inevitable conclusions from initial premises. Logic is not, as is sometimes believed, a process for arriving at truth. The process consists of applying rules (that are accepted as being reliable) to the details of the premises. The premises may be either hypothetical propositions or statements of purported fact. Whether the logic refers to any aspect of the real world, or has any significance of any kind, is incidental. It might be exploring an everyday matter or a world that operates in ways that are impossible in the one we are familiar with.

Mathematics is logical. It uses precisely defined procedures to derive conclusions from initial premises. All procedures of logic can be expressed and processed in mathematical or symbolic terms. For example, the syllogism is expressed, and perhaps more fully understood, by the theory of sets. Functions derived using Boolean algebra enable computers to perform all the procedures of classical logic and mathematics. So mathematics is a specialised part of the body of logic.
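The set-theoretic reading of the syllogism mentioned above can be sketched in a few lines of Python. The sets and their members are invented for illustration: “all A have characteristic B” becomes “the set of As is a subset of the set of things with B”.

```python
# A syllogism rendered in the language of sets. "All crows are black"
# becomes "the set of crows is a subset of the set of black things".
# The names and members below are illustrative only.
crows = {"crow_1", "crow_2"}
black_things = {"crow_1", "crow_2", "raven_1", "coal"}

# Premise 1: All crows are black -> crows is a subset of black_things.
assert crows <= black_things

# Premise 2: this thing here is a crow.
this_thing = "crow_1"
assert this_thing in crows

# Conclusion: this thing is black (membership carries over to the superset).
print(this_thing in black_things)  # → True
```

Note that the conclusion is guaranteed only by the subset relation, not by any fact about real crows: change the premises and the same procedure yields a different, equally valid, conclusion.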

Experience has shown that one set of procedures in the range of existing logics may apply to only some kinds of cases, and that different procedures apply to others. This will be illustrated later in this chapter. Sometimes the initial premises are claimed to be illogical because they do not conform with everyday experience. Examples of this are the geometry of curved space and of space having more than three dimensions, both of which are relevant to modern physics. But premises are neither logical nor illogical; they are either right or wrong.

One might expect a particular use of logic to be reliable if its procedures could be shown to be valid, relevant and adequate, and its premises to be meaningful, unambiguous, relevant and adequate for that issue. Often this is all that we rely on, hoping every step is rigorously performed. But how can it be tested? If the particular case refers to the everyday world, it may be possible to check whether its conclusion agrees with observed facts. However, there may be doubt about the reliability of the observation itself, or of the initial premises. For hypothetical questions, evidence is, at best, of only indirect use. In some cases the “truth” or “falsehood” of a conclusion may be self-evident. Unfortunately, what is self-evident is a subjective matter. In other words, although we commonly and confidently rely on evidence, there is no absolute test of reliability, except, perhaps, some tautologies, such as *A = A*, where we might expect that *A* will always equal *A*, irrespective of what *A* represents.

These issues will now be discussed.

**The Process of Logic**

#### Logical and Non-logical Procedures

Logical reasoning may be expressed in everyday language (informal logic), or using precisely expressed language combined with symbols and letters of the alphabet, as in algebra. The symbols are substitutes for descriptive words from everyday language and the letters are generalised representations of statements or ideas. This latter type of expression is formal logic. Statements like *A = B* and *if P is true then Q is true* are examples of formal logic. Most use of logic is informal. For formal logic to have any meaning in the practical world, the symbols and letters must be replaced by words.

All statements in logic are either premises (purported or hypothetical “fact”) or conclusions drawn from the premises. In informal logic the procedures for arriving at the conclusions are often implied rather than explicitly expressed.

A reasonable-sounding series of statements may have a conclusion that is patently contrary to all experience. Sometimes this is a sign of a flaw in the reasoning, and one type of flaw is that the process uses procedures that are not accepted in logic. A conclusion is *valid *only if it is derived using accepted procedures of logic. But a valid conclusion is not necessarily true, and an invalid conclusion not necessarily untrue.

Both of these points can be illustrated using any procedure that uses one or more rules of logic. The syllogism is one example, and it could be generally expressed as:

*All A have the characteristic B.*

*This thing here is an A.*

*Therefore this thing must have the characteristic B.*

The letter *A* can refer to any class of things whatever – crows, lines of poetry or any type of thing that can be said (not necessarily truthfully) to have some characteristic in common. The letter *B* represents the common characteristic. For example:

*All crows are black *(or have the characteristic of being black in colour)*.*

*This is a crow.*

*Therefore it is black.*

Some crows may not be black, but that would have no bearing on the *validity* of the conclusion (which does not necessarily mean that the conclusion is true). If there happened to be a crow that was not black and it was the one referred to, the conclusion would be untrue, because the initial premise was untrue.

A non-logical version of this process is:

*All crows are black.*

*This bird is black.*

*Therefore it is a crow.*

Although many other species of bird are black, in a real-life situation the conclusion may well be true, even though as derived above it is not logically valid. The second premise does not identify an instance of the class of things specifically mentioned in the first premise, i.e., crows, but refers to the characteristic of a bird which may or may not be a crow. But a valid logical process (which happens not to be a syllogism) could lead to this conclusion, as follows:

*Only crows and herons come to this place.*

*No herons are black.*

*Crows are black.*

*This bird here in this place is black.*

*Therefore this bird is a crow.*

This conclusion might indeed be untrue if, for example, the premises were untrue: other species of black birds might be present, or some herons might be black.

So what makes some procedures logical and others not? One answer is that, when you think about it, the correctness of logical procedures and the incorrectness of non-logical procedures become obvious. Unfortunately, what is glaringly obvious to one person can be glaringly false to another. But by trying to find possible cases in which a particular procedure could lead to a patently incorrect conclusion, and then amending the procedure so that such cases cannot arise, we might conclude that the procedure is justifiable.

Another answer is that the conclusions from a logical procedure will always agree with the actual evidence, and the conclusions from a non-logical procedure will not. For this to be a satisfactory answer, and I think it can be, it is necessary that there be no errors or ambiguities in any of the statements used, and that the procedures that have been applied are appropriate to the case. As will be discussed, it may sometimes be debatable whether these conditions have been satisfied. And the word *always* implies something impossible: that every possible case has been tested. So, for practical purposes, we interpret it to mean that many cases have been tested and none has invalidated the procedure. But, acknowledging that some invalidating case just might be waiting around, we hedge our bets and give procedures (and premises) only a qualified justification, referred to as *induction*. Moreover, even this doesn’t allow for untestable conclusions, e.g., those drawn from hypothetical propositions.

Nevertheless, most scientists and technologists believe that the soundness of the procedures they use in their professional work is beyond doubt. Where accepted premises have been processed using accepted relevant procedures and the conclusion tested against observed phenomena, using rigour in all stages, the process is virtually always reliable. Errors are always attributed to the premises, including any measurements and assumed interactions they contain. For example, when computer models of any kind fail to mirror observed real-world behaviour it is the premises of the model that are considered to be at fault. So, by induction, the *logic* is valid, and overwhelmingly reliable. But, when induction was discussed in an earlier chapter, it was noted that in the nineteenth century the pre-relativistic, pre-quantum Newtonian mechanics appeared to be demonstrably reliable (although not as conclusively as logic).

### Incompleteness and inconsistency

In the late nineteenth and early twentieth centuries, mathematicians and philosophers tried to develop a logical basis for simple arithmetic, i.e., the arithmetic of whole numbers, 1, 2, 3, 4, etc. They were trying to show logically why, among other things, arithmetic always produces the answers that are obtained by counting. The attempts failed, ending in paradoxes. Then it was shown in 1931 by Kurt Gödel, using a complex logical process, that no effective logical system that is able to produce elementary arithmetic can be both consistent and complete. In 1936 Alan Turing provided an independent proof of this, using an approach based on considerations of an idealised computer. Because the basic rules of logic expressed by Boolean algebra apply to all branches of mathematics and to informal logic, this means that there would always be propositions that were expressed strictly according to the procedures of the particular branch of logic but could not be shown to be either “true” or “untrue”. A familiar example is the liar paradox.

How does this incompleteness affect the reliability of logic? In practical terms, not very much. In the case of paradoxes, it indicates that it is pointless to ask some types of question, e.g., is the following true: *This sentence tells an untruth.*? But since, in general, these are more sources of amusement than topics of practical importance, this doesn’t matter much.

Sometimes, however, a paradox may not be recognised as such, and an incorrect conclusion be reached. An example is contained in the conundrum known as “The Unexpected Hanging”, in which a prisoner is to be hanged within the next thirty days, but is told that the hanging will not occur on a day that the prisoner expects it. The prisoner thinks about the situation, and concludes that the hanging cannot be on the thirtieth day, because then it would be expected since there is no other day left for it to happen. But he also concludes that it cannot be on the twenty-ninth day, because when that day arrives the hanging would be expected, since it could not occur on the thirtieth. And the reasoning continues in this vein for all of the thirty days. Therefore the prisoner is confident of escaping the hanging. But on the seventh day a warder comes to take the surprised prisoner to the gallows.

The paradox is in the prisoner’s reasoning, yet often the claim is made that there is no fault in it. Because the prisoner has reasoned himself into believing that he will not be hanged he does not expect to be hanged on any day, presumably not even on the thirtieth day, when by his logic he both should and should not expect to be hanged.

Of course, one might assume that any prisoner faced with such a situation would be expecting a summons to the gallows every time a warder glanced his way, and would be very wary of any rationalisation that put him off his guard. But this is not a real-life matter but a problem in logic.

Paradoxes arise from contradictory self-reference or back-reference, which might not always be immediately obvious. Gödel’s proof used a roundabout form of self-reference. Some logicians have claimed that a self-referring statement should be regarded as logically illegitimate because it can lead to paradox. But not all self-referring statements lead to paradox. Consider the statement: *This sentence contains more than twenty letters*. It is self-referring, and demonstrably either true or untrue depending on the interpretation of the word *letters*, e.g., whether the seven occurrences of the letter “t” in the sentence are counted as seven or one. In everyday life it is continually necessary to make self-references, and any sequential line of reasoning must use them.
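The two interpretations of *letters* can be checked directly. A short Python sketch counts both the total occurrences of letters in the sentence and the number of distinct letters:

```python
# The self-referring sentence from the text, checked under both
# interpretations of "letters": total occurrences vs distinct letters.
sentence = "This sentence contains more than twenty letters"

total = sum(ch.isalpha() for ch in sentence)                 # every occurrence
distinct = len({ch.lower() for ch in sentence if ch.isalpha()})  # each letter once

print(total, distinct)          # → 41 14
print(total > 20, distinct > 20)  # → True False
```

So the sentence is true if occurrences are counted, and untrue if each letter is counted only once: self-referring, but not paradoxical.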

Other types of proposition whose truth or falsehood cannot be ascertained are mathematical conjectures. They may interest a few mathematicians and philosophers, but would seem abstruse and irrelevant to most other people.

These examples suggest that logic and mathematics are imperfect. I think perfection and imperfection are ideas about what something ought to be, and do not necessarily relate to what it is. We should take logic and mathematics as they are, not what we might ideally like them to be.

Are logic and mathematics both just constructs of the human mind, and, like science, very useful tools, justified by induction, but not independently true? Or is mathematics independent of the human mind but logic just a construct? Might they be different in another part of the universe, or in another universe? Philosophers differ about all this.

(As an aside, are electrons just constructs of the mind or do they exist independently of the mind? If they are independent, what about gluons, which are particles that provide the force that holds protons together against the electrical force that would otherwise push their components apart?)

Mathematics has an aura of infallibility. “Pure” mathematics, which does not refer to anything other than mathematics, is always self-consistent, if incomplete. It is when mathematics or logic is applied to actual cases that their reliability can come into doubt. But where does the fallibility lie?

**Fitting Procedures to Cases**

Syllogisms apply to things with some characteristic in common. As mentioned above, a syllogism starts with a statement of this characteristic: *All A have the characteristic B*. It must also have identified a particular specimen of A without identifying whether the specimen has characteristic B. It can then conclude that this specimen must have characteristic B because it is an A. So the syllogism is useful only in this special grouping of known and unknown details. For a different sort of problem a different set of procedures is needed, as, for example, in the case of crows and herons above. This demonstrates that the particular sequence of procedures to use for a problem is dependent on the thing to be discovered, the situation and the available information.

This need for fitting different sets of procedures to different cases is further illustrated by the different branches of mathematics, e.g., some procedures that are valid for scalar quantities are often invalid for vectors, and similarly for real and imaginary numbers. Some branches of mathematics were invented (or perhaps more properly, were discovered) before there were known applications for them, and were initially regarded by many people to be illogical. When they were subsequently found to be convenient ways of describing the operation of particular types of phenomena they had to be accepted. But what had been thought to be illogical was not the process of reasoning but the initial premises, for example that there could be the kind of numbers that are (perhaps unfortunately) known as imaginary numbers. So new procedures that obey the same old logical rules apply to new branches of mathematics that analyse new realms of science.

In rigorously conducted scientific inquiry, when the results are at variance with observed phenomena, it is taken for granted that something is wrong. Normal scientific procedures can usually identify whether it is the assumptions, the observations or the logical procedure that is at fault. With logic expressed in a natural language it is easier for a fallacy or a flaw to go unnoticed, as, for example, with the Unexpected Hanging paradox. Sometimes this is a result of basing the logic on premises that may be true but are not relevant to the question.

**Finding the Relevant Premises**

The difficulty in choosing the appropriate facts on which to base a chain of reasoning can be illustrated by logical puzzles. Consider the following example:

*Which is more likely to occur first, the third Monday of the month, or the third Friday of the month?*

We can derive a few facts from the way sequential days are grouped into weeks and months. Each fact might then allow a conclusion to be drawn to answer the puzzle, as follows:

1. The order in which the third Monday and Friday appear will be the same as the order in which the first Monday and Friday appear.
2. Monday, which is regarded as the first (or sometimes the second) day of the week, occurs before Friday. So we might conclude that the third Monday would always occur before the third Friday.
3. Not only does each Monday occur a few days before a Friday, but also each Friday occurs a few days before a Monday. So we might conclude that the third Monday is as likely to occur before the third Friday as the third Friday is to occur before the third Monday.
4. The first Monday will occur before the first Friday if the first day of the month is a Saturday, Sunday or Monday.
5. The first Friday of the month will occur before the first Monday if the first day of the month is a Tuesday, Wednesday, Thursday or Friday. So there are three chances in seven that the third Monday will occur first and four chances in seven that the third Friday will occur first. So we conclude that the third Friday is more likely to occur first.

When given this puzzle, very few people think the conclusion in step 2, above, would be the correct answer, but quite a few settle on the conclusion in step 3. Step 5 suggests that both are incorrect. Why, then, should we be sure that the conclusion in step 5 is correct? What if the first day of the month were more likely to be a Saturday, Sunday or Monday than one of the other days? But, since it is possible to show mathematically, and to demonstrate by looking at the calendars of several successive years, that each day of the week has an equal chance of being the first day of a month, the conclusion in step 5 seems to be correct.
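The counting in step 5 can be verified with a short Python sketch. It enumerates the seven possible weekdays for the first of the month (encoded, arbitrarily, as 0 = Monday through 6 = Sunday) and compares the dates of the third Monday and the third Friday in each case:

```python
# For each possible weekday of the 1st of a month, find the dates of the
# third Monday and third Friday and see which comes first.
# Encoding: 0 = Monday ... 6 = Sunday.
def first_weekday(start, target):
    """Day of the month (1-based) of the first `target` weekday,
    given that the month starts on weekday `start`."""
    return 1 + (target - start) % 7

MONDAY, FRIDAY = 0, 4
monday_first = 0
for start in range(7):
    third_monday = first_weekday(start, MONDAY) + 14
    third_friday = first_weekday(start, FRIDAY) + 14
    if third_monday < third_friday:
        monday_first += 1

print(monday_first, 7 - monday_first)  # → 3 4
```

The third Monday comes first in three of the seven cases and the third Friday in the other four, matching the conclusion in step 5.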

This simple example shows the need to have all the relevant facts, and also to test the conclusion once it is reached. However, if the two days had been Monday and Thursday in this puzzle, then the correct answer would be Monday, which was the conclusion reached in step 2 above. This shows that the fact that a particular process of reasoning leads to a correct conclusion does not necessarily mean that the process is logically valid.

There is another interesting aspect to this kind of puzzle. If the first Friday precedes the first Monday more often than not, and the first Monday precedes the first Thursday more often than not, will the first Friday precede the first Thursday more often than not? Without thinking too hard, it might seem that it must. But the only way that the first Friday can occur before the first Thursday is for the first day of the month to be a Friday. So in this case the first Thursday occurs first six times as often as the first Friday does. This illustrates the concept of non-transitive relationships, in which *A* can beat *B* and *B* can beat *C* but *C* can beat *A*. A more common example is the game of *Paper, Rock, Scissors*, where paper can (notionally) wrap up and “defeat” a rock, a rock can blunt scissors and scissors can cut up paper.
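The non-transitive triple can be checked by the same kind of enumeration as before, counting for each pair of weekdays how many of the seven possible month-start days put one before the other (again encoding 0 = Monday through 6 = Sunday):

```python
# Count, over the seven possible weekdays for the 1st of the month,
# how often the first occurrence of weekday `a` precedes that of `b`.
# Encoding: 0 = Monday ... 6 = Sunday.
def first_day(start, target):
    # day of the month of the first `target` weekday
    return 1 + (target - start) % 7

def precedes(a, b):
    return sum(first_day(s, a) < first_day(s, b) for s in range(7))

MON, THU, FRI = 0, 3, 4
print(precedes(FRI, MON))  # → 4  (Friday first in 4 of 7 cases)
print(precedes(MON, THU))  # → 4  (Monday first in 4 of 7 cases)
print(precedes(THU, FRI))  # → 6  (yet Thursday beats Friday in 6 of 7)
```

Friday beats Monday, Monday beats Thursday, but Thursday beats Friday: the relation does not carry through the chain.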

Now consider a different type of puzzle:

*A contestant in a game show is faced with three doors and there is a prize behind one of them. The contestant can win the prize by guessing the correct door. Once the guess is made, the compere opens one of the other two doors to show that there is no prize behind it. The compere then gives the contestant the option to change to the remaining door*.

*What difference, if any, would changing make to the contestant’s chance of winning?*

In this puzzle, people usually assume that the relevant fact is that, before the choice was made, each door had an equal likelihood of being the one with the prize. They then assume that after the empty door has been opened, the chances of the prize being behind the chosen door and the remaining door are still equal. So, on average, there is neither advantage nor disadvantage in the contestant changing. This point of view is often held vigorously, even after the following reasoning is explained.

From the information given in the puzzle:

- there are two chances in three that the prize was behind a door that the contestant had not chosen;
- therefore once one of the non-chosen doors is eliminated there are two chances in three that the prize is behind the other;
- therefore the contestant’s chance of winning the prize must be improved by changing to the unopened door.

This is the usual “correct answer”, and it can be confirmed by a trial, going through all the possible choices and counting how many favour changing and how many favour not changing. Indeed, how else could the correct answer be confirmed, and hence the decision made as to which were the relevant facts to consider in the logic? But some people challenge the logic.
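Such a trial can be sketched as a simulation. The sketch assumes the usual reading of the puzzle, namely that the compere knows where the prize is and always opens an empty, unchosen door; the door labels and the number of trials are arbitrary:

```python
import random

# Simulate the game under the usual assumptions: the compere always
# opens a door that is neither the contestant's choice nor the prize.
def play(switch, rng):
    prize = rng.randrange(3)
    choice = rng.randrange(3)
    opened = next(d for d in range(3) if d != choice and d != prize)
    if switch:
        choice = next(d for d in range(3) if d != choice and d != opened)
    return choice == prize

rng = random.Random(1)
trials = 100_000
stay = sum(play(False, rng) for _ in range(trials)) / trials
switch = sum(play(True, rng) for _ in range(trials)) / trials
print(round(stay, 2), round(switch, 2))  # close to 0.33 and 0.67
```

Over many trials, staying wins about one time in three and switching about two times in three, as the reasoning above predicts.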

The assumed idea of the game is to put contestants in a position of having to decide whether to change from the chosen door, possibly with an audience shouting advice. So the door the compere opens will never be the one the contestant has chosen, and will never reveal the prize. For the game always to give the contestant the option of changing, the compere obviously has to know which door conceals the prize, but this is not stated explicitly. Indeed, as expressed above, the puzzle refers neither to a series of trials of the same kind (it refers to only one), nor to whether the compere knew.

Another objection is that as far as one can tell from the puzzle, the compere may have had a plan to open an “empty” door and give the contestant a second chance whenever a contestant had chosen correctly, but not when a contestant had guessed incorrectly.

Also, if a series of contestants is assumed, it is possible that the prize may be consistently placed more often than average behind one particular door and also that contestants may consistently have a preference for a particular door. This would mean that a contestant’s initial choice could have a greater or lesser chance than one in three of being the one concealing the prize. The bias would need to be very strong before it would affect a yes/no answer to the puzzle, so a qualitatively correct answer would usually be arrived at, but on faulty premises.

To remove such objections but still justify the “correct” answer, the wording of the puzzle would need to explicitly state the knowledge and behaviour of the compere and to rule out preferences concerning the location of the prize and the initial choice by the contestant. If the puzzle stands as expressed initially, there is ambiguity about the correct answer.

But even this is not the end of the argument. In real life, the game show would soon fail if the compere did not act in accordance with the original assumptions, so why are they not valid facts rather than mere assumptions? On the other hand, the puzzle has been presented here as an exercise in logic, so shouldn’t we take the words literally, as would a computer, or a mythical genie granting a wish? Getting all the necessary relevant facts while discarding the irrelevant ones, and then knowing we have done so, is not always easy. One necessary fact may be the context in which the problem is to be considered.

Further, on all matters of probability, reliability is not quite the same as certainty. In the example just discussed, one can imagine that on some freak occasion all contestants might choose the correct door. Those who decline to change win; those who change lose. It would be hard to convince the audience at that event of the correctness of the strategy arrived at above.

**Ambiguity, Meaningfulness, Consistency, Adequacy **

#### Ambiguity

A mildly amusing “logical proof that you are not here” uses a simple ambiguity in a common word:

*You are not in Sri Lanka. Correct.*

*You are not in Antarctica. Correct.*

*You are not in Estonia. Correct.*

*Then you must be elsewhere. Correct.*

*So, if you are elsewhere, you can’t be here.*

No one would take this seriously. But some hidden logical ambiguities can be bewildering. The ancient philosopher Zeno of Elea devised a series of logical puzzles that had hidden assumptions and ambiguities about the concepts of infinity and continuity. These still perplex people. One is the race between Achilles and a tortoise, with the tortoise being given a small start. Zeno argued (with tongue in cheek) that Achilles could never overtake the tortoise. A step-by-step presentation of it is as follows:

1. Tortoise has a starting position ahead of that of Achilles.
2. Achilles can run faster than Tortoise.
3. When Achilles has reached the position from which Tortoise started the race, Tortoise has also been running and is still ahead of Achilles, although by a lesser distance.
4. When Achilles has reached the position of Tortoise as in the previous step, Tortoise has also been running and is still ahead of Achilles, although by a lesser distance.
5. This situation will continue indefinitely.
6. Therefore Achilles can never overtake Tortoise.

Step 5 in this argument is ambiguous. What does *indefinitely* mean? Forever? If a rigorous statement of what will happen is substituted, the problem disappears. The rigorous statement is necessarily long-winded, as follows:

*5.1 With each successive occasion that Achilles reaches Tortoise’s previous position, the time it takes Achilles to cover the distance will be proportionately shorter.*

*5.2 After an infinite number of reachings of Tortoise’s previous position, Achilles will have caught up with Tortoise. *

*5.3 This infinite number of reachings will take a finite period of time.*

*6. After this finite period of time Achilles will have overtaken Tortoise.*

Step 5.3 often causes contention. How can an infinite number of things add up to a finite value? But the infinite sum 1 + 1/2 + 1/4 + 1/8 + … cannot exceed 2, and a similar finite result holds for all descending geometrical progressions. So the sum of an infinite number of proportionately smaller catching-up times will be finite.
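The finiteness of the catching-up times can be illustrated numerically. The speeds and head start below are invented for the sketch: Achilles runs 10 m/s, Tortoise 1 m/s, and Tortoise starts 9 m ahead, so each catching-up stage takes one tenth the time of the stage before:

```python
# Zeno's catching-up stages with illustrative numbers: Achilles at
# 10 m/s, Tortoise at 1 m/s, a 9 m head start. The stage times form a
# descending geometric progression, so their sum is finite.
achilles_speed, tortoise_speed, head_start = 10.0, 1.0, 9.0

gap = head_start
total_time = 0.0
for _ in range(30):                       # 30 stages is ample for convergence
    stage_time = gap / achilles_speed     # time to reach Tortoise's old spot
    total_time += stage_time
    gap = tortoise_speed * stage_time     # Tortoise's advance in that time
print(round(total_time, 6))  # → 1.0
```

The infinitely many stages sum to one second, which is exactly the direct answer: a 9 m gap closed at 10 − 1 = 9 m/s.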

A simpler alternative would be to replace the argument from step 3 onward, as follows:

*3. There is a finite period of time after starting when Achilles will have reduced Tortoise’s lead to half its original distance.*

*4. At twice that time, Achilles will have drawn level with Tortoise.*

*5. After having drawn level with Tortoise, Achilles will continue running faster than Tortoise, and will have overtaken him.*

This argument does not address Zeno’s logical dilemma. But it avoids the argument about whether the “indefinite” addition of ever-decreasing amounts of time results in a finite or an infinite amount of time. And it again illustrates the importance of choosing the appropriate premises.

Another kind of logical ambiguity occurs in the use of *either it is or it is not*. It is necessary to be precise in specifying whatever the something is or is not, because the alternatives must completely exclude each other. For example, does *it is blue* mean that it is totally blue with no other colour, and does *it is not blue* mean that it can have no blue on it whatever? If *totally blue* is meant, it should be specified. Also, *not totally blue* is not the same as *totally not blue*. So if you want to solve a problem in which the known facts cannot be expressed mutually exclusively, the *is-or-is-not* procedure may not be the one to use.
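The difference between *not totally blue* and *totally not blue* can be made concrete with a toy predicate check. The list of colour patches below is invented for the illustration:

```python
# An object described as a list of coloured patches (invented example).
patches = ["blue", "red", "blue"]

totally_blue = all(p == "blue" for p in patches)
not_totally_blue = not totally_blue                   # at least one non-blue patch
totally_not_blue = all(p != "blue" for p in patches)  # no blue patch at all

print(not_totally_blue, totally_not_blue)  # → True False
```

The object is *not totally blue*, yet it is not *totally not blue*: the two negations pick out different sets of objects, so they cannot be used interchangeably in an *is-or-is-not* argument.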

#### Meaningfulness

The meaningfulness, or meaninglessness, of concepts can be illustrated in two different kinds of case. The first is to do with the concept of continuity, and is illustrated by the following “proof that a rising speck of dust can stop a falling boulder” (which is another version of the conundrum, set by the ancient philosopher Zeno of Elea, that it was impossible for an arrow to fly from a bow).

*1. The speck of dust rises and the boulder hurtles down.*

*2. The speck and the boulder collide.*

*3. The upward motion of the speck is changed into a downward motion.*

*4. At the instant of change from up to down the speck must be momentarily stationary.*

*5. At that moment, because the speck is in contact with the boulder, the boulder must also be stationary.*

*6. Therefore the speck must have momentarily stopped the falling boulder.*

(In this conundrum the word *stationary* means stationary with reference to the earth.)

Of course no one believes that the boulder will be stopped, even “momentarily”, by the speck of dust (nor by molecules of air bouncing off the boulder, for which a similar argument applies). Actually the speed of the boulder would be reduced by a minute amount by the speck of dust, just as it is by every molecule of air that it has to push past. So what is wrong with the logic?

It might be argued that, at the moment of changeover, the boulder would be slightly compressed rather than stopped. But would an incompressible perfectly smooth boulder really be stopped by an incompressible speck of dust – or by an incompressible heavy object the size of a football? Physicists would point to the energy required to stop the boulder, and then re-start it at the appropriate velocity. (It is usually and reasonably assumed that the energy needed to stop the speck and send it down in the other direction would be insignificant.)

The use of the words *moment* and *momentarily* in steps 4. and 5. raises questions about continuity and extreme precision of time. Does the speck change motion before, at or after contact, and does it actually have zero motion (relative to the earth) for any period of time whatever? Does a distinguishable moment of time, of “infinitely short” length, separate the change in direction and the contact? Or is an infinitely short period of time the same as a zero length period? (The theory of infinitesimals says it is not, but is this theory, which has its uses in physics, “true”?) This difficulty is the problem of continuity, and it has perplexed philosophers for thousands of years. It concerns whether time and space consist of series of tiny steps or whether they are absolutely continuous. Quantum physics, with its concepts of granularity and uncertainty, avoids this problem. According to this theory, it is not *physically meaningful* for the combination of position and motion to be expressed more precisely than a certain finite limit. But is quantum physics reliable and, if it is, does it rescue the position?

Three possible interpretations of the situation are:

- that time is continuous, i.e., the speck changes its motion by a finite amount in zero time;
- that time is not continuous but “granular”, so there is a minuscule period of time in which the speck changes its motion, during which it is in contact with the boulder;
- that it is not meaningful to specify a precise location or time or change of motion of the speck or the boulder. Therefore there is no need for the speck and the boulder to have been “in contact” when the speck changed its direction of motion.

In the first interpretation the speck does not stop the boulder, but, since it has a minuscule mass and undergoes a finite change of motion in zero time, it must have had an infinite acceleration, requiring an infinite force, which is impossible. This conclusion depends on the assumption that it is meaningful to divide a number by zero, and that the result will be infinity. (If the result of dividing a (positive) number by zero is infinity, what is the result of dividing a number by minus zero? If it is not minus infinity, why is it not? And does minus zero equal zero?)
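As an aside, these are not idle questions: designers of formal systems have had to answer them by convention. IEEE 754 floating-point arithmetic, used by virtually all modern computers, decrees that minus zero exists, compares equal to plus zero yet is distinguishable by sign, and that one divided by minus zero is minus infinity. A minimal Python sketch (Python itself raises an error for a literal division by zero, so the signed-zero behaviour is shown through its inverse):

```python
import math

# IEEE 754 gives definite, conventional answers to the questions above.

# Minus zero exists, and it compares equal to plus zero...
assert -0.0 == 0.0

# ...but the two zeros remain distinguishable by their sign bit:
assert math.copysign(1.0, 0.0) == 1.0
assert math.copysign(1.0, -0.0) == -1.0

# One divided by minus infinity yields minus zero, the inverse of the
# IEEE 754 rule that one divided by minus zero is minus infinity.
# (Python raises ZeroDivisionError for a literal 1.0 / -0.0, though
# IEEE 754 hardware itself returns -inf.)
x = 1.0 / -math.inf
assert x == 0.0 and math.copysign(1.0, x) == -1.0
```

These answers are conventions adopted for internal consistency rather than metaphysical truths, which is rather the point: “dividing by zero” only means something once a system defines it.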

In the second interpretation the speck changes its direction of motion in a finite time while in contact with the boulder. Therefore it momentarily stops the boulder, which requires a considerable, but not infinite, force. Then another similar force is needed to restore the boulder to its previous motion. The size of these forces would be truly massive, because they have to stop and restart the boulder in an infinitesimal length of time.
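To give a feel for the size, take purely illustrative figures (none are from the text): a boulder of mass $m = 1000$ kg falling at $v = 10$ m/s, with the granular time-step taken, for the sake of argument, to be of the order of the Planck time, $\Delta t \approx 5.4\times10^{-44}$ s. Reversing the boulder's momentum (stopping it and restoring its speed) changes its momentum by $2mv$, so the average force over that interval would be

$$F \approx \frac{\Delta p}{\Delta t} = \frac{2mv}{\Delta t} = \frac{2 \times 1000 \times 10}{5.4\times10^{-44}} \approx 3.7\times10^{47}\ \text{N}.$$

This is vastly larger than any force in everyday experience, which is the sense in which these forces are “truly massive”.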

The third interpretation avoids the necessity for the boulder to be stopped and also for an infinite or massive force. But while it provides a rationalisation for solving the conundrum, it feels unsatisfying. There are aspects of quantum theory that appear to be contrary to all human experience, and are therefore commonly regarded as illogical. This raises the question of what makes a particular set of apparently absurd assumptions, such as those in quantum theory, appropriate to a particular situation. Heisenberg’s uncertainty principle, on which the resolution of the Speck and Boulder conundrum rests, is one such aspect of quantum theory. This principle is often assumed to be merely a limitation of precision in practical measurement, caused by the disturbance of the process of measuring. If this were so the conundrum would still be unresolved. However, the uncertainty principle is a logical consequence of the demonstrated “equivalence” of particles and waves, which is particularly relevant to extremely small particles, extremely small distances and extremely small periods of time. The “meeting” of boulder and speck is an interaction of waves that occurs during a very short but not infinitesimal period of time. For the location of a particle to be fairly precisely defined, its speed must be correspondingly imprecisely defined, and *vice versa*. This refers not only to what can be known about the location and the speed but also to the meaningfulness of the speck (or any other object) having both a perfectly precise position and speed.

Before the arrival of quantum theory there was no logical way out of this conundrum. But there seems to be no paradox in the concept of the non-continuousness of time and space, so there is no logical reason why it should be rejected, even if this aspect of quantum theory were overthrown. Also, this line of discussion resulted from invoking an incompressible, perfectly smooth boulder. However, incompressibility and perfect smoothness are idealised concepts that are contrary to the actual nature of materials. If this is accepted, there is again no obstacle to a scientific resolution of the conundrum.

This illustrates that some common concepts can lead to logical dilemmas. We can only conclude that our knowledge of what actually happens, when, for example, a boulder collides with a speck of dust, sometimes makes common concepts logically impossible. So is this problem an example of the unreliability of logic, or of the fact that it is possible to ask apparently valid questions that have no demonstrably correct answer, or that the premises of a logical proposition may misleadingly appear to conform to “reality”?

Another example of a trick concept, this time a hidden paradox, is *omnipotence*. This concept is, of course, applied to the supernatural entity of some religions, but it is vulnerable because of its implicit self-contradiction. This is illustrated in the following argument:

*1. An omnipotent being could devise a puzzle so difficult that even it could not solve it.*

*2. But if it couldn’t solve the puzzle it wouldn’t be omnipotent.*

*3. And if it couldn’t devise such a puzzle it wouldn’t be omnipotent.*

Some people might reject this argument because it contains self-reference. (But, as discussed above, self-reference violates no law of logic.) However, the concept of omnipotence must inherently contain logical contradictions. Whether an omnipotent being would be affected by logical contradictions is a matter for theologians. But any kind of absoluteness should be handled with caution: it might be self-contradictory.

#### Inconsistency in meaning

The “proof according to design” of the existence of God, attributed to Thomas Aquinas, is an example of the meaning of a word changing through the process of reasoning. The reasoning is:

*1. In the world there is observable order.*

*2. Order is the result of design.*

*3. For there to be design there must have been an intelligent designer.*

*4. That Designer is God.*

In step 1. there is no definition of *order*, except that it is observed, or perhaps more properly, perceived. What looks orderly to some people is disorderly to others.

In step 2., *order* has now become the outcome of *design*, although some things that have been designed may look disorderly to some people, and some things that may look orderly may have happened unintentionally by chance.

Step 3. implies that an *intelligent designer* is the only type of producer of designs, without giving the designer any other attributes. But step 4. assumes the designer must be the same entity as God.

So the argument is an illustration of how continually inconsistent meaning destroys the validity of the logic.

#### Adequacy

Arguments are often based on too few premises for a valid conclusion to be drawn. An example is the “proof of the existence of God as the First Mover”, also attributed to Aquinas.

*1. Everything that is moving has been given its motion by some other moving thing.*

*2. Therefore, originally there must have been a mover to start this process.*

*3. This first mover is God.*

(The original version contains additional arguments to support steps 1. and 2.)

The intent of the argument is to show that the very existence of motion in the world is adequate proof of the existence of God. The First Mover may well be, but need not be, supernatural; and in any case this does not resolve the problem of the cause of an absolute beginning. (The “proof according to design” also has this type of flaw.)

#### Non-sequiturs

A piece of reasoning can contain a step that seems reasonable without its being logically derivable from the premises. For example:

*1. The world of nature is enormously beautiful, wonderful and complex.*

*2. Therefore it could not have come into being by chance.*

*3. Therefore it must have been designed.*

*4. Therefore there must be a God.*

Step 2. could just as logically – or illogically – have been:

*2. Therefore it could not have been designed*,

and the logic would then have proceeded no further.

But in neither case is there any logical justification to form a conclusion about the existence or otherwise of any kind of god. The single explicit premise is more a statement about the aesthetic feelings of the particular person than a description of the world. Another person may be less enthusiastic about the world of nature. The other statements contain implicit premises that would need to be made explicit if their plausibility were to be examined.

#### *Reductio ad Absurdum* – reducing an idea to an (apparent) absurdity

A quasi-logical process called *reductio ad absurdum* is sometimes used when there are no evident relevant premises to use. It takes the form of claiming that something must be untrue because it implies some absurd consequence. However, what may seem absurd at face value can sometimes seem possibly true on further consideration. Different people have different ideas about what is absurd or feasible, depending on their experiences or indoctrination. In Chapter 3, *Monism and Dualism: A case for the existence of the supernatural*, I discussed a series of “absurd” propositions that would (allegedly) be necessary if there were no supernatural creator. An example is *Chaos produces order*.

*Reductio ad absurdum* is often an awkward argument to counter. But if the allegedly absurd statements are expressed in the form of premises and process, with careful, unambiguous definition of terms, their absurdity or otherwise may become more apparent. Alternatively, the terms of the argument may be shown to be meaningless, or an apparently absurd statement may be shown to be a false representation of the thing being opposed.

#### Conclusion

There are two conclusions from this investigation. The first is that while no procedure of logic may be intrinsically and completely reliable, all accepted procedures have been developed through experience and their use justified by induction, that is, by the weight of continued empirical evidence.

The second is that there are many ways in which a train of argument that looks logical can produce a conclusion that is contrary to evidence. Without checking the evidence, there seems to be no way of being completely sure of any piece of logical reasoning. If an argument is set out as a series of steps, and all steps are checked to ensure that the requirements discussed earlier are met, then there is a very good chance that it will be reliable. Many extremely complex logical procedures have proved to be consistently reliable. I think an impressive case is the logical system that guided the vehicles of the NASA Cassini mission to the planet Saturn and Saturn’s moon Titan, negotiating the rings that surround the planet. Also, despite the many human oversights in the programming, I think the operation of modern computers is a triumph of logical reasoning. But having said all that, feeling quite sure that we have correctly interpreted the evidence is no guarantee that we actually have.

This chapter has been through very many revisions, some of them arising when I found that I had unwittingly fallen into errors that I was warning about. I may not have discovered all of them. But those of us who have trouble avoiding such errors are in good company. The strong differences between the conclusions of different schools of philosophy are evidence that even philosophers are unreliable logicians.

To me, the question is still open as to whether a logical argument can ever be absolutely reliable. But then, the concept of absoluteness is itself logically paradoxical. And given the uncertainty of what can be said about the supernatural, any logical conclusion about it must always be tentative.