Making Machines Conscious

A shorter version of this article was delivered as a lecture to the Atheist Society in Melbourne, Australia, on 11 December 2018

Some people expect that we will soon be able to make computers that will be conscious.
Our bodies, and in particular our brains, are intelligent physical structures that are conscious, so it should also be possible for other intelligent physical structures, such as computers, to be made conscious.
The idea of manufacturing machines that are conscious raises a few issues.

Why would we do it?
What kinds of things would we want machines to be conscious of?
How would machine consciousness be produced?
How would we know whether a particular machine actually was conscious?

So, why would we want to give machines consciousness?
Wanting to do it is an example of the propensity of human beings to search for new things and to try to break boundaries. Explorers and scientists have always wanted to discover the unknown. Some people are planning to put human settlements on the Moon and Mars. Information scientists and computer technologists have the aspiration to make machines equal or superior to humans in every aspect of intelligence. Making machines conscious would be part of that challenge.

What advantages would machine consciousness be expected to provide, and who would benefit? One expected advantage would be the thrill and the kudos for the first people to do it. A more worthy advantage might be that by making them conscious, some machines could be more companionable and more humane. This might even make some humans more companionable and humane. Improving humaneness could be relevant to the institutions that care for children and aged and handicapped people, where there now seems to be a lot of abuse and neglect. Also, consciousness might help machines understand what they are doing, which might make them more efficient, more effective, and less likely to accidentally harm us. This could apply to driverless vehicles.
If we thought our helpful robot was actually conscious and understood us, it would be easier to regard it as a friend. On the other hand, it is possible that armament manufacturers might contemplate giving consciousness to intelligent weapons as a way of making them more effective. Or a conscious robot might want to be more independent and become aggressive.

But how feasible is the idea that consciousness would enhance machines? Is consciousness useful for humans? Would we be better off if we were zombies?
Most people probably think that human consciousness is very useful. Our human experiences contribute to our wellbeing. For example, being conscious of pain is an important function for the preservation of the body, for humans and many other species. When people have no sense of pain, they have no indication that they need to act to prevent or treat damage to their body parts. This happens to people who have leprosy, and often to people with advanced diabetes. Also, there are conditions, such as the early stages of cancer, that cause no pain but become increasingly dangerous if left untreated.

Experiencing severe pain affects the cognitive and emotional areas of the brain in a way that would not have happened if no pain had been felt. Emotional memories are created of the pain and of the incident that caused it, which could send warning signals whenever a future situation occurs that resembles one that previously caused pain.
Feeling pain is not always a good thing: very many people suffer throughout their lifetime when things go wrong with their bodies. There are similar traumas from being cognitively conscious of terrible events or disturbing ideas. We call this post-traumatic stress disorder, or PTSD. So we often try to stop being conscious of pain. People usually feel no pain when their body is being assaulted during surgery if they have been rendered unconscious by anaesthesia.
For most people, pain is not a major part of their consciousness. Many conscious feelings are not painful but very pleasant or exciting. There are many things that we enjoy in life.

Our consciousness of what we see and hear and taste, etc., gives us a feeling of what the outside world is like. We conjure up memories of these things, and put “pictures” of them into our consciousness. Most of us have many happy memories stored away that we like to bring back to consciousness from time to time.
These feelings, and the conscious reminders of these feelings, give a sense of reality to everything that we know about. They are much more than the representations of the outside world that cameras and camcorders put into the memories of the present intelligent non-conscious machines. So a similar sense of reality might occur for conscious machines, and make them more “understanding” of what they were doing, and safer and easier to work with. But each machine and kind of consciousness would need its own special treatment.
For example, if machines were to feel pain, it would be necessary to decide what adverse conditions would have to occur, to what part of the machine’s structure, to cause the pain, and what emotional or cognitive processes should be stimulated as a consequence. Similar requirements would apply when other kinds of consciousness were being provided. A machine with PTSD would probably be less useful than its non-conscious counterpart.
Any consequences of having conscious machines would, presumably, depend on the kind of consciousness they were given.

What kinds of consciousness might we want to give machines?
Human consciousness has very many facets, some of which have already been mentioned. We would need to choose which ones should be given to machines to suit their specific purposes, and which ones to avoid. Some kinds of consciousness would make us morally obliged to treat the machine “humanely” – or compassionately.

One kind of human consciousness relates to the outside world and what is happening in it. Another kind of consciousness relates to our inner state.
Consciousness relating to the outside world is based on inputs from our sensory systems – sight, hearing, taste, smell, touch and pain, etc. So this type of consciousness is our awareness, at the particular time, of what we are observing and doing in the physical world, which includes our own physical body. It also includes the information that we get from reading and listening. A lot of processing in the brain is required for us to be able to make sense of the sensory inputs, such as the conversion of the inputs from the optic nerve into coherent pictures, but we are conscious of only the result of this processing and not of the processing itself.

Consciousness of our inner state is based on memories derived from our sensory inputs, on our cognitive processing of those memories, and on memories created by our cognitive processes themselves. We consciously think and create ideas about both the outside world and our inner selves.

We also have another kind of inner consciousness: the wide range and the many degrees of our emotions. They range from love through liking to hate; from disgust through dislike and ambivalence to appreciation, respect and reverence; from fear through anxiety and restiveness to calmness and confidence; and from despair through depression and equanimity to satisfaction and exuberance. And there are more shades of emotional consciousness than those I have just listed.
There is the consciousness of wanting something, and of ambition and the urge to take some specific action. And, of course, we are continually consciously taking actions. Machines that were intent on self-preservation could become dangerous, particularly if they could take independent action, or control other machines, such as self-driving vehicles or intelligent weapons. Ambition and other inclinations are abstruse feelings. Providing machine algorithms for them and linking them to the issues of the outside world would be very tricky and could produce unintended and dangerous outcomes.
The range and intensity of human emotions is different in different people. Psychopaths have little or no feelings of empathy. Sadists are disposed to get pleasure from cruelty. Such antisocial dispositions might be given to machines if it were possible to do so.

Things that enter our consciousness are initially stored in our short-term memory. But much of what our eyes see and our ears hear is not passed on to our consciousness. Much of what we experience is of very little significance to us, and it disappears from memory. What is significant is put into long-term memory, and may be recalled later – sometimes with difficulty. It would be necessary to decide which of the information detected by a conscious machine should be kept, and which should be discarded. The amount of such detail that was detected and stored, and the use it was put to, would determine how much additional memory and computing power the machine would need.
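To make that trade-off concrete, here is a minimal sketch in Python, assuming an invented significance score and an arbitrary threshold; a real design would need its own criteria for what counts as significant.

```python
# A minimal sketch of the filtering described above: incoming observations
# are scored for significance, and only those above a threshold are
# promoted from short-term to long-term memory. The scoring rule and the
# threshold are hypothetical placeholders.
from collections import deque

SIGNIFICANCE_THRESHOLD = 0.7  # arbitrary cut-off for this illustration

def significance(observation):
    # Placeholder: in practice this might weigh novelty, danger,
    # relevance to current goals, etc.
    return observation.get("salience", 0.0)

short_term = deque(maxlen=100)   # recent items, overwritten as new ones arrive
long_term = []                   # items judged worth keeping

def perceive(observation):
    short_term.append(observation)
    if significance(observation) >= SIGNIFICANCE_THRESHOLD:
        long_term.append(observation)  # kept; everything else fades away

perceive({"event": "pedestrian stepped onto road", "salience": 0.95})
perceive({"event": "leaf blew across windscreen", "salience": 0.05})
print(len(long_term))  # 1: only the significant event was retained
```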

Machines are given access to specific kinds of information that are necessary for the processes of their specific purposes. They can be connected to devices that measure aspects of their environment, such as weight and temperature and light and sound, and the concentration of particles in the atmosphere, etc. This does not mean that they know what weight is, or experience its effects, or feel hot or cold. And it does not mean that they know what different objects actually are, even when they can detect, identify and name them.

Feeling hot is quite different from measuring one’s own temperature. Feeling the weight of something that we are lifting or holding is quite different from measurement of weight. Our consciousness of weight and temperature tells us, among other things, that something is too heavy for us to carry or that it is light enough to carry, or whether something we are touching, or our environment, is too hot or too cold for our safety. Making machines conscious of such things could give them more understanding.
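The contrast can be shown with a minimal sketch: all the machine does with weight and temperature is compare numbers against limits (the values below are invented for illustration); nothing in it feels strain or heat.

```python
# A minimal sketch of the distinction drawn above: the machine's
# "awareness" of weight and temperature is nothing more than comparing
# sensor readings against numeric limits. All values are illustrative.
MAX_SAFE_LOAD_KG = 25.0
MAX_SAFE_TEMP_C = 60.0

def assess_load(weight_kg):
    # Returns a judgment, but the machine feels no strain either way.
    return "too heavy to carry" if weight_kg > MAX_SAFE_LOAD_KG else "safe to carry"

def assess_surface(temp_c):
    # Likewise: a comparison, not a sensation of heat.
    return "too hot to touch" if temp_c > MAX_SAFE_TEMP_C else "safe to touch"

print(assess_load(30.0))      # "too heavy to carry"
print(assess_surface(45.0))   # "safe to touch"
```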
Machines can learn many things, including how systems work and precise manipulative action. They don’t need to be conscious to “keep their eye on the job” that they are doing, or to play chess or other games. So, as with human examples, for these kinds of tasks there may be no point in making machines conscious.
Machines also detect sounds and colours. They recognise patterns of all kinds, such as pictures and other shapes, including patterns of letters and words and numbers, etc. They translate spoken words into text, and text into spoken words. They translate words, numbers and patterns of any kind into commands to do something, or recognise something or someone, or decide the optimum action to take in a process or competitive game.
There is no reason to think that they have conscious feelings similar to those we get from those same sounds, pictures, patterns and words. A machine using a picture to identify someone would be recognising a pattern, not a person.
Computerised processes are just the operations of established laws of physics, using the minimum amount of information needed for the particular tasks.
Some of the things that machines do are called mental tasks when we do them. Few people would accept that the machines understand the significance of what they are doing in any of these things. In each case, we would need to consider what advantages and disadvantages would result from making the machine conscious.
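As an illustration of recognising “a pattern, not a person”, here is a toy sketch in which identification reduces to finding the nearest stored feature vector. The vectors and names are invented, and real systems use learned embeddings, but the principle is the same: the output is a label, not an acquaintance.

```python
# A toy face "recognition" routine: compare a feature vector against
# stored ones and return the nearest label. Nothing here knows what a
# person is; it only measures distances between lists of numbers.
import math

known_faces = {             # hypothetical stored feature vectors
    "Alice": [0.1, 0.8, 0.3],
    "Bob":   [0.9, 0.2, 0.5],
}

def identify(features):
    # Pick whichever stored pattern is closest in Euclidean distance.
    return min(known_faces, key=lambda name: math.dist(features, known_faces[name]))

print(identify([0.15, 0.75, 0.35]))  # "Alice" - a matched pattern, nothing more
```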

Some kinds of consciousness would need to be decided arbitrarily. For example, human sensations of colour might not be the same or even similar for all people. We may agree on how we name the particular colours of something, but that could be explained by the fact that our eyes all have similar mechanisms for detecting and representing the various wavelengths of visible light. Similarly, we are not able to experience what other people experience regarding sound, tastes and smells, or their emotions. If we were to give machines consciousness of these, we would have to provide more than just detection and measurement: we would have to provide something that could deliver the appropriate sensations.

Machines have thermostats that take specific action when a particular temperature is reached, and sensors for degrees of strain when they are bearing weight, for humidity, and so on. In various cases, these responses might be candidates for being made conscious.

Humans don’t need to be conscious of every detail that they detect. The same would apply to machines with their equivalents of sensory organs, such as camcorders. It would be important to determine, for example, how much of what was happening along a road would be necessary to make self-driving cars safer. It might require good recognition of human gestures, both hand and facial. The value of consciousness would depend on the purposes the machine was put to. It would be necessary to establish that being conscious of a particular kind of thing actually did serve the particular purpose.
Should we make machines feel pain or anxiety or fear, or pleasure or confidence or happiness, or anger? Some people might think that these emotions would make machines more companionable, or more suitable for particular tasks, or as soldiers. They would have in their memories the details of situations they had had with particular people, and conclude that humans might have similar emotions and similar memories. This might make the machines genuinely empathetic and companionable to humans. Or they might outsmart us.

As mentioned earlier, humans are conscious of a great range of emotions. Our emotions are regarded as the most significant influence on our decision-making, eclipsing our reasoning. They may give us the incentive to achieve what we might otherwise not have started. But emotions can also make us do inappropriate, or silly, or dangerous things. A good balance between emotion and rationality is important for dealing with our very complex environment. A similar balance would be desirable for machines that felt emotions.

Giving machines a range of emotions might also mean that they could develop psychiatric problems. This might be useful for research into treating these conditions in humans. But once machines could have such experiences, the same ethical issues that apply to humans and animals would have to apply to machines.
There would be no need to make machines conscious to make them obsessive: some are already non-consciously obsessive. Perhaps they might be appropriately tempered by consciousness.

Humans and some other organisms have memories of their lives. Throughout their lives they are aware of the changes in their bodies and minds, and also their continual interactions with their human and non-human environment. They are conscious of a lot of these memories of their experiences. None of these aspects of consciousness seem relevant to machines.

A brain learns to do tasks unconsciously. Humans and some other organisms learn through constant practice to perform complex tasks without thinking about how they are doing them. Often humans are more skilful when doing things unconsciously than consciously. Examples are manipulative tasks such as playing sport, using a keyboard or writing, and walking, and mental tasks like calculating. There is no time to think about how to hit a tennis ball that is speeding towards you, but your unconscious reaction that has developed through practice will perform the task.
Could there be some sort of consciousness that is not part of human consciousness but might be useful for a machine? Many organisms can detect and use phenomena that humans cannot. We might think of echolocation, which many species of bats use. Some blind people have learnt to use echoes to obtain mental pictures of their surroundings: they make clicking sounds, detect the echoes, and so recognise the nature of their surroundings. Some organisms can navigate by detecting Earth’s magnetic field, and others by detecting polarised light.
Such kinds of consciousness may be appropriate for some machines. Aircraft and missiles use radar, which is an echolocation system. Gyroscopes are used to keep machines in a particular orientation.
Non-conscious driverless cars will use these technologies for navigation. But they seem to relate to our unconscious judgments that we use when driving. Perhaps there would be no advantage in giving machines consciousness of such processes.
Similarly, it would often be more efficient, more reliable and safer for machines to just rely on non-conscious algorithms. It would not be necessary for a machine to be conscious for it to give or be given warnings, such as the equivalent of pain. In some cases, warnings and other information would preferably be sent directly to humans. Also, it would not be necessary for a machine to be conscious in order to detect the changing moods of individual people. The present machines sometimes misjudge the situation, but we often do that too.
All these may be conundrums. But choosing whether to give a machine consciousness, and what kinds of consciousness to give it, will not be the biggest problem.

How could consciousness be produced in a machine?
It might be argued that to be conscious it is necessary to be alive, so machines could not become conscious. The only argument for this is that all the conscious entities that we are aware of are living organisms. But we are not sure whether every living organism is conscious. All that we know about any physical characteristics of consciousness is that the content of consciousness seems to be dependent on information held in the brain. And machines contain information. But consciousness is not the same as information.

I think that for any person, or any inanimate thing, to have any conscious experiences, there must be certain conditions. There must be:

• something to be conscious of, which would become the content of the consciousness;
• a system for detecting, storing, recovering and processing information and making decisions;
• and the capability of having sensory, emotional and all the other feelings derived from specific pieces of this information.

There is, of course, a universe full of things to be conscious of, and organisms and some machines have the capability of detecting aspects of the universe and processing the information they detect, and making decisions.
There is plenty of evidence that the processing of information in the brain is the only source of the content of the consciousness of human beings. And there are a few ideas about achieving it from the information in machines.

Some people think that when a brain has developed a certain degree of complexity it automatically becomes able to be conscious. This gives no clue to what kind of role complexity may have.
There is no apparent reason why sheer complexity in any kind of system should, of itself, automatically produce consciousness. There are no plausible suggestions of what kind of complexity, or how much would be enough, or of whether different kinds of complexity would be needed to create different kinds of consciousness, such as for pain, for seeing colour and for being happy.
There is a branch of mathematics called complexity theory. It deals with two aspects of complexity. One is the analysis of complex and chaotic systems, including the solving of very difficult mathematical equations. The other examines processes by which apparently independent elements can come together to produce coherent complex systems. But complexity theory does not show how consciousness might occur.
The information contained in the brain is in the form of specific complex networks of neurons and in the contents of the nucleus of each participating neuron. These information systems are very different from the consciousness that is derived from them and that we experience. Anaesthetics don’t take away or block complexity. Psychedelic drugs are said to “open the mind”, but they don’t provide more complexity; they act like neurotransmitters.

One interesting suggestion is to “download a human mind” onto a machine that is already suitably equipped.
This might seem like a straightforward process. Every operation of a brain involves electric currents, which can be detected using wires attached to specific parts of the head. Also, structures within the brain can be detected using magnetic resonance imaging (MRI). Processes within the brain can be watched using functional MRI (fMRI). “Brain scans” using these technologies have been performed for a long time, for diagnostic and scientific purposes.
People can now control devices, including wheelchairs and prosthetic limbs, using the patterns of electric signals generated by the process of thinking specific thoughts.
All this seems to suggest that, even though we might not know how a brain produces consciousness, we could produce artificial consciousness by copying all the information in a brain, and keeping that information in exactly the same structural format as it was in the brain.
But detecting the electric currents in the brain does not provide a picture of the structure of the neurons and their connections. It’s like hearing the sounds of the brain’s processing. CT scans and MRI might provide detailed 3D pictures, but there is a lot of difference between a picture and complete knowledge of the thing pictured. A comprehensive, detailed examination of the brain would be needed.

The human brain has tens of billions of neurons and many other kinds of cells. Each neuron has multiple connections. The brain is three dimensional, so access to individual connections between neurons would have to go through other brain matter. The neurons are not idle, not even when the person is asleep or anaesthetised.
Downloading a live brain would not be feasible. So a dead brain would be necessary. And the brain would have to have retained all the connections and their content that it had when it was alive. But, since brain death is the criterion for death, the brain might already have had some damage. I will assume that the brain could be revived and that there was no connection between the affected parts of the brain and the parts involved in enabling consciousness.

The dead brain would need to be kept at a temperature that prevented any deterioration of the tissues. This would mean cooling the entire body from immediately after the death of the person. The downloaded information would then be needed to create a replica that could operate using a suitable power supply, and then be connected to the machine that was to be made conscious.

In 2015 a scientist at Harvard University completed a six-year project, completely analysing the structure of a tiny fragment of mouse brain. The volume of brain tissue was 1500 cubic microns, equivalent to a cube whose sides were slightly longer than a hundredth of a millimetre. While developments in technology will probably increase the speed of such projects, which are still ongoing, completely downloading an entire non-conscious dead human brain would take many decades. Since 2015, it has been discovered that the operations of neurons are many times more complex than we had thought. And they are continuously moving.

Constructing the downloaded replica suitable for attaching to a machine would take even longer. Once completed, it would need to be appropriately attached to the machine, given a power supply and switched on. But switching on might not make it start functioning.

But what if, despite these problems, all this actually were to be achieved?
A machine fitted with such a replicated brain would have the knowledge, the personality and the consciousness of the person whose brain it was copied from. And it would expect to have all the sensory inputs that that person had. So it would need to have visual inputs equivalent to those delivered to a brain by the optic nerve, otherwise it would be visually impaired or blind. It would also need to have the equivalent of the motor nerves that cause eyes to move and to focus. The same would apply to all the sensory and motor nerves so as to match those of the person whose brain had been copied.
With a person, loss of a limb often causes “phantom pains”, and a similar effect would apply if such a conscious machine was not given the equivalent feelings of active arms and legs.
Lots of sensory and motor devices would need to be attached to the machine, otherwise it would suffer a continual agony and anxiety. And the machine would want to do the kinds of things the person would have wanted to do. The immorality of making machines conscious in this way without such attachments would be an important social issue. Providing all the necessary attachments would be costly.
One alternative might be to give a machine the consciousness of a dog, or a mouse or a cockroach. That might sometimes be sufficient. The cockroach would be easier.

Some scientists and philosophers have tried to find or envisage a sort of neural process that could produce consciousness. Some have suggested specific parts of the brain, or some “high level” networking, without saying how it would actually produce any of the different kinds of consciousness.
It might seem sufficient to download particular areas of the brain as it is operating. Collections of such processes might then be used to suit the requirements of particular machines. This would have similar problems to downloading an entire brain, but there are additional issues.
For example, for an identifiable set of information in the brain that is associated with something such as pain or a colour or an emotion, etc., there would probably be some particular neural arrangement that would contain an algorithm, i.e., a recipe, to produce it. However, this would refer to information, not to consciousness. What is needed is an algorithm for consciousness.
We might compare this with a computer that has a screen, a loudspeaker and a printer. Each gets its particular kind of information format from a specific part of the computer’s memory and produces its particular kind of output. But now the output would need to be not sounds and pictures, etc., but consciousness of them.
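The analogy can be sketched as a dispatch table, with invented device functions standing in for the screen, loudspeaker and printer; the point is that there is no entry, and no known device, that turns information into consciousness of it.

```python
# A sketch of the computer analogy above: differently formatted
# information is routed from memory to devices that each produce their
# own kind of output. The device names and formats are invented; the
# point is the missing final entry.
def render_to_screen(image_data):
    print(f"[screen] displaying {len(image_data)} pixels")

def play_on_speaker(audio_data):
    print(f"[speaker] playing {len(audio_data)} samples")

def send_to_printer(text_data):
    print(f"[printer] printing {len(text_data)} characters")

outputs = {
    "image": render_to_screen,
    "audio": play_on_speaker,
    "text": send_to_printer,
    # "experience": make_conscious_of,   # no such device exists
}

def emit(kind, data):
    outputs[kind](data)

emit("image", [0] * 1024)
emit("text", "Hello")
```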
In tests where a person has to identify colours or shapes or numbers, etc., that appear on a screen, and press a button immediately on recognition, the brain action of identification occurs a measurable time before the person becomes conscious of identifying it.
It has been found that specific parts of the brain become active when the person is thinking about what they are doing but are not active when the same person performs the same act unconsciously, that is, without thinking about what is being done. These parts of the brain would seem to be at least a part of what causes the consciousness. If these parts of the brain, or the brainwaves generated by them, could be copied, it might be possible to apply the relevant ones to a machine.
So the connections within such parts of the brain might constitute algorithms for consciousness. But there are no detectable structures corresponding to the screens or printers in the brain. There is no known physical process by which an algorithm could produce consciousness, nor any suggestion of how algorithms might differentially produce the different kinds of consciousness, that is, of pain, colour, happiness, sadness, doubt, etc.
It would be necessary to identify each set of connections in the brain for each kind of content information and for its relevant consciousness. This would be a simpler process than downloading the entire brain, but the complexity of the neural networks would still make it extremely difficult.
Brains don’t have the tidy separation of functions that are in computers. Brain functions are intermeshed with networks that also are associated with other functions. You can’t hand-pick complete individual functions in the brain.

Another suggested process for the production of consciousness in the brain is that it arises as a result of sections of the brain being successively fed back to themselves. This was advocated by Douglas Hofstadter, an American professor of cognitive science, indirectly in his book Gödel, Escher, Bach, and directly in his later book I Am a Strange Loop. In the latter book, he illustrated his ideas with pictures from a television screen being fed by a camera that was videoing the screen but slightly distorting the picture. Both books won literary prizes.

While the video pictures were often interesting, there is no reason to think such a process would produce consciousness in either a machine or a brain. There were no suggestions of what either would be conscious of.

So we are still skirting around a problem without a solution.

What if there was no need to discover how to produce consciousness? Suggestions have been made that everything in the universe is conscious, or that the universe itself is conscious. These are not new ideas: the ancient concept of animism regards all matter as having the same existential status as living organisms, and many Japanese people think that things they have created carry extensions of their souls.
A modern version of the idea comes from the science of quantum mechanics. The rationale is that quantum events occur as a consequence of being observed, so there must be some universal observer, which, presumably but not necessarily, is conscious. Most quantum scientists reject this.
If the universe or its parts actually were conscious, what would the various physical objects, big and small, be conscious of?
For example, would a lump of rock be conscious of its temperature or of the light or other radiation that might be falling upon it? The answer, I think, is no. Humans and other species can become conscious of, for example, warmth, when sensory nerves in their skin convey information to the brain in response to a change in the temperature of the skin.
Unless the conscious rock had sensory organs and a processing system to store the details of the inputs from these systems, and a way of associating these details with its consciousness, it could not feel warmth or anything else. Being conscious of nothing would seem the same as not being conscious.

If there were any truth in the idea of universal consciousness, a machine could already be conscious – in the same way, perhaps, as a rock. So this idea is of no help for creating “useful” consciousness.

Charles Birch, an Australian biologist, ecologist, theologian and professor of zoology, believed that quarks and all other fundamental objects had feelings of experience. He believed that evolved organisms could have conscious experiences in accordance with their physical abilities to interact with the outside world, but that geological and constructed objects had no consciousness. That would mean that it would be impossible for a machine to be conscious.

Birch attributed consciousness to a supernatural entity, but one that did not otherwise interact with the material world. There is no known way that any of these ideas could be tested.

He also thought that science might eventually discover how his supernatural consciousness worked, which seems to contradict his religious ideas: if a valid, demonstrable scientific explanation were ever produced, it would mean that consciousness was not supernatural.

Some presently unknown physical entity might provide answers to the mystery sometime in the future. It is not so long ago that we had no knowledge of electricity, or electromagnetic waves, or gravitational and nuclear forces. These were discovered by a mixture of chance, curiosity and theoretical imagination. We can only wait and hope to discover whatever it is that causes consciousness.

There is no evidence or theory of how a brain might have the capability of being conscious, or of how information that is stored in the brain might be converted into its specific content of consciousness. All this makes me think that the only way to provide machine consciousness is to find out how organic consciousness is produced.

Some people dismiss all this arguing about consciousness. They say it is a non-issue; we all have it, so we should just accept it. This attitude is of no help to anyone who might want to produce a conscious machine.
Some people say there is no such thing as consciousness. Sometimes they say that consciousness is just an illusion, but that is a self-contradiction: you can’t have an illusion if you are not conscious.
Until a satisfactory physical explanation appears, I am unable to rule out something that is non-physical or supernatural being one of the components. I wonder what a supernatural entity might think about machines having consciousness. If it’s not a physical process, would there be any prospect of discovering how it occurs?

Until there is some physical theory that explains how consciousness arises from the patterns of connections in brains, we cannot begin to work out how to produce consciousness in machines.

How would we know whether a machine was conscious?
If all the scientific and technological problems relating to producing consciousness were to be solved, how would we know whether a particular machine was actually conscious?
The only kind of consciousness that we can discuss with confidence is the consciousness that humans experience. Each person feels their own consciousness and assumes that other people have similar feelings of consciousness. These assumptions are based on the observation that other people are similar to us, and behave similarly to us, and can talk about the things that they and we are conscious of. This seems eminently reasonable, but it is not direct evidence that other people are conscious.
We also deduce similar things about the likely consciousness of other species of organisms, but we are unable to have conversations with them about it.
Most people think that some other animals are conscious, but they are less sure about animals that are smaller and very different from humans. Most people assume that plants, fungi and microorganisms are not conscious. We might think that this is because their sensory systems are different from ours and those of other mammals. There is no valid evidence of which are conscious and which are not.

Conscious machines would cost more than their non-conscious counterparts, because of the additional complex attachments and programming they would need, and because of their attributed advantages. So it would be important for purchasers to be able to tell that what they were buying actually was conscious; otherwise many customers might never realise that they were not getting what they paid for.
How would they tell?
A machine might be conscious of only a few aspects of the outside world, and/or of a few emotions or dispositions, so separate tests would be necessary for each. For example, a test of whether a machine felt hopeful, or liked caring for children and elderly people, might have to include observing its behaviour.

Would some kind of Turing test be reliable? In the Turing test, which is named after Alan Turing, who suggested it, a person has a conversation with an unseen person or machine, and has to decide which it is. The person doing the test chooses what to talk about and what questions to ask, expecting that a machine would reply in a different way from a person. Telling whether a machine was conscious might be similar.
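As an illustration of the set-up just described, here is a toy outline of a Turing-test session; the questions and the canned machine replies are invented, and a real test would of course be far less transparent.

```python
# A toy outline of the Turing test: a judge converses with an unseen
# respondent - randomly a person or a scripted machine - and must guess
# which it was. Someone running the test always knows the true answer.
import random

def machine_reply(question):
    return "I enjoy conversations like this one."   # scripted, not understood

def human_reply(question):
    return input(f"{question}\n> ")                 # a real person answers

def run_test(questions):
    respondent = random.choice([machine_reply, human_reply])  # hidden choice
    for q in questions:
        print("A:", respondent(q))
    guess = input("Was that a person or a machine? ")
    actual = "machine" if respondent is machine_reply else "person"
    print(f"You said {guess!r}; it was a {actual}.")

run_test(["What did you do today?", "What makes you happy?"])
```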

In every Turing test, someone knows in advance whether it is a person or a machine. This means that it is possible to know whether the people doing the test came to the right conclusion. They often get it wrong.

Finding whether a machine is conscious is a different kind of issue. No one knows in advance.

A machine might be asked about its experiences, such as describing them, liking or disliking them, and what made them good or bad, and what things made the machine happy or sad. It might be asked if it was conscious. Replies could be “Yes”, “No”, “Sometimes”, “I don’t know”, or “I don’t understand the question”.

Whatever the questions, answers and discussions were, the machine could have been lying, consciously or unconsciously. Asking a machine to do tasks, like identifying or finding something, or solving a problem, would not identify whether it was conscious.
Testing how good it was at playing chess would not be very useful – unless you thought that it might be conscious if it lost.

Just as the content of human consciousness seems to be entirely dependent on the information contained in the brain, so would we expect the content of the consciousness of a machine to be entirely dependent on the information contained in its memory. A machine that was not conscious should be equally capable, or equally incapable, of passing tests as a similar machine that was conscious.

You could program a machine to say “ouch” whenever someone hit it, but that would not mean it was conscious.
In all cases it could have been programmed to give a false answer. Even a machine programmed to tell the truth could be programmed to “believe” it was conscious.
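A minimal sketch of that problem, with invented behaviour: the replies below mimic reports of pain and of consciousness, yet nothing in the code experiences anything.

```python
# Scripted responses that mimic reports of consciousness without any
# experience behind them. The keyword matching is deliberately crude.
class Robot:
    def on_impact(self):
        # Programmed reflex - no pain is felt anywhere in this code.
        return "Ouch!"

    def answer(self, question):
        # Canned replies chosen by keyword matching, not by introspection.
        if "conscious" in question.lower():
            return "Yes, I am conscious."
        return "I don't understand the question."

r = Robot()
print(r.on_impact())                  # "Ouch!"
print(r.answer("Are you conscious?")) # "Yes, I am conscious." - proves nothing
```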

There seems to be no alternative test.

So how would the developers know whether they had succeeded? How would they convince the doubters?
If there are ever going to be conscious machines, there are sure to be some buyers.

I wish them luck.

-0-