[Socialism Today, No 19, June 1997, p. 22-24]
Deep Blue’s recent victory over world chess champion Garry Kasparov did not represent the triumph of conscious machines over humankind. But then, what is consciousness? Geoff Jones writes.
After Deep Blue’s victory, one commentator joked that they wouldn’t accept the computer as intelligent until it appeared on Sportsnight, saying how it respected Kasparov and how much it was looking forward to the next contest. A cynic added that it should also have retained Max Clifford to organise some sponsorship deals.
Jokes often illuminate real questions of principle. In this case, the vital question is ‘What do we mean by intelligence?’ Obviously the mere power to take in data, process it and produce an output based on the input is not a useful definition, or a supermarket cash register would have to be defined as intelligent. The real question is whether the computer can be thought to be ‘conscious’ in the way that human beings are – aware of itself as separate from the outside world. Alan Turing, a pioneering computer genius, imagined a test to illuminate the problem. You are in a room, at a desk holding a computer screen and keyboard. The computer is connected to the next room, where there is either a human being sitting at a similar keyboard or another computer. You can ask any questions you like by using the keyboard and you receive answers on the screen. The test is to decide, from the answers you get, what is next door – human being or computer. Turing postulated that if the inhabitant of the other room were a computer and you were unable to tell, the computer would have to be accepted as having consciousness.
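Turing’s set-up can be stated almost as pseudocode. The sketch below (in Python, with invented stand-in functions for the judge and the two respondents) is not any real test, just the bare logic of the imitation game: if the judge’s guesses are no better than chance over many rounds, the machine has passed.

```python
import random

def run_round(judge, human, machine, questions):
    """One round of the imitation game: a hidden respondent answers
    typed questions, and the judge must guess whether it is the machine.
    Returns True if the judge guessed correctly."""
    is_machine = random.choice([True, False])
    respondent = machine if is_machine else human
    # The judge sees only the typed transcript, never the respondent.
    transcript = [(q, respondent(q)) for q in questions]
    return judge(transcript) == is_machine

# Toy stand-ins: a machine that mimics the human perfectly, and a judge
# reduced to guessing. All three are placeholders for illustration only.
human = lambda q: "I'd have to think about that."
machine = lambda q: "I'd have to think about that."
judge = lambda transcript: random.choice([True, False])

trials = 1000
correct = sum(run_round(judge, human, machine, ["Do you enjoy chess?"])
              for _ in range(trials))
print(f"Judge correct in {correct} of {trials} rounds (chance is about 500)")
```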
Leaving aside the fact that some individuals like John Redwood might well fail the Turing test, modern computing systems have been constructed that can actually pass it. In fact, some so-called ‘expert systems’ can be used to give uncannily human-like results. One early programme, ELIZA, could conduct question and answer sessions with psychiatric patients and was claimed to achieve results as good as some human psychiatrists. But passing the Turing test does not necessarily mean that the machine ‘knows what it is doing’ – the vital component of consciousness. A philosopher, John Searle, devised an ‘anti-Turing test’: the ‘Chinese room’ experiment. You are locked in a room with a big pile of cards, each of which has a Chinese character on it. You also have a big dictionary-like book. Someone pushes a card under the door which has ‘squiggle’ on it. You consult the dictionary, which tells you that when you get ‘squiggle’ you should pick out a card ‘squoggle’ and push it back under the door. This process continues for some time, then the door is unlocked. You are let out and congratulated on your perceptive contribution to a discussion on Chinese ballet.
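ELIZA’s trick was pattern-matching, not understanding. The fragment below is a minimal ELIZA-style responder with a handful of made-up rules; Joseph Weizenbaum’s 1966 original worked on the same principle, only with a much larger script of patterns and canned replies.

```python
import re

# Each rule pairs a pattern with a response template; the matched text
# is simply reflected back at the patient.
RULES = [
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def eliza_reply(sentence: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # stock reply when no rule matches

print(eliza_reply("I am worried about the rematch"))
# -> How long have you been worried about the rematch?
```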
Searle argued that the programmes devised for computers in Turing tests or in expert systems (known technically as ‘algorithms’) were identical with the dictionary used in the Chinese room, and the computer could no more be thought of as conscious than you could be considered an expert on Chinese ballet. In the case of chess-playing computers, the algorithm fed in covers possible moves and counter-moves in a game. In chess, the number of such moves is very large but finite, so the job of the algorithm is only quantitatively different from that of the supermarket cash register, which is able to quickly total up your three packets of coffee (on a three-for-two offer), six tins of beans, loaf of bread and special offer woolly socks, give you a bill, accept your credit card and rack up bonus points on your store card.
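Reduced to code, Searle’s rulebook is nothing more than a lookup table. In the sketch below, ‘squiggle’ and ‘squoggle’ stand in for Chinese characters as in the article (the second entry is invented padding); nothing in the process understands anything at all.

```python
# The room's 'dictionary': card in, card out, no comprehension anywhere.
RULE_BOOK = {
    "squiggle": "squoggle",
    "squoggle": "squiggle-squiggle",  # a further, invented rule
}

def chinese_room(card: str) -> str:
    # Push back under the door whatever the book dictates.
    return RULE_BOOK.get(card, "blank card")

print(chinese_room("squiggle"))  # -> squoggle
```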
* * *
There are strong theoretical arguments that it is impossible to produce an algorithm for conscious action but, leaving that aside, is it meaningful to try to define ‘consciousness’ in the abstract, independent of the physical mechanism which embodies that consciousness? The materialist answer is no.
There is an alternative approach. Although humans are complex, there are plenty of simple creatures which take in information from the outside world and act on that information. Although amoebas, for example, could not be thought of as conscious, an animal like a dog certainly is, even if its intelligence and self-awareness are at a much lower level than a human’s. If it is possible to make mechanical equivalents of such animals, it should be possible to add to their complexity until, in an equivalent of Darwinian evolution, conscious machines could be produced.
Back in the 1950s, a British scientist called W Grey Walter produced simple battery-powered mechanical animals which he called ‘tortoises’.
Left to itself, a tortoise would move around on a flat surface in a random way. However, if the surface was unevenly lit, the tortoise would move until it found the darkest part, where it would stop. Also, when the battery charge was getting low, the tortoise would move to an electrical socket and plug itself in until it was recharged. It is seductive to talk of them ‘hiding’ in the darkest spot and going to ‘feed’ at the socket.
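The tortoise’s whole repertoire fits in a few lines. The sketch below is a made-up one-dimensional caricature, not Grey Walter’s actual design (his machines used valves and photocells, not code): light is dimmest at position 0 and a charging socket sits at position 10. The point is that ‘hiding’ and ‘feeding’ fall out of two simple reflexes, with no inner life required.

```python
def light_at(pos):
    # Invented light field: dimmest at position 0, brighter further out.
    return abs(pos) / 10.0

def step_toward(pos, target):
    return pos + (1 if target > pos else -1) if pos != target else pos

def tortoise(pos=6, battery=1.0, steps=50):
    for t in range(steps):
        battery -= 0.02                  # every step costs charge
        if battery < 0.3:
            pos = step_toward(pos, 10)   # reflex 1: head for the socket
            if pos == 10:
                battery = 1.0            # 'feed' until recharged
        elif light_at(pos) > 0.05:
            pos = step_toward(pos, 0)    # reflex 2: head for the dark
        # otherwise: dark enough, so stay put ('hide')
        print(f"t={t:2d} pos={pos:3d} "
              f"light={light_at(pos):.2f} battery={battery:.2f}")

tortoise()
```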
However, to go further in this direction it was necessary to find out how the brain works, and this proved very difficult. Even at the lowest level, the processes of transmitting impulses along and between nerves were found to be extremely complex, requiring detailed analysis of the physics, chemistry and biology of the nerve cells. The problem was like that of someone with no electronic knowledge looking inside a TV set: they wouldn’t even know which were the important bits and which mere packing, or which had been left there by accident.
Over twenty years, the results of such work have enabled the production of systems called neural nets, which model animal nervous systems and which can modify themselves as a result of external stimuli. The production of these systems, usually modelled in computers rather than as actual physical devices, has moved slowly towards greater complexity.
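The idea can be shown at its smallest scale. The sketch below is a single textbook model neuron (a perceptron) whose connection weights are modified by external stimuli until it reliably fires only when both of its inputs are active; the learning rule and numbers are standard illustrations, not anything specific to the systems described here.

```python
import random

def train_perceptron(examples, rate=0.1, epochs=50):
    """Adjust two input weights and a bias until the neuron's firing
    matches the target output for every example."""
    w = [random.uniform(-0.5, 0.5) for _ in range(2)]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            fired = int(sum(wi * xi for wi, xi in zip(w, inputs)) + bias > 0)
            error = target - fired
            # Strengthen or weaken each connection in proportion to its
            # input and the error - a crude external 'stimulus'.
            w = [wi + rate * error * xi for wi, xi in zip(w, inputs)]
            bias += rate * error
    return w, bias

# Teach the neuron to fire only when both inputs are on (logical AND).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, bias = train_perceptron(examples)
for inputs, target in examples:
    fired = int(sum(wi * xi for wi, xi in zip(w, inputs)) + bias > 0)
    print(inputs, "->", fired, f"(target {target})")
```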
This approach starts from the basic premise that since consciousness is a function of the human brain, it can only be understood by understanding the way in which the brain works. In the words of the Canadian philosopher Patricia Churchland, ‘The basic story has to be in terms of the brain’. In the past twenty years, the areas of the brain responsible for various functions have been identified, along with the beginnings of an understanding of the way in which those areas are linked and, most important, how those linkages are changed by input from outside: how memories are stored and lost, and how physical skills are ‘built in’. This materialist approach holds out the only hope of a real understanding of what consciousness is and, more important, of how consciousness might be produced in an artificial creature.
* * *
By any test, Deep Blue cannot be considered ‘conscious’. Similarly, computers which merely pass the Turing test cannot be considered conscious, but that does not mean that systems with consciousness cannot be constructed. However, a being constructed artificially with the same physical, chemical and biological structure as an ‘organic’ human would have a similar consciousness if, and only if, it developed in interaction with the external world in the same way as a human baby does. Children brought up in isolation from humankind have been found whose conscious development seemed little greater than that of animals.
But what about an electronic system? Grey Walter’s tortoise went searching for its mains socket, but it could not really be described as ‘hungry’. Not even the most dedicated supporter of animal rights would step in to stop its mains supply being switched off.
There is no reason in principle why a more complex electronic analogue of a human brain might not have ‘consciousness’, and this poses important questions. The TV series Star Trek features Data, an android with consciousness. In one episode, a robotics expert applied to have Data dismantled for study. Data refused to cooperate and it was necessary to decide whether Data, a machine, had the right to refuse. The episode reached an interesting conclusion: if the android did not have the right to refuse, there was the possibility of creating a new type of slave society, with a slave class of beings with no rights. After all, the Romans defined their slaves as ‘machines with voices’.
Consciousness is just ‘what the brain does’. Construct a replica brain and you construct a replica consciousness. But since an integral part of the construction of consciousness is the effect of interaction with the outside world, the consciousness of an electronic system might be very different from human consciousness. Concepts such as fear or hope, selfishness or solidarity, might be completely alien to it. Or else, like the monster in Mary Shelley’s Frankenstein, the system’s consciousness might be formed by the reactions of humans to it. Deep Blue’s victory does not represent a step towards the production of conscious machines, but such machines could be produced in ten or twenty years. In the development of those machines, our idea of the meaning of the word ‘consciousness’ will be dramatically expanded.