Hofstadter’s main subjects in that book (at least as I read it) are Kurt Gödel’s incompleteness theorems, self-reference and “strange loops”. He also tied these subjects to artificial intelligence (AI) and the nature of computers.
First, let’s get one thing out of the way.
Neither this piece nor Hofstadter’s book has much — or indeed anything — to do with the debate about IQ (i.e., intelligence quotient) and how it relates to human groups and individuals. (This has become such a politicised issue — on all sides — that it’s hardly worth contributing to.)
Predictably, the question “What is intelligence?” is raised (if often obliquely) in Gödel, Escher, Bach: an Eternal Golden Braid. Thus Hofstadter offered his readers eight “essential abilities for intelligence”. So what immediately follows are some biographical details and quotes from Hofstadter which may help put this piece in some kind of perspective.
Hofstadter once said that “a large fraction [of his audience] seems to be those who are fascinated by technology”. Indeed his work “has inspired many students to begin careers in computing and artificial intelligence”. Yet Hofstadter himself has also (rhetorically?) said that he has “no interest in computers”.
Hofstadter has also advanced various sceptical positions on AI. For example, when Garry Kasparov was defeated by Deep Blue, Hofstadter wrote:
“It was a watershed event, but it doesn’t have to do with computers becoming intelligent.”
Yet despite all that, Hofstadter did once write (in his collection of articles Metamagical Themas) that
“in this day and age, how can anyone fascinated by creativity and beauty fail to see in computers the ultimate tool for exploring their essence?”
Indeed Hofstadter went so far as to organise a symposium (at Stanford University in April 2000) called ‘Spiritual Robots’.
Eight Essential Abilities for Intelligence?
The following piece is not a set of arguments which attempts to state that computers are identical to humans when it comes to all aspects of intelligence. Instead, it’s simply an attempt to challenge what are seen — by many — to be absolute and (as it were) eternal differences between computers and human beings when it comes to intelligence.
It’s worth saying here that some of my responses to each of Hofstadter’s eight “essential abilities for intelligence” may get a little repetitive. That’s mainly because what can be said about one such ability — in relation to artificial intelligence or computers — can be said about (some of) the others too.
In any case, much depends on how one takes these abilities. And, as we shall see, it certainly depends on how the words (or concepts) contained in the sentences which express them are interpreted.
It’s also worth noting that Hofstadter’s eight essential abilities for intelligence have been extensively quoted over the years (though little discussed) in the literature on AI and other related matters. (See the pages of quotes — on Google Search — here.)
The following is the relevant extract from Hofstadter’s book (to which I’ve added numbers):
“No one knows where the borderline between non-intelligent behavior and intelligent behavior lies, in fact, to suggest that a sharp border exists is probably silly. But essential abilities for intelligence are certainly:
(1) to respond to situations very flexibly; (2) to take advantage of fortuitous circumstances; (3) to make sense out of ambiguous or contradictory messages; (4) to recognise the relative importance of different elements of a situation; (5) to find similarities between situations despite differences which may separate them; (6) to draw distinctions between situations despite similarities which may link them; (7) to synthesize new concepts by taking old concepts and putting them together in new ways; (8) to come up with ideas which are novel.”
It can be seen in the above that Hofstadter uses the words “intelligent behavior”, not the more abstract and general word “intelligence”. This suggests that he believed that if any x is capable of behaving — or acting — intelligently, then it is intelligent. Thus this position works against the idea that intelligence is some kind of entity in the mind or brain that effectively works as a kind of all-purpose and private mental “organ”. Behaviourists (from Gilbert Ryle and before) have, of course, argued against this conception of intelligence — and for many good reasons.
I shall now tackle each essential ability for intelligence one by one.
(1) To respond to situations very flexibly
The problem here is that not all adult human beings “respond to situations very flexibly”. Some human beings are very inflexible and they are so in many different ways. So reacting very flexibly can’t be entirely definitive of what it is to be a human being. Of course many humans do respond to situations very flexibly — yet the point still stands.
And neither can it be said that all computers (or all computer programmes) don’t respond very flexibly. In many contexts, they do. Think of an Internet search in which your wording (either for a question or for something else) is very ungrammatical or includes misspellings. Despite all that, the search picks up on these things and still provides a good answer. Of course sceptics may question this and say that it’s not a good example of responding to a situation very flexibly. But why isn’t it a good example — at least within this limited context? That said, one can also cite the well-known case of computerised robots and their not being able to navigate very well around new environments (but see here). But this is also true of some adult human beings and all young children. (Incidentally, many other animal species are better navigators than most humans.)
Of course all this will partly depend on what “very flexibly” means because, as with all Hofstadter’s examples, this phrase is — to some extent at least — vague. Indeed Hofstadter probably wouldn’t have denied this problem of vagueness when it comes to these abilities for intelligence.
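The search example above can be put in concrete (if toy) terms. One standard way a program can respond flexibly to misspelled input is to match each query word to the closest known word by Levenshtein edit distance. The vocabulary and function names below are my own illustration, not how any real search engine works:

```python
# A minimal sketch of spell-tolerant matching: map a (possibly misspelled)
# word to the nearest word in a known vocabulary by edit distance.

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# An invented vocabulary, purely for illustration.
VOCABULARY = ["intelligence", "computer", "flexible", "behaviour"]

def correct(word: str) -> str:
    """Return the vocabulary word closest to the input word."""
    return min(VOCABULARY, key=lambda v: edit_distance(word, v))

print(correct("inteligence"))  # -> intelligence
print(correct("computor"))     # -> computer
```

Real search engines use far more sophisticated models, but even this sketch “responds flexibly” to input it was never shown verbatim.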
(2) To take advantage of fortuitous circumstances
This is even vaguer — or simply broader — than (1) above.
The obvious thing which will be argued here is that computers won’t know anything about specific “fortuitous circumstances”… unless they’ve been programmed to know about them. Of course if they’ve been programmed to know about specific fortuitous circumstances, then they won’t actually be fortuitous circumstances to that computer. The problem here is that human beings also need to be programmed (or educated) enough in order to take advantage of fortuitous circumstances. And, as with all the other abilities cited by Hofstadter, some human beings are not very good at taking advantage of fortuitous circumstances. Still, compared to most computers (or computerised robots) and in most situations, humans are good at this.
Yet all that hints at a central problem.
This particular distinction between humans and computers isn’t absolute. Indeed there’s no categorical reason why computers can’t be programmed to take advantage of fortuitous circumstances.
Again, this is not an argument which states that computers are identical to humans when it comes to all aspects of intelligence. Instead, it’s simply to challenge what are seen — by many — to be absolute and eternal differences.
(3) To make sense out of ambiguous or contradictory messages
Some human beings are often not very good at making sense out of ambiguous or contradictory messages. Indeed some adult humans are no good at this in any circumstances. Of course many other human beings are good at this. Yet because not all human beings are good at this, then this can’t be seen as a clear cut and definitive dividing line between human beings and computers. In other words, in order to make clear cut and definitive distinctions between human beings and computers one mustn’t rely on the best human beings and the worst computers to do so. (This is what the physicist and mathematician Roger Penrose does when he highlights the ability of higher-level mathematicians to “see” Gödelian “truths”.)
It’s hard to see why computers can’t — at least in principle — be suitably programmed to make sense of ambiguous or contradictory messages. Sure, it would need a lot of detailed and fine-tuned programming; though human beings also need a lot of detailed and fine-tuned programming (or education) to make sense out of ambiguous or contradictory messages.
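One can at least gesture at how such programming might begin. The toy sketch below resolves an ambiguous word by scoring each candidate sense against the words around it — a crude version of the Lesk method from computational linguistics; the senses and signature words here are invented for the example:

```python
# Toy word-sense disambiguation: pick the sense whose signature words
# overlap most with the surrounding context. (Invented data.)

SENSES = {
    "bank": {
        "financial institution": {"money", "loan", "deposit", "account"},
        "river edge": {"river", "water", "fishing", "shore"},
    }
}

def disambiguate(word: str, context: str) -> str:
    """Score each candidate sense by overlap with the context words."""
    context_words = set(context.lower().split())
    senses = SENSES[word]
    return max(senses, key=lambda s: len(senses[s] & context_words))

print(disambiguate("bank", "she opened a deposit account at the bank"))
# -> financial institution
print(disambiguate("bank", "we went fishing on the river bank"))
# -> river edge
```

Serious systems go far beyond this, but the principle — using context to collapse ambiguity — is the same.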
(4) To recognise the relative importance of different elements of a situation
If anything, a computer could be better at recognising the relative importance of different elements of a situation than many human beings. (A point which also applies to some of the other abilities cited by Hofstadter.) This would of course depend on the “situation” and its “different elements”. In addition, we’d need to know how exactly humans recognise the relative importance of different elements of a situation, and whether it would be possible — or impossible — to replicate this cognitive ability in computers. As it is, this ability — as expressed by Hofstadter — is almost too vague to comment upon.
(5) To find similarities between situations despite differences which may separate them
As already stated, some of these responses may be getting a little repetitive. This is because what can be said about one ability for intelligence — in relation to computers — can be said about others too. Thus the main response to (5) can also be found in (6) below.
A computer could be better than most — or even all — human beings at finding similarities between situations despite differences which may separate them. That’s primarily because a computer may have easy access to a huge amount of data. Of course human beings also have easy access to a huge amount of data (in memory, by reading books, etc.). A computer may, of course, find many irrelevant or useless similarities between situations — which are otherwise different — without “knowing” that they are in fact irrelevant or useless… yet the same is true of some human beings in some circumstances. It must also be said that computer programmers have spent much time “training” computers to ignore irrelevant or useless information, and therefore irrelevant or useless similarities or differences. (See Daniel Dennett’s ‘Cognitive wheels: the frame problem of AI’.)
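One common way to make “finding similarities despite differences” concrete is to represent each situation as a bag of words and compare the resulting feature vectors by cosine similarity. The situations below are invented for illustration:

```python
# Bag-of-words cosine similarity between "situations".
# Chess and Go share structure despite surface differences; cooking doesn't.

from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

chess = Counter("two players take turns moving pieces on a board".split())
go_game = Counter("two players take turns placing stones on a board".split())
cooking = Counter("one cook combines ingredients in a pan".split())

print(cosine(chess, go_game))  # high: similar despite differences
print(cosine(chess, cooking))  # low
```

The Dennett point applies here too: nothing in the arithmetic tells the machine which shared words actually matter.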
(6) To draw distinctions between situations despite similarities which may link them
The ability to draw distinctions between situations despite similarities which may link them is also hard to grasp because it’s a fairly vague ability. And again it must be said that a computer could be very good at this. After all, some computers have very easy access to a vast amount of data which they can quickly sort through. Admittedly, a large data pool may not be helpful here if the computer hasn’t got the ability to “draw distinctions” in the first place — unless, that is, the computer (or its programme) contains something that enables it to draw distinctions. And then we’d need to know how that can — or could — be done.
As it is, computers (or programmes) do often make the mistake of linking things which are really dissimilar (as in Internet searches based on word connections), rather than drawing distinctions between things which are otherwise similar. But, here again, many adult human beings do this too. After all, don’t psychologists tell us that adult human beings tend to automatically use (or employ) “stereotypes” and “generalisations” in which clear distinctions are ignored and vague similarities are emphasised?
(7) To synthesize new concepts by taking old concepts and putting them together in new ways
Intuitively, this ability does seem to go beyond what computers are capable of… at least at the present time. It seems genuinely hard to get a computer to take “old concepts” and put them “together in new ways”. Doing so would require a lot of semantic nous and contextual skill.
Yet all this depends.
It depends on which kinds of concepts are being discussed. It may even depend on what’s meant by the word “concept”. Computers can certainly — and with ease — shuffle numbers and equations around. They can even answer questions that haven’t been asked before — and surely that requires putting old concepts together in new ways.
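At the most trivial level, “putting old concepts together in new ways” can even be mimicked by merging attribute structures. The sketch below is purely illustrative (nobody should take it as a theory of concepts):

```python
# Represent a "concept" as a dictionary of attributes and form a new
# concept by blending two old ones. (Entirely illustrative.)

horse = {"legs": 4, "can_fly": False, "habitat": "land"}
bird = {"wings": 2, "can_fly": True, "habitat": "air"}

def blend(name, a, b, prefer=()):
    """Merge two attribute dicts; keys listed in `prefer` take b's value."""
    new = dict(a)
    for key, value in b.items():
        if key not in new or key in prefer:
            new[key] = value
    return name, new

print(blend("pegasus", horse, bird, prefer=("can_fly",)))
# A "new" concept: a four-legged, winged, flying, land-dwelling creature.
```

Whether such recombination amounts to genuine synthesis, rather than mere shuffling, is of course exactly what’s at issue.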
(8) To come up with ideas which are novel
Firstly, we’d need to establish what an “idea” is. Computers can, after all, establish their own theorems (although a distinction has to be made between proving theorems and creating them — see here), design better routes from A to B, connect given sets of data to other data, respond to human questions which themselves express ideas (say, in Internet searches), and so on.
In any case, surely it would be very easy for a computer to come up with ideas which are novel. What’s important is whether those novel ideas would also be interesting, informative, useful and relevant. In other words, novelty in itself can be a very shallow phenomenon. That said, even if it is a shallow phenomenon, it may still be something that computers are very bad at. As it is, it’s hard to say whether computers are bad at creating novel ideas because the notion is fairly vague and/or broad. I suppose we can say — and few AI aficionados would at present deny it — that no computer could come up with a novel theory of a new form of political government or a solution to the mind-body problem. That said, most humans couldn’t do these things either. In addition, it isn’t — in principle — impossible that a computer could do these things.
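To make the point about machine-generated statements slightly more concrete: the sketch below uses forward chaining over a few invented Horn-clause rules to derive conclusions it was never explicitly given. It is, of course, nothing like a serious theorem prover:

```python
# Forward chaining: repeatedly apply if-then rules until no new
# conclusions appear. The rules and facts are made up for the example.

RULES = [
    ({"rain"}, "wet_ground"),
    ({"wet_ground", "cold"}, "ice"),
    ({"ice"}, "slippery"),
]

def forward_chain(facts: set) -> set:
    """Derive everything the rules entail from the starting facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"rain", "cold"}))
# Derives wet_ground, ice and slippery from rain and cold alone.
```

Whether such mechanically derived conclusions count as “novel ideas” in Hofstadter’s sense is, again, the very question under discussion.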
Finally, no one is claiming that any computer can do these kinds of thing today. The claim is that it is possible — and even very likely — that some of these things will be done in the future by computers or by other examples of AI.