In this week’s eSkeptic we present a feature article by Peter Kassan which appeared in Skeptic magazine Volume 12, Number 2.

For decades now, computer scientists and futurists have been telling us that computers will achieve human-level artificial intelligence soon. That day appears to be off in the distant future. Why? In this penetrating skeptical critique of A.I., computer scientist Peter Kassan reviews the numerous reasons why this problem is harder than anyone anticipated. — Michael Shermer


digital image by Daniel Loxton and Jim Smith

A.I. Gone Awry
The Futile Quest for Artificial Intelligence

by Peter Kassan

On March 24, 2005, an announcement was made in newspapers across the country, from the New York Times1 to the San Francisco Chronicle,2 that a company3 had been founded to apply neuroscience research to achieve human-level artificial intelligence. The reason the press release was so widely picked up is that the man behind it was Jeff Hawkins, the brilliant inventor of the PalmPilot, an invention that made him both wealthy and respected.4

You’d think from the news reports that the idea of approaching the pursuit of artificial human-level intelligence by modeling the brain was a novel one. Actually, a Web search for “computational neuroscience” finds over a hundred thousand webpages and several major research centers.5 At least two journals are devoted to the subject.6 Over 6,000 papers are available online. Amazon lists more than 50 books about it. A Web search for “human brain project” finds more than eighteen thousand matches.7 Many researchers consider modeling the human brain, or creating a “virtual” brain, a feasible project, even if a “grand challenge.”8 In other words, the idea isn’t a new one.

Hawkins’ approach sounds simple. Create a machine with artificial “senses” and then allow it to learn, build a model of its world, see analogies, make predictions, solve problems, and give us the solutions.9 This sounds eerily similar to what Alan Turing10 suggested in 1948. He, too, proposed to create an artificial “man” equipped with senses and an artificial brain that could “roam the countryside,” like Frankenstein’s monster, and learn whatever it needed to survive.11

The fact is, we have no unifying theory of neuroscience. We don’t know what to build, much less how to build it.12 As one observer put it, neuroscience appears to be making “antiprogress” — the more information we acquire, the less we seem to know.13 Thirty years ago, the estimated number of neurons was between three and ten billion. Nowadays, the estimate is 100 billion. Thirty years ago it was assumed that the brain’s glial cells, which outnumber neurons nine to one, were purely structural and had no other function. In 2004, it was reported that this wasn’t true.14

Even the most ardent artificial intelligence (A.I.) advocates admit that, so far at least, the quest for human-level intelligence has been a total failure.15 Despite its checkered history, however, Hawkins concludes A.I. will happen: “Yes, we can build intelligent machines.”16

A Brief History of A.I.

Duplicating or mimicking human-level intelligence is an old notion — perhaps as old as humanity itself. In the 19th century, as Charles Babbage conceived of ways to mechanize calculation, people started thinking it was possible — or arguing that it wasn’t. Toward the middle of the 20th century, as mathematical geniuses Claude Shannon,17 Norbert Wiener,18 John von Neumann,19 Alan Turing, and others laid the foundations of the theory of computing, the necessary tool seemed available.

In 1955, a research project on artificial intelligence was proposed; a conference the following summer is considered the official inauguration of the field. The proposal20 is fascinating for its assertions, assumptions, hubris, and naïveté, all of which have characterized the field of A.I. ever since. The authors proposed that ten people could make significant progress in the field in two months. That ten-person, two-month project is still going strong — 50 years later — and it has involved the efforts of more like tens of thousands of people.

A.I. has splintered into three largely independent and mutually contradictory areas (connectionism, computationalism, and robotics), each of which has its own subdivisions and contradictions. Much of the activity in each of the areas has little to do with the original goals of mechanizing (or computerizing) human-level intelligence. However, in pursuit of that original goal, each of the three has its own set of problems, in addition to the many that they share.

1. Connectionism

Connectionism is the modern version of a philosophy of mind known as associationism.21 Connectionism has applications to psychology and cognitive science, and it underlies the schools of A.I.22 that include both artificial neural networks23 (ubiquitously said to be “inspired by” the nervous system) and the attempt to model the brain.

The latest estimates are that the human brain contains about 30 billion neurons in the cerebral cortex — the part of the brain associated with consciousness and intelligence. The 30 billion neurons of the cerebral cortex contain about a thousand trillion synapses (connections between neurons).24

Without a detailed model of how synapses work on a neurochemical level, there’s no hope of modeling how the brain works.25 Unlike the idealized and simplified connections in so-called artificial neural networks, those synapses are extremely variable in nature — they can have different cycle times, they can use different neurotransmitters, and so on. How much data must be gathered about each synapse? Somewhere between kilobytes (tens of thousands of numbers) and megabytes (millions of numbers).26 And since synapses can fire more than a thousand times per second, we may have to process those numbers a thousand times each second.
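
To get a feel for that scale, here is a rough back-of-envelope calculation using the figures quoted above: roughly 10^15 synapses, the low end of tens of kilobytes of state per synapse, updated a thousand times per second. The variable names are purely illustrative.

```python
# Back-of-envelope scale estimate, using the figures quoted above.
synapses = 1_000_000_000_000_000   # ~10^15 synapses in the cerebral cortex
bytes_per_synapse = 10_000         # low end: "kilobytes (tens of thousands of numbers)"
updates_per_second = 1_000         # synapses can fire ~1,000 times per second

storage = synapses * bytes_per_synapse        # ~10^19 bytes (about 10 exabytes)
throughput = storage * updates_per_second     # ~10^22 bytes per second

print(f"storage:    {storage:.1e} bytes")
print(f"throughput: {throughput:.1e} bytes/second")
```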

Have we succeeded in modeling the brain of any animal, no matter how simple? The nervous system of a nematode (worm) known as C. (Caenorhabditis) elegans has been studied extensively for about 40 years. Several websites27 and probably thousands of scientists are devoted exclusively or primarily to it. Although C. elegans is a very simple organism, it may be the most complicated creature to have its nervous system fully mapped. C. elegans has just over three hundred neurons, and they’ve been studied exhaustively. But mapping is not the same as modeling. No one has created a computer model of this nervous system — and the number of neurons in the human cortex alone is 100 million times larger. C. elegans has about seven thousand synapses.28 The number of synapses in the human cortex alone is over 100 billion times larger.
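
A quick check of those two ratios, using the approximate counts cited above (a sketch only):

```python
# Ratios between C. elegans and the human cerebral cortex, per the figures above.
cortex_neurons = 30_000_000_000           # ~30 billion cortical neurons
cortex_synapses = 1_000_000_000_000_000   # ~10^15 cortical synapses
worm_neurons = 302                        # C. elegans: just over three hundred neurons
worm_synapses = 7_000                     # ~7,000 synapses

print(cortex_neurons / worm_neurons)      # ~1e8   -> "100 million times larger"
print(cortex_synapses / worm_synapses)    # ~1.4e11 -> "over 100 billion times larger"
```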

The proposals to achieve human-level artificial intelligence by modeling the human brain fail to acknowledge the lack of any realistic computer model of a synapse, the lack of any realistic model of a neuron, the lack of any model of how glial cells interact with neurons, and the literally astronomical scale of what is to be simulated.

The typical artificial neural network consists of no more than 64 input “neurons,” approximately the same number of “hidden neurons,” and a number of output “neurons” between one and 256.29 This, despite a 1988 prediction by one computer guru that by now the world should be filled with “neuroprocessors” containing about 100 million artificial neurons.30

Even if every neuron in each layer of a three-layer artificial neural net with 64 neurons in each layer is connected to every neuron in the succeeding layer, and if all the neurons in the output layer are connected to each other (to allow creation of a “winner-takes-all” arrangement permitting only a single output neuron to fire), the total number of “synapses” can be no more than about 17 million, although most artificial neural networks typically contain far fewer — usually no more than a hundred or so.

Furthermore, artificial neurons resemble generalized Boolean logic gates more than actual neurons. Each neuron can be described by a single number — its “threshold.” Each synapse can be described by a single number — the strength of the connection — rather than the estimated minimum of ten thousand numbers required for a real synapse. Thus, the human cortex is at least 600 billion times more complicated than any artificial neural network yet devised.

It is impossible to say how many lines of code the model of the brain would require; conceivably, the program itself might be relatively simple, with all the complexity in the data for each neuron and each synapse. But the distinction between the program and the data is unimportant. If each synapse were handled by the equivalent of only a single line of code, the program to simulate the cerebral cortex would be roughly 25 million times larger than what’s probably the largest software product ever written, Microsoft Windows, said to be about 40 million lines of code.31 As a software project grows in size, the probability of failure increases.32 The probability of successfully completing a project 25 million times more complex than Windows is effectively zero.
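
The “600 billion” and “25 million” figures above can be reconstructed from the article’s own numbers. A sketch of that arithmetic follows; it assumes, as the text does, roughly 10^15 cortical synapses, about ten thousand numbers to characterize each real synapse, a single number per artificial synapse, and one line of code per synapse.

```python
# Reconstructing the two ratios quoted above (assumptions as stated in the text).
cortex_synapses = 1e15            # synapses in the cerebral cortex
numbers_per_real_synapse = 1e4    # "estimated minimum of ten thousand numbers"
ann_synapses = 17e6               # upper bound for the three-layer, 64-neuron net
numbers_per_ann_synapse = 1       # a single connection strength
windows_loc = 40e6                # Microsoft Windows, said to be ~40 million lines

complexity_ratio = (cortex_synapses * numbers_per_real_synapse) / \
                   (ann_synapses * numbers_per_ann_synapse)
size_ratio = cortex_synapses / windows_loc    # one line of code per synapse

print(f"{complexity_ratio:.1e}")  # ~5.9e+11 -- roughly 600 billion
print(f"{size_ratio:.1e}")        # 2.5e+07  -- i.e., 25 million
```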

Moore’s “Law” is often invoked at this stage in the A.I. argument.33 But Moore’s Law is more of an observation than a law, and it is often misconstrued to mean that about every 18 months computers and everything associated with them double in capacity, speed, and so on. In any case, Moore’s Law won’t solve the complexity problem at all. There’s another “law,” this one attributed to Niklaus Wirth: Software gets slower faster than hardware gets faster.34 Even though, according to Moore’s Law, your personal computer should be about a hundred thousand times more powerful than it was 25 years ago, your word processor isn’t. Moore’s Law doesn’t apply to software.
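
The “hundred thousand times” figure follows directly from compounding an 18-month doubling over 25 years (a sketch of the arithmetic only; Moore’s original observation strictly concerned transistor counts, not overall performance):

```python
# Compounding an 18-month doubling period over 25 years.
years = 25
doublings = years * 12 / 18      # about 16.7 doublings
speedup = 2 ** doublings
print(f"{speedup:,.0f}")         # ~104,000 -- "about a hundred thousand times"
```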

And perhaps last, there is the problem of testing. The lowest software error rate ever observed is about 2.5 errors per function point.35 At that rate, a software program large enough to simulate the human brain would contain about 20 trillion errors.
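
The text does not spell out how the 20-trillion figure is derived; one plausible reconstruction (mine, not the author’s) assumes one line of code per synapse, as above, and a conventional rough conversion of about 125 lines of code per function point:

```python
# One possible reconstruction of the "20 trillion errors" estimate.
lines_of_code = 1e15              # one line per synapse, as assumed above
loc_per_function_point = 125      # rough conventional conversion factor (assumption)
errors_per_function_point = 2.5   # the lowest error rate cited above

function_points = lines_of_code / loc_per_function_point
errors = function_points * errors_per_function_point
print(f"{errors:.0e}")            # 2e+13 -- about 20 trillion errors
```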

Testing conventional software (such as a word processor or Windows) involves, among many other things, confirming that its behavior matches detailed specifications of what it is intended to do in the case of every possible input. If it doesn’t, the software is examined and fixed. Connectionistic software comes with no such specifications — only the vague description that it is to “learn” a “pattern” or act “like” a natural system, such as the brain. Even if you discover that a connectionistic software program isn’t acting the way you want it to, there’s no way to “fix” it, because the behavior of the program is the result of an untraceable and unpredictable network of interconnections.

Testing connectionistic software is also impossible due to what’s known as the combinatorial explosion. The retina (of a single eye) contains about 120 million rods and 7 million cones.36 Even if each of those 127 million neurons were merely binary, like the beloved 8×8 input grid of the typical artificial neural network (that is, either responded or didn’t respond to light), the number of different possible combinations of input is a number greater than 1 followed by 38,230,809 zeroes. (The number of particles in the universe has been estimated to be about 1 followed by only 80 zeroes.37) Testing an artificial neural network with input consisting of an 8×8 binary grid is, by comparison, a small job: such a grid can assume any of 18,446,744,073,709,551,616 configurations — orders of magnitude smaller, but still impossible.
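
Both counts in this paragraph can be verified directly (a sketch; the first computation just measures the decimal size of 2 raised to the number of binary inputs):

```python
import math

# The retina's ~127 million binary inputs: how large is 2**127_000_000?
retina_inputs = 127_000_000
exponent_base10 = retina_inputs * math.log10(2)
print(exponent_base10)   # ~38,230,809.4 -> more than a 1 followed by 38,230,809 zeroes

# The "small" case: an 8x8 binary input grid has 2**64 configurations.
print(2 ** 64)           # 18,446,744,073,709,551,616
```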

2. Computationalism

Computationalism was originally defined as the “physical symbol system hypothesis,” meaning that “A physical symbol system has the necessary and sufficient means for general intelligent action.”38 (This is actually a “formal symbol system hypothesis,” because the actual physical implementation of such a system is irrelevant.) Although that definition wasn’t published until 1976, it co-existed with connectionism from the very beginning. It has also been referred to as “G.O.F.A.I.” (good old-fashioned artificial intelligence). Computationalism is also referred to as the computational theory of mind.39

The assumption behind computationalism is that we can achieve A.I. without having to simulate the brain. The mind can be treated as a formal symbol system, and the symbols can be manipulated on a purely syntactic level — without regard to their meaning or their context. If the symbols have any meaning at all (which, presumably, they do — or else why bother manipulating them?), that can be ignored until we reach the end of the manipulation. The symbols are at a recognizable level, more-or-less like ordinary words — a so-called “language of thought.”40

The basic move is to treat the informal symbols of natural language as formal symbols. Although, during the early years of computer programming (and A.I.), this was an innovative idea, it has now become a routine practice in computer programming — so ubiquitous that it’s barely noticeable.

Unfortunately, natural language — which may not literally be the language of thought, but which any human-level A.I. program has to be able to handle — can’t be treated as a formal symbol system. To give a simple example, “day” sometimes means “day and night” and sometimes means “day as opposed to night” — depending on context.

Joseph Weizenbaum41 observes that a young man asking a young woman, “Will you come to dinner with me this evening?”42 could, depending on context, simply express the young man’s interest in dining, or his hope to satisfy a desperate longing for love. The context — the so-called “frame” — needed to make sense of even a single sentence may be a person’s entire life.

An essential aspect of the computationalist approach to natural language is to determine the syntax of a sentence so that its semantics can be handled. As an example of why that is impossible, Terry Winograd43 offers a pair of sentences:

The committee denied the group a parade permit because they advocated violence.

The committee denied the group a parade permit because they feared violence.44

The sentences differ by only a single word (of exactly the same grammatical form). Disambiguating these sentences can’t be done without extensive — potentially unlimited — knowledge of the real world.45 No program can do this without recourse to a “knowledge base” about committees, groups seeking marches, etc. In short, it is not possible to analyze a sentence of natural language syntactically until one resolves it semantically. But since one needs to parse the sentence syntactically before one can process it at all, it seems that one has to understand the sentence before one can understand the sentence.

In natural language, the boundaries of the meaning of words are inherently indistinct, whereas the boundaries of formal symbols aren’t. For example, in binary arithmetic, the difference between 0 and 1 is absolute. In natural language, the boundary between day and night is indistinct, and arbitrarily set for different purposes. To have a purely algorithmic system for natural language, we need a system that can manipulate words as if they were meaningless symbols while preserving the truth-value of the propositions, as we can with formal logic. When dealing with words — with natural language — we just can’t use conventional logic, since a new “axiom” can affect the “axioms” we already have: birds can fly, but penguins and ostriches are birds that can’t. Since the goal is to automate human-style reasoning, the next move is to try to develop a different kind of logic — so-called non-monotonic logic.

What used to be called logic without qualification is now called “monotonic” logic. In this kind of logic, the addition of a new axiom doesn’t change any axioms that have already been processed or inferences that have already been drawn. The attempt to formalize the way people reason is quite recent — and entirely motivated by A.I. And although the motivation can be traced back to the early years of A.I., the field essentially began with the publication of three papers in 1980.46 However, according to one survey of the field in 2003, despite a quarter-century of work, all that we have are prospects and hope.47
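
As a minimal illustration of the difference, consider the birds-and-penguins example above, written as a toy Python sketch rather than a formal logic (the rules and names are invented purely for illustration). Under a monotonic rule, once “birds fly” is in force, learning that penguins are birds yields a conclusion that can never be retracted; a non-monotonic (default) rule lets the exception defeat the earlier inference.

```python
# A toy illustration of default (non-monotonic) reasoning.
# The rules and names here are invented purely for illustration.

def can_fly_monotonic(animal, is_bird):
    # Monotonic rule: "all birds fly." Nothing added later can retract this.
    return is_bird

def can_fly_default(animal, is_bird, exceptions=("penguin", "ostrich")):
    # Default rule: "birds fly, unless we know otherwise."
    # A newly added exception defeats the earlier default inference.
    if animal in exceptions:
        return False
    return is_bird

print(can_fly_monotonic("penguin", is_bird=True))   # True  -- the wrong answer
print(can_fly_default("penguin", is_bird=True))     # False -- the exception wins
```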

An assumption of computationalists is that the world consists of unambiguous facts that can be manipulated algorithmically. But what is a fact to you may not be a fact to me, and vice versa.48 Furthermore, the computationalist approach assumes that experts apply a set of explicit, formalizable rules. The task of computationalists, then, is simply to debrief the experts on their rules. But, as numerous studies of actual experts have shown,49 only beginners behave that way. At the highest level of expertise, people don’t even recognize that they’re making decisions. Rather, they are fluidly interacting with the changing situation, responding to patterns that they recognize. Thus, the computationalist approach leads to what should be called “beginner systems” rather than “expert systems.”

The way people actually reason can’t be reduced to an algorithmic procedure like arithmetic or formal logic. Even the most ardent practitioners of formal logic spend most of their time explaining and justifying the formal proofs scattered through their books and papers — using natural language (or their own unintelligible versions of it). Even more ironically, none of these practitioners of formal logic — all claiming to be perfectly rational — ever seem to agree with each other about any of their formal proofs.

Computationalist A.I. is plagued by a host of other problems. First of all, its systems don’t have any common sense.50 Then there’s the “symbol-grounding problem.”51 The analogy is trying to learn a language from a dictionary (without pictures) — every word (symbol) is simply defined using other words (symbols), so how does anything ever relate to the world? Then there’s the “frame problem” — which is essentially the problem of which context to apply to a given situation.52 Some researchers consider it to be the fundamental problem in both computationalist and connectionist A.I.53

The most serious computationalist attempt to duplicate human-level intelligence — perhaps the only serious attempt — is known as CYC54 — short for enCYClopedia (but certainly meant also to echo “psych”). The head of the original project and the head of CYCORP, Douglas Lenat,55 has been making public claims about its imminent success for more than twenty years. The stated goal of CYC is to capture enough human knowledge — including common sense — to, at the very least, pass an unrestricted Turing Test.56 If any computationalist approach could succeed, it would be this mother of all expert systems.

Lenat made some remarkable predictions: at the end of ten years, by 1994, he projected, the CYC knowledge base would contain 30–50% of consensus reality.57 (It is difficult to say what this prediction means, because it assumes that we know what the totality of consensus reality is and that we know how to quantify and measure it.) The year 1994 would represent another milestone in the project: CYC would, by that time, be able to build its knowledge base by reading online materials and asking questions about them, rather than having people enter information.58 And by 2001, Lenat said, CYC would have become a system with human-level breadth and depth of knowledge.59

In 1990, CYC produced what it termed “A Midterm Report.”60 Given that the effort started in 1984, calling it this implied that the project would be successfully completed by 1996, although in the section labeled “Conclusion” it refers to three possible outcomes that might occur by the end of the 1990s. One would hope that by that time CYC would at least be able to do simple arithmetic. In any case, the three scenarios are labeled “good” (totally failing to meet any of the milestones), “better” (which shifts the achievements to “the early twenty-first century” and still consists of “doing research”), and “best” (in which the achievement still isn’t “true A.I.” but only the “foundation for … true A.I.” in — 2015).

Even as recently as 2002 (one year after CYC’s predicted achievement of human-level breadth and depth of knowledge), CYC’s website was still quoting Lenat making promises for the future: “This is the most exciting time we’ve ever seen with the project. We stand on the threshold of success.”61

Perhaps most tellingly, Lenat’s principal coworker, R.V. Guha,62 left the team in 1994, and was quoted in 1995 as saying “CYC is generally viewed as a failed project. The basic idea of typing in a lot of knowledge is interesting but their knowledge representation technology seems poor.”63 In the same article, Guha is further quoted as saying of CYC, as could be said of so many other A.I. projects, “We were killing ourselves trying to create a pale shadow of what had been promised.” It’s no wonder that G.O.F.A.I. has been declared “brain-dead.”64

3. Robotics

The third and last major branch of the river of A.I. is robotics — the attempt to build a machine capable of autonomous intelligent behavior. Robots, at least, appear to address many of the problems of connectionism and computationalism: embodiment,65 lack of goals,66 the symbol-grounding problem, and the fact that conventional computer programs are “bedridden.”67

However, when it comes to robots, the disconnect between the popular imagination and reality is perhaps the most dramatic. The notion of a fully humanoid robot is ubiquitous not only in science fiction but in supposedly non-fictional books, journals, and magazines, often by respected workers in the field.

This branch of the river has two sub-branches, one of which (cybernetics) has gone nearly dry, the other of which (computerized robotics) has in turn forked into three sub-branches. Remarkably, although robotics would seem to be the most purely down-to-earth engineering approach to A.I., its practitioners spend as much time publishing papers and books as do the connectionists and the computationalists.

Cybernetic Robotics

While Turing was speculating about building his mechanical man, W. Grey Walter68 built what was probably the first autonomous vehicle, the robot “turtles” or “tortoises,” Elsie and Elmer. Following a cybernetic approach rather than a computational one, Walter’s turtles were controlled by a simple electronic circuit with a couple of vacuum tubes.

Although the actions of this machine were trivial and exhibited nothing that even suggested intelligence, Walter has been described as a robotics “pioneer” whose work was “highly successful and inspiring.”69 On the basis of experimentation with a device that, speaking generously, simulated an organism with two neurons, he published two articles in Scientific American70 (one per neuron!), as well as a book.71

Cybernetics was the research program founded by Norbert Wiener,72 and was essentially analog in its approach. In comparison with (digital) computer science, it is moribund if not quite dead. Like so many other approaches to artificial intelligence, the cybernetic approach simply failed to scale up.73

Computerized Robots

The history of computerized robotics closely parallels the history of A.I. in general:

  • Grand theoretical visions, such as Turing’s musings (already discussed) about how his mechanical creature would roam the countryside.
  • Promising early results, such as Shakey, said to be “the first mobile robot to reason about its actions.”74
  • A half-century of stagnation and disappointment.75
  • Unrepentant grand promises for the future.

What a roboticist like Hans Moravec predicts for robots is the stuff of science fiction, as is evident from the title of his book, Robot: Mere Machine to Transcendent Mind.76 For example, in 1997 Moravec asked the question, “When will computer hardware match the human brain?” and answered “in the 2020s.”77 This belief that robots will soon transcend human intelligence is echoed by many others in A.I.78

In the field of computerized robots, there are three major approaches:

  • TOP-DOWN  The approach taken with Shakey and its successors, in which a computationalist computer program controls the robot’s activities.79 Under the covers, the programs take the same approach as good old-fashioned artificial intelligence, except that instead of printing out answers, they cause the robot to do something.
  • OUTSIDE-IN  Consists of creating robots that imitate the superficial behavior of people, such as responding to the presence of people nearby, tracking eye movement, and so on. This is the approach largely taken recently by people working under Rodney A. Brooks.80
  • BOTTOM-UP  Consists of creating robots that have no central control, but relatively simple mechanisms to control parts of their behavior. The notion is that by putting together enough of these simple mechanisms (presumably in the right arrangement), intelligence will “emerge.” Brooks has written extensively in support of this approach.81 (A toy sketch of this idea follows the list.)
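
As a rough illustration of the bottom-up idea, here is a toy sketch loosely in the spirit of Brooks’ subsumption architecture (the sensor names and behaviors are invented, and no claim is made that this is how any actual robot is programmed): each layer is a simple reflex, higher-priority layers suppress lower ones, and there is no central model or planner.

```python
# A toy, subsumption-style controller: no central model or planner,
# just simple behaviors arranged so that higher layers override lower ones.
# Sensor names and behaviors are invented for illustration.

def avoid_obstacle(sensors):
    if sensors["obstacle_ahead"]:
        return "turn_left"
    return None

def seek_light(sensors):
    if sensors["light_to_right"]:
        return "turn_right"
    return None

def wander(_sensors):
    return "go_forward"

# Higher-priority behaviors come first and subsume the ones below them.
BEHAVIORS = [avoid_obstacle, seek_light, wander]

def control_step(sensors):
    for behavior in BEHAVIORS:
        action = behavior(sensors)
        if action is not None:
            return action

print(control_step({"obstacle_ahead": False, "light_to_right": True}))  # turn_right
```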

The claims of roboticists of all camps range from the unintelligible to the unsupportable.

As an example of the unintelligible, consider MIT’s Cog (short for “cognition”). The claim was that Cog displayed the intelligence (and behavior) of, initially, a six-month-old infant. The goal was for Cog to eventually display the intelligence of a two-year-old child.82 A basic concept of intelligence — to the extent that anyone can agree on what the word means — is that (all things being equal) it stays constant throughout life. What changes as a child or animal develops is only the behavior. So, to make this statement at all intelligible, it would have to be translated into something like this: the initial goal is only that Cog will display the behavior of a six-month-old child that people consider indicative of intelligence, and later the behavior of a two-year-old child.

Even as corrected, this notion is also fallacious. Whatever behaviors a two-year-old child happens to display, as that child continues to grow and develop it will eventually display all the behavior of a normal adult, because the two-year-old has an entire human brain. However, even if we manage to create a robot that mimics all the behavior of a two-year-old child, there’s no reason to believe that that same robot will, without any further programming, ten years later display the behavior of a 12-year-old child, or later display the behavior of an adult.

Cog never even displayed the intelligent behavior of a typical six-month-old baby.83 For it to behave like a two-year-old child, of course, it would have to use and understand natural language — thus far an insurmountable barrier for A.I.

The unsupportable claim is sometimes made that some robots have achieved “insect-level intelligence,” or at least that they duplicate the behavior of insects.84 Such claims seem plausible simply because very few people are entomologists, and most are unfamiliar with how complex and sophisticated insect behavior actually is.85 Other experts, however, are not sure that we’ve achieved even that level.86

According to the roboticists and their fans, Moore’s Law will come to the rescue. The implication is that we have the programs and the data all ready to go, and all that’s holding us back is a lack of computing power. After all, as soon as computers got powerful enough, they were able to beat the world’s best human chess player, weren’t they? (Well, no — a great deal of additional programming and chess knowledge was also needed.)

Sad to say, even if we had unlimited computer power and storage, we wouldn’t know what to do with it. The programs aren’t ready to go, because there aren’t any programs.

Even if it were true that current robots or computers had attained insect-level intelligence, this wouldn’t indicate that human-level artificial intelligence is attainable. The number of neurons in an insect brain is about 10,000 and in a human cerebrum about 30,000,000,000. But if you put together 3,000,000 cockroaches (this seems to be the A.I. idea behind “swarms”), you get a large cockroach colony, not human-level intelligence. If you somehow managed to graft together 3,000,000 natural or artificial cockroach brains, the results certainly wouldn’t be anything like a human brain, and it is unlikely that it would be any more “intelligent” than the cockroach colony would be. Other species have brains as large as or larger than humans’, and none of them display human-level intelligence — natural language, conceptualization, or the ability to reason abstractly.87 The notion that human-level intelligence is an “emergent property” of brains (or other systems) of a certain size or complexity is nothing but hopeful speculation.

Conclusions

With admirable can-do spirit, technological optimism, and a belief in inevitability, psychologists, philosophers, programmers, and engineers are sure they shall succeed, just as people once dreamed that heavier-than-air flight would one day be achieved.88 But consider the contrast: 50 years after the Wright brothers’ proof-of-concept flight in 1903, aircraft had been used decisively in two world wars; the helicopter had been invented; several commercial airlines were routinely flying passengers all over the world; the jet airplane had been invented; and the speed of sound had been broken.

After more than 50 years of pursuing human-level artificial intelligence, we have nothing but promises and failures. The quest has become a degenerating research program89 (or actually, an ever-increasing number of competing ones), pursuing an ever-increasing number of irrelevant activities as the original goal recedes ever further into the future — like the mirage it is.

References & Notes

References can be viewed in the archived version of this article.
