
Beware the Dystopian Visions of Celebrity Scientists

Dec. 04, 2014 by Mike McRae | Comments (13)
Stephen Hawking’s future: doomed singularity or invading aliens? Image by J. Nathan Matias, via Wikimedia Commons. Used under Creative Commons Attribution-NonCommercial 2.0 Generic license.

Several years ago, eminent British cosmologist Professor Stephen Hawking made headlines by speculating that first contact with sentient aliens probably wouldn’t end in high-fives and tribble-cuddles. “If aliens visit us, the outcome would be much as when Columbus landed in America, which didn’t turn out well for the Native Americans,” he suggested.

As if that rosy idea wasn’t enough, Professor Hawking has now claimed the invention of artificial intelligence (AI) could precipitate the end of humanity. “It would take off on its own, and re-design itself at an ever increasing rate,” he recently told the BBC. “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Intergalactic robots landing on his lawn must surely be nightmare-fuel.

It’s easy to dismiss the famous author’s pessimism as the harmless speculation of a respected intellectual (or, for those inclined, to accept his opinions with alarm). Yet given the challenges involved in engaging the public in the realities of science, the sci-fi musings of a world famous scientist might be less than helpful.

In 2009 I put together a short radio documentary on the technological singularity for Australia’s ABC Radio National show ‘All in the Mind’. If I’m to be honest, I was relatively naïve about what this entailed, picking the topic in order to contrive an interview with one of my favorite science fiction authors. My crash course on the progress of AI left me with slightly contrasting impressions of what the future might hold.

For some, AI was defined as the emergence of agency. At some point, computational technology would produce a chaotic system that displayed some form of intention. With this auspicious event the speed limits of biology could be overcome and vroom—innovation could accelerate. This artificial mind could find ways to improve itself faster than our meat brains ever could, leading to explosive, runaway growth in capability. This new age of exponential progress was christened ‘the singularity’ by mathematician John von Neumann in the mid-20th century, and has since been popularized by writers such as Vernor Vinge and futurists such as Ray Kurzweil.
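
To make that intuition concrete, here is a toy model (a purely illustrative sketch of my own, not anything Hawking or AI researchers have proposed) in which every gain in capability also speeds up the search for the next gain: in continuous form, dc/dt = r·c², whose solution blows up in finite time, the mathematical ‘singularity’ the term alludes to.

    # Toy model of recursive self-improvement (purely illustrative).
    # Assumption: each step's gain scales with the square of current
    # capability, the discrete analogue of dc/dt = r * c**2.
    def recursive_self_improvement(capability=1.0, rate=0.1, steps=10):
        """Return the capability trajectory over a fixed number of steps."""
        trajectory = [capability]
        for _ in range(steps):
            # The gain compounds: more capability buys a bigger next gain.
            capability += rate * capability ** 2
            trajectory.append(capability)
        return trajectory

    for step, value in enumerate(recursive_self_improvement()):
        print(f"step {step:2d}: capability {value:8.2f}")

Each step’s gain is larger than the last, and run for long enough the numbers run away, which is all the word ‘explosive’ really encodes here.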

Yet given that we struggle to even define human intelligence in such a clear fashion—let alone human-like agency—AI has meant something less ambitious to many computer scientists. Discussing the topic with various researchers at the Australian National University, it seemed AI had no one overarching paradigm. It was a mix of concepts that covered everything from modelling the ways nervous systems processed information to making programs that mimicked human behavior.

What’s more, there seemed to be a gulf between the ‘synthetic agency’ of the singularity explosion and the ‘artificial intelligence’ as studied by computer scientists. The former presumed the quasi-random, chaotic property of self-awareness would emerge spontaneously from the right mix of coding. The latter was about creating a complex simulacrum of humanity at best, indistinguishable from most humans in interfacing but fundamentally deterministic in programming. One felt kind of mystical, and sort of dualistic in a René Descartes kind of way. The other was about the hard reality of studying our neurology, sociology and psychology to make ones and zeroes do weird new things.

To the likes of Stephen Hawking, the emergence of synthetic agency reflects something more like a mystical awakening than the product of a computer model. It presumes this entity will acquire its drive and its values not from a sequence of codes, but from some non-defined, nigh metaphysical realm.

Thinking of our mind as a homunculus driving a meat-machine serves us well in day-to-day life, where the hard questions of consciousness are irrelevant. But reconciling such fields of AI would demand a close look at that uncomfortable question. If a program that fulfills our diverse expectations of intelligence and agency were ever to be constructed, it would mean facing the significance of what in our own minds is determined by our neurological wiring and what—if anything—is the result of immaterial intent, or true free will.

But would such a code-based entity value its own ongoing existence and evolution to the detriment of humanity? Of course, if it becomes possible to build such a mind, it could very well be designed to prioritize actions detrimental to humanity at large. However, this is very different to the spontaneous emergence of malevolence.

Like many scientists, those who study AI are often forced to justify their research to the public, whether to secure funding or to have the validity of their findings accepted. Few have the celebrity pull of Stephen Hawking, and many already field questions on tired clichés, from Terminator’s Skynet to a less-than-glorious future within The Matrix. As a scientist, one might expect Stephen Hawking to be sympathetic to the impact of influential non-experts speculating wildly on the alarming consequences of technology.

It is unfair of me to single out Professor Hawking, of course. ‘Expert-creep’—the phenomenon of scientists celebrated within one discipline choosing to make public statements on matters outside of their field of expertise—is encouraged by our love affair with sound bites and symbolic figureheads in the media. Who can blame him for sharing his whimsical thoughts on aliens and synthetic brains?

Yet in earning the trust of the diverse sections of the public, scientists need to work together, role-modelling the respect we all should have for the years of experience many individuals devote to earning the right to an opinion.

Mike McRae

Mike McRae is an Australian science writer and teacher. He has worked for the CSIRO’s education group and developed resources for the Australian government, promoting critical thinking and science education through educational publications. His 2011 book Tribal Science: Brains, Beliefs and Bad Ideas explored how humanity developed its capacity to think scientifically—and pseudoscientifically—about the universe. Read Mike’s other posts on this blog.

13 responses to “Beware the Dystopian Visions of Celebrity Scientists”

  1. Bad Boy Scientist says:

    Regarding Hawking’s concern about attracting the attention of aliens… _I Love Lucy_ broadcasts are already drifting out into space at the speed of light, so our doom is sealed.

    Actually, any artificial signal we send would not likely be understood… at best an alien civilization would recognize it as an artificial signal. Also, we’re already sending such signals to communicate with our spacecraft in the outer solar system. If an alien neighbor happens to be along that line of sight – with suitably sensitive equipment – our presence will be discovered.

    So my response to Hawking on that one is “Whoops!”

  2. Stan Roelker says:

    I don’t mind scientists guessing what the future may hold. I am more concerned when politicians “creep” into fields they really know very little about. Now that is dangerous!

  3. A.L. Zinn says:

    I like your comment about “expert creep”. My first reaction to Dr. Hawking’s pronouncement was shock and dismay. “Holy anthropy! Hawking said THAT?!” But then, he appears to have a witty sense of humor (Big Bang on TV) and maybe was just being ornery. We need more geniuses gone drifting.

    Why on Earth, or anyplace else, would AI machines give a hoot? They have no need for a mandate to reproduce. That is why “natural selection” is the proper term for evolution. What other purpose in the Cosmos could our plastic progeny serve?
    They are here “to serve mankind!”

  4. Bent K. Nielsen says:

    Actually – by denouncing Stephen Hawking’s point of view on an alien invasion, I think that you may undermine your own credibility. I am under the impression that you advocate scientific scepticism. And to me Hawking’s warning is a pretty good example of applied scepticism. Okay – maybe stretched a bit too far and on a speculative extrapolation originating from science. – But in essence: Why should we believe that an advanced hypothetical foreign species by default has to be benevolent and caring in its behaviour towards humans?
    We can hope so, and we can speculate about it, but for obvious reasons we have no facts to back up any such informed view on that matter. So – Both perspectives are pretty equal and pretty speculative scenarios. And therefore – Not science.
    Which does not exclude that we can use scientific methods while treating it as a hypothetical possibility.
    By the way – “free will” is a semantically preposterous construction…

  5. Bryan Schear says:

    I cringed when I read the sound bite from Professor Hawking and was immediately reminded of the suggestion to remain quiet lest our stellar neighbors find us. I wonder if he truly believes that a species capable of such technology would have a culture akin to that of Europeans of the 15th century?

    Likewise, as we advance with our accumulated knowledge we become more aware of the impact we have on other species and are doing more to protect them in their natural habitats. An advanced artificial intelligence would help us in the same manner; however, it’s highly likely that we will be that AI. Humanity will not end; it will continue to evolve.

    As a public science figure he has a responsibility to police himself. AI just may very well be the ONLY way we are all making it out of this mess.

  6. Robin P Meyer says:

    Oh, Mr. McRae: not comfortable at ALL with you on this point: “…the respect we all should have for the years of experience many individuals devote to earning the right to an opinion.”
    Not sure you can hear yourself. I mean yes, an expert is better at the technical details of a field, but Prof. Hawking and anyone else should have the right to voice their opinions concerning the potential Pandora’s Box aspect of AI.
    Even if they sound “wild” or “whimsical” at first hearing, that does not mean his speculations are not worth airing for discussion and refutation if needed. Let the AI experts take the challenge of convincing all of us worry-warts that the science fiction we grew up with was just plain silly, and that there is Absolutely Nothing To Worry About, Folks.

    • Mike McRae says:

      Thanks for the response, Robin. I can indeed hear myself loud and clear. I wonder if you might have provided a similar response if this was instead a story about James Watson, who might also have been entitled to express well-refuted opinions which are outside his field of expertise. Or indeed any number of non-climatologist scientists refuting climate change science, or non-immunologist medical doctors who present unsupported opinions on the ills of vaccination.

      I understand your point that ideally, any individual should be on equal ground to advocate an idea for experts to shoot down. Unfortunately Hawking’s celebrity-scientist status doesn’t put him on equal ground with the average AI expert. If there’s one thing we know in communicating science, it’s the challenge experts face in taking the time to explain to a lay audience why a wrong idea is wrong.

      I’m certainly not advocating an authority who would censor anybody from speaking. But I do think it is important that if society is to respect the value of expertise, scientists themselves should act as good role models.

      • Bad Boy Scientist says:

        I whole-heartedly agree with Mike McRae about scientists needing to be good role models for the rest of us. It is far too easy for statements to the media to be misinterpreted or sensationalized – journalism is all about selling papers (and ad space), after all. Simply putting Hawking’s name on an opinion makes it ‘sensational.’

        The problem is worsened because too few understand that all scientists – even Dr Hawking – are lay-persons outside of their fields of expertise.

  7. BobM says:

    There was a short story about the development of artificial intelligence – by Isaac Asimov – in which they asked the world’s biggest computer, “Is there a God?” The answer was “There is now.” I’ve always thought science fiction writers were better predictors of the future than actual scientists :-).

    • Helena Constantine says:

      You’re referring to Fredric Brown’s 1954 short story “Answer.”

      The story I describe is Campbell’s “The Invaders” (1935).

      • BobM says:

        You are correct, I was getting confused with Asimov’s “The Last Question.” :-). Long time since I read either.

  8. Helena Constantine says:

    John W. Campbell wrote a story along these lines in the 1930s. In the far future the earth was controlled by an AI that kept true to its original function of helping and protecting mankind, but we were like children in a kindergarten compared to it. An alien fleet eventually showed up and attacked. The AI was able to develop through whole generations within a few days, to the point where it could defeat the invaders, but not before they had wiped out the human beings. The AI wasn’t that concerned – it just went on about its business.
