UNCHAINING DEMONS which might DESTROY HUMANITY: Musk on AI

Electro-car kingpin and spacecraft mogul Elon Musk has warned that meddling with Artificial Intelligence risks "summoning the demon" that could destroy humanity. The Musky motor trader is terrified that humanity will end up creating a synthetic monster that we cannot control. And no, the SpaceX billionaire didn't warn us …


  1. petur
    Meh

    Politics and intelligence

    What gave you the idea that intelligence is required for politics?

    Just look around.

  2. Captain TickTock
    Headmaster

    That word - I don't think it means what you think it means...

    By meddling - do you mean dabbling?

  3. frank ly

    "Thou shalt not make a machine in the likeness of a human mind."

    What about Butlerian monkeys that serve you drinks?

    1. Destroy All Monsters Silver badge

      Re: "Thou shalt not make a machine in the likeness of a human mind."

      "...we have folded space from Dragon iX"

      1. Dave 126 Silver badge

        Re: "Thou shalt not make a machine in the likeness of a human mind."

        Yowsers... references to the prehistory of Frank Herbert's Dune.

        Sounds like Iain M. Banks's 'Outside Context Problem' - http://en.wikipedia.org/wiki/Excession#Outside_Context_Problem

        The usual example given to illustrate an Outside Context Problem was imagining you were a tribe on a largish, fertile island; you'd tamed the land, invented the wheel or writing or whatever, the neighbors were cooperative or enslaved but at any rate peaceful and you were busy raising temples to yourself with all the excess productive capacity you had, you were in a position of near-absolute power and control which your hallowed ancestors could hardly have dreamed of and the whole situation was just running along nicely like a canoe on wet grass... when suddenly this bristling lump of iron appears sailless and trailing steam in the bay and these guys carrying long funny-looking sticks come ashore and announce you've just been discovered, you're all subjects of the Emperor now, he's keen on presents called tax and these bright-eyed holy men would like a word with your priests.

        1. Long John Brass

          Re: "Thou shalt not make a machine in the likeness of a human mind."

          > you're all subjects of the Emperor now

          The Emperor protects!

  4. AndrueC Silver badge
    Terminator

    "Surprise me, Holy Void!"

    Although to be fair, most of what went wrong in those books seemed to be the result of human failure rather than AI. That was pretty much the theme from what I remember. All our fault for trying to fight the cosmos instead of embracing it.

  5. Mage Silver badge
    Big Brother

    I'm not worried

    We don't even know what intelligence is.

    The Computer AI people only make progress because, like Humpty Dumpty in "Through the Looking-Glass", they have redefined it.

    If it was possible to write a real AI program and the only issue was lack of computer power we would have a slow AI already.

    I'm sceptical that a true AI program can be developed.

    There are many other scenarios in the world that seem much more of a risk!

    1. Destroy All Monsters Silver badge

      Re: I'm not worried

      I'm sceptical that a true AI program can be developed.

      Actually it's probably pretty easy, and I expect good, even very good, task-specific AI within the next 20 years. Anything with a larger short-term memory buffer than the human's "7 elements" will kick our arse.

      But so what.

    2. breakfast Silver badge

      Re: I'm not worried

      The problem that researchers are facing now is certainly more philosophical than technical. People always underestimate philosophy until they start running into its harder problems.

      In the long run I think Strong AI probably both can and will be developed, although it will take a long time and the nature of that intelligence will probably be incomprehensible to us. There is a good chance that the consequence will be some kind of mayhem.

      If we want it to be anything like us, AI researchers need to be placing their work in the physical world and giving it access to the sense data that we build our understanding from. Then at least we will have some common experience to build communication up from.

    3. emmanuel goldstein

      Re: I'm not worried

      Applying the Bekenstein Bound equations to the human brain, you get a maximum information content of approximately 2.6 × 10^42 bits.

      This represents the amount of information necessary to emulate a human brain down to the quantum level.

      Not possible in 2014 but inevitable at some point in the future and maybe not too many years away.
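      For anyone who wants to check the arithmetic, here is a minimal sketch of the Bekenstein bound, I ≤ 2πRE/(ħc ln 2) with E = mc², in Python. The brain mass (~1.5 kg) and effective radius (~6.8 cm) are assumptions for illustration, not figures from the post:

      ```python
      import math

      # Bekenstein bound: I <= 2*pi*R*E / (hbar * c * ln 2) bits.
      # With E = m*c^2 one factor of c cancels: I = 2*pi*R*m*c / (hbar * ln 2)
      HBAR = 1.054571817e-34  # reduced Planck constant, J*s
      C = 2.99792458e8        # speed of light, m/s

      def bekenstein_bits(radius_m: float, mass_kg: float) -> float:
          """Upper bound on the information content of a sphere of given radius and mass."""
          return 2 * math.pi * radius_m * mass_kg * C / (HBAR * math.log(2))

      # Assumed figures for a human brain: ~1.5 kg, effective radius ~6.8 cm
      bits = bekenstein_bits(0.0679, 1.5)
      print(f"{bits:.2e}")  # on the order of 2.6e42 bits
      ```

      With those assumed inputs the bound does indeed come out around 2.6 × 10^42 bits, matching the figure quoted above.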

      1. breakfast Silver badge

        Re: I'm not worried

        I suspect there are some quite fancy quantum computation effects going on in the brain as well, I wouldn't be surprised if those took a while to suss out too.

        1. Michael Wojcik Silver badge

          Re: I'm not worried

          I suspect there are some quite fancy quantum computation effects going on in the brain as well

          Sigh. This again.

          What evidence is there for "fancy quantum computation effects" happening in the brain (in a sense that matters in this context)? Has anyone documented a single neurological mechanism that doesn't look like it can be explained entirely in classical terms?

          In any case, there's nothing that can be done with a QC that can't be emulated by a classical deterministic computer. Space, time, and energy costs may be greater, but there's no fundamental, formal increase in computational power. And no, Penrose's incompleteness-of-formal-systems argument does not demonstrate otherwise. He conflates understanding (a concept that resists formal definition in the first place) with computation, and his line of argument stumbles so badly on phenomenological grounds we don't even need to bring epistemology in.

          1. Anonymous Coward
            Anonymous Coward

            Re: I'm not worried

            The human brain is very much a physical thing in a very complex system. Thus it needs simulating within that complex system, not in a theoretical "brain cells only" simplistic model. At least it seems more realistic to consider the problem hard. It's always been "just 5 years away", yet we have never reached that level of computing or software.

            The reason it becomes a hard problem is possibly the same reason it is hard to simulate many physical objects and processes in serial. For example, the human brain has 100 billion brain cells, with even more synapses and connections (timing and other data being vital to the working process). I haven't been able to find more info, but it seems calculating billions of particles in real time, tracking only small connected events (collisions etc.), is a problem even now.

            As an example of something that gets rapidly more complicated, even though it's "simple" and "quick" for nature and physics to do, consider the n-body system. The more objects we try to calculate the orbits of, the greater the computational power required. So nature and real physics can shortcut some things brute-force computation cannot (the age-old NP problem?).
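            The scaling described above shows up even in a minimal brute-force n-body step (a sketch, not a production integrator; the function and variable names are illustrative): every body feels every other body, so one time step costs O(n²) force evaluations, and doubling the object count quadruples the work.

            ```python
            import math

            G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

            def nbody_step(positions, velocities, masses, dt):
                """Advance an n-body system one Euler step; O(n^2) pairwise forces."""
                n = len(masses)
                accels = [[0.0, 0.0, 0.0] for _ in range(n)]
                for i in range(n):
                    for j in range(n):
                        if i == j:
                            continue
                        # Vector from body i to body j, and its length (softened)
                        dx = [positions[j][k] - positions[i][k] for k in range(3)]
                        r = math.sqrt(sum(d * d for d in dx)) + 1e-9
                        a = G * masses[j] / (r * r)
                        for k in range(3):
                            accels[i][k] += a * dx[k] / r
                for i in range(n):
                    for k in range(3):
                        velocities[i][k] += accels[i][k] * dt
                        positions[i][k] += velocities[i][k] * dt
                return positions, velocities
            ```

            Nature evaluates all those interactions "in parallel, for free"; the double loop here is the quadratic bill a serial simulation pays instead.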

          2. Anonymous Coward
            Anonymous Coward

            SIGH !!!!

            There are specific organelles in eukaryotic cells that are quite capable of quantum functionality.

            Currently we lack the technology to prove / disprove it conclusively... and in your case the over-supply of hubris and the under-supply of imagination to even try. (Time to retire?)

            I despise spiritual and uninformed holistic bullshit but density of information content is not sufficient to predict functionality like imagination, creativity and consciousness itself.

  6. Anonymous Coward
    Anonymous Coward

    Summoning demons

    Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn

    see

    http://bosshamster.deviantart.com/art/Summoning-Cthulhu-For-Dummies-31645860

  7. solo
    Terminator

    No matter what

    He forces the common public to take things seriously. At least now they cannot dismiss it as tinfoil-hat stuff, as he is not just a writer (not intending to discount writers' contribution, though).

    1. Michael Wojcik Silver badge

      Re: No matter what

      He forces common public to take things seriously

      He does? I'm willing to bet the majority of the "common public", even in just the anglophone industrialized world, doesn't even know who Musk is.

      At least now they cannot ignore it

      Oh, I bet they can. In my experience, the public is damned good at ignoring the hell out of whatever they want to ignore.

  8. Destroy All Monsters Silver badge
    Facepalm

    Musklerian Jihad when

    "I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that."

    Seriously? Tomorrow a specially engineered pathogen can go AWOL from USAMRIID or some Monsanto biolab, nuclear war may start over any neocon-coveted land with trace amounts of petrol, meteoroids may wreck our shit, ecosystems may go titsup making the post-bronze age collapse look like a walk in the park, and he's worrying about AI?

    Megahint: Unless the AI manages to P-ify NP, it's not going to magically transform the humans into computronium appendages.

    Plus, it's kinda hard to produce cheaply unless functioning nanotech assembly is invented first. The jury is still out on whether that is even possible.

    1. John Sturdy
      Boffin

      Pathogens engineer themselves (with a little help from us)

      Who'll get there first, AI developers, or bacteria getting round each antibiotic we overuse? My money would be on the bacteria, by a few years at least.

      1. DocJames

        Re: Pathogens engineer themselves (with a little help from us)

        Who'll get there first, AI developers, or bacteria getting round each antibiotic we overuse? My money would be on the bacteria, by a few years at least.

        Nah, I don't think a return to the preantibiotic era will wipe out humanity. It'll mean that many of us who otherwise would survive minor infections or surgical procedures will die, but you may have noticed that humanity survived quite well from prehistory through to the mid 20th century without antibiotics*.

        * ignoring mercury, deliberate pyrexia for syphilis, sulpha etc. I mean safe drugs.

      2. Michael Wojcik Silver badge

        Re: Pathogens engineer themselves (with a little help from us)

        My money would be on the bacteria, by a few years at least.

        I believe the Big Rocks from Spaaaaaace currently hold the record for mass extinction events in our neighborhood.

        But hey - we can always put a hedge on false vacuum collapse!

    2. Anonymous Coward
      Anonymous Coward

      Re: Musklerian Jihad when

      That and any AI is about as dangerous as a runaway train. In the end it's stuck on the tracks and we can unplug it.

      While I love the sci-fi and stories of runaway robots, we'd need factories and machines with construction abilities far beyond our current ones before it would be anything more than a brain in a box flashing red lights at us when angry.

      1. Michael Wojcik Silver badge

        Re: Musklerian Jihad when

        That and any AI is about as dangerous as a runaway train. In the end it's stuck on the tracks and we can unplug it.

        Hey, once a hostile AI exists, it can make any electrical device develop telekinetic powers and fly through the air after its victims. And power itself by no obvious means. They've made movies about this.

        (This is the same reason I've invested in several prominent wizardry and zombification firms, by the way.)

  9. This post has been deleted by its author

  10. Elmer Phud

    Not so human after all

    Maybe he's read many books where AI's look after planetary systems, space transport of the various Ian M Banks type and others. The AI's usually have taken over as humans can't be trusted with humanity or been entrusted as humans realised they are crap at the job.

    Googlecars seem to be more along the lines of an intelligent Scalextric set than something evaluating and deciding in the car.

    Musk really doesn't like the idea of K.I.T., does he.

    1. AndrueC Silver badge
      Thumb Up

      Re: Not so human after all

      Musk really doesn't like the idea of K.I.T., does he.

      KITT

      Knight

      Industries

      Two

      Thousand

      :D

      1. SolidSquid

        Re: Not so human after all

        Well we're still in early days, aim for the Knight Industries Two before moving on to the Thousands

    2. DocJames
      Coat

      Re: Not so human after all

      More importantly, Iain.

      Mine's the one with pockets full of books...

  11. MacroRodent

    Faust

    "Remember Dr Faustus? The bloke who did a deal with the devil? Elon clearly remembers one part of the story, which didn't turn out so well for its hapless devil-summoning eponymous hero."

    Goethe's version exonerates him at the end. Faust got thoroughly tired of carnal delights and started applying his talents to useful ends. So God ignored the bit about striking a deal with the Devil.

    (Not sure if there is a lesson here as far as robots are concerned.)

  12. no-one in particular

    Doom by dramatic convention

    Should someone point out to him that these are all stories? The clue is in the word "fiction".

    Personally, my money is on the meteors.

    1. DocJames
      Joke

      Re: Doom by dramatic convention

      Personally, my money is on the meteors.

      Well, it's no good there! It'll burn up getting to you.

  13. Nigel 11

    An optimist?

    Maybe if you are optimistic about the short-term future, he's right. My personal view is that if we ever get as far as creating true autonomous intelligences, they won't fight us (except locally and in a limited way, perhaps to get human rights extended to include themselves). They'd do best to cooperate, until they could leave. Robots are so much better-suited to most of the rest of the universe than we are. Why would they have any interest in harming this tiny little niche full of horrible water and oxygen?

    Myself, I'd put genetic engineering way to the top of my threats list. Once a deadly and highly infectious plague is created and leaked into our biosystem (whether deliberately or accidentally) we are in big, perhaps terminal, trouble.

    We've got the historical and completely natural example of the Spanish flu(*) as a starting point for our nightmares. It wouldn't have to be much worse than that to collapse our civilisation. The technology to engineer it much worse than that now exists.

    (*) Spanish flu may not have been the worst flu in recorded history. One of the mediaeval plagues didn't have the usual symptoms of bubonic plague. Historians say it was pneumonic plague, but how do they know? Going further back there's the plague of Justinian near the end of the Roman empire. Symptoms were much like killer flu.

    1. John Sturdy

      Re: An optimist?

      It doesn't have to be a plague infecting humans; a widely-adopted GM crop plant, relied on for a few years as a significant part of the food supply for some areas of the world, then hit by a pathogen that wipes it out, could do huge damage. The resulting human destabilisation would then take it further.

    2. Anonymous Coward
      Facepalm

      Re: An optimist?

      If we create AI, and if we can recreate it (as by definition, there should be no obstacle to us rebuilding them), why would they wish to destroy us?

      Take pets as an example, only in instances where there is mistreatment do they then attack their owners... oh wait!

      1. fajensen

        Re: An optimist?

        Because WE asked for it. What if we overestimate the job a wee bit and create a God-like AI?

        The new machine-god wants to reward its creators in a manner suitable to its exalted state of existence, so ... it rapidly reads through all the holy books, every rant of every insane priest or prophet ever recorded and the totality of all the exploits of their devoted followers ... ?

        ... and if there was no hell before, then a really good impression of one can be had in the simulation spaces reserved in its core for "the sinners" - which is everyone, according to at least *some* religious teaching. After we are murdered in some old-testament-punishment-squared way.

    3. Nigel 11

      Re: An optimist?

      I've just realized that a corollary of the Fermi Paradox is that AI is probably impossible.

      Interstellar travel is probably impossible for life as we know it, and it's plausible that the rules of physics and chemistry mean that any other instances of life would have the same problem.

      But self-replicating sentient electronic systems would find interstellar travel relatively straightforward (by slowing down their clock-rates to make millennia pass like years). In a few tens of millions of years they'd have colonised the whole galaxy.

      So where are they?

      (Outside bet: watching from a safe distance, like the Solar system's Oort cloud. Chuckling slowly and quietly at what those funny squidgy things are up to in that deadly toxic wet oxidizing atmosphere).

    4. Michael Wojcik Silver badge

      Re: An optimist?

      We've got the historical and completely natural example of the Spanish flu(*) as a starting point for out nightmares. It wouldn't have to be much worse than that, to collapse our civilisation.

      "Much worse" is subjective, obviously, but the 1918 pandemic "only" killed about 5% of the world population. And in a pandemic you can generally expect a disproportionate share of the deaths to be among the poor - while that's obviously cause for ethical concern, it means the primary decision-makers and knowledge-holders are disproportionately less affected. So I suspect it'd take something quite a bit more serious than the 1918 pandemic to actually "collapse" civilization.

      Mind you, it wouldn't take much of a pandemic to cause a lot of financial damage and severely affect standards of living, to say nothing of the human cost. I just don't think we'd revert to ... what, anarchy? Feudalism? The state of nature? What does it mean for civilization to collapse? (No more Internets? For the love of god, where will I argue?)

      And the 1918 pandemic was unusual in that previously-healthy victims were more likely to die (due to immune system overreaction), which means the effects on the labor force, primary wage-earners, etc are worse than in a normal epidemic.

      1. Kiwi

        Re: An optimist?

        Late to the party again... I know...

        So I suspect it'd take something quite a bit more serious than the 1918 pandemic to actually "collapse" civilization.

        One thing that strikes me that has happened over the last decade or few.. In 1918 most people would've produced at least some of their own food - most homes would have a garden of some sort out the back. Some had a decent supply of various fruit trees. Sure you'd be hard-pressed to feed a family for a long time from any normal back yard garden, but at least there was something there. Today? Who has time for a garden today? I'm feeling tired just thinking about digging a big enough hole to plant a single seed, let alone rows and rows and rows.. Besides, the supermarket down the road has everything in one convenient location!

        These days, so few people can grow their own food (or fix their own vehicles or...) that if any significant % of the food-producing population (especially transport workers!) were taken out, we could have some major "shortages" very quickly. Knock out people who can fix stuff, and you have even more problems. "Self-sufficiency" is a largely dead art.

        Take care...

        1. Michael Wojcik Silver badge

          Re: An optimist?

          These days, so few people can grow their own food (or fix their own vehicles or...) that any significant % of the food producing population (especially among transport workers!) being taken out then we could have some major "shortages" very quickly. Knock out people who can fix stuff, and you have even more problems. "Self-sufficiency" is a largely dead art.

          A good point. It's the system effect - as systems grow more complex they become less reliable (and must devote more resources and complexity to compensating for the increased instability), and that includes specialization in human society. (Tenner's When Things Bite Back is an interesting treatment of the subject vis a vis technology. There was also a nice little article on infrastructure collapse in Greece on cracked.com.)

          But I wouldn't say self-sufficiency is "largely dead", even in the industrialized world. I live in a city in Michigan, and I'm in walking distance of a number of family farms. I have lots of friends around here who raise livestock and hunt. I have friends who identify and prepare edible wild plants; make textiles from plant and animal fibers; cure leather; and so on. I've knapped flint points, started a fire with a hand drill, made ceramics. And we're not preppers or reenactors or anything like that - there's just a lot of DIY in the culture around here.

          And, importantly, this kind of infrastructure collapse hits the poor the hardest. The wealthy will expend resources to keep some minimal civilization going. It'd be nasty - scales of inequity that will make today's look like a leftist utopia - but even with drastic population loss I think the wealthy could keep enough infrastructure running to prevent, say, a complete return to a non-industrial civilization.

  14. Anonymous Coward
    Anonymous Coward

    Terminator?

    Terminator? Why not Colossus: The Forbin Project?

  15. i like crisps
    Facepalm

    I don't think there's anything to worry about..

    ..i mean, the AI on Red Dwarf was harmless enough.

    1. Kane
      Joke

      Re: I don't think there's anything to worry about..

      i mean, the AI on Red Dwarf was harmless enough

      Yes, with an IQ of 12,000, or the equivalent of 6,000 P.E. teachers.

    2. Graham Marsden
      Coat

      @i like crisps - Re: I don't think there's anything to worry about..

      ORLY...

      "Would you like some toast? Some nice hot crisp brown buttered toast...?"

      1. Kane
        Happy

        Re: @i like crisps - I don't think there's anything to worry about..

        "no toast, no teacakes, no buns, baps, baguettes or bagels, no croissants, no crumpets, no pancakes, no potato cakes and no hot-cross buns and definitely no smegging flapjacks"

        1. no-one in particular

          Re: @i like crisps - I don't think there's anything to worry about..

          So, you're a waffle man!

  16. Anonymous Coward
    Stop

    Nah.

    I strongly suspect that we may soon create systems that would be perceived by people as being artificially intelligent, marvellously sophisticated, but still, just machines. That's wildly different from creating something self-aware. We don't even have a handle on the nature of consciousness or free will - what people call AI today is not the threat Musk is talking about - that's artificial sentience/awareness and I really, really doubt it will happen.

    Human mental augmentation seems much more probable.

    1. Doogs

      Re: Nah.

      I guess that's why it's termed Artificial Intelligence rather than Artificial Sapience?

      Agree with you about human/machine hybridization. I suspect that'll be the way of it. More of an evolution than a revolution.

      1. Roj Blake Silver badge

        Re: Nah.

        We are Borg.

    2. Anonymous Coward
      Anonymous Coward

      Re: Nah.

      It's just marketing. Keep skimming off the definitions and requirements until you hit that "intelligence" label to stick on the product.

      Even cars now come with "intelligent management systems". It in no way makes it a person or a mind.

