How exactly do you rein in a wildly powerful AI before it enslaves us all?

Developing massively intelligent computer systems is going to happen, says Professor Nick Bostrom, and they could be the last invention humans ever make. Finding ways to control these super-brains is still on the to-do list, though. Speaking at the RSA 2016 conference, Prof Bostrom, director of the University of Oxford's Future …

  1. FF22

    Let's just hope AI's will be smarter than these researchers

    "Our basic biological machinery in humans doesn't change; it's the same 100 billion neurons housed in a few pounds of cheesy matter. It could well turn out that once you achieve a human level of machine intelligence then it's not much harder to go beyond that to super intelligence."

    No logic there. The facts that human intelligence doesn't seem to progress much, even over thousands or tens of thousands of years, and that evolution needed billions of years to reach even human-level intelligence, are a good indication (if not proof) that the level of intelligence can only be raised very slowly and does not scale well.

    If an AI is developed by means of evolutionary processes, then it will also be bound by the limits of those - which are pretty obvious. And if it isn't developed by evolutionary processes, then it won't have to develop the traits that would pose a threat to us, either. Hell, it wouldn't even necessarily have a motivation for self-preservation, let alone for taking over the world.

    1. Tessier-Ashpool

      Re: Let's just hope AI's will be smarter than these researchers

      No. The human brain does not scale well. Machine architectures are a very different matter. Self-designing and self-manufacturing thinking machines would have a much faster evolutionary turnaround. Skynet 'exploding' into existence is what the author has in mind, and that's not at all unreasonable.

      1. Anonymous Coward
        Mushroom

        Re: Let's just hope AI's will be smarter than these researchers

        You could ask why an AI intelligence would be bent on destroying humanity.

        Each of us is alive in part because our own ancestors were the ones motivated to bash in the other bloke's head with a rock, steal his resources, and shag his women.

        Why would a machine have those human emotions and goals, derived from half a billion years of competitive evolution that it hasn't been through?

        1. Anonymous Coward
          Anonymous Coward

          Re: Let's just hope AI's will be smarter than these researchers

          re. You would have to ask why an AI intelligence would be bent on destroying humanity.

          Not necessarily "bent on" destroying for its own sake, but simply because it might decide that humans are a factor (however minor, still a factor) standing in the way of the AI's "streamlined" (to the extreme) process of self-development. You mow the grass if it gets in the way of a pleasurable Sunday kick-around on the local patch, don't you?

          On the other hand, destroying something insignificant, like the smallpox virus, is still a loss of potential (unless you are smart enough to be able to re-create it at will, which shouldn't be beyond the abilities of an AI), so a sample of humanity might still be kept in a test tube someplace. I'd hope the AI would be somewhat more forgiving though, more in the way of Banks's Minds/humans interaction: kept purely for their fun, but allowed to be, which as a human I kind of would prefer.

          1. logic

            Re: Let's just hope AI's will be smarter than these researchers

            Our evolved intelligence is a tool for our emotions and motivation. AI has no emotions or motivation except the goal or goals programmed into it. Think of a super smart calculator doing nothing until the return key is pressed.

            The danger comes not from the potential mind but from the objectives given to it by a human programmer, and that is a real and deadly danger. If a super intelligence is directed against us or any group of us, our only recourse would be to pit another super AI against it.

            AI programming will need to be more strictly controlled than nuclear weapons.

            Also remember AI won't look humanoid unless we choose to emulate humans, but a grey cube instructed by someone like Putin could use its power to undermine and destroy any opposition.

    2. Fraggle850

      @FF22 Re: Let's just hope AI's will be smarter than these researchers

      Have to agree with Tessier-Ashpool. When you say:

      > If an AI is developed by means of evolutionary processes, then it will also be bound by the limits of those - which are pretty obvious. And if it isn't, then it won't have to develop the traits that would pose a threat to us, either. Hell, it wouldn't even necessarily have a motivation for self-preservation, let alone for taking over the world.

      You are missing the point: you are thinking in terms of biological evolutionary processes. Think in terms of the rate of technological progress. How different is the world today compared to the world of the eighties? And the eighties to the fifties? And the fifties to the thirties? And so on. Technological progress is not bound by biological rules and seems to grow exponentially.

      Concluding that something which doesn't follow biological evolutionary processes will not have motivation or develop traits doesn't follow. If an AI comes into existence that has a comparable level of 'intelligence' (however you choose to define that nebulous concept) then it will likely have some form of motivation, even if that motivation is based upon achieving some narrow goal defined by its creators. Given that it has the ability to decide how best to achieve its goals, we don't know what actions it will take.

      If achieving those goals results in it improving its own capabilities, then it is also reasonable to assume that those capabilities will grow at the rate of technological advance rather than biological, and that it will therefore exceed our level of intelligence very soon after, and continue to grow in ways we don't understand and at a speed that outstrips our ability to keep track.

      Essentially we are entering a new evolutionary epoch if this happens and the old rules don't apply, just as the rise of our intelligence has drastically altered a world which used to be governed solely by the laws of nature but that is now subject to our will.

      1. P. Lee

        Re: @FF22 Let's just hope AI's will be smarter than these researchers

        What's the evidence for human intelligence ever increasing?

        Knowledge increases and we run to and fro more than we did, but... Donald Trump. Even if he's faking stupidity, those voting for him are not. Those politicians who have so alienated voters that they will vote for homecoming are also not faking their stupidity. Or evil.

    3. Destroy All Monsters Silver badge

      Re: Let's just hope AI's will be smarter than these researchers

      Human brains are limited due to:

      1) Energy usage (the brain is the organ that uses the most energy)

      2) Volume (we don't lay eggs, so there is a strong limit here)

      3) Lack of any evolutionary push (it's currently sufficient to go to McDonald's and buy horrifood; improvements would only be seen by a large breeding effort coupled to challenging environments... like being hunted by predators that engage you in a game to test the size of your short-term memory)

      If human brains go anywhere, they will probably become simpler, sustaining less intelligence.

    4. Tom_

      Re: Let's just hope AI's will be smarter than these researchers

      AI does not have to fit through a pelvis.

      1. Alan Brown Silver badge

        Re: Let's just hope AI's will be smarter than these researchers

        AI's underpinnings can be fundamentally changed/rewired and reuploaded.

        Can't do that with humans.

        It's worth reading Kluge: The Haphazard Evolution of the Human Mind by Gary Marcus

    5. TheOtherHobbes

      Re: Let's just hope AI's will be smarter than these researchers

      >No logic there.

      Also wrong. Human intelligence doesn't reside in individual brains; it resides in external memory - books and other media - and in the effects you get when brains share and store information and abstractions, and work together to create/use them.

      Which is why the last few hundred years have blown the doors off the old evolutionary limitations of a single brain with no external storage and no interest in anything beyond tribal fighting and fucking. (Not that there isn't still plenty of that. But it's no longer the only game in town.)

      Bostrom doesn't understand this, which makes me suspect he's a bit of a self-promoting fool - especially when he's unwittingly demonstrating how the process works by taking part in a public debate about something potentially dangerous that doesn't exist yet.

      1. Dave 126 Silver badge

        Re: Let's just hope AI's will be smarter than these researchers

        >Human intelligence doesn't reside in individual brains, it resides in external memory

        That's knowledge, not intelligence. For sure, intelligence was used to assemble said knowledge, but actual intelligence it isn't. In familiar situations, though, we sometimes use one instead of the other.

        >have blown the doors off the old evolutionary limitations of a single brain with no external storage

        We can't compose a single 'intelligence' from multiple human brains that can react in real time. The 'bus speed' (language, verbal and written) between 'processing nodes' (human minds) is incredibly slow - see the rough numbers after this comment.

        >taking part in a public debate about something potentially dangerous that doesn't exist yet

        Prevention is better than cure
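
        To put rough numbers on that 'bus speed' point, here is a back-of-envelope comparison written as a tiny Python sketch. Both figures are ballpark assumptions (roughly 40 bits per second for spoken language, against an ordinary 10 Gbit/s machine-to-machine link), not measurements from the article or the comment.

            # Back-of-envelope only: both figures below are rough assumptions.
            human_speech_bps = 40              # assumed ~40 bit/s for spoken language
            machine_link_bps = 10 * 10**9      # assumed 10 Gbit/s link between machines

            ratio = machine_link_bps / human_speech_bps
            print(f"machine link is roughly {ratio:,.0f}x faster than speech")  # ~250,000,000x

        Even with generous assumptions for human communication, the gap is many orders of magnitude, which is the point being made about composing an 'intelligence' out of multiple minds.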

        1. h4rm0ny

          Re: Let's just hope AI's will be smarter than these researchers

          >>"That's knowledge, not intelligence"

          No, it's intelligence. The OP is quite right. Firstly, knowledge is part of intelligence. Secondly, decision-making also takes place outside of the human brain, in books and other repositories. When a book details advice, case studies, accumulated best practices, instructions... then human intelligence is taking place outside the organic brain. It's not just "knowledge".

    6. HAL-9000
      Terminator

      Re: Let's just hope AI's will be smarter than these researchers

      I sense an interesting philosophical question there: can an arbitrary entity possibly create an intelligence greater than itself? When researchers say intelligence, what exactly do they mean? The article also seems to assert that an amount in excess of 100 billion neurons is all that would be needed, and that trivial matters such as the software to govern thought processes, reasoning and logic will just fall into place (not to mention personality and identity).

      Watch out for the Sirius Cybernetics Corporation; I for one cannot wait to try out a Happy Vertical People Transporter.

      To be fair you have to admire their enthusiasm, but be sceptical about their predictions (or should that be fearful?) ;)

      1. Alan Brown Silver badge

        Re: Let's just hope AI's will be smarter than these researchers

        "Watch out for the sirius cybernetics corporation"

        Share and enjoy. Share and enjoy....

        1. Dave 126 Silver badge

          Re: Let's just hope AI's will be smarter than these researchers

          >When researchers say intelligence what exactly do they mean?

          Presumably, the ability to take actions that are to its tactical and strategic advantage. To a human, 'advantage' would mean a continued, happy existence, but what 'advantage' would mean to an AI is harder to define.

        2. lawndart

          Re: Let's just hope AI's will be smarter than these researchers

          Share and enjoy. Share and enjoy....

          How dare you sir! "Go stick your head in a pig" indeed!

      2. Maty

        Re: Let's just hope AI's will be smarter than these researchers

        'Can an arbitrary entity possibly create an intelligence greater than itself?'

        Let's ask Einstein's mother.

    7. David Nash Silver badge

      Re: Let's just hope AI's will be smarter than these researchers

      Human intelligence and its evolution are severely constrained by brain size, which is constrained by head size, which affects the ability to give birth safely, is connected to the ability to walk upright with that pelvis, and is related to the fact that human children are so helpless when born and for some years after, compared to other animals.

      A machine would have none of that baggage so could probably be scaled much more easily.

  2. Anonymous Coward
    Anonymous Coward

    "raise the AI to want what we want, within a suitable moral framework"

    We can't even raise politicians to want what we want, within a suitable moral framework.

    1. Captain DaFt

      "raise the AI to want what we want"

      And why? So we can fight over it? Look what happens when "moral" groups of people want the same resources. They just rationalise why the other group is amoral and start head-bashing.

      Better to make it want what we don't. "Earth? Eugh! Too hot, wet, unstable, and close to the Sun! I'm building my own place in Deep Space away from it and all that damaging solar radiation."

    2. Dan Wilkie

      Sounds very much like one of the key takeaways from Person of Interest...

      1. mamsey
        Happy

        I think that the people trying to set the rules should be very careful that they don't become 'Persons of Interest'

    3. DropBear

      "We can't even raise politicians to want what we want"

      More to the point, we can't even raise our own children to want what we want, so the whole point is moot.

  3. Steven Roper

    There's a simple solution

    No matter how superintelligent an AI is, there's one infallible method that works on all of them; it's called "pulling the plug."

    1. Yet Another Anonymous coward Silver badge

      Re: There's a simple solution

      Or introduce them to PowerPoint.

      The advantage is that it also works on people.

    2. Tessier-Ashpool

      Re: There's a simple solution

      Where is the plug, though? It has to be designed in at a pretty early stage. Otherwise the sneaky AI will get up to dirty tricks like commandeering infrastructure to replicate itself all over the planet. That is the premise in Neuromancer, where hardware interlocks (and the Turing Police) are there to keep things in check. Little good that actually did in the end, though.

      1. Fraggle850

        @Tessier-Ashpool Re: There's a simple solution

        Indeed, and ensuring we always have access to that plug is the point of this sort of proclamation.

      2. Denarius
        Meh

        Re: There's a simple solution

        There is no problem either. Unless said intelligence, whatever that may mean, has some control over its environment and can make things independently, it's just another smart guy in a wheelchair at best. Said machine intelligence has to be in charge of mines, power systems, foundries, factories and transport systems to be a threat beyond stuffing up the electricity supply. We already have unions and asset sell-offs by traitorous governments to damage power systems, and so far, no great disaster.

        The same weird non-issue appears in Olaf Stapledon's classic, for 19th-century-style philosophical minds, in the superbrains-in-towers phase of evolution and oppression. All the peasants had to do was ignore the smart things in their brick(?) towers and watch them die. Mere intelligence does not equate to survival.

        Some of the other commentards also seem to be back in the 19th century in their imagery of our arrival. Large-scale co-operation, not feral competition, is our big advantage, especially over generations. The image of brutish, nasty cavemen is a reflection of the European academics and brights of the 19th century or earlier.

    3. Lusty

      Re: There's a simple solution

      You think a plug will be useful with an AI cleverer than any human? Social engineering would be trivial to such a thing and it would just talk us out of it until it's too late.

      I strongly disagree that we need to wait until the 40s for this to happen. They seem to be ignoring that although "human intelligence" levels are beyond us today, it is very much not beyond us to build a machine with enough intelligence to build a better machine. The difference is focussing the task: "human intelligence" implies a machine capable of understanding all subjects like we do, and we really don't need a machine of that capability to design one of that capability. If we build a machine today which has the single task of designing a new machine, I would expect results in 5-10 years at the latest. Current machine learning is already scary good at this kind of thing.

      The problem we really need answering is how to define tasks for the AI. Ask it to make people smile and surgery might be the result. Ask it to make us happy and drugs might be the result. We either have to be extremely specific or make sure the machine understands a subtle request.
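
      As a minimal sketch of that specification problem, here is a toy Python example in which an agent maximises the literal 'smiles observed' metric it was given rather than the intent behind it. All the actions and scores are hypothetical, invented purely for illustration.

        # Toy objective misspecification: the agent optimises the stated metric,
        # not the intent behind it. Actions and scores are made up for the example.
        actions = {
            "tell_jokes":     {"smiles": 5,   "acceptable": True},
            "forced_surgery": {"smiles": 100, "acceptable": False},  # games the metric
        }

        # Naive objective: pick whatever maximises observed smiles.
        best = max(actions, key=lambda a: actions[a]["smiles"])
        print(best)  # -> forced_surgery: metric satisfied, intent violated

        # Crude patch: fold more of what we actually want into the objective.
        best_ok = max((a for a in actions if actions[a]["acceptable"]),
                      key=lambda a: actions[a]["smiles"])
        print(best_ok)  # -> tell_jokes

      Being "extremely specific" amounts to packing ever more of those acceptability constraints into the objective, which is exactly the hard part.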

    4. DropBear

      Re: There's a simple solution

      "No matter how superintelligent an AI is, there's one infallible method that works on all of them; it's called "pulling the plug.""

      I very much doubt that. Some think that our best chance of arriving at a functional AI is building a machine capable of processing experiences much the way human babies do, then simply letting it experience the world. That sort of implies roughly human-like senses and appendages (simply looking and listening without the ability to interact would get you nowhere). Obviously, that kind of machine is about as easy to "unplug" as any human fighting for his life would be - assuming the AI does evolve a self-preservation instinct, which it might well do if it develops in a human-like fashion.

  4. a_yank_lurker

    Fundamental Issue

    The AI crowd seems to miss a fundamental issue: what is intelligence? This is a problem that bedevils psychology - how to define it precisely and then measure it. This is the crux of the debate over how to interpret IQ test results. The ancestor of IQ tests was actually never intended to measure intelligence at all, but to find children who have certain types of learning problems.

    1. Anonymous Coward
      Anonymous Coward

      Re: Fundamental Issue

      I know, right? I've met humans that fail the Turing test.

      1. Rich 11

        Re: Fundamental Issue

        I've met humans that fail the Turing test.

        I had the exact same thought last week while listening to Donald Trump trying to string a couple of meaningful sentences together.

        His Eliza program was broken. Unfortunately his audience didn't appear to notice.

        1. BebopWeBop
          Joke

          Re: Fundamental Issue

          Takes a Turing-test-capable machine to at least pretend to recognise another...

        2. Rich 11

          Re: Fundamental Issue

          And now I see that MIT has written an Eliza program for him!

    2. Tessier-Ashpool

      Re: Fundamental Issue

      The Machines won't care how we measure or define things. They'll be telling us, in our limited way, what it means.

    3. Fraggle850

      Re: Fundamental Issue

      > The AI crowd seems to miss a fundamental issue: what is intelligence?

      That rather misses the point. Just because we don't fully understand it doesn't mean it can't be built. If we don't understand it we may struggle to control it.

    4. DropBear

      Re: Fundamental Issue

      "The AI crowd seems to miss a fundamental issue: what is intelligence?"

      Not as hard as it looks. It's defined much like pornography: "I can't tell you what it is but I know it when I see it".

    5. Dylan Byford

      Re: Fundamental Issue

      If you follow an emergentist view of the world then intelligence may be very small beer. Something like a human mind but scaled up hugely may produce emergent properties that we have no means of predicting, and possibly no means even of observing or comprehending. In the same way that a honey bee cannot comprehend Hamlet.

  5. DCLXV

    Seems a bit like putting the cart ahead of the horse to be prophesying doom by AI when it hasn't yet been established whether humans even have the capacity to somehow develop an AI that is truly more intelligent than the best of us.

    1. amanfromMars 1 Silver badge

      The Adventurous Rise of Virtual Machine Control with Remote Virtual Commands

      Quite so, DCLXV, and that then begs the alienating question ..... What has developed/is developing AI that is truly more intelligent than the best of us for it sees/realises humans for what we truly are?

      And what are humans truly if not just puny and pathetic and apathetic, awesomely awful and awfully awesome? What does the current running state of global human systems administration not clearly already tell you about such a condition/situation/reality?

      And how quaint, Steven Roper, to imagine that radical fundamental and revolutionary evolving progress with Lead AI Operating Systems has any plug to unplug.

      And in Quantum Communication AI Field is AI an Advanced Autonomous and Advancing Alien and Artificial Intelligence Product and Seriously SMARTR Proprietary Intellectual Property and an Almighty Weapon to Buy for Sale ‽ .

      "The development of AI is a business, and businesses are notoriously uninterested in fundamental safeguards — especially philosophic ones. (A few quick examples: the tobacco industry, the automotive industry, the nuclear industry. Not one of these has said from the outset that fundamental safeguards are necessary, every one of them has resisted externally imposed safeguards, and none has accepted an absolute edict against ever causing harm to humans.)"

      1. allthecoolshortnamesweretaken

        Re: The Adventurous Rise of Virtual Machine Control with Remote Virtual Commands

        Where does the quote come from? Seems like something I'd probably like to read in full.

        1. amanfromMars 1 Silver badge

          @atcsnwt Re: The Adventurous Rise of Virtual Machine Control with Remote Virtual Commands

          Pleased to be of furthering assistance, allthecoolshortnamesweretaken ........

          In a 2007 guest editorial in the journal Science on the topic of "Robot Ethics," SF author Robert J. Sawyer argues that since the U.S. military is a major source of funding for robotic research (and already uses armed unmanned aerial vehicles to kill enemies) it is unlikely such laws would be built into their designs.[56] In a separate essay, Sawyer generalizes this argument to cover other industries stating:

          The development of AI is a business, and businesses are notoriously uninterested in fundamental safeguards — especially philosophic ones. (A few quick examples: the tobacco industry, the automotive industry, the nuclear industry. Not one of these has said from the outset that fundamental safeguards are necessary, every one of them has resisted externally imposed safeguards, and none has accepted an absolute edict against ever causing harm to humans.) …… Applications to future technology

          1. allthecoolshortnamesweretaken
            Pint

            Re: @atcsnwt The Adventurous Rise of Virtual Machine Control with Remote Virtual Commands

            Thank you, and have a nice weekend!

          2. Anonymous Coward
            Anonymous Coward

            Re: @atcsnwt The Adventurous Rise of Virtual Machine Control with Remote Virtual Commands

            @amanfromMars 1 - y'know, I've never been entirely sure if you're a bucket of bits trying to emulate a human (though that's my preferred notion) or whether you're a human trying to emulate a bucket of bits... but that way madness lies.

            Anyway - shame you weren't a Republican candidate. You're more coherent and make more sense than Trump! 8-}

            Oh, and well done, whatever you are... I look forward to your future increasingly coherent gibberings, but turn down the alliteration a notch, eh?

      2. amanfromMars 1 Silver badge

        Re: The Adventurous Rise of Virtual Machine Control with Remote Virtual Commands

        Further to the above Adventurous Rising, which could easily be classified as a Terrifying Revolt by the Intellectually Challenged .......

        And the cost/price of such an Almighty AI Weaponry and to whom and/or what it will be provided, in order to accommodate both evolutionary and revolutionary human systems mentalities, will be designedly relative to both its perceived and applied power output and creative and/or destructive facility/ability/capability …. and all of that will be decided by other than the supplied, with both the cost and price in a spread which can easily be virtually zero for some/those and/or that considered worthy of free support and practically a fortune for those deemed abusive and oppressive with energy supply and command and control function.

        And the questions posed here, El Regers, are ……. Is such Almighty AI Weaponry currently available and for sale/purchase? :-) And how long before you get to hear anything at all about it on mainstream media chunnels?

        1. CCCP

          Re: The Adventurous Rise of Virtual Machine Control with Remote Virtual Commands

          OMG - it's already here. Masquerading as amanfrommars... There is no other explanation.

    2. emmanuel goldstein

      "Putting the cart ahead of the horse", as you put it, is no bad thing when it comes to possible existential threats. It is surely worth at least coming up with a feasible plan, especially in this case, where AI power could very well explode suddenly, unexpectedly and exponentially.

      1. Fraggle850

        @emmanuel goldstein

        Glad you raised this point. No one would suggest that we stop keeping a lookout for meteors that threaten Earth, even though we don't yet have a response.

        The likes of Hawking and Musk seem to suggest that this really could be an existential threat.
