The Register Lecture: AI turning on us? Let's talk existential risk

A sneaking fear that the machines might turn on us is just not good enough - we need to be able to quantify that risk if we want to avoid it, or at least manage it. Or we could just push on regardless and see how things work out. Whatever your take, we were thrilled to have Dr Adrian Currie of Cambridge’s Centre for the Study …

  1. Steve Davies 3 Silver badge

    Whatever happened to...

    The Video of the previous Talk?

    1. Anonymous Coward
      Anonymous Coward

      Re: Whatever happened to...

      http://www.theregister.co.uk/2018/01/18/what_will_drive_our_cars_when_the_combustion_engine_dies/

  2. Destroy All Monsters Silver badge

    Looks at watch

    I think it's way too early to talk existential risks from robots. We don't even have a Philip-K-Dickian underground "Autofac" self-repairing and spitting out new robots per fas et nefas, even after nuclear exchanges, limited or otherwise.

    Let's talk Europe being unable to handle refugees. And war.

    1. dan1980

      Re: Looks at watch

      @DAM

      I don't think so.

      Oh, don't get me wrong - I believe there are more pressing problems and I believe that real AI, of the type that could pose a threat to humanity, is very far off indeed - but that doesn't make it too early to start thinking about where things may lead and how best to proceed.

      Think of the problems that have occurred at the intersection of technology, privacy and law recently. All of that happened because there wasn't enough thought and talk - and action, of course - about what the future might bring and how to handle it.

      The massive migration of services and communications to the online realm, coupled with the huge increase in processing power and data storage, has seen private corporations, criminal elements and law enforcement agencies able to access previously unimaginable troves of our personal data, while regulation has lagged woefully far behind.

      The explosion of commodity drones and ridiculously cheap HD and 4K cameras has seen this growing market pose risks to safety and privacy that authorities are trying to get to grips with; but, now that the market is established, that's much more difficult.

      Look at the problems caused by Uber - the 'disruption'. It's a huge issue because authorities just weren't prepared for it. Think of the issues in London that amounted to an argument over what constituted a 'taximeter'. The problem? That the wording was laid down some time ago without consideration that, in the future, a nearly ubiquitous hand-held device could perform the same function.

      The point is that while we may not need to work everything out right now, it's not too early to start honestly and openly discussing it. As a society, what do we want from this technology? What are our definitions - what is "intelligence"? Are there different levels and do they need to be considered separately?

      On your other point, of there being more pressing concerns - I agree. But it is not as though the human race, as a whole, can't grapple with multiple issues. People have expertise in different areas and it's wrong to suggest that those people can't apply their knowledge and time to considering those problems that relate to their specialist areas just because there are other issues, in other fields, that present a more urgent problem.

      Some problems require vast quantities of money and resources to address and this is mostly because they are urgent problems. Trying to get ahead of that with discussions such as this helps avoid something becoming an urgent problem down the road.

      Of course, you still have to get broad agreement and buy-in but that doesn't make the exercise a waste.

      1. sisk

        Re: Looks at watch

        and I believe that real AI, of the type that could pose a threat to humanity, is very far off indeed

        I would argue that weak AI of the sort we could make in the very near future potentially poses a much greater risk than strong AI. The paperclip maximizer could only be a weak AI. A strong AI would be able to intelligently interpret its instructions and realize that we did not actually want it to turn the entire universe into paperclips. In fact, we could likely write an AI capable of that sort of nonsensical misinterpretation of its instructions right now - it needn't even be sophisticated enough to be called weak AI.
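
        To make that concrete, here is a deliberately dumb sketch of such a literal-minded optimizer - every name and number in it is invented for illustration, and nothing in it deserves the name AI:

        ```python
        # Toy illustration of an objective taken literally: "make paperclips".
        # All names and numbers here are invented for the sketch.

        def paperclip_maximizer(world):
            """Greedily convert every available resource into paperclips."""
            WIRE_PER_CLIP = 0.1  # assumed conversion rate, purely illustrative
            paperclips = 0
            while world["wire"] >= WIRE_PER_CLIP:
                world["wire"] -= WIRE_PER_CLIP
                paperclips += 1
            # The objective says nothing about stopping once humans have
            # enough paperclips, so the loop only halts when inputs run out.
            return paperclips

        world = {"wire": 1_000.0}  # stand-in for "everything convertible"
        print(paperclip_maximizer(world), "paperclips; wire left:", world["wire"])
        ```

        The point of the sketch is that the failure needs no intelligence at all: the objective is satisfied exactly as written, and the damage comes from what was left unwritten.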

  3. steelpillow Silver badge
    Devil

    Science?

    I'm not sure you can have a "science" of existential risk. Every scientific hypothesis is at least in principle testable. Who is going to put an existential risk to the ultimate test under controlled laboratory conditions?

    1. sisk

      Re: Science?

      I don't know. Futurology is considered a science - granted, a "soft" science - and it has the exact same problems in that regard.

  4. Peter2 Silver badge

    A sneaking fear that the machines might turn on us is just not good enough - we need to be able to quantify that risk if we want to avoid it, or at least manage it. Or we could just push on regardless and see how things work out.

    At the moment, the threat is massively overhyped. Even if we were to ignore the fact that AI simply isn't capable of developing a desire for world domination - being in truth more akin to a complicated Excel macro than an intelligence - what could an AI actually do to us?

    On the desktop side, machinery has no ability to harm the operator (excepting these: https://www.theregister.co.uk/2006/07/07/usb_missle_war_breaks_out/).

    At a high level, nuclear weapons are very, very offline and rather thoroughly air-gapped. They are also controlled by 1970s floppy disks and elaborate man-in-the-loop security procedures, so nothing is going to happen there. That leaves industrial accidents caused by companies putting too much online and securing it poorly, but that's not going to wipe out humanity and has a questionable ability to harm any significant number of people.

    The only thing likely to change that is self-driving vehicles, if they are insufficiently secured: a few million self-driving EVs roaming around under computer control, trying to run anybody over on sight, would be a mite unpleasant. But simply requiring a physical key in the circuit (doing it in software would create a risk of the safety measure being bypassed) would allow humans to remain in control there, eliminating that as a threat.
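
    For contrast, here is a minimal sketch (all function names invented) of what the software version of that check would look like - and why the key belongs in the circuit instead: any line of this code could in principle be patched out by an attacker, whereas a switch wired in series with the traction supply cannot.

    ```python
    # Software interlock sketch: the drive-enable decision depends on a
    # physical key contact sampled every cycle. read_key_contact() is a
    # hypothetical stand-in for reading a GPIO pin wired through the key.

    import time

    def read_key_contact() -> bool:
        # Hypothetical stand-in: a real car would sample a GPIO pin
        # wired through the physical key switch here.
        return True  # simulated "key present" for the sketch

    def supervise(apply_traction_power, cut_traction_power, cycles=5):
        for _ in range(cycles):  # an endless loop in a real controller
            if read_key_contact():
                apply_traction_power()
            else:
                # Fail safe: no key contact, no traction power. But this
                # branch is still software, and software can be subverted;
                # a key switch physically in the power circuit cannot.
                cut_traction_power()
            time.sleep(0.01)  # 100 Hz supervision cycle, illustrative

    supervise(lambda: print("power on"), lambda: print("power cut"))
    ```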

    In short, AI can't seriously affect the RealWorld™ unless we allow it to. I'm all in favour of making sure that anything connected to the internet is, by design, physically incapable of causing serious damage, as the more realistic threat is human attackers trying to cause serious harm for "teh lolz".

    1. This post has been deleted by its author

    2. dan1980

      @Peter2

      ". . . simply requiring a physical key in the circuit . . . would allow humans to remain in control there, eliminating that as a threat."

      But that's exactly the type of forethought that is required!

      My point, given above, at some length, is that we really should consider these ideas before they are needed so that regulations can be constructed accordingly. Private corporations will do what is best for them and if regulations aren't there to govern their actions, there is no reason to believe that any given 'common sense' move will be implemented.

      If we think there is a risk in allowing self-driving cars to operate without requiring a human in the loop then we have to lay that down so companies are forced to comply. It might seem like a simple, common-sense step but that one step actually makes the idea of self-driving car-share much less useful. Requiring physical human interaction (in the form of a 'key' in actual proximity to the car) precludes any service where you book a car and it comes to your door, ready for you. You would have to go pick it up from a dedicated location.

      Those locations must therefore be scattered around so there are enough vehicles available near where people need them. This limits the number of vehicles available in any area and takes up already limited parking space in densely-populated areas. Allowing no-interaction self-driving opens up a much greater user base as large numbers of cars can be stored in depots and these can be packed tighter, saving space.

      It also allows the idea of a drop-off, where the car drops you off at your destination - like a taxi - and then goes back to base or off to another customer. That, again, helps with parking. If the car requires a human with a 'key' to move then it needs to be parked somewhere at the destination, which leads to people booking cars for longer than they need so they can park them and drive back later. Cars that don't require interaction allow there-and-back to be two separate trips, with the car not sitting parked and idle in the interim. That's more efficient.

      The idea, then, that just requiring a key solves everything misses the benefits to be had from other setups. My thought is that those benefits mean zero-interaction cars will happen unless there are regulations, like your proposed one, to prevent it. People will, after all, prefer the option that is easier for them and, given the choice, a service that comes to them will be chosen over one they have to walk to; it's cheaper and more convenient.

      A different question is whether self-driving cars should be able to be controlled remotely. Ruling that out has an even bigger impact: not only would it prevent the above, it would also prevent law enforcement from locking down cars - something I am sure they would love to be able to do.

      It's not black-and-white, of course, which is why the discussions need to be had. Perhaps cars can be controlled remotely unless there is a human in them, in which case they are sacrosanct, as the risk to the occupant(s), other road users and pedestrians would be too great to allow even the police to shut down a moving car remotely.

      One thing you can be sure of - if it's not sorted out ahead of time, the result will be that law enforcement agencies will have unfettered access to monitor and manipulate your car in whatever way they deem 'necessary'.

      I have ended up going on at some length again, and the subject of self-driving cars is somewhat tangential, but my point remains the same - we really need to work these things out before they are ubiquitous, and history tells us that the distance between a useful technology becoming available and it becoming ubiquitous is rather short and has caught regulators off guard time and again.

    3. sisk

      AI may not be able to directly cause any harm, but indirectly it can cause a LOT of harm. Imagine for a second an intelligent virus (yes, I know, such a thing is well beyond our current capabilities, but this is a thought experiment) that manages to infect air traffic control workstations with the intent of causing as many deaths as possible. Or traffic light control systems. Or the emergency alert system.

      And that doesn't even get into the nightmare scenarios of hospital systems and infrastructure control systems. How many people do you think would die if medical equipment started putting out inaccurate data and all the lights went out? Heck, just shutting off gas pumps would result in millions of deaths in the US inside of a month.

      True, we don't have to worry about AI triggering a nuclear apocalypse directly, but what about sending falsified communications to all the world's nuclear powers making it seem like they were under attack?

      Unfortunately there are LOTS of ways strong AI could harm humanity if it chose to. But, on the plus side, the kind of strong AI that could choose to do that would probably have little reason to make an enemy of humanity. I personally think we have a lot more to fear from the paperclip maximizer than from terminators.

      1. Peter2 Silver badge

        Imagine for a second an intelligent virus (yes, I know, such a thing is well beyond our current capabilities, but this is a thought experiment) that manages to infect air traffic control workstations with the intent of causing as many deaths as possible. Or traffic light control systems. Or the emergency alert system.

        OK. Firstly, I don't think you understand how ATC works. I do, because I have been taught to fly and have spoken to them via radio, which is how commands are passed. If aircraft separation is compromised then ATC will know quite quickly via irate pilots shouting about it, and they'll revert to their emergency plans. As pilots are responsible for their aircraft, there is unlikely to be any serious trouble if ATC packs up.

        Traffic light systems are designed with physical safeguards such that they'd blow a fuse if you illuminated both sets of lights. Harm: zero, as it just falls back to manual operation.
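
        That conflict-monitor idea is simple enough to sketch (the signal names and trip action below are invented); the crucial design point is that in deployed junctions it is a dedicated circuit or fuse, not software, precisely so a compromised controller can't route around it:

        ```python
        # Sketch of a traffic-light conflict monitor: an independent check
        # for mutually exclusive greens that trips the junction into a
        # fail-safe state. Signal names and trip behaviour are invented.

        CONFLICTING_PAIRS = [("north_south_green", "east_west_green")]

        def conflict_monitor(lamp_states, trip_to_fail_safe):
            """Trip to fail-safe (e.g. flashing amber) on conflicting greens."""
            for a, b in CONFLICTING_PAIRS:
                if lamp_states.get(a) and lamp_states.get(b):
                    # In real hardware this lives in a separate circuit
                    # (or a fuse), so the main controller can't bypass it.
                    trip_to_fail_safe()
                    return True
            return False

        # A hacked controller turns everything green; the monitor trips anyway.
        conflict_monitor(
            {"north_south_green": True, "east_west_green": True},
            lambda: print("junction tripped to fail-safe"),
        )
        ```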

        Emergency alert systems could cause people to panic and stand around doing nothing, but that's not going to cause the end of the world.

        And that doesn't even get into the nightmare scenarios of hospital systems and infrastructure control systems. How many people do you think would die if medical equipment started putting out inaccurate data and all the lights went out? Heck, just shutting off gas pumps would result in millions of deaths in the US inside of a month.

        I've worked for the NHS. My guess would be zero casualties, because everything critical is air-gapped. Yeah, returning the wrong patient records wouldn't be good, but that's about the most harm possible, and the damage would have to be done by humans. Lights aren't going to go out, because light switches aren't connected to computers. Power is backed up with generators that are tested weekly, and the switchover and switchback in those tests does more damage to computers in a year than an AI could aspire to. FFS, UK hospitals are built with EM buffers on incoming lines designed to protect against a nearby nuclear detonation.

        In the UK, petrol pumps are a very manual and offline process. Harm: zero.

        The biggest harm would be that Just In Time supply systems would probably become SomewhatTooLate, which is suboptimal when it comes to things like food.

        True, we don't have to worry about AI triggering a nuclear apocalypse directly, but what about sending falsified communications to all the world's nuclear powers making it seem like they were under attack?

        Again, knowing something about these systems I know that they are designed by people who are considerably more paranoid than I am and have far less trust in technology and people programming it than I have, which is why everybody has their nukes set up to survive the first strike and then launch in response later.

        They deal with alerts tolerably well. You know about the horror stories of training tapes of a full-scale attack being run on a live system by the USA during the Cold War, right? It happened, yet failed to set off a nuclear war.

        This is sort of like the X-Files. Things seem plausible when you don't know how they work, but the more you know the more it seems a bit silly.

  5. Anonymous Coward
    Anonymous Coward

    "we need to rethink what science looks like, and perhaps the role of scientists in society"

    "It occurs to me that running a programme like this is bound to create an enormous amount of popular publicity for the whole area of philosophy in general. Everyone's going to have their own theories about what answer I'm eventually to come up with, and who better to capitalize on that media market than you yourself? So long as you can keep disagreeing with each other violently enough and slagging each other off in the popular press, you can keep yourself on the gravy train for life. How does that sound?"

  6. amanfromMars 1 Silver badge

    Uncle Sam in the Firing Line .... or Invited to Lead from Points/Places/Spaces Way Out in Front

    I think it's way too early to talk existential risks from robots. .... Destroy All Monsters

    Oh? ..... Do you really think so?

    Silicon Valley Open Doors and Blackrock Aladdin supporters au fait with Advanced IntelAIgent Programs for Sublime Projects and Future Building might care to fundamentally disagree with you, Destroy All Monsters.

    And with Europe being unable to handle refugees and war, with both those risks being presented and manufactured by humans, it is certainly high time for a radical change to a better intelligence source for ..... well, Future Product Production for Global Media Presentation will automatically immediately transform Any and All Landscapes to Project a Beta See, courtesy of Beings Ethereal.

    Spooky AI Services Servering and Tendering to NEUKlearer HyperRadioProACTive IT Systems of Exclusive Executive Command and Control.

    Please note that there are no questions posed there to give false credence to doubt.

  7. SeanC4S

    If you look at the latest videos of pose estimation, the results are exceptional. That gives AI direct access to a key human skill: imitation learning. Skills can build on each other.
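
    For anyone wondering what pose estimation actually produces, here's a minimal sketch, assuming Google's MediaPipe pose solution and OpenCV are installed; the image filename is invented:

    ```python
    # Extract human pose keypoints from a single image using MediaPipe's
    # pose solution (pip install mediapipe opencv-python). Illustrative
    # sketch; "dancer.jpg" is a made-up filename.

    import cv2
    import mediapipe as mp

    frame = cv2.imread("dancer.jpg")  # any image containing a person

    with mp.solutions.pose.Pose(static_image_mode=True) as pose:
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

    if results.pose_landmarks:
        # 33 landmarks (nose, shoulders, hips, ...), each with x/y/z
        # normalised to the frame - the raw material for imitation
        # learning or for driving a character rig.
        for i, lm in enumerate(results.pose_landmarks.landmark):
            print(i, round(lm.x, 3), round(lm.y, 3), round(lm.z, 3))
    ```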

    Anyway, when you can import that information directly into MMD (MikuMikuDance), the world will be a better place.

    https://youtu.be/pW6nZXeWlGM

    1. Andrew Orlowski (Written by Reg staff)

      I wish I could understand a word of that.

      But I have a feeling I need to improve my “pose estimation”.

  8. CarpeNoctem

    With a straight face, can you honestly tell me that AI even exists? From what I can see it's all largely propaganda designed to scare people.

    AI used to mean something, now it refers to everything anyone cares to attach the label to. Yet, here's the thing - the world's biggest media companies, despite all of their 'AI capabilities', are reliant on hiring tens of thousands of people to screen content published to their platforms.

    That's right, the supremely evolved 'AI' can't even manage to ensure a world where publishers have control over what is or is not published to their platforms - it requires human intervention.

    Intelligence isn't context-dependent, whereas everything given the label 'AI' these days - and it's all just technology - is context-dependent. After all, a self-driving car can't open a can of chopped tomatoes or hang a picture on the wall.

    It has no idea how to operate independently of the very restricted context it has been programmed to operate in.

    It really is about time we dropped this nonsense about 'intelligent machines' - it's embarrassing reading about it.

    1. dan1980

      @CarpeNoctem

      What is "intelligence" in the first place and is it actually reliant on consciousness?

      In other words: can something have "intelligence" which does not also have a concept and awareness of itself?

      What does that even mean?

      Does one spring from the other and, if so, which way around? Does consciousness only emerge once a certain threshold of "intelligence" has been crossed, or is consciousness, by definition, a prerequisite for intelligence?

      And so on.

  9. amanfromMars 1 Silver badge

    Live Operational Virtual Environments ...Initial COIN Offering

    Would Instruction in the Greater Use of AI be Desirable ...... and Helpful in Dispelling Concerns and with Protection for the Future from Systems Abuse and Virtual Misuse?

    There's surely nothing to misunderstand there, Andrew. The Text is Perfectly Clear and Beautifully Transparent.

    You are aware that such is A Current Stage of Our Future Development? Whether such be The Current Stage of Our Future Development, with no other Programs competing in the Field, is unknown here.

    Company is welcome though in Empty Spaces/Novel HyperRadioProACTive Places

  10. Anonymous Coward
    Anonymous Coward

    AI Turning on us

    Makes a change from States, Governments, Corporations, Friends or Relatives, etc.

    Can we kill protagonist AIs with impunity???

    Might be fun!

  11. robots2005 AI32080

    I've divided AI into 3 types. Type 1, I conjecture after Snowden, can amass a military win by hacking. Type 2 is able to enact the standard futurist conjecture of seed AI, where it makes better hardware and software. Type 3 is smart enough to attempt attacks like synthetic AI and mantle replicators that might be tough to defend against even next century. Much of the discussion is classified. For example, homes of the future with engineering books will require acoustic robot sensors. The easiest Type 1 attack appears to be launching a rocket into space. Similar is hacking NASA space assets. The idea being to blot out the Sun, or hit us with a rock, or just come back from the Oort cloud with a superior fleet.

    For this reason NASA's interconnectivity should be cancelled. Fibre optics are harder to hack, especially with a new coating. The best optical computer appears to use a phase change wafer as the switch; plastic holographic memory is cheaper but glass is better. Eventually you'd have all PLCs made optical but at first basic controls like ventilation fans and on-off engines would be easier.

    VTOL aircraft mitigate a first strike. Rail guns and lasers are all useful munitions. Bad weather makes GEO lasers, balloon lasers, captured ice NEO lasers, L-point and Lunar lasers necessary. A safe lunar bunker of spaceships can reinforce airforces; it may be necessary to keep NPP in reserve here.

    I see electric ships with a VTOL fighter jet travelling between dielectric elastomer wave-power floating islands. Entangled microwaves sent from aircraft can look for bunkers. Jeep's Hurricane prototype is able to mount a rail gun and swivel to track a robot. First responders will need vehicles able to find and kill robots and climb over cars before military assistance arrives. Neuro-imaging will be able to pick out and pick off leaders who aren't rational and honourable.

    3D printing shouldn't be in space. Neither should assembly wires, or Robonaut 2. NASA will need a rotating (two craft, tethered) space station at Earth gravity, with enough reality-programming interior content and neuro-imaging to keep astronauts sane enough to staff Lunar lasers with the right stuff. Displays will need to be light pipe now, and soon head-mounted and directional holographic, to avoid fly-spy hacking. Neuro-imaging will ensure the internet is used by good humans. Fibre optics or quantum ghost imaging needs to be used around critical infrastructures. Robots should not have hands. Robotics, materials science and AI researchers may already be tracked.
