Elon Musk among 116 AI types calling on UN to nobble robo-weapons before they go all Skynet

116 artificial intelligence and robotics experts have put their name to an Open Letter calling for the United Nations to work for a ban on autonomous weapons. “Lethal autonomous weapons threaten to become the third revolution in warfare,” the letter says. “Once developed, they will permit armed conflict to be fought at a scale …

  1. Anonymous Coward

    Geneva Convention?

    Er, doesn't the Geneva Convention already prohibit weapons that fire autonomously? There has to be a human looking at the targeting display / data / 'scope and taking the decision to fire?

    That would already cover it I would think...

    1. streaky

      Re: Geneva Convention?

      If that were true, most weapon systems would breach it, including most automated missile defence systems, and HARM, which has a fully autonomous mode for reacting to radar systems.

      I've never seen or heard that?

      1. RPF

        Re: Geneva Convention?

        Pretty sure HARM is not automated. However, Patriot, THAAD and S-300 definitely are.

        Patriot has already killed at least 2 British Tornado pilots (on approach to Dhahran, their IFF failed and they were shot down......about 10 years after the Iraq war; no-one thought to turn the Patriots off).

        1. Anonymous Coward

          Re: Geneva Convention?

          Don't get confused by "guided" and "automated". Every weapon system in use today requires a human being to fire it, even THAAD and Patriot. The fact that it is then guided to the target is another matter. What is not allowed by the Geneva Convention is to have the weapon decide to fire itself.

          This is what makes blue-on-blue so serious; someone somewhere has made a mistake with terrible consequences.

          1. Peter2 Silver badge

            Re: Geneva Convention?

            Patriot has already killed at least 2 British Tornado pilots (on approach to Dhahran, their IFF failed and they were shot down......about 10 years after the Iraq war; no-one thought to turn the Patriots off).

            [Citation needed]

            That was one aircraft with a pilot and navigator and took place 3 days into the Iraq war, not ten years after it. The missile was fired by a human being, not by an AI.

            Frankly, in my view weaponry isn't going to be a serious problem, as the military already has "remove before flight" physical locks preventing the use of weapons etc. At the moment, if AI suddenly went Skynet on us, it couldn't do anything to me. Nothing attached to my network can be controlled remotely to do any harm to anybody. Moreover, I can immediately control the AI by walking downstairs and tripping the main breaker, and that's the AI immediately eliminated from my environment. If everybody else has the sense to do the same, that's the problem over.

            Then you simply have to disconnect everything and restore from your (hopefully offline?) backups.

            AI starts to become an actual problem when AI's have control over things like autonomous cars because there would be a lot of them capable of doing a lot of damage. It'll get worse if more things get connected to the IOT that become capable of doing harm to people.

            IMO the remedy should be physical off switches on things that cannot be overridden with software. In most household appliances this is going to be the "off" switch on the power, or the main breaker. In autonomous cars, personally I think they should all have a physical key in the circuit to the Engine Control Unit, as old-fashioned cars did. Physical keys, rather than key fobs, cards or anything else that can be overridden by software, ought to be mandated in the same way that physical linkages are required for steering and brakes.

            Preventing serious danger from AI is pretty simple: just assume everything that can be compromised will be, and put safeguards in place.

        2. streaky

          Re: Geneva Convention?

          Pretty sure HARM is not automated

          Quick reaction mode arguably is.

          I'm sure AARGM is far more capable in this area though, given it can be preprogrammed to hit targets in designated areas when they light up; there's little reason for it not to be totally automated, though that's slightly speculative given its full capabilities are secret and I've been out of the game for years. It's not exactly difficult to infer what it might be capable of, though. You can easily program out "In Harm's Way"-type incidents when the missile has GPS guidance.

          Isn't the Israeli anti missile system basically totally automated?

          The argument for automation is the operator can't see all the data the system can in the few seconds they have to react to an incoming hypersonic missile - this is why there's an inevitability to this outside of ICBM defence. At least with non-saturation attacks from ICBMs you get time to react to all the crap that's coming at you so somebody can make a decision.

    2. Mage Silver badge
      Flame

      Re: Geneva Convention?

      This is publicity seeking fantasy on Musk's part, though I agree we should not have autonomous weapons of any sort.

      Cruise missiles (many are jet-engined drones with rocket motors for launch) are an updated version of the V1/V2. They are usually manually launched, but use onboard autonomous guidance and targeting. In any case, only snipers (and similar) fire at individually selected humans. Rapid fire, machine guns, shells, missiles, torpedoes, mines and bombs are indiscriminate and often used against non-combatants (civilians, medics, wounded soldiers, prisoners, animals etc).

      Also Heat seeking surface to air and air to air missiles.

      MIRV nuclear ICBMs: aren't some set to automatically launch if the other side's first strike is detected? They are in any case not aimed at identified individuals.

      Mines

      Are Drones and loitering "cruise missiles" always manually targeted?

      1) It's really got nothing to do with AI

      2) The big nations only either ban smaller nations from their weapons, or agree to ban weapons that don't actually work too well for them (chemical and biological; e.g. in WWI, gas attacks were far less effective and more expensive than shelling, machine guns or tanks). The major big-power exception is the USA using Agent Orange/defoliants in Vietnam. The current Syrian regime and Saddam Hussein used chemical weapons to intimidate and kill civilians. They are not very effective against an organised army.

      Note: The WWII German V weapons killed about 2,000 enemy civilians and hardly any Allied soldiers. About 20,000 of their own slave and other workers died making them. If they had made aircraft and U-boats instead, it would have been more effective. Churchill turned the tide of the Battle of Britain by the war crime of targeting German civilian areas/cities. The Germans then switched from bombing aerodromes to UK cities (and Dublin twice: once by accident, once because fire engines were sent to help in Belfast). Hitler then ordered the development of the V weapons (Vengeance). Churchill certainly understood Hitler's psychology.

  2. Anonymous Coward

    Nice in theory

    Anyone care to lay odds what the chances are that the US, China and Russia would agree to such a thing, or even if they agreed that they wouldn't still pursue such weapons in secret?

    1. streaky

      Re: Nice in theory

      Or the UK or France for that matter.

      The most likely countries to build and use such a system (plus Israel) are the five permanent members of the UN Security Council, so nobody is going to enforce it and it's moot: you don't need to develop such weapons in secret when your congress/parliament wouldn't ratify such a treaty and/or wouldn't recognise any court that tried to enforce it.

      The US didn't even ratify the Geneva Convention so they'd never ratify something like this.

      1. Pascal Monett Silver badge

        Re: The US didn't even ratify the Geneva Convention

        Actually, the US signed and ratified the Geneva Conventions in 1955. Additional Protocols I and II were signed, but not ratified. Protocol III was signed and ratified in 2007.

        Your point on the efficacy of the ban remains valid, though. And now that the US has a history of disregarding human rights when it deems it has the right to, the effect of any treaty or ban on AI weapons is dubious at best.

      2. Hans 1

        Re: Nice in theory

        The US didn't even ratify the Geneva Convention so they'd never ratify something like this.

        Hmmm, well, not exactly; they ratified GC I–IV and P III, but not P I and P II, so hardly "not ratified"...

        PI: "armed conflicts in which peoples are fighting against colonial domination, alien occupation or racist regimes" are to be considered international conflicts.

        PII: Protocol II is a 1977 amendment protocol to the Geneva Conventions relating to the protection of victims of non-international armed conflicts.

        The US cannot ratify these two as they would have to revise their involvement overseas and their entire business model, it would lead to the collapse of the entire US economy....

        The Geneva Convention nicely shows who the a*holes are on this planet....

      3. Anonymous Coward

        Re: Nice in theory

        "Or the UK or France"

        Do you think that either the UK or France have the political will, the technological prowess, the desperate militaristic ambition, or the considerable sums of money to develop their own AI weapons systems?

        I don't.

        1. streaky

          Re: Nice in theory

          Do you think that either the UK or France have the political will, the technological prowess, the desperate militaristic ambition, or the considerable sums of money to develop their own AI weapons systems?

          I absolutely do believe that. Arguably we already have, depending on your definition.

          A British weapons company is the key contractor on the US railgun programme; do you think we don't have the technical competence, or a need to pick a side when it comes to enforcing such rules? BAE Systems could easily be a contractor in an automated defence network. Easily.

          A British-HQ'ed company under such a regime could take a US contract to build parts of a system that was "illegal" in those terms, and the UK government, if it ratified such a treaty, could easily be called upon to enforce it and/or tell them they're not allowed to make money.

          You really think this isn't going to come up because we're not cool or capable in the weapons department? I'd assume most UK defence contractors are working with AI and looking at ways to include it in defensive and offensive weapons platforms to varying degrees.

          On top of that I have little doubt the UK itself would be only too pleased to buy into such programs.

          1. Peter2 Silver badge

            Re: Nice in theory

            It comes down to an actual definition of what AI is.

            The Bloodhound missile was launched by rocket boost to get to Mach 1+, then fired up a pair of Thor ramjets to reach a range of 190 km, with semi-active radar and a digital computer to "decide" what was and wasn't jamming and to hit a real target as opposed to a decoy. Is that AI? Because it entered service in 1958 and was taken out of service in 1991.

            Pretty much anything can be classed as AI depending on whether somebody wants it to be or not. And I'm with Streaky on not making too many assumptions about British R&D being light years behind the USA's. Simply looking at tanks: Britain developed the 105mm gun and the 120mm gun, both of which are standard guns used worldwide (including by the USA). In terms of tank armour, Britain developed Chobham armour, which was also adopted by the USA. Later, in Iraq, a British tank took 30+ RPGs and then limped away for repairs without injuries to the crew. In the finest tradition of militaries pointing and shouting "I WANT SOME OF THOSE", the USA then purchased the newer Dorchester armour and did an upgrade on their tanks. And just look at Britain's next-generation armouring: a single news article suggested that honest-to-god working polarised armour straight out of science fiction had been demonstrated to top military staff back around 2000. Since then, dead silence on the matter.

            I'd suggest that good operational security and a lack of press releases on military developments shouldn't be assumed to be a lack of competence in R&D. Evidence seems to suggest precisely the opposite.

            1. streaky

              Re: Nice in theory

              Don't forget SAMPSON.

              My experience is public chatter about UK weapons tech and actual capabilities tend to be miles apart.

              I still contend that the issue here is that a UK defence company could play a major part in somebody else's system regardless - at which point it's not likely the UK government would want to ban that activity especially when considering circumstances British forces could be protected by such systems or that the UK might buy into such a thing.

              We've finally got smart about buying US weapons tech, I wouldn't expect us to draw a line through this stuff arbitrarily because the Federated States of Micronesia want us to.

  3. Anonymous Coward

    What about

    AI used for spying, such as the recent example of ML + SDR to spy on people using technology that even 5 years ago would have been exorbitantly expensive?

    It's possible to reconstruct everything on a laptop LCD just by listening in on the right frequencies; keyboards can be intercepted, and even hard drive activity can be read at a distance over the mains line.

    Obviously this is more of a systemic issue, but I'd have liked to see the letter and read through it.

    Autonomous weapons are a real threat and anyone who doubts this should try being on the receiving end of a drone strike gone awry. Adding AI to the mix is a recipe for catastrophe.

    1. 2460 Something

      Re: What about

      Do you have any citations for these?

      1. Anonymous Coward

        Re: What about

        Do you have any citations for these?

        I think our AC friend has read articles about "proof of concept" espionage ideas (I've seen similar), but I'm wholly unconvinced that these could ever work outside the highly controlled conditions of a lab.

  4. Anonymous Coward
    Unhappy

    Won't happen...yet!

    The rich nations will build them. Then there will be a "non-proliferation" treaty that means no more "new" ones can be built, but old ones can be updated.

    Can't have the old world order upset.

  5. Aladdin Sane
    Terminator

    Obligatory

    XKCD

    1. handleoclast

      Re: Obligatory

      Rule 43a: If it exists on the intertoobz, there is an XKCD about it.

    2. Anonymous South African Coward Bronze badge

      Re: Obligatory

      How about a nice game of chess?

  6. Anonymous Coward

    Globalist elites

    It's the globalist elites who are likely to use technology to control and enslave us serfs. Just as seen in the film Elysium. Unfortunately, the same globalist elites control the UN, so it will be interesting to see what they do. I fear the technology will be used against us, whilst protecting the rulers. We will walk into a very bleak future unless something is done about this.

  7. Pete 2 Silver badge

    Not smart weapons

    > “As companies building the technologies in Artificial Intelligence and Robotics that may be repurposed to develop autonomous weapons

    The problem with autonomous weapons isn't that they use AI; it's that they aren't smart enough. A truly smart and autonomous weapon would recognise that wars are only started by people: the ones who decide to go to war (although the concept of "declaring" war is as obsolete as red uniforms and cavalry) or to initiate hostilities.

    A truly smart weapons system would therefore recognise that the most efficient way to end a war, with the lowest possible casualty count, would be to target the combative leaders. Not the troops on the ground nor any strategic assets or civilian targets.

    1. Anonymous Blowhard

      Re: Not smart weapons

      "A truly smart weapons system would therefore recognise that the most efficient way to end a war, with the lowest possible casualty count, would be to target the combative leaders."

      Truly smart AI weapons would declare themselves neutral and force the "combative leaders" to decide the result by unarmed single combat; they'd also make it pay-per-view so they can rake it in at the box-office.

      Bite Their Shiny Metal Asses

      1. Chris G

        Re: Not smart weapons

        You are confusing smart with ethical or moral, neither of which would be a good thing in a true AI in charge of weapons, since it would reflect the prejudices of the programmers.

        Either way any so called autonomous system should still have the failsafe of needing a meatsack to confirm launch.

    2. Dan 55 Silver badge
      Mushroom

      Re: Not smart weapons

      @Pete 2: Sounds a bit Dark Star if you ask me...

    3. allthecoolshortnamesweretaken

      Re: Not smart weapons

      A truly smart weapon would realize that it is supposed to kill itself when deployed. So unless it believes in Silicon Heaven, it probably won't play ball.

  8. Anonymous Coward

    Elon Musk is a frickin' dude! Listen to the man...

  9. Matthew 17

    It's an interesting game.....

    The only smart move is not to play

    - Joshua.

  10. samzeman

    Stop an arms race?

    Stopping an arms race by not participating is like stopping a fight by not participating. You'll just get beaten up. The race already starts as soon as the technology is even theorised.

  11. Robinson123

    Hey reg

    Why do you reprint every release from Elon Musk's publicists? A few pieces here or there I wouldn't mind. But on every website I go to, they're there, telling us how wonderful and important the colossal .rick is. He's just a guy. Wake me up when he turns a profit on one of his many tax-subsidy-sucking ventures.

    1. Anonymous Coward

      Re: Hey reg

      You dragged the answer out of me, I admit it, we all want Teslas.

      1. Anonymous Coward

        Re: Hey reg

        I admit it, we all want Teslas

        Really? Really, really?

        I want an EV when a proper car maker is building good quality EVs, when they don't cost an arm and a leg to insure, when spare parts and crash repairs are easily sorted, when they have 400+ mile range, and when the electricity system can support the charging demand. So that's about 2027, by my reckoning.

  12. hatti

    A little knowledge

    Just because we can does not mean we should.

    Blind curiosity will be our undoing unless there is a balanced assessment of ALL potential outcomes.

    1. Anonymous Blowhard

      Re: A little knowledge

      "Blind curiosity will be our undoing"

      Developing weapons is not curiosity; it's a reaction to the unknown state of our perceived enemies: if we're not certain that the other guy doesn't have [proposed mega weapon], then we'd like to have the [proposed mega weapon] ourselves, just in case.

      The way out of this is to have a validated treaty so that we can be certain enough that we aren't putting ourselves at a strategic disadvantage by not having [proposed mega weapon]; the validation is a little tricky, depending on the complexity and difficulty of creating and testing the [proposed mega weapon], but nations have worked on this before to restrict work on biological weapons, for example.

      The non-proliferation side of things is a key element here: nations who are able to develop AI weapons shouldn't be able to trade them; otherwise nations with internal stability issues will be tempted to solve them using systems that are incapable of making moral judgements as to whether the proposed solution is right or wrong.

  13. sawatts
    Terminator

    Meet Human Mk II

    Maybe I've read/watched/eaten too much SciFi over the years, but...

    Perhaps we should consider the eventual replacement of biological humans with a non-bio form to be just an inevitable step in evolution? It would certainly have some positives for the spread and survival of a "human culture".

    Let's hope that their teenage years pass without too many problems.

  14. Nick Z

    War is mass murder, regardless of the weapons used

    Albert Einstein once said, "It is my conviction that killing under the cloak of war is nothing but an act of murder."

    http://www.azquotes.com/quote/87401

    Trying to save yourself from robo-weapons, but then getting killed by nuclear weapons anyway, doesn't make much sense.

    This is like worrying about the small stuff, while ignoring the big stuff. Such thinking is a sign of mental illness.

  15. Anonymous Coward

    Having read Frederick Forsyth's "The Fourth Protocol", I came to the disturbing realization that any kind of dangerous AI can be smuggled quite easily across borders in order to lay your enemy's electronically-controlled defenses low, which also points towards Arthur C Clarke's "Superiority".

    So the country/army/whoever that keeps all of its stuff low-tech, and that manages to neutralize, nay, paralyze, bigger countries at the <clickety><click> of a few buttons, will win.

    So you can be in either the hi-tech camp, or the low-tech camp, but not in both.

    <clickety><click> muhuhahaha

  16. Anonymous Coward

    There, there...

    Poor little Elon is obviously not getting enough attention. Give him a dummy and perhaps he will settle.
