When clever code kills, who pays and who does the time? A Brit expert explains to El Reg

On September 26, 1983, Stanislav Petrov, an officer in the Soviet Union's Air Defense Forces, heard an alarm and saw that the warning system he'd been assigned to monitor showed the US had launched five nuclear missiles. Suspecting an error with the system's sensors, he waited instead of alerting his superiors, who probably …

  1. This post has been deleted by its author

    1. Nick Kew

      Re: Accountability is important.

      Indeed. The article talks of "the programmers". Given that the word programmer commonly applies to some of the most junior folks in the $bigco, and that they may have little freedom to Get It Right, I can see two interpretations that work:

      (1) They mean holding the developer corporately liable.

      (2) They're already anticipating an exercise scapegoating the innocent.

      Why not talk about the interesting questions, like responsibility for software components (libraries, etc), and the distinction between proprietary and open-source?

      1. Primus Secundus Tertius

        Re: Accountability is important.

        @Kew

        Your point about libraries is important. It is easy to blame the coder for faults in top-level logic. But even that can be unjust. I have seen design reviews where top-level faults were nodded through because, for all its faults, the document was there, the milestone nominally met, and management did not want to "delay" the project.

        The only answer is to insist that corporations are liable. However, it will still take a few court cases where companies are punished before the beancounters accept that logic on a large scale must be done properly.

        But who is going to check the libraries used by specific apps, and the operating systems under which those apps run?

      2. Sir Runcible Spoon
        Paris Hilton

        Re: Accountability is important.

        How is this AI situation any different from other 'created' devices?

        If a car manufacturer sells you a car that has a serious defect that causes you to crash and die, then the manufacturer is at fault (whether they knew about the fault and still sold the car is for the criminal courts to decide).

        If, on the other hand, you tinkered with the car after you bought it and the fault developed as a result, then the manufacturer is in the clear.

        How are AI-developed machines any different?

    2. Anonymous Coward
      Anonymous Coward

      Re: Accountability is important.

      The vendor should *always* be responsible. Full Stop.

      Any code that they have provided, be it labelled "AI" or whatever, remains their responsibility.

      1. This post has been deleted by its author

        1. Paul Crawford Silver badge

          Re: @ Oliver Jones

          That is an interesting but also seriously flawed argument:

          1) While parents are not held responsible for their children, companies are held responsible for the actions of their employees in the course of work (which is closer to the vendor/software model)

          2) When they are adults (and to some extent before then), children become liable for their own actions and can be punished by the courts. Unless AI has some concept of regret or self-preservation, that option is not available.

          Of course threatening to reprogram its data banks with an axe might just work...

          1. This post has been deleted by its author

        2. Doctor Syntax Silver badge

          Re: Accountability is important.

          "Just as self-aware AI also learns."

          The child learns, becomes an adult and is then liable for punishment at law for its adult errors. How do you propose to fine or imprison an AI entity?

          1. Rich 11

            Re: Accountability is important.

            The child learns, becomes an adult and is then liable for punishment at law for its adult errors. How do you propose to fine or imprison an AI entity?

            Cut off its Internet connection and send it to bed without any electricity.

    3. Christoph

      Re: Accountability is important.

      What of the client who neglects to mention a vital function in the specification? Or the manager who orders the programmer to get on with the main code and not spend a lot of time on that rare possibility? Are they liable?

      1. Doctor Syntax Silver badge

        Re: Accountability is important.

        " Are they liable?"

        Yes.

      2. Anonymous Coward
        Anonymous Coward

        Re: Accountability is important.

        Very good questions indeed.

        Broadly speaking, I suggest that anyone who takes the decision to replace a "manual" (human-operated) process with a fully automated process must be responsible for any adverse consequences.

        But how the responsibility is allocated - that's a very tricky set of questions.

        There should certainly be some kind of precautionary principle: "If in doubt, don't".

    4. Anonymous Coward
      Anonymous Coward

      Re: Accountability is important.

      Do we really think business, and insurance in particular, are going to let the chance to make some money pass?

      I also don't think a business is going to release something where they are responsible; why would you do it? It's better to get government to legislate you out of the problem.

      I agree it should be the vendor, they made it and I have no control over it.

    5. Doctor Syntax Silver badge

      Re: Accountability is important.

      "Only when AI has shown itself to be self-aware and competent to at least the level of a human equivalent, should AI be considered responsible."

      Underlying criminal law is the notion of punishment; it's what happens on a conviction for breaching the law. AI should only be considered responsible if the concept of punishing it is meaningful. Until then it's whoever is responsible for deploying it who is responsible. Not programming it, deploying it. The programmer may have been working under constraints that prevented proper testing, have been overridden by management or been given a task beyond their capabilities. The buck has to stop with whoever decided that the system was fit to be deployed. It's their responsibility to exercise due diligence in making that decision and their liability if it fails. Where the product in which it is embedded is a consumer product, that decision lies with the vendor: is the product fit to be marketed to the general public?

      And, given Kingston's sensible criterion, this applies to any S/W product, not just those which have been given an AI marketing gloss.

    6. Destroy All Monsters Silver badge

      Re: Accountability is important.

      If there is a patent, it should DEFINITELY be the patent-holder.

      1. Doctor Syntax Silver badge

        Re: Accountability is important.

        "If there is a patent, it should DEFINITELY be the patent-holder."

        You're assuming a single patent-holder. If there are multiple patents from multiple patent-holders the plaintiffs will die from old age waiting for it to be resolved. The lawyers will do very well from it, however.

        There has to be a single, easily identified entity carrying full responsibility.

  2. Anonymous Coward
    Anonymous Coward

    What about cases where a malicious actor alters the AI? How would you prove it? I also don't understand how you are going to prove negligence in something which can be very ambiguous and difficult to unravel. What if the failure was not caused by the programming but by the initial data set used to teach the system?

    I don't think we will have any answers to this until something does go wrong but the discussions still need to be had.

    1. This post has been deleted by its author

  3. John H Woods Silver badge

    *A* Brit Expert

    With all due respect to Dr Kingston and other experts in the field of artificial intelligence, it seems to me that perhaps it wouldn't hurt to co-author some of these papers with experts in law (e.g. see comments above arguing whether or not the vendor is always responsible) and philosophy.

    I don't know about now but I always felt, when I was involved in biology, that some of my peers made the same mistake: doing detailed research into ethics and law surrounding emerging biological science whilst somehow managing to forget the fact that their very own institutions had whole departments devoted to the study of these subjects.

    tl;dr: interdisciplinary research ideally involves collaboration of people from different disciplines.

    1. TRT Silver badge

      Re: *A* Brit Expert

      Dr Kingston of the institution formerly known as Brighton Poly.

      It's a fine place, I'm sure. Must have come on leaps and bounds since I was at a neighbouring university.

    2. thames
      Boffin

      Re: *A* Brit Expert

      The whole premise of the theory is bonkers. A machine is not going to be held "liable" for anything. The police are not going to arrest your car and put it in jail.

      The people held accountable for how the software performs will be determined the same way as the people held accountable for how the hardware performs. There are loads of safety-critical software systems in operation today, and there have been for decades. There is plenty of established legal precedent for deciding liability. Putting the letters "AI" into the description isn't going to change that.

      The company that designed and built the system and sold it to the public is 100% responsible for whatever is in their self-driving car (or whatever). They may in turn sue their suppliers to try to recover some of that money, but that's their problem. Individual employees may be held criminally liable, but only if they acted in an obviously negligent manner or tried to cover up problems. The VW diesel scandal is a good analogy in this case, even if it wasn't a safety issue.

      There are genuine legal problems to be solved with respect to self driving cars, but these revolve more around defining globally accepted safety standards as well as consumer protection (e.g. who pays for software updates past the warranty period).

      The people who have an interest in pushing liability off themselves are dodgy California start-ups who push out crap that only half-works, are here today and gone tomorrow, and don't have the capital or cash flow to back up what they sell. They might try to buy insurance coverage, but the insurers may get a serious case of cold feet when they see their development practices. Uber's in-house self-driving ambitions are going to run into a serious roadblock from this perspective.

      1. Anonymous Coward
        Anonymous Coward

        Re: *A* Brit Expert

        This isn't like today, when a vendor sells a product that can be found to have been as defective on the day it was bought as on the day it subsequently went rogue.

        The point is that the AI will learn / teach itself. So it will become a completely different entity (product). So different that the original designer / programmer may not understand its logic any more. In human terms, no different from a child becoming an adult criminal, which you obviously wouldn't punish the parents for.

        That is the Pandora's box we are facing.

        1. Anonymous Coward
          Anonymous Coward

          Re: *A* Brit Expert

          And that's before we get into the realms of AIs training other AIs.

          It's all going to end in tears.

        2. Doctor Syntax Silver badge

          Re: *A* Brit Expert

          "This isn't like today when a vendor sells a product that can be found to be as liable on the day it was bought as when it subsequently went rogue."

          If you chose to sell or deploy it, you're responsible. As simple as that. It was up to you as a vendor to decide whether to accept that long-term responsibility. Why should you think you should be able to shuffle that off?

          1. Anonymous Coward
            Anonymous Coward

            Re: *A* Brit Expert

            @Doctor Syntax

            "Why should you think you should be able to shuffle that off?"

            Because no one can predict what the AI will become or do in the future. Particularly regarding decisions that no human can understand, as happened in the latest Go competitions.

            Who is accountable when it decides to do something that a human could equally well have decided was beneficial but turns out to have catastrophic consequences? E.g. eradicate every mosquito and wasp species on the planet, due to the problems they cause for mankind, leading to ecological destruction and devastation of certain food chains.

            1. Kinetic
              Terminator

              Re: *A* Brit Expert

              "Because no one can predict what the AI will become or do in the future. Particularly regarding decisions that no human can understand, as happened in the latest Go competitions."

              Yes, in which case it's doubly important to hold the companies in question to account. Maybe, just maybe, they should have explored the unexpected consequences... and if they decided these were too unpredictable, pulled the product and, I dunno, not risked killing everyone.

              There is a glut of positive thinking going on in the AI and robotics space. Seen the latest Killbot-lite video with fully autonomous drones that successfully "Hunt" humans through dense woods using computer vision? It's okay, because they are just filming their owners. Weaponisation in 3-2-1.... Ooops

              Don't get me started on Boston Dynamics, those guys saw Terminator 1 and cried at the end when Arnie got killed.

              Some serious accountability needs to get injected to start people reconsidering what their products can be re-purposed as, or how they might fail / go rogue.

              1. Anonymous Coward
                Anonymous Coward

                Re: *A* Brit Expert

                Well if that's going to be the benchmark, then there will be zero incentive to develop safety-critical AI applications. Unless the companies just get hit with fines, like the banks and car manufacturers, if that's what you mean by accountability (i.e. no one is accountable).

                Also, there is no guarantee that the company that created the original AI would still be around when a "descendant" went bad. So who would pay in that instance?

                It is more likely that the real problem is going to be deliberately malicious / rogue AIs created by organised crime groups. That could end up with the state being involved in endless war and no possibility of winning.

                Pandora's box indeed.

        3. amanfromMars 1 Silver badge

          Needed ...... RAT Experts ...... for when WMD are to be both Exclusive and Excluded

          This isn't like today when a vendor sells a product that can be found to be as liable on the day it was bought as when it subsequently went rogue. ... Anonymous Coward

          The Virtual Machines and AI are all ROFLing for there are always useful idiots in offices of business and state administration to carry the can and prove systems liability non-existent/null and void thus rendering effective accountability a fantasy making fun of and destroying the fortunes of an applied virtual reality.

          Still today, just like yesterday, are there fool vendors selling wars on a false premise in order to secure shower room bragging rights in the industries and markets that need them to survive and prosper and prevent a colossal catastrophic economic at home and in dodgy marketplaces and spaces abroad, in foreign and alien lands.

          However, unlike yesterday and today, do the future present and ensure, assure and insure that such idiotic fool vendors have a crushingly greater opposition and crashingly grander competition out there in the MainStreaming Media MMORPG Energy Fields ..... with NEUKlearer HyperRadioProACTive IT Systems Applications for Greater IntelAIgent Games Play just one of the new revisions and additions making IT Totally Different from ever before.

          And that makes all internetworking things both practically and virtually impossible to handle just as easily as was done before. Changed Days with Changed Ways with Changing 0Days to Trade is Default Future Norm and urCurrent AIReality too.

          1. amanfromMars 1 Silver badge

            Re: Needed ...... RAT Experts ...... for when WMD are to be both Exclusive and Excluded

            Food for thought on the myriad phorms of contact made freely available for alien wares and cyber warfare ...... https://www.rt.com/news/419755-fear-robots-not-aliens-kaku/

            And .... there are many Surreally Advanced Weapons of Mass Destruction which have No Known Signature for Accountable Identification of Ownership and they can easily be targeted for terrorising the masses. Quite whether successfully rather than disastrously will obviously depend upon the level of intelligence used to paint the pictures for Mass Multi Media Presentation of a possible, but by no means certain, Personal See/Adopted Adaptive Collective View.

  4. Zog_but_not_the_first
    Boffin

    Definitions

    I'm probably out of touch but from what I read about "AI" much seems to be pattern recognition and decision trees running on really fast hardware (compared with the days of "Eliza"). Can someone point me to an example of AI that displays, well, intelligence?

    1. Anonymous Coward
      Anonymous Coward

      Re: Definitions

      Do I pass your instance of the Turing Test?

      1. Zog_but_not_the_first
        Boffin

        Re: Definitions

        Sadly, I'm met people who fail the Turing test so we may need a new yardstick.

        1. Anonymous Coward
          Anonymous Coward

          Re: Definitions

          Do you mean "I've met"?

          The irony ;)

          1. Zog_but_not_the_first
            Thumb Up

            Re: Definitions

            Good catch.

      2. Fruit and Nutcase Silver badge

        Re: Definitions

        @AC

        Do I pass your instance of the Turing Test?

        Hi Siri,

        No you don't.

        TTFN

    2. Nosher

      Re: Definitions

      Eliza's author, Joseph Weizenbaum (sometimes credited as the father of AI), had strong views on this, suggesting that a programmer who helped fake bombing data in the Vietnam War was "just following orders" in the same way as Adolf Eichmann, architect of the Holocaust. He said "The frequently-used arguments about the neutrality of computers and the inability of programs to exploit or correct social deficiencies are an attempt to absolve programs from responsibility for the harm they cause, just as bullets are not responsible for the people they kill. [But] that does not absolve the technologist who puts such tools at the disposal of a morally deficient society"

  5. Muscleguy

    The main reason Petrov distrusted the signal is that it indicated only five missiles were launched. A US first strike would not have just used 5 missiles. Such an attack made no sense.

    An AI set up to do the same job could also have such a scenario built in.

    1. Paul Crawford Silver badge

      True, but then who is responsible for setting up the AI?

      Really it comes back to the first commentard's point - always hold the vendor responsible, otherwise they have no incentive to get it right and fix bugs as they are discovered.

      For example, why should my autonomous car insurance premium depend on the performance of the vendor's AI in crash avoidance? Flaws and problems and financial consequences should stop at the car company in this case.

      1. gnasher729 Silver badge

        "For example, why should my autonomous car insurance premium depend on the performance of the vendor's AI in crash avoidance?"

        Some cars are better, some cars are less good. You pay for what you get. You can surely call your insurance company before you buy a car, you may be told that car A is £500 a year cheaper to insure than car B, because it manages to extricate itself from dangerous situations very well. The manufacturer of car A may charge you more for the car than the manufacturer of car B does. Your decision which one to buy.

        You pay different premiums already depending on the average repair cost of your car.

        1. Doctor Syntax Silver badge

          "Some cars are better, some cars are less good."

          Essentially I insure myself to drive. If I'm an 18 year old I have to pay more. If I accumulate a lot of bad driving history I pay more. I can't actually do anything about the first of those except grow older but I can about the second. If I buy a self-driving car then I have no input at all into the quality of its driving ability nor any way to assess it*. If the vendor sells me the vehicle as being fit for use then they should have satisfied themselves that it was and accept liability if it wasn't; that liability can and should then be covered by their public liability insurance.

          *At least not as a consumer. A manufacturer buying the self-driving S/W as a component might have better opportunities to test it.

          1. Sir Runcible Spoon

            "Essentially I insure myself to drive."

            Easy one to sort out. If the car manufacturer doesn't insure the car on your behalf, for life, then it isn't safe enough*.

            *Enough doesn't mean absolutely safe, just a lot better than meat-sacks.

      2. Destroy All Monsters Silver badge

        Remember, Remember...

        The old discussions about the unfeasibility of SDI ("Computer System Reliability and Nuclear War"), which journalists may not have heard about.

        And this didn't even involve AI, just button-pushing.


    2. Doctor Syntax Silver badge

      "An AI set up to do the same job could also have such a scenario built in."

      To have been included in version 2.0.

    3. Nosher

      "An AI set up to do the same job could also have such a scenario built in."

      Which nicely sums up where AI is at the moment - there's still no "intelligence" that can realistically consider situations like this, in the way a human can, outside of its programming. What if someone had thought about this in advance and added a rule like "do not launch counter-attack if missiles <= 5"? What then if 6 "missiles" had been detected? Until such time as AIs can really play a hundred games of tic-tac-toe and come to the conclusion that "the only winning move is not to play", it's just not safe enough to work in this sort of application.
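
      A toy sketch in Python (the rule, names and numbers are invented for illustration, not taken from any real launch system) of just how brittle such a hard-coded rule is:

          RETALIATE = "launch counter-attack"
          HOLD = "hold and escalate to a human"

          def respond(detected_missiles: int) -> str:
              # "A real first strike would use more than 5 missiles."
              if detected_missiles <= 5:
                  return HOLD       # Petrov's judgement, frozen as a rule
              return RETALIATE      # one more sensor ghost and the rule is useless

          print(respond(5))  # hold and escalate to a human
          print(respond(6))  # launch counter-attack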

  6. sad_loser

    There are already some standards out there

    e.g. ISO 13485 covering medical devices, which does specify code audit, input and output limits, etc.

    1. Anonymous Coward
      Anonymous Coward

      Re: There are already some standards out there

      "ISO13485 covering medical devices, that does specify code audit, input and output limits etc."

      How well does that actually work in practice?

      In a different safety-critical field, my observations in the last decade or so suggest that what gets deployed to production can often bear very little relationship (in hardware or software terms) to what gets audited, tested, certified. Readers will be able to work out why.

    2. Destroy All Monsters Silver badge
      Holmes

      Re: There are already some standards out there

      And luckily, too:

      Therac-25

      It was involved in at least six accidents between 1985 and 1987, in which patients were given massive overdoses of radiation. Because of concurrent programming errors, it sometimes gave its patients radiation doses that were hundreds of times greater than normal, resulting in death or serious injury. These accidents highlighted the dangers of software control of safety-critical systems, and they have become a standard case study in health informatics and software engineering [which is weird, I always encounter 'software engineers' who haven't heard about it]. Additionally, the overconfidence of the engineers and lack of proper due diligence to resolve reported software bugs is highlighted as an extreme case where the engineers' overconfidence in their initial work and failure to believe the end users' claims caused drastic repercussions.

      I don't think anyone was ever successfully held to account for this clusterfuck, even at the level of reconverted web programmers. The company seems to have successfully weaseled out by denying and stalling.


  7. Paul Herber Silver badge

    it is unclear whether there would have been any lawyers left to prosecute the case

    "it is unclear whether there would have been any lawyers left to prosecute the case"

    A town that cannot support one lawyer can always support two.

    1. Mark 85

      Re: it is unclear whether there would have been any lawyers left to prosecute the case

      This presumes that there is a town left.

  8. JeffyPoooh
    Pint

    A.I. boffins....sigh...

    The human brain has plenty of "hardware co-processors"; it's not only a big simple neural net. These give rise to instincts, some aspects of common sense, emotions, empathy, self awareness, vision and hearing interpretation, etc.

    If you installed legs onto IBM's Watson and took it outside, it would probably run out in front of a bus while simultaneously explaining how heavy diesel engines work.

    A.I. Outdoors is very hard. It will be very silly for decades to come.

    Here, have a nice story...

    https://jeffypooh.blogspot.com/2018/02/an-evil-ai-short-story-by-jeffypooh-rev.html

  9. Bruce Ordway

    Forbidden Planet

    Before we tackle AI, this question... is Forbidden Planet just the best sci-fi movie of the 50s, or the best of all time?

    >>Suspecting an error with the system's sensors, he waited

    I wonder if AI has been/will be developed with the capability to doubt itself?

    1. Zog_but_not_the_first
      Alien

      Re: Forbidden Planet

      Defo in the top ten. And prescient too. Now we have the Internet and 3D printing, we too can make matter appear anywhere on the planet....

      "My poor Krell"

    2. Destroy All Monsters Silver badge
      Terminator

      Re: Forbidden Planet

      I wonder if AI has been/will be developed with the capability to doubt itself?

      Of course: Autoepistemic logic

      All of this lies out in NP or worse, so you have to simplify and throw hardware at it.


  10. a_yank_lurker

    Specifications

    While the vendor should be liable for their product, what if the product is bad due to poor or incomplete specifications? Often the person(s) writing the original specifications is not an IT professional and may never have written a line of code. Also, are they sufficiently knowledgeable about the usage of the device to have anticipated enough of the possible situations for the AI to deal with? The example of Col. Petrov is one where it could be programmed, but it relies on someone realizing the scenario might happen and how the scenario might occur. The person writing the specs might not know the satellites can give a false positive under certain situations that do occur, or what kind of signal the false positive might generate. If the specs do not cover the situation, is there a possibility of a human override, aka Col. Petrov saying nyet?

    1. Duncan Macdonald

      Re: Specifications

      I have never seen a project that has a full set of completely accurate specifications. (Not even when the project has been completed !!!)

      All specifications are incomplete, the vast majority have errors (even after review), and then the management tries to cut costs and timescales!! Even good specifications normally include a number of implicit assumptions which, if not valid, can cause major problems. (E.g. the Mars Climate Orbiter was lost because one system was programmed in newton-seconds and another in pound-seconds. The implicit, incorrect assumption was that all systems would be using the same units, so it was not explicitly spelled out in the specifications.)
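
      A minimal Python sketch of the usual defence against that class of implicit assumption - carry the unit in the type and convert exactly once at the boundary (the figures and names here are invented for illustration):

          from dataclasses import dataclass

          LBF_S_TO_N_S = 4.448222  # pound-force seconds to newton-seconds

          @dataclass(frozen=True)
          class Impulse:
              newton_seconds: float  # one canonical unit, everywhere

          def impulse_from_lbf_s(value: float) -> Impulse:
              # Conversion happens once, at the boundary, not "somewhere".
              return Impulse(value * LBF_S_TO_N_S)

          # Team A works in SI; Team B's figure is converted on entry.
          total = Impulse(120.0).newton_seconds + impulse_from_lbf_s(27.0).newton_seconds
          print(f"{total:.1f} N*s")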

      When engineers are allowed to set the plans, there is always an element called "contingency" to cover the inevitable differences between the original plan and reality.

      A big problem with a large software system is that it is normally delivered before it encounters the real world. This results in the sort of rubbish that has found its way into the F-35 software.

    2. Doctor Syntax Silver badge

      Re: Specifications

      "While the vendor should be liable for their product, what if the product is bad due to poor or incomplete specifications?"

      Still their problem.

      1. a_yank_lurker

        Re: Specifications

        Who wrote the specifications? One can code against a specification, and the code will meet the specification, but is the specification correct? Thus who is ultimately responsible: the issuer of the specs or the poor schmuck writing the code? I would say whoever wrote the specifications. Remember, programmers are not necessarily domain-knowledge experts for the application.

        1. Anonymous Coward
          Anonymous Coward

          Re: Specifications

          Bang on. I write and use programs based on a particular industrial standard that has errors and inconsistencies in its use of scientific notation. Changing the standard takes years, of course, whereas doing the necessary fix my side is trivial. Who is responsible if the program calculates something wrong? It might be a case of garbage input. Or hardware error. Or even third-party interference. Nobody should be so hasty as to assign blame, for the devil is in the detail.

          1. Sir Runcible Spoon

            Re: Specifications

            There will always be a need for someone to oversee a project and keep the bigger picture in mind. This person needs to understand the details, as well as the people, that go into delivering the project.

            The overseer can then flag things that seem to be taking the project off track - even if they are in the specs - and also provide guidance to the end-goal if things are being missed that were not in the specs.

            Without such an overseer, you end up with a dog's dinner. Would you like to guess how many projects I've worked on with such an overseer?

  11. amanfromMars 1 Silver badge

    Operating Systems BIOS ReWrite and ReBoot Needed

    What is clever about coders' code which support kills? To Run with its Programs is Moronic and places one firmly and directly in the Firing Lines of Opposing Forces Competing with Foreign/Enemy/Alien Sources.

    Not a SMARTR Space Place for to Be In ...... for its IT can certainly be Deadly Catastrophic. And that be an Out of your Hands, Hearts and Minds Third Party Choice you don't want to Deserve for when One Thinks to Surrender Too Late ...... Steeped in the Guilt of Ecstatic Despair of Certifiable Madness .... which Result in Insane Applications for Programming/ReProgramming.

    Such Lines are Perilous Crossings with One's Carnage the Only Meal and Dessert to View and Feast Upon. DON'T GO THERE ! .... EVER ! YOU HAVE BEEN WARNED AND ARE DILIGENTLY ADVISED !

    1. Ken Hagan Gold badge
      FAIL

      Re: Operating Systems BIOS ReWrite and ReBoot Needed

      I'm afraid today's entry doesn't pass. Your author is therefore liable.

      1. amanfromMars 1 Silver badge

        Re: Re: Operating Systems BIOS ReWrite and ReBoot Needed

        I'm afraid today's entry doesn't pass. Your author is therefore liable. .... Ken Hagan

        Yeah, .... whatever. An accountable authority is however most novel, aint it? It certainly is not the norm, is it? Indeed it be virtually unheard of recently in the past.

        1. Destroy All Monsters Silver badge

          Re: Operating Systems BIOS ReWrite and ReBoot Needed

          "I advise to wait outside of the building in a SWAT vehicle until the crazed AI has killed everyone inside. It's just too dangerous."

    2. Sir Runcible Spoon
      Terminator

      Re: Operating Systems BIOS ReWrite and ReBoot Needed

      I'd just like to go on record and say that I've been rooting for the robots since the start.

      I, for one, etc...

      SRS

  12. This post has been deleted by its author

  13. TheSkunkyMonk

    It's not complicated: if software kills, it should be down to the manufacturer of said product, the ones who released and brought it to market, unless malicious tampering is evident (pissed-off developer or so on).

    1. Dan 55 Silver badge

      You'd think so, but then look what VW tried to claim about two rogue engineers.

    2. kain preacher

      "Its not complicated, if software kills it should be down to the manufacturer of said product the ones who released and brought it to market, unless malicious tampering is evident(pissed off developer or so on)."

      But what about edge cases that cannot be tested for? Let's say you have a hardened Linux OS on an embedded device. Everything works great. The wi-fi module works great. The drivers are ace. A year later the wi-fi manufacturer makes a minor tweak to the drivers. Now, if you are on a certain wi-fi channel and it's congested and you plug a certain type of USB stick in, the whole thing crashes. Whose fault is that?

      1. Doctor Syntax Silver badge

        "Who's fault that is that"

        The vendor who allowed the revised drivers without full retesting. If it goes to the consumer, it's the vendor's responsibility for the whole package. If they think a supplier is to blame, that's an issue for the two of them, but to the customer and the public there has to be a single, easily identifiable entity responsible.

        If it's something deployed as a service then the same thing applies, whatever entity decided it was fit for deployment is responsible.

        Do we need some sort of escrow arrangement to avoid vendors escaping responsibility by winding up the business? Certainly. A bond or a one-off insurance for the life of the product. Whatever.

        1. kain preacher

          What if that USB stick had not been made when the drivers were? So you are content holding the vendor liable knowing full well that you cannot test for every condition?

          1. Dan 55 Silver badge

            Your drivers should not crash the whole thing/allow an exploit because they can't quite understand the USB mass storage device.

          2. Intractable Potsherd

            @ Kain

            "What if that USB stick was not made when the drivers were ? So you are content holding the vender liable know full well that you can not test for every condition?"

            Yes - why not? You are looking at just one part of the problem - the important part is error-catching. Nothing should happen that would cause a fatal error.

      2. Sir Runcible Spoon

        "if the whole thing crashes"

        You need to take a step back and determine if the proposed solution is fit for purpose. If the function is critical, then multiple points of failure and backup systems need to be deployed in accordance with the risk profile.

        If the overall solution could never meet the requirements of the function and associated risks, then it should never have been sold as such in the first place. So whoever decided to authorise that solution for that particular function is at fault. Everything else is just the cards being played after they've all been dealt.

  14. Destroy All Monsters Silver badge
    Paris Hilton

    Now I know nothing more

    After all "AI" is definitely NOT about systems that you can fully design, test, verify and manage. It is about systems that are too complex to handle in that way and whose workinge can only be assessed approximately ("Over any 60-minute test run, the system onyl went completely nuts in about 0.2% of cases; we have 30 low-pay south asians pouring over 50 GiB of memory dumps to find out the reason why, any reason why"). Unfortunately the article apparently talks about control systems.


  15. Phat-wan Kerr

    Hey, look over there! -->

    [hillbilly voice] Dag-gum code didn't kill um, was the dang high speed impact, and their weak liberal flesh.

    1. Sir Runcible Spoon

      Re: Hey, look over there! -->

      Yeah, heights don't kill you; it's the sudden deceleration from hitting the ground at terminal velocity that does it every time.

  16. Fruit and Nutcase Silver badge

    "Don’t worry about AI going bad – the minds behind it are the danger"

    "Killer robots remain a thing of futuristic nightmare. The real threat from artificial intelligence is far more immediate "

    An article by John Naughton of the OU...

    https://www.theguardian.com/commentisfree/2018/feb/25/artificial-intelligence-going-bad-futuristic-nightmare-real-threat-more-current

    1. Sir Runcible Spoon
      Big Brother

      Re: "Don’t worry about AI going bad – the minds behind it are the danger"

      tl;dr: If you're going to deploy AI, then be careful, think about what you are doing, and make sure what you are doing is open to scrutiny so potential issues can be spotted before they get out of hand. Talk to each other and use your brains*.

      *Considering the people *making* AI are not the ones *funding* AI, then this last bit means we are screwed. The genie is out of the bottle and could only be put back in if you could suddenly educate the entire internet population to be emotionally mature and considerate/tolerant.

      Like I said, we're screwed.

  17. Lorribot

    Negligence falls at many levels. Using your example of the phantom US missile strike, the negligence there would have fallen on the designer of a system that relied on a single source of information with no corroboration; you would want at least 3 sources of information and go with the majority to provide a reasonable response. Something car manufacturers should be looking at for their autonomous systems, but cost will always come into it; lidar is not cheap.
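
    A minimal sketch of that 2-of-3 arrangement in Python (sensor names invented for illustration; a real system would be rather more involved):

        from collections import Counter

        def attack_detected(readings: list[bool]) -> bool:
            # Act only if a strict majority of independent sources agree.
            votes = Counter(readings)
            return votes[True] > len(readings) // 2

        satellite, radar, infrared = True, False, False
        print(attack_detected([satellite, radar, infrared]))  # False: one faulty source is outvoted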

    Responsibility with programming has already been tested in law with the whole Dieselgate thing, where it was proven the software coders were acting under direction of management who specified the system. The grunts on the front line can only be accused if it is proven they were working outside the design brief; otherwise it is the people at the top that cop for it, as it was their design and sign-off. It's why they get paid the big bucks.

    1. Doctor Syntax Silver badge

      "the grunts on the frontline can only be accused if it proven there were working out side the design brief otherwise it is the people at the top that cop for it as it was their design and signoff, its why they get paid the big bucks."

      If the grunts on the front line are working outside the design brief it's still up to management to discover that and not sign it off until it's fixed. Whoever signs off carries the responsibility, or at least the company does as they're signing off on behalf of the company.

      Weaselling out by passing the blame lower down cannot be acceptable.

      1. Anonymous Coward
        Anonymous Coward

        responsible? accountable? individual?

        "Whoever signs off carries the responsibility, or at least the company does as they're signing off on behalf of the company."

        And there we have it, in a nutshell. No real person (a company is not, in any meaningful sense, a person) is held personally and individually responsible or accountable; when things go wrong, it's the company's problem, not the individuals at the top.

        The individuals at the top of the company are of course personally and individually responsible when things go right, and must therefore be 'compensated' with megabonuses and all the usual stuff as has now become traditional.

        Fix that, and *lots* of things magically get sorted as a byproduct.

  18. Anonymous Coward
    Anonymous Coward

    "Might have failed", not "may have"

    '"If an AI system had been in charge of the Soviet missile launch controls that day, it may well have failed to identify any problem with the satellite, and launched the missiles," wrote Kingston'.

    I am appalled that a lecturer at a British university should commit the horrible solecism of writing "may well have failed" instead of "might well have failed".

    In fact we know that an AI was not in charge at that time, and therefore that it did not fail. But the use of "may have" implies doubt whether it did or didn't.

    A good example of how even competent and professional scientists can still fail to do their job properly if they are unable to communicate in natural language.

    1. John H Woods Silver badge

      Re: "Might have failed", not "may have"

      I agree 'might' is preferable, but I don't think it is quite as clear cut as to make the use of 'may' a 'horrible solecism.'

      It is possible he is deliberately using 'may' to suggest a higher degree of probability than might be inferred from the use of 'might'.

  19. chris 143

    Who crashes a self driving car?

    I buy a self driving car, I'm driven around by it.

    Sure I might want theft insurance, but assuming I maintain it to the agreed standard, any crash it's involved in isn't my fault - I was a passenger - it's $manufacturer's.

    I don't see them wanting to take on that liability.

    1. Duncan Macdonald

      Re: Who crashes a self driving car?

      Third party insurance (and possibly first party) is likely to be a mandatory requirement for self driving cars. (In the UK third party insurance is required for all motor vehicles on the road.)

      This will provide innocent parties with compensation for damages caused by crashes.

      (To start with the premiums might be so high that it would be cheaper to employ a chauffeur !!!)

  20. Anonymous Coward
    Anonymous Coward

    AIs as Corporations

    "Liability for artificial intelligence won't be easy"

    ...Corporations are individuals in the US, therefore AIs as Corporations have rights. As such, they can weasel out of anything, including death.

  21. Anonymous Coward
    Stop

    Criminal Insanity

    This is spot on:-

    --"Kingston in his paper explored the issue of accountability for AI, which he defines as any system that can recognize a situation or event and then take action through an IF-THEN conditional statement.

    Yes, it's a low bar. It's pretty much any software, and yet it's arguably one of the better definitions of AI because it avoids fruitless attempts to distinguish between what is and isn't intelligence." --

    So true.
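
    And the bar really is that low. A hypothetical example (mine, not from Kingston's paper) of a system that already meets the definition:

        def thermostat(temperature_c: float) -> str:
            if temperature_c < 18.0:   # recognise a situation or event...
                return "heating on"    # ...then take action via IF-THEN
            return "heating off"

        print(thermostat(15.0))  # heating on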

    Any software can convert the system to the purposes of its designers; rare is it that a system will understand and limit the behavior of software or AI. It would have to be a better System AI than the AI being run on it.

    #Criminal Insanity: where a Group, Corporation or Body (and/or ...a System) have no mind of their own and act Negligently and Incompetently, and hence are deemed Criminally Insane. (At least, in Australia - something like this anyway.) [Criminal Insanity, NOT the Insanity Defense.]

    Requirement therefor: the construction and deployment of 'any device' with sufficient rigor to determine that no harm will come from operation, or from anticipated errors and failings.

    A Device is a mechanism, system, behavior, figure of speech, scheme, design, tool - and One who employs devices = deviser -> intention

    AI should always be supervised by a capable adult or two.

  22. Anonymous Coward
    Anonymous Coward

    What was good about Robby in Forbidden Planet

    Was that, in the event of a contradiction in programming, the default action was to

    1. Stop work completely

    2. Start damaging itself so as to make conflicting orders financially expensive for the operator

    3. Allow for contradiction state to be cleared but not remove the conflict

    This way the operator is the one who is ultimately responsible: if action 1 results in damage, then the operator did it. Action 2 is the punishment for bad orders, and 3 gives the operator a chance to avoid bad outcomes, but overuse results in the same punishment.

    Thus, in this model, the vendor's responsibility is limited to defining the contradictions, so any resultant damage is the operator's fault, i.e. "we tried to stop you doing it but you insisted upon that action".

    I am not saying it is possible to define all the contradictions, but so long as they are added as discovered, the vendor has made all reasonable efforts, and that is how human law works.
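
    For what it's worth, a toy Python sketch of that three-rule model (the names and the repair bill are invented; it is only meant to show where responsibility lands):

        class Robot:
            def __init__(self) -> None:
                self.contradicted = False
                self.repair_bill = 0

            def order(self, action: str, forbidden: set[str]) -> str:
                if self.contradicted:
                    return "halted: clear the contradiction first"
                if action in forbidden:
                    self.contradicted = True   # 1. stop work completely
                    self.repair_bill += 1000   # 2. make bad orders expensive
                    return "halted: contradictory order"
                return "executing: " + action

            def clear(self) -> None:
                self.contradicted = False      # 3. clear the state...
                # ...but the forbidden set itself is never removed

        robby = Robot()
        print(robby.order("harm the skipper", forbidden={"harm the skipper"}))  # halted
        robby.clear()
        print(robby.order("carry the luggage", forbidden={"harm the skipper"}))  # executing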

  23. Cynic_999

    The article ignores an important legal concept

    The law has long held that a person is innocent of a crime if they were acting on a false but honest belief that would have justified their action. This is the defence that was, for example, used by the police officers who shot Harry Stanley for carrying a table leg, and the officer who emptied an entire magazine into the head of Charles de Menezes, a completely innocent Brazilian electrician who was incorrectly identified as a suicide bomber.

    Thus a computer that launches a nuclear strike is completely innocent of any crime if its sensor readings gave rise to a wrong but honest belief that it was acting in self-defence.

    And the nature of computers is such that anything the computer perceives as triggering an "if ... then" action *must* be considered to be an "honest belief".

    Cressida Dick, the commanding officer in the de Menezes debacle, was promoted. Therefore the programmer of the system that starts WWIII should also be rewarded. After all, you can't blame the programmer for a computer's honest mistake.

    1. Anonymous Coward
      Anonymous Coward

      Re: false but honest belief (that power corrupts)

      "The law has long held that a person is innocent of a crime if they were acting on a false but honest belief that would have justified their action."

      1) That law: is it English law, UK law, EU law, or what?

      2) That law: don't the "innocent" parties have to be members of the Police Federation (or other equivalent organisation) in order to escape investigation and prosecution (and especially to escape a guilty verdict)?

      1. Cynic_999

        Re: false but honest belief (that power corrupts)

        "

        1) That law: is it English law, UK law, EU law, or what?

        "

        It's a basic principle of British law. Apart from a few strict liability offences (mostly traffic laws), the prosecution must prove both the actus reus (the criminal act) and the mens rea (the intent to commit the crime). If a person is acting under the misapprehension that the facts are different to what they are, and this would have made the act legal, there is no mens rea and so no crime. Note that ignorance of the fact that the act in question is a crime is not a defence - ignorance of the law is no excuse, but ignorance of the facts certainly is. If you carry cocaine through customs believing that it is talcum powder, there is no crime (assuming the jury believes you). If, however, you carry talcum powder through customs believing that it is cocaine, you have committed a serious offence. The facts are irrelevant; it is what you *believed* the facts to be that is important in criminal law.

  24. TrumpSlurp the Troll

    Some possibly dodgy analogies here

    The vendor is responsible for fitness for purpose at the point of sale. Just as your brand new car should be fault-free as you prepare to drive it off the forecourt.

    However beyond that point more and more responsibility rests with the owner/operator. With your shiny car you have to have it regularly serviced, make sure the brakes and tyres are in good condition, windows clear, mirrors in place.

    If you drive the wrong way up a one way street and kill someone then that is not the vendor's fault.

    If you buy an AI, as part of a device for your use, then you will take on similar responsibilities. Upgrades, patching, bug reporting. Modifying parameters. Giving instructions. The user takes on a degree of responsibility. Depending on the ability to direct and influence the AI, the manufacturer and vendor may have limited control over the actions of the AI.

    I suspect the entity judged responsible will be the one judged to have had the most significant influence over the actions.

    As with all things, bad things will have to happen, people will die, people will sue, then case law will be built up.
