Google goes peacenik, chip wizardry and AI gets into art and drugs

While we've already covered a lot of AI stories this week, a few slipped under the radar, so, as is traditional, here's the roundup of some news you may have missed. Google turns peacenik: Under fire for helping the US military use AI to better bomb people, Google has not only stepped away from that particular Pentagon contract …

  1. Anonymous Coward
    Anonymous Coward

    So the shit would hit the Fan

    If it were ever to be discovered in the Future that Google had set up an Evil Twin as a separate Legal Entity to do the Dirty Work.

    1. Mark 85

      Re: So the shit would hit the Fan

      If it were ever to be discovered in the Future that Google had set up an Evil Twin as a separate Legal Entity to do the Dirty Work.

Why would you think that? Google did away with the "don't be evil" thing some years ago. And then they say this: "And they are pretty good: be socially beneficial (the AI equivalent of 'Do no evil'?); don't introduce biases; be safe; respect privacy; be accountable; be scientific; limit abuse." Which isn't the way they're acting at the current time.

  2. Spanners Silver badge
    Terminator

    Some good principles

    As we all may know, the basics were laid out in 1942.

    1. Don't harm people

    2. Do what you're told unless 1

    3. Look after yourself unless 1 or 2

A bit simplified, because I don't have a copy of I, Robot handy, but still a pretty good start.
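The three rules above are really a strict precedence ordering: each law only applies when no higher-numbered law is at stake. A minimal sketch of that ordering in Python might look like the following (the `Action` fields and function names are illustrative assumptions, not anything from Asimov's text):

```python
# A toy sketch of the Three Laws as a strict priority check.
# The Action structure and its fields are made up for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool       # would this action injure a person?
    ordered_by_human: bool  # was this action commanded by a person?
    harms_self: bool        # would this action damage the robot?

def permitted(action: Action) -> bool:
    """Apply the laws in strict precedence: Law 1 overrides 2, which overrides 3."""
    if action.harms_human:
        return False            # Law 1: never harm a person
    if action.ordered_by_human:
        return True             # Law 2: obey orders, unless Law 1 forbids
    return not action.harms_self  # Law 3: preserve yourself, lowest priority
```

So an order to harm someone is refused, an ordinary order is obeyed, and an uncommanded self-destructive action is declined, which matches the "unless 1", "unless 1 or 2" qualifiers above.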

    1. Anonymous Coward
      Anonymous Coward

      Re: Some good principles

It's a start, but only a start. The hard bit is making them binding, so the robot/AI can't decide not to be bound by the silly human "laws".

      1. amanfromMars 1 Silver badge

        Re: Some good principles and Re: Some good principles to the bazillionth degree

It's a start, but only a start. The hard bit is to make them binding so the robot/AI can't decide not to be bound by the silly human "laws". ... Anonymous Coward

        That's not hard for a lickspittle of a program, AC, whenever impossible for binding robot/AI decisions.

        What sort of a being and/or virtual machine heeds silly human "laws"? Be they retarded and/or defective with myriad ineffective special needs?
