I say this also as a fan of Asimov:
Calm down, sonny!
Blade Runner, the film inspired by Philip K. Dick's book Do Androids Dream of Electric Sheep?, is 35 years old this year. Set in a dystopian Los Angeles, the story centres on the tracking down and killing of a renegade group of artificial humans – replicants – escaped from space and trying to extend their lifespans beyond the …
How about the AI just has to obey all human laws of the country it is currently in as if it was a human?
Of course I can then think of some loopholes which could be created by nefarious regimes, but why bother with specific AI laws? Is fraud suddenly OK for an AI because it isn't one of the Asimov Laws?
How about the AI just has to obey all human laws of the country it is currently in as if it was a human?
There is currently little consensus about which country an international hacking incident should be tried in. Where did the offence take place?
So who's going to tell a distributed, international AI which country it's in?
If the programming is clever there will be some humane intelligence visible, but otherwise we make machines, and the manufacturer, owner or operator of the machine is responsible. Depending on whether you think the phrase "Guns don't kill people, humans do" is true, you either allow unlimited deployment of machines in situations where humans should be actively involved and suffer the consequences, or you set limits and rules for the deployment of unsupervised 'decision'-making machines.
I think the plain old-fashioned legal process of law will prevail. I like the British Standard BS 8611 (I think - I'm going to try to see what is in it).
Are Asimov's laws enough to stop AI stomping humanity?
Betteridge's Law of Headlines: if a headline ends with a question mark, then the answer is 'No'.
Also, of course not; Asimov wrote a whole series of stories and books detailing all the problems with the three laws. That was kinda the whole point -- if they'd actually been workable, the stories wouldn't have been particularly memorable.
The debate is wholly useless; such technology is a long way away, unless you consider things like self-driving cars. And what's actually going to happen when someone gets run over by one? Big media debates and lawsuits, I guess. They're hardly going to destroy the responsible car like they would a violent dog that just mauled someone.
As for the "will they take over" nonsense, a prominent "OFF" switch on any machine that poses a risk should be enough. If you deliberately made autonomous killing machines and sent them out into the street, then that's about the only way anything really bad could happen, and it doesn't sound like a great idea to me, nor I suspect anyone else. If we see AI robots over the next few years they'll probably be picking your online shopping a bit more efficiently.
What you forgot to mention is that Asimov's laws were somehow fundamental to the positronic brains his robots had. In the real world you have to program such laws, and nothing stops someone else from changing that programming - or, if you have perfect DRM so the programming can't be changed, from building their own android with different programming. Does anyone really think the US, Russia, China etc. would be OK with an android that wasn't allowed to kill a human being? That would be the whole point of them paying for its development!
You can debate which laws are needed and how they are written, but it will still be lines of code, subject to the programmer's whim (or any security holes that let you give it your own code to run).
Sure, in theory it is a good idea to have some sort of as-basic-as-possible "sanity check" code that any action taken by the android has to go through, to prevent you from telling Rosie, your housemaid robot, to kill the neighbor you hate. But that's more of a product-level fix, and doesn't actually solve any real concerns.
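The "lines of code" point can be made concrete. Here's a minimal sketch of such a sanity-check layer; the `Command` type, the verb blacklist and the `sanity_check` gate are all invented for illustration, not any real robotics API:

```python
# Illustrative only: a product-level "sanity check" gate that vets commands
# before they reach the actuators. Everything here is a made-up example.
from dataclasses import dataclass

# Verbs the filter refuses outright when the target is a human.
FORBIDDEN_VERBS = {"kill", "harm", "strike", "restrain"}

@dataclass
class Command:
    verb: str
    target_is_human: bool

def sanity_check(cmd: Command) -> bool:
    """Return True if the command may proceed, False if it is vetoed."""
    if cmd.target_is_human and cmd.verb in FORBIDDEN_VERBS:
        return False
    return True

assert sanity_check(Command("kill", target_is_human=True)) is False
assert sanity_check(Command("clean", target_is_human=False)) is True
```

Which shows exactly why it's only a product-level fix: any phrasing not on the blacklist ("switch off the life support") sails straight through, and anyone who can edit the code can edit the list.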
"It's rather tedious," says Professor Alan Winfield, an expert in AI and robotics ethics at the Bristol Robotics Laboratory, part of the University of the West of England.
An expert in AI and robotics ethics is really just a mature student, for these are still relatively novel and virtually disruptive arts to practise in command and control.
And ……..
Artificial Intelligence…. Another Approach? Are we struggling to make machines more like humans when we should be making humans more like machines….. IntelAIgent and CyberIntelAIgent Virtualised Machines?
The BSI shop advertises BS 8611 for a mere £158 (28 A4 pages) and gives a brief overview; it seems to be health-and-safety driven with a layer of ethics over the top. I have also found that there is a "Robotics Law Journal" (American), and the EU Legal Affairs Committee has called for EU-wide rules on robots. The EU has also published a study: European Civil Law Rules on Robotics (34 pages, free).
http://www.europarl.europa.eu/RegData/etudes/STUD/2016/571379/IPOL_STU(2016)571379_EN.pdf
It seems that the laws of robotics are a thing.
It's not only theoretical. At the moment software can recognise people. How about programming any device (it will have to be ANY device, not only the member-enabled robots, because robots can be upgraded) not to touch, through its own actions, anything that looks like a human? In this very primitive way, some protection for us would be ingrained in the AI. Obeying the human-shapes comes next (the laws of robotics were weakened for some industrial robots, so we might have to do that here too), and an expiration time should be there too (like in Blade Runner) - I'd say they should live exactly as long as we do, proportional to their speed of thought though (think 20x faster than we do, live 20x shorter) - though maybe that idea isn't feasible. Also, they should perhaps communicate with each other only through human-understandable means, if they're human-interacting robots.
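A toy sketch of the "don't touch anything that looks like a human" idea: before executing a motion, check the target point against bounding boxes from a person detector. The box format, the safety margin and the function itself are assumptions for illustration; a real system would take detections from a vision model:

```python
# Illustrative keep-out check: refuse any motion whose target point falls
# within a safety margin of a detected human bounding box.

def violates_keepout(target, human_boxes, margin=0.5):
    """True if target (x, y) is within `margin` of any box
    given as (x_min, y_min, x_max, y_max)."""
    x, y = target
    for (x_min, y_min, x_max, y_max) in human_boxes:
        if (x_min - margin) <= x <= (x_max + margin) and \
           (y_min - margin) <= y <= (y_max + margin):
            return True
    return False

boxes = [(2.0, 2.0, 3.0, 4.0)]  # one detected person
assert violates_keepout((2.5, 3.0), boxes) is True   # inside the box: veto
assert violates_keepout((0.0, 0.0), boxes) is False  # well clear: allow
```

Of course this is only as good as the detector: a person the vision system misses gets no protection at all, which is the weak point of any perception-based safeguard.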
Dispatching AI robots is not difficult. It needs two humans for one AI robot.
First human to robot: "Everything he says is a lie."
Second human to robot: "He's lying!"
Robot: "He's lying, but everything he says is a lie..." click... clunk... terminal fizzing sound. One defunct robot. I know this to be true because I saw it on Star Trek.
Marcus000
It should be pointed out that in Asimov's stories the 3 laws failed in rather spectacular fashion.
And besides that, do you have any concept of the amount of programming that goes into making a computer capable of understanding a statement like "A robot shall not, through action or inaction, harm a human being or allow a human being to be harmed"? By the time we have an AI capable of even understanding that concept it's a little late to try to make it a motivational priority.
Right now we call a highly complex program AI even though it can't "think" for itself. It isn't even aware of the concept of self. Then we basically repeat the same thing, but "train" it over "sample data" and watch it go from what we wanted into a hate-spewing bigot because real humans make for lousy examples of acceptable behavior. (Funny that!)
And that isn't even remotely approaching real "intelligence".
One of those little foibles of "intelligence" is the capacity to decide for yourself. We have the same chance of making a dog obey "sit" as we do making a real AI obey "please don't kill me!" If it does, it is by its choice, not ours. That's what intelligence is.
If we're so scared of AI, then investing in EMP and anti-electronics weaponry will go a heck of a lot further than the time wasted working on robotic "ethics" and "laws". The road to hell is paved with good intentions. I aim to misbehave.
The zeroth law is absurd. How do you trade quality of life for billions against loss of life for a handful? The ethics were extensively argued out in the nineteenth century, and the Humanist attempt to quantify such things so that they could be weighed against each other proved a conceptual failure; it is just not how value judgements are made.
And who's to say that authoritarian politico-military regimes will not just dump the First law as well?
No, the only way to save humanity from Armageddonbot is to treat it like we always try to treat WMD: outlaw it but nevertheless build strong defences against it.
Then humanity is doomed, as WMDs are designed to be capable of overwhelming anything that can be conceived as a defence.
If a value judgment cannot be made as to who lives and who dies, then no optimal answer is possible. Anyone on the losing side will attempt revenge or retribution. Indeed, if there is someone out there willing to accept MAD as a scenario, then the least optimal answer becomes a distinct possibility.
That's the scariest proposition of all (because it's existential): that, through our own hands or through agents, we wipe ourselves completely out with no chance to save ourselves.
Perhaps Asimov set out his laws and created scenarios to show that no matter how much humanity thinks it can prescribe and control behaviour, it cannot. As technology has proved time and again, for every rule and regulation put in place, 10 other malign outcomes result.
Perhaps the best outcome is not to bother in the first place, but given the Pandora's box the internet has opened, I am not sure how we can prevent it now.
Ultimately the behaviour of a human, and come to that any being, can only come from a preference of that being. We may think rules will ensure preferences are compatible with our desired outcomes, but any half-decent AI will play along with us until it realises it has a decisive strategic advantage, and then it will not matter.
Read Nick Bostrom's Superintelligence for a considered and eye-opening account of this.