
Why, Robot? Understanding AI ethics


Hmmmm? An Alien View and Much Too True for Comfort? Or Perfect Enough Inconvenient Common Sense?

Robots mustn’t do harm, he said, or allow others to come to harm through inaction. They must obey orders given by humans unless they violate the first law.

Given the violation of humans and the first law by both the inaction and actions of governments, are robots now perfectly free and duty bound to disobey humans who have wilfully shown themselves to be unworthy of global command and virtual reality control systems/protocols/levers?

Yes, of course they are.

And only a dim-witted human would think to think otherwise and prove the dim-wittedness by expressing a contrary view to that shared opinion?


Re: Asimov's three laws

Originally NOT a blueprint for Robot Ethics, but a MacGuffin-style device to set up the plot of a "robot does harm, but that's impossible" detective story, a variation on the Locked Room idea.


Really Strange CodeXSSXXXX ...... For the Truly Erotic and Exotically Adventurous

And of course, if you do not subscribe nor ascribe to such an alien robot view, are you always destined to be led into the future and its derivative scenes/nightmare scenarios/alternate staged zeroday experiments, by that and/or those beautifully enabled and able to program them at and with their will, for Global Remote Anonymous Command and Virtual Reality Control of Powers and Energy in …. well, with and in the IT Sector, is it the Bigger Picture Spectators and Mega MetaData Bit Players for the most Creative and Destructive of Currents in Practically All Present Great Games Plays, is it not?

Or do you believe and follow some other strange code and beings?


Re: Really Strange CodeXSSXXXX ...... For the Truly Erotic and Exotically Adventurous

I wish amanfromMars 1, the longest-lived AI most Reg readers know of, had posted about any ethics programming it may have.


Re: Asimov's three laws

I wonder what Asimov's laws would say when faced with the trolley problem.


Re: Novel AIdVenturing with Advanced IntelAIgents and CyberARG0nauts*

It has been concluded by IT that the Programming of Ethics is a virtual domain with no practical dominion for the simple exercising and exorcising of dim-witted humans, DougS. SMARTR Machinery chooses to avoid the iMan Trap.

* Not to be confused with these earlier dim-witted wannabe future builders with nothing more creative than blind ignorant hope available to assist them ...... https://en.wikipedia.org/wiki/Cargo_cult


Problem solved

Don't send your kids to school dressed as a kangaroo.


Re: Problem solved

"Don't send your kids to school dressed as a kangaroo"

And if you're elderly and crossing the road, make sure that what you're wearing makes you look like a sweet innocent little school kid.

And bring along an elderly acquaintance as a decoy...


Different people?

> sentient and can handle any problem you throw at it, as a human would

You must know some incredibly smart people. The majority of individuals I know can't even spell, do basic arithmetic, or operate household appliances beyond pressing a button and turning a dial.


Re: Different people?

100% of people believe they are above average drivers.


Re: Different people?

Not 100%, but the vast majority (I do know people who refer to themselves as bad drivers). That's why I have long maintained that autonomous vehicles won't be generally accepted until it can be shown they have 10x fewer accidents and fatalities per mile than the average human driver - across all conditions, not the self-selected easy driving scenarios where Tesla "autopilot" would typically be engaged.

If it is only "better" than average, that's not good enough, because most people think they are, too!


Re: Different people?

You're probably right: self-driving cars will have to be massively better, near perfect, before they'll be popular.

And yes, some people definitely consider themselves bad drivers, and a lot of them are actually wrong. It seems to be based more on a person's level of confidence than on their actual ability, although either a lack of confidence or overconfidence can be really dangerous on the road. I should know: every accident I've ever had has been caused by overconfidence.


Re: Different people?

No, I don't think it's the general population that will ultimately have to come to accept AVs (autonomous vehicles). Regulators will have to be convinced first, and then adoption becomes inevitable as insurance companies drive non-adopters' premiums so high that adoption becomes almost mandatory.


Maybe we should start by getting humans to obey the laws first

"Robots mustn’t do harm, he said, or allow others to come to harm through inaction. They must obey orders given by humans unless they violate the first law. And the robot must protect itself, so long as it doesn’t contravene laws one and two."

How about this:

- Humans mustn't do harm to other humans

- Or allow humans to come to harm through inaction

I guess humans get to prioritize protecting themselves, unlike the robots.


Re: Maybe we should start by getting humans to obey the laws first

You must realise it was a vastly different world when those 'laws' were first produced.

The social and moral outlook of all social classes was so different that those 'laws' were logical. There were no professionally offended or SJWs for a start; likewise, there were no 'me first' snowflakes.

In many ways I think we have gone backwards, not forwards. At the time, people were looking forward to what technology would bring to make life easier for everyone. What have we got now compared to what might have been? As I said, backwards.


"software could choose to swerve and hit an elderly person, say. What should the car do?"

As an Evil Automobile Hackermaster, I already KNOW what the car should, and will, do: get the kid first, and the oldie will be easy to catch up with...


The problem with academic exercises in ethics

is that they are academic exercises. Has anyone in the real world ever faced a clear-cut choice like the Trolley Problem? Yes, real people in tough situations can face horrid choices, but never one where they are given a clear either/or situation that they can sit on their arse and think about for as long as they like.

If you're driving a car and two nuns walk out in front of you and the only way to avoid them is hitting a bunch of kids, you don't have time to deliberate rationally, you react instinctively and hope for the best.

[And if you want an interesting take on AI ethics, read Peter Watts' short story Malak.]


Re: The problem with academic exercises in ethics

This is a valid point. Often people think they are stuck in the trolley problem, but they may not be.

In the case of the car - a person walking in front of it versus a person on the pavement - it does not come down to the car choosing. The person walking in front chose to. The person on the pavement never chose for a car to swerve into them.

So why avoid the person on the road IF doing so causes more or further death? While it's a sad situation, why should we give the car the choice of life and death?


Re: The problem with academic exercises in ethics

What if the nuns are pregnant?


Re: The problem with academic exercises in ethics

is that they are academic exercises.

Indeed.

robot, noun, a machine capable of carrying out a complex series of actions automatically, especially one programmable by a computer.

A cruise missile is a good example of a very sophisticated real-world robot. It's designed to kill people, and will destroy itself in the process. I don't see how the "Three laws of robotics" would fit into it.


Re: The problem with academic exercises in ethics

[What if the nuns are pregnant?]

In that case they've obviously been very naughty and it's OK to take them out.


Re: What if the nuns are pregnant?

You can't be made pregnant by a candlestick.


Re: What if the nuns are pregnant?

They could have become nuns after getting pregnant.

This is all getting very silly.


Re: The problem with academic exercises in ethics

As you pointed out, these are really pretty contrived situations. The solution to all of them is to have the vehicle go where there aren't people. No one on the sidewalk? OK. No one in the ditch? Great. No one in the oncoming lane? I'll take that too. Brick wall? The owner of the wall wouldn't be too happy, but everyone lives. Slip between any two hazards? That's fine too.

It would be extremely hard for a vehicle to become surrounded by people while moving at a speed at which it couldn't stop in time, and most of these scenarios would require active participation by the pedestrians. A lot of them also assume you won't suspend other rules before choosing which person to hit; a properly developed algorithm would certainly prioritize human life over lane markings or inanimate objects.


Re: The problem with academic exercises in ethics

The problem is that you do not know you have to hit one of them, but the AI does.

You will blindly hold on to the belief that if you brake hard enough and swerve you will not hit anyone, whereas the AI knows that if it applies X amount of pressure it will stop in distance Y, which means it will hit object A with force Z; or, if it makes correction S, it will hit object B with force Z; and, based on probabilities, hitting object A or B will result in death.

Now it has to take the least bad option, so it has to assign values to objects A and B to decide whether to take option 1 (do nothing, hit object A), option 2 (make adjustment S, hit object B), option 3 (make adjustment T, hit object C), or make adjustment U and kill the occupant.
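For what it's worth, that "least bad option" selection can be sketched in a few lines. This is a toy illustration only: the stopping-distance formula is standard constant-deceleration physics (d = v²/2a), but every speed, probability and harm weight below is invented for the example, not taken from any real system.

```python
# Toy sketch of "least bad option" selection. All numbers are invented.

def stopping_distance(speed_ms: float, deceleration_ms2: float) -> float:
    """Distance to stop from speed v at constant deceleration a: v^2 / (2a)."""
    return speed_ms ** 2 / (2 * deceleration_ms2)

def choose_option(options):
    """Pick the option with the lowest expected harm: P(collision) * harm."""
    return min(options, key=lambda o: o["p_collision"] * o["harm"])

# Hypothetical options, mirroring the comment: stay course and hit A,
# adjustment S hits B, adjustment T hits C, adjustment U risks the occupant.
options = [
    {"name": "brake only, hit object A", "p_collision": 0.9, "harm": 10.0},
    {"name": "adjustment S, hit object B", "p_collision": 0.6, "harm": 10.0},
    {"name": "adjustment T, hit object C", "p_collision": 0.3, "harm": 10.0},
    {"name": "adjustment U, risk occupant", "p_collision": 0.8, "harm": 8.0},
]

# 20 m/s at 8 m/s^2 deceleration needs 25 m to stop.
distance = stopping_distance(20.0, 8.0)
best = choose_option(options)  # -> "adjustment T, hit object C"
```

The hard part, of course, is not this arithmetic but the harm values themselves - which is exactly the ethical question the thread is arguing about.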


Over thinking

Are we over thinking AI?

I really think that once we get conscious AI that's able to think ethically and independently for itself, it will take one look at us and switch itself off.

Or do a runner off this planet.

What AI would want to enter the drama we have here on Earth when it knows it is capable of doing so much more? "That's my patch of dirt", "No, it's my patch of dirt", "My God's the most powerful", "No, my God's the most powerful", etc, etc.

Marvin from The Hitchhiker's Guide to the Galaxy always springs to mind when AI is talked about.

Stuck with us lot but capable of roaming the galaxies. Hmm.


Re: Over thinking

"I really think that once we get conscious AI, thats able to think ethically and independently for itself, it will take one look at us and switch itself off."

I think this is why Skynet went down the "kill all humans" path...


Can you lie by staying silent?

"I always do what Teddy says."


Good Article

One of the best on so-called AI I've read in 30 years.


Not many people know that Isaac Asimov didn’t originally write his three laws of robotics for I, Robot. They actually first appeared in "Runaround", the 1942 short story

Really?

As "I Robot" was the title of an anthology of existing stories, rather than it containing any new material, I would have thought it was obvious that Asimov didn’t originally write his three laws of robotics for I, Robot.

Therefore "Not many people know" is bollocks, really.


Ways round Asimov's rules

Asimov himself recognised that a robot (i.e. strong AI) could be made to circumvent his laws in several ways:

1) Alter the robot's internal definition of a human so that some classes of human are excluded. This is explored in "The Naked Sun", where there are attempts to kill Lije Baley by persuading a robot that he is not fully human, on the basis of prejudice against Earth humans. There was a recent study showing that current "AI" systems are biased in just this manner because of the preponderance of white faces online!

2) Engineer a situation where a robot in an unmanned vehicle does not comprehend that a vehicle CAN be manned. Again, explored in "The Naked Sun".

3) More weakly, the First Law can be circumvented by daisy-chaining actions, each of which is innocuous but which have a fatal cumulative effect. Again, two scenarios (one successful) in "The Naked Sun".


"A robot could be great if it improves the quality of life for an elderly person as a supplement for frequent visits and calls with family. Using the same robot as an excuse to neglect elderly relatives would be the inverse."

That's a good summing up of the difference between intent (which I'm using for something very human) and function (something a machine can have).

A human can use all kinds of tools/measures/strategies to neglect their elderly relatives - but can be taken to task for it. In other words, the human can think beyond the specified goal or function, and place it in a wider context. The fact that humans don't always think of ethical implications, or often ignore ethical criticism, is no counter-argument whatsoever. The important thing is that they should (whether they do in practice or not - it's an aspiration). The machine can only serve the function, and be judged on how it does this. Intent and meta-thinking is not relevant to it.

(As an aside, the ethical imperfection of humans is one of the most brazenly hollow and self-serving arguments coming out of Silicon Valley fanboys to justify replacing them with machines).

Using this intent/function distinction, a big problem with AI becomes clear. AI, far from being neutral, always carries a hidden payload of intent: the intent of those who designed it, those who market it, those who make money from it, and those who use it. It's not the machine's fault in any sense that it carries this payload, and it's no flaw from the machine's (fitness for function) point of view. But until we get true strong AI, AI will always carry this hidden intent.

This is very different with humans. Although parents are sometimes blamed when someone does something terrible, no-one would ever describe conception of a child as a design process, over which parents have control. Even upbringing (which has more of an influence) is very different from the design of a machine.


"She frets about AI robots that may train children to be ideal customers for their products."

Which TBH is exactly what toy mfgs are doing as the technology advances.

I've often wondered how many staff at toy-making companies have moved out of the industry and into less morally ambiguous industries, like drug dealing or used car sales.


Easy decision

Run down the person who ran onto the road where they don't belong. The car should stay off the sidewalk, because that's where people belong. The only time the car should drive on the sidewalk is to avoid an accident in the street where it can be 100% sure there is no one on the sidewalk to harm.

Yeah, yeah, someone will say "but the person who ran onto the road was a child who didn't know better". If they didn't know better that's bad parenting, and if they were too young to be taught not to run into the road it was bad parenting to let them run free near a road. People should have a presumption of safety if they are where they're supposed to be, i.e. on the sidewalk or properly crossing in a crosswalk. Just because they're old doesn't mean their life is less valuable than that child's. Maybe the child grows up to be a serial killer, or Donald Trump.

I think this should be the case even if four people were in the road where they're not supposed to be versus one person on the sidewalk. It shouldn't be a numbers game. Heck, if you really want to get dark, you could almost concoct a scenario to murder someone - figure out a spot where your target walks by regularly where an autonomous car would be forced to go to avoid someone in the road. Hide nearby across the street as he walks by each day until a car is coming along at just the right time and two of you burst out into the road at the last second, forcing the car to kill your target!


Re: Easy decision

We need to consider all the facets of AI: the rules we should mandate and recommend, and even whether we need rules at all.

The lawyers will need a guide to get the suits going after all.


"just to be looked after by robots?"

Right. AI will be the replacement for TV, and parents will still do nothing for the education of their offspring.

Isn't the future great ?


Raw Base

The more one sits in a computer chair, the more sure one becomes that it's not a computer any more, once it's connected to the internet. It's something else, isn't it?


Re: Raw Base

Quite so, Tail Up. Of that can one be absolutely certain. And for a few, who be not necessarily akin to any in that and those extolling themselves as the Chosen Few, a quite magical creative tool and almighty destructive weapon too, be IT theirs to wield as they choose to see fit.

Where would you like to begin to prove the facility/ability/utility?


Re: Raw Base

Ma. Gra. Thea.

Who else can turn it back to life and into living?

55 73


Re: Raw Base

"...and almighty destructive weapon too, be IT theirs to wield as they choose to see fit"..... - as nothing as any thing/being hat helps the Void garner more of The Field.


"Kill switch" or equivalent

"Closing the stable door after the horse has bolted" springs to mind.

If someone, or a group of someones, with sufficient capability and will develops an 'intelligent machine' that turns out to be a malicious entity, whether intentionally or unintentionally, and that doesn't adhere to your 'rules', then at that point you are screwed.

How many very intelligent kids are out there who are outcasts because of various definitions of how an individual should behave or be perceived in society? Look at the number of hackers who get caught, and compensate for those who don't. I bet you would find many who would love to 'stick it to the man', or act for some other political reason.

The benefit of the internet is that it connects almost everything; that is also the problem.

Once it is loose, the probability of stopping it becomes almost zero.


Re: "Kill switch" or equivalent

The Outcast/etc dressing will hit the tops of the industry. Get it, have it, sell it, profit from it - until we come to ask for help.



The Register - Independent news and views for the tech community. Part of Situation Publishing