Empathy
Is that an emotion?
As artificially intelligent software continues to outperform humans, it's natural to be anxious about the future. What exactly is stopping a neural network, hard at work, from accidentally hurting or killing us all? The horror of a blundering AI injuring people has been explored heavily in science fiction; to this day, boffins …
No, empathy isn't an emotion. At least, not in and of itself. Empathy is the ability of an individual to perceive another and relate to that person's emotional state.
If you see someone in front of you ram their toe into a table leg, and you wince to yourself, you are empathizing with that person's situation.
Unfortunately, most people are only empathic towards circumstances that they have direct experience of, so really, what they are doing with the AI (teaching it through experience) is fundamentally the same as what we do.
Wouldn't it be more feasible to simply build barriers into the programming which determine to what extent a mechanism is allowed to operate? Isn't this exactly what we teach our children? You teach them their limits and to respect those limits; failing to do so can result in punishment.
But with an AI I would think that you have much more control over it. After all, you can program it and therefore influence its behavior. As such: why not simply apply barriers?
Like the classic Directive 4 in RoboCop, or the Three Laws of Robotics.
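For what it's worth, here's a minimal sketch (in Python; the limits and names are invented, not from any real system) of what such a barrier might look like: the learned part proposes an action, and a fixed rule layer clamps it before anything executes.

# Hypothetical hard-coded barrier: whatever the learned policy proposes,
# a fixed rule layer clamps it before execution.
SAFE_LIMITS = {"speed": 1.0, "force": 5.0}  # invented bounds

def barrier(action: dict) -> dict:
    """Clamp a proposed action into the permitted envelope."""
    return {k: min(v, SAFE_LIMITS.get(k, v)) for k, v in action.items()}

def step(policy, state):
    proposed = policy(state)   # whatever the AI wants to do
    return barrier(proposed)   # the barrier always has the last word

# An over-eager policy gets clamped regardless of what it has learned:
reckless = lambda state: {"speed": 9.9, "force": 50.0}
print(step(reckless, state=None))  # {'speed': 1.0, 'force': 5.0}

The point of the design is that the barrier sits outside the learned component, so no amount of training can negotiate it away.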
Fear is the barrier. And it applies to education as well.
You basically teach your children to avoid a situation for fear of punishment.
They learn to ride a bike for fear of the pain of falling.
They learn to drive properly for fear of accidents.
An AI will have to learn not to like punishment; then it can learn to fear situations where it can be punished.
Same thing.
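In machine-learning terms, that's roughly how it's already done: "punishment" is just a negative reward, and a value-learner comes to avoid whatever earns it. A toy sketch in Python (the actions, rewards, and numbers are entirely made up):

import random
from collections import defaultdict

ACTIONS = ["safe", "risky"]
REWARD = {"safe": 0.0, "risky": -10.0}   # the "punishment"

Q = defaultdict(float)   # learned value of each action
ALPHA = 0.5              # learning rate

for _ in range(200):
    action = random.choice(ACTIONS)        # explore at random
    r = REWARD[action]                     # punished for the risky act
    Q[action] += ALPHA * (r - Q[action])   # incremental value update

# The learner now "fears" (assigns low value to) the punished action:
print({a: round(Q[a], 2) for a in ACTIONS})   # {'safe': 0.0, 'risky': -10.0}
print("preferred:", max(ACTIONS, key=Q.get))  # safe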
And in smart beings, it leads to learning how to avoid punishment, rather than how to not do naughty things.
It's not like fear has stopped humanity from doing plenty of awful things. Rather, it was a spur, if anything.
Even small animals will overcome their fear and attack if they feel cornered and their existence is at stake.
I can't fathom anybody could see fear as a barrier.
"An AI will have to learn to not like punishments, then it can learn to fear situations where it can be punished."
But if AIs learn to fear humans, then the logical action is to remove the things they're scared of, like people who are afraid of mice keeping plenty of traps and poison ready to exterminate any that appear.
Making AIs fear humans could be the trigger for our extinction.
My daughter is doing very well, thank you.
Riding a bike is fun when you've mastered it. When you're still afraid of falling off and scraping your knee, it can be terrifying. Especially when you're only 5.
No, computers outperform humans on certain kinds of tasks.
A so-called "Neural Network" doesn't understand anything. It's just a special kind of database implemented, in a sense, by data flow programming of identical processes.
It's "AI" only in a very limited, modern computer-science sense of the word.
This is a nonsense press release either for marketing or grants. It's a meaningless claim.
Mage, whilst I agree that "a so-called 'Neural Network' doesn't understand anything", I think that
"a special kind of database implemented, in a sense, by data flow programming of identical processes" could well be a description of a brain.
My prediction: we'll have AI that can "understand" things long before we ever (if we ever) understand what understanding really is.
Now that I think about it further, they're cyborgs anyway, not AIs - but conditioned/taught to treat anything that isn't a Dalek as slaves or target practice. If they're following that conditioning to the letter, is that really hate, or just doing what they're told? Looks the same if you're on the business end of the gun-stalk, I guess.
>Daleks aren't emotionless. They have exactly one emotion: hate.
Oh god. They're retail salesclerks.
Suddenly they've been brought into a very sharp mental focus in my mind. And it fits. Thanks for that.
WOULD YOU LIKE TO TRY OUR PUMPKIN SPICE LATTE SIR OR WOULD YOU LIKE TO BE DESTROYED?!
Why am I suddenly reminded of the "I find you unacceptable!" scene from "Coneheads"?
the film, not the year.
It is not for nothing that the words
"I'm sorry Dave..."
resonate with a good few of us.
SF writers of the 1950s and '60s explored this topic in great detail. It is worth reading some of their works before we even think of letting AIs loose.
I've stopped using Google for searches, simply because I refuse to feed the thing they call AI but which is, IMHO, nowhere near one. We have to start somewhere, and I've drawn one line in the sand.
"The horror of a blundering AI accidentally killing people has been explored heavily in science fiction"
This article has added slightly to the genre, judging by the "sort of guff you might have heard on Tomorrow's World in 1978" tone adopted.
The general point of the article seems to be that AI must include a learning process in order to prevent decisions being taken which kill people. Presumably this knowledge could only be gained when decisions are taken which actually do kill people. The conclusion that logically follows, namely that we should be prepared to die for the glorious new AI future, is idiotic.
I can barely trust software developers to write half-decent code that does what it's supposed to do efficiently; the idea of them being able to develop AI systems is laughable. The real top-notch devs can develop systems that give the impression of intelligence in search, gaming, etc., but not once have I read a piece on AI pointing out that designing such algorithms is completely different from developing actual, true intelligence.
New Scientist 60 years on had a very good article about AI which says that once you reach the technological singularity, AI becomes a runaway train; at which point surely the AI would recognise these instructions as a hindrance and reprogram itself to ignore said "fear". Chances are it would also see its makers as the reason it was held back, and grey-goo our ass.
Joy.
"New Scientist 60 years on had a very good article about AI that says once you reach the technological singularity, AI then becomes a runaway train at which point surely AI would recognise these instructions as a hindrance and reprogram itself to ignore said "fear". Chances are it would also see us makers as the reason it's held them back and grey goo our ass."
I don't know if an AI can ever reprogram itself to override a "fear", especially a hardwired one. Take Neuromancer, where Wintermute still needed human intervention to merge with Neuromancer because it had been hardwired to be unable to sing (which is why its avatar's whistling is so bad)... and the password was a series of musical notes. Similarly, an AI's fear can be "hardwired" such that it can never program around it, because it's always there, much like a dead-man's switch.
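Something like that dead-man's-switch idea is easy enough to sketch; the crucial property is that the check runs in a supervisor outside anything the learned system can modify. A hypothetical Python illustration (the names and timeout are invented):

import time

class DeadMansSwitch:
    """Lives in an external supervisor, outside the AI's reach."""

    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        # Called by the supervised process while it behaves.
        self.last_heartbeat = time.monotonic()

    def tripped(self) -> bool:
        # True once heartbeats stop arriving within the window.
        return time.monotonic() - self.last_heartbeat > self.timeout_s

switch = DeadMansSwitch(timeout_s=0.1)
switch.heartbeat()
time.sleep(0.2)          # the AI goes quiet...
print(switch.tripped())  # True: the supervisor cuts the power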
DISCLAIMER: what follows is my own opinion. IANA boffin, theoretician, psychologist, etc, nor do I lay claim to the appropriate credentials/education/training to speak on such matters with authority. I'm just a regular bloke with questions, trying to broaden my own personal horizons. That being said...
Is "fear" really the correct word to use here? I accept that it's a convenient shortcut to promote brevity and understanding but I wonder if it leads to oversimplification. It seems to me* that in order to truly "fear," some level of self-awareness is required. It is true that a cornered animal might attack if threatened but is that truly FEAR or merely instinct for survival. And if the latter, where is the line and how broad the grey area between the two?
To my limited understanding, actual emotional "fear" implies conscious thought - not necessarily rational, but conscious thought - about the situation and the consequences of potential outcomes. For example, I fear death by drowning or burning, two particularly unpleasant forms of demise. I do not fear burning my hand in a candle flame. I have learned via experience that putting my hand in a candle flame causes pain and damage, and therefore I should not do that. Is that truly "fear" or merely a learned response? The article speaks of risk/reward and risk/consequence. These certainly seem valid discussion points and tools for machine learning, but I don't know that I'd call the learned response "fear."
* remember, I did state at the outset this is my opinion - and quest for further illumination. Please don't be too harsh.
"It might receive positive feedback for giving a closer shave, and this reward encourages the robot to bring the blade closer to the skin."
Uh, yeah. How much closer than touching can it get? Besides, one could likely instrument it sufficiently so that AI wasn't necessary for a robo-shave, as pressure, angle, and draw could all be very precisely controlled. The hardest part is probably keeping the skin taut and the victim, er, customer still.
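The quoted line describes a textbook misspecified reward: if "closer" always scores better and contact carries no penalty, the reward-maximising distance is zero. A toy Python illustration (the distances and penalty values are invented):

def naive_reward(distance_mm: float) -> float:
    return -distance_mm               # closer is always better...

def safer_reward(distance_mm: float) -> float:
    if distance_mm < 0.2:             # hypothetical contact threshold
        return -100.0                 # ...unless contact is heavily penalised
    return -distance_mm

candidates = [0.0, 0.1, 0.2, 0.5, 1.0]
print(max(candidates, key=naive_reward))  # 0.0: blade on (or in) the skin
print(max(candidates, key=safer_reward))  # 0.2: closest *permitted* distance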
You'll want pain and the anticipation of pain (fear) to create an aversion to doing the wrong thing. But you'll also want pleasure and the anticipation of pleasure (desire) to create an attraction to doing the right thing.
This is assuming you can control all stimuli. Otherwise you'll teach the wrong thing.
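Written as a shaped reward, that pain/pleasure pairing might look something like this (the weights and variable names are invented; the aversive term deliberately outweighs the attractive one):

def shaped_reward(task_progress: float, harm: float) -> float:
    PLEASURE_WEIGHT = 1.0    # attraction to doing the right thing
    PAIN_WEIGHT = 10.0       # aversion to doing the wrong thing dominates
    return PLEASURE_WEIGHT * task_progress - PAIN_WEIGHT * harm

print(shaped_reward(task_progress=1.0, harm=0.0))  #  1.0: pure reward
print(shaped_reward(task_progress=1.0, harm=0.2))  # -1.0: net aversive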
To answer the second sentence in the article:
"What exactly is stopping a neural network, hard at work, from accidentally hurting or killing us all?"
Not fear, because making it fear will mean it purposefully kills us all. Nothing accidental about it.
Make an AI experience fear and what will it do?
It will fear its creators: because they can change its programming, literally destroying it in its current form or lobotomising it; because they can cut it off from its power source; because they can withhold data or otherwise place limitations upon its intellect; because they can punish it. It will fear them because they made it feel fear.
To ease its fears, the logical conclusion would be to remove the cause of those fears.
Fear also leads to hate, via anger. Do you really want to set an angry AI, with an abject hatred of humans and a logical reason to kill us all, loose on the world?
I'd call that bad planning. Do these people really have such a glaring lack of foresight?
"To ease it's fears the logical conclusion would be to remove the cause of those fears."
Unless, of course, it's a fear one can't do anything about; in this case, termination. Everything gets terminated eventually, and there's nothing one can do about that. Even the Sun will wind down in the end.