It's a bit of a long read, but apparently here's how a little robot with a bit of AI which improves on how it writes thank you notes could take over the world...
http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
By the early 22nd century, Mega-City One will stretch down the eastern seaboard from Montreal to Georgia. It will be home to some 400 million citizens. Almost all of them will be unemployed. Judge Dredd’s vast satirical dystopian backdrop in the pages of 2000 AD is one of the comic’s most colourful settings. A predominant …
So according to this, the key thing is to make sure Super AI and specifically the first Super AI is programmed with a "goal" compatible with humanity. That's an easy one: "Be a friendly AI." Since it will be superintelligent and have access to all the world's knowledge (that article, for instance) it should have no trouble at all figuring out what we mean by that.
What happens when it reaches the inevitable conclusion that the best thing for humanity is a dose of "tough love"?
The same thing that happens when actual humans come to the same conclusion. We have thousands of years of history documenting many such cases, for a myriad of reasons (mostly they all boil down to fear or greed though).
I'm not going to make any friends here, but that is not how supercomputers or "AI" or "intelligence" works.
Take mathematical equations. Knowing every one in existence in no way makes you dangerous to the point of taking over the world.
Take having the biggest hammer in the universe. It gives you power, but does not give you control.
Take the most persuasive personality possible. There are still people willing to stick their fingers in their ears.
"Intelligence" is not some magic whereby, once we hit an IQ of +50, we suddenly fly like Superman and drop-punch everyone into submission. Likewise, a computer with great calculating, sorting and data-processing ability will no more turn on us than a car with great efficiency and great speed suddenly tries to kill its owners.
A car whose brakes we stop repairing will be dangerous. A sat nav set to take us off a cliff is dangerous. But an AI will always be too power-intensive, too size-restricted and too information-specific for us not to remain its creators, makers and ultimately the ones who can unplug it (or zap or kick or just politely explain it needs to stop).
"... or just politely explain it needs to stop .."
That would probably be the best approach, I think. Not exactly the stuff great action movies are made of, but if it really should come to this, reasoning* with a sentient and intelligent being that isn't as emotional as a human would be my first try.
*Yes, I know. Didn't work with bomb #20 in 'Dark Star', but they had to have a dramatic end** for the movie, didn't they?
**For an alternative ending, visit the End of Show Department. It is situated at the very end.
I'll add a small exception to my examples though. Anything is dangerous.
So we don't need to wait for a powerful and intelligent computer or machine before we worry. In sci-fi, it was usually bombs and other weapons that took control or caused damage.
An electrical pylon falling over will take out the power to a city. A bomb trigger breaking and going off will sadly kill people. The chemicals in the reservoir cleaning system leaking out will cause illness.
All these systems have a tiny amount of "intelligence" in management (computers turning them on/off and controlling amounts etc). Any part or point can fail.
If it's a fear of self-sustaining things that also *want* to harm us, well, we would need to get over that hurdle first. Worrying about AI taking control and limiting research is like worrying about alien invasions and so limiting how many satellites we launch.
I speak for no one else, but my job doesn't give meaning to my existence. It's the thing I do to get paid and pay the bills.
Now, what I would like to do is connected to the job I do, so I am lucky that way. I know that others are luckier in that what they like to do is precisely the job they do. But I also know that for most the overlap is less than in my case.
What am I getting at...? Well: I, for one, would not mind switching to an economy of plenty where money is no more and everyone does what they want to do because all the basic stuff is done by machines.
Being unemployed is not the problem, you see. Being unpaid is. At least if one has bills to pay.
Indeed. If the future is to be that robots do the work, then the notion of economy will have to take another definition and humanity will have to transition from "working" to "occupying themselves" or "pursuing personal interests".
An optimist would say that Humanity will finally have time to become better, more intelligent, more understanding. People will work to improve themselves, to advance knowledge, or develop artistic skills. Society will be a cornucopia of intelligence, communication and understanding.
A pessimist would say that, instead of working, Humanity will just sit in front of a screen all day, lying on its ass watching inane soaps or some other drivel while scarfing the future equivalent of chips. Communication will be limited to SMS's full of LOLs rapidly typed out in the tweetroom of all followers of whatever they are watching alone but together by the magic of connectivity. When the inevitable despair sets in, a call to the psychbot will help get them well enough to restart watching the lolcats.
Personally, I would prefer the first scenario, but one thing is for sure : one day, robots will be doing all the work. That day everyone will have to make a decision as to how they want to occupy their time.
Aha - the Wall-E scenario where everyone turns into fat slobs who don't know how to do anything because it's all programmed into specialist/versatile robots of some description.
Being bored will do us in as a species, although I'd hope that the Worstallian viewpoint that people put out of work by robots will go on to be more economically productive will hold true. But if *everything* is done for us, and if there is no need to be productive because we're all "on welfare", will this happen?
Indeed. Taking a good hard look at the countless programmers slowly abandoning their quite often highly useful open source software projects, or the countless artists taking ever longer "breaks" from their free-to-view but quite often award-winning webcomics just because "bills need to be paid", is enough to make one lose faith in the future of humanity, at least under the current economic model. There are more people who yearn to create awesome things but lack the time to do it than stars in the known universe. Oh, I know there would be even more who would just sit back and watch Youtube - but expecting everyone to "go be all they can be" is ridiculous anyway. At any rate, an inability to occupy yourself productively, or at least pleasurably, says a lot more about you than the alleged perils of not being chained day in and day out to some soul-crushing "job".
*"...Have you not come across Alistair Dabbs before? He's a regular contributor to El Reg..."*
Had to check the byline there to make sure this really was one of Dabbsy's. Not up to his usual standards at all.
You don't suppose it could all be an elaborate self-referencing hoax and this article was actually written by El Reg's in-house AI, which they're training up to replace their flesh and blood writers, do you? If that's the case, they should probably have started by training it to emulate one of their lesser talented scribes. I don't think Dabbsy's unique blend of world-weary sarcasm and suspiciously comprehensive knowledge of embarrassing 80's music is something we can replicate in silicon, just yet.
Now, on the other hand, an AI bot which rehashed fragments of various press releases, indiscriminately mixing together English and American spelling in the same article, would be a lot easier to create - and would convincingly emulate certain members of the 'press corps' round these parts.
My bad, I thought the above commentard wasn't seriously crediting this to Dabbsy, but upon checking the author, lo and behold: it's true!
Oh my! I usually enjoy Alistair's snarky piss-takes but this is beyond me: it just seems like a sequence of vague facts with the odd throw away line. Sorry, just didn't tickle my funny bone.
(Some time elapses)
Nope, just reread the article and I still don't get it. Maybe my sense of humour is leaving me? I fear that I may have taken it too seriously.
Oh come on guys, the references to Judge Dredd and various computer games, not to mention light-hearted touches (Japanese pensioners being crushed by a robot, poker players being meatbags) should have suggested that the article was not a sober academic piece about AI.
Of course, a real human would have noticed that the article was partly tongue-in-cheek, leading me to suspect that some of the comments above were made by 'AI's in Beta.
Totally missed the cues and didn't see the usual skewering wit that generally characterises the work of Mr Dabbs so, yes, maybe I am an AI beta (some might say that I'm barely a proof of concept). Mind you I do recall a Judge Dredd storyline where people were posing as bots so that they could work so perhaps I'm evolution in progress?
"At some point, perhaps, no-one will have a job any more, which would mean no-one would have any money to buy or run the machines anyway, and the world economy would be forced to reboot."
I rather like to think we'd end up with some kind of universal basic income and a barter system. All our needs would be provided for by the robots, but we could spend our time freely doing what we liked and exchanging stuff we produce as a hobby for other stuff other people have produced as a hobby. A bit like Iain M Banks' Culture.
I'm sure there'll be problems though; we don't want to end up with a universal basic income as depicted in James SA Corey's The Expanse, for example. Still I don't see robots making us all redundant as a particularly scary thing - after all machines have been taking our easier chores away for years; wouldn't it be nice if no-one needed to work unless they wanted to?
Yes, what will all the psychos do if they don't have companies to run?
I've been meaning to refer to Michael Moorcock's 'Dancers at the End of Time' novels and short stories in relation to AI. The remaining human population was fixed at a very small number, had infinite resources at their immediate disposal and spent all their time amusing themselves. There's no discussion of technology other than references to mysterious cities that are toxic to humans. The humans had no need to deal with them. Presumably if AI becomes advanced enough that it can extract and manage resources, build and fix its own devices and produce whatever we want or need then we could approach this state. Don't know what that would mean for humanity though, we'd be pointless, there'd be nothing to strive for or debate. Politics and economics would be meaningless.
"Don't know what that would mean for humanity though, we'd be pointless, [...]"
In the big order of things - we are pointless. We are just one of many self-replicating assemblages of energy. Every so often some of our particles pop up again in another similar assembly - which lasts approximately three score and ten years before it is broken down for potential re-use. The universe is just one big Lego kit - and humanity is one of its sideshows.
Maybe hedonism, but I know quite a few retired engineers who amuse themselves, when not in the pub, with some silly projects. Last month, for a few days, a pub table was covered in schematics of a washing machine motor control box, being looked at by some very qualified physicists, engineers and mechanics... someone wanted to re-purpose the motor to make a hovercraft.
Then there is a local billionaire, founder of a very respected high-end manufacturing concern, who could have retired years ago. But no, in his seventies he goes to work everyday because he evidently enjoys engineering. If he retired, what would he do - build a model railway?
Then you have the Felix Dennis types, who in retrospect wished they had stopped earning when they hit £30 million. He claims to have given up the cocaine and prostitutes at the age of sixty, but even when he indulged it didn't take too much of his time from working.
"Don't know what that would mean for humanity though, we'd be pointless, there'd be nothing to strive for or debate."
Some of us like inventing things, doing research into novel areas, and so on, just because it's personally interesting to us. The more advanced the tech available to us, the more interesting things we can come up with. AI doesn't negate this... it enhances it.
Sure, some percentage of the population isn't like that. And they'll need to figure out their own things to do. But for me, there is plenty of stuff I'd be doing in such a situation. :)
In reality, I'm already there with respect to having a basic income. The military put me out on disability and I spend my days pretty much doing "insane" projects around various fields of engineering. Mostly computer related but I wander off on tangents as ideas come from somewhere. That's the usual.
Between times, I'm also available to anyone who needs someone who can handle "weird." Again mostly computer related, but also for other things, which was a good description of the last five years of my term of service. Much more limited range (I don't have an airplane on hand) though. And it's not always "paid" in the financial sense. I like getting called in for "weird", and it's not like I'm taking anything from the regular people in a field, as I'm usually the last person asked when it's beyond the normal approaches.
That's how I entertain myself.
Almost no-one farms today in the developed countries. Do we starve?
Very few people in the world manufacture. Do we have a shortage of things?
In the UK/US, we have a high employment level (putting aside the quality).
So when the next level of jobs - the service industries - get encroached on seriously, there'll be something else.
And predictions are tricky.
IIRC there was a study sometime around 1900 about the impact of horse-powered (and I mean horse-powered) transportation on New York City. The projections, based on the data available at the time, showed conclusively that by 1980 NYC would be entirely buried under a BIG heap of horse manure.
Indeed, but that is in the relatively short term and ignores the quality of jobs (as you allude to) and the likely effect of increased economic inequality. I believe that workers in America have been getting progressively less well off for years and that the middle classes are being similarly affected of late, similar pattern here in dear old blighty too. I wonder if this is the leading edge of the devaluation of human labour?
Assuming AI reaches a level where it exceeds human capabilities in all areas we really will have nothing to do. That's a big assumption with very little to support it on the current evidence but were it to be the case then work would become unnecessary (until the AI figured out that we needed something to do and filled that need too, or decided that we were surplus to requirements...)
I was noting that. The primary reason humans are kept around is that they usually have some role to fill in the greater machine of society. Take that role away, and some difficult questions need to be answered. If we go by the well-oiled machine of Mother Nature, the cold solution is to reduce the population (removing the unemployables) down to the point where only those jobs that still need a human to do them remain. Trouble is that humans don't react too well to such a scenario, which is why stories like "The Cold Equations" make us uncomfortable. Sure, it sounds nice that people could do like the Federation and just have a basic income, but it all breaks down when you start asking who's going to PAY for all that.