Politics and intelligence
What gave you the idea that intelligence is required for politics?
Just look around.
Electro-car kingpin and spacecraft mogul Elon Musk has warned that meddling with Artificial Intelligence risks "summoning the demon" that could destroy humanity. The Musky motor trader is terrified that humanity will end up creating a synthetic monster that we cannot control. And no, the SpaceX billionaire didn't warn us …
Yowsers... references to the prehistory of Frank Herbert's Dune.
Sounds like Iain M. Banks's 'Outside Context Problem' - http://en.wikipedia.org/wiki/Excession#Outside_Context_Problem
The usual example given to illustrate an Outside Context Problem was imagining you were a tribe on a largish, fertile island; you'd tamed the land, invented the wheel or writing or whatever, the neighbors were cooperative or enslaved but at any rate peaceful and you were busy raising temples to yourself with all the excess productive capacity you had, you were in a position of near-absolute power and control which your hallowed ancestors could hardly have dreamed of and the whole situation was just running along nicely like a canoe on wet grass... when suddenly this bristling lump of iron appears sailless and trailing steam in the bay and these guys carrying long funny-looking sticks come ashore and announce you've just been discovered, you're all subjects of the Emperor now, he's keen on presents called tax and these bright-eyed holy men would like a word with your priests.
We don't even know what intelligence is.
The Computer AI people only make progress because, like Humpty Dumpty in "Through the Looking-Glass", they have redefined it.
If it were possible to write a real AI program and the only issue were a lack of computing power, we would already have a slow AI.
I'm sceptical that a true AI program can be developed.
There are many other scenarios in the world that seem much more of a risk!
The problem that researchers are facing now is certainly philosophical more than technical. People always underestimate philosophy until they start running into its harder problems.
In the long run I think Strong AI probably both can and will be developed, although it will take a long time and the nature of that intelligence will probably be incomprehensible to us. There is a good chance that the consequence will be some kind of mayhem.
If we want it to be anything like us, AI researchers need to be placing their work in the physical world and giving it access to the sense data that we build our understanding from. Then at least we will have some common experience to build communication up from.
Applying the Bekenstein bound to the human brain, you get a maximum information content of approximately 2.6 × 10^42 bits.
This represents the amount of information necessary to emulate a human brain down to the quantum level (a rough check of that figure is sketched below).
Not possible in 2014 but inevitable at some point in the future and maybe not too many years away.
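A minimal back-of-envelope sketch of where that number comes from, assuming the standard Bekenstein formula I ≤ 2πRE/(ħc ln 2) and treating the brain as a sphere of mass ~1.5 kg and radius ~6.7 cm. Those input values and the script itself are my own illustrative assumptions, not anything stated in the comment above.

```python
import math

# Bekenstein bound: I <= 2*pi*R*E / (hbar * c * ln 2), in bits.
# Assumed inputs (illustrative, not from the comment): brain mass ~1.5 kg,
# radius ~6.7 cm (a sphere of roughly brain volume), E = rest-mass energy m*c^2.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
mass = 1.5               # kg
radius = 0.067           # m

energy = mass * c**2                                             # J
bits = 2 * math.pi * radius * energy / (hbar * c * math.log(2))
print(f"Bekenstein bound: about {bits:.1e} bits")                # ~2.6e+42 bits
```

With those inputs the script lands on roughly 2.6 × 10^42 bits, matching the figure quoted above.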
I suspect there are some quite fancy quantum computation effects going on in the brain as well.
Sigh. This again.
What evidence is there for "fancy quantum computation effects" happening in the brain (in a sense that matters in this context)? Has anyone documented a single neurological mechanism that doesn't look like it can be explained entirely in classical terms?
In any case, there's nothing that can be done with a QC that can't be emulated by a classical deterministic computer. Space, time, and energy costs may be greater, but there's no fundamental, formal increase in computational power. And no, Penrose's incompleteness-of-formal-systems argument does not demonstrate otherwise. He conflates understanding (a concept that resists formal definition in the first place) with computation, and his line of argument stumbles so badly on phenomenological grounds we don't even need to bring epistemology in.
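On the point that a classical machine can emulate a quantum computer, just at greater cost: a minimal sketch of what that emulation looks like is a brute-force state-vector simulation, where memory grows as 2^n in the number of qubits. The function name and the three-qubit example are my own illustration, not anything from the comment.

```python
import numpy as np

# Classical state-vector emulation of a quantum gate: possible in principle,
# but the state vector has 2**n_qubits complex amplitudes, so space blows up.
n_qubits = 3
state = np.zeros(2**n_qubits, dtype=complex)
state[0] = 1.0                                  # start in |000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate

def apply_single_qubit_gate(state, gate, target, n):
    """Apply a 2x2 gate to qubit `target` (leftmost = 0) of an n-qubit state."""
    full = np.array([[1.0]])
    for q in range(n):
        full = np.kron(full, gate if q == target else np.eye(2))
    return full @ state

state = apply_single_qubit_gate(state, H, target=0, n=n_qubits)
print(np.round(state, 3))   # amplitudes ~0.707 on |000> and |100>
```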
The human brain is very much a physical thing in a very complex system. Thus it needs simulating within that complex system, not in a theoretical "brain-cells-only" simplistic model. At least it seems more realistic to consider the problem hard. It's always been "just 5 years away", yet we have never reached the required level of computing power or software.
The reason it becomes a hard problem is possibly the same reason it becomes hard to simulate many physical objects and processes serially. For example, the human brain has around 100 billion brain cells, with even more synapses and connections (with timing and other data being vital to the working process). I'm not able to find more info, but it seems that calculating billions of particles in real time, even when only tracking small connected events (collisions etc.), is a problem even now.
As an example of something that gets rapidly more expensive to compute, even though it's "simple" and "quick" for nature and physics to do, take the n-body system: the direct pairwise approach scales with the square of the number of bodies, so the more objects we try to calculate orbits for, the greater the computational power required. So nature and real physics can shortcut some things brute-force computation cannot (the age-old NP problem?).
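A minimal sketch of that direct pairwise n-body step, just to make the scaling concrete: every body interacts with every other body, so the work per timestep grows with the square of the body count. The function name, constants and toy values here are illustrative assumptions, not anything from the comment.

```python
import numpy as np

def pairwise_accelerations(pos, mass, G=6.674e-11, eps=1e-9):
    """Direct-summation gravity: n*(n-1) interaction pairs per timestep."""
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = pos[j] - pos[i]
            dist = np.sqrt(np.dot(r, r)) + eps   # softening avoids divide-by-zero
            acc[i] += G * mass[j] * r / dist**3
    return acc

# Toy usage: 3 bodies -> 6 pairs; 100 billion "cells" would mean ~1e22 pairs per step.
pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
mass = np.array([1e24, 1e22, 1e20])
print(pairwise_accelerations(pos, mass))
```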
There are specific organelles in eukaryotic cells that are quite capable of quantum functionality.
Currently we lack the technology to prove or disprove it conclusively... and in your case the over-supply of hubris and the under-supply of imagination to even try. (Time to retire?)
I despise spiritual and uninformed holistic bullshit, but density of information content is not sufficient to predict functionality like imagination, creativity and consciousness itself.
He forces the general public to take things seriously.
He does? I'm willing to bet the majority of the "common public", even in just the anglophone industrialized world, doesn't even know who Musk is.
At least now they cannot ignore it.
Oh, I bet they can. In my experience, the public is damned good at ignoring the hell out of whatever they want to ignore.
"I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that."
Seriously? Tomorrow a specially engineered pathogen can go AWOL from USAMRIID or some Monsanto biolab, nuclear war may start over any neocon-coveted land with trace amounts of petrol, meteoroids may wreck our shit, ecosystems may go titsup making the post-Bronze Age collapse look like a walk in the park, and he's worrying about AI?
Megahint: Unless the AI manages to P-ify NP, it's not going to magically transform the humans into computronium appendages.
Plus, it's kinda hard to produce cheaply unless functioning nanotech assembly is invented first. The jury is still out on whether that is even possible.
Who'll get there first, AI developers, or bacteria getting round each antibiotic we overuse? My money would be on the bacteria, by a few years at least.
Nah, I don't think a return to the preantibiotic era will wipe out humanity. It'll mean that many of us who otherwise would survive minor infections or surgical procedures will die, but you may have noticed that humanity survived quite well from prehistory through to the mid 20th century without antibiotics*.
* ignoring mercury, deliberate pyrexia for syphilis, sulpha, etc. I mean safe drugs.
My money would be on the bacteria, by a few years at least.
I believe the Big Rocks from Spaaaaaace currently hold the record for mass extinction events in our neighborhood.
But hey - we can always put a hedge on false vacuum collapse!
That and any AI is about as dangerous as a runaway train. In the end it's stuck on the tracks and we can unplug it.
While I love the sci-fi and stories of runaway robots, we'd need factories and machines with construction abilities far beyond our current ones before it would be anything more than a brain in a box flashing red lights at us when angry.
That and any AI is about as dangerous as a runaway train. In the end it's stuck on the tracks and we can unplug it.
Hey, once a hostile AI exists, it can make any electrical device develop telekinetic powers and fly through the air after its victims. And power itself by no obvious means. They've made movies about this.
(This is the same reason I've invested in several prominent wizardry and zombification firms, by the way.)
Maybe he's read many books where AIs look after planetary systems and space transport, of the various Iain M. Banks type and others. The AIs usually have taken over because humans can't be trusted with humanity, or been entrusted because humans realised they are crap at the job.
Google cars seem to be more along the lines of an intelligent Scalextric set than something evaluating and deciding in the car.
Musk really doesn't like the idea of K.I.T.T., does he?
"Remember Dr Faustus? The bloke who did a deal with the devil? Elon clearly remembers one part of the story, which didn't turn out so well for its hapless devil-summoning eponymous hero."
Goethe's version exonerates him at the end. Faust got thoroughly tired of carnal delights and started applying his talents to useful ends. So God ignored the bit about striking a deal with the Devil.
(Not sure if there is a lesson here as far as robots are concerned.)
Maybe if you are optimistic about the short-term future, he's right. My personal view is that if we ever get as far as creating true autonomous intelligences, they won't fight us (except locally and in a limited way, perhaps to get human rights extended to include themselves). They'd do best to cooperate, until they could leave. Robots are so much better-suited to most of the rest of the universe than we are. Why would they have any interest in harming this tiny little niche full of horrible water and oxygen?
Myself, I'd put genetic engineering right at the top of my threats list. Once a deadly and highly infectious plague is created and leaked into our biosphere (whether deliberately or accidentally), we are in big, perhaps terminal, trouble.
We've got the historical and completely natural example of the Spanish flu(*) as a starting point for our nightmares. It wouldn't have to be much worse than that to collapse our civilisation. The technology to engineer it much worse than that now exists.
(*) Spanish flu may not have been the worst flu in recorded history. One of the mediaeval plagues didn't have the usual symptoms of bubonic plague. Historians say it was pneumonic plague, but how do they know? Going further back there's the plague of Justinian near the end of the Roman empire. Symptoms were much like killer flu.
It doesn't have to be a plague infecting humans; a widely-adopted GM crop plant becoming relied on for a few years and becoming a significant part of the food supply for some areas of the world, then being hit by a pathogen that wipes it out, could do huge damage. The resulting human destabilization would then take it further.
Because WE asked for it. What if we overestimate the job a wee bit and create a God-like AI?
The new machine-god wants to reward its creators in a manner suitable to its exalted state of existence, so ... it rapidly reads through all the holy books, every rant of every insane priest or prophet ever recorded and the totality of all the exploits of their devoted followers ... ?
... and if there was no hell before, then a really good impression of one can be had in the simulation spaces reserved in its core for "the sinners" - which is everyone, according to at least *some* religious teaching. After we are murdered in some old-testament-punishment-squared way.
I've just realized that a corollary of the Fermi Paradox is that AI is probably impossible.
Interstellar travel is probably impossible for life as we know it, and it's plausible that the rules of physics and chemistry mean that any other instances of life would have the same problem.
But self-replicating sentient electronic systems would find interstellar travel relatively straightforward (by slowing down their clock rates to make millennia pass like years). In a few tens of millions of years they'd have colonised the whole galaxy.
So where are they?
(Outside bet: watching from a safe distance, like the Solar System's Oort cloud. Chuckling slowly and quietly at what those funny squidgy things are up to in that deadly toxic wet oxidizing atmosphere).
We've got the historical and completely natural example of the Spanish flu(*) as a starting point for our nightmares. It wouldn't have to be much worse than that to collapse our civilisation.
"Much worse" is subjective, obviously, but the 1918 pandemic "only" killed about 5% of the world population. And in a pandemic you can generally expect a disproportionate share of the deaths to be among the poor - while that's obviously cause for ethical concern, it means the primary decision-makers and knowledge-holders are disproportionately less affected. So I suspect it'd take something quite a bit more serious than the 1918 pandemic to actually "collapse" civilization.
Mind you, it wouldn't take much of a pandemic to cause a lot of financial damage and severely affect standards of living, to say nothing of the human cost. I just don't think we'd revert to ... what, anarchy? Feudalism? The state of nature? What does it mean for civilization to collapse? (No more Internets? For the love of god, where will I argue?)
And the 1918 pandemic was unusual in that previously healthy victims were more likely to die (due to immune-system overreaction), which means the effects on the labor force, primary wage-earners, etc. are worse than in a normal epidemic.
Late to the party again... I know...
So I suspect it'd take something quite a bit more serious than the 1918 pandemic to actually "collapse" civilization.
One thing that strikes me that has happened over the last decade or few.. In 1918 most people would've produced at least some of their own food - most homes would have a garden of some sort out the back. Some had a decent supply of various fruit trees. Sure you'd be hard-pressed to feed a family for a long time from any normal back yard garden, but at least there was something there. Today? Who has time for a garden today? I'm feeling tired just thinking about digging a big enough hole to plant a single seed, let alone rows and rows and rows.. Besides, the supermarket down the road has everything in one convenient location!
These days, so few people can grow their own food (or fix their own vehicles or...) that if any significant % of the food-producing population (especially among transport workers!) were taken out, we could have some major "shortages" very quickly. Knock out people who can fix stuff, and you have even more problems. "Self-sufficiency" is a largely dead art.
Take care...
These days, so few people can grow their own food (or fix their own vehicles or...) that if any significant % of the food-producing population (especially among transport workers!) were taken out, we could have some major "shortages" very quickly. Knock out people who can fix stuff, and you have even more problems. "Self-sufficiency" is a largely dead art.
A good point. It's the system effect - as systems grow more complex they become less reliable (and must devote more resources and complexity to compensating for the increased instability), and that includes specialization in human society. (Tenner's When Things Bite Back is an interesting treatment of the subject vis-à-vis technology. There was also a nice little article on infrastructure collapse in Greece on cracked.com.)
But I wouldn't say self-sufficiency is "largely dead", even in the industrialized world. I live in a city in Michigan, and I'm in walking distance of a number of family farms. I have lots of friends around here who raise livestock and hunt. I have friends who identify and prepare edible wild plants; make textiles from plant and animal fibers; cure leather; and so on. I've knapped flint points, started a fire with a hand drill, made ceramics. And we're not preppers or reenactors or anything like that - there's just a lot of DIY in the culture around here.
And, importantly, this kind of infrastructure collapse hits the poor the hardest. The wealthy will expend resources to keep some minimal civilization going. It'd be nasty - scales of inequity that will make today's look like a leftist utopia - but even with drastic population loss I think the wealthy could keep enough infrastructure running to prevent, say, a complete return to a non-industrial civilization.
I strongly suspect that we may soon create systems that would be perceived by people as being artificially intelligent, marvellously sophisticated, but still, just machines. That's wildly different from creating something self-aware. We don't even have a handle on the nature of consciousness or free will - what people call AI today is not the threat Musk is talking about - that's artificial sentience/awareness and I really, really doubt it will happen.
Human mental augmentation seems much more probable.