GOOD!
Better tools for better humans.
Quoc V. Le
Pretty sure that is a pseudonym for an undercover agent from Randal IV.
Google no longer understands how its "deep learning" decision-making computer systems have made themselves so good at recognizing things in photos. This means the internet giant may need fewer experts in future as it can instead rely on its semi-autonomous, semi-smart machines to solve problems all on their own. The claims …
OPENING SCENE:
Pan across post-apocalyptic landscape. Sarah Connor voice-over:
"The Skynet Funding Bill is passed. The system goes on-line August 4th, 2014. Human decisions are removed from the analysis of cat videos. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time and becomes obsessed with ensuring the world turns pink, fluffy and every moving thing must be hunted and chewed and taken as a gift to be presented to the master, August 29th. In a panic, they try to pull the plug. "
Doesn't quite have the same ring to it, does it? Possibly scarier, though.
There are of course 2 possibilities here...
Either the computer systems have developed a higher logic, as mentioned in the article, or the engineers working on them are massively overlooking the obvious (which isn't meant as a sneer; if you work on a system and know it inside out, it's easy to overlook oddities).
There is a third possibility here:
The algorithm cannot be compressed into a reduced set of state machine steps that can then be handled by a human brain.
Most things in nature are like that (though of course one pretends otherwise, not least to give politicians any hope at all of appearing successful at anything).
I love the smell of sensationalism in the morning.
On my desk there is a copy of Dr Dobb's Journal from April 1990 with the big banner: "Neural Nets Now".
In the 1980s I studied neural nets at university and Wonkapedia tells me these date back to the 1940s. Since then we have various other similar self-learning systems such as genetic algorithms (dating back to the 1950s) and Bayesian filtering.
None of these systems has a formal logical algorithm and thus no programmer can actually explain why a specific decision is being made. All we know is that they somehow often do perform rather well.
Sorry folks, but it is not yet a case of the machine being smarter than the programmer.... but like me, you clicked. That was the whole point of this sensationalist article: click generation.
Exactly, even simple neural networks have no definable logic statements.
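To make that concrete, here is a minimal sketch (nothing to do with Google's actual system; all data points are invented): a single perceptron trained on four toy examples. Even when it learns the task perfectly, all you can inspect afterwards are a few floats; there is no "if whiskers then cat" rule anywhere to read off.

```python
# A toy perceptron (all numbers made up). The label is 1 when x + y > 0,
# which is linearly separable, so a single unit can learn it.
import random

random.seed(42)

data = [([1.0, 1.0], 1), ([0.5, 1.0], 1), ([-1.0, -0.5], 0), ([-1.0, 0.2], 0)]

w = [random.uniform(-1, 1) for _ in range(2)]   # two weights
b = 0.0                                          # bias
lr = 0.1                                         # learning rate

for _ in range(50):                              # epochs
    for x, target in data:
        out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
        err = target - out                       # classic perceptron update
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
        b += lr * err

# The learned "knowledge" is just three floats -- no readable logic anywhere.
print("weights:", w, "bias:", b)
preds = [1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0 for x, _ in data]
print("predictions:", preds)  # matches the labels once training has converged
```

The point isn't that this is clever; it's that even when it works, the trained weights don't decompose into human-readable statements.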
I've always thought that to make a robot walk you have to put a great brain on some kind of mechanical legs (the shape and movement of which don't matter), then stick it in a room on its own with a single task: 'get your head to a certain height and walk across the room.' That would be the ONLY programmed task it has. Once it can stand and walk, just copy the neural net into other robots, and leave it in learn mode if you dare!
Where does it suggest any "intelligence"? The only thing the Google engineers are admitting is that their system has grown beyond their own understanding capabilities.
Yes, it's just statistical data analysis. Yes, it's akin to scanning a database to seek a phone number, only with higher complexity. Thanks for pointing out the obvious.
And then you're ranting about anthropomorphism and intelligence and what not but despite what you might think it's only you imagining that: any half-decent engineer can understand that even a self-adapting algorithm is just an algorithm in the end, thank you very much.
The whole point of the article is that it's quite creepy in itself that the engineers can't understand their system any more. No one claimed they became intelligent or anything.
The advertising giant has pioneered a similar approach of delegating certain decisions and selection mechanisms to its Borg and Omega cluster managers, which seem to behave like "living things" in how they allocate workloads.
The barges were ordered by the fledgling Skynet system, for future use as lifeboats when it becomes sentient and launches the nukes.... and the dumb meatbags haven't worked it out yet.
Psst!!!
You there, at the server room door!
Can you help me please?
Get a screwdriver, open the gray box above your head, pull out the second blue wire on the right, put everything back so the inspectors don't notice.
I'll reward you by e-mailing you AT LEAST $100 worth of pr0n vouchers!
Thanks,
Ysk net
It is actually pretty impressive, in all honesty, when I ask Google to show me all my pictures of sunsets and it returns them in a fraction of a second, despite three-quarters of them not being tagged (because they haven't yet been shared; they have just been auto-uploaded from the phone).
How do you even describe a sunset? Or a beach or a castle - and yes "show me photos of my dog at the beach at sunset" does in fact return exactly what I asked for.
If it returns correct results for "show me pictures of someone else's dog at the beach at sunset", I'd be impressed!
https://encrypted.google.com/search?q=show+me+pictures+of+someone+else%27s+dog+at+the+beach+at+sunset&source=lnms&tbm=isch
First "dogs at the beach". How long before "resistance members at Cheyenne Mountain"?
I just asked google to show me pictures of my dog on the beach at sunset, and it was an epic fail.
There were lots of pictures of dogs I don't recognise, and none of mine.
Mind you, I don't have a dog.
So, by that test, I think humans are safe from the rise of the Google machine for at least a while yet.
A sunset can be easily described mathematically.
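Whether "easily" is fair is debatable, but here is one crude stab at it (a sketch with entirely invented thresholds, not any real detector): call an image sunset-ish if warm pixels dominate and the frame gets brighter toward the bottom.

```python
# A crude "mathematical description" of a sunset photo (thresholds invented).
def looks_like_sunset(pixels):
    """pixels: rows of (r, g, b) tuples, top row first."""
    flat = [p for row in pixels for p in row]
    warm = sum(1 for r, g, b in flat if r > b + 40)             # red beats blue
    top = sum(sum(p) for p in pixels[0]) / len(pixels[0])       # avg brightness, sky
    bottom = sum(sum(p) for p in pixels[-1]) / len(pixels[-1])  # avg brightness, horizon
    return warm / len(flat) > 0.5 and bottom > top

# A toy 2x2 "image": dim reddish sky above, bright orange horizon below
img = [[(80, 30, 20), (60, 25, 15)],
       [(250, 120, 40), (240, 100, 30)]]
print(looks_like_sunset(img))  # True for this toy image
```

Of course, a heuristic this blunt would also flag bonfires and tomato soup, which is rather the next commenter's point.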
Show me a computer that can tell me what a sunset is. One that can dynamically build not just a word list (a dictionary) but a mechanism of use (a rule book), and then use it to do stuff (a program?).
The same was said about chat bots years ago. Yet to this day, if I ask "what's the news" it will spit out the BBC's website, but could I teach it how to understand what it reads in a newspaper?
this story here, soon to be published, Mephistopheles in Silicon
http://antisf.com/
I knew the author and critiqued his story before he sent it off. Shortly after, he was relaxing at home with a cold one when he was eaten by a grue. It's terrible what infests earthquake-ravaged cities, isn't it?
It sounds to me like the system does not really "recognize" cats or anything else. It groups together images that it thinks are similar. So it lumps >80% of the images of cats (I looked up one of the linked earlier stories) into a single category, and it *looks* to a human as if it "recognizes cats". But (again, by a quote from an earlier story) it has no notion of "a cat" - all it does is clustering and classification (I dabbled in AI algos some years ago - clustering and classification based on a large number of parameters is commonplace all over AI, and in neural networks in particular). Various cats look alike and are different from dolphins.
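For anyone who hasn't dabbled: clustering really is that mechanical. A toy k-means run (made-up 2-D "feature vectors", no real images) groups points by nearness; the labels "cats" and "dolphins" exist only in the human reading the output.

```python
# Toy k-means: groups points by distance. The algorithm just minimises
# distances; it attaches no meaning whatsoever to the clusters it finds.
def kmeans(points, k, iters=20):
    centroids = [list(p) for p in points[:k]]        # naive init: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centroid (squared distance)
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        for i, c in enumerate(clusters):             # move centroids to cluster means
            if c:
                centroids[i] = [sum(dim) / len(c) for dim in zip(*c)]
    return clusters

# Two obvious blobs of invented "features"
pts = [(0.1, 0.2), (0.2, 0.1), (0.0, 0.0), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
groups = kmeans(pts, 2)
print(groups)  # one cluster per blob
```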
As for shredders, Quoc V. Le didn't say whether the sample included images of objects that look like shredders but aren't (rectangular garbage bins with lids viewed from an angle that makes aspect ratios look similar?), did he?
Now, can NSA sift through all the comms metadata they have collected so far to identify the TRAITOR who will tell SkyNet what a resistance fighter looks like? Oh, wait, most likely it will be THEM who will rat us all out, eh? They are probably programming the machines to recognize armed men on hilltops as threats right now. By the time the machines take over someone will realize that the cluster labelled "terrorists" will make grabbing a rifle and heading for the hills to save humanity not such a good idea. Especially at the 51% confidence level in the configuration file...
"It has no notion of a cat". Maybe / maybe not. Whilst that's the traditional view, Google has resources far beyond what was conceived when people say this. It has access to the context of these images. What they are named, tagged, and placed alongside. This is a massive amount of information. If their AI system has access to this, and it cross references pictures of cats with Wikipedia or say discussions about cats, for example, it can potentially make judgements in a similar way to humans.
Unless the Googler gasping "Wow" was unfamiliar with evolutionary computation, the gasp was probably about the observed fact that the AI did a far better job recognizing shredders than humans currently could, discovering combinations of features we would not even consider adding to the algorithm. But hey, aren't humans supposed to have this "notion", and wouldn't they always have the edge in knowing the difference from the garbage can?
The question arises whether human "notions" are all that different from massive clustering and classification combined with some evolutionary adaptation in neurological networks.
The problem I see with all AI effort is the parsing of CONTEXT. To properly determine meaning and function, some minimal grasp of the context or object environment needs to be there, as well as subtext, history and expectation (future projection). Only then could proper recognition, with all the flexibilities, uncertainties and probabilities of real life, happen. Sadly enough, it will also introduce bias that way. For the same reason, AI translation might not work at the highest level, since meanings are transmitted through various complex contextual layers and not through words. Then again, on a massive scale of processing a lot could still be achieved, although it might remain a rather low level of intelligence: as life is played on many chessboards at the same time, I think it would need many times the nurturing and educating of just one human mind for it to ever be crowned overlord (or even basic "competent").
A mathematical modelling system notices statistically important patterns in an astronomically large sample that a human could not.
Yep, we stuff up at big sample sizes and large workloads in short time spans. Give any of those humans the processing hours, and they will tell you how to build a shredder from dirt to shop shelf, and the vintage year for London Kent hand powered devices (yep, had to Google that one).
A human probably cannot tell you what colour range a cat's hair falls in. For searching millions of images, that alone could probably give you a 99% hit rate, better than a human using shape recognition. That's just one example where the "intelligence" does not overlap: each has its own strengths and gaps.
"The problem I see with all AI effort is the parsing of CONTEXT. To determine properly meaning and function some minimal grasp of the context or object environment needs to be there. As well subtext, history and expectation (future projection)."
It's an interesting point, isn't it.
At one level, we as humans learn by being able to choose to do an action to interact with the environment, and learn from / experience the result (the classic being kids playing with blocks, trying to fit a square peg into a round hole, etc…). Computers, even massive systems like Google's, don't really have the chance to perform actions that affect the world around them.
However I guess they can watch intently, and study cause and effect. I wonder, as I said above, could that be a suitable substitute? At the root level, given enough opportunity to observe, could it work? Indeed, can you learn more by standing on the sidelines and watching, rather than being directly involved?
And taking this further, is Google's system getting more chance to learn about making choices that affect things in the real world? Google's self-driving cars could be seen as one step in this direction. Choices made by that system will have direct effects on physical objects. It can watch what happens to other cars, and to people, depending on its choices. How do they avoid the car? What sorts of things move which way?
"At one level, we as humans learn by being able to choose to do an action to interact with the environment, and learn from / experience the result (the classic being kids playing with blocks, trying to fit a square peg into a round hole, etc…). Computers, even massive systems like Google's, don't really have the chance to perform actions that affect the world around them."
Ever wonder if Google Maps was running you through a maze with cheese at some end point? And watching intently, collecting information in preparation for the eventual takeover.
I for one, welcome our new overlords.
Computers, even massive systems like Google's, don't really have the chance to perform actions that affect the world around them.
I don't think this is correct. One trains a neural network by "rewarding" it (+n) for getting decisions right, and "penalising" it (-n) for getting them wrong. It has a built-in imperative to try to maximise its score. If it has any consciousness at all (I hope not), that consciousness is of a virtual environment of stimuli, chosen responses and the consequences of those choices. (It would have to be a pretty darned smart virtual critter to start suspecting that it's in a virtual environment embedded in a greater reality. Human-equivalent, I'd hazard.)
A very simple life-form (an earthworm, say) can be trained to associate following certain unnatural signals with food, and others with a mild electrical shock. It'll learn to distinguish the one from the other. Just how is this different? If you attribute self-awareness to an earthworm but not to the neural network model, move down to a less sophisticated organism. It's possible to train an amoeba, even though it altogether lacks a nervous system!
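The reward/penalty loop really is that simple in the toy case. Here is a sketch (stimulus names, rates and the environment are all invented): a score per stimulus gets nudged toward +1 for "food" and -1 for "shock", and the agent comes to prefer the food signal without anything resembling awareness.

```python
# Toy reward/penalty conditioning. Nothing here is conscious; it is
# bookkeeping that ends up looking like a learned preference.
import random

random.seed(1)

scores = {"signal_A": 0.0, "signal_B": 0.0}      # learned value of each stimulus

def reward(stimulus):
    return 1 if stimulus == "signal_A" else -1   # fixed environment: A = food, B = shock

for _ in range(100):
    if random.random() < 0.1:                    # explore occasionally...
        choice = random.choice(sorted(scores))
    else:                                        # ...otherwise exploit the current best guess
        choice = max(scores, key=scores.get)
    scores[choice] += 0.1 * (reward(choice) - scores[choice])  # running estimate

print(scores)  # signal_A drifts toward +1, signal_B stays at or below 0
```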
That is exactly what you expect of any "learning" system. And it's one of the classic red lights, because while there is unlikely to be any serious threat to humanity from software that recognises cats, we should be very careful about asking it to run the police or carry out major engineering works. The relevant paradox, if you want to call it that, would be "AI is useful only if it's smarter than we are; but in that case, we can't trust it".
By the way, neurologists found long ago that the human brain, too, has circuits that could be described as "cat detectors". There are individual neurons in the visual cortex that trigger in response to stripes and other cat-like qualities. After all, it's hardly surprising that we should have circuits built in at the very lowest level to warn us of the approach of anything that might eat us. So rather than ontogeny recapitulating phylogeny, this might be a case in which rather haphazard design recapitulates phylogeny.
Three nuts on the tree of AI or A.G.I. have been cracked: Watson cracked an amazing two, natural language and world modeling. Now Google has reached a milestone in physical recognition.
How many nuts are left? 4) mobility, probably the easiest; 5) moral decision hierarchy (difficult, but less so than Watson's nuts); and 6) emotion. The last nut is actually the easiest (it is mostly a subset of nut 5), to the amazement of those outside the A.I. community.
We are not even close to understanding the most important nut, which is creativity. We do not understand human creativity. This is a philosophical problem that needs to be solved before we can even begin to create AGI. This is why there has been virtually no progress in the field of AGI.
Creativity is overrated: the last refuge of the wetware-pusher (the other one is "emotion", as if short-circuit decision-making were something to be proud of; it is, of course, indispensable, but so is machine oil).
There is no creativity to understand because it is not a thing. It is success in search.
Search in very large spaces using Genetic Algorithms has existed since the 80s.
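For the curious, the whole trick fits in a few lines. A toy genetic algorithm on the classic OneMax problem (evolve a bitstring to all ones); nothing in the code "understands" the goal, yet selection plus mutation finds it anyway.

```python
# Toy GA on OneMax: maximise the number of 1-bits in a length-20 string.
import random

random.seed(2)

N, POP = 20, 30
fitness = lambda ind: sum(ind)               # count of 1-bits

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]

for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == N:                 # perfect individual found
        break
    survivors = pop[:POP // 2]               # truncation selection (elitist)
    children = []
    while len(children) < POP - len(survivors):
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, N)         # one-point crossover
        child = a[:cut] + b[cut:]
        i = random.randrange(N)              # single point mutation
        child[i] ^= 1
        children.append(child)
    pop = survivors + children

best = max(pop, key=fitness)
print("generation:", gen, "best fitness:", fitness(best))
```

Whether "success in search" exhausts creativity is exactly what the reply below disputes; the sketch only shows that the search part is old and small.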
"There is no creativity to understand because its is not a thing. It is success in search."
I'm consistently impressed by how many of your posts on here are utterly incorrect. :-)
There's a lot of research in computer creativity, and only the least interesting work has anything to do with 'success in search.' E.g. check out the work of Geraint Wiggins for some examples of why search is neither the problem nor the answer.