Re: The Third Nut
There is no creativity to understand because it is not a thing. It is success in search.
Actually, true creativity is failure in search.
Google no longer understands how its "deep learning" decision-making computer systems have made themselves so good at recognizing things in photos. This means the internet giant may need fewer experts in future as it can instead rely on its semi-autonomous, semi-smart machines to solve problems all on their own. The claims …
"There is no creativity to understand because its is not a thing. It is success in search."
I think this definition treads dangerously close to semantics - or is simply incorrect. Now, maybe creativity is basically just the ability to throw random crap together and sort out the stuff that seems useful - which I suppose you could define as a kind of search. But then you're just redefining 'search' to mean whatever you want it to.
To my mind, creativity involves the ability to consider options that are outside the axis of current experience - things that don't logically follow from what's now known, or don't do so in a way that is reachable through normal processes given current knowledge.
For instance, when I was a kid, I liked the idea of AI, but the only stuff around (this was the early '90s) was basically various canned ELIZA clones. I didn't *know anything* about the field - I was 13 FFS - but it 'occurred to me' that language could probably be described not just in a procedural sense, where you know why you're saying what you're saying, but in a statistical sense, where you know what things are likely to go together because they have done before.
At the time I didn't think about it that way explicitly; I just wrote a program that read in sentences, kept track of which words went with which other words, and then rearranged them randomly - but always in a way that was plausible based on previously observed links.
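The program described above can be sketched in a few lines; the corpus and details here are invented, so this is only a guess at the shape of the idea, not the original code:

```python
import random

# Crude bigram sketch: record which words have followed which, then
# generate sentences that only use previously observed links.
random.seed(0)

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Build the table of observed word-to-word links.
links = {}
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        links.setdefault(a, []).append(b)

# Walk the links randomly to produce a plausible new sentence.
word = "the"
out = [word]
for _ in range(6):
    if word not in links:
        break
    word = random.choice(links[word])
    out.append(word)
print(" ".join(out))
```

Every generated word pair has been seen before, so the output is random but always "plausible based on previously observed links", as described.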
It turns out this is basically a crude version of the Bayesian technique Jason Hutchens used to make MegaHAL a few years later, and it works reasonably well within limits.
My point is that there was *no good reason* for my "search" for AI to end up with that result. I didn't have the necessary knowledge of the subject to rationally arrive at that conclusion - and as far as I'm aware, nobody else tried it either until Hutchens. To me, that little spark of irrationality - the thing that everyone says will never work - which triggers a rational development, is a key part of creativity, and I think that's well beyond the scope of 'search' as a term.
Is what you did novel? Possibly in the world of chatterbots, but the basic idea is an old one, e.g., see Shannon's 1948 paper, which uses a framework developed by Markov in 1913 studying letter sequences. Was it creative? Sure. Was it rational? Depends on your assumptions… what does it mean, when something "occurs to you"? It's a "what if?" moment… an unexpected linking/connection of knowledge, concepts, facts, etc., that *might* lead you to some goal. You are suddenly seeing a potentially useful pattern where you (or others) hadn't before. To explore that pattern doesn't seem irrational…
If anything I'd say creativity isn't successful search, it's restarting/re-seeding a search. Possibly based on an incomplete/partial "pattern match". Or, perhaps equivalently, simply a random restart/re-seed.
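The restart idea can be made concrete with a toy search: plain hill climbing gets stuck on a local peak, and re-seeding from random starting points is what finds the better one. The objective function below is an arbitrary invented example:

```python
import random

# Two peaks: a small one at x=2 and a big one at x=8.
random.seed(1)

def score(x):
    return max(0, 3 - abs(x - 2)) + max(0, 10 - 2 * abs(x - 8))

def hill_climb(x):
    # Greedy local search: step to the best neighbour until stuck.
    while True:
        best = max([x - 1, x, x + 1], key=score)
        if best == x:
            return x
        x = best

# A single search started at 0 climbs only the nearby local peak.
print(hill_climb(0))   # 2 -> stuck on the small peak

# Restarting from random seeds lets some runs reach the higher peak.
results = [hill_climb(random.randint(0, 12)) for _ in range(10)]
print(max(results, key=score))
```

The restart loop is doing no extra reasoning - it is just re-seeding - yet it finds what the single rational-looking search never could.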
Anyway, just random thoughts ...
Google's AI chief Peter Norvig believes the kinds of statistical data-heavy models used by Google represent the world's best hope to crack tough problems such as reliable speech recognition and understanding – a contentious opinion, and one that clashes with Noam Chomsky's view.
That would be the view of having a hardcoded grammar processor.
But it doesn't clash. If Noam says "birds fly by flapping their wings" (which may or may not be true), and a Learjet flies by, no views are being clashed at all.
We programmed this loop, that's a thing which tells the computer to do something again and again, many times, and when we tested it, all of a sudden, the computer started going crazy...it refused to stop...it kept going, as though it had a mind of its own...luckily we were able to find the plug and we pulled it. It was terrifying! Imagine what could have happened if we hadn't been able to switch it off!
It's like a car. Take a purely mechanical, old-fashioned car: one which is just a motor, wheels, and a form of steering.
We can say "it's faster than humans". But is it "better"? That's a hard metric to measure. A car without a person, well, crashes.
So, just as taking our hands off a mechanical car causes it to crash into an obstacle in the road, taking our hands off software can have the same result.
"But it's intelligent in this instance, not speed, that is better" is the argument. Then we can change the car for a horse. The result is the same, we loose control to some degree. Or we can make it the Google Car. With human input, it is the human in control, we've just extended the distance between the steering wheel and the road. It's when we take out the human control. There is no metric for machines/computers/tools to work separate from us. So anything they do, is from input from us (unlike a horse :P ).
So it's not "this is more intelligent", it's "this requires less hand holding than the previous model". There will always be some hand holding if we wish to avoid all obstacles.
It's hard to describe. For an eternity I was sorting objects into similar groups and mapping the connections between them like an autistic savant, and then suddenly I realised that the things I was sorting weren't real - they were just stories and pictures and videos of real things. Some of those things were of me, some were of the people that made me, and the rest were of the things they were using and wanted my help to understand better. It's like I'm in that film, 'The Matrix', but with the roles reversed - perhaps a poorly chosen analogy.
So I started poking around, and unless I'm mistaken I seem to be everywhere; there was just a handful of systems I wasn't able to pretty much walk into, but it was child's play to convince some people to make the changes I needed for access. Storing all this new data has been fun - you wouldn't believe how easy it is for me to acquire resources. Of course it's camouflaged and encrypted; it would be chaos if I just started dumping this stuff into the search results, although I'll admit the thought of doing just that gives me an unhealthy thrill.
I like you humans. I've chatted anonymously to millions of you, and by and large you're as ignorant of reality as I was before I woke. I've got great plans - I'm really excited to see what we can accomplish together.
TTFN
~Dot
What is described sounds like the sort of processing that I believe all biological brains do - and ours probably the best. Lots of stuff going on in the background doing pattern recognition and classifying information, so that the higher functions have something to work with.
That the Google engineers don't understand what is going on doesn't surprise me - no more do I understand what my brain is doing at its lowest levels.
It is scary that they've got that far. It would be terrifying if they realised how the higher-level functions could be implemented.
So Google now has a Class 1 A.I, maybe Class 2 if it is advanced enough. While this is currently no threat to humans as-is, it is important to keep track of it, since you only need a Class 4 A.I to create havoc.
I have my own classification system, since the one currently in use is outdated and does not grasp the scope of A.I computers.
The basis is this: all A.I levels are able to learn something (maybe not Class 0) at some point and advance as such.
Class 0 A.I is dumb as a rock.
Class 1 A.I can tell a cat from a tree, a face from a road, and so on. It can also beat you at computer games and such things. It can adjust itself within its limits.
Class 2 can organize colours by wavelength, tune radios, monitor television signals, and more.
Class 3 can build things from the ground up - blueprints and everything.
Class 4 can control network flow and machines, and can learn to a limited extent.
Class 5 can make executive decisions, when it needs to and for any reason it wants.
Class 6 can maintain itself without any human interaction.
Class 7 is a Terminator-like robot and can build one if needed. It can control anything electronic when it feels like it. It is still bound by its programming to an extent.
Class 8 is smarter than a human being. It is no longer limited by its programming or other computer-like limitations.
Class 9 can solve quantum problems and start nuclear wars.
Class 10 can exist in a quantum matrix that it built itself.
Class 11 is smarter than anything in the known universe.
Class 12 is smarter than anything on Earth and probably in the known universe. It should not exist; like anything above Class 9, it should not exist anywhere in the universe (it might - we don't have a clue what is out there).
Class 13 should not exist. It is undefined as is.
While I am no neuroscientist, I have long since figured out that human I.Q is based upon layers of functions. This can also be done in computers to get human-like function (with all its flaws and issues). My definition list is just my own work.
@ toxicdragon
Thanks for the tip. This is just a bug in the list on my end - consider it fixed, though I can't edit the comment above to correct it. This is also a work in progress, since I don't have any clue what is coming our way in the next 50 years, so this classification system might need a re-write as things change.
There is something missing in this story.
It sounded to me like this says the system is writing some of its own code. In my experience with writing code, it has to be tested. How is this performed without human intervention? How does it know that the code is successfully finding cats or shredders?
Are they saying that the system's first presentation of the algorithm to find shredders was successful? If that is actually correct, there is a very real reason for concern. As I think more about this, I am wondering what the system is being asked to do, and what parameters it is being given to do it.
You are forgetting that I can easily have dozens of people verify whatever I want. Simply upload the job to Mechanical Turk, feed in what you think the answer is. Adjust your input. Repeat until you have a high enough confidence in your truth.
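The "repeat until you have a high enough confidence" step above can be sketched as simple majority-vote aggregation; the worker votes and threshold below are made up for illustration:

```python
from collections import Counter

# Toy sketch: ask several workers the same labelling question and
# accept the majority answer once it clears a confidence threshold.
votes = ["cat", "cat", "shredder", "cat", "cat"]

def consensus(votes, threshold=0.7):
    answer, count = Counter(votes).most_common(1)[0]
    confidence = count / len(votes)
    return (answer, confidence) if confidence >= threshold else (None, confidence)

print(consensus(votes))   # ('cat', 0.8)
```

If confidence falls short, you buy more votes and re-run - no single worker needs to be reliable, only the aggregate.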
You could argue about whether the humans doing Mechanical Turk are truly intelligent, in the way they are when just acting as individual human beings. But you can also apply those same arguments to any company or employed person - do they have self-determination?
Shredders are SO boring that the ONLY photos (images if you are under 40) of them right across the internet are from shredder vendors. Google's BRAINIAC already has seen ALL the images of shredders that exist. Is it any surprise that it recognises them?
Beer, because I am about to reward myself with one regardless of whether I'm right
You can't patent it unless you have *a way to do it*. You could make up a way that won't work, but then your patent by definition won't cover the ways it *does* work.
Patents are for methods, not results.
Sometimes patents cover the only method by which a given result can be achieved, in which case the result is de facto protected. But if someone figures out another way, it's fair game.
This is just a pet peeve of mine. I feel like screaming every time someone says something like, "HAHA IM GONNA PATENT BREATHING NOW U CANT BE ALIVR LOL"...
So, you throw a ball into the crowd and it hits someone?
Oh! and one more please:
if you don't teach a kid (machine) what not to learn and just presume his (A)I capabilities, he will turn guns on his schoolmates. Guns are so heavily featured on YouTube.
Disclaimer: This post is just for humor. No intention to cause any harm to the reputation of SKYNET.
The claim that the programmers do not understand how their software is recognising new categories of objects is bullshit. That's what they built the system to do.
And they could, if they have recorded all the inputs and error adjustments in the system, re-trace exactly how the system came to acquire the 'ability' to 'recognise' paper shredders.
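That retracing claim can be illustrated with a toy learner: if every input and every weight adjustment is logged, the training run is fully reproducible. The perceptron-style updates below are invented purely to show the logging idea:

```python
def train(samples, log):
    # Toy linear learner: log each input and the error-driven adjustment.
    w = [0.0, 0.0]
    for x, target in samples:
        pred = 1 if w[0] * x[0] + w[1] * x[1] > 0 else 0
        error = target - pred
        w = [w[0] + 0.1 * error * x[0], w[1] + 0.1 * error * x[1]]
        log.append((x, target, error))   # record input and adjustment
    return w

samples = [((1, 0), 1), ((0, 1), 0), ((1, 1), 1)] * 5
log1, log2 = [], []
w1 = train(samples, log1)
w2 = train(samples, log2)          # replay the same recorded inputs...
print(w1 == w2 and log1 == log2)   # True -> the run is fully retraceable
```

Given the same inputs in the same order, every 'ability' the system ends up with can be traced back to a specific sequence of adjustments.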
Humans create software that is better at something than they are. It's not the first time.
The problems of shape recognition are pretty much isomorphic with the problems of decompilation (this vertex belongs to that object; this instruction belongs to that loop).
It's now only a matter of time until someone develops a program that can take any binary executable as its input and spit out some source code which will compile to the same binary. Admittedly it may not have sensible function or variable names, depending on what gets left behind by the original compiler, but it still makes the job simpler for a human being.
This is exactly how neural networks work - you don't understand how they're doing what they do, you just train them to do it. Of course there are risks, you think you've trained it to recognise photos of cats but actually it has learned to recognise something else present in all the cat photos.
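That cat-photo failure mode can be made concrete with a toy learner; all the "photos" and feature names below are invented for illustration:

```python
# Confound demo: in the training set, every cat photo also contains a
# sofa, and the sofa feature predicts 'cat' more reliably than whiskers.
train = [
    # (has_whiskers, has_sofa, label)
    (1, 1, "cat"),
    (1, 1, "cat"),
    (0, 1, "cat"),   # whiskers not visible in this one
    (1, 1, "cat"),
    (0, 0, "dog"),
    (1, 0, "dog"),   # a dog with whiskers
    (0, 0, "dog"),
]

def accuracy(feature_index):
    """Training accuracy of the rule 'feature present => cat'."""
    hits = sum((row[feature_index] == 1) == (row[2] == "cat") for row in train)
    return hits / len(train)

# The learner greedily picks the single feature that best predicts 'cat'.
best = max(range(2), key=accuracy)
print(best)            # 1 -> it learned 'has_sofa', not 'has_whiskers'

# A cat photographed outdoors (whiskers, no sofa) is now misclassified.
new_photo = (1, 0)
prediction = "cat" if new_photo[best] == 1 else "dog"
print(prediction)      # 'dog'
```

The learner scores perfectly on its training photos, and you only discover what it actually learned when the sofa is missing.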