The Apple Newton had a primitive form of this in the 1990s - it could straighten lines and circle circles (so to speak) in sketches. Pretty impressive for the time.
AI finally understands primitive sketches – aka marketing presentations
Artificial intelligence scientists have developed a neural network that understands incomprehensible scrawled drawings of the sort created by children, marketing departments, architects, design creatives, and so on. The academic developers of the "Sketch-a-Net" software proudly boast that their brainchild is actually better at …
COMMENTS
-
-
Wednesday 22nd July 2015 23:58 GMT JeffyPoooh
I don't mean to get all Philosophy 101 on you, but...
Some *humans* gather up a suite of sketches, and arbitrarily assign semantic definitions to each example. Then, a neural network, one that THEY trained, amazingly happens to agree with THEIR assigned definitions.
To make my point crystal clear, in your mind, replace the sketches with Rorschach Inkblots.
Who decided what the "correct" answers were?
And who trained the neural net?
See the issue?
-
Thursday 23rd July 2015 07:20 GMT chris swain
Re: I don't mean to get all Philosophy 101 on you, but...
Technically incorrect
Semantic meaning is surely assigned by the intention of the sketcher in deciding what to draw.
People don't train AI neural networks, data does, although people do select the training data and regime.
'Correct' presumably means that the result matched the intention of the person doing the drawing.
The article states that the algorithm relies on knowing the order in which the marks of the sketch were made so giving it a Rorschach test is irrelevant. Aren't Rorschach tests about drawing out subconscious influences?
I see no issues here
If you really must throw Rorschach tests at AIs use an image recognition AI (I'd be interested to see the results)
-
-
Thursday 23rd July 2015 12:43 GMT chris swain
Re: I don't mean to get all Philosophy 101 on you, but...
I disagree. The article seems to refer to generally accepted pictorial representations of fairly basic objects, not abstract concepts.
If I look at a photograph of a house and say "that's a car" my subjective interpretation is likely to be met with derision. You can argue as much as you like that my statement is valid from a subjective point of view but I'd still be wrong. Meaning on the other hand is a separate thing, a picture of a house might have a different meaning to me.
I've never studied philosophy, so maybe in that rarefied world there is no objective reality, in which case any classification problem is presumably impossible to solve, but back here in the real world there does appear to be a basic objective reality.
-
-
-
-
Thursday 23rd July 2015 00:47 GMT Anonymous Coward
There are at least a couple of ways to get to human-level AI. A: take the simple, effective algorithms that are already known and engineer them into an AI. B: keep elaborating the algorithms and throwing more and more hardware at the problems until you are using, say, 100,000 CPU cores.
Route B is the way they are going. Eventually, of course, they will realize that they can replace this algorithm with a simpler one that runs 1,000 times faster, and that algorithm with one that runs 100 times faster, and so on. Then you are back down to one core. Eventually you will see human-level AI, but there will be delays due to a lack of engineering insight. Even with the engineering solution you are going to need a good amount of low-latency memory: 128 Gbytes with a latency of less than 20 ns would be a good starting point. You could actually put 1 Tbyte of SDRAM on a single PCB and, for engineering reasons, run it below its normal speed. It would cost about $15,000 for 1 Tbyte at 20 ns.
Just as an example of a simple algorithm: given a hash function h(x) and a sequence a, b, c, d, e, f, ..., the hash walk starting from some seed value x is h(x+a), h(h(x+a)+b), h(h(h(x+a)+b)+c), h(h(h(h(x+a)+b)+c)+d), ...
If you showed that to most AI researchers they would dismiss it out of hand; if you showed it to an engineer, they would start thinking, "Hey, I could do this with the hash fingerprints, and wow, I could build a high-compression pattern detector using it that way."
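The hash walk described above can be sketched in a few lines of Python. SHA-256 is used here as a stand-in for h(x), and "+" is taken to mean byte concatenation — both are assumptions for illustration, not details given in the comment.

```python
import hashlib


def h(x: bytes) -> bytes:
    # SHA-256 as a concrete stand-in for the hash function h(x).
    return hashlib.sha256(x).digest()


def hash_walk(seed: bytes, sequence):
    # Chain the hash over the sequence: h(seed+a), h(h(seed+a)+b), ...
    # Each step folds one more item into the running state, so the
    # result is one fingerprint per prefix of the sequence.
    state = seed
    fingerprints = []
    for item in sequence:
        state = h(state + item)
        fingerprints.append(state)
    return fingerprints


# Two sequences that share a prefix share the corresponding fingerprints,
# which is what would make the walk usable as a cheap pattern detector.
fp = hash_walk(b"x", [b"a", b"b", b"c", b"d"])
```

Comparing fingerprints at matching positions then tells you exactly how long a common prefix two sequences share, without re-hashing the prefixes themselves.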
There is a big difference in mindset. I think it is still possible that human level AI within 5 years. Google clearly has the full skill set required. It is just a question of shifting some engineering types over to the AI department and letting them do what engineers do.
-
Thursday 23rd July 2015 12:25 GMT JeffyPoooh
"I think it is still possible that human level AI within 5 years."
SeanS4 (stop tailgating): "I think it is still possible that human level AI within 5 years."
People have been thinking that for *decades*.
A bit like fusion power, which is always about 40 years out.
And Flying Cars...
Eventually it'll come true; and then some dim-wit will say, "See? I TOLD you so..."
-
Thursday 23rd July 2015 14:59 GMT Anonymous Coward
Re: "I think it is still possible that human level AI within 5 years."
Yeh, I know. But if you had read as many of the AI papers out there as I have you would put your head in your hands and cry. I would propose the hash algorithm as a mini psychometric test to see if an AI researcher has basic engineering insight.
-
-