Digital neural networks vs. analog resistance meshes
Ever since I first saw an analog-digital hybrid computer in an EE lab (1960s), I've been intrigued by using programmable pots (potentiometers) to solve problems such as the traveling-salesman shortest route. In the digital world this path-following becomes very expensive as the number of nodes increases. In the analog world a preset mesh would calculate the route instantaneously (well, almost).
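To put a number on "very expensive": exhaustive digital search over tours evaluates (n−1)! orderings, so the cost explodes factorially with the node count. A minimal sketch (the 4-city distance matrix below is made up for illustration):

```python
import math
from itertools import permutations

def tour_length(tour, dist):
    # Total length of a closed tour, wrapping back to the start.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def brute_force_tsp(dist):
    # Fix city 0 as the start and try every ordering of the rest:
    # (n-1)! candidate tours, the source of the digital blow-up.
    n = len(dist)
    best = min(permutations(range(1, n)), key=lambda p: tour_length((0,) + p, dist))
    return (0,) + best, tour_length((0,) + best, dist)

# Arbitrary symmetric 4-city distance matrix (example data only).
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]

tour, length = brute_force_tsp(dist)          # shortest tour and its length
candidates = math.factorial(len(dist) - 1)    # 3! = 6 tours here; 19! ≈ 1.2e17 at 20 cities
```

At 4 cities that's 6 tours; at 20 cities it's already ~1.2×10^17, which is the asymmetry the analog mesh sidesteps.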
It seems that these proposed 8-bit network nodes could be programmed to act like pots with a huge number of interconnections. I wonder whether this could lead to DNNs taking advantage of some analog circuitry with sufficient accuracy.
Our own vision uses it...
I've heard that our eyes only capture high resolution and color fidelity at the very center of focus, while the rest just picks up black-and-white patterns and motion detection... *while our brains fill in the gaps*...
So image classification could use the same principle our eyes do...?
The same principle is already used in MP3 compression with VBR, so why not apply it to speech recognition too?
Image recognition for specific cases will even drop color entirely and use black-and-white patterns for the processing... things like checking whether a bottle has its cap correctly placed. I've heard Coca-Cola checks with a camera for their specific Pantone color when printing their cans; otherwise they don't use color at all. Applying the same principles to neural networks should be the next logical step.
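Dropping color usually just means collapsing RGB to a single luma channel before pattern matching. A minimal sketch using the standard Rec. 601 luma weights (the pixel value below is made up):

```python
def to_gray(r, g, b):
    # Rec. 601 luma: weights reflect the eye's sensitivity to green > red > blue.
    return 0.299 * r + 0.587 * g + 0.114 * b

# Hypothetical yellowish bottle-cap pixel (example values only).
pixel = (200, 180, 40)
gray = to_gray(*pixel)   # one channel instead of three for the cap check
```

A cap-presence check then runs on a third of the data, with no color calibration to worry about.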
Re: Our own vision uses it...
It works because we look around a lot. And we don't notice what we don't see.
Something I don't remember doing before in an eye test: the optician asked me to look straight ahead while he moved his hand around, and I was to say when I saw his fingers wiggling. I assume he was wiggling throughout the test, but for an evidently not-unusual amount of time I was aware of the hand but not the wiggling. I repeat: this is a test of SIGHT.
My test in 2016 was somewhere else and included a screen behind which lights twinkled; I was to click when I saw one. I think I messed it up by breathing on the screen and misting it over, so that a lot of it couldn't be seen.
No loss in model accuracy?
That sounds a lot like the typical advertising claim "There is no better <x>", which they intend us to read as "This is clearly the best", while those who stayed awake in rhetoric class might parse it as "This is not really any worse than the rest of the crap".
I like the analog stuff, though. At last, an explanation for the occasionally wildly odd AI results: "It works just like your brain", which is so simple/obvious that even Uncle Phil can understand it, after a few too many pints.
There's $<x> in those <y>
A lot of early geophysical data tapes used very weird floating-point formats; some had as little as 6 or 8 bits of 'precision' with wonky small exponents affixed.
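The actual tape formats varied from vendor to vendor; as a hedged illustration of how little 8 bits buys you, here is a decoder for a *hypothetical* 1-sign / 4-exponent / 3-mantissa minifloat (layout and bias 7 are my assumptions, not any real tape spec):

```python
def decode_minifloat8(byte):
    # Hypothetical 8-bit float: 1 sign bit, 4 exponent bits (bias 7),
    # 3 mantissa bits. Real geophysical formats differed; this just
    # shows the scale of precision involved.
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> 3) & 0xF
    man = byte & 0x7
    if exp == 0:
        # Subnormal range: no implicit leading 1.
        return sign * (man / 8) * 2 ** (1 - 7)
    return sign * (1 + man / 8) * 2 ** (exp - 7)
```

With 3 mantissa bits, adjacent values near 1.0 differ by 1/8, i.e. roughly 12% relative error, which was apparently still good enough to find oil.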
It was good enough precision to find last century's oil. This century they want to find your face in a crowd. Congratulations: you're the resource.
So if you use lousier numbers, you can overfit patterns you don't understand even more vastly, without the errors getting noticeably worse.
Couldn't be because they stank already in really important ways, right?
How's that turtle==gun stuff doing these days? Everything I see complaining about issues with NNs was identified in the '90s or earlier, by people who said it would only get worse with more layers, more overfitting, and a lousier squashing function (ReLU), with... math to prove it. Guess what: GIGO is still a thing.