Why, you're no better than an 8-bit hustler: IBM punts paper on time-saving DNN-training trick

IBM has said it is possible to train deep learning models with 8-bit precision rather than 16-bit with no loss in model accuracy for image classification, speech recognition and language translation. The firm claimed today this work will help "hardware to rapidly train and deploy broad AI at the data center and the edge". Big …

  1. elDog

    Digital neural networks vs. analog resistance meshes

    Ever since I first saw an analog-digital hybrid computer in an EE lab (1960s), I've been intrigued by using programmable pots (potentiometers) to solve problems such as the traveling-salesman shortest route. In the digital world this path-following becomes very expensive as the number of nodes increases. In the analog world a preset mesh would calculate the route instantaneously (well, almost).

    It seems that these proposed 8-bit network nodes could be programmed to act like pots with a huge number of interconnections. I wonder if this could lead to DNNs taking advantage of some of that analog circuitry with sufficient accuracy.
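
    For the curious: the digital equivalent of letting such a mesh settle is repeated local relaxation, which amounts to Bellman-Ford. A minimal sketch, with an invented four-node graph and a made-up helper name:

        import math

        def settle_mesh(edges, nodes, source):
            """Relax every 'resistor' until no node's potential drops further."""
            potential = {n: math.inf for n in nodes}
            potential[source] = 0.0
            for _ in range(len(nodes) - 1):      # enough passes to settle
                changed = False
                for u, v, r in edges:            # each edge conducts both ways
                    for a, b in ((u, v), (v, u)):
                        if potential[a] + r < potential[b]:
                            potential[b] = potential[a] + r
                            changed = True
                if not changed:                  # the mesh has settled
                    break
            return potential

        edges = [("A", "B", 1.0), ("B", "C", 2.0), ("A", "C", 4.0), ("C", "D", 1.0)]
        print(settle_mesh(edges, ["A", "B", "C", "D"], "A"))
        # {'A': 0.0, 'B': 1.0, 'C': 3.0, 'D': 4.0}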

  2. Anonymous Coward

    Our own vision uses it...

    I've heard our own eyes only have high resolution and color fidelity at the very focal center, while the rest just picks up black-and-white patterns and motion detection... *while our brains fill in the gaps*...

    So image classification could use the same principle our eyes do...?

    The same principle is already used in MP3 compression with VBR, so why not apply it to speech recognition too?

    Image recognition for specific cases will even drop color entirely and use black-and-white patterns for the processing... things like checking whether a bottle cap is correctly placed. I've heard Coca-Cola uses a camera to check for their specific Pantone color when printing their cans, but otherwise they don't use color at all. Applying the same principles to neural networks should be the next logical step.
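
    For what it's worth, dropping color really is a one-line transform: the standard ITU-R BT.601 luma weights collapse RGB to a single intensity (the function name here is just illustrative):

        # Standard BT.601 luma weights for discarding color before
        # black-and-white pattern matching.
        def to_grayscale(r, g, b):
            return 0.299 * r + 0.587 * g + 0.114 * b

        print(to_grayscale(200, 30, 60))   # ~84.25, a single intensity value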

    1. Robert Carnegie Silver badge

      Re: Our own vision uses it...

      It works because we look around a lot. And we don't notice what we don't see.

      Something I don't remember doing before in an eye test: the optician asked me to look straight ahead while he moved his hand around; I was to say when I saw his fingers wiggling. I assume he was wiggling them throughout the test, but for an evidently not unusual amount of time I was aware of the hand but not the wiggling. I repeat: this is a test of SIGHT.

      My test in 2016 was somewhere else and included a screen behind which lights twinkled; I was to click when I saw one. I think I messed that up by breathing on the screen and misting it up, so that a lot of it couldn't be seen.

  3. happy but not clappy
    WTF?

    8 levels = 3 bits, no?

    Wouldn't 8 bits mean 256 levels?
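
    For the record, the arithmetic here checks out: n bits distinguish 2**n levels, so 8-bit precision means 256 representable levels, not 8. A quick check:

        # n bits distinguish 2**n levels
        for bits in (3, 8, 16):
            print(bits, "bits ->", 2 ** bits, "levels")
        # 3 bits -> 8 levels
        # 8 bits -> 256 levels
        # 16 bits -> 65536 levels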

  4. Mike 16

    No loss in model accuracy?

    That sounds a lot like the typical advertising claim "There is no better <x>", which they intend us to read as "This is clearly the best", while those who stayed awake in rhetoric class might discern it as "This is not really any worse than the rest of the crap".

    I like the analog stuff, though. At last, an explanation for occasionally wildly odd AI results: "It works just like your brain", which is so simple/obvious that even Uncle Phil can understand it, after a few too many pints.

  5. Notas Badoff

    There's $<x> in those <y>

    A lot of early geophysical data tapes used very weird floating-point formats, and some had as little as 6 or 8 bits of 'precision' with wonky little exponents affixed.

    It was good enough precision to find last century's oil. This century they want to find your face in a crowd. Congratulations, you're the resource.
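
    The exact tape formats aren't given here, so purely as a hypothetical illustration: a made-up 8-bit float with 1 sign, 4 exponent and 3 mantissa bits shows how coarse that kind of 'precision' really is.

        # Hypothetical 8-bit float (1 sign, 4 exponent, 3 mantissa bits);
        # not a reconstruction of any real tape format, just a sketch.
        def decode_fp8(byte, bias=7):
            sign = -1.0 if byte & 0x80 else 1.0
            exp = (byte >> 3) & 0x0F
            mant = byte & 0x07
            if exp == 0:                                  # denormal range
                return sign * (mant / 8.0) * 2.0 ** (1 - bias)
            return sign * (1.0 + mant / 8.0) * 2.0 ** (exp - bias)

        print(decode_fp8(0b00111010))   # 1.25: only 3 mantissa bits of detail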

  6. G.Y.

    codecs

    A-law and mu-law codecs use funny 8-bit floating-point-like formats.
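
    For reference, the mu-law half of that (ITU-T G.711, mu = 255) is easy to sketch: the logarithmic curve gives fine steps near zero and coarse steps for loud samples, which is what makes it float-like in spirit.

        import math

        MU = 255.0   # mu-law constant from ITU-T G.711

        def mu_law_compress(x):   # x normalized to [-1.0, 1.0]
            return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

        def mu_law_expand(y):
            return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

        y = mu_law_compress(0.01)
        print(round(y, 3), round(mu_law_expand(y), 5))   # 0.228 0.01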

  7. DCFusor

    So if you use lousier numbers, you can overfit patterns you don't understand even more vastly, without the errors getting noticeably worse.

    Couldn't be because they stank already in really important ways, right?

    How's that turtle==gun stuff doing these days? Everything I see complaining about issues with NNs was identified in the '90s or earlier, by people who said it would only get worse with more layers, more overfitting, and a lousier squash function (ReLU), with... math to prove it. Guess what: GIGO is still a thing.
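
    For readers who haven't met it, here is the "squash" being grumbled about, next to the bounded sigmoid it largely displaced (note ReLU isn't actually bounded above, so "squash" is a loose term):

        import math

        def relu(x):        # the function in question; unbounded above
            return max(0.0, x)

        def sigmoid(x):     # the classic bounded squashing function
            return 1.0 / (1.0 + math.exp(-x))

        print(relu(3.0), round(sigmoid(3.0), 3))   # 3.0 0.953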

  8. Robert Carnegie Silver badge

    Previously

    When I last tried speech recognition, 8-bit audio which sounded quite clear to me did not get recognized. It had to be 16-bit, which I think is officially CD quality.
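
    As a rough rule of thumb, quantization headroom grows about 6 dB per bit, so 16-bit audio carries twice the dynamic range (in dB) of 8-bit; a quick illustration:

        import math

        def dynamic_range_db(bits):   # ~6.02 dB per bit of quantization
            return 20 * math.log10(2 ** bits)

        print(round(dynamic_range_db(8), 1))    # 48.2 dB
        print(round(dynamic_range_db(16), 1))   # 96.3 dB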
