Qualcomm goes for the grey matter with neural net SDK

Qualcomm wants to stick a neural network in your hands, and has announced it will ship a software development kit (SDK) in the second half of this year to get the ball rolling. The deep learning SDK for the Snapdragon 820 processor will be, the company says, designed to take advantage of the SoC's “heterogeneous compute …

  1. Brian Miller

    Brains? Or ancient gods?

    Are you sure Zeroth isn't actually an ancient Sumerian or Hittite god come again to soak up all the energy of people worshiping their phone idols? Fanboys would be such a fabulous target.

    1. allthecoolshortnamesweretaken

      Re: Brains? Or ancient gods?

      "... an ancient Sumerian or Hittite god ..."

      Make up your mind. I need to know in order to calibrate the trap accordingly.

  2. Anonymous Coward
    Anonymous Coward

    Not again

    Neural nets aren't magic. I love the way they've been re-discovered. To get any classifier to work well you have to pre-process the raw data to extract features. That's the clever bit. Then wang them into a classifier.

    If you have good features then the benefits from using a neural net aren't worth the effort. It's a black box, so it's nigh impossible to figure out where errors come from (compared to working with classifiers in a Euclidean space) or whether the nnet is operating in a region where it knows what it's talking about.
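The pipeline the commenter describes can be sketched in a few lines. This is an illustrative toy, not anything from the article: the hand-crafted features (mean and variance of a raw signal) and the nearest-centroid classifier are arbitrary stand-ins for "the clever bit" plus "any classifier".

```python
def extract_features(signal):
    """Hand-crafted features: mean and variance of a raw 1-D signal."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n
    return (mean, var)

def train_centroids(labelled_signals):
    """Average the feature vectors per class (a nearest-centroid model)."""
    sums, counts = {}, {}
    for signal, label in labelled_signals:
        f = extract_features(signal)
        s = sums.setdefault(label, [0.0] * len(f))
        for i, v in enumerate(f):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: tuple(v / counts[lab] for v in s) for lab, s in sums.items()}

def classify(centroids, signal):
    """Assign the class whose centroid is nearest in feature space."""
    f = extract_features(signal)
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(f, centroids[lab])))

# Two toy classes: flat, low-variance signals vs noisy, high-variance ones.
train = [([1.0, 1.1, 0.9, 1.0], "flat"),
         ([0.0, 2.0, -2.0, 4.0], "noisy")]
model = train_centroids(train)
print(classify(model, [1.0, 0.95, 1.05, 1.0]))  # prints "flat"
```

The point being made: the classifier at the end is almost interchangeable; all the domain knowledge lives in `extract_features`.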

    1. Anonymous Coward
      Anonymous Coward

      Re: Not again

      You are right insofar as they are not magic and have been around longer than many other machine learning techniques, but there are very good reasons for their current popularity. At least as far as computer vision is concerned, your comments regarding features and pre-processing are somewhat off. Perhaps one of the biggest attractions of convolutional neural nets is that they need minimal pre-processing and learn suitable filters/features themselves. You can pretty much just wang a load of raw images in, without hand-crafting the features you think might be appropriate to the application. The depth also permits some very expressive models that simply cannot be rivalled by shallow methods.

      Just to be clear, I'm not really a fan of neural nets, although I have had to become familiar with them recently because they are beating everything else on a wide variety of computer vision problems. I hope something better will come along in the future, but for the moment they are here to stay. If you can't beat them...
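The "raw images in" point can be sketched as a single toy convolutional layer. Nothing here is from the article: the 3x3 vertical-edge kernel is hand-set purely for illustration, where in a real convnet it would start random and be learned by backpropagation from the raw pixels.

```python
def conv2d(image, kernel):
    """Valid-mode 2-D convolution (strictly cross-correlation, as in most nets)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def relu(fmap):
    """Standard convnet nonlinearity applied elementwise."""
    return [[max(0.0, v) for v in row] for row in fmap]

# Raw 4x4 image with a vertical edge down the middle -- no pre-processing.
image = [[0, 0, 1, 1]] * 4
vertical_edge = [[-1, 0, 1],
                 [-1, 0, 1],
                 [-1, 0, 1]]

feature_map = relu(conv2d(image, vertical_edge))
print(feature_map)  # the edge filter fires everywhere the edge is in view
```

In a trained net, dozens of such kernels per layer end up playing the role hand-crafted features play in the classical pipeline.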

      1. Anonymous Coward
        Anonymous Coward

        Re: Not again

        Convolutional networks have feature design built in; they have a non-generic architecture based on receptive fields. Taking differentials of pixels is a feature type, so you've embedded a feature into the nnet. Plus a domain-specific layer to normalise levels and variance. Plus domain-specific techniques to cater for shift/scale invariance. Plus you need to tune the scope of the receptive field for each database.

        There's been success in image classification, yes, but it has come from ad-hoc fudges to the architecture. Interesting, yes; useful, yes; but an odd, engineered hybrid that bakes in a lot of domain-specific assumptions which constrain and create features implicitly before classification.

        Nothing wrong with that if that's what you know you want.
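The shift-invariance being argued about is easy to demonstrate concretely. A minimal 1-D sketch, assuming nothing from the article: because the same kernel slides over every position and the responses are then max-pooled, a small pattern produces the same pooled output wherever it sits in the input.

```python
def correlate(signal, kernel):
    """1-D sliding dot product of kernel against signal."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def global_max_pool(fmap):
    """Keep only the strongest response, discarding its position."""
    return max(fmap)

kernel = [1, -1, 1]           # a tiny hand-set "feature detector"
pattern = [1, -1, 1]
early = pattern + [0] * 5     # pattern at the start of the signal
late = [0] * 5 + pattern      # same pattern shifted to the end

print(global_max_pool(correlate(early, kernel)),
      global_max_pool(correlate(late, kernel)))
```

Both signals pool to the same value, which is exactly the baked-in domain assumption the commenter is pointing at: the architecture decides up front that position doesn't matter.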
