How to stealthily poison neural network chips in the supply chain

Computer boffins have devised a potential hardware-based Trojan attack on neural network models that could be used to alter system output without detection. Adversarial attacks on neural networks and related deep learning systems have received considerable attention in recent years due to the growing use of AI-oriented systems …

  1. eldakka
    Coat

    > Your free guide to trick an AI classifier into thinking an umbrella is the Bolivian navy on maneuvers in the south pacific

    Or that an aircraft carrier has a fully operational wing of F-35Bs....

    1. hplasm
      Devil

      "a fully operational wing of F-35Bs...."

      Or weaponised unicorns!

      1. Anonymous Coward
        Anonymous Coward

        Re: "a fully operational wing of F-35Bs...."

        Don't underrate weaponised unicorns. The glitter bombs they shoot from those horns get in everything, and cause the opposing army to lose their will to fight.

    2. The Man Who Fell To Earth Silver badge

      Been done before

      It was how the Cylons beat the humans in the initial surprise attack in the remake of Battlestar Galactica.

  2. Jeroen Braamhaar
    Coat

    Not a new problem ....

    So this is just like the infamous "Clipper Chip" brouhaha, except it's done on the sly?

  3. DavCrav

    Lie Mode

    Of course it will work, sir. No worries.

    1. Woza
      Megaphone

      Re: Lie Mode

      Hook, line, sinker, rod and copy of Angling Times, sir.

      Icon: A small off-duty Czechoslovakian traffic warden.

  4. Michael H.F. Wilkinson Silver badge
    Joke

    Sorry, Dave. I cannot do that for you

    and my name isn't even "Dave"

    1. Anonymous Coward
      Anonymous Coward

      Re: Sorry, Dave. I cannot do that for you

      That's "Mr Lister, Sir!"

  5. David Lester
    Boffin

    You can call me Mr Thickie here, but ...

    ... any chip design I've been involved with has used _all_ the available silicon area -- meaning that there is little room for widespread switchable functionality.

    And why is switchable functionality necessary for this attack to work? Well, I presume there is some automated acceptance testing to check for functional/non-functional chips. And, if you're going to test the chip, that test might as well check whether umbrellas can be distinguished from Bolivian Seaborne Marching Powder.

    1. Korev Silver badge

      Re: You can call me Mr Thickie here, but ...

      The designer might be persuaded by a government to put in some "custom" functionality on behalf of $SPOOK_AGENCY. In fact I'd be surprised if this hasn't already happened.

      1. Steve the Cynic

        Re: You can call me Mr Thickie here, but ...

        I believe the usual figure cited for getting someone to spill commercial secrets belonging to his employer is on the order of five times salary.(1) Offer someone five times his normal salary, and most people will spill the beans. I'd guess that sabotaging your company's products is probably similar in price.

        (1) There are a few people who won't at all, or who will demand substantially more, or who will take it as an invitation to be a double agent, i.e. take it immediately to the boss: "Company X offered me Y dollareuropounds to spill secrets. How can we creatively misinform them, ((and what bonus will you offer me to do so))?"

  6. Anonymous Coward
    Anonymous Coward

    Can someone explain.....

    .....the difference between neural networks implemented in hardware (this article's subject), and neural networks implemented in software?

    *

    I'd be more worried about attacks on software implementations, not least because the software can be modified by the attacker......where I'm assuming that a hardware attack, once found, can no longer be used by the attacker.

    1. Anonymous Coward
      Anonymous Coward

      Re: Can someone explain.....

      I think the point is just that it's harder to fix pre-hacked hardware.

      1. Paul Crawford Silver badge

        Re: Can someone explain.....

        Exactly, and it survives typical software scans for tampering or a re-install.

        1. Tromos
          Joke

          "...it survives typical software scans..."

          Even worse. It survives the ultimate sanction of switching off then on again.

          1. Charles 9

            Re: "...it survives typical software scans..."

            I thought the ultimate sanction was nuking from orbit and starting from scratch. Replace the hardware and you'll probably replace it with a similarly-knackered chip.

  7. Jason Bloomberg Silver badge

    Manipulating results

    The doctors' surgeries round my way already seem to be prototyping the system. Everyone going in seems to come out having been diagnosed with a lifetime need to take statins or metformin.

  8. Zippy's Sausage Factory
    Unhappy

    There's a good sci-fi story in here. Specifically one where computers are completely banned because they can't be trusted.

    1. zapgadget
      Coat

      Read "The Butlerian Jihad" from the Dune Saga.

      Not that it mentions the implications of strong AI or anything...

  9. EveryTime

    That was a measured counter-point.

    I'll be a little less measured.

    There are very few introspection tools to understand how NNs are actually classifying, and those that exist are mostly for images. We can visually understand "oh, so it's recognizing a round thing in that layer, and calling it a wheel over here", but not well enough to change the NN in a way that still works.

    We are very far away from a small automated system that could subtly change a trained NN to do something specific. To put it into genetic manipulation terms, it's quite easy to kill something. You need to be very sophisticated to make a fish glow. You must be vastly more sophisticated to change a chicken into a fire-breathing dragon.

    1. Fungus Bob

      Although, if one applies Buffalo sauce to the chicken it tastes like a fire-breathing dragon...

  10. DWRandolph

    Back doors can be very long term stuff. Think it was A Deepness in the Sky where one of the protagonists subverts a network using tricks over a hundred years old?

  11. Anonymous Coward
    Anonymous Coward

    Centuries old technology

    I read somewhere that neural networks are susceptible to the same sort of problems humans suffer when viewing images that "look like" something familiar.

    Case in point: some university showed a previously trained neural net pictures of clouds (obviously random) and the network classified them as pictures of animals, plants, chairs, etc.
