Another AI attack, this time against 'black box' machine learning

Would you like to join the merry band of researchers breaking machine learning models? A trio of German researchers has published a tool designed to make it easier to craft adversarial models when you're attacking a “black box”. Unlike adversarial models that attack AIs “from the inside”, attacks developed for black boxes …

  1. Anonymous Coward

    The Emperor has no clothes.

    1. Rich 11

      The AI has only been trained to recognise emperors who are wearing clothes.

  2. Anonymous Coward

    I don't know what I'm talking about...

    ....but it doesn't half look like the methods they use are rather delicate. How can you secure something quite so sensitive, surely the method is just wrong?

    1. Anonymous Coward

      Re: I don't know what I'm talking about...

      I'd rather say it's inelegant from a "data science" point of view. Sometimes brute-force is called for and this is one of those occasions. Look at the article on reidentification for a further clue on "why we do this."

  3. Gordon 10

    let's look at this a little sceptically

    So looking at this through a commentard's cynical gaze, all they have managed to do is make a classifier fail to classify something? /slowhandclap

    I can do that without trying :)

    If I read the article correctly (all bullshit bingo, no explanations), it works by submitting subtly iffy subjects for classification? Wasn't sure from the explanation whether it's just one shot or it needs to be built up over time.

    But let's look at workable real world scenarios.

    1. Corrupt iPhone X Face ID - requires physical access - you are screwed anyway.

    2. Hijack any ML on a phone - requires at least dodgy app access - i.e. same as any other malware.

    3. Hijack PC ML - requires browser or app hijack.

    So basically, whilst the execution mechanism of the attack is novel, the access mechanisms are the usual bog-standard ones.

    So this is just a novel injection-style attack, and the usual protections still apply.

    Mark as interesting but ultimately low risk.

    1. Muscleguy

      Re: let's look at this a little sceptically

      You are slightly missing the point. The examples used are fairly trivial, though the facial recognition isn't. The Face ID bit could simply require a small amount of makeup to fool a secure entry system. That is the level of change required.

      If supposedly secure systems can be fooled by such simple stratagems then the whole claim of the recognition industry is bogus. As the first comment says, the Emperor has no clothes.

      This is the sort of thing which happens when you use machine learning. The machines are zeroing in on methods which are hackable. The whole machine learning industry is predicated on not caring how it is done, simply demonstrating that it can be done.

      These people are pointing out that it is also very easy to fool. Therefore something needs to change.

      1. Gordon 10
        FAIL

        Re: let's look at this a little sceptically

        You have missed the point entirely. This has nothing to do with physically modifying an image. This is about digitally modifying an image 'on the fly' that is then sent for recognition. Doing it physically (i.e. with makeup) is essentially hit and miss, whereas a digital process is repeatable.

        If it were just physical modification it would be old news.

    2. Enki

      Re: let's look at this a little sceptically

      One potential is to perturb signs in the real world for autonomous or assisted cars. Another would be to perturb faces to prevent recognition. This is research for now, but there is the potential to use this type of attack to determine the most effective perturbations to apply in the real world to prevent detection. Since the attack only requires the classifications, it could build its own set using, say, an online API or other external method, determine the proper attack, and then modify the real-world situation to avoid or mislead classification.
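      As a rough illustration of that label-only access: an attacker could wrap a remote classification service as the black-box oracle. The endpoint, field names and response format below are invented purely for illustration, not taken from the paper or from any real API:

          import requests

          def classify_via_api(image_path):
              # Send the image to a (made-up) classification endpoint and
              # read back the predicted label from the JSON response.
              with open(image_path, "rb") as f:
                  resp = requests.post("https://api.example.com/v1/classify",
                                       files={"image": f})
              resp.raise_for_status()
              return resp.json()["label"]

      Each call returns nothing but a label, and a label is all this style of attack needs.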

      1. Michael Wojcik Silver badge

        Re: let's look at this a little sceptically

        One potential is to perturb signs in the real world for autonomous or assisted cars

        Yes. More generally, it shows DNN-based classifier models are even more fragile in the face of active attacks than was previously generally acknowledged. Indeed, at least some of them are so fragile that they can plausibly misidentify an input[1] due to small random perturbations.

        That has very serious consequences for use cases such as autonomous vehicles. It's a very important avenue of research.

        Recent research into understanding what's actually happening in DNNs[2] may help.

        [1] That is, assign it to the wrong class, rather than to the null class. It's a precision failure, not just a recall failure.

        [2] See e.g. https://blog.acolyer.org/2017/11/15/opening-the-black-box-of-deep-neural-networks-via-information-part-i/.

  4. Michael H.F. Wilkinson Silver badge

    It is curious how these (otherwise very successful) deep-learning-based algorithms fail in ways that are radically different from methods based on hand-crafted features (which have their own set of problems), or for that matter the human observer. This suggests to me that deep learning as it is implemented now is not a very good model for the way humans learn, because the adversarial examples shown in the paper would never fool a human. In the human brain there are structures that were selected for over hundreds of millions of years of evolution (itself an optimization or learning process). This selected an architecture which in turn allows adaptive learning of features. It will be interesting to see how we can learn or design better architectures for these deep networks, or for machine learning algorithms in general.

    1. Mage Silver badge
      Paris Hilton

      Re: deep learning as it is implemented now is not [a very] good

      Because really it's not "learning" in any sentient sense at all. It's a method of putting human-curated data into a special type of database (made of interconnected nodes) that has a specialist interface.

      All the terms in this area of computer application are deliberately misleading, to make it sound cleverer than it is. Ignore the guys behind the curtain; that's why the models share the same biases as the developers / programmers. People who are often lacking in wider knowledge, experience and wisdom, and quite often don't realize how ethnically, age and gender bigoted/biased they are.

      Almost none are anything like as good as the PR claims. The media is very uncritical of claims.

  5. tiggity Silver badge

    Hoffman v Clooney

    Easy to differentiate these by AI in certain cases: if in the photo the actor is groping the breast of the female standing next to him, then it's Dustin Hoffman.

    ...For those who may have missed it, easily done under the deluge of abusive celebs, Hoffman has been revealed publicly as yet another "Hollywood handy".

  6. fran 2

    “starts from a large adversarial perturbation and then seeks to reduce the perturbation while staying adversarial”.

    Well, obviously

    1. Michael Wojcik Silver badge

      Isn't it? I mean, that ought to be pretty clear if you're familiar with the terminology.

      You start with an image that the classifier gets right. You make big changes to it until the classifier gets it wrong. You gradually tweak it to be closer and closer to the original, until you get something very close to the original that the classifier still gets wrong.
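      In rough Python, and assuming label-only access to the model via a classify(image) call (the names and step sizes here are illustrative, not the authors' exact algorithm), that loop might look something like this:

          import numpy as np

          def boundary_attack_sketch(original, classify, true_label,
                                     steps=1000, step_size=0.01):
              # Start from a heavily perturbed image the model already gets
              # wrong, e.g. uniform noise over the valid pixel range.
              candidate = np.random.uniform(0.0, 1.0, original.shape)
              while classify(candidate) == true_label:
                  candidate = np.random.uniform(0.0, 1.0, original.shape)

              for _ in range(steps):
                  # Propose a small step back towards the original image...
                  proposal = candidate + step_size * (original - candidate)
                  # ...plus a little jitter to explore along the decision boundary.
                  proposal += step_size * np.random.normal(size=original.shape)
                  proposal = np.clip(proposal, 0.0, 1.0)

                  # Keep the step only if the model still gets the label wrong.
                  if classify(proposal) != true_label:
                      candidate = proposal

              return candidate  # close to the original, yet still misclassified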

      CS researchers (like all other researchers) catch a lot of flak over their prose style, and it's often deserved. But this particular sentence is quite straightforward while remaining technically specific.

  7. Forget It
    Facepalm

    artificial intelligence

    real dumbness

  8. Luigi D'Goomba

    Depp and Aguilera?

    Yeah, okay, I can see how those two could be confused, perhaps even separated at birth.

  9. fidodogbreath

    mis-identifying celebrities and missing prominent logos

    They've turned it into my grandmother?!?!?
