Now that's sticker shock: Sticky labels make image-recog AI go bananas for toasters

Despite dire predictions from tech industry leaders about the risks posed by unfettered artificial intelligence, software-based smarts remain extremely fragile. The eerie competency of machine-learning systems performing narrow tasks under controlled conditions can easily become buffoonery when such software faces input …

  1. Michael Hoffmann Silver badge
    Unhappy

    Still no laughing matter

    Seeing as how I expect "AI" and "ML" to be pushed out regardless by our masters and overlords, no matter how faulty and erroneous it is, I am always reminded of that scene in "Brazil", where a smooshed fly causes a name to be mis-identified and the wrong bloke gets arrested and tortured to death.

    1. Anonymous Coward
      Anonymous Coward

      Re: Still no laughing matter

They literally just announced using AI and ML to vet social relief funding in the UK.

1984 is a few decades behind, it would seem.

      1. Richard Jones 1
        FAIL

        Re: Still no laughing matter

To call the claim vetting process AI is pure bunkum. It is more akin to basic intelligence methods from about the 19th century (if not before). The claim processing will check for repeated information in multiple claims for multiple locations, e.g. same bank details, same telephone number, same or very similar details in claim letters, etc., and pass them for the wetware to examine with more care.

        That sounds like what I used to call basic pattern recognition though often done by human means when tracking down other sorts of crimes.

        1. Christoph

          Re: Still no laughing matter

          "pass them for the wetware to examine with more care."

          Pass them to the wetware for them to fulfil their required quota of rejected claims.

        2. Uffish

          Re: Still no laughing matter

          And the sticker seems to be similar to the dazzle pattern used to disguise ships in the early 19th century.

          1. Robert Carnegie Silver badge

            Re: Still no laughing matter

            "And the sticker seems to be similar to the dazzle pattern used to disguise ships in the early 19th century."

            1914-1918 is the 20th century.

            Does the image here represent the actual sticker used, or did someone read the story and then photograph a toaster then go nuts with Instagram filters? Should I see a toaster when I look at the sticker?

            If you paste a picture of a toaster into a photograph of a banana, should AI not see a toaster in the picture?

            What about the "door security" scene in [The Fifth Element]?

    2. Anonymous Coward
      Anonymous Coward

      Re: Still no laughing matter

      Yes but think of the fun we'll have sharing exploits to fool AI in hilarious ways!

  2. Captain DaFt

    Current AI development reminds me of Internet security

    A- Throw something semi functional out there.

    B- Somebody breaks it.

    C- Patch what breaks, try again.

    D- Go to step B.

    E- Success! (Shame you can't get here)

    1. Yet Another Anonymous coward Silver badge

      Re: Current AI development reminds me of Internet security

If only it were possible to program some sort of artificial brain to perform this set of steps for itself.

  3. Michael Thibault

    "The researchers conclude that those designing defenses against attacks on machine learning models need to consider not just imperceptible pixel-based alterations that mess with the math but also additive data that confuses classification code."

    Which pushes the Butlerian Jihad further off by another century, maybe more. Species traitors will go to the melt at the same time as the machines.

  4. Sampler

    Great

Where can I get a few of these, for when the machine uprising kicks in and I can be safely identified as a toaster and carry on my daily business...

    1. Tromos

      Re: Great

      You might not be so happy when another machine comes along and tries to insert the bread!

      1. Teiwaz

        Re: Great

        You might not be so happy when another machine comes along and tries to insert the bread!

        Ah, the future,

where an added peril of being an anti-cyber activist is a toasting muffin jammed somewhere uncomfortable.

        Almost a harmless fetish, considering the tribes of roving lethally armed convenience devices with learning difficulties zipping about.

      2. Anonymous Coward
        Anonymous Coward

        Re: Great

        It's not the bread you have to worry about if they are unsure what is a banana.

        1. Irongut

          Re: Great

Clearly it's a female aardvark.

    2. Anonymous Coward
      Anonymous Coward

      Re: Great

      "I'm sorry Sir, it says here that electrical gear must be stowed in the hold"

      1. TRT Silver badge

        Re: Great

        Aah, so you're a waffle man.

    3. Uffish

      Re: "Where can I get..."

      Taking you terribly, terribly seriously, https://arxiv.org/pdf/1712.09665.pdf

    4. Brian Miller

      Re: Great

      You just need your bog standard psychedelic t-shirt and you're sorted. Machines don't have good trips when they're confronted with LSD.

  5. JeffyPoooh
    Pint

    To be fair...

    To be fair, the "psychedelic graphic" does (sort-of) look a bit like a highly-reflective, chrome-plated toaster in a colourful environment.

First thing that the AI creators might wish to consider is the possibility that the image frame contains more than one object. Assuming one object is precisely daft.

I've noticed BS claims are being made (on the various tech news shows) about facial recognition again. BBC WS mentioned a system that seemed to claim pulling accurate recognition from only a few pixels. They should make it a capital offense to over-hype such things.

    1. Charles 9

      Re: To be fair...

      Do that and you can still confuse the system by making things that look like more or fewer items than they really are or should really be seen as a collective rather than individual items. Worse still, this kind of trickery can work on humans (think the old attached-by-transparent-thread prank), so good luck getting a machine to work out this kind of trickery.

    2. Nick Ryan Silver badge

      Re: To be fair...

      Yes, this is typical of narrow minded and daft "AI" attempts.

What is in the picture is an open-ended question; however, they are attempting to train it on "what single object is in the picture", which, when presented with a picture containing multiple objects, even if one happens to be represented by a sticker, naturally fails.

  6. Anonymous Coward
    Anonymous Coward

    Ahah!

    So this is why people in the future are shown wearing shiny silver suits.

    To ward off the rise of the machines...

    1. hplasm
      Big Brother

      Re: Ahah!

      You know too much, Citizen...

  7. wallaby

    Just in case

    I for one welcome our new robot overlords

    I'm the one hiding behind the banana with a glittery sticker in my hand

    1. Not also known as SC
      Joke

      Re: Just in case

      But what happens if ED-209 mistakes your glittery stickered banana for a gun?

      1. Francis Boyle Silver badge

        I was taught that bananas make excellent weapons

        But then I had a very silly teacher.

  8. harmjschoonhoven

    The procedure

    to follow is to subtract the "recognized" object from the image until no object is found above a certain threshold. In that case the banana will be found after the toaster.
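The detect-and-subtract procedure described above can be sketched as a simple greedy loop. A minimal toy sketch follows, with all names, scores, and regions invented for illustration: take the strongest detection above a threshold, claim its pixels, and re-scan so weaker objects (the banana after the toaster) can surface.

```python
# Toy sketch of sequential "detect, subtract, re-detect":
# repeatedly pick the highest-scoring detection above a threshold,
# mask out its region, and look again for what remains.

def sequential_detect(detections, threshold=0.5):
    """detections: list of (label, score, region); region is a set of pixel coords."""
    remaining = list(detections)
    found = []
    claimed = set()  # pixels already attributed to a found object
    while True:
        # ignore detections whose region is already mostly claimed
        candidates = [d for d in remaining
                      if d[1] >= threshold
                      and len(d[2] & claimed) < len(d[2]) / 2]
        if not candidates:
            break
        best = max(candidates, key=lambda d: d[1])
        found.append(best[0])
        claimed |= best[2]
        remaining.remove(best)
    return found

# The adversarial sticker scores highest, but subtracting its region
# still leaves the banana to be found on the second pass.
dets = [
    ("toaster", 0.92, {(x, y) for x in range(10) for y in range(10)}),   # sticker
    ("banana",  0.71, {(x, y) for x in range(20, 60) for y in range(5)}),
]
print(sequential_detect(dets))  # ['toaster', 'banana']
```

This matches the comment's point: the banana is found after the toaster, rather than being drowned out entirely.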

    1. Charles 9

      Re: The procedure

      Not if the sticker is put ON the banana, thus making the end result NO banana found.

      1. TRT Silver badge

        Re: The procedure

        Multiple object detection... reports top banana, bottom toaster.

        1. Charles 9

          Re: The procedure

          Put the sticker ON the banana. Now it reports a toaster and NO banana because it's tricky enough for humans to recognize two separate items on top of each other (they could easily be a combined item where the pieces are stuck together), let alone a machine.

          How about this for a challenge. Can a visual recognition system identify something without even seeing it (such as the ball of a paddle ball that you can guess is there because the paddle is not sitting flat, meaning it's probably on top of and covering its ball)?

  9. John Smith 19 Gold badge
    Coat

    Philosophically intriguing.

    This is (in a sense) optical malware, designed to disrupt the normal functioning of a NN.

    If multiple stickers were presented to it in a sequence could each disrupt the NN in a specific way?

    Could that sequence make the NN do "useful" work for the creators of the image sequence?

And since humans are NNs too, could the same process be applied to us?

    At the very least it's a nice little meme to seed a few SF stories.

    1. hplasm
      Childcatcher

      Re: Philosophically intriguing.

      Aha- BLIT

By David Langford. Quite creepy.

      1. John Smith 19 Gold badge
        Coat

        Aha- BLIT. By David Langford.

        After I suggested it I remembered "Snow Crash" loosely hinges around a similar idea.

  10. Anonymous Coward
    Anonymous Coward

    is that a toaster in your pocket or are you just pleased to see me? Said the AI to the man with psychedelic pants.

  11. Christoph

    The new SWATing

    Create "This is really a gun" sticker.

    Stick it on someone's back.

    Wait for police to zoom in and shoot them.

  12. Lee D Silver badge

    This is why you can't have an automated adult-image filter of any worth.

The second someone can just put something small onto an image and radically change its categorisation without actually changing the overall nature of the image, you know it's going to be used exactly like that to dodge unwanted categorisation.

    And vice-versa... some poor guy with a hacker's conference sticker on his backpack gets scanned by an automated system as having a rifle as he transits an airport, for example.

    Until we understand what the "AI" (pfft) is actually doing to categorise, which criteria it's using, we can't make any comment on its accuracy or otherwise. Train a human to recognise something like a banana and they can tell you they are looking for a particular shape, size, colouration, orientation and apply those criteria using their learned knowledge of the object to identify zipped, unzipped, facing the camera or away, broken, twisted, ripe, unripe, etc. bananas. Train an AI and you literally have no idea whether or not it's just decided "if the center pixel is yellow, call it a banana" or some other random criteria that happens to fit "most" images of bananas but also a huge variety of other images and which can be turned to false detection by anyone willing to experiment.

    This kind of "throw data at something AI" stuff is really doomed to failure, except where it really doesn't matter at all and where a human would be cheaper to employ anyway (e.g. a banana factory).
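The "if the center pixel is yellow, call it a banana" worry above can be made concrete. Here is a toy sketch (the rule and colours are entirely hypothetical, not any real model's behaviour) of a degenerate classifier that latched onto one spurious feature which happens to fit all its training bananas:

```python
# Hypothetical degenerate classifier: decides "banana" purely from the
# centre pixel's colour -- a rule that fits training data but is
# trivially fooled in both directions.

YELLOW = (255, 220, 40)

def is_yellowish(pixel, tolerance=60):
    return all(abs(c - t) <= tolerance for c, t in zip(pixel, YELLOW))

def degenerate_classifier(image):
    """image: 2D grid of RGB tuples. Looks at the centre pixel only."""
    centre = image[len(image) // 2][len(image[0]) // 2]
    return "banana" if is_yellowish(centre) else "not banana"

def solid(colour, size=5):
    return [[colour] * size for _ in range(size)]

print(degenerate_classifier(solid((250, 215, 50))))   # yellow wall -> "banana"
print(degenerate_classifier(solid((120, 120, 120))))  # grey banana -> "not banana"
```

A yellow wall passes, a grey banana fails, and nothing in the classifier's output reveals how shallow the learned criterion is.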

    1. Charles 9

      "Train a human to recognise something like a banana and they can tell you they are looking for a particular shape, size, colouration, orientation and apply those criteria using their learned knowledge of the object to identify zipped, unzipped, facing the camera or away, broken, twisted, ripe, unripe, etc. bananas."

And then you trick them with a plantain... or a carefully stuck-back banana with something else inside. We can fool humans. Machines don't stand a chance.

      "This kind of "throw data at something AI" stuff is really doomed to failure, except where it really doesn't matter at all and where a human would be cheaper to employ anyway (e.g. a banana factory)."

      Not necessarily. Remember that humans have continual costs and limited working hours. Why else do you think machines are replacing humans elsewhere?

      1. Lee D Silver badge

        There's a difference there...

        That's quite a reasonable mistake to make. Thinking any silver blob next to a banana turns the banana into a toaster is not.

        The human will apply the categories learned, and adjust if you say "no it's not". The AI can't without expensive retraining from scratch, and such retraining is liable to taint existing detection too. The human learns, the machine doesn't (despite the moniker "machine learning").

Everywhere I see computers replacing humans they are incredibly dumbed down and not applying intelligence at all. Supermarket checkouts... are they "guessing" users' ages like humans do? No. They need a human. You use computers and machines where you can describe the task required exactly. If you can't, you get unreliable and unpredictable results. Anywhere it matters, you have a human. Anywhere it doesn't matter (e.g. a banana factory), well, it doesn't matter. Human or computer are on a par, because the computer might be quicker but it's dumber too.

        The car park wouldn't let me out last night as it read my number plate (beginning with LL) as something else for the ticket (beginning with CL). I had to actually put the ticket into the machine.

        Pretty much this is what AI / ML / recognition has always been... works okay, but far from infallible, and only utilised where it doesn't matter about being wrong. Voice recognition literally cannot understand my voice, but all humans who speak my language can. Image recognition is essentially atrocious and easy to mislead without extra controls. Text recognition is the entire basis of using CAPTCHAs... computers are so bad at it and always have been (who actually OCRs nowadays?). Anything requiring interpretation of complex data... don't give it to a machine unless the machine is told exactly what to do.

        This is precisely why you don't want a "self-driving" car, by the way. Not that you can't make a self-driving car. But one that tries to be human to self-drive is a dangerous and unreliable beast.

        We are literally DECADES at least from any decent amount of AI, I would actually posit that we DON'T have it, in any substantial form, today. Precisely because you cannot tell what it's doing, therefore cannot control it sufficiently, therefore cannot fix it when it's wrong.

        1. Charles 9

          "This is precisely why you don't want a "self-driving" car, by the way. Not that you can't make a self-driving car. But one that tries to be human to self-drive is a dangerous and unreliable beast."

The problem with this example is that the HUMAN is a PROVEN dangerous and unreliable beast, given the spate of traffic accidents reported in the news every day. Add the human fallibilities of fatigue, drug inducement, anger, etc., and you've set a very low bar.

  13. Anonymous Coward
    Anonymous Coward

    Surely a sticker with a picture of a toaster on it would work just as well?

    1. horse of a different color

      I'm wondering why we need AI to distinguish toasters from bananas in the first place.

  14. Timmy B

    Am I the only one wondering....

    ... what happens if they put that sticker next to a toaster?

  15. Cuddles

    Malice not necessary

    "The eerie competency of machine-learning systems performing narrow tasks under controlled conditions can easily become buffoonery when such software faces input designed to deceive."

That last part should read "...when such software faces input it wasn't trained on". The fundamental problem is that machine learning relies on a very limited learning set with the assumption that it will be representative of everything it will ever encounter in the real world. Since there are effectively infinite possible images, that assumption is never true. Deliberately designing input to fall outside the trained area obviously finds issues more easily, but once we start rolling out these systems to analyse billions of daily images from security cameras, social media, and so on, such issues are going to pop up constantly even with no design involved at all.
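The "input it wasn't trained on" failure mode is easy to demonstrate. Below is a toy sketch (all features, labels, and numbers invented) of a nearest-centroid classifier: trained on a tiny set, it must still assign *some* class to input nowhere near its training data, because "none of the above" is not an option.

```python
# Toy nearest-centroid classifier trained on a tiny, unrepresentative set.
import math

# training set: (feature vector, label) -- say (yellowness, elongation)
train = [((0.90, 0.80), "banana"),  ((0.85, 0.90), "banana"),
         ((0.20, 0.10), "toaster"), ((0.25, 0.15), "toaster")]

def centroid(points):
    return tuple(sum(c) / len(points) for c in zip(*points))

centroids = {label: centroid([x for x, l in train if l == label])
             for label in {"banana", "toaster"}}

def classify(x):
    # pick whichever class centroid is nearest, no matter how far away
    return min(centroids, key=lambda l: math.dist(x, centroids[l]))

# An input wildly outside anything in the training set still gets a
# confident-looking label.
print(classify((0.5, 7.0)))  # 'banana'
```

No adversary required: the model simply has no way to say "I've never seen anything like this".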

    1. Charles 9

      Re: Malice not necessary

      So what's missing in machines that makes humans better able to work outside the box?

      1. Cuddles

        Re: Malice not necessary

        "So what's missing in machines that makes humans better able to work outside the box?"

        If I knew the answer to that, I'd be far too rich to post it here.

  16. sloshnmosh

    Only a few pixels

"I've noticed BS claims are being made (on the various tech news shows) about facial recognition again. BBC WS mentioned a system that seemed to claim pulling accurate recognition from only a few pixels. They should make it a capital offense to over-hype such things."

    You'd be surprised how much data can be obtained by just ONE single pixel:

    https://www.facebook.com/impression.php/f2441a81bf8bca8/?lid=115&payload={%22source%22%3A%22jssdk%22}

  17. Jamie Kitson

    To be fair...

that "psychedelic graphic" does look like a toaster to me, so is it really so shocking that an AI thinks so too? If anything it's showing human-like thinking.

  18. Anonymous Coward
    Anonymous Coward

    AKA

    BUY OUR GOOGLE AI INSTEAD!
