AI image recognition systems can be tricked by copying and pasting random objects

You don’t always need to build fancy algorithms to tamper with image recognition systems – adding objects in random places will do the trick. In most cases, adversarial models are used to change a few pixels here and there to distort images so objects are incorrectly recognized. A few examples have included stickers that turn …
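
The "change a few pixels here and there" attacks work by following the model's own gradients. As a rough illustration (not the method used in the research covered here), below is a minimal sketch of the classic fast gradient sign method (FGSM) in PyTorch, where `model`, `image` and `label` are placeholders:

```python
# Illustrative sketch only: the fast gradient sign method (FGSM),
# the classic "nudge every pixel slightly" adversarial perturbation.
# `model` is any differentiable classifier returning logits; `image`
# is an [N, C, H, W] tensor in [0, 1]; `label` holds true class indices.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel a tiny amount in the direction that raises the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```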

  1. Herring`

    AI Hype

    There seems to be more and more hype in the regular media recently - talking about medical, legal etc. applications. The thing that's lacking with AI: when a human makes a decision, you can ask them "what the hell were you thinking?".

    The comforting thought is that we're a long way off Skynet.

    1. Anonymous Coward
      Anonymous Coward

      Re: AI Hype

      You can ask with AI too... it would just take an extraordinarily long time, or a lot more work to get the right type of software analysis (which we don't have yet, IIRC).

      For example, in the above article: "The API could be struggling to correctly recognize the objects because it’s uncommon to see an elephant lumped in together with common items often seen in living rooms". Well, no. The API could be doing anything for any reason. Until it is checked, it could be counting pixels, checking contrast, checking shape, or just checking correlations.

      Trying to guess "why" at this point is a massive, massive error in the thought process of the programmers, trainers and reporters. :(

      It's a bit like watching a bridge supplied with poor-quality concrete crumble to the ground and going "oh, it happened on a Thursday so it must be because of the moonshine off Venus". Could we hit a new world of myths and magic based on imagined reasons for "AI" to act? ;)
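
      There are at least crude ways of "asking" what an image model was actually using: one standard probe is occlusion sensitivity, where you slide a grey patch across the image and watch the class score move. A minimal sketch, assuming a PyTorch classifier; all names here are illustrative:

      ```python
      # Sketch of occlusion sensitivity: a crude way to "ask" a classifier
      # which image regions its decision depends on. `model` returns logits
      # and `image` is a [1, C, H, W] tensor in [0, 1]; both placeholders.
      import torch

      @torch.no_grad()
      def occlusion_map(model, image, target_class, patch=16, stride=8):
          _, _, H, W = image.shape
          rows = (H - patch) // stride + 1
          cols = (W - patch) // stride + 1
          base = model(image).softmax(dim=1)[0, target_class]
          heat = torch.zeros(rows, cols)
          for i in range(rows):
              for j in range(cols):
                  occluded = image.clone()
                  occluded[:, :, i*stride:i*stride+patch, j*stride:j*stride+patch] = 0.5
                  score = model(occluded).softmax(dim=1)[0, target_class]
                  heat[i, j] = base - score  # big drop => that region mattered
          return heat
      ```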

      1. HolySchmoley

        Re: AI Hype

        >You can ask with AI too... it would just take an extraordinarily long

        >time, or a lot more work to get the right type of software

        >analysis (which we don't have yet, IIRC).

        Isn't it a matter of whether the software is designed to report on its decisions?

        IIRC, some is and some isn't.

    2. J27

      Re: AI Hype

      I blame lack of comprehension on the part of reporters for most of it. I'm constantly reading reports of malfunctioning AI being used as proof of emergent behavior or the threat of strong AI... based on a report of a very simple neural net agent.

      AI fiction is fun, but we're nowhere near that yet, guys.

    3. JeffyPoooh
      Pint

      Re: AI Hype

      Asking AI, "What the hell were you thinking?"

      Google's 'DeepDream' software (Alexander Mordvintsev's work) seems to be about halfway down the correct road to making the mysterious innards of these systems, once trained, plainly visible to their designers.

      What comes out of DeepDream (the weird images) is a clear hint of what the subject neural network has actually focussed on and "learned".

      To be clear, I'm referring to the basic concepts to be used as a possible starting point.

      We'll know that this area is mature when an AI designer is held liable for marketing a safety-critical AI system where they have no idea what's inside. In other words, there will be no excuse for what is today's SOP.
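
      The core of the DeepDream trick is gradient ascent on the input rather than the weights: start from noise (or a photo) and nudge pixels to excite a chosen unit, so you can see what it responds to. A rough sketch of that idea, with the trained model and target layer as placeholders:

      ```python
      # Rough sketch of DeepDream-style feature visualisation: optimise the
      # *input image* to maximise one channel of a chosen layer, revealing
      # what that part of the network has latched onto. `model` and `layer`
      # stand in for a trained network and one of its modules.
      import torch

      def visualise_channel(model, layer, channel, steps=200, lr=0.05):
          acts = {}
          hook = layer.register_forward_hook(lambda mod, inp, out: acts.update(out=out))
          img = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
          opt = torch.optim.Adam([img], lr=lr)
          for _ in range(steps):
              opt.zero_grad()
              model(img)                              # hook captures the activation
              loss = -acts["out"][0, channel].mean()  # ascend the activation
              loss.backward()
              opt.step()
          hook.remove()
          return img.detach().clamp(0, 1)
      ```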

  2. This post has been deleted by its author

  3. Lee D Silver badge

    AI is not "intelligent" in any way, shape or form.

    What you're making here - no matter the hype - is a statistical model trained on a very limited set of inputs (there might be 7 billion people in the world, capable of being photographed from billions of angles, wearing billions of expressions, clothes, etc., and you're not training on even 1% of that) over, maybe, a month or so.

    Then you get surprised that it can't jam every object in the universe into tight categories based on that training as well as a human who's been doing it constantly for 30+ years, with a connection of brain, intelligence and vision far beyond anything the biggest supercomputer can even approach.

    Give it up. Seriously. And the more things you teach it to recognise (i.e. not just people and elephants) the worse the problem will get because it cannot infer context like a human. The same way that we can "see" an elephant in a cloud formation, but we know it's not really a giant elephant made of water vapour.

    Even then, even with decades of training and human-matching capabilities... it's as good as a minimum wage employee. That's it.

    We do not have AI and we're not likely to get it toying about with this stuff that's been around since the 80's and hasn't significantly improved (except in the SPEED with which it will mess up) in all that time.

    1. Warm Braw

      It cannot infer context

      Indeed.

      And the biggest problem, as far as I'm aware, is that you can't selectively untrain parts of the model. If there's an irrelevant feature in the source images that happens, accidentally, to correlate with the presence of another feature, then the model is always going to have a correlation bias. Even if you detect it (and often you won't), you can't get rid of it without retraining the entire model from source images from which the irrelevant feature has been removed.

      That alone should be sufficient to refute the claim that "intelligence" of any form is in play here.
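
      That accidental-correlation bias is easy to reproduce on toy data: plant an irrelevant "watermark" feature that merely correlates with the label and a simple model will lean on it; break the correlation at test time and accuracy drops sharply. A sketch on synthetic data:

      ```python
      # Toy demo of correlation bias: feature 0 is a weak genuine signal,
      # feature 1 an irrelevant "watermark" that happens to correlate with
      # the label during training. All data here is synthetic.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 2000
      y = rng.integers(0, 2, n)
      signal = y + rng.normal(0, 1.5, n)        # noisy genuine feature
      watermark = y + rng.normal(0, 0.1, n)     # clean but spurious correlate
      clf = LogisticRegression(max_iter=1000).fit(np.column_stack([signal, watermark]), y)
      print(clf.coef_)                          # the watermark weight dominates

      # Test data where the watermark no longer tracks the label:
      X_test = np.column_stack([y + rng.normal(0, 1.5, n), rng.normal(0.5, 0.1, n)])
      print(clf.score(X_test, y))               # accuracy drops sharply
      ```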

    2. Anonymous Coward
      Anonymous Coward

      Re: Give it up. Seriously.

      Nevar! There is money to be made in books, training, conferences (cough).

    3. This post has been deleted by its author

    4. Doctor Syntax Silver badge

      "Then you get surprised that it can't jam every object in the universe into tight categories based on that training as well as a human who's been doing that for 30+ years constantly with a much higher connection of brain and intelligence and vision than anything the biggest supercomputer can even approach."

      30+ years? The human brain develops that ability very quickly. It will also very quickly extrapolate from a picture to a real elephant or vice versa depending on which it's shown first. However it has very many generations of human and pre-human evolution shaping the recognition neuro-system responsible. And yet autonomous vehicle supporters don't see this as a problem.

      1. Eddy Ito

        I'd go further and say that brains of many species have evolutionary advantages that AI simply can't match. Having spent a few days with a blind friend it's evident to me that his guide dog would likely beat most AI systems in real world scenarios. Granted I don't see the dog paying much attention to pictures of overlapping toilets however.

        1. User McUser

          "I'd go further and say that brains of many species have evolutionary advantages that AI simply can't match."

          Obligatory XKCD comic - https://www.xkcd.com/1720/

      2. John Brown (no body) Silver badge

        "And yet autonomous vehicle supporters don't see this as a problem."

        Yes, I can quite see a new "sport" beginning if autonomous vehicles ever hit our streets in any real numbers. Can I confuse a car by walking down the pavement in a particular coloured striped jersey? Maybe make it think a 35-tonne Artic is steaming towards it? Or that an elephant is crossing the road?

        For that matter, if a Tesla can't see a white lorry crossing in front of it on an upward incline such that the lorry side appears to be part of the sky, what will these cars make of the lorries with all sorts of photo-realistic scenes printed on the sides or backs? I have visions of cars crashing in a similar manner to Coyote hitting a brick wall because Roadrunner has painted a tunnel entrance there. Except we don't even need to paint a tunnel entrance. Just a few strategically placed dabs of paint and the car sees the tunnel.

        1. onefang

          "I have visions of cars crashing in a similar manner to Coyote hitting a brick wall because Roadrunner has painted a tunnel entrance there."

          I think you have stumbled on something there. Self-driving car makers need to feed their cars a steady diet of Looney Tunes cartoons.

    5. Doctor Syntax Silver badge

      "Even then, even with decades of training and human-matching capabilities... it's as good as a minimum wage employee."

      Seriously?

  4. a pressbutton

    Pretty obvious really

    Humans look at something and recognise the things in the picture

    This is done by recognising _all the things they look like_ and then choosing the most likely in the context.

    AI currently just says chair / not chair and that is why it is so easily spoofed.

    Part of the problem is - what is a chair?

    1. frank ly

      Re: Pretty obvious really

      Yes, this is the elephant in the room for AI object recognition.

    2. Lee D Silver badge

      Re: Pretty obvious really

      No, the problem is that unless you specifically tell it what to look for (i.e. an algorithm that can identify four legs meeting a seat at right-angles, etc.) then it's picking up arbitrary correlations that you have zero insight into or control over.

      It could be recognising bananas by the fact they have 10 yellow pixels, that there's a curve, that they have a blue sticker on them, or any of a billion indescribable criteria that no human would ever attribute as the "essence" of a banana. And you have no (reasonable) way of telling what those criteria are, of modifying them without literally shoving your hand in its brain and wiggling it about, or of determining what criteria it'll modify that detection with when you next train it on an image.

      For all you know, it's training itself on the (C) Getty Ltd copyright notice in the bottom-right-hand corner, not the photograph at all, and has just got lucky enough that you think it's detecting bananas.

      While such AI is nothing more than throwing a box of papers at a shredder and hoping the one bit of information you want comes out intact, you have no control over what's coming out of it, and thus you get whatever nonsense you're given.

      In a million years of training a "conventional" AI, you'll never get it trained on something like this. And you'll never understand it well enough to rely on it, and then you'll never get it trained on something new without a million years of "untraining" on what is a banana and what's a Cavendish.

      1. The Nazz

        Re: Pretty obvious really

        Re: Lee D and his bananas.

        Google's latest effort at recognition: Gorilla eating banana.

        Is that a gorilla that feeds on bananas generally, a gorilla currently eating a banana, or something else?

        1. onefang

          Re: Pretty obvious really

          "Is that a Gorilla that feeds on bananas generally, a Gorilla currently eating a banana or something else."

          It might be a banana in the gorilla's pocket, or he's just glad to see me.

      2. John Brown (no body) Silver badge

        Re: Pretty obvious really

        "No, the problem is that unless you specifically tell it what to look for (i.e. an algorithm that can identify four legs meeting a seat at right-angles, etc.) "

        And, of course, there is just so much more to a chair than that. Just look at some of the really weird designs for chairs, especially some of the concept stuff from the 60s. Even some of the most outlandish would be instantly recognisable to a human as something to sit on, but no current "AI" could come even close. I've certainly seen one that this AI would probably, with high confidence, class as a banana rather than a chair.

    3. Chris G

      Re: Pretty obvious really

      "Part of the problem is - what is a chair?"

      You can Google that question and make sense of the response; a neural network cannot. If you do Google chair images, there are a surprising number of elephant chairs.

      All of these systems are far too limited to be genuinely meaningful; computing power is still a couple of orders of magnitude away from a pigeon brain, and even then no one is completely sure how even a pigeon brain works.

      The biggest danger with so-called AI is that too many people who have no clue about its severe limitations are being taken in by the hype and want to implement AI in situations that can impact people's lives or health.

      A lot of the pseudo-technical press is responsible for a lot of the hype, probably because many of their journos are not that savvy either.

      1. Doctor Syntax Silver badge

        Re: Pretty obvious really

        "i.e. an algorithm that can identify four legs meeting a seat at right-angles, etc."

        Before that you need algorithms that can identify legs and seats. You also need to take account of the fact that they don't usually meet the seat at right angles: a Windsor chair has splayed legs, sabre legs don't meet at right angles either, and as for cabriole legs....

        1. Eddy Ito

          Re: Pretty obvious really

          Then there's the problem of rotating the chair so it only presents three legs to the AI camera: the chair can only be seen from the rear, and the two far legs line up just well enough to hide behind the near ones. Plus a whole bunch of other perspective issues most people won't have difficulty with.

    4. Anonymous Coward
      Anonymous Coward

      Re: what is a chair?

      A chair is the highest officer of an organised group.

      1. DCFusor
        Joke

        Re: what is a chair?

        I've seen pictures (or it didn't happen) of people sitting on elephants... thus the AI just got lucky: it IS a chair.

    5. Doctor Syntax Silver badge

      Re: Pretty obvious really

      "Part of the problem is - what is a chair?"

      There's one of AI's problems. It's never sat in a chair.

    6. Steve Aubrey
      Joke

      Re: Pretty obvious really

      "Part of the problem is - what is a chair?"

      I think you folks are taking this too seriously. I looked at the bottom of the pictures, down in the corner, and it says

      "Ceci n'est pas une chair"

      So the AI either has a sense of humor, or it has internalized Magritte.

  5. Marcus Fil

    The elephant in the room!

    Yes, the same kind of AI coming to an autonomous driving system near you. Present it with something unexpected and watch it fail. Best piece of advice I have ever received (from my second driving instructor): "Never hurry into a situation you don't understand". The problem with most AI is that it resolves to a single decision layer - it lacks the meta to understand it does not understand. Worse yet, the AI engineers I have met lack the appreciation that the human brain is thinking at multiple levels - most of them concerned with ongoing survival and well-being. The good news: every time an AI system is shown to be broken, we, the humans, learn something about our own limitations.
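
    "Lacking the meta to understand it does not understand" can at least be crudely mechanised: make the system abstain whenever its own confidence proxy is low. The maximum softmax probability is one common, and admittedly imperfect, proxy; the model and threshold below are illustrative placeholders:

    ```python
    # Sketch of a crude "know when you don't know" wrapper: abstain when
    # the max softmax probability falls below a threshold. Imperfect
    # (networks can be confidently wrong) but it gives a fail-safe path.
    import torch

    def classify_or_abstain(model, image, threshold=0.9):
        probs = model(image).softmax(dim=1)
        conf, pred = probs.max(dim=1)
        if conf.item() < threshold:
            return None  # defer: stop the vehicle, ask a human, etc.
        return pred.item()
    ```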

    1. Zog_but_not_the_first
      Thumb Up

      Re: The elephant in the room!

      Absolutely! I'm enough of a techno-optimist to believe that we will develop reliable systems for these kinds of tasks ONE DAY, but the headlong rush into "autonomous" vehicles troubles me.

      It has the feel of a bubble, with technology companies and car manufacturers falling over themselves and each other to be the next best tulip/web company/sub-prime mortgage supplier.

    2. Trollslayer
      Thumb Up

      Re: The elephant in the room!

      "it lacks the meta to understand it does not understand".

      Best description I have seen about autonomous vehicles.

    3. SVV

      Re: The elephant in the room!

      The AI could, with enough negative feedback to train it, learn to stop recognising lots of things as an elephant. You could then have more confidence in your self-driving car, until the disastrous day you decide to visit a safari park and your car heads for what it thinks is a motorway at high speed.

      1. John Brown (no body) Silver badge

        Re: The elephant in the room!

        "until the disastrous day you decide to visit a safari park and your car heads for what it thinks is a motorway at high speed."

        My wife and I visited Longleat a couple of months ago. I wonder how that would work with a self-driving car? Apart from the monkeys jumping on the car and snagging a ride, crossing the road in front of the cars, etc., how do we get it to stop/start/go dead slow as we look at the animals? I wonder if any of the autonomous car developers have considered anything other than getting from A to B on city streets and motorways yet?

        (Yes, we go to places like that off-season specifically so we can go slow and stop when we feel like it)

    4. Doctor Syntax Silver badge

      Re: The elephant in the room!

      "from my second driving instructor"

      Curious minds want to know what happened to the first.

    5. Herring`

      Re: The elephant in the room!

      "Present it with something unexpected and watch it fail."

      As has been pointed out on The Reg and elsewhere, self-driving cars will (in the near future) inevitably get into situations where they can't cope with the input. The manufacturer has to make a decision as to whether, in this situation, it fails safe and stops (rendering it useless) or fails the other way and goes (rendering it dangerous). The only other option is to completely change the urban environment (e.g. cordoning off pedestrians) which just isn't going to fly.

      1. Chris G

        Re: The elephant in the room!

        @Herring "(e.g. cordoning off pedestrians) which just isn't going to fly."

        Without cordoning off pedestrians, there is a good chance they will fly, two impacts for the price of one.

      2. John Brown (no body) Silver badge

        Re: The elephant in the room!

        "The only other option is to completely change the urban environment (e.g. cordoning off pedestrians) which just isn't going to fly."

        Especially with the current trend for "shared space" where people, cyclists and cars all have equal priority and there are no kerbs to separate roads and pavements. Something the sight-impaired are already having problems with now, without so-called AI car drivers in the mix.

    6. Orv Silver badge

      Re: The elephant in the room!

      "Never hurry into a situation you don't understand"

      Pilots face an especially acute version of this, because when you're flying, you can't pull over and stop until you figure things out. It's easy to get "behind the airplane." The only saving grace is in the air there are fewer things to hit; still, pilots are generally trained to try not to let an airplane take them anywhere their mind didn't get to five minutes earlier.

      Mind you, pilots can also get lost on the ground at large airports, with occasionally disastrous results, which prompted one instructor to say, "if you're taxiing and things aren't making sense, set the parking brake until they do." ;)

  6. tiggity Silver badge

    elephant in the room

    Not that uncommon.

    I have visited houses with ceramic elephant ornaments on shelves, a plate on the wall that featured an image of an elephant, and many a friend's child who had toy zoo animals (these were "realistic" elephants, as opposed to cartoony-looking ones, so would be equivalent to a photoshopped-in image of an elephant).

    It's quite legit that someone could hold an elephant ornament on their head, as all sorts of random behaviour occurs in rooms, because people do silly stuff.

    Not a very useful "AI" - you could expect an image of just about anything in a room, as there's an almost limitless variety of "ornaments" on display in people's homes.

    Give the AI a photo taken in the room of an avid collector of stuff on a theme (be it pigs, pokemon, disney, etc.) and see how confused it gets...

    1. John Brown (no body) Silver badge

      Re: elephant in the room

      "Not a very useful "AI" - you could expect an image of just about anything in a room as an almost limitless variety of "ornaments" on display in peoples homes."

      That was my thought too. A main living room in almost any house is always going to have something in it you've likely never seen before, but as a human, you'd probably be able to identify it almost instantly, even in the background "clutter" of the rest of the scene. From here, I can see a pair of stylised china Siamese cats from Thailand, a pair of rabbits made from seashells bought in Mauritius, a wooden carved elephant from Nairobi, a touristy "Faberge egg" from Moscow and a Welsh slate clock that's basically a grey square with pointers, no numbers or other markings. I'm not sure what any AI would make of most of those.

      Oh, nearly forgot: a Triceratops too. (Proper, realistic-type thing, not a kids' toy!)

  7. Pete 2 Silver badge

    Come back in 20 years!

    > Arguably, it is too much to expect a network which has never seen a certain combination of two categories within the same image to be able to successfully cope with such an image

    Although, equally arguably, few people will ever have seen that combination either. The problem, it seems to me (with no experience of image recognition software), is that the systems are pretty crap at recognising anything and rely too much on "tricks" such as context to turn their guesses into even vaguely credible "image may contain ..... " analyses.

    Most people would start by looking at the picture as a whole. In this case the interior of a room. They would identify it as such and then work down, from the big things to the little things. It does seem to me that the identification process employed here is simply not up to the standard necessary to contribute anything useful.

  8. onefang

    If I was to see an elephant floating around my lounge room, I'd start disbelieving my eyes too.

    1. Doctor Syntax Silver badge

      "If I was to see an elephant floating around my lounge room, I'd start disbelieving my eyes to."

      It's not looking at a room. It's looking at a picture of a room and you'd have no problem recognising an elephant in that context.

      Ceci n'est pas une pipe.

  9. Tromos

    The intelligence is artificial. The stupidity is all too real.

    This isn't AI, it's just pattern recognition with the usual failings as soon as there's an unexpected item in the bagging area.

  10. David Roberts
    Boffin

    Not just AI

    The invisible gorilla experiment shows that humans are also easily confused and don't always identify objects in a field of view.

    1. Allan George Dyer

      Re: Not just AI

      I've never seen anyone confused by the invisible gorilla experiment. Surprised, yes, but not confused.

      The "error" is quite different. The observers don't suddenly mis-classify existing objects when the unexpected object appears (e.g., think the ball is a coffee when the gorilla enters), they simply do not consciously notice it. This suggests we have the ability to turn on a filter below consciousness level when asked to focus on a task.

      If observers were asked to identify all the objects in the video, there would be no mistake... but a lot of different answers (do people and gorillas count as objects? are clothes separate objects to people? etc.)

      I'd like to suggest that the AI is at a disadvantage because it has no idea what an object is. In infancy, we spend a lot of time trying to touch things we see, and put them in our mouths, relating our vision to other senses. From this, we build our concept of objects and then we cannot be fooled by an elephant image pasted over someone's head.

    2. Doctor Syntax Silver badge

      Re: Not just AI

      "The invisible gorilla experiment shows that humans are also easily confused and don't always identify objects in a field of view."

      The confusion is between "field of view" and "area of focus".

      1. John Brown (no body) Silver badge

        Re: Not just AI

        The confusion is between "field of view" and "area of focus".

        Spot on. Most people don't realise that the human eye is not actually all that good at seeing. It sees a very small field of view in focus and sort of sees everything else out of focus. What the human brain is very, very good at is telling the eye to keep roving around all the time and then stitching together all those little in-focus bits into a larger whole. The brain is also incredible at pattern recognition, matching something new you see with past experience at a rate even the fastest computers can only dream of. Sometimes incorrectly at first, but then refining the definition on closer inspection.

  11. Draco
    Windows

    This reveals interesting insight into the behaviour of "neural net" image classifiers.

    It is a given that the networks have no "understanding" of what they are classifying. The wisdom being that there is no need to understand - simply fling enough images at it and it will "learn" how to correctly classify cats (if you don't like cats, substitute motorcycles, mountains, tumours, people, whatever).

    We now see that these classifiers are not learning what a "cat" is, rather they are learning the types of images in which cats appear - in other words: cat in a context. Change the context and it mis-classifies.

    The "obvious" solution seems to be that the neural nets need to segment images into distinct objects and then classify the objects. This is not a trivial problem.

    1. Orv Silver badge

      "We now see that these classifiers are not learning what a "cat" is, rather they are learning the types of images in which cats appear - in other words: cat in a context. Change the context and it mis-classifies."

      This has cropped up dramatically in some instances. For example, scientists training a neural net to recognize skin cancers discovered they had instead trained a ruler detector -- images of cancerous lesions almost always have a ruler for scale.

      Google's "deep dream" experiments also showed that a neural net trained to recognize barbells considered the beefy arm attached to them to be part of the object.

      The "obvious" solution seems to be that the neural nets need to segment images into distinct objects and then classify the objects. This is not a trivial problem.

      Indeed, that's the "general vision problem" that has stumped AI researchers since 1966, one of the great unsolved "hard" problems of computer science.

  12. JeffyPoooh
    Pint

    Death to twins in crosswalks...

    El Reg, "Adding the same objects already in the image also has the same effect."

    Identical twins in matching clothing in crosswalks...

    It's gonna be a massacre.

  13. Jay Lenovo
    Coat

    New Incognito Window

    Jack Hanna's secret ability to avoid Facebook auto-tagging is now uncovered.

  14. a pressbutton

    I can see the future

    In 2022 you will see people wearing well-crafted images on their t-shirts that have the effect of causing any autonomous vehicle to halt.

    Of course Toyota (other manufacturers are available) will then update their firmware.

    And then someone will come up with a new image. (The image will be subject to 1st amendment rights - you cannot stop this)

    new t-shirts will be produced.

    Rinse.

    Repeat.

    1. J27

      Re: I can see the future

      Nah, we just need software re-programmable shirts.

    2. Crazy Operations Guy

      Re: I can see the future

      New t-shirts aren't needed. One of my coworkers has a fairly large collection of punk-band t-shirts with morphed faces and other imagery on them. Those shirts will screw up AIs without a problem.

      Those photo t-shirts you get at the mall seem to work quite well themselves. A former classmate of mine wears t-shirts with a photo of the FBI's most-wanted poster for that week, and it constantly triggers the AI systems at various venues that have implemented them (they actually work for the FBI and are doing it to increase awareness).

    3. onefang
      Coat

      Re: I can see the future

      "new t-shirts will be produced."

      "Rinse."

      The update cycle will probably be short enough that you don't need to actually wash these t-shirts, just buy the new ones. So no rinsing needed.

      I'll get my t-shirt with a photo of a coat on it.

  15. Anonymous Coward
    Anonymous Coward

    Demonstrates the fragility

    Of the neural network approach of today's "AI". You can train a neural net on stuff, and it will eventually do a good job on similar stuff. Toss it an outlier and you're likely to get garbage back as a result.

    Fills you with confidence about the future ability for autonomous cars to handle exceptional cases, doesn't it?
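
    The outlier failure is easy to demonstrate with even the simplest probabilistic classifier: a linear model extrapolates its decision boundary forever, so a point unlike anything in the training data still gets an extremely confident (and meaningless) answer. A synthetic sketch:

    ```python
    # Toy demo: a classifier trained on two tidy clusters answers with
    # near-certainty for a point nothing like its training data, instead
    # of flagging it as an outlier. All data is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
    y = np.array([0] * 200 + [1] * 200)
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    outlier = np.array([[80.0, -95.0]])   # far from both clusters
    print(clf.predict_proba(outlier))     # close to [1, 0]: confident garbage
    ```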
