You should find out what's going on in that neural network. Y'know they're cheating now?

Neural networks – the algorithms that many people think of when they hear the words machine learning – aren't very good at explaining what they do. They are black boxes. At one end you feed in training data such as a set of cat pictures and a set of non-cat pictures, the neural net crunches the data to produce a statistical …

  1. S4qFBxkFFg

    "Seltzer cites an example of a model that noticed asthma patients were less likely to die of pneumonia. The model incorrectly assumed that asthma protected pneumonia patients based on that data.

    "The problem is that if you show up at an ER and have asthma and pneumonia, they take it way more seriously than if you don't have asthma, so you're more likely to be admitted," she says. These patients did better than average because doctors had treated them more aggressively, but the algorithm didn't know that."

    Er - in this case asthma does protect pneumonia patients - through the mechanism of aggressive hospital treatment from doctors when asthma is present.

    The model is correct.

    1. Lee D Silver badge

      "Correct", "useful" and "misleading" are very different things.

    2. Filippo Silver badge

      That's the first thing I thought. This is yet another instance of the good old caveat: a computer does what you say, not what you want.

    3. James 51

      The model is not correct. It is the treatment that protects the patient, not having asthma. According to the model, if you have asthma and pneumonia you're less likely to die even if you do nothing, which is not the case.

      1. codejunky Silver badge

        @ James 51

        "According to the model if you have asthma and pneumonia you're less likely to die if you do nothing which is not the case."

        Based on the information it is given it is a correct assumption. Without further information, we believed the sun went around the earth. When a set of circumstances arises and the outcome seems consistent, we assume correlation = causation. It is a lack of input information that leads to the wrong conclusion.

        Oddly, applying the model would then show up the problem, as the increased danger of asthma plus pneumonia would be identified, assuming the model was to learn from the new data.

        1. James 51

          Re: @ James 51

          And a broken clock is right twice a day. It is still broken. Without understanding the data we are feeding the models, we can't understand what they output, and, as you point out, people would have to die to change the model, assuming that the model continues to learn and isn't sold as a static version that just keeps killing people.

        2. Jaybus

          Re: @ James 51

          "Based on the information it is given it is a correct assumption."

          No. Correlation does not imply causation. This is the reason for the addition of rule lists and such into the algorithms being studied at DARPA and elsewhere.

          1. Freddie

            Re: @ James 51

            But this isn't correlation, is it?

            The asthma caused the medical team to be more attentive

            That extra medical attention caused the improved pneumonia outcomes

            Therefore, asthma is on the causal pathway, no? A correlation would be having an inhaler, which is only correlated with having asthma, rather than causing it, and so is not on the causal pathway, if I understand correctly.

      2. Roland6 Silver badge

        >The model is not correct.

        The model was correct, because treatment was not part of the model!

        However, what we have learnt are the limitations of the model, which, given the statistical basis of such models, is to be expected... "There are three kinds of lies: lies, damned lies, and statistics."

        1. Grikath

          What Roland6 says...

          When it comes to biology, which incidentally includes the medical profession as one of the applied areas of the science, you have to rely heavily on statistical techniques and models. In a good curriculum students are taught to be very wary of the results of models, because especially in biology you can be pretty much sure you Do Not Know all the variables, and there's a good chance the one you missed will trip you up in your model. And part of the curriculum used to be set up so that you would fail at correctly analysing a setup, to drive the lesson home and save you from future embarrassment.

          Unfortunately this practice seems to have declined, especially in the applied fields, so you get models like this, where the likelihood/level of treatment applied per situation was omitted. You might as well start farming expecting "climate" and ignoring "weather".

          Models are only good if all relevant variables, and their interdependencies, are correctly mapped and entered. And in biology, even then a butterfly need only flap its wings .....

  2. Whitter

    Hot fuzz

    It does look like we are going the way of always "fuzzing" our input data (repeatedly running the network on slightly modified data) to obtain some input-specific evaluation of result stability, in addition to the validation set test metrics of general accuracy.
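
    For anyone wondering what that looks like in practice, here is a minimal sketch of input fuzzing, assuming a scikit-learn-style classifier exposing predict_proba and a flat feature vector; the function name, noise scale and trial count are all illustrative, not a recipe.

    ```python
    # Minimal sketch: fuzz one input with small Gaussian noise and measure how
    # often the predicted label survives. Assumes `model.predict_proba(batch)`
    # returns per-class probabilities (scikit-learn style) and `x` is a 1-D
    # feature vector; all names and constants here are illustrative.
    import numpy as np

    def prediction_stability(model, x, n_trials=100, noise_scale=0.01, seed=None):
        """Fraction of noisy copies of `x` that keep the original predicted label."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x, dtype=float)
        original = np.argmax(model.predict_proba(x[None, :])[0])

        noisy = x[None, :] + rng.normal(0.0, noise_scale, size=(n_trials, x.size))
        labels = np.argmax(model.predict_proba(noisy), axis=1)
        return float(np.mean(labels == original))
    ```

    A score near 1.0 suggests the prediction is locally stable; a low score means tiny input changes flip the decision, which is exactly the kind of input-specific check that validation-set accuracy alone won't give you.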

  3. steelpillow Silver badge

    recursive obscurity

    Neural nets are like people, "I don't know how I came up with it, I just did" is an intrinsic characteristic of both.

    If we analyse and understand how a particular neural net has learned its stuff, the next-generation net will just use that knowledge to drive the next level of impenetrable learning.

    1. Paul 195

      Re: recursive obscurity

      > Neural nets are like people, "I don't know how I came up with it, I just did" is an intrinsic characteristic of both.

      Not quite true - if you ask someone how they came up with the idea for a song or a novel, you might get that answer. But if you ask a doctor why they made a particular diagnosis (for example), they will be able to explain their reasoning. For a lot of the classification type tasks AI is being used for, explaining their reasoning is very useful.

  4. Anonymous Coward
    Anonymous Coward

    I remember one (though probably apocryphal) story told about the early days of neural nets. The military of some country or other wanted to find out if a neural net could help detect camouflaged objects such as tanks that were hidden behind, say, vegetation. So the AI bods took a bunch of pictures at the edge of a forest both with and without a tank hiding out behind the trees and then fed them into the neural net, and fairly soon, the system was able to detect the presence or absence of a tank with pretty much 100% accuracy.

    A demonstration was duly arranged for the top brass showing what had been accomplished, so the system was set up in the same location and totally failed to detect the presence of the tank. On actually looking at the net that had been constructed, it became obvious that all it was doing was seeing how dark or light the picture was in order to flag the presence or absence of a tank. Mystified, they looked at the training photographs and realised that it had been cloudy when they took the pictures with no tank, but by the time the tank had been positioned, the sun had come out.

    1. Rob D.
      Thumb Up

      Lost in the mists of time

      The best I've found in the past on this is at https://www.gwern.net/Tanks which includes a link to https://www.gwern.net/docs/ai/1964-kanal.pdf.

      The paper describes a study using pictures of tanks and explains the very promising results ("The experimental performance of the statistical classification procedure exceeded all expectations") though it does also acknowledge some limitations ("we feel that the sizes of the design sample and especially the independent non-tank test sample are considerably smaller than we would like them to be").

      There is then, separately and later, a claim that Edward Fredkin challenged the outcome at a conference during the early 60s, observing in discussion that the model could simply have been distinguishing between sunny and cloudy days based on the photos shown.

      But I like the story anyway and use it regularly!

      1. Anonymous Coward
        Angel

        Re: Lost in the mists of time

        How can the 'non tank' sample be smaller than they would like, given that 99.9999% of photos ever taken do not include a tank?

        1. getHandle
          Joke

          Re: Lost in the mists of time

          Maybe they had to be careful with the 'non tank' sample in case there was a tank hidden in some of them and they couldn't spot it...

    2. James 51
      Windows

      The version I heard of that story was that the photos of Russian equipment were poor quality, long shots, blurry, that kind of thing, and the photos of NATO equipment were in focus with good white balance etc etc. The system learned to tell if shots were in focus, not blurry etc etc; it ignored the military hardware in the shot.

      1. Roland6 Silver badge

        This talk about the neural network inferring the incorrect relationship reminds me of dog training! One of the hardest commands for dogs to grasp is 'heel'; having taught dogs to walk to heel, I fully understand. And with dog training you are working with an intelligent being, not a dumb computer...

    3. DCFusor

      An extremely similar anecdote appears in one of Timothy Masters' books on neural nets.

      While the 'net can do amazing things, the slightest input bias that makes it easy to "cheat" is learned, because all we tell it when training is what the error was on this test data set...

      I think it was one of his early books: http://www.timothymasters.info/my-technical-books.html

      Probably the first one.

      So much of what's done (wrong) today ignores his fine work on the topic. And this article repeats quite a bit of what he said a couple (few) decades ago....

      He discusses the problems of knowing what's going on in there in considerably more depth if anyone is interested. It's staggeringly hard since a bunch of small weights arranged just so are just as important as a few big ones.

  5. Rob D.
    Thumb Up

    Model 1, Humans 0

    > The model incorrectly assumed that asthma protected pneumonia patients

    The models make no assumptions. It is the humans interpreting the output of the models that make the assumptions.

    This kind of statement underlines the problem this article is talking about - humans implicitly imbue 'The Model' with an intelligence it simply does not have and never exhibited. The model predicted survivability based on input parameters - it did not, and as far as the original researchers are concerned was never meant to, make any kind of association between the medical characteristics of those parameters and any of the outcomes.

    Given that making stuff up and posting it online is enough to get a decent proportion of people to believe something, I don't think the problem of a 'clever machine doing clever stuff' being viewed as actually clever is going away soon.

    Fascinating topic though. Original study for the pneumonia model work is available at http://repository.cmu.edu/philosophy/288/.

  6. Anonymous Coward
    Anonymous Coward

    Wolf or Husky?

    "Singh was using a neural network discern wolves from huskies"..."One type of animal usually cropped up against a snowy background, so the model had classified pictures based on which pictures had snow and which didn't,"..."When we hid the wolf in the image and sent it across, the network would still predict that it was a wolf, but when we hid the snow, it would not be able to predict that it was a wolf anymore,"

    And there was me thinking the Huskies would be the animal always pictured with the snow...
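
    The hiding trick described in that quote can be sketched as a simple occlusion test; the code below assumes a hypothetical classify(image) stand-in returning the probability of "wolf" for an image held as floats in [0, 1], and the patch size is arbitrary.

    ```python
    # Sketch of an occlusion test: grey out one patch of the image at a time and
    # record how much the "wolf" probability drops. `classify(image)` is a
    # hypothetical stand-in for whatever model is under test; the image is
    # assumed to be an H x W x 3 float array in [0, 1].
    import numpy as np

    def occlusion_map(classify, image, patch=32, fill=0.5):
        """Per-patch drop in predicted probability when that patch is hidden."""
        h, w = image.shape[:2]
        baseline = classify(image)
        drops = np.zeros((h // patch, w // patch))

        for i in range(h // patch):
            for j in range(w // patch):
                occluded = image.copy()
                occluded[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch] = fill
                drops[i, j] = baseline - classify(occluded)
        return drops
    ```

    If the biggest drops sit over the snow rather than over the animal, you have reproduced exactly the husky/wolf failure the article describes.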

    1. Roland6 Silver badge

      Re: Wolf or Husky?

      >And there was me thinking the Huskies would be the animal always pictured with the snow...

      Thinking about this, the wolf sequence in The Bourne Legacy and The Hitchhiker's Guide to the Galaxy... I suspect the neural network would be a bit like the whale: I see an animal in the snow, I'll call it a husky, I wonder if it will be friendly...

  7. A Non e-mouse Silver badge

    GIGO

    It all sounds like a case of GIGO: garbage in, garbage out.

    If you don't truly understand the data you're feeding into your model, you're never going to get anything meaningful back out.

    As any seasoned programmer will tell you: Know and understand your inputs! This applies to any system: From neural nets all the way down to simple sed commands.

    1. juice

      Re: Gigo

      Not really - the information provided to the neural net was correct, it was just overly limited.

      Or to put it another way, a set of incorrect assumptions was generated due to the provision of limited information.

      If there had been more variety in the training data, this issue wouldn't have occurred.

  8. TRT Silver badge

    Sounds like a very complex problem...

    Perhaps it lends itself to inspection by an AI ML expert system.

  9. Christian Berger

    Such problems were known _way_ before the current hype

    The early 1990s TV documentary series "The Machine That Changed the World" already covered this as a problem with neural networks, using the example of a tank-detecting network trained with pictures of tanks during good weather and pictures without tanks during bad weather. It trained on the weather instead of the tanks.

    BTW that series only mentions the Internet once, and only in passing.

    1. Lee D Silver badge

      Re: Such problems were known _way_ before the current hype

      This is precisely the problem.

      What "neural nets", "machine learning", etc. are actually doing is "brute force to find a set of conditions that result in the desired criteria the majority of the time".

      - "That's a banana."

      - "Okay. All bananas are 400 pixels wide."

      - "That's NOT a banana."

      - "Okay. Most bananas are 400 pixels wide but they all have a white pixel in the top left."

      - "That's a banana"

      - "Okay. Most bananas have a white pixel in the top-left, are 400 pixels wide, and look a bit yellowy overall".

      - "That's a banana"

      - "Okay. Most bananas have a white pixel in the top-left, are 400 pixels wide, and look a bit yellowy overall, and say Getty Images on them".

      ...

      And so on. In between, there is NO introspection into the criteria that are being inferred upon. And it will fit the training data, for the most part. And if the training data is large, you might get lucky and it might be useful enough to put a set of ears on a webcam image in roughly the right place. But the training data can't be COMPLETE and so you cannot use it with any surety. This is why "machine-learning AI self-driving cars" are basically suicide-murder-boxes.

      Not only that, they plateau quickly because they can't "unlearn" those early basic assumptions (because you can't even tell what they were, let alone the net itself!), so trying to train it to recognise planes and/or apples without literally starting from scratch is almost a complete waste of time.

      Say it takes 1,000,000 pieces of training data to recognise a banana... it surely takes 10-100m pieces of training data to "untrain" it or retrain it to also recognise other things, and what's "not a banana". Literally, you have to show it enough examples of "not a banana" for it to then be retrainable on "is an apple" without just assuming everything that's not a banana is an apple.

      "AI", "machine learning", "neural nets" are all toys. Sure, they can do some funny things if you let them, but they are uncontrolled, uncontrollable, single-purpose toys.

      At no point has anyone made an AI that literally can say "Hold on, so that's not a banana? But I was using this criteria. Can you tell me what the difference is between something that meets all this criteria but isn't a banana?". And yet that's a classification game we play with kids in primary school, where you make a "20-questions" like tree to identify species, etc.**

      The day we have an AI breakthrough is the day we have a computer that you program/operate by just clicking at the screen, and a big "Yes/No" switch to tell it off until it understands what it is that you want it to do.

      Clicks icons.

      Loads up PDF in Microsoft Reader.

      Hastily press the NONONO! button.

      It reverts back a bit, closes Microsoft Reader, opens it in Notepad.

      Hastily press the NONONO! button.

      It reverts back a bit, closes Notepad, opens it in Foxit.

      Press the Yes! button. Now it knows what you wanted, and changes your settings to reflect that.

      To program:

      Hold down the "programming shift modifier" key.

      Click button on screen.

      "Alright, so what do i do when you click that?"

      Click-and-drag to the printer icon on the desktop.

      "Ah, right, so you want me to print something when you click that button".

      Press the Yes! button.

      "Oh, I'd print this... <shows screenshot>"

      Press No button.

      Hold programming modifier.

      Click-and-drag around the current window in the screenshot.

      Press Yes button.

      (** Ironically, one of the early pieces of programming I can remember doing was a game where you give the computer the names of two objects, it asks you for a question that would distinguish them, then you give it another object, it runs through the questions, and builds the tree as you answer Yes/No to the distinguishing questions. Each time it ends up at something it doesn't know, you get to type in a question to distinguish it from the nearest thing. Does it have four legs? Does it live underwater? etc.

      The computer had no intelligence, but you classified things by having it demand a distinguishing question. And after sufficient such training, it could play a decent game of 20 questions, with room for up to 2^20 possible objects!)
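
      For the curious, the game described in that footnote fits in a few lines; this is a hedged reconstruction from the description above (the seed object, prompts and wording are invented), not the original program.

      ```python
      # Reconstruction of the guessing game described above: the program starts
      # knowing one object and grows a yes/no decision tree from the player's
      # own distinguishing questions. All prompts and the seed object are made up.
      class Node:
          def __init__(self, text, yes=None, no=None):
              self.text, self.yes, self.no = text, yes, no  # leaf when yes/no are None

      def ask(prompt):
          return input(prompt + " (y/n) ").strip().lower().startswith("y")

      def play(node):
          if node.yes is None:                              # leaf: make a guess
              if ask("Is it a " + node.text + "?"):
                  print("Got it!")
                  return
              new_obj = input("What was it? ")
              question = input("Type a question that is 'yes' for a " + new_obj +
                               " and 'no' for a " + node.text + ": ")
              # Turn the leaf into a question node distinguishing the two objects.
              node.yes, node.no = Node(new_obj), Node(node.text)
              node.text = question
          elif ask(node.text):
              play(node.yes)
          else:
              play(node.no)

      if __name__ == "__main__":
          tree = Node("cat")                                # the article's favourite class
          while True:
              play(tree)
              if not ask("Play again?"):
                  break
      ```

      As the poster says, all the "intelligence" is supplied by the human typing in the distinguishing questions; the program just stores the tree.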

      1. John 110

        Re: Such problems were known _way_ before the current hype

        I think your early game was "Pangolins" and it was in the ZX Spectrum programming manual (Y/n)

      2. TRT Silver badge

        Re: Such problems were known _way_ before the current hype

        It's a small, off-duty, Czechoslovakian traffic warden.

  10. Anonymous Coward
    Anonymous Coward

    E.g. The infamous 'Compas' black box sentencing tool

    There was one report of a black box sentencing tool that had learned to be a racist.

    You know, the weighting factors in a neural net could be clocked out and put into a spreadsheet. Then they might be reviewed, simplified, analyzed, plotted out, etc. There's nothing in a neural net that couldn't be replicated in a spreadsheet. How come nobody knows this?
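
    The mechanics described there really are trivial for a small network; the sketch below assumes a Keras-style model exposing get_weights() (a list of NumPy arrays) and made-up file names. Whether a human can then make sense of the result is the point taken up in the replies below.

    ```python
    # Dump every weight/bias array of a (small) Keras-style model to CSV so it
    # can be opened in a spreadsheet. get_weights() returns a list of NumPy
    # arrays; the file names are invented for illustration.
    import numpy as np

    def dump_weights_to_csv(model, prefix="layer"):
        for idx, array in enumerate(model.get_weights()):
            flat = array.reshape(array.shape[0], -1)   # flatten to 2-D for a sheet
            np.savetxt(f"{prefix}_{idx:02d}.csv", flat, delimiter=",")
    ```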

    1. isogen74

      Re: E.g. The infamous 'Compas' black box sentencing tool

      The AlexNet image classifier (now considered relatively simple) contains around 60 million parameters across its eight learned layers. How on earth do you intend to review that in a spreadsheet...?

      1. Grikath
        Devil

        Re: E.g. The infamous 'Compas' black box sentencing tool

        Excel-fu, of course. Summarised for manglement in a pie chart during a cozy powerpoint presentation.

      2. Anonymous Coward
        Anonymous Coward

        Re: E.g. The infamous 'Compas' black box sentencing tool...

        (See title) ...or similar tools have merely a handful of inputs. Well within range of manual checking, if the designers were competent. Racist AI sentencing tools are the product of human incompetence.

        For vision AI systems (where the numbers tend to skyrocket because of: pixels), use the visualisation techniques demonstrated by Google's DeepDream. This approach would allow competent systems engineers to actually see what their otherwise black-box neural network has learned. Anyone involved with an uninspected black box is incompetent and must stay well away from safety-critical systems. Otherwise they'll kill innocents.

        If you read about self driving vehicles crashing straight into obvious obstacles, then you've got your examples.
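
        For the record, the DeepDream-style inspection mentioned above boils down to gradient ascent on the input image; a rough PyTorch sketch follows, with the model choice, layer index and hyperparameters purely illustrative.

        ```python
        # Rough sketch of feature visualisation: start from noise and push the
        # input image to maximise one channel's activation in a pretrained
        # network, to see what that unit has actually learned to respond to.
        # Model, layer index and hyperparameters are illustrative choices.
        import torch
        from torchvision import models

        convnet = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

        def visualise_channel(layer_idx, channel, steps=100, lr=0.05):
            img = torch.rand(1, 3, 224, 224, requires_grad=True)
            optimiser = torch.optim.Adam([img], lr=lr)
            for _ in range(steps):
                optimiser.zero_grad()
                x = img
                for i, layer in enumerate(convnet):
                    x = layer(x)
                    if i == layer_idx:
                        break
                loss = -x[0, channel].mean()   # maximise this channel's mean activation
                loss.backward()
                optimiser.step()
            return img.detach().clamp(0, 1)    # image showing what the unit "wants"
        ```

        For example, visualise_channel(10, 42) hints at the texture or pattern that unit fires on; if a "tank detector" unit turns out to want overcast skies, you have found the cheat.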

    2. Anonymous Coward
      Anonymous Coward

      Re: E.g. The infamous 'Compas' black box sentencing tool

      https://www.newscientist.com/article/2166207-discriminating-algorithms-5-times-ai-showed-prejudice/

      Those involved in this sort of field need to improve their workflow processes to ensure less-defective results.

  11. Fungus Bob
    Boffin

    the neural network classifies it as cat or not-cat

    This is very useful as one can classify everything in the universe as either cat or not-cat.

  12. Anonymous Coward
    Boffin

    What I reckon anyway!

    What we do with tiny children is sit them on our lap when we do things, e.g. play cards; they observe everything we do over a long period and imitate us, then we play against them and they learn again, then they develop their own ideas and beat us.

    Just shoving data at a learning device does not do it. A device playing itself thousands of times is not the same as a device running over a huge data set, as the self-play allows for adjustment and monitoring of outcomes. A neural network only runs through the alternatives in the data; it cannot learn beyond the data set, and in most situations must give weight to common occurrences rather than rare ones. This would mean most people would have to be experts for the AI or ML to gain the most from a huge data set, and that the data would have to contain equal amounts of all outcomes and internal processes. Explaining the learning would require statistical analysis of the huge data sets to ensure they were known and suitable in the first instance; this would then be taken into consideration at the endpoint and in the result.

  13. This post has been deleted by its author
