Beware! Medical AI systems are easy targets for fraud and error

Medical AI systems are particularly vulnerable to attacks and have been overlooked in security research, a new study suggests. In a paper [PDF] published on arXiv, researchers from Harvard University believe they have demonstrated the first examples of how medical systems can be manipulated. Sam Finlayson, lead author of the …

  1. a_yank_lurker

    True Meaning of AI

    AI = Always Idiotic. Any AI system is only as good as the underlying model used for the classification. Also, the models will be incomplete by their very nature: they are models, not reality.

  2. Pascal Monett Silver badge

    Not very reassuring

    So, images can be modified in ways the human eye cannot perceive, but the pseudo-AI does perceive them and reacts accordingly (a rough sketch of how little it takes is at the end of this comment).

    That means that an attack on the system could take a long time to discover if nobody is checking the actual images and everyone is just working from the post-analysis data.

    It is already frightening to imagine that in a medical environment, but since Big Data is digging deep into our societal fabric, the consequences of such actions could really become terrifying.

    And nobody will understand because everyone already trusts the machine.
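
    To make that concrete, here is a minimal, hypothetical sketch of the kind of imperceptible tweak being described. It uses a toy linear classifier and made-up numbers rather than the deep networks in the paper, so treat it as an illustration of the idea only:

      # Toy illustration (not from the paper): nudge every pixel slightly in the
      # direction that most increases a linear model's "malignant" score.
      import numpy as np

      rng = np.random.default_rng(0)

      image = rng.random((64, 64))               # hypothetical 64x64 greyscale scan
      weights = rng.normal(size=(64, 64))        # stand-in for trained model weights
      bias = -float(weights.ravel() @ image.ravel())   # chosen so the clean score is ~0

      def malignant_score(x):
          # Positive score means "malignant" in this toy model.
          return float(weights.ravel() @ x.ravel() + bias)

      # FGSM-style step: for a linear model, the gradient of the score with
      # respect to the input is just the weight map, so move each pixel by
      # +/- epsilon along its sign.
      epsilon = 0.004                            # far smaller than the eye can notice
      adversarial = np.clip(image + epsilon * np.sign(weights), 0.0, 1.0)

      print("clean score     :", malignant_score(image))        # ~0
      print("perturbed score :", malignant_score(adversarial))  # pushed well positive
      print("max pixel change:", np.abs(adversarial - image).max())

    The per-pixel change is tiny, but summed over thousands of pixels it is enough to flip the model's decision while the image looks unchanged to a human.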

  3. Brewster's Angle Grinder Silver badge

    I'm struggling with the attack vectors here. A nurse takes a photo or scan and feeds it into the AI. How do I get to attack it? If there's a motive to corrupt the diagnosis, then it's as easy to hack in and corrupt the result as it is to corrupt the photo. (Okay, corrupting the photo means that if they run the test again, they'll get the desired result, provided they don't take a new photo. So there's some mileage in that.)

    And I struggle with how much this can be exploited. Suppose the patient sends in their own photo of a suspected skin cancer. Why would they corrupt it? The patient's not going to want to turn it into a false positive. A hacker could delay necessary treatment (making death more likely) or force someone to have treatment who doesn't need it. But there has got to be an initial concern. Novichok on a door handle sounds easier.

    Obviously this is interesting and worthy of thinking about at this early stage. But it's about as practical as one of those fake pregnancy tests.

    1. o p

      new programming

      This is a new way of programming. The developers of sendmail did not bother about anything but relaying emails.

    2. ibmalone

      For me (as someone who works in this area) it's not so much about the possible attacks as about demonstrating how sensitive some of these methods are to the data set, and how over-trained they may be. Human operators make errors too, often because of things that wouldn't fool the machine, but they at least can make sense of different scanning situations and noise. If a small set of pixels/voxels unrelated to the pathology/physiology in your images is influencing the machine's prediction, it's an indicator that you have possibly just trained the machine to recognise some property of the images in your training set that's not related to the actual problem, despite presumably using cross validation (see the toy example at the end of this comment).

      Generative adversarial network approaches like this are beginning to be used in medical research now (don't know if any have been commercialised yet). This stuff is getting to the point where it can work alongside a person, e.g. you can maybe do better with one expert and one ML system reading scans than having two experts reading scans, which is the normal practice for some things.
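
      As a toy, entirely made-up example of that failure mode: if a site- or scanner-specific artefact happens to correlate with the labels in the training set, cross validation on that same set will look excellent even though the model has learned nothing about the pathology.

        # Hypothetical demonstration: the label leaks into one "corner pixel"
        # (think scanner watermark), so the model learns the artefact, not the disease.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        n, pixels = 400, 16 * 16

        X = rng.random((n, pixels))                 # "images" with no real signal
        y = rng.integers(0, 2, size=n)              # random labels
        X[:, 0] = 0.5 + 0.4 * y + 0.01 * rng.normal(size=n)   # leaked artefact

        model = LogisticRegression(max_iter=1000)
        print("cross-val accuracy:", cross_val_score(model, X, y, cv=5).mean())  # ~1.0

        # Images from a "scanner" without the artefact: back to coin-flipping.
        X_new = rng.random((200, pixels))
        y_new = rng.integers(0, 2, size=200)
        model.fit(X, y)
        print("new-site accuracy :", model.score(X_new, y_new))                  # ~0.5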

    3. Not also known as SC

      @Brewster's Angle Grinder

      "And I struggle with how much this can be exploited. "

      One unlikely, tin-foil-hat-wearing possibility is to create a false database that diagnoses higher instances of illness so that drug companies (for example) could push extremely expensive and profitable drugs where they're not needed. I doubt many medical people would want to contradict the AI's results, because a patient might then die and no one wants that on their conscience, so this fraud would be very hard to detect and prevent. (Even a tiny % over-diagnosis rate could prove very lucrative to the right people.)

    4. scriven_j

      Attack Vectors

      I imagine that the attack vector is that you could get yourself incorrectly diagnosed as having a terminal illness and then make a large insurance claim against a critical illness policy, or something similar?

    5. Whitter
      Devil

      Attack vector

      There are a good few medical systems where treatment = $$$.

      Will there exist medical practitioners who would falsely diagnose and then unnecessarily treat to generate personal profit (using a placebo treatment if they are not too sociopathic)?

      Probably.

      Though not many I'd hope.

      1. d2

        Re: Attack vector

        NBD...med fraud has been around for decade$ : https://www.naturalnews.com/051482_cancer_industry_overdiagnosis_false_positives.html

        Unbelievable scam of cancer industry blown wide open: $100 billion a year spent on toxic chemotherapy for many FAKE diagnoses... National Cancer Institute's shocking admission affects millions of patients...Tens of millions of people who have been diagnosed with "cancer" by crooked oncologists -- and scared into medically unjustified but extremely profitable chemotherapy treatments -- never had any sort of life-threatening condition to begin with, scientists have confirmed...

  4. Anonymous Coward
    FAIL

    It's an outrage in my opinion!

    Just look at what companies charge as soon as they're operating in the healthcare sector. Many companies want to be active in this sector simply because of the inflated bills they can write. And if you don't believe me, just take a look at the actual production costs for some specific equipment (think of glasses, contact lenses, or even a wheelchair) and then look at the attached price tag.

    Many people don't care because... "the insurance will pay for that", without realising that if this continues there may come a time when even that becomes a problem in itself.

    And here we are... You think you can expect quality and some expertise to be involved but the facts are somewhat different.

    1. Korev Silver badge

      Re: It's an outrage in my opinion!

      There's also the huge matter of regulatory approval, which is very complicated and expensive. A lot of companies shy away from this market because of it. Have you noticed how all the fitness trackers and smartphones carry huge disclaimers and/or don't claim to do medical things?

    2. ibmalone

      Re: It's an outrage in my opinion!

      My contact lenses are disposable and cost about £18 a month. Writing it down, that seems quite cheap for something I trust enough to put into my eyes; maybe I should look for some more expensive ones.

  5. Anonymous Coward
    Anonymous Coward

    Re: Academics

    Dear Cattyarna,

    It's Kohane, not Kahone.

  6. Trollslayer
    Mushroom

    SOAD

    Switch On And Duck - the basic approach to AI.

  7. EssentialTremor

    The tests do not show that AI is foolish but that it is brittle.

    1. allthecoolshortnamesweretaken

      The tests show that AI isn't, yet.

  8. Anonymous Coward
    Anonymous Coward

    show-me-the-money, aka, D.S.M.

    'More than half of the task force members who will oversee the next edition of the American Psychiatric Association’s most important diagnostic handbook have ties to the drug industry, reports a consumer watchdog group.

    The Web site for Integrity in Science, a project of the Center for Science in the Public Interest, highlights the link between the drug industry and the all-important psychiatric manual, called the Diagnostic and Statistical Manual of Mental Disorders. The handbook is the most-used guide for diagnosing mental disorders in the United States...'

    https://well.blogs.nytimes.com/2008/05/06/psychiatry-handbook-linked-to-drug-industry/
