AI is cool and all – but doctors and patients don't really need it

The American Medical Association does not believe that using AI in healthcare is essential or that it will benefit all patients, according to a new report. It published its first policy recommendations on ‘augmented intelligence’ in time for its Annual Meeting in June. “As technology continues to advance and evolve, we have a unique …

  1. Anonymous Coward
    Anonymous Coward

    I spy with my little eye - depression!

    We may not need black-box AI to guess my lucky number, but docs definitely need assistance. How about we go back to the decision-tree aids of decades ago? Doc says no, it's not condition X, you must be depressed. Years later condition Y becomes a fad and bingo, I'm cured. Brainwork by doc: none. We need more brains, but paper ones will do better than nothing.

  2. Chairman of the Bored

    I thought we were already there in some areas

    The line between 'machine vision' and 'AI' is blurry. But machine vision systems have been assisting radiologists for some time, though of course the human has the final word and responsibility. For me and my lesions, though, I'm profoundly grateful to have both!

    The US FDA approved an AI from Arterys earlier this year to make clinical decisions concerning hard tumor diagnoses. Apparently it's more sensitive and specific than your human radiologist now. Good on them. I'm a yuuuuge fan of early detection.

    1. Yet Another Anonymous coward Silver badge

      Re: I thought we were already there in some areas

      Not in America.

      In Europe your smear test will be scanned at a dozen visible and non-visible wavelengths, at sub-micron resolution, with optics that even follow the non-flatness of the glass slide.

      The images may be compared with a training set of millions of cases to check for abnormal cells.

      In the USA (and to an extent in Canada) your data can only be examined by a doctor using a Mk1 eyeball under a 30-year-old microscope that was last cleaned in the C20. He (it's always a he) will glance at a few areas at random for 20-30 seconds before making a report which is statistically little better than a coin toss.

      But he does drive a very nice car.

      (In Canada they can at least look at the image from the microscope on a monitor)

      1. Chairman of the Bored

        Re: I thought we were already there in some areas

        Not been my experience with cancer treatment in the US - I've done the "ride-alongs" with the radiologists using the software and machine vision tools for inspection along with their Mk 1s. Quite happy, especially seeing as how I'm still above ground.

        Where the 30-year-old Zeiss comes into play is during your surgery. As tissue is removed it's flash-frozen, sectioned, stained, and quickly inspected under the optics to make sure the margins are clean ... no time to screw around, because the patient is lying open to ambient air at that point and the whole team is waiting.

        Lately the trend seems to be outsourcing radiology reads to India. Your MRI, CT, whatever produces digital output, so the medical firms gladly charge you going US rates while some doc in Bangalore does the read at pennies on the dollar. Mixed feelings about this, beyond getting screwed financially. I'm sure the Indian docs are properly trained and certified, but as PHBs insist on ever more efficiency, how long before we get some random punters doing the reads?

        How did our software outsourcing work out?

        As an aside: cost of spine MRI w/ contrast in a 3T Siemens machine in US for me last year? ~$1600. Exact same protocol and machine in Bangalore? ~$150.

      2. Adrian Midgley 1

        Slides went out a little while ago

        The UK uses liquid-based cytology.

        I'm unsure of the rest of Europe.

  3. a_yank_lurker

    A wise person

    Once observed that it is not the 99% AI gets right that matters, but the 1% it gets wrong. What are the false positive and false negative rates? How big a tumor will it miss, not how small a tumor can it see. Also, the legal liability of a wrong diagnosis: is it the MD or the software vendor who is responsible? Right now it is the MD who is on the hook.
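
    To make the point concrete, here is a minimal sketch of why headline accuracy hides the rates that matter; all counts below are invented for illustration, not from any real system.

```python
# Why accuracy alone misleads: sensitivity/specificity from a confusion
# matrix. Every count here is made up for illustration.
tp, fn = 90, 10     # tumors caught vs. missed (false negatives)
tn, fp = 880, 20    # healthy scans cleared vs. wrongly flagged

accuracy = (tp + tn) / (tp + fn + tn + fp)   # 0.97 -- looks impressive
sensitivity = tp / (tp + fn)                 # 0.90 -- yet 1 tumor in 10 is missed
specificity = tn / (tn + fp)
false_negative_rate = fn / (tp + fn)         # the number that actually hurts

print(f"accuracy={accuracy:.2%}, sensitivity={sensitivity:.2%}, "
      f"specificity={specificity:.2%}, FNR={false_negative_rate:.2%}")
```

    A 97%-accurate system can still miss one tumor in ten, which is exactly the "1% it gets wrong" problem.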

  4. -tim

    AI expensive?

    Some AI is expensive but most is not. The skin tumor detector will get very cheap as its rollout scales. Back in the 80s AI was used to find traits; that was then reverse-engineered to determine just what it was looking for, and reduced to a simple algorithm. All that modern AI has over the stuff from the 80s is that we now have far more compute power to make the initial findings.
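
    The 80s workflow described above (train a model, then read the learned parameters back out as a plain rule) can be sketched with a toy perceptron; the feature, training data, and resulting threshold are all invented for illustration.

```python
# Toy version of "train it, then reverse-engineer the rule": a one-feature
# perceptron learns a weight and bias, which reduce to a simple threshold test.
# Training data is invented: (lesion diameter in mm, 1 = suspicious).
data = [(2, 0), (3, 0), (4, 0), (6, 1), (7, 1), (9, 1)]

w, b = 0, 0
for _ in range(100):                      # classic perceptron update rule
    for x, y in data:
        pred = 1 if w * x + b > 0 else 0
        w += (y - pred) * x
        b += (y - pred)

# The "reverse-engineered" algorithm is just a comparison against -b/w:
threshold = -b / w
print(f"learned rule: suspicious if diameter > {threshold} mm")
```

    Once the weights are inspected, the trained model collapses to "flag anything bigger than the threshold", which needs no AI at all to run.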

  5. Anonymous Coward
    Anonymous Coward

    Artificial intelligence is useless...

    ...without human intelligence.

    Human intelligence: it's a scarce commodity these days.

  6. Pete 2 Silver badge

    AI is a lever

    There are two parts to this. The first is that AI allows one person to do more "stuff", and higher quality stuff, in the same amount of time. So for a GP, that would mean a visitor (they don't become patients unless there is something that needs treating) to a GP's office will have a dialog with a machine - even if it does have a human avatar fronting it - either on their phone beforehand, or in a booth at the building.

    The GP will then call in that person and tell them what happens next. Alternatively, the GP takes over the smartphone dialog and recommends what further action is needed - including the person reporting somewhere for some tests (also performed by AI-augmented systems).

    But the major boost to healthcare is when AIs are let loose on bulk health data. Not only will that build the foundation for true evidence-based medicine, but it will revolutionise mental health: the diagnosis of conditions and their treatment. With luck, it will be so powerful that it will drag psychology and psychiatry (whichever one is which) into the beginning of the nineteenth century - the start of being a true science: comparable to when chemistry got the Periodic Table and physics came to terms with electricity.

    1. Prst. V.Jeltz Silver badge

      Re: AI is a lever

      " The first is that AI allows one person to do more "stuff" and to do higher quality stuff in the same amount of time"

      That's not AI, that's a person using their skills, experience, tools, and any other resources available to improve their productivity. If they do that by getting Alexa to dial the phone for them, they are using a voice recognition tool - not AI.

  7. Prst. V.Jeltz Silver badge

    What I'm hearing is:

    " American Medical Association correctly identifies that AI is the stuff of science fiction and that it would be madness to implement some half assed snake oil smoke and mirrors bullshit , and trust patients lives with it"

  8. Anonymous Coward
    Anonymous Coward

    Responsibility

    From the article - If it makes incorrect diagnoses, who's to blame?

    This is what bugs me about developments so far. The question shouldn't need to be asked.

    AI shouldn't be making diagnoses at this stage. When AI has verifiable capabilities equivalent to those of a fully qualified doctor, then maybe it can be trusted.

    In the meantime, AI can be useful as an aid to the clinician in making a diagnosis, but the clinician must take full responsibility for any resulting decisions.

    1. Prst. V.Jeltz Silver badge

      Re: Responsibility

      "AI shouldn't be making diagnoses at this stage. When AI has verifiable capabilities equivalent to those of a fully qualified doctor, then maybe it can be trusted."

      Exactly! Right now there's no evidence of AI having any more mental capabilities than ....

      Looky - AI doesn't exist!

      It's a little early to be putting AI in the "left school, get a job" stage when it is not even at the "I wear a nappy cos I'm not toilet trained" stage yet.

    2. Anonymous Coward
      Anonymous Coward

      Re: Responsibility

      Such questions should not need to be asked, because responsibility is clear. If a doctor or hospital makes an error, they should be held responsible and made to pay. They can then turn to their contractors and make them pay as per the contract.

      It is incompetence that has management saying they do not know who is responsible for what. They should not sign any contract if they do not understand it, even more so if they are not clear on which party is liable for the many seen and unseen situations that will or may occur.

      A proper contract clearly states liability; of course, shady or incompetent managers avoid those contracts, as they tend to hold the incompetent to account. Apparently medical associations avoid them as well.

      1. Adrian Midgley 1

        Re: Responsibility

        More complex than that with registered medical professionals.

  9. Anonymous Coward
    Anonymous Coward

    I'd be happy if the medical industry caught up with the 20th century.

    Canada is very far away from having to be so concerned about AI in the medical field. Our systems have yet to adopt basic ideas that industry adopted generations ago.

    The ideas of science, which have proven so effective in other parts of society, are rejected by much of the medical industry. Even measurement, or collecting basic data on performance, is resisted not only by those working in healthcare but by healthcare itself.

    A doctor visit in Canada has no follow-up; no data is collected to measure performance or effectiveness. Basic questions, such as the success rate of a procedure at a facility, are either ignored, avoided, or answered with a guess or with stats from other countries, usually the USA.

    A computer program properly following even a general algorithm would be a huge improvement over the diagnoses and treatments most Canadians have today. Of course that is to be expected in a system that is not accountable to patients, who have no other choice if they want treatment in Canada. And that helps with the liability questions: little accountability in our healthcare means little risk of being sued for any real money in Canada.

    As always keep in mind that we cannot see the comments that have been removed or failed pre-moderation at this site. This is not an open conversation on the topic.

    1. Ken Moorhouse Silver badge

      Re: This is not an open conversation on the topic.

      Deleted comments are always shown as "by Author" or "by Moderator" in the Thread, enabling us to gauge quantity and controversy. I rarely see any Moderator deletes in this forum which points to a sensible and intelligent readership/moderation policy (yes, really).

      The only way commentards can play games is to make use of their ten minutes' editing time to change their comment from (for example) one hating MS to one hating Linux, or vice versa, and watch the ensuing confusion in the subsequent comments - assuming they are posting in a period of high posting activity.

  10. Tikimon
    Facepalm

    This is only surprising to the tech industry

    Our benighted tech industry specializes in creating widgets and software that do... well, SHINY! Hey, I made a Thing! Don't really know what it's good for, doesn't replace anything else, but let's convince everyone they need to buy one! Problem is, we don't buy it. Literally. Only the tech press is surprised when that happens.

    Seriously, look at "wearables" for a prime example. All we've heard for ten years is how wearables are going to be in everything we buy and will transform our lives! Except they stubbornly have not. Swap out the names - AI, wearables, self-driving - and you find the same breathless insistence that it's the Future, Now! Oh, and pay no attention to the failed products behind the curtain.

    1. Robert Grant

      Re: This is only surprising to the tech industry

      The tech industry also creates many, many things that are useful. Not sure what point you thought you made.

  11. Anonymous Coward
    Anonymous Coward

    "AI is cool..." but not always competent.

    BBC 'More or Less' podcast, 10 June 2018.

    About 9 minutes in, "Artificial Intelligence....Tim Harford speaks to author Meredith Broussard about ‘techno-chauvinism’."

    An example is given in which a neural net trained to identify "good selfies" turns out to accidentally embody subtle biases, arguably including racism. There's a punch line at the very end, concerning where this apparently inept AI researcher landed.

  12. shawnfromnh

    I know

    If it's so good, then have it tried out first on politicians and government workers, including MI6; give it a full tryout, and then roll it out to the general public. If there are no privacy concerns, then the legislators will find no problem implementing it this way until they are able to scale up.
