AI clinician trained to save humans from sepsis – and, er, let's just say you should stick to your human doctor

Experts hope an artificially intelligent software system will help doctors tackle the deadly menace of sepsis in humans. The technology uses reinforcement learning, an area of machine learning more commonly used for teaching bots to play games such as Go, Dota 2 and poker. In this case, however, instead of games, a software …

  1. James 51
    FAIL

    The use of computer decision support systems to better guide treatments and improve outcomes is a much needed approach

    The unspoken bias and assumptions that go into statements like these make me laugh. If the AI can determine an optimal course, why not write a paper on that (after clinical trials) and let everyone know. Then you don't need the AI for the improved outcome, just someone who is already an expert and keeping up to date with advancements in their field to keep doing just that.

    As for the bit at the end that even a slight improvement would save a lot of lives worldwide, there are a lot of people who would not be able to afford what would undoubtedly be a very expensive tool, so the potential impact is vastly overstated. That is of course assuming that it ever gets to the point where it is actually useful.

    1. Anonymous Coward
      Anonymous Coward

      As for the bit at the end that even a slight improvement would save a lot of lives worldwide, there are a lot of people who would not be able to afford what would undoubtedly be a very expensive tool, so the potential impact is vastly overstated. That is of course assuming that it ever gets to the point where it is actually useful.

      I would agree with you that such a statement doesn't quite pass the smell test: it seeks to explain away what is really just noise in the statistics. That doesn't mean the approach cannot be of value, but if so, they're not exactly near any practical use yet - more work required.

      However, it may just be an area where AIs are not really the best tool, so I would not discount all use of AI in medicine. I am presently looking at an application which is so seriously *not* just statistical noise that I'm excited for it to get funding. That said, the AI's job in this case is more the correlation of information; detection happens much earlier in the process. I think that's where AIs can add real value: as PART of a tool chain, not as the sole active component.

      1. James 51
        Go

        More power to you then and good luck. Just remember to vet the marketing and sales pitches.

    2. Waseem Alkurdi

      @James 51

      what would undoubtably be a very expensive tool

      Not really. It's a computer with "arms" extending into either a patient database or into medical equipment.

      At the very least, it's a program running on, hell, a Raspberry Pi for a front end, and a server doing the AI heavy lifting.

      1. James 51

        I wasn't talking about how much it would cost to develop or the price of the hardware it would run on. Just what the company that sells it would charge for it.

        1. Waseem Alkurdi
          Thumb Up

          Oh THAT. Now that's going to be expensive, like everything else being marketed as having "AI" or "IoT".

    3. katrinab Silver badge

      Because it is not actually AI.

      What the computer is doing is looking at various clinical measurements together with the dose given, and plotting the outcome in each case; and from that, working out what the best dose is for each given set of measurements.

      Potentially very useful, if the computer is able to record all the same relevant factors that a doctor considers, and it might be able to identify other factors that the doctors weren't aware of; but it isn't in any way intelligent.
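
      As a very rough sketch of the idea being described - the field names and toy data below are invented for illustration, nothing is taken from the actual study:

      from collections import defaultdict

      # Toy records: (discretised measurements, dose given, survived?)
      records = [
          (("bp_low", "lactate_high"), "dose_high", True),
          (("bp_low", "lactate_high"), "dose_low", False),
          (("bp_normal", "lactate_high"), "dose_low", True),
      ]

      outcomes = defaultdict(lambda: [0, 0])   # (state, dose) -> [survived, seen]
      for state, dose, survived in records:
          outcomes[(state, dose)][0] += int(survived)
          outcomes[(state, dose)][1] += 1

      def recommend(state):
          # Pick the dose with the best observed survival rate for these measurements
          rates = {d: s / n for (st, d), (s, n) in outcomes.items() if st == state}
          return max(rates, key=rates.get) if rates else None

      print(recommend(("bp_low", "lactate_high")))   # -> 'dose_high' on this toy data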

      1. JetSetJim
        Boffin

        Training set

        > Potentially very useful, if the computer is able to record all the same relevant factors that a doctor considers, and it might be able to identify other factors that the doctors weren't aware of; but it isn't in any way intelligent.

        Well, the article says there are 48 variables they think influence the result, and they have a data set of 17k samples. This data set needs to get partitioned into training and validation sets (80:20 is usually good), so you've actually got 13,600 training points. No description as to the coverage of the 48-dimensional space (although perhaps some of these dimensions are correlated, which can lead to a feature reduction), but I'd contend that this is not enough training data to get a decent result anyway.

        In other words, more study/funding is needed. The best conclusion a research paper can reach :)
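
        For illustration, the 80:20 split being described works out like this - the 17k record count is the only number taken from the article, the records themselves are just stand-ins:

        import random

        records = list(range(17_000))        # stand-ins for the ~17k patient records
        random.seed(0)                       # reproducible shuffle
        random.shuffle(records)

        split = int(0.8 * len(records))      # 80:20 split -> 13,600 training points
        train, validation = records[:split], records[split:]
        print(len(train), len(validation))   # 13600 3400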

        1. katrinab Silver badge

          Re: Training set

          Even if each of the 48 variables has only two possible values, that is about 281tn possible combinations, and there is no way you could cover them all.
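
          The arithmetic behind that figure, for anyone who wants to check it:

          combinations = 2 ** 48
          print(combinations)                               # 281474976710656
          print(round(combinations / 1_000_000_000_000))    # ~281 trillion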

  2. Steve Button Silver badge

    The Wright Brothers

    I guess it's good that The Wright Brothers didn't listen to people like you, and decided to plough on despite the massive rate of failure. Of course you wouldn't rely on these machines NOW, but surely this (AI) is the future of medicine? (and transport, and warehousing, and retail, and ...)

    Or do I just read too much sci-fi and can't wait for a time when AI can solve many of our pesky squishy human problems?

    1. jmch Silver badge

      Re: The Wright Brothers

      "surely this (AI) is the future of medicine? "

      The AI is only as good as the doctors whose prescriptions it analyses / mimics. So while it's a useful tool, it's no better than an intern or junior doctor who can help with the mass 'rank and file' cases but will be useless in edge cases where more specialised knowledge is needed. So (at least in its current incarnation) it's more like a nurse's assistant.

      I think it will be the future of medicine in poorer countries and remote areas that can't afford any doctors, because any local doctors get poached / brain-drained to richer areas/countries.

      1. Waseem Alkurdi

        Re: The Wright Brothers

        The AI is only as good as the doctors whose prescriptions it analyses / mimics.

        As I see it, it's not mimicking a real doctor. Rather, it's, well, studying medicine; it's collecting data about variables and matching diagnoses.

        (It's worse than mimicking a doctor).

        So while it's a useful tool it's no better than an intern or junior doctor who can help with the mass 'rank and file' cases

        But these can't replace interns and junior doctors in 'rank and file' roles. Or else, how are they (interns and junior doctors) going to learn? More textbook-crunching and less hands-on? Medical students are getting less and less training because of the number of students per year ... Add AI to the picture and it becomes very gloomy.

        I hope that I just make it through med school before the AI gets on the case ...

      2. David 164

        Re: The Wright Brothers

        Unless you add in a second source of feedback, namely the outcome of its own prescriptions. Given how there's a shortage of trained personnel in most countries in the world, it's likely AI will be used to relieve the workloads of doctors and patients in every country on the planet.

    2. Waseem Alkurdi

      Re: The Wright Brothers

      I guess it's good that The Wright Brothers didn't listen to people like you, and decided to plough on despite the massive rate of failure.

      Big difference. If AI failed in the hospital, many more people could be at risk, and the negative effects are not always immediately obvious.

      Imagine an AI clinician that delivered the wrong amount of a drug whose therapeutic index (TI) is low (i.e. only a teeny tiny bit too much and it turns into poison). And dose calculation has many, many factors differing between one person and another (i.e. there's almost no room for error).

      On the other hand, an aircraft crash is immediately known of and the outcomes are almost always immediately determined.

    3. phuzz Silver badge

      Re: The Wright Brothers

      "it's good that The Wright Brothers didn't listen to people like you, and decided to plough on despite the massive rate of failure"

      It wasn't so much that they kept trying terrible designs despite repeated failures (eg elevators work better on the rear of an aircraft); the main problem was that they patented all heavier-than-air flying machines, thus preventing anyone else from "ploughing on".

      You should have chosen better subjects for your metaphor.

  3. Gene Cash Silver badge

    Hm. I think doctors do well at the actual treatment of a known problem.

    I think where a computer can help is the diagnosis... I can't see how doctors keep track of hundreds of thousands of diseases and their symptoms.

    A nudge of "hey doc, it might not be lupus, there's a good chance it might be polymyositis" would be really valuable. Or... this guy has bad sleep apnea, that's why he's fat, tired, and his heart is in bad condition.

    1. Waseem Alkurdi

      But then, we would be introducing aviation's problem into medicine: over-reliance on the machine, especially with junior doctors.

      "Hey doc, it might not be lupus, but polymyositis"? But what if it *was* lupus? What if the computer missed a parameter?

      Call me a cynic, but until AI takes over the cockpit, it can't take over the clinic.

      1. aojari

        "The use of computer decision support systems to better guide treatments and improve outcomes is a much needed approach"

        The aim of this system is not to replace doctors or diagnose sepsis but to help them decide the best course of treatment. If fed enough of the right data it can surely help in this regard.

        There are systems that claim to be as good as docs at diagnosing some things though... https://www.nature.com/articles/nature21056.epdf?author_access_token=8oxIcYWf5UNrNpHsUHd2StRgN0jAjWel9jnR3ZoTv0NXpMHRAJy8Qn10ys2O4tuPakXos4UhQAFZ750CsBNMMsISFHIKinKDMKjShCpHIlYPYUHhNzkn6pSnOCt0Ftf6

      2. JEDIDIAH
        Linux

        Say hello to the future, it's already here.

        > Call me a cynic, but until AI takes over the cockpit, it can't take over the clinic.

        If you've ever flown through Heathrow then you have already "been there, and done that".

        They don't even let humans land at Heathrow anymore.

        I used to know a commercial pilot that flew that route.

        1. Anonymous Coward
          Anonymous Coward

          Re: Say hello to the future, it's already here.

          The automatic landing system turned out to be so good that planes landed on exactly the same spot and caused premature failure of the runway lights. The solution was to add a bit of random error to even things out.
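
          Purely illustrative of the fix being described - the numbers below are invented, not anything Heathrow actually uses:

          import random

          NOMINAL_TOUCHDOWN_M = 300   # hypothetical aim point past the threshold, in metres
          JITTER_M = 30               # hypothetical spread added to stop wear concentrating on one spot

          def touchdown_aim_point():
              return NOMINAL_TOUCHDOWN_M + random.uniform(-JITTER_M, JITTER_M)

          print(round(touchdown_aim_point(), 1))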

    2. Warm Braw

      where a computer can help is the diagnosis

      Having had to take relatives with sepsis to hospital on several occasions, an important factor in getting a correct diagnosis is to ensure it's discussed early in the proceedings. A&E doctors are under a lot of pressure and are looking for diagnoses to eliminate rather than additional possibilities. Symptoms of sepsis are not always very clear, and if they do suspect sepsis they're likely to have to admit someone, potentially denying a bed to someone else. Which is why there are posters all over emergency departments underlining the danger of sepsis, urging staff to give patients the benefit of the doubt - it's very tempting to send home someone who does not at that precise moment appear to be seriously ill.

      Even if a computer could come up with a reasonably accurate diagnosis (and sepsis is probably a poor candidate) it's still not in a position to make a bed available. And any algorithm is going to have to err on the side of caution: the more caution, the more beds required.

      1. Hollerithevo

        Yes, spotting it is the first step

        @Warm Braw, I agree. My partner is alive today only because an A&E admitting nurse thought something wasn't quite right with someone who was coming in with an inflamed knee and flu. It turned out the knee was a mass of sepsis and 24 days and expensive drugs later she came home with her life and both legs, but it was a near-run thing. But it looked like someone with flu and a bad knee.

        1. Anonymous Coward
          Anonymous Coward

          Re: Yes, spotting it is the first step

          If you are lucky you get the good medical professional. Like any workplace, some are very good, some aren't and a few are hopeless. Sometimes even the great ones are having a bad day or are tired and make mistakes. Having an AI assistant (or, in this case, an expert system) to give them a helping hand seems like a good idea.

      2. tiggity Silver badge

        Diagnosis (early) is vital - I knew people who died due to sepsis being missed at an early stage, & when it was diagnosed it was fatally late. As has been said, a lot of sepsis symptoms are similar to other conditions, so if someone has e.g. coronary problems a clinician may not recognise they also have sepsis, and just assume it's down to the recent heart issues.

        Good news is that I was reading about an improved bacteria testing tool which can identify most sepsis bacteria in 3 hours (compared to the days it currently takes) - though in the cash-strapped NHS, who knows if (when that tool becomes commercially available) it will be routinely used in cases where sepsis *may* be present, due to cost.

        UK sepsis stats are really poor compared to many other European countries; probably not a coincidence that sepsis is easily missed in quick diagnosis decisions (which happen a lot in an understaffed, underfunded, overworked NHS*).

        * caveat - obviously a lot of "managers" are not underpaid or overworked, I'm talking about those at the "sharp end" of patient care

  4. Anonymous Coward
    Anonymous Coward

    A doctor can see how much hydration the patient needs and make an informed judgement, AI can't. Keep it up though, we must find a use for it.

    1. Korev Silver badge

      A doctor can see how much hydration the patient needs and make an informed judgement, AI can't.

      I'd have thought that it'd be an ideal use case: you just take a few standard measurements (blood pressure, pulse, electrolytes etc.), "IO", make sure there are no contraindications (eg kidney problems) and go. It seems to me to be a much easier problem than the one they're trying to solve in this article.
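
      A toy version of that sort of check might look like the sketch below - the thresholds and rules are invented for illustration only, emphatically not clinical guidance:

      def suggest_fluids(systolic_bp, pulse, kidney_problems):
          # Contraindication check first: hand straight back to a human
          if kidney_problems:
              return "refer to clinician"
          if systolic_bp < 90 or pulse > 120:
              return "aggressive fluid resuscitation"
          if systolic_bp < 100:
              return "moderate fluids"
          return "maintenance fluids only"

      print(suggest_fluids(systolic_bp=85, pulse=130, kidney_problems=False))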

  5. Alister

    The training data was fed into the neural network, so that it could spot patterns and make recommendations for new patients based on their records.

    The problem with this approach is that human illnesses, accidents etc may not have consistent patterns; they can be completely random. Two patients with the same gross symptoms and history may have completely different outcomes. That's why strict protocol-based medicine (the patient has this, so we will do that) has proven to be ineffectual compared to more traditional methods.

    Whilst not quite the same, I have first-hand experience of this misguided attempt to predict future outcomes based on historical data. In the mid-nineties I was working as a paramedic with a UK ambulance service.

    A local university research team came up with the idea that they could improve ambulance response times by predicting where emergency calls would occur, based on supposed patterns in historical data of previous years. The idea was that ambulances would be sent to loiter in the predicted area, so that they would be closer when a call came in.

    Instead of trialling this with any simulation, the ambulance service decided to do it live, so they started using the software to position the ambulances around the county.

    It soon became apparent that except for a few isolated occasions where it guessed correctly, the overall impact was that the ambulances were nearly always in the wrong place, and that response times were worsening, not improving.

    The trial was abandoned early, despite the university's insistence that their idea would work if it was given more time.

    If you take a step back, you realise that the idea that previous years' data on emergency calls would have any bearing on future occurrences is dubious at best, but the university research group were convinced that it was a reasonable assumption.

    In the same way, trying to predict how a patient will react to treatment, based on patterns of historical data of previous patients, is a flawed idea.

    1. katrinab Silver badge

      I suppose you could predict it to an extent.

      Eg, emergencies are more likely to take place in places where there are people, which is why ambulance stations tend to be in city centres rather than in the middle of the countryside.

      Certain groups of people are more at risk of having medical emergencies than others - for example the elderly, and young men - so you might want to position more ambulances in places where they congregate.

      But these are about probabilities. Whether an actual specific person has an emergency situation is a random occurrence.

    2. Waseem Alkurdi

      The problem with this approach is that human illnesses, accidents etc may not have consistent patterns, they can be completely random.

      Tell this to a pathology lecturer and he's going to have a seizure on the spot.

      (And, BTW, does the brain have a /dev/urandom? ^_^)

      1. Alister

        If the pathology lecturer has any real-life experience of medicine, he will acknowledge that it can be true.

        You forgot to quote my next sentence:

        Two patients with the same gross symptoms and history may have completely different outcomes.

        This is a fact of medicine.

        1. Waseem Alkurdi

          I agree with you on this sentence. It is, as you said, a fact of medicine.

          But your first sentence, that human illnesses, accidents, etc can be completely random is where I disagree.

          It might not be a 2+2=4, but it's definitely not random. For each given disease, there are signs, symptoms, and other markers helping you to make the diagnosis. It's nowhere close to random.

          (After all, medicine is a science).

          Forgive me if this wasn't what you intended, but that's what I understood.

          1. Alister

            For each given disease, there are signs, symptoms, and other markers helping you to make the diagnosis.

            Yes, this is true to a certain extent, but even then there can be marked differences between patients.

            Although accepted knowledge is that a patient who has a heart attack will present with chest pain, be pale and sweaty, and have heart arrhythmias or visible changes on an ECG, there are many documented cases where this is not the case, and the patient may be unaware that they have had a heart attack at all.

            In similar fashion, a patient suffering with sepsis may easily be misdiagnosed as having flu, or some other illness, if they do not present with the classic symptoms.

            The classic signs and symptoms have to be a starting point, obviously, but it is dangerous to rely too heavily, or focus too narrowly on what is expected, both in diagnosis and treatment of illnesses.

            With regard to accidents, they are definitely random occurrences, and trying to predict future instances from past data is foolish.

          2. JEDIDIAH
            Devil

            It really is that unpredictable.

            > But your first sentence, that human illnesses, accidents, etc can be completely random is where I disagree.

            Then you likely don't have enough experience.

            The human body is a really complex system. We really don't understand it nearly as well as we think we do. We certainly don't understand it as well as certain LAYMEN think we do.

            This quickly becomes obvious if you talk to enough patients with the same condition.

            You may quickly lose your belief in "scientific certainty".

      2. Anonymous Coward
        Anonymous Coward

        (And, BTW, does the brain have a /dev/urandom? ^_^)

        It does; for most people it is mostly offline until activated by an absence of conscious thought, i.e. whilst being half asleep or inebriated.

        :)

  6. Cacot

    There is something along those lines (well, not to prescribe, but to identify a sepsis case):

    https://m.tecmundo.com.br/medicina/112872-dor-pai-revolucionar-medicina-mundo.htm

    There is a lot of marketing, but this tool seems valuable. I think it is something like Lauranetworks.

  7. Teiwaz

    Is treatment the sharp point, though?

    From some of the stories I've read about those struck down by sepsis, it's the initial diagnosis that is often the crucial point.

    Recognising it early enough in a patient who might be one of hundreds coming into A&E or a busy doctor's surgery.

    And not sent home with an aspirin and told to have an early night.

  8. Doctor Syntax Silver badge

    "avoiding targeting short-term resuscitation goals and instead following trajectories toward longer-term survival.”

    I'm not clear on what this means but ISTM that if resuscitation is needed and you can't meet that goal in the short term you're going to fail to follow a trajectory toward longer-term survival.

  9. MarkB
    Facepalm

    I'm not sure that word means what you think it does.

    "vasopressors, a medication that reduces blood pressure, "

    100% wrong - Vasopressors are a group of medicines that contract (tighten) blood vessels and raise blood pressure.

    1. Waseem Alkurdi
      Thumb Up

      Re: I'm not sure that word means what you think it does.

      Exactly ... vasopressors induce vasoconstriction.

    2. diodesign (Written by Reg staff) Silver badge

      Re: MarkB

      Thanks – it's fixed. Don't forget to email corrections@theregister.com if you spot anything wrong. It means we can patch up mistakes immediately, rather than hours later. We don't have time to read every article comment.

      C.

  10. Doctor Syntax Silver badge

    “The use of computer decision support systems to better guide treatments and improve outcomes is a much needed approach,”

    The crucial word here is "support". They're talking about a decision support system, not a decision making system. If I were looking for a system to support my decision making I'd want it to be able to explain its recommendations to me. What we frequently hear about AI is that the system isn't able to produce anything that looks like reasoning. Is this one different?

    1. Waseem Alkurdi

      They're talking about a decision support system, not a decision making system. If I were looking for a system to support my decision making I'd want it to be able to explain its recommendations to me. What we frequently hear about AI is that the system isn't able to produce anything that looks like reasoning. Is this one different?

      And what if people offload their decision making to a decision assistant system? That's when nasty stuff happens.

      And I tell you, a tired doctor AND a doctor w/o experience are quite likely to do that, especially if the doctor in question doesn't particularly understand the stuff.

      And nope, from the article there doesn't seem to be anything like "decision justification" to me.

  11. NotTelling

    Maybe the computer was right

    How do they know that the doses suggested by the computer were wrong unless they actually administered them? For all we know, doctors are operating under historically biased false information. Not that I would volunteer to find out, mind you!

    1. Korev Silver badge
      Boffin

      Re: Maybe the computer was right

      One of the problems that drug firms* have when bringing out innovative medicines** is that no doctor wants to take his/her patient off the "gold standard" onto a therapy unknown to them if the patient is generally stable or if the patient is at an immediate risk of dying. Obviously terminal cancer patients are an exception.

      *I work for one

      **Innovative is an overused word these days, but here I mean a "first in class" drug with a new "Method of Action"

      1. Korev Silver badge
        Childcatcher

        Re: Maybe the computer was right

        I meant "Mechanism of Action"...

  12. RosieRedfield

    "...the chances of a patient’s survival was highest where the model was most accurate in its recommendations compared to a real expert."

    Reworded: The real patients survived better when their real doctor had done what the model would have recommended.

    Corollary: When the real doctor had done something other than what the model would have recommended, the patients did worse.

    Doesn't the corollary mean that the model's recommendations were BETTER than the physicians' in these cases?
