Google DeepMind's use of 1.6m Brits' medical records to test app was 'legally inappropriate'

Google's use of Brits' real medical records to test a software algorithm that diagnoses kidney disease was legally "inappropriate," says Dame Fiona Caldicott, the National Data Guardian at the UK's Department of Health. In April 2016 it was revealed the web giant had signed a deal with the Royal Free Hospital in London to …

  1. ratfox

    It is my view, and that of my panel, that the purpose for the transfer of 1.6 million identifiable patient records to Google DeepMind was for the testing of the Streams application, and not for the provision of direct care to patients

    ...I may have misunderstood, but I thought the data was used to train an AI model, not test an existing application? Without the data, there would have been no application.

    1. diodesign (Written by Reg staff) Silver badge

      Re: ratfox

      Edit: Story updated - it's not an AI system. It's a fixed algorithm. We've tidied up the piece.

      C.

      1. TRT Silver badge

        Re: ratfox

        Surely, though, you need some sort of feedback into the AI in order to train it? And if the only way to test the quality of the AI's predictive ability is to conduct further tests on those patients identified by the AI as at risk but where they were not picked up by the medics, then you'll only end up with an AI as good as the medics, not better than them.

        1. David 164

          Re: ratfox

          Actually, you end up with an AI as good as all the medics who use the system combined.

          Plus, on top of this, DeepMind is no doubt going back over the ones Streams missed to work out why it missed those patients, and refining and improving the AI. So over time Streams has the potential to become many times better than a medic. Plus it won't get tired, it won't be rushed and it won't just have an off day like humans do.

      2. Anonymous Coward
        Anonymous Coward

        Re: ratfox

        "... it's not an AI system. It's a fixed algorithm. ..."

        I like how that doesn't even matter - either way, it's personal medical information going to MegaCorps.

        To cast doubt on my own need for tinfoil: I'm still not clear on who requested the computation. It sounds like typical Google on one hand, but on the other it sounds like it was somehow requested by doctors (with no ethical training).

    2. Anonymous Coward
      Anonymous Coward

      adverts

      I'm guessing the production version pops up adverts on the screen like most other Slurp products?

      "Dave, I can see that you have a terminal illness - here are some local undertakers..."

  2. Anonymous Coward
    Anonymous Coward

    Paper Tigers

    "There were legitimate ways for DeepMind to develop the app they wanted to sell. Instead they broke the law, and then lied to the public about it. Such gross disregard of medical ethics by commercial interests – whose vision of 'patient care' reaches little further than their business plan – must never be repeated."

    Three words: Prosecution, Struck Off

    Colour me cynical, but I'll bet they (ICO, BMA, NHS and whatever other bodies are concerned) don't use them... I foresee, however, an outbreak of the usual 'mistakes made, lessons to be learned'.

    1. Adam 52 Silver badge

      Re: Paper Tigers

      Couldn't agree more. Although it would be the GMC that fails to do the striking off, not the BMA.

    2. David 164

      Re: Paper Tigers

      The NDG has no legal powers, and Dame Fiona Caldicott has no legal training from what I can tell. So actually this is far from the end of the matter.

    3. PNGuinn

      2 more words

      Performing Rights.

      As a patient I own performing rights in any data that has been created as a result of any medical examination or procedure I have taken part in.

      So whack the basturds with a whacking great fee for every patient abused.

      That'll learn 'em.

      1. fuzzie

        Re: 2 more words

        I like it, but I'd rather treat it like a class action suit. Sue them, cover your legal fees with the award and then hand the rest to the 1.6 million people whose records were abused. A combination of showing some regulatory backbone and buying public support, using the offender's money.

  3. Anonymous Coward
    Anonymous Coward

    Google

    will, no doubt, be quaking in their boots.

    #sarcasm.

    Again, it proves if you have enough financial resources you can make any problem disappear.

    Anyone who thinks Google will delete that treasure trove of private, confidential data is deluded.

    1. David 164

      Re: Google

      Or, better than financial resources, they could just show their app actually works and is saving lives - showing that Google and the Royal Free London's approach to introducing AI actually works: saving lives and saving doctors time.

      1. RegGuy1 Silver badge

        AI actually works, saving lives, saving doctors time...

        ... saving insurers money.

        [Ah Mr Jones. Thanks for all your premiums over the years, but we won't be paying your claim as Google gave us your name before you came in; we've had time to think up a few excuses...]

    2. Daggerchild Silver badge

      Re: Google

      And if you think that data is a financial treasure trove, you're equally deluded. Anon-mapped retina scans? Would you like to give me a viable business case?

      Google stand accused of .. wait.. the >Trust< stand accused of using Deepmind's tool *too quickly* because it .. worked?

      I do not like the way reality is being defined by the glorious and righteous flames of quasi-religious hatred..

      EDIT: Aha, that's why my post was limbo'd - I'm not accurate. It looks like Deepmind had the correct data permissions *if* it was being used to help treat patients, but although it was being used to help treat patients, it was *meant* to be being tested. And testing requires more strenuous data approval than treating, because of course it does. *wibble*

      1. Pompous Git Silver badge

        Re: Google

        "I do not like the way reality is being defined by the glorious and righteous flames of quasi-religious hatred.."
        Have an upvote daggerchild...

    3. This post has been deleted by its author

  4. Korev Silver badge

    Google's use of Brits' medical records to train and test its AI was legally "inappropriate," says Dame Fiona Caldicott

    What does this actually mean? Did the hospital or Google actually break the law?

    1. Tom 7

      RE: Did the hospital or Google

      Well, the hospital certainly did, for not protecting the patients' data and identities.

      I'd be happy for my non-identifiable data to be used in an experiment of this form so long as the full results are returned to the NHS.

      1. Anonymous Coward
        Anonymous Coward

        Re: RE: Did the hospital or Google

        "I'd be happy for my non-identifiable data"

        Yep, but don't trust anyone to really make data non-identifiable in this case. As soon as you start combining a few data sets from different sources, patterns emerge and people become identifiable. :-(
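
        Roughly what that worry looks like in entirely made-up code: two releases that each look harmless on their own, joined on quasi-identifiers like date of birth, sex and partial postcode. Every file, column and value below is invented for illustration.

          # Toy sketch of a linkage attack: neither table contains a name,
          # yet joining on quasi-identifiers re-identifies the overlap.
          import pandas as pd

          # "Anonymised" hospital extract: no name or NHS number, but it keeps
          # date of birth, sex and a partial postcode for "research utility".
          hospital = pd.DataFrame({
              "dob": ["1954-03-02", "1988-11-17"],
              "sex": ["F", "M"],
              "postcode_prefix": ["NW3", "E17"],
              "diagnosis": ["acute kidney injury", "fracture"],
          })

          # A separate dataset that *does* carry identities
          # (think an electoral-roll-style extract).
          public = pd.DataFrame({
              "name": ["A. Jones", "B. Smith"],
              "dob": ["1954-03-02", "1988-11-17"],
              "sex": ["F", "M"],
              "postcode_prefix": ["NW3", "E17"],
          })

          # The join puts a name back against each "anonymous" diagnosis.
          reidentified = hospital.merge(public, on=["dob", "sex", "postcode_prefix"])
          print(reidentified[["name", "diagnosis"]])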

      2. sebt
        Stop

        Re: RE: Did the hospital or Google

        "I'd be happy for my non-identifiable data to be used in an experiment of this form so long as the full results are returned to the NHS."

        I'd be happy only given another caveat: that the data, and any results of research using it, remain the IP of the NHS, and subject to the same confidentiality restrictions as the original data is (I mean.... should have been).

        Or, possibly, that private companies could use this kind of data to provide useful analysis, for payment of a large fee, representing the real value of the data. Fee to be used to swell the NHS's coffers for spending on healthcare.

        Where did this assumption that data simply belongs to whoever can get hold of it come from? Answer: it's a convenient lie which serves enormous commercial interests like Google and FB.

    2. Unep Eurobats
      Childcatcher

      Re: 'inappropriate' or 'illegal'?

      Exactly - as AC says below, stop the pussyfooting. Was it illegal for the hospital to give Google 'identifiable patient records'? Or was it illegal for Google to then use those records beyond its remit? Or both?

      1. David 164

        Re: 'inappropriate' or 'illegal'?

        It's always been illegal for a company to use information beyond the scope of its intended purpose - it has been since they first passed the Data Protection Act. I'm pretty sure analysing patient data to improve patient care doesn't go beyond that, and is allowed by the declaration patients sign when they join up with their GP or sign forms at hospital.

        The issue may be that Google and the Royal Free London are stretching that declaration to the maximum, though. The ICO will have to decide.

        1. SImon Hobson Bronze badge

          Re: 'inappropriate' or 'illegal'?

          ... the declaration patients sign when they join up with their GP or sign forms at hospital.

          I don't recall ever signing any data protection stuff with my GP, but then when I last signed up with them, they were still on paper records.

          Ditto when I've been to hospital - they've created records without asking my consent. They've also ignored my letters on the subject, but that's another matter !

    3. David 164

      Legally it doesn't mean anything, though it does sound rather good if you're a participant in the sport called bashing Google. We will have to wait for the ICO to offer a proper insight on whether this was legal or illegal. My guess is that Deepmind did comply with all of the relevant laws at the time, and while it may have taken an unorthodox approach, it didn't breach patient data or confidentiality or break any laws.

      The ICO will probably make a recommendation that the Department of Health should construct some rules and regulations around this.

    4. Ian Michael Gumby
      Boffin

      @Korev

      Actually both.

      It was illegal for Google to be in possession of the data.

      At a minimum, they should now provide an audit of how they used the data, where they stored it and to then verify its destruction.

      The sad thing... their gall and disregard for the law is prevalent in Silicon Valley and is continually being taught in schools.

  5. Anonymous Coward
    Anonymous Coward

    Streams is showing real patient benefits.

    Yes, the patients really do appreciate the improved advertising they now get.

    1. Mage Silver badge
      Devil

      Re: Streams is showing real patient benefits.

      Google is a totally inappropriate partner.

      Also no evidence that any such "AI" actually does much. It's no different from 1980 "Expert Systems" for medicine, just more data.

      An American hospital got into trouble doing a project with the arguably more expert IBM "AI" system, Watson. It never delivered.

      1. Anonymous Coward
        Anonymous Coward

        Re: Streams is showing real patient benefits.

        Google is a totally appropriate partner, they have huge expertise at big data analysis, and if it saves someone YOU personally care about, you will be thankful.

        1. sebt
          FAIL

          Re: Streams is showing real patient benefits.

          @AC

          "if it saves someone YOU personally care about, you will be thankful."

          Ah, the usual "if it saves ONE life..." fallacy. Always deployed when there's an argument about public-health ethics. Always deployed as if it overrides any other considerations. Didn't have to wait long for it to pop up here.

          1. Tom 38
            Trollface

            Re: Streams is showing real patient benefits.

            But what if it saves a CHILD'S LIFE?!

            1. Captain DaFt

              Re: Streams is showing real patient benefits.

              "But what if it saves a CHILD'S LIFE?!"

              But what if that child is the next Pol Pot?

              (What ifs are fun!)

            2. PNGuinn
              Trollface

              Re: Streams is showing real patient benefits.

              "But what if it saves a CHILD'S LIFE?!"

              But what if KITTENS were hurt?

          2. Daggerchild Silver badge

            Re: Streams is showing real patient benefits.

            Ah, the usual "if it saves ONE life..." fallacy

            I notice you didn't actually say he was wrong. Quite possibly because, in the case in question, the Trust were using it to try and save lives (before it had gone through formal trials), and you're unable to point out the Greater Evil you say is hiding behind it. You just 'know' it exists.

            I'm afraid it's true - Google really are good at this stuff. Have you tried actually *buying* any of this data 'Google sell' about people?

            Every hacker in the world is trying to get into Google with almost nothing to show for it. How's the NHS doing on that score at the moment?

            1. sebt

              Re: Streams is showing real patient benefits.

              The Greater Evil here is perfectly clear.

              Our confidential data is being provided to a profit-making company for nothing or next to nothing. Whether it (overtly) has so far or not, the company has no obligation whatsoever to respect privacy, to use the data strictly for the purpose intended, or to do anything other than pursue its own profit. It's a company that has a track-record of building income streams from data.

              The fact this exercise may have helped some people is a distraction. It's great that it did, but that's no excuse to brush the evils under the carpet, as if there was no better way to achieve the same outcome.

              1. Daggerchild Silver badge

                Re: Streams is showing real patient benefits.

                "The Greater Evil here is perfectly clear" - Then why were you unable to show it?

                "Our confidential data is being provided to a profit-making company for nothing or next to nothing" - Or, we could use the truth. Anon-mapped retina scans are being provided in return for a diagnosis tool that helps save lives.

                "the company has no obligation whatsoever" - apart from the law, the contract they signed, etc etc..

                "It's a company that has a track-record of" - playing Go, and Chess. You're thinking of the sister company, Google. Guilt by proximity isn't a thing.

                Hatred is not reason. Fear is not proof. This is not how you make the world better. Confirm your target. Aim.

                1. Tom 38

                  Re: Streams is showing real patient benefits.

                  "the company has no obligation whatsoever" - apart from the law, the contract they signed, etc etc..

                  So, this article is about them not following their contract. They were supposed to use the data to train and discard it. They are now running a service using that data.

                  Ignore whether it is a good or a bad thing; evidently they are not following their contract now so what happens in the future?

                  1. Daggerchild Silver badge

                    Re: Streams is showing real patient benefits.

                    "So, this article is about them not following their contract"

                    Actually, this article is about the Trust and Deepmind not drawing up the *correct* contract. Technically.

                    Practically, it seems they drew up the correct contract for *live use* (and then used it live where it highlighted things). Not testing use, which it officially was (where they may not have been able to act?). This looks more like a paperwork squabble than anything actually evil.

                    1. JohnG

                      Re: Streams is showing real patient benefits.

                      It isn't just a contractual issue between Google and the trust concerned. It is a question of whether data protection laws were broken. If patients' data was used without their consent, or for purposes for which Google and the trust did not have their consent, then it is likely that both Google and the trust have acted illegally.

                      1. David 164

                        Re: Streams is showing real patient benefits.

                        But if that's the case, it's likely all trusts have been breaking the law for decades, as they all regularly process data in ways that are unrelated to patient care.

              2. David 164

                Re: Streams is showing real patient benefits.

                Is there a better way to achieve the same outcome? Hospitals across the NHS run trials, so it's not like running trials and monitoring the results is new to them. I very much doubt they'd be expanding the roll-out of the service to more doctors and staff if it were leading to poorer outcomes for patients.

            2. SImon Hobson Bronze badge

              Re: Streams is showing real patient benefits.

              Google really are good at this stuff

              And therein lies the heart of the problem - we know darn well what Google are good at. They are very good at ignoring the law and using their size to avoid the repercussions. They are very good at mining large volumes of data.

              Thus, we can have little (or no) confidence that they won't take this data - which should be kept in its own secure silo, never leaving UK (or at least EU) control and jurisdiction - and mine it along with other data that would probably de-anonymise it.

              So far, I have not read anything to suggest that Google has the corporate structures in place to respond as MS have done with the Irish emails case - ie tell the US authorities to sod off as the US company & staff don't physically have the access to provide them with the data which is held by a different legal entity on Irish soil.

              But most of all, I have seen nothing (but plenty to the contrary) to suggest that Google wouldn't pause even a second to consider mining the data along with everything else it holds.

              1. Daggerchild Silver badge

                Re: Streams is showing real patient benefits.

                But... you just complained Google don't have corporate/geographic separation, and so can't be trusted with this kind of data, without noticing that Deepmind is a legally separate and UK-based corporate entity, fulfilling your exact criteria.

      2. Anonymous Coward
        Anonymous Coward

        Re: Streams is showing real patient benefits.

        "Also no evidence that any such "AI" actually does much. It's no different from 1980 "Expert Systems" for medicine, just more data."

        Right off the commentard bingo card.

        Neural nets are absolutely nothing like expert systems. If you limit the definition of "expert system" to "makes a decision" you have a point, but the underlying mechanics for how that decision is formulated are radically different. More importantly, the mechanisms for *developing the formulation mechanism* are about as different as it is possible to be. Expert systems were big long lists of fixed if-then-else. Neural networks are definitively not.
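
        To put that contrast in purely illustrative code: the first function below is the hand-written if-then-else style of a 1980s expert system; the second is a toy one-hidden-layer network whose behaviour comes entirely from learned weights. Neither is Streams nor any real product, and every threshold and weight is invented.

          # Illustrative contrast only; neither function is any real system.
          import math

          def expert_system_alert(temp_c, systolic_bp, urine_ml_per_hr):
              # 1980s style: behaviour is a fixed, hand-authored rule chain.
              if temp_c > 38.5 and systolic_bp < 90:
                  return "alert"
              if urine_ml_per_hr < 30:
                  return "alert"
              return "no alert"

          def tiny_network_score(x, w1, b1, w2, b2):
              # One hidden layer, forward pass only: behaviour is determined
              # entirely by weights learned from labelled examples, not by rules.
              hidden = [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + b)  # ReLU units
                        for row, b in zip(w1, b1)]
              z = sum(wo * h for wo, h in zip(w2, hidden)) + b2
              return 1.0 / (1.0 + math.exp(-z))  # risk score in (0, 1)

          # Changing the first means editing rules by hand; changing the second
          # means re-running training to produce new weights.
          print(expert_system_alert(39.0, 85, 40))
          print(tiny_network_score([39.0, 85.0, 40.0],
                                   [[0.02, -0.01, -0.03], [0.5, 0.0, -0.1]],  # invented weights
                                   [0.1, -1.0], [1.5, 0.7], -0.2))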

        Watson being a steaming pile of turd is mainly due to it being a rebranded collection of acquired/legacy tools. Ask your local IBMer to explain what Watson is in less than 100 words. Proceed to smirk.

  6. Anonymous Coward
    Anonymous Coward

    Oh please, stop the pussy footing..

    Can we stop the BS already, dear watchdog?

    It's not "inappropriate", it's quite simply illegal. I don't care whose political toes you'll stand on calling a spade a spade, but if they were involved they deserve the bruises. Stop the euphemisms and doublespeak already.

  7. tedleaf

    And why did Google have to use UK medical data?

    Nothing to do with the fact that no-one in America dared to, because of the amount in fines/damages it would cost them?

    As usual, Google will lie and lie again until they are forced to delete/destroy the data they should never have had access to in the first place.

    1. Anonymous Coward
      Anonymous Coward

      Probably because the American health care system is not organized enough to give them that much data. Security through obscurity, if you will.

      EDIT: Actually, I now remember that DeepMind is a British company that Google bought. That might explain why they got the data from here rather than there.

    2. David 164

      Because the NHS is one of the best in the world at collecting and organising this data, and at proactively using it to run services and to guide changes that need to be made to achieve better outcomes. That's compared to other healthcare providers, not compared to other industries.

  8. Mark 110

    There's a more interesting ethical question than just "the rules"

    So it was just to train the model. And then having trained the model they realised they had identified people who needed kidney treatment.

    Would it have been ethical for them to have ignored the fact that people needed treatment and not told those people?

    If it was me, I would have preferred to have known and been treated.

    1. Anonymous Coward
      Anonymous Coward

      Re: There's a more interesting ethical question than just "the rules"

      Although the same argument could be used to justify intelligence gained through torture if it turned out to be life saving. This should just be treated as serendipity.

      I would assume, though, that once the AI had seen the data it should be discarded, and what's left is simply rules.

      At some point to validate the rules a real dataset has to be provided somehow too - in order to both validate the results, and indeed find the few that missed a diagnosis.

      I don't see any reasonable way of delivering such an AI without, at some point, the full dataset being provided to train it - unless we either want no AI-based diagnostics at all, or it has to practise on people individually and make catastrophic mistakes as it learns from incorrect decisions. Neither of those options seems sound either.

      1. collinsl Bronze badge

        Re: There's a more interesting ethical question than just "the rules"

        Problem is, when we attempted to use medical data gained through torture in the post-WW2 period (the Germans and Japanese did the torturing; we acquired the records/doctors after the war), most of it turned out to be complete hogwash, or stuff we knew already.

      2. Anonymous Coward
        Anonymous Coward

        Re: There's a more interesting ethical question than just "the rules"

        I don't see any reasonable way of delivering such an AI without, at some point, the full dataset being provided to train it - unless we either want no AI-based diagnostics at all, or it has to practise on people individually and make catastrophic mistakes as it learns from incorrect decisions. Neither of those options seems sound either.

        This isn't about whether or not datasets containing identifiable patient data should be used to train AI models. You can do what you like with medical data as long as the people that it describes have given you permission. In this case it seems that the hospital only had permission for it to be used for the direct care of their patients.

        My reading of para 3 of the letter is that the hospital justified handing over the records because Streams was being used for direct patient care, and at the same time said that Streams was not being used for direct patient care because e.g. "clinical safety verification is still in progress".

        Perhaps they also run prenatal clinics for people who are only a little bit pregnant.

        1. David 164

          Re: There's a more interesting ethical question than just "the rules"

          Except of course hospitals/GPs/clinics all across the NHS use patient data for areas outside of direct patient care without specific permission being given for each of those uses. It's how the NHS can spot abnormal patient deaths, it's how we know cancer drugs are as good as their manufacturers say, it's how we know whether procedures are worth doing.

    2. Pompous Git Silver badge
      Pint

      Re: There's a more interesting ethical question than just "the rules"

      "Would it have been ethical for them to have ignored the fact that people needed treatment and not told those people?

      If it was me I would preferred to have known and been treated."

      Beat me to it. Have an upvote!

    3. Jonathan Richards 1

      Re: There's a more interesting ethical question than just "the rules"

      > having trained the model they realised they had identified people who needed kidney treatment

      If DeepMind works like other neural network AIs, one trains the system by presenting it with known outcome data, so e.g. by feeding different representations of the letter "A" you can train a text-recognition algorithm to return a diagnosis of "A", even from a representation that it hasn't "seen" before. In this instance, one would have fed it millions of pieces of medical information for previous patients with, and without, kidney disease as diagnosed and confirmed by a trained human, and ended up with a diagnostic app. What would then be unethical, given the approvals that were given in the first place, would be to let the app loose on new patients.
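
      A minimal sketch of that train-on-known-outcomes pattern, using scikit-learn and made-up numbers - nothing here is DeepMind's actual pipeline, just the general supervised-learning shape described above:

        # Train on historical, already-diagnosed records, then apply the fitted
        # model to a previously unseen case. All values are invented.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Historical records: a few numeric features per patient ...
        X_history = rng.normal(size=(1000, 3))
        # ... and the label a clinician already confirmed (1 = kidney disease).
        y_history = (X_history[:, 0] + 0.5 * X_history[:, 1] > 0.8).astype(int)

        model = LogisticRegression().fit(X_history, y_history)

        # A new, unseen patient still gets a risk estimate, even though this
        # exact combination of values was never in the training set.
        new_patient = np.array([[1.2, 0.4, -0.3]])
        print(model.predict_proba(new_patient)[0, 1])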

      One point six million is a lot of records. I'm supposing that these came from all over the National Health Service, not just from the Royal Free's patient list?

      1. Anonymous Coward
        Anonymous Coward

        Re: One point six million is a lot of records

        Well no, that would be the Royal Free only; that is less than the population size that the hospital serves.

        Anon as I work in Hospital IT (yes we are having a super week)

    4. Lamont Cranston

      Re: Would it have been ethical for them...

      The data used to train the AI should have been fully anonymised, so alerting patients that the machine has identified them as requiring treatment should never have been an option. Once the system has proven itself on anonymous data, then it can be fed the identifiable data providing the patients involved have consented to this use of their data.
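
      For what it's worth, "fully anonymised" in practice usually means something like the sketch below: strip the direct identifiers, replace the NHS number with a keyed pseudonym, and coarsen the rest. The field names and key are hypothetical, and as noted elsewhere in the thread, quasi-identifiers can still leak identity, so this is a starting point rather than a guarantee.

        # Rough sketch of pseudonymising a record before it leaves the hospital.
        # Field names are hypothetical; real de-identification needs far more care
        # (dates, free text and rare attribute combinations all leak identity).
        import hashlib
        import hmac

        SECRET_KEY = b"kept-by-the-data-controller-never-shared"

        def pseudonymise(record: dict) -> dict:
            out = dict(record)
            # A keyed hash lets the same patient link across their own records,
            # but only the key holder can map the pseudonym back to a person.
            out["patient_ref"] = hmac.new(
                SECRET_KEY, record["nhs_number"].encode(), hashlib.sha256
            ).hexdigest()[:16]
            # Drop direct identifiers outright.
            for field in ("nhs_number", "name", "address"):
                out.pop(field, None)
            # Coarsen quasi-identifiers rather than keeping exact values.
            out["age_band"] = (record["age"] // 10) * 10
            out.pop("age", None)
            return out

        record = {"nhs_number": "9434765919", "name": "A. Jones",
                  "address": "1 Example Rd", "age": 63, "creatinine": 182}
        print(pseudonymise(record))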

    5. sebt
      Thumb Up

      Re: There's a more interesting ethical question than just "the rules"

      @Mark110

      "Would it have been ethical for them to have ignored the fact that people needed treatment and not told those people?"

      Very good point. I think it would clearly be unethical.

      Where that argument becomes invalid is when it gets abused (as it often is) to argue:

      - Using this method, we managed to treat someone who'd otherwise not have been noticed, or even save their life;

      - Therefore, any objections to the method itself (e.g. giving data to big companies for free, torturing prisoners) are irrelevant and overridden.

  9. Korev Silver badge
    Boffin

    Off the shelf data?

    You can buy anonymised data from the NHS via the CPRD, which can be used for this exact purpose. Google could have avoided this fuss if they'd just handed over some cash. Maybe the PHBs at the Hospital got all excited by working with Google and handed their data over for free.

  10. Unep Eurobats
    Boffin

    AI usage out of control?

    That's the issue. Did doctors rely on the AI's diagnosis to provide treatment?

    Test data is no good without an outcome: you feed in 1.6m sets of symptoms, 1.6m treatment regimes and 1.6m outcomes (eg died, got better etc). The AI learns what treatments work best for a given set of symptoms. The hope is that eventually it can give a better diagnosis than a human.

    So how do we get from this to Google's AI being used to treat real people? If Google simply flagged anomalous results for clinicians to follow up, that seems fine. As Mark 110 says above, they would presumably rather know. But clearly the project has greatly exceeded its original scope if the still under-development AI was blindly used to direct treatment, either for patients in the initial data set or others.
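
    Sketched in made-up code, the "flag for follow-up" version looks something like this: score a held-out set and queue only high-risk cases for a human to review, rather than letting the model direct treatment. The data, threshold and metric below are all invented for illustration.

      # Evaluate on a held-out set, then flag high-risk cases for clinician
      # review rather than acting on them automatically. Synthetic data only.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)
      X = rng.normal(size=(5000, 4))
      y = (X[:, 0] - 0.7 * X[:, 2] > 1.0).astype(int)  # synthetic outcome

      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
      model = LogisticRegression().fit(X_train, y_train)

      risk = model.predict_proba(X_test)[:, 1]
      flagged = np.flatnonzero(risk > 0.8)  # cases queued for human review

      # Headline accuracy means little on a rare outcome; check sensitivity too.
      predicted = (risk > 0.8).astype(int)
      caught = np.sum((predicted == 1) & (y_test == 1))
      sensitivity = caught / max(np.sum(y_test == 1), 1)
      print(f"{flagged.size} flagged for review, sensitivity {sensitivity:.2f}")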

    1. Anonymous Coward
      Happy

      Re: AI usage out of control?

      So how do we get from this to Google's AI being used to treat real people? If Google simply flagged anomalous results for clinicians to follow up, that seems fine.

      Even if that were the case, if I provide sensitive personal data to a hospital for my care, that doesn't give them the right to use it for research purposes unrelated to my care. If the outcome for me is known, and it is being used instead for someone else's care, then it is not being used for my care.

      Anonymous data isn't considered to be personal, but unanonymised (or insufficiently anonymised) medical data is considered to be sensitive personal data for which there are particular safeguards. If you want to use my sensitive personal data for research then you can do so only if I give you explicit permission. Inconvenient, I know, to companies who want to make millions of dollars on the back of it, and perhaps inadvertently let it be stolen in the process, but that is how it is.

      1. Anonymous Coward
        Anonymous Coward

        Re: research purposes unrelated to my care.

        I'm not sure it is best to be too narrow about what constitutes "my care". For example, although your care needs now may not overlap with some specific research purposes, they may well be likely to in the future (e.g. you might get cancer, or have a stroke, which perhaps isn't in the your-care-right-now category, but might be in ten years time - your existing data might contain an as-yet unnoticed hint of future problems). You might even broaden it out further, with (hypothetically) "if the NHS mines records it can get better outcomes for more patients" ... thus freeing up resources for what will be /your/ care needs in the future, even if they only save money on something not ever related to you.

        It seems to me there's a big grey area about what different people might think acceptable, or might reasonably be convinced is acceptable.

        But IMO it's not a grey area we want an organization like /Google/ anywhere near, whether in the test phase or later on.

        1. Anonymous Coward
          Happy

          Re: research purposes unrelated to my care.

          I'm not sure it is best to be too narrow about what constitutes "my care". For example, although your care needs now may not overlap with some specific research purposes, they may well be likely to in the future (e.g. you might get cancer, or have a stroke, which perhaps isn't in the your-care-right-now category, but might be in ten years time - your existing data might contain an as-yet unnoticed hint of future problems).

          That is just another way of saying that patient data should be used for any bit of health-related research, on the rather thin basis that the participants may one day catch something.

          This is explicitly not what patients currently consent to - the data is for their personal immediate care. When people consent to be participants in research projects (which includes reusing data collected for other purposes), they have to give informed consent, which means they cannot consent to a purpose they don't understand, let alone don't even know about. These rules came about because many of the darkest parts of recent history are littered with medical research projects where researchers abused participants who did not even know they were part of an experiment.

      2. David 164

        Re: AI usage out of control?

        Better not use the NHS then, because pretty much all data is collected and used for research purposes or for the day-to-day management and improvement of the NHS.

        The declaration used by the NHS probably isn't detailed enough to cover the uses of this data that are mandated by law, then.

  11. Anonymous Coward
    Anonymous Coward

    saving hours every day

    Shooting all those pesky prisoners would save even more hours!

  12. Bilious

    Identify those with kidney damage?

    Why would you need Big Data for that?

    http://www.aafp.org/afp/2012/1001/p631.html

    1. Anonymous Coward
      Anonymous Coward

      Re: Identify those with kidney damage?

      Well, you need big data processing as there is more data than there are docs, nurses or other AHPs (allied health professionals) to deal with it. Finding AKI (acute kidney injury) is a problem sometimes, as it often happens when other things are happening with a patient.

      E.g. an older person has a fall, breaks a hip, is rushed in - hip replacement, surgery, all is looking good and... oh, AKI (because in all the goings-on with the hip they missed that the patient was dehydrated, was not passing urine and had a pre-existing UTI (urinary tract infection)).

      Lots of different reasons people get AKI

      So we have an algorithm that sits on the pathology system that grades AKI from 1 (mild) to 3 (severe).

      https://www.england.nhs.uk/akiprogramme/aki-algorithm/
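
      For the curious, the staging logic behind that algorithm boils down to a creatinine-ratio calculation along these lines - a simplified approximation of the published KDIGO-style thresholds, not the deployed NHS code (the real algorithm's baseline selection and time-window rules are more involved):

        # Simplified sketch of KDIGO-style AKI staging from serum creatinine.
        # The real NHS England algorithm (linked above) selects baselines and
        # time windows far more carefully; this is only an approximation.

        def aki_stage(creatinine_now_umol_l, baseline_umol_l, rise_in_48h_umol_l=0.0):
            """Return 0 (no alert) or AKI stage 1-3."""
            ratio = creatinine_now_umol_l / baseline_umol_l
            if ratio >= 3.0:
                return 3
            if ratio >= 2.0:
                return 2
            if ratio >= 1.5 or rise_in_48h_umol_l > 26.0:
                return 1
            return 0

        # Example: current creatinine of 180 umol/L against a baseline of 80 umol/L.
        print(aki_stage(180, 80))  # -> 2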

      That will spit out maybe 150 AKI alerts a day for the Royal Free (I don't work there; I work at a different hospital and have scaled our output to the size of the Royal Free).

      I guess that this attempt by the Royal Free was to find the AKIs and the causes of the AKIs in a new way. Good idea, but they didn't get the information governance right. (By the way, it was Royal Free docs that approached Google, not the other way round.)

  13. Mage Silver badge
    Devil

    Not surprised

    I said at the time that it had to be illegal.

    Unless EXECUTIVES are fined, it will keep happening.

    1. Anonymous Coward
      Anonymous Coward

      Re: Not surprised

      "Unless EXECUTIVES are JAILED, it will keep happening." TFTFY

    2. David 164

      Re: Not surprised

      But it was a doctor, not an executive, that suggested they do it this way.

  14. Korev Silver badge

    Although the same argument could be used to justify intelligence gained through torture if it turned out to be life saving. This should just be treated as serendipity.

    True, but you'd be pretty pissed off if you ended up on the kidney transplant list when they could have picked up the disease earlier and just given you a few tablets and/or diet changes.

  15. WatcherFrog

    The data provided by the Royal Free to Google DeepMind is in the same format it would have been provided to me as an academic (sometimes not even anonymised, as in this case). When I recently made a data request to NHS Digital (it's called Hospital Episode Statistics records), patients had no clue what I was doing; sure, it has scrutiny by a panel, and that's it. I think what DeepMind and the Free are doing could be really beneficial, yet they executed it in a poor manner.

    I would recommend people consult DeepMind's papers on this topic: while doctors were given Streams, they were told to make their own judgement on the patient treatment pathway and not rely on Streams' outcomes (they use reinforcement learning).

  16. nematoad
    Unhappy

    Ha!

    "...and has never been used for commercial purposes or combined with Google products, services or ads – and never will be,"

    Ah, that's as convincing as "Don't be evil."

    Let's face it, these scumbags are in it for one thing and one thing only, money. They probably think that if a life is saved then that would be a bonus.

    1. Eguro

      Re: Ha!

      Well maybe it wasn't combined with Google products, but Alphabet products?

      If something like that happened, however, it was obviously a rogue engineer behind it!

  17. John Smith 19 Gold badge
    Unhappy

    "Such gross disregard of medical ethics by commercial interests"

    US corporation + NHS.

    What could go wrong?

  18. adam payne

    "Google's use of Brits' medical records to train an AI and treat people was legally "inappropriate," says Dame Fiona Caldicott, the National Data Guardian at the UK's Department of Health."

    Not just inappropriate but illegal. Call it what it is.

    "Nurses and doctors have told us that Streams is already speeding up urgent care at the Royal Free and saving hours every day. The data used to provide the app has always been strictly controlled by the Royal Free and has never been used for commercial purposes or combined with Google products, services or ads – and never will be," said a DeepMind spokesman.

    They haven't changed the T&Cs yet.

    1. David 164

      This is only advisory. Only the ICO and the courts can actually declare this illegal. I somehow doubt they will declare it illegal; they will probably say Google and the Royal Free London should have done it in a different way, using a different method the NHS has for data sharing.

  19. HoggertyHog

    Medical device?

    It's a medical device (AI or not) as it affects patient treatment. That means ISO 13485 and CE marking. There are so many more risks involved in producing medical products, and there is a clear risk-based standard that every medical device producer in the EU is required to follow. If you are not following this, you are by definition risking patient lives. No idea if Google did this.

    In terms of data collection and processing:

    The GDPR (https://en.wikipedia.org/wiki/General_Data_Protection_Regulation) will have real teeth (fines up to a maximum of 4% of global turnover) when it comes into force next May. Probably one of the main reasons we have been told to leave the EU...

  20. MSmith

    Maybe the article wasn't clear

    I'm confused. So, they used 1.6 million records to train the AI. OK, real records are probably needed. Since you don't need to feed any identifying information into the system to do that, you don't necessarily have a real privacy issue. Then, to test it, they feed in a bunch of other people and it determines they may have kidney issues. That's great. Now what do you do about it? Do you just let these people go on with possibly undiagnosed kidney issues, or do you notify their physicians that your AI trial program indicates their patient may have kidney issues - please check and let us know, so we can get an idea of the accuracy of the program?

  21. Jim Birch

    Luddites

    The use of big data techniques is great for medicine. It should be a normal practice that goes on all the time. Privacy protection can be achieved by anonymising data, but this can be difficult to implement fully; in practice, security standards and data-use agreements are often a better approach.

    If you absolutely don't trust the government or companies or individuals, we have a problem.
