Google DeepMind's use of 1.6m Brits' medical records to test app was 'legally inappropriate'

Google's use of Brits' real medical records to test a software algorithm that diagnoses kidney disease was legally "inappropriate," says Dame Fiona Caldicott, the National Data Guardian at the UK's Department of Health. In April 2016 it was revealed the web giant had signed a deal with the Royal Free Hospital in London to …


      1. collinsl Bronze badge

        Re: There's a more interesting ethical question than just "the rules"

        Problem is that when we attempted to use medical data gained through torture in the post-WW2 period (the Germans and Japanese did the torturing; we acquired the records/doctors after the war), most of it turned out to be complete hogwash, or stuff we knew already.

      2. Anonymous Coward
        Anonymous Coward

        Re: There's a more interesting ethical question than just "the rules"

        I don't see any reasonable way of delivering such an AI without the full dataset, at some point, being provided to train it - unless we want either no AI-based diagnostics at all, or a system that practises on people individually and makes catastrophic mistakes as it learns from incorrect decisions. Neither of those options seems sound either.

        This isn't about whether or not datasets containing identifiable patient data should be used to train AI models. You can do what you like with medical data as long as the people that it describes have given you permission. In this case it seems that the hospital only had permission for it to be used for the direct care of their patients.

        My reading of para 3 of the letter is that the hospital justified handing over the records because Streams was being used for direct patient care, and at the same time said that Streams was not being used for direct patient care because e.g. "clinical safety verification is still in progress".

        Perhaps they also run prenatal clinics for people who are only a little bit pregnant.

        1. David 164

          Re: There's a more interesting ethical question than just "the rules"

          Except of course hospitals/GPs/clinics all across the NHS use patient data for purposes outside direct patient care without specific permission being given for each of those uses. It's how the NHS can spot abnormal patient deaths, it's how we know cancer drugs are as good as their manufacturers say, and it's how we know whether procedures are worth doing.

    1. Pompous Git Silver badge
      Pint

      Re: There's a more interesting ethical question than just "the rules"

      "Would it have been ethical for them to have ignored the fact that people needed treatment and not told those people?

      If it was me I would have preferred to have known and been treated."

      Beat me to it. Have an upvote!

    2. Jonathan Richards 1

      Re: There's a more interesting ethical question than just "the rules"

      > having trained the model they realised they had identified people who needed kidney treatment

      If DeepMind works like other neural network AIs, one trains the system by presenting it with known outcome data, so e.g. by feeding different representations of the letter "A" you can train a text-recognition algorithm to return a diagnosis of "A", even from a representation that it hasn't "seen" before. In this instance, one would have fed it millions of pieces of medical information for previous patients with, and without, kidney disease as diagnosed and confirmed by a trained human, and ended up with a diagnostic app. What would then be unethical, given the approvals that were given in the first place, would be to let the app loose on new patients.
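
      As a rough sketch of that kind of supervised training (in Python with scikit-learn; the data here is invented purely for illustration, nothing to do with the actual Royal Free records or DeepMind's method):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        # Stand-in data: in reality each row would be one patient's blood-test
        # results etc., and y the clinician-confirmed diagnosis
        # (1 = kidney disease, 0 = not).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(10_000, 20))              # 10k synthetic "patients"
        y = (X[:, 0] + 0.5 * X[:, 1] > 1).astype(int)  # made-up diagnostic rule

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

        model = RandomForestClassifier(n_estimators=100)
        model.fit(X_train, y_train)      # learn from records with known outcomes

        # "Diagnose" records the model has never seen before
        print("accuracy on unseen patients:", model.score(X_test, y_test))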

      One point six million is a lot of records. I'm supposing that these came from all over the National Health Service, not just from the Royal Free's patient list?

      1. Anonymous Coward
        Anonymous Coward

        Re: One point six million is a lot of records

        Well no, that would be the Royal Free only - 1.6m is less than the population that hospital serves.

        Anon as I work in Hospital IT (yes we are having a super week)

    3. Lamont Cranston

      Re: Would it have been ethical for them...

      The data used to train the AI should have been fully anonymised, so alerting patients that the machine has identified them as requiring treatment should never have been an option. Once the system has proven itself on anonymous data, then it can be fed the identifiable data providing the patients involved have consented to this use of their data.
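
      A minimal sketch of what that anonymisation step could look like before any record leaves the hospital (the field names and salt here are invented for illustration; real anonymisation also needs date shifting, k-anonymity checks and so on):

        import hashlib

        SALT = b"kept-inside-the-hospital"  # secret; never shared with the researcher

        def pseudonymise(record: dict) -> dict:
            """Strip direct identifiers; replace the NHS number with a salted hash."""
            token = hashlib.sha256(SALT + record["nhs_number"].encode()).hexdigest()
            return {
                "patient_token": token,                # stable, not reversible without SALT
                "age_band": 5 * (record["age"] // 5),  # coarsen quasi-identifiers
                "creatinine": record["creatinine"],    # clinical values kept as-is
            }

        print(pseudonymise({"nhs_number": "9434765919", "age": 67, "creatinine": 182}))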

    4. sebt
      Thumb Up

      Re: There's a more interesting ethical question than just "the rules"

      @Mark110

      "Would it have been ethical for them to have ignored the fact that people needed treatment and not told those people?"

      Very good point. I think it would clearly be unethical.

      Where that argument becomes invalid is when it gets abused (as it often is) to argue:

      - Using this method, we managed to treat someone who'd otherwise not have been noticed, or even save their life;

      - Therefore, any objections to the method itself (e.g. giving data to big companies for free, torturing prisoners) are irrelevant and overridden.

  1. Korev Silver badge
    Boffin

    Off the shelf data?

    You can buy anonymised data from the NHS via the CPRD, which can be used for this exact purpose. Google could have avoided this fuss if they'd just handed over some cash. Maybe the PHBs at the hospital got all excited about working with Google and handed their data over for free.

  2. Unep Eurobats
    Boffin

    AI usage out of control?

    That's the issue. Did doctors rely on the AI's diagnosis to provide treatment?

    Training data is no good without an outcome: you feed in 1.6m sets of symptoms, 1.6m treatment regimes and 1.6m outcomes (e.g. died, got better). The AI learns which treatments work best for a given set of symptoms. The hope is that eventually it can give a better diagnosis than a human.
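
    As a toy illustration of that symptoms/treatments/outcomes idea (all records invented), you could start with something as simple as ranking treatments by observed recovery rate before reaching for a neural network:

      import pandas as pd

      # Invented records: presenting symptom, treatment given, outcome (1 = got better)
      df = pd.DataFrame({
          "symptom":   ["oliguria", "oliguria", "oliguria", "oedema", "oedema", "oedema"],
          "treatment": ["fluids",   "dialysis", "fluids",   "fluids", "dialysis", "dialysis"],
          "outcome":   [1,          1,          0,          1,        1,          0],
      })

      # For each symptom, rank treatments by observed recovery rate
      best = (df.groupby(["symptom", "treatment"])["outcome"]
                .mean()
                .sort_values(ascending=False))
      print(best)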

    So how do we get from this to Google's AI being used to treat real people? If Google simply flagged anomalous results for clinicians to follow up, that seems fine. As Mark 110 says above, they would presumably rather know. But clearly the project has greatly exceeded its original scope if the still under-development AI was blindly used to direct treatment, either for patients in the initial data set or others.

    1. Anonymous Coward
      Happy

      Re: AI usage out of control?

      So how do we get from this to Google's AI being used to treat real people? If Google simply flagged anomalous results for clinicians to follow up, that seems fine.

      Even if that were the case, if I provide sensitive personal data to a hospital for my care, that doesn't give them the right to use it for research purposes unrelated to my care. If the outcome for me is known, and it is being used instead for someone else's care, then it is not being used for my care.

      Anonymous data isn't considered to be personal, but unanonymised (or insufficiently anonymised) medical data is considered to be sensitive personal data, for which there are particular safeguards. If you want to use my sensitive personal data for research then you can do so only if I give you explicit permission. Inconvenient, I know, for companies who want to make millions of dollars on the back of it, and perhaps inadvertently let it be stolen in the process, but that is how it is.

      1. Anonymous Coward
        Anonymous Coward

        Re: research purposes unrelated to my care.

        I'm not sure it is best to be too narrow about what constitutes "my care". For example, although your care needs now may not overlap with some specific research purposes, they may well do in the future (e.g. you might get cancer, or have a stroke, which perhaps isn't in the your-care-right-now category, but might be in ten years' time - your existing data might contain an as-yet unnoticed hint of future problems). You might broaden it out even further, with (hypothetically) "if the NHS mines records it can get better outcomes for more patients"... thus freeing up resources for what will be /your/ care needs in the future, even if it only saves money on something never related to you.

        It seems to me there's a big grey area about what different people might think acceptable, or might reasonably be convinced is acceptable.

        But IMO it's not a grey area we want an organization like /Google/ anywhere near, whether in the test phase or later on.

        1. Anonymous Coward
          Happy

          Re: research purposes unrelated to my care.

          I'm not sure it is best to be too narrow about what constitutes "my care". For example, although your care needs now may not overlap with some specific research purposes, they may well do in the future (e.g. you might get cancer, or have a stroke, which perhaps isn't in the your-care-right-now category, but might be in ten years' time - your existing data might contain an as-yet unnoticed hint of future problems).

          That is just another way of saying that patient data should be used for any bit of health-related research, on the rather thin basis that the participants may one day catch something.

          This is explicitly not what patients currently consent to - the data is for their personal, immediate care. When people consent to be participants in research projects (which includes reusing data collected for other purposes), they have to give informed consent, which means they cannot consent to a purpose they don't understand, let alone one they don't even know about. These rules came about because many of the darkest parts of recent history are littered with medical research projects where researchers abused participants who did not even know they were part of an experiment.

      2. David 164

        Re: AI usage out of control?

        Better not use the NHS then, because pretty much all data is collected and used for research purposes or for the day-to-day management and improvement of the NHS.

        The declaration used by the NHS probably isn't detailed enough to cover the uses of this data that are mandated by law, then.

  3. Anonymous Coward
    Anonymous Coward

    saving hours every day

    Shooting all those pesky prisoners would save even more hours!

  4. Bilious

    Identify those with kidney damage?

    Why would you need Big Data for that?

    http://www.aafp.org/afp/2012/1001/p631.html

    1. Anonymous Coward
      Anonymous Coward

      Re: Identify those with kidney damage?

      Well, you need big data processing because there is more data than there are docs, nurses or other AHPs (allied health professionals) to deal with it. Finding AKI (acute kidney injury) is sometimes a problem, as it often happens while other things are going on with a patient.

      E.g. an older person has a fall, breaks a hip, is rushed in for a hip replacement; the surgery all looks good and then - oh, AKI... (because in all the goings-on with the hip, they missed that the patient was dehydrated, was not passing urine, and had a pre-existing UTI (urinary tract infection)).

      There are lots of different reasons people get AKI.

      So we have an algorithm that sits on the pathology system and grades AKI from 1 (mild) to 3 (severe):

      https://www.england.nhs.uk/akiprogramme/aki-algorithm/
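
      In spirit (this is only a simplified sketch of the ratio-to-baseline rule, not the full NHS England specification, which also handles absolute creatinine rises, baseline look-back windows and paediatric cases), the grading works roughly like this:

        def aki_stage(current_creatinine: float, baseline_creatinine: float) -> int:
            """Grade acute kidney injury from the serum creatinine ratio.

            Simplified: the real NHS algorithm applies more rules than this.
            """
            ratio = current_creatinine / baseline_creatinine
            if ratio >= 3.0:
                return 3  # severe
            if ratio >= 2.0:
                return 2
            if ratio >= 1.5:
                return 1  # mild
            return 0      # no AKI flagged on the ratio alone

        print(aki_stage(current_creatinine=240, baseline_creatinine=80))  # -> 3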

      That will spit out maybe 150 AKI alerts a day for the Royal Free (I don't work there; I work at a different hospital and have scaled our output to the size of the Royal Free).

      I guess this attempt by the Royal Free was to find the AKIs, and the causes of the AKIs, in a new way. Good idea, but they didn't get the information governance right. (By the way, it was Royal Free docs who approached Google, not the other way round.)

  5. Mage Silver badge
    Devil

    Not surprised

    I said at the time that it had to be illegal.

    Unless EXECUTIVES are fined, it will keep happening.

    1. Anonymous Coward
      Anonymous Coward

      Re: Not surprised

      "Unless EXECUTIVES are JAILED, it will keep happening." TFTFY

    2. David 164

      Re: Not surprised

      But it was a doctor, not an executive, who suggested they do it this way.

  6. Korev Silver badge

    Although the same argument could be used to justify intelligence gained through torture if it turned out to be life saving. This should just be treated as serendipity.

    True, but you'd be pretty pissed off if you ended up on the kidney transplant list when they could have picked up the disease earlier and just given you a few tablets and/or diet changes.

  7. WatcherFrog

    The data provided by the Royal Free to Google DeepMind is in the same format it would have been provided to me as an academic (sometimes not even anonymised, as in this case). When I recently made a data request to NHS Digital (it's called Hospital Episode Statistics records), patients had no clue what I was doing; sure, it gets scrutiny by a panel, but that's it. I think what DeepMind and the Royal Free are doing could be really beneficial, yet they executed it poorly.

    I would recommend people consult DeepMind's papers on this topic. While doctors were given Streams, they were told to use their own judgement on the patient's treatment pathway and not to rely on Streams' outputs (they use reinforcement learning).

  8. nematoad
    Unhappy

    Ha!

    "...and has never been used for commercial purposes or combined with Google products, services or ads – and never will be,"

    Ah, that's as convincing as "Don't be evil."

    Let's face it, these scumbags are in it for one thing and one thing only: money. They probably think that if a life is saved then that would be a bonus.

    1. Eguro

      Re: Ha!

      Well, maybe it wasn't combined with Google products, but with Alphabet products?

      If something like that happened, however, it was obviously a rogue engineer behind it!

  9. John Smith 19 Gold badge
    Unhappy

    "Such gross disregard of medical ethics by commercial interests"

    US corporation + NHS.

    What could go wrong?

  10. adam payne

    "Google's use of Brits' medical records to train an AI and treat people was legally "inappropriate," says Dame Fiona Caldicott, the National Data Guardian at the UK's Department of Health."

    Not just inappropriate but illegal. Call it what it is.

    "Nurses and doctors have told us that Streams is already speeding up urgent care at the Royal Free and saving hours every day. The data used to provide the app has always been strictly controlled by the Royal Free and has never been used for commercial purposes or combined with Google products, services or ads – and never will be," said a DeepMind spokesman.

    They haven't changed the T&Cs yet.

    1. David 164

      This is only advisory. Only the ICO and the courts can actually declare this illegal, and I somehow doubt they will; they will probably say Google and the Royal Free London should have done it a different way, using one of the other data-sharing methods the NHS has.

  11. HoggertyHog

    Medical device?

    It's a medical device (AI or not), as it affects patient treatment. That means ISO 13485 and CE marking. There are so many more risks involved in producing medical products that there is a clear risk-based standard which every medical device producer in the EU is required to follow. If you are not following it, you are by definition risking patients' lives. No idea if Google did this.

    In terms of data collection and processing:

    The GDPR (https://en.wikipedia.org/wiki/General_Data_Protection_Regulation) will have real teeth (fines of up to 4% of global turnover) when it comes into force next May. Probably one of the main reasons we have been told to leave the EU...

  12. MSmith

    Maybe the article wasn't clear

    I'm confused. So, they used 1.6 million records to train the AI. OK, real records are probably needed. Since you don't need to feed any identifying information into the system to do that, you don't necessarily have a real privacy issue. Then, to test it, they feed in a bunch of other people's records and it determines they may have kidney issues. That's great. Now what do you do about it? Do you just let these people go on with possibly undiagnosed kidney issues, or do you notify their physicians that your AI trial program indicates their patient may have kidney issues - please check and let us know, so we can get an idea of the accuracy of the program?

  13. Jim Birch

    Luddites

    The use of big data techniques is great for medicine. It should be a normal practice that goes on all the time. Privacy protection can be achieved by anonymizing data, but this can be difficult to implement fully; in practice, security standards and data-use agreements are often a better approach.

    If you absolutely don't trust the government or companies or individuals, we have a problem.

