Based on images alone
Proper doctors perform physical tests before irradiating/cutting up their patients.
An algorithm that promises to diagnose skin cancer as well as dermatologists do may work with mobile phone cameras in the future, according to a paper published in Nature. The recent obsession with machine learning and AI in the tech world has boosted the ability of computers to analyze streams of data and classify images. …
"Proper doctors" glance at 0.1% of the area of a smear sample under an uncalibrated 30-year-old microscope for 1–2 minutes, in a single polarisation, using the wavelength range of their eyes, and make an estimate of cancer cells which is marginally better than tossing a coin.
Every couple of years you then discover the process was done by someone who lacked the correct certification for that state, and you recall thousands of women to be re-scanned. Ironically, this is the nearest we get to a study of accuracy.
Funnily enough it's possible to kill more people by using a cancer-detecting machine than by not using one. This is because the machine finds the obvious positives, and leaves highly qualified humans with the mind-numbing job of trying to find the remaining hard-to-spot false negatives in a load of true negatives. The humans simply cannot, so the false negative cases remain undiagnosed and are more likely to die.
If instead the machine eliminates a percentage of true negatives ("completely normal, nothing to see here") then humans are left with a more interesting job - find the true positives in a smaller, "richer" selection of samples that the machine has flagged as "not quite normal". Incremental improvements in the machine should be in the direction of extending the definition of "absolutely normal" leaving humans with an even more interesting job.
Statisticians and systems bods might get this but tech-dazzled doctors often do not. If you really must have a headline-grabbing cancer detector then run it over the samples after humans have looked at them, not before.
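A toy calculation makes the ordering argument above concrete. The numbers here are hypothetical (a 1% base rate and 80% machine hit rates are illustrative assumptions, not figures from the article), but the contrast between the two strategies is the point:

```python
# Toy triage comparison -- all numbers are illustrative assumptions.
population = 100_000
positives = population // 100          # 1,000 true cancer cases (1% base rate)
negatives = population - positives     # 99,000 healthy samples

# Strategy A: the machine flags the "obvious" positives first.
# Assume it catches 80% of cancers; humans must then hunt the
# remaining 200 cancers hidden among all 99,000 negatives.
machine_found = int(positives * 0.80)
left_for_humans_a = positives - machine_found
pool_a = negatives + left_for_humans_a
prevalence_a = left_for_humans_a / pool_a      # needle in a haystack

# Strategy B: the machine instead rules out clear true negatives.
# Assume it safely discards 80% of healthy samples while keeping
# every cancer case in the flagged pool.
ruled_out = int(negatives * 0.80)
pool_b = (negatives - ruled_out) + positives
prevalence_b = positives / pool_b              # a much "richer" pool

print(f"Strategy A: humans search {pool_a} samples at {prevalence_a:.2%} prevalence")
print(f"Strategy B: humans search {pool_b} samples at {prevalence_b:.2%} prevalence")
```

Under these assumptions, strategy B leaves humans a pool roughly a fifth the size with over twenty times the prevalence of cancer, which is the "more interesting job" described above.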
"This is because the machine finds the obvious positives, and leaves highly qualified humans with the mind-numbing job of trying to find the remaining hard-to-spot false negatives in a load of true negatives.....If instead the machine eliminates a percentage of true negatives"
So we need a machine that can prove a negative, without ever giving a false positive? Then we should scan the remaining pool using "qualified humans"?
This assumes that everyone who didn't get an "all clear" from the "not cancer detector" goes to see a "qualified human" so we would need a lot more "qualified humans" than we have currently.
The real statistic is that without a simple, cheap test most victims will never even see a "qualified human", so more people will die as a result.
The idea of this is to let a person say, "Hey, here's an interestingly shaped brown patch on my skin. Could it be cancer?"
And if the AI says yes, the person goes to a doctor and says, "Hey, I think I have something you need to look at to see if it's cancer..." Then the doctor looks at it, and if it's something that warrants further investigation, they will biopsy it.
Most biopsies are benign, and of course this is still hype and not real until it goes through an FDA study.
Proper doctors perform physical tests before irradiating/cutting up their patients.
Of course. Presumably if an algorithm flagged up a risk, the same procedure would be followed as if it had been flagged up by a human specialist. Not sure whether the article intended to imply otherwise. I admit to finding the sentence: "Each person was asked whether they would refer a patient for a biopsy, for a treatment, or reassure them that their skin lesion wasn't cancerous, based on images alone" a little hard to scrute.
Before they get to the physical tests, they look at it. If this acts as a filter and weeds out the obviously non-cancerous stuff it frees up docs to work on the problem ones. Even at 91%, it can be useful (anything remotely fuzzy, "see a human")
Automating this step has been something that cancer specialists have been wanting for a long time. It's not going to take any jobs away.
New algorithm can detect skin cancer as well as dermatologists and in the future will also detect mobile phones.
I'm impressed by the dermatologist-detector part of the algorithm, but who thought of combining all three?
Wow, the future is so bright (if a little mad)
>Last time I checked TensorFlow used Python. So unless you are running a BlackBerry (they did have a Python runtime), the answer is no.
Um...?
TensorFlow was designed with mobile and embedded platforms in mind. We have sample code and build support you can try now for these platforms: Android, iOS, Raspberry Pi
- https://www.tensorflow.org/mobile/
Wonder how well the AI approach works compared to other methods?
I know various image processing techniques are used medically to flag images of concern - not as a replacement for experts, but as an aid.
It's just that the algorithms I have seen used medically have typically been developed by coders (liaising with experts to find out what the significant things to look for are) rather than by automatically processing a large number of images.
Essentially clinicians have said what features should be flagged up & image processing code tweaked to flag those.
I assume the Google machine learning (if better than more "classically" developed software) would be used in the same way - as an aid, not a replacement for experts.
Though some of the classical systems used for assessing (potential) cancer biopsies can outperform clinicians working alone (computer systems do not suffer eye strain, general fatigue / concentration issues) they are never relied on as the only analysis method.
AC as I have worked on medical image analysis in the past
Whether 91% accuracy is a good result or not depends very much on the split between false positives, false negatives and the underlying base rate (see https://en.wikipedia.org/wiki/Base_rate_fallacy).
For automatic technical solutions, it is almost always best to avoid trying to diagnose and to screen instead: bias the technology to avoid false negatives so it acts as a gatekeeper, reducing the number of cases a genuine doctor has to see.
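The base-rate point is easy to see with a short Bayes calculation. The numbers here are assumptions for illustration: the headline 91% figure is reused loosely as both sensitivity and specificity, and a 1% base rate is picked for the screened population.

```python
# Positive predictive value under a low base rate -- illustrative numbers only.
sensitivity = 0.91   # assumed P(test positive | cancer)
specificity = 0.91   # assumed P(test negative | no cancer)
base_rate = 0.01     # assumed P(cancer) in the screened population

true_pos = sensitivity * base_rate                  # true positives per person screened
false_pos = (1 - specificity) * (1 - base_rate)     # false positives per person screened

# Bayes: of everyone who tests positive, how many actually have cancer?
ppv = true_pos / (true_pos + false_pos)
print(f"PPV at a 1% base rate: {ppv:.1%}")
```

With these assumptions the PPV comes out under 10%: the overwhelming majority of positives are false, which is exactly why such a tool is better deployed as a gatekeeping screen than as a diagnosis.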
Or just Google as an entity?
Of course you sue Google... They have the money. Why would a lawyer go after an App developer who has very little in assets or capital, can't afford to pay a decent settlement that would make the lawyer's cut worthwhile and can't afford to pay for "billable hours" to defend.
Icon for lawyers...
.. as an Aussie (world skin cancer champions) who spends a lot of time outdoors. I won't dispute the image recognition technology here; what I doubt is the ability of an untrained amateur user (i.e. almost everyone) to do a thorough scan of their entire risk-exposed surface (instead of taking dick-pics), especially areas not easily accessible from the front.
Assistance from a partner would help, but not something I would rely on.
I see this technology as being a useful diagnostic aid to help professionals, but as a consumer-level app it's slightly more useful than a banner-ad-sourced credit rating check.