One of the advances in AI is the training of algorithms to detect disease in the human body.
Using image analysis, software can be trained to identify tell-tale signs of disease, and in many cases it now outperforms medical professionals at detection.
For the patient, earlier detection means treatment can begin sooner, when it is most likely to succeed.
Yet with every advance there is the human factor. A recent New York Times article discusses how the deletion or addition of a few dots of data ("pixels") can flip a diagnosis from negative to positive (say, to meet a quota or sales forecast) or from positive to negative (to minimize the health benefits a patient is entitled to).
The New York Times article states:
In a paper published on Thursday in the journal Science, the researchers raise the prospect of “adversarial attacks” — manipulations that can change the behavior of A.I. systems using tiny pieces of digital data. By changing a few pixels on a lung scan, for instance, someone could fool an A.I. system into seeing an illness that is not really there, or not seeing one that is.
Samuel Finlayson, a researcher at Harvard Medical School and M.I.T. and one of the authors of the paper, warned that because so much money changes hands across the health care industry, stakeholders are already bilking the system by subtly changing billing codes and other data in computer systems that track health care visits. A.I. could exacerbate the problem.
“The inherent ambiguity in medical information, coupled with often-competing financial incentives, allows for high-stakes decisions to swing on very subtle bits of information,” he said.
https://www.nytimes.com/2019/03/21/science/health-medicine-artificial-intelligence.html
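The kind of manipulation the researchers describe can be sketched in miniature. The toy example below is entirely hypothetical and is not the paper's method: a simple linear classifier stands in for a real scan model, and a fast-gradient-sign-style nudge (a classic adversarial technique) shifts every pixel by a small, fixed amount in the direction that raises the "disease" score, flipping the prediction. All names and numbers here are illustrative assumptions.

```python
# Toy illustration of an adversarial perturbation on a "scan".
# Hypothetical throughout: a linear model stands in for a real
# deep network; the principle (tiny pixel changes flipping a
# decision) is the same idea the quoted paper describes.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 8x8 "scan" classifier: positive score means "disease".
weights = rng.normal(size=(8, 8))

def predict(image):
    score = np.sum(weights * image)
    return "disease" if score > 0 else "healthy"

# A scan the model correctly calls healthy (score is negative).
scan = -0.1 * weights

# Fast-gradient-sign-style attack: for a linear model the gradient
# of the score with respect to the input is just `weights`, so
# shifting each pixel by epsilon in the sign of the gradient pushes
# the score upward -- here, far enough to flip the diagnosis.
epsilon = 0.3
adversarial = scan + epsilon * np.sign(weights)

print(predict(scan))         # the clean scan
print(predict(adversarial))  # the perturbed scan
```

No pixel changes by more than `epsilon`, yet the label flips — which is why, as Finlayson notes, subtle bits of information can swing high-stakes decisions.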
Let us insist on oversight and regulation in the coming age, when artificial intelligence will drive critical decisions such as medical diagnoses.