Artificial Intelligence in Medicine
In the past century, the field of medicine has made miraculous advancements in lifesaving technology. Many diseases and conditions, fatal in previous generations, now have reliable cures, treatments, and preventions. As technology in communications, engineering, chemistry, and physics continues to develop, more possibilities open up for those who work in medicine. Until recently, though, nothing we’ve developed has been able to do the work for doctors.
Artificial intelligence (AI) is quickly changing that.
Advances in the field of "deep learning" have made it possible for machines to begin lightening the workload of doctors by performing certain tasks faster, and in some cases more accurately, than humans. This frees up medical professionals to focus on their true priority: treating patients. Today, we'll take a look at AI and several ways it's making life easier for medical practitioners.
AI and Deep Learning
The idea of artificial beings capable of thought has been around since at least ancient Greece. Much like the eponymous doctor in Mary Shelley's Frankenstein, humans have long postulated that if divinity can create sentient life capable of cognition, man may likewise be capable of such acts of creation. The result has been the romanticizing of the concept in literature and culture, even as science pursues its realization.
Both approaches have become more sophisticated over the centuries, with writers and poets broadening the scope of what hypothetical synthetic cognition might be capable of. Likewise, as the progression of technology has led to the development of computer systems, it has finally become possible to create systems that can accomplish tasks previously considered too abstract for non-human minds.
The most significant advancement in the field thus far is "deep learning," a subcategory of machine learning that uses layered neural networks to mimic, loosely, the way the human brain learns new concepts. Put simply, deep learning happens when a system is given a large dataset and told to look for patterns, without being programmed how to differentiate between those patterns.
The most famous example of this is image recognition, and how Google built a computer network that taught itself what a "cat" was. Fed enough YouTube videos, it began to recognize patterns among the images it was shown, and it started to group felines together as a category. That's an impressive feat, considering the non-trivial variety of characteristics among cats.
None of the data Google fed the algorithm was labeled, which makes this "unsupervised" deep learning: the computers had no base examples to work from. The alternative is "supervised" deep learning, where the computer starts with a set of labeled examples teaching it what kinds of things count as "category A" or "category B." Even then, the system is not told what to look for, only given examples of what qualifies; identifying the qualifying characteristics is up to the system. The sketch below illustrates the difference.
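To make the distinction concrete, here is a minimal sketch of both modes in Python, using scikit-learn's built-in handwritten-digits dataset. The data and models are our own illustrative stand-ins, not anything from the Google experiment.

```python
# A minimal sketch of supervised vs. unsupervised learning, using
# scikit-learn's built-in handwritten-digits dataset. The dataset and
# models are illustrative stand-ins, not what Google actually used.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

digits = load_digits()  # 1,797 images of digits 0-9, each 8x8 pixels

# Supervised: the model sees labeled examples ("this image is a 7"),
# but is never told *which* pixel patterns distinguish a 7 from a 1.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)
clf = LogisticRegression(max_iter=5000)
clf.fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised: no labels at all. The algorithm simply groups similar
# images together, much as Google's network grouped cats on its own.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0)
clusters = kmeans.fit_predict(digits.data)
print("images per discovered cluster:",
      [int((clusters == k).sum()) for k in range(10)])
```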
This is very similar to how we teach a child what a cat is. We show them pictures and videos of cats, tell them what sound a cat makes, and maybe even let them pet one (if one is available). Most toddlers, even ones who have never seen a cat in person, can then identify one by sight or sound. Given sufficient data, they draw the conclusion on their own. That's one of the foundational principles of intelligence.
Image recognition is where much of the deep learning research has been done, and it's where the majority of AI use cases in medicine can be found. That's because so much of a doctor's job is looking at images and scans to determine whether there's a problem. Reviewing them consumes large amounts of valuable time, so employing AI to help process the high volume of radiology results can save both time and money.
AI has already been tested, with successful results, at identifying the presence or absence of a number of conditions from images alone. At Jefferson University Hospital, researchers showed that AI could detect tuberculosis with an accuracy of up to 96%. In China, Infervision is applying the same methodology to detect lung cancer, helping the country's mere 80,000 radiologists process more than one billion radiology scans a year.
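For a sense of what such a scan classifier looks like in code, here is a heavily simplified sketch in Keras. The architecture, input size, and randomly generated "scans" are placeholders for illustration only; they are not the models from the Jefferson or Infervision work.

```python
# An illustrative sketch of a binary image classifier of the kind used
# for "disease present / absent" decisions on medical scans. Everything
# here (architecture, input size, data) is a placeholder, not a model
# from the studies described above.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(128, 128, 1)),        # one grayscale scan
    layers.Conv2D(16, 3, activation="relu"),  # learn local pixel patterns
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # probability of disease
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Stand-in data: random "scans" with random labels, just to show the
# training workflow. Real systems train on labeled radiology images.
X = np.random.rand(100, 128, 128, 1).astype("float32")
y = np.random.randint(0, 2, size=(100,))
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
print(model.predict(X[:1], verbose=0))        # e.g. [[0.49]]
```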
Lungs aren't the only organs being inspected by computers. In England, several clinical trials have shown that a computer can predict future heart problems simply by looking at a patient's heart scans, with accuracy greater than that of the doctors themselves. Even diabetic retinopathy can be detected, as can the metastasis of breast cancer.
As the technology improves, and as the algorithms are fed more data, an increasing number of diseases and conditions should be detectable or predictable by machines, enabling doctors to save more lives.
Artificial intelligence in medicine isn't limited to image recognition. In some cases, the AI doesn't even need to see images to make a diagnostic prediction. Take the "Deep Patient" project. Researchers at the Icahn School of Medicine at Mount Sinai gave their AI 700,000 electronic health records (EHRs), hoping the machine would learn the connections between the predictors of a disease or condition and the eventual diagnosis.
Once the AI had parsed all that information (patient X developed lung cancer, patient Y developed heart disease), it was tested on data from 76,000 patients whose diagnoses were known but withheld from the computer. The results were impressive, drastically outperforming "evaluations based only on raw EHR data, doing particularly well at predicting severe diabetes, schizophrenia, and various cancers."
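As a rough illustration of that workflow (train on historical records, then predict outcomes for held-out patients), here is a sketch with invented numeric features and a generic classifier. It is not the Deep Patient team's actual pipeline.

```python
# An illustrative "Deep Patient"-style workflow: fit a model on
# historical EHR feature vectors, then score held-out patients whose
# outcomes are known but hidden from the model. All features, sizes,
# and data here are invented placeholders, not the project's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Pretend each patient's record boils down to 50 numeric features
# (lab values, medication counts, coded conditions, and so on).
X_history = rng.random((700, 50))    # stand-in for the 700,000 EHRs
y_history = rng.integers(0, 2, 700)  # 1 = later developed the disease

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_history, y_history)

# Score patients whose diagnoses are known but withheld, mirroring the
# Mount Sinai team's held-out test set.
X_test = rng.random((76, 50))
y_test = rng.integers(0, 2, 76)
probs = model.predict_proba(X_test)[:, 1]
print("AUC on held-out patients:", roc_auc_score(y_test, probs))
```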
They weren't the only ones with positive results. A team of UK researchers ran a similar test to see whether their learning algorithm could accurately predict heart attacks. It "correctly predicted 7.6% more events than the ACC/AHA method, and it raised 1.6% fewer false alarms." What's more, the machine used different guidelines to make its judgment calls, highlighting the fact that doctors may be relying on the wrong metrics: