June 8, 2018

Artificial Intelligence in Medicine: How “Deep Learning” Is Helping Doctors Save Lives


In the past century, the field of medicine has made miraculous advancements in lifesaving technology. Many diseases and conditions, fatal in previous generations, now have reliable cures, treatments, and preventions. As technology in communications, engineering, chemistry, and physics continues to develop, more possibilities open up for those who work in medicine. Until recently, though, nothing we’ve developed has been able to do the work for doctors.

Artificial intelligence (AI) is quickly changing that.

Advances in the field of “deep learning” have made it possible for machines to begin lightening the workload of doctors by performing certain tasks faster—and in some cases more accurately—than humans. This frees up medical professionals to focus on their true priority: treating patients. Today, we’ll take a look at AI and several ways it’s making life easier for medical practitioners.

AI and Deep Learning

The idea of artificial beings that are capable of thought has been around since at least ancient Greece. Much like the eponymous scientist in Mary Shelley’s Frankenstein, humans have long postulated that if divinity could create sentient life capable of cognition, man might likewise be capable of such acts of creation. The result has been the romanticizing of the concept in literature and culture, as science simultaneously pursues its realization.

Both approaches have become more sophisticated over the centuries, with writers and poets broadening the scope of what hypothetical synthetic cognition might be capable of. Likewise, as the progression of technology has led to the development of computer systems, it has finally become possible to create systems that can accomplish tasks previously considered too abstract for non-human minds.

The most significant advancement in the field thus far is “deep learning,” a subcategory of machine learning that attempts to mimic the way the human brain learns new concepts. Put simply, a deep learning system is given a large dataset and asked to look for patterns, without being explicitly programmed on how to tell one pattern from another.

The most convenient example of this is image recognition, and how Google built a computer network that taught itself what a “cat” was. Fed enough YouTube videos, it began to recognize recurring patterns among the images it was shown and grouped felines together as a category. That’s an impressive feat, considering how much cats vary in appearance.

None of the data Google fed the algorithm was labeled, meaning this was “unsupervised” deep learning: the computers had no base examples to work from. The alternative is “supervised” deep learning, where the computer starts off with a number of labeled examples teaching it what kinds of things count as “category A” or “category B.” Even in supervised learning, the system isn’t told what to look for, just shown examples of what qualifies; figuring out the qualifying characteristics is still up to the system.

This is very similar to how we teach a child what a cat is. We show them pictures and videos of cats, tell them what sound a cat makes, and maybe even let them pet one (if a cat is available). Most toddlers, even ones who have never seen a cat in person, can identify them by sight or sound. We give them sufficient data, and they can draw the conclusion on their own. That’s one of the foundational principles of intelligence.
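For readers who want to see the distinction in code, here’s a minimal sketch in Python with scikit-learn. The data is a made-up toy set of two-number “feature” vectors rather than real images, but the workflow is the same: the supervised model gets labels up front, while the unsupervised one has to discover the grouping on its own.

```python
# Toy illustration of supervised vs. unsupervised learning.
# The "examples" here are just 2-D feature vectors; real systems use
# far richer data, but the workflow is the same.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two hypothetical groups of examples, e.g. "cat" vs. "not cat" features.
cats = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(100, 2))
not_cats = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(100, 2))
X = np.vstack([cats, not_cats])
y = np.array([1] * 100 + [0] * 100)  # labels exist only for the supervised case

# Supervised: the model is shown labeled examples of each category.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised prediction:", clf.predict([[1.8, 2.1]]))  # -> [1] ("cat")

# Unsupervised: no labels at all; the algorithm must find the grouping itself.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("unsupervised cluster assignment:", km.predict([[1.8, 2.1]]))
```

In both cases, notice that we never tell the algorithm what makes a “cat” a cat; it works that out from the examples.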

Image Recognition

Image recognition is where much of the deep learning research has been done, and it’s where a majority of the AI use cases in medicine can be found. That’s because so much of a doctor’s job is looking at images and scans and trying to determine whether there’s a problem. Reviewing them consumes large amounts of valuable time, so employing AI to help process the flood of radiology results can save both time and money.

AI has already been tested, with successful results, on identifying the presence or absence of a number of conditions from images alone. At Jefferson University Hospital, researchers demonstrated that AI could detect tuberculosis with an accuracy of up to 96%. Over in China, Infervision is using the same methodology to detect lung cancer, helping the country’s mere 80,000 radiologists process the more than one billion radiology scans performed each year.

Lungs aren’t the only things being inspected by computers. In England, several clinical trials have shown that a computer can predict future heart problems simply by looking at a patient’s heart scans, with accuracy levels greater than those of the doctors themselves. Even diabetic retinopathy can be detected, as can the metastasis of breast cancer.
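None of the studies above publish drop-in code, so the snippet below is only a rough sketch of what a deep learning image classifier looks like in practice: a small convolutional network in PyTorch that maps a grayscale scan to a “condition present” or “condition absent” score. The architecture, input size, and labels are illustrative assumptions, not a description of any of the systems mentioned here.

```python
# Minimal sketch of a convolutional network for binary medical-image
# classification (e.g., "abnormal" vs. "normal"). Purely illustrative.
import torch
import torch.nn as nn

class TinyScanClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # grayscale scan in
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, 2)  # assumes 224x224 input

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyScanClassifier()
dummy_batch = torch.randn(4, 1, 224, 224)   # four fake grayscale scans
logits = model(dummy_batch)                  # shape: (4, 2)

# Training pairs batches like this with labeled scans and a loss function:
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 0, 1]))
loss.backward()
```

Real systems train networks like this (usually much larger ones) on thousands of labeled scans before they are ever evaluated against radiologists.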

As the technology improves, and as the algorithms are fed more data, an increasing number of diseases and conditions should be detectable or predictable by machines, enabling doctors to save more lives.

Predictive Modeling

Artificial intelligence in medicine isn’t limited to image recognition. In some cases, the AI doesn’t even need to see images to make a diagnostic prediction. Take the “Deep Patient” project. Researchers at the Icahn School of Medicine at Mount Sinai gave their AI 700,000 electronic health records (EHRs), hoping that the machine would learn the connections between the predictors of a disease or condition and the eventual diagnosis.

Once the AI had been given the chance to parse all the information (patient X developed lung cancer, patient Y developed heart disease), it was tested on data from 76,000 patients whose diagnoses were known to the researchers but withheld from the computer. The results were impressive, drastically outperforming “evaluations based only on raw EHR data, doing particularly well at predicting severe diabetes, schizophrenia, and various cancers.”

They weren’t the only ones with positive results. A team of UK researchers ran a similar test to see whether their learning algorithm could accurately predict heart attacks. It “correctly predicted 7.6% more events than the ACC/AHA method, and it raised 1.6% fewer false alarms.” What’s more, the machine relied on different risk factors to make its judgment calls, highlighting the possibility that doctors may be using the wrong metrics:

Several of the risk factors that the machine-learning algorithms identified as the strongest predictors are not included in the ACC/AHA guidelines, such as severe mental illness and taking oral corticosteroids. Meanwhile, none of the algorithms considered diabetes, which is on the ACC/AHA list, to be among the top 10 predictors.
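The exact pipelines behind Deep Patient and the UK heart study aren’t reproduced here, but the general recipe for this kind of predictive modeling is easy to sketch: represent each patient as a row of features pulled from their records, train a model on patients whose outcomes are known, and check its predictions on patients it has never seen. Everything below, including the feature names and data, is synthetic and only meant to show the shape of that workflow.

```python
# Generic sketch of outcome prediction from patient-record features.
# Columns and data are hypothetical; real EHR pipelines involve far more
# cleaning, features, and validation than this.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000

# Each row is one patient; each column a feature drawn from their record.
X = np.column_stack([
    rng.normal(55, 12, n),     # age
    rng.normal(130, 18, n),    # systolic blood pressure
    rng.normal(5.2, 1.0, n),   # total cholesterol (mmol/L)
    rng.integers(0, 2, n),     # smoker (0/1)
    rng.integers(0, 2, n),     # oral corticosteroid use (0/1)
])

# Synthetic outcome: event probability rises with a few of the features.
risk = 0.02 * (X[:, 0] - 55) + 0.03 * (X[:, 1] - 130) / 18 + 0.8 * X[:, 3]
y = (rng.random(n) < 1 / (1 + np.exp(-(risk - 1.5)))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
pred = model.predict_proba(X_test)[:, 1]
print("held-out AUC:", round(roc_auc_score(y_test, pred), 3))
```

The interesting part in the published studies is not the code but the comparison: the learned model is scored against an existing guideline-based risk calculator on the same held-out patients.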

The corporate world has been benefiting from “big data” for several years, and now, with the help of AI, medicine is too.

The Human Diagnosis Project

The Human Diagnosis Project, also known as Human Dx, is similar to the above-mentioned EHR projects, but on a much larger scale. Using the same methodology of machine learning, but crowdsourcing both the patient data and the solutions, Human Dx aims to aggregate comprehensive diagnostic and prognostic data on a vast range of conditions.

The AI then parses the information and makes it more accessible and user-friendly. When a practitioner needs information or a second opinion, they can consult the archive for a wealth of knowledge related to their case. The project makes answering difficult medical questions easier and faster.

Mental Health and Cogito

Shifting from physical health to mental health, a new app is making it easier for mental health professionals to track the status of their patients. The Cogito Companion app is designed to track a patient’s activity, social connectedness, and mood. It can tell when you’ve left your house, or if you’ve stayed in one place for an extended period. It can tell if you’ve been calling or texting other people, or if you’ve been out of contact. And it allows your counselor to see your progress.

On the surface, it seems like other apps on the market: it tracks your activities and reports back to your therapist. What sets it apart is that you can record audio logs as a sort of diary. Beyond helping you capture your feelings and thoughts, these logs are analyzed by an AI algorithm that assesses the speed, tone, and energy in your voice to estimate your mood. It can tell you if you’re having a down day, or if you’re doing better than you expect, and those results can be viewed by the practitioner.
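Cogito hasn’t published how its model works, but the kinds of vocal cues described above can be pulled out of a recording with standard open-source audio tools. The sketch below uses the librosa library to compute rough proxies for energy, pitch, and pace from a hypothetical audio log; an actual mood estimate would require a trained model on top of features like these.

```python
# Sketch: extracting simple vocal features from an audio diary entry.
# "audio_log.wav" is a hypothetical file; the mood model itself is not shown.
import numpy as np
import librosa

y, sr = librosa.load("audio_log.wav", sr=None)   # load the recording

# Energy: root-mean-square amplitude over short frames.
energy = librosa.feature.rms(y=y).mean()

# Pitch: fundamental frequency estimated with the pYIN algorithm.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"))
mean_pitch = np.nanmean(f0)

# Pace: a rough proxy using onset density (speech events per second).
onsets = librosa.onset.onset_detect(y=y, sr=sr)
pace = len(onsets) / (len(y) / sr)

print(f"energy={energy:.4f}, pitch={mean_pitch:.1f} Hz, pace={pace:.2f}/s")
# A mood classifier would take features like these as its input.
```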

The tool is still in its early stages, but as it learns and becomes more sophisticated it stands to help a great many people who struggle with mental health challenges.

Surgical Robots

The most direct application of AI to medical practice is surgical robots. Though robots have been assisting surgeons since the 1980s, until very recently they were non-autonomous, human-operated tools: just another device for surgeons to use.

Recent developments in machine learning have made it possible for surgical robots to become increasingly autonomous. Again, the technology is still in its early stages, but tests have begun to demonstrate the potential for robots to be more accurate and precise than humans in the operating room. While it’s unlikely that machines will fully replace experienced surgeons in the foreseeable future, there’s a definite possibility that they will begin making procedures easier for their human counterparts.

AI and Machine Learning Applications in Genomics

Perhaps the most impressive use of artificial intelligence in medical diagnosis is in the rapidly developing field of genomics, which includes analysis of cell-free DNA (cfDNA) and other genomic biomarkers. Cell-free DNA tests, if you’re unfamiliar with the practice, screen blood plasma for DNA fragments left behind when cells in the body die. Currently, the most prominent use for these tests is as a pregnancy screening tool—by analyzing cell-free fetal DNA (cffDNA) found in the mother’s blood plasma, medical professionals can screen for fetal sex and certain genetic conditions.

The new technique already has a wealth of potential, even before artificial intelligence is applied. Incorporate the AI, however, and the possibilities only become more impressive.

Take our ambitious mission at Freenome, for example. Using advanced deep learning algorithms, Freenome’s AI genomics platform is looking at cfDNA and other cell-free biomarkers, such as cell-free RNA (cfRNA) and proteins, to aggregate and decode genetic data left behind by cancer cells, as well as the patient’s immune system. Using the AI to identify patterns and trends in these cell-free biomarkers enables Freenome to achieve cancer detection at much earlier stages, and determine which treatments will be most effective.

Rather than having to wait until later stages when the cancer causes observable symptoms in the patient, or the growth is visible on X-rays and CT scans, Freenome will help doctors identify cancer at the earliest stages of its growth, making treatments more effective and drastically improving survival rates, all from a simple blood test.
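To be clear, Freenome’s actual models and biomarker features are far more involved than anything that fits in a blog post, so the snippet below is not our method. It’s only a generic illustration, on synthetic data, of the underlying pattern: learning to separate cancer from non-cancer samples when each blood draw yields hundreds of biomarker measurements.

```python
# Illustrative only: a generic classifier on synthetic multi-biomarker data.
# This is NOT Freenome's method; it simply shows the general pattern of
# learning "cancer vs. non-cancer" from many measurements per sample.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_samples, n_markers = 400, 200   # e.g., cfDNA fragment features, cfRNA, proteins

# Synthetic biomarker matrix: "cancer" samples get a weak shift on a
# small subset of markers, mimicking a subtle, distributed signal.
X = rng.normal(size=(n_samples, n_markers))
y = rng.integers(0, 2, n_samples)
X[y == 1, :10] += 0.8

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", scores.round(2))
```

The real challenge lies in the biology and the data, not the model class: choosing which cell-free signals to measure, and validating that a faint pattern holds up across thousands of patients.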

Computers have been changing the world since their inception, and with the advent of functional “weak AIs,” that progress has only accelerated. With the introduction of artificial intelligence into healthcare and the continual improvement of the technology, clinicians in the next several years can expect to achieve medical miracles long thought impossible, ultimately shifting the practice of medicine from its current focus on treatment to one where disease is prevented altogether.
