Image Recognition

Image recognition is where much of the deep learning research has been done, and it’s where a majority of the A.I. use cases in medicine can be found. That’s because so much of a doctor’s job is looking at images and scans and trying to determine if there’s a problem. Reviewing those scans consumes large amounts of valuable time, so employing A.I. to help process the high volume of radiology results can save both time and money.

A.I. has already been tested, with successful results, in identifying the presence or absence of a number of conditions from images alone. At Jefferson University Hospital, researchers showed that A.I. could detect tuberculosis with an accuracy of up to 96%. Over in China, Infervision is applying the same methodology to lung cancer detection, helping the country’s roughly 80,000 radiologists process the more than one billion radiology scans performed each year.
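
The systems mentioned above are proprietary, but the underlying technique they rely on, training a convolutional neural network to classify medical images, can be sketched in a few lines. The snippet below is a minimal, illustrative sketch assuming PyTorch and torchvision; the choice of ResNet-18, the two-class setup (“condition present” vs. “absent”), and the file name scan.png are assumptions for illustration, not the actual models used at Jefferson or by Infervision.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Start from a CNN pretrained on everyday photos (ImageNet) and
# swap its final layer for a two-class output such as
# "condition present" vs. "condition absent".
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()

# Standard preprocessing: resize, crop, and normalize the scan
# the way the pretrained network expects.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "scan.png" is a hypothetical chest X-ray used only for illustration.
image = Image.open("scan.png").convert("RGB")
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))
    probabilities = torch.softmax(logits, dim=1)
print(probabilities)  # a pair of class probabilities for the scan
```

Before a classifier like this is of any clinical use, its final layer has to be fine-tuned on thousands of scans that radiologists have already labeled; the pretrained weights only give it a head start on recognizing shapes and textures.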

Lungs aren’t the only organs being inspected by computers. In England, several clinical trials have shown that a computer can predict future heart problems simply by looking at a patient’s heart scans, and it can do so more accurately than the doctors themselves. Even diabetic retinopathy can be detected, as can the metastasis of breast cancer.

As the technology improves, and as the algorithms are fed more data, an increasing number of diseases and conditions should be detectable or predictable by machines, enabling doctors to save more lives.