Much proselytizing has occurred regarding the value and future of artificial intelligence (AI) and machine learning in healthcare. The industry is burgeoning. As with blockchain technology, which continues to evolve in the healthcare marketplace, AI and machine learning are constructs that require a bit of near-term expectation management. While their efficacy and value will improve with time, they are not (at present) the magic bullet that will answer the myriad care delivery and cost questions surrounding healthcare in the United States. Owing to space constraints, this column is an admittedly simplistic contemplation of AI.
As prologue to this article: I am not an AI programmer, don't play in Python, and have never built a machine learning algorithm. That said, I do have 30 years of practical experience in the healthcare trenches and a fairly extensive background in information technology (IT). In that time I have dealt with IT systems and applications firsthand, such as culling quality data and outcomes from electronic medical record (EMR) systems and deploying rudimentary analytics.
Preamble aside, last year, when blockchain was casually bandied about, I suggested that solid deployment of blockchain technology in healthcare would take some time due to significant disparity in the care delivery system and the multitude of inputs and variables. Use and deployment of blockchain is predicated on targeted problems with common, agreed-upon data sets. Generally, the same can be said of AI. Is that to say that AI, machine learning, and blockchain will not play a role in the future of healthcare? Certainly not. I believe they will play a significant role. However, short-term challenges will continue as robust IT offerings are unveiled. AI, machine learning, blockchain, and other cutting-edge technologies are needed to advance the delivery and coordination of care, squeeze costs and redundancy out of the "system," and help ensure repeatable quality outcomes. But few technologies are perfect, and most require time to germinate as they grow in use and scalability.
For the sake of this article, we should expound on our definitions. As with telehealth, where people often use telehealth and telemedicine interchangeably, many people toss AI and machine learning into the same bucket. I'd herewith suggest that many components fall under the AI umbrella, including machine learning. With AI, machines mimic human cognitive functions. Under that umbrella, AI includes machine learning, natural language processing (NLP), and "reasoning." With machine learning, machines receive no explicit instructions but instead extrapolate and determine patterns in large chunks of data. "Reasoning" combines stored information with rules to make deductions. NLP is the processing, analyzing, understanding, and generating of natural human languages. Machines can be taught to learn and discern between items. For instance, code can be deployed to identify different leaves (an admittedly absurd example). Each leaf has data element differentiators that help the computer "learn" what the types of leaves are. Over time, the computer can then tell an oak leaf from a maple leaf. But the computer knows none of this unless it is "told" what these items are and how they are defined. The inputs must be sound, and the algorithms must be written with background knowledge and understanding of the underlying issue at hand (e.g., the differences between an oak leaf and a maple leaf).
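To make the leaf example concrete, here is a minimal sketch (not from the original column) of how labeled examples "teach" a program. The feature values and species labels are invented for illustration; a real system would use many more features and examples.

```python
# Hypothetical illustration: labeled examples "teach" a program to tell
# oak from maple leaves. Feature values here are invented, not botany.
from math import dist

# Training data the computer is "told": (lobe_count, edge_roundness) -> species.
training = [
    ((7, 0.9), "oak"),    # in this toy data, oaks have more lobes, rounder edges
    ((9, 0.8), "oak"),
    ((5, 0.2), "maple"),  # maples here have fewer lobes, pointier edges
    ((3, 0.1), "maple"),
]

def classify(leaf):
    """1-nearest-neighbor: label a new leaf by its closest known example."""
    nearest = min(training, key=lambda example: dist(example[0], leaf))
    return nearest[1]

print(classify((8, 0.85)))  # closest to the oak examples -> "oak"
print(classify((4, 0.15)))  # closest to the maple examples -> "maple"
```

The program has no notion of "oak" or "maple" beyond the labeled data it was given, which is precisely the point: the quality of the output is bounded by the quality of the inputs.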
And that can be the rub. Subject matter experts (SMEs) and data scientists must work hand in glove to delineate the problem to be solved, the data needed, and the nurturing of the algorithms to ensure they remain relevant. Bad “training” of the computer and bad data inputs lead to bad and/or inaccurate outputs.
Figure 1 below shows how these components live under the greater AI umbrella.
How does a bad construct present itself? As an apolitical consideration, we’ve recently seen how bad data inputs lead to bad outputs. A variety of recent COVID-19 projections by certain entities were grossly inaccurate, overestimating infection rates and deaths. While not AI, per se, certainly the algorithms, logic, and data inputs had flaws leading to calamitously inaccurate results. Again, bad or misunderstood inputs and bad algorithms can lead to bad outputs.
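The garbage-in, garbage-out dynamic can be sketched in a few lines (a hypothetical toy, not any real projection model): the same trivially simple "learner," fed mislabeled examples, confidently produces the wrong answer.

```python
# Hypothetical sketch of "bad inputs -> bad outputs": a trivially simple
# model that just predicts the most common label in its training data.

def majority_label(examples):
    """Predict whichever label appears most often in the training labels."""
    return max(set(examples), key=examples.count)

good_labels = ["oak", "oak", "oak", "maple"]
bad_labels = ["maple", "maple", "maple", "oak"]  # same leaves, mislabeled

print(majority_label(good_labels))  # -> "oak"
print(majority_label(bad_labels))   # -> "maple": confident, and wrong
```

Nothing in the code is "broken" in the second case; the flaw lives entirely in the inputs, which is why the model's confident output is no evidence of its correctness.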
Lest you think me a naysayer, I'll reemphasize that I believe AI will play an increasingly large role in healthcare delivery; it's a matter of time and necessity. The key is in the development, build, and parameters of the logic data scientists and SMEs (e.g., clinic