When it comes to healthcare, artificial intelligence (AI) is revolutionizing the industry by enhancing decision-making, improving quality of care and reducing costs. In this age of supercomputers and rapid technological advancement, the health sector generates vast amounts of data, which AI can process and analyse to extract meaningful information.
But are we making good use of that data? As I see it, we need better, but not necessarily more, data. We have a lot of data, but much of it is of little use because it’s not being turned into meaningful information. If, for example, we want to use AI to help make accurate forecasts and recommendations, we need high-quality data that can provide us with information.
As an IT and healthcare professional, I have seen health informatics evolve over the past 35 years. Health informatics uses information technology to positively impact the patient-physician relationship through the effective collection, storage, normalization and analysis of health data. How does it work? Electronic health records (EHRs), for one, capture all relevant data about a patient so that, when an individual arrives at a doctor’s office or hospital, all their medical information is readily available in digital form.
Records are up to date and secure, and healthcare is easier to coordinate between facilities and providers. This type of robust record collection also means that data can be extrapolated in the other direction, across whole populations, to identify commonalities between groups, such as those suffering from, or at risk of, a condition like diabetes. All this points to a shift towards personalized healthcare (also known as precision medicine).
Eliminating bias in AI
I don’t think it will be long before we are able to tailor treatment and prevention plans to an individual based on factors like genetics, age, lifestyle and environment. As with other technologies and improvements, the more tailored the medical plan, the better and more cost-effective the patient outcome.
While the future is promising, there are still significant challenges to overcome when implementing AI in healthcare. Part of the problem we need to sort out is bias, which appears in multiple forms, including bias of omission and of commission. Models containing bias are likely to exacerbate social inequalities and may even cost lives, but I’d also point out that there are times in healthcare when it’s beneficial for an algorithm to contain a bias. To give a real-world example: being older than 65 during the COVID-19 pandemic was an important bias that needed to be reflected in monitoring and treatment.
Towards a definition of AI
AI is creating a lot of interest because of its potential for cost savings and improved quality in healthcare. Investment in AI in the medical sphere is growing, but the industry is slow to change, and there are many issues that need resolving before AI can, or indeed should, really take off. In my opinion, one of the big stumbling blocks is the fact that there is no single accepted definition of AI.
For the physician, AI is a host of computational methods that produce systems capable of performing tasks that would normally require human intelligence. These methods include image recognition and natural language processing. A phrase I’ve heard repeatedly is “augmented intelligence”, reflecting the goal of enhancing human decision-making capabilities by coupling them with computational methods. This moves us away from the term “artificial” but, from the physician’s viewpoint, AI is about assisting them in their decision making.
AI for precision medicine
According to a World Health Organization report, AI holds great promise for improving the delivery of healthcare and medicine worldwide, but only if ethics and human rights are put at the heart of its design, deployment and use. Will there come a time when AI will replace humans in the healthcare industry? I think this is unlikely! Instead, and we’re already seeing this to some degree, there will be a shift towards a working relationship between the two.
As AI technology is steadily applied to all corners of medicine, regulators will need to consider multiple approaches for ensuring the safety of AI in healthcare, and this includes International Standards. Finding a common vocabulary, taxonomy and definition is vital because it means that the practitioner and the regulator can speak the same language as the technical expert. These standards will guide future AI use, ensuring that AI systems are fully interoperable and transparent, and helping to prevent bias and inequality. The non-determinism of machine learning and the “hallucinations” of today’s large language models are also significant challenges that must be addressed to ensure safe and effective AI in health.
We’re still on a long and complex journey in healthcare. While I don’t think you’ll be seeing a robot instead of a doctor any time soon, I believe we must keep in mind that AI’s most powerful use is to enhance human capabilities, not replace them. Amid the uncertainty and change, we must look for new ways to transform the journey of care. As technology continues to get smarter, faster and more reliable, the possibilities for ensuring that patients receive the best possible care are endless. These efforts will ensure that the full potential of AI for healthcare and public health is realized for the benefit of all.
About Michael Glickman
Michael L. Glickman, founder and CEO of Computer Network Architects, has many years of experience in the computer industry and 35 years in healthcare information technology. He is an internationally recognized subject matter expert on systems integration and secure interoperability, and is a pioneer in healthcare informatics as a founding member of the HL7 working group in 1987. Michael is the Chair of ISO/TC 215, Health informatics.