Mar 24, 2021

Q&A with Bo Wang: why collaboration is essential for the future of AI in healthcare

Dr. Bo Wang is jointly appointed between the Department of Laboratory Medicine & Pathobiology and the Department of Computer Science and is an expert in Artificial Intelligence in healthcare and medical research.

We spoke to him about AI and machine learning, and its role in laboratory medicine.

What is Artificial Intelligence and why do we need it in healthcare?

“AI, broadly, is a computer program that we teach to analyze data and answer questions. At the core of AI is machine learning, a family of algorithms that enables computers to learn from data, and deep learning is a further subset of that. Deep learning is inspired by biological neural networks: it consists of multiple layers of artificial neurons and the connections between them. We ‘feed’ these neural networks millions of examples, and within certain rules the AI adapts itself to fit them.

One of the most familiar uses of AI is facial recognition, as in your smartphone or on Facebook: AI algorithms teach your phone’s camera to recognize a face. The same techniques can be applied to medicine. For example, we can feed a machine learning model lots of medical images and teach it to recognize the heart, or what a tumor or a certain disease looks like.

There are two main benefits of AI in healthcare from my perspective:

One is to help doctors and clinicians with some of the more time-consuming tasks, such as image segmentation. A machine learning program can identify and contour structures in medical images much faster and more accurately than a human.

Another benefit of AI is to deal with data overload. Clinicians have to be able to spot subtle signs in oceans of data. The human brain has a limited capacity to process information, but AI does not. It can sift through large amounts of noisy data to find patterns and signals that could help with diagnosis and treatments.”
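
As a rough illustration of the ideas Dr. Wang describes, the short sketch below (hypothetical code, not from his lab) trains a tiny neural network of stacked layers of neurons and connections to sort images into two classes by feeding it labelled examples; the images and labels are random placeholders standing in for real medical data.

```python
# Minimal, hypothetical sketch (not code from Dr. Wang's lab): a tiny neural
# network, i.e. stacked layers of "neurons and connections", learning to sort
# images into two classes from labelled examples.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),   # two outputs, e.g. "tumor" vs. "no tumor"
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Random placeholders standing in for the "millions of examples" of real,
# labelled medical images that a production model would be fed.
images = torch.randn(128, 1, 64, 64)          # 128 grayscale 64x64 images
labels = torch.randint(0, 2, (128,))          # 128 binary labels

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()      # adjust the connection weights to fit the examples
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```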

Will AI replace humans in healthcare?

“I often get asked this question by my collaborators, but the answer is emphatically no! Replacement is not the key word; enhancement is. We’re trying to develop tools that improve workflows, not replace the human element, which is a vital aspect of healthcare.

AI already has lots of applications in day-to-day life, such as smartphones, self-driving cars, and high-end entertainment systems. Although there is some adoption on the research side, very little has been adopted in clinical practice because medicine is a unique field: there are considerations of governance, regulation, and ethics.

Many biological researchers use traditional types of statistical modeling, which have limitations when it comes to large-scale or noisy datasets. AI is a new tool that can help them overcome these challenges.

I once read that “AI will not replace doctors, but doctors with the knowledge of AI will replace doctors who don't”. I think this is an accurate prediction. Clinicians need to know what’s available and the pros and cons of these new tools.

But we’re still a long way off adopting AI in clinical settings. We need to make algorithms more robust, more interpretable, and more trustworthy for clinicians. It is a field still very much in its infancy.”

What is one of the major challenges in machine learning?

“Alongside the adoption and acceptance of AI, one issue we’re dealing with is that of bias.

Machine learning is only as good as the data you input. Many traditional clinical diagnosis systems come from old studies that only focused on very small subsets of populations without considering the variety of the wider population.

When we design datasets, studies, or machine learning tools, we have to pay particular attention to any bias in the data. Once the model is trained, we have to validate it across different groups to see whether it’s biased. Testing is really key here. It may, for example, have high accuracy in the male population but not in the female population, or work well in hospital A but not in hospital B. We have to be very cautious in validating our own models.

An example of this was a deep learning program that had 100% accuracy in detecting polar bears in images. However, under further testing, it was discovered that the program was recognizing the snow in the images, not the bear; it was learning the wrong thing! This is why algorithm development and testing are so essential.”
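
As a concrete sketch of the subgroup validation Dr. Wang mentions, the hypothetical snippet below compares a model’s accuracy across groups such as sex or hospital site; the results table and column names are invented for illustration.

```python
# Hypothetical sketch: comparing a trained model's accuracy across subgroups
# (sex, hospital site) to look for the kind of bias described above.
import pandas as pd

# Placeholder results: true labels, model predictions, and group tags.
results = pd.DataFrame({
    "y_true":   [1, 0, 1, 1, 0, 1, 0, 0],
    "y_pred":   [1, 0, 0, 1, 0, 1, 1, 0],
    "sex":      ["M", "M", "F", "F", "M", "F", "F", "M"],
    "hospital": ["A", "A", "B", "B", "A", "B", "A", "B"],
})

results["correct"] = results["y_true"] == results["y_pred"]

# Accuracy per subgroup: large gaps suggest the model may be biased and
# should not be deployed without further validation.
for group in ["sex", "hospital"]:
    print(results.groupby(group)["correct"].mean())
```

In practice the same check would be run on much larger held-out data and with additional metrics such as sensitivity and specificity, but the idea is the same: measure performance separately for each group rather than reporting a single overall number.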

How can AI be applied in the field of laboratory medicine and pathology?

“AI can have huge applications in medical imaging for laboratory medicine and pathology, which is a main focus of my research. There are two main applications: segmentation and prediction.

Clinicians, particularly pathologists, spend a long time contouring images for analysis, known as segmenting them. This involves painstakingly ‘drawing’ around structures in the images on the computer.

This task is important because many downstream variables are calculated from the contours; for example, the size of the contour, or tumor, is a very important indicator of the grade of the disease or cancer. But contouring takes a lot of time, requires a certain level of skill with computer equipment, and leaves a margin for human error.

We have developed an AI-enabled tool that automatically segments different organs with very high accuracy in only seconds. We trained it by loading millions of ‘raw’ images alongside images contoured by human experts, and it learned how to contour from these patterns. This is available to all researchers now and is incredibly accurate.

Another application in medical imaging is in predictive tasks. The AI tool can take images, such as an MRI, and predict whether there is, for example, a cancer and, if so, which subtype. It gives a yes or no answer that can classify images for clinicians much more quickly.”
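
To make these two applications concrete, the hypothetical sketch below (not the tool described above) takes a binary segmentation mask of the kind a model might produce, computes the contoured area as one example of a downstream variable, and then thresholds a model’s output probability into a yes-or-no prediction; all values, including the pixel spacing, are assumed placeholders.

```python
# Hypothetical sketch of the two applications above: (1) computing a
# downstream variable (contoured area) from a binary segmentation mask, and
# (2) turning a model's output probability into a yes-or-no prediction.
import numpy as np

# Placeholder mask for one 2D image slice: 1 marks pixels inside the contour,
# as a segmentation model might output.
mask = np.zeros((512, 512), dtype=np.uint8)
mask[200:260, 180:230] = 1

pixel_spacing_mm = (0.5, 0.5)   # assumed physical size of one pixel
area_mm2 = mask.sum() * pixel_spacing_mm[0] * pixel_spacing_mm[1]
print(f"Contoured area: {area_mm2:.1f} mm^2")

# Prediction: threshold a (placeholder) model probability into yes/no.
tumor_probability = 0.87
print("suspicious for cancer" if tumor_probability >= 0.5 else "no cancer detected")
```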

What role does the University of Toronto play in the future of AI?

“The University of Toronto has some real pioneers in AI, particularly on the algorithm side, people like Dr. Geoffrey Hinton.

U of T is uniquely placed to play a leading role in AI for healthcare and biology. Toronto has a single-payer health system, lots of hospitals, and a wide diversity of human populations, so we have access to huge and very valuable datasets for AI to explore.

With the development of the Vector Institute for Artificial Intelligence and now the Temerty Centre for Artificial Intelligence Research and Education in Medicine (T-CAIREM), Toronto, and U of T in particular, is placing itself at the forefront of this area of research.”

How can we make AI in medicine and healthcare successful?

“When it comes to AI in healthcare, collaboration is the key. We need to learn from each other and can only develop AI in healthcare together.

Computer scientists and clinicians or biologists speak very different languages, so it can be hard to communicate.

Computer scientists need to understand why a particular clinical question is important and clinicians need to understand how AI is going to enable them to answer, even partially, the question they are asking.

We need to understand what the data represents and what the question is. We then pre-process the data and train the model. Almost always, this first attempt will fail, so we need to be able to work with clinicians and biological researchers to understand why. It’s important that they understand the process, its limitations, and its pros and cons.

This is the reason why I am cross-appointed between Computer Science and LMP. I am a computer scientist, but being part of LMP allows me to build these collaborations and get a true understanding of the clinical aspects, which is so vital in this kind of development.”

You’re developing a new graduate module on machine learning in healthcare: tell us about it

“Yes, I’m developing a machine learning module for graduate students in LMP which will be launched in Winter 2022.

It will cover the basic principles of machine learning in biomedical research and teach graduate students what machine learning can and cannot do. They’ll learn what machine learning is, how to construct a model, how to train it, and how to make a diagnosis based on it. I’ll also cover the limitations of machine learning when it comes to biomedical research; machine learning is not perfect and still needs lots of development.

The plan is to first launch the course in LMP and then gradually expand it across the Temerty Faculty of Medicine for all learners. It’s very exciting.”

Find out how an LMP graduate student taught himself machine learning and changed the course of his PhD.