Title
Explainable machine learning, predictive modelling, and causal inference, applied to medical data (Research)
Abstract
This research applies machine learning to medical data, with a focus on trustworthy and explainable machine learning methods.
We are investigating how to improve longitudinal methods for optimizing the care of people with multiple sclerosis. We investigate
predictive models that can tell patients whether their disability will progress after a certain time period, which helps them plan
their lives. We focus on how well calibrated the predictions about the future disease state are. Instead of predicting progression as a
binary yes/no, we try to compute the correct probability of progression; given the complexity of the disease, this is a more relevant
quantity to model. We are also looking at causal inference methods, which can be used to optimize the administered medication.
A large number of explanation methods and consistency metrics are benchmarked on a wide variety of image datasets, with a special focus
on medical images (OCT, MRI, …). We have submitted a conference article on how to calibrate the output of models that segment brain
tumours in medical images. A well-calibrated model not only segments the tumour correctly, but also returns a good estimate of how
certain it is about its segmentation.
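One common post-hoc technique for calibrating the per-pixel class probabilities of a segmentation network is temperature scaling, which divides the logits by a learned temperature before the softmax. The sketch below illustrates the mechanism on a single pixel's logits; the submitted article's actual method is not specified here, so this is an illustrative assumption, not the paper's approach.

```python
import math

def softmax(logits, T=1.0):
    """Softmax with temperature T. T > 1 softens overconfident
    outputs; T = 1 recovers the ordinary softmax."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical per-pixel logits for three classes (background, tumour core, oedema)
logits = [4.0, 1.0, 0.5]
print([round(p, 3) for p in softmax(logits)])         # → [0.926, 0.046, 0.028]
print([round(p, 3) for p in softmax(logits, T=2.0)])  # → [0.716, 0.160, 0.124]
```

In practice a single scalar T is fitted on a held-out validation set by minimizing the negative log-likelihood; because it rescales all logits uniformly, the predicted segmentation itself does not change, only the confidence attached to it.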
We are working on EEG time series to detect epileptic seizures. Automated detection of seizures would have a far-reaching impact on
the management of patients with epilepsy. We use several methods to obtain a confidence estimate for the classifier's decision, and we
defer the least-confident EEG segments to a human expert. The end goal is to speed up the adoption of these methods in clinical
practice. Because algorithms are not accurate enough on all data, fully replacing human judgment is not possible; however, we aim to
show that the task can be automated to a significant degree.
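The deferral step described above can be sketched as follows: given per-segment seizure probabilities, the segments whose predictions lie closest to 0.5 are the least confident and are routed to the expert, while the rest are classified automatically. The function name, the review budget, and the data below are hypothetical.

```python
def defer_least_confident(probs, budget):
    """Defer the `budget` EEG segments whose seizure probability is
    closest to 0.5 (least confident) to a human expert; classify the
    remaining segments automatically at threshold 0.5."""
    ranked = sorted(range(len(probs)), key=lambda i: abs(probs[i] - 0.5))
    deferred = set(ranked[:budget])
    auto = {i: probs[i] >= 0.5 for i in range(len(probs)) if i not in deferred}
    return deferred, auto

# Hypothetical per-segment seizure probabilities
probs = [0.95, 0.52, 0.05, 0.48, 0.85]
deferred, auto = defer_least_confident(probs, budget=2)
print(deferred)  # → {1, 3}: the two segments nearest 0.5
print(auto)      # → {0: True, 2: False, 4: True}
```

The budget trades off expert workload against automated coverage: a larger budget sends more ambiguous segments to the human, so the automatically classified remainder can be trusted with higher confidence.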
Period of project
01 January 2019 - 31 December 2022