Durham e-Theses

Explainable Machine Learning for Robust Modelling in Healthcare

WATSON, MATTHEW STEVEN (2023) Explainable Machine Learning for Robust Modelling in Healthcare. Doctoral thesis, Durham University.

PDF - Accepted Version (17Mb)

Abstract

Deep Learning (DL) has seen an unprecedented rise in popularity over the last decade, with applications ranging from machine translation to self-driving cars. This includes extensive work in sensitive domains such as healthcare and finance, with models recently achieving better-than-human performance in tasks such as chest X-ray diagnosis. However, despite these impressive results, there are relatively few real-world deployments of DL models in sensitive scenarios, with experts attributing this to a lack of model transparency, reproducibility, robustness and privacy, in spite of the numerous techniques that have been proposed to address these issues. Most notable is the development of Explainable Deep Learning techniques, which aim to compute feature importance values for a given input (i.e. which features does a model use to make its decision?). Such methods can greatly improve the transparency of a model, but have little impact on reproducibility, robustness and privacy. In this thesis, I explore how explainability techniques can be used to address these issues, by using feature attributions to improve our understanding of how model parameters change during training and across different hyperparameter setups. Through the introduction of a novel model architecture and training technique that uses model explanations to improve model consistency, I show how explanations can improve privacy, robustness and reproducibility. Extensive experiments carried out across a number of sensitive datasets from healthcare and bioinformatics, in both traditional and federated learning settings, show that these techniques have a significant impact on the quality of the resulting models. I discuss the impact these results could have on real-world applications of deep learning, given the issues addressed by the proposed techniques, and present some ideas for further research in this area.
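
For readers unfamiliar with feature attribution, the sketch below illustrates one common technique, gradient x input, which produces the kind of per-feature importance values the abstract describes. It is a generic PyTorch illustration with a placeholder model and random data, not the method proposed in the thesis.

    import torch
    import torch.nn as nn

    def gradient_x_input(model, x, target):
        # Detach and re-enable gradients so x.grad is populated on backward().
        x = x.detach().clone().requires_grad_(True)
        score = model(x)[0, target]   # scalar logit for the class of interest
        score.backward()              # d(score)/d(x) lands in x.grad
        return (x.grad * x).detach()  # attribution = gradient * input

    # Toy usage: a placeholder 10-feature classifier and one random example.
    model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
    x = torch.randn(1, 10)
    target = model(x).argmax(dim=1).item()
    attributions = gradient_x_input(model, x, target)
    print(attributions)  # one importance value per input feature

Gradient x input is among the simplest attribution methods; more sophisticated techniques exist, but all produce per-feature importance scores of this general form.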

Item Type: Thesis (Doctoral)
Award: Doctor of Philosophy
Keywords: machine learning; deep learning; healthcare; explainability; robustness
Faculty and Department: Faculty of Science > Department of Computer Science
Thesis Date: 2023
Copyright: Copyright of this thesis is held by the author
Deposited On: 06 Jun 2023 09:01
