

Durham e-Theses
2D to 3D Translation with Medical Image Applications

CORONA-FIGUEROA, ABRIL (2024) 2D to 3D Translation with Medical Image Applications. Doctoral thesis, Durham University.

PDF (Accepted Version), 104 MB
Available under License Creative Commons Attribution 3.0 (CC BY)

Abstract

To translate between medical images of different modalities, one option is to rely on novel deep learning approaches within the broad task of image-to-image translation. Over the past decade, this problem has spawned many computer vision subtasks, including image inpainting, domain adaptation, semantic segmentation, and super-resolution. However, adapting or designing deep learning frameworks for real-world data, such as medical imaging, imposes additional considerations in terms of data availability, structure, and topology.

In particular, 2D to 3D medical image translation entails further complexities:
1. A lack of paired datasets makes learning direct mappings difficult, so precise transformations cannot be guaranteed.
2. Dimensionality mismatch calls for a generative modeling formulation where issues such as poor spatial alignment and impractical data representations can lead to inaccurate translations.
3. Hallucination can prove fatal in medical contexts. Unlike in a "natural image" setting, which can focus on aesthetics or creativity, medical imaging relies on the precise representation of anatomical structures.
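The dimensionality mismatch in point 2 can be made concrete with a toy sketch (a hypothetical illustration, not code from the thesis): a parallel-beam projection sums a 3D volume along one axis, so distinct volumes can share the same 2D projection, and recovering the volume from a single view is ill-posed without a generative prior.

```python
def project(volume):
    """Sum a volume (a list of D slices, each H x W) along depth,
    yielding a 2D image -- a crude stand-in for an X-ray projection."""
    depth = len(volume)
    height, width = len(volume[0]), len(volume[0][0])
    return [[sum(volume[d][h][w] for d in range(depth))
             for w in range(width)]
            for h in range(height)]

# Two different 2x2x2 volumes: the same voxel is "on",
# but at different depths.
vol_a = [[[1, 0], [0, 0]],
         [[0, 0], [0, 0]]]
vol_b = [[[0, 0], [0, 0]],
         [[1, 0], [0, 0]]]

# Both collapse to the identical 2D image: depth information is lost,
# so the inverse 2D-to-3D mapping cannot be a plain function.
assert project(vol_a) == project(vol_b) == [[1, 0], [0, 0]]
```

Because many volumes are consistent with one projection, a model must learn a distribution over plausible volumes rather than a single deterministic inverse, which is why the text frames the problem as generative modeling.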

This thesis makes advances in these areas, working towards positive outcomes in applications such as 3D CT reconstruction from 2D X-rays, which offers lower radiation doses for patients, cost savings, and increased accessibility. Specifically, the research presented in this thesis explores different deep learning approaches to the task of 2D to 3D translation, considering the necessary data-domain and technical aspects. Our proposed methods generally outperform competing models and introduce novel ideas with potential use in other frameworks and relevant applications. These methods have been implemented in PyTorch and are open source for the research community.

This work aims to contribute to the problem of 2D to 3D image translation using deep learning. To complement existing methods for CT reconstruction from X-rays, which primarily focus on adapting architectures to medical images, we concentrate on studying the data structure and making well-founded design decisions to effectively solve modeling issues and achieve state-of-the-art results. This thesis is also intended to be accessible to researchers without a computer science background, promoting multidisciplinary collaboration with clinicians and biomedical experts.

Item Type: Thesis (Doctoral)
Award: Doctor of Philosophy
Keywords: medical imaging; deep learning; generative models; image-to-image translation; 2D to 3D
Faculty and Department: Faculty of Science > Department of Computer Science
Thesis Date: 2024
Copyright: Copyright of this thesis is held by the author
Deposited On: 22 Nov 2024 09:28
