CTformer: convolution-free Token2Token dilated vision transformer for low-dose CT denoising.

Title: CTformer: convolution-free Token2Token dilated vision transformer for low-dose CT denoising.
Publication Type: Journal Article
Year of Publication: 2023
Authors: Wang D, Fan F, Wu Z, Liu R, Wang F, Yu H
Journal: Phys Med Biol
Volume: 68
Issue: 6
Date Published: 2023 Mar 15
ISSN: 1361-6560
Keywords: Artifacts, Image Processing, Computer-Assisted, Neural Networks, Computer, Signal-To-Noise Ratio, Tomography, X-Ray Computed
Abstract

Objective. Low-dose computed tomography (LDCT) denoising is an important problem in CT research. Compared to normal-dose CT, LDCT images suffer from severe noise and artifacts. In many recent studies, vision transformers have shown superior feature representation ability over convolutional neural networks (CNNs). However, unlike CNNs, the potential of vision transformers for LDCT denoising has so far been little explored. This paper aims to further explore the power of transformers for the LDCT denoising problem.

Approach. In this paper, we propose a Convolution-free Token2Token Dilated Vision Transformer (CTformer) for LDCT denoising. The CTformer uses a more powerful token rearrangement to encompass local contextual information and thus avoids convolution. It also dilates and shifts feature maps to capture longer-range interaction. We interpret the CTformer by statically inspecting patterns of its internal attention maps and dynamically tracing the hierarchical attention flow with an explanatory graph. Furthermore, an overlapped inference mechanism is employed to effectively eliminate the boundary artifacts that are common in encoder-decoder-based denoising models.

Main results. Experimental results on the Mayo dataset suggest that the CTformer outperforms state-of-the-art denoising methods with low computational overhead.

Significance. The proposed model delivers excellent denoising performance on LDCT. Moreover, its low computational cost and interpretability make the CTformer promising for clinical applications.
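As an illustration of the overlapped inference idea mentioned in the abstract (not the authors' implementation), the sketch below shows a generic sliding-window scheme in Python: a slice is denoised patch by patch with overlapping windows, and overlapping predictions are averaged so that patch borders leave no visible seams. The function name denoise_overlapped, the model callable, and the patch_size/stride defaults are illustrative assumptions.

    import numpy as np

    def denoise_overlapped(image, model, patch_size=64, stride=32):
        """Denoise a 2D slice with overlapping patches (illustrative sketch).

        Assumes the slice is at least patch_size in each dimension and that
        `model` maps a (patch_size, patch_size) array to a denoised array of
        the same shape. Overlapping predictions are averaged to suppress
        boundary artifacts between patches.
        """
        h, w = image.shape
        output = np.zeros((h, w), dtype=np.float32)
        weight = np.zeros((h, w), dtype=np.float32)

        # Window origins; clamp the last window to the image edge.
        ys = list(range(0, h - patch_size + 1, stride))
        xs = list(range(0, w - patch_size + 1, stride))
        if ys[-1] != h - patch_size:
            ys.append(h - patch_size)
        if xs[-1] != w - patch_size:
            xs.append(w - patch_size)

        for y in ys:
            for x in xs:
                patch = image[y:y + patch_size, x:x + patch_size]
                denoised = model(patch)  # placeholder for any patch-wise denoiser
                output[y:y + patch_size, x:x + patch_size] += denoised
                weight[y:y + patch_size, x:x + patch_size] += 1.0

        # Average the accumulated predictions over the overlap counts.
        return output / np.maximum(weight, 1e-8)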

DOI: 10.1088/1361-6560/acc000
Alternate Journal: Phys Med Biol
PubMed ID: 36854190
Division: Institute of Artificial Intelligence for Digital Health
Category: Faculty Publication