TY - GEN
T1 - Medical Transformer: Gated Axial-Attention for Medical Image Segmentation
T2 - 24th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2021
AU - Valanarasu, Jeya Maria Jose
AU - Oza, Poojan
AU - Hacihaliloglu, Ilker
AU - Patel, Vishal M.
N1 - Funding Information: This work was supported by the NSF grant 1910141. Publisher Copyright: © 2021, Springer Nature Switzerland AG.
PY - 2021
Y1 - 2021
N2 - Over the past decade, deep convolutional neural networks have been widely adopted for medical image segmentation and shown to achieve adequate performance. However, due to inherent inductive biases present in convolutional architectures, they lack understanding of long-range dependencies in the image. Recently proposed transformer-based architectures that leverage the self-attention mechanism encode long-range dependencies and learn representations that are highly expressive. This motivates us to explore transformer-based solutions and study the feasibility of using transformer-based network architectures for medical image segmentation tasks. The majority of existing transformer-based network architectures proposed for vision applications require large-scale datasets to train properly. However, compared to the datasets for vision applications, in medical imaging the number of data samples is relatively low, making it difficult to efficiently train transformers for medical imaging applications. To this end, we propose a gated axial-attention model which extends the existing architectures by introducing an additional control mechanism in the self-attention module. Furthermore, to train the model effectively on medical images, we propose a Local-Global training strategy (LoGo) which further improves the performance. Specifically, we operate on the whole image and on patches to learn global and local features, respectively. The proposed Medical Transformer (MedT) is evaluated on three different medical image segmentation datasets, and it is shown that it achieves better performance than convolutional and other related transformer-based architectures. Code: https://github.com/jeya-maria-jose/Medical-Transformer
AB - Over the past decade, deep convolutional neural networks have been widely adopted for medical image segmentation and shown to achieve adequate performance. However, due to inherent inductive biases present in convolutional architectures, they lack understanding of long-range dependencies in the image. Recently proposed transformer-based architectures that leverage the self-attention mechanism encode long-range dependencies and learn representations that are highly expressive. This motivates us to explore transformer-based solutions and study the feasibility of using transformer-based network architectures for medical image segmentation tasks. The majority of existing transformer-based network architectures proposed for vision applications require large-scale datasets to train properly. However, compared to the datasets for vision applications, in medical imaging the number of data samples is relatively low, making it difficult to efficiently train transformers for medical imaging applications. To this end, we propose a gated axial-attention model which extends the existing architectures by introducing an additional control mechanism in the self-attention module. Furthermore, to train the model effectively on medical images, we propose a Local-Global training strategy (LoGo) which further improves the performance. Specifically, we operate on the whole image and on patches to learn global and local features, respectively. The proposed Medical Transformer (MedT) is evaluated on three different medical image segmentation datasets, and it is shown that it achieves better performance than convolutional and other related transformer-based architectures. Code: https://github.com/jeya-maria-jose/Medical-Transformer
KW - Medical image segmentation
KW - Self-attention
KW - Transformers
UR - http://www.scopus.com/inward/record.url?scp=85116425765&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85116425765&partnerID=8YFLogxK
DO - 10.1007/978-3-030-87193-2_4
M3 - Conference contribution
SN - 9783030871925
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 36
EP - 46
BT - Medical Image Computing and Computer Assisted Intervention – MICCAI 2021 - 24th International Conference, Proceedings
A2 - de Bruijne, Marleen
A2 - Cattin, Philippe C.
A2 - Cotin, Stéphane
A2 - Padoy, Nicolas
A2 - Speidel, Stefanie
A2 - Zheng, Yefeng
A2 - Essert, Caroline
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 27 September 2021 through 1 October 2021
ER -