Image Synthesis in Multi-Contrast MRI with Conditional Generative Adversarial Networks
Journal Article
Abstract Acquiring images of the same anatomy with multiple different contrasts increases the diversity of diagnostic information available in an MR exam. Yet, scan time limitations may prohibit acquisition of certain contrasts, and some contrasts may be corrupted by noise and artifacts. In such cases, the ability to synthesize unacquired or corrupted contrasts can improve diagnostic utility. For multi-contrast synthesis, current methods learn a nonlinear intensity transformation between the source and target images, either via nonlinear regression or deterministic neural networks. These methods can, in turn, suffer from the loss of structural details in synthesized images. Here, we propose a new approach for multi-contrast MRI synthesis based on conditional generative adversarial networks. The proposed approach preserves intermediate-to-high frequency details via an adversarial loss, and it offers enhanced synthesis performance via pixel-wise and perceptual losses for registered multi-contrast images and a cycle-consistency loss for unregistered images. Information from neighboring cross-sections is utilized to further improve synthesis quality. Demonstrations on T1- and T2-weighted images from healthy subjects and patients clearly indicate the superior performance of the proposed approach compared to previous state-of-the-art methods. Our synthesis approach can help improve the quality and versatility of multi-contrast MRI exams without the need for prolonged or repeated examinations.
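
To make the loss composition described in the abstract concrete, below is a minimal PyTorch-style sketch of a combined generator objective for the registered case (adversarial, pixel-wise, and perceptual terms). The module names, the feature extractor, and the weights lambda_pix / lambda_perc are illustrative assumptions, not the paper's exact implementation.

# Minimal sketch of a combined generator objective for registered
# source/target contrasts: adversarial + pixel-wise + perceptual terms.
# G, D, and feat_extractor are assumed nn.Module instances; the loss
# weights are hypothetical placeholders, not the paper's settings.
import torch
import torch.nn as nn

adv_criterion = nn.BCEWithLogitsLoss()   # adversarial term
pix_criterion = nn.L1Loss()              # pixel-wise term (registered images)
perc_criterion = nn.L1Loss()             # perceptual term on feature maps

def generator_loss(G, D, feat_extractor, source, target,
                   lambda_pix=100.0, lambda_perc=100.0):
    """Combine adversarial, pixel-wise, and perceptual losses."""
    fake = G(source)                                   # synthesized target contrast
    pred_fake = D(torch.cat([source, fake], dim=1))    # conditional discriminator
    real_labels = torch.ones_like(pred_fake)

    l_adv = adv_criterion(pred_fake, real_labels)      # fool the discriminator
    l_pix = pix_criterion(fake, target)                # intensity fidelity
    l_perc = perc_criterion(feat_extractor(fake),      # feature-level fidelity
                            feat_extractor(target))
    return l_adv + lambda_pix * l_pix + lambda_perc * l_perc

For unregistered images, as the abstract notes, the pixel-wise term would be replaced by a cycle-consistency penalty between the source and its reconstruction through a pair of generators.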

BibTeX
@ARTICLE{Dar2019,
author={S. U. H. Dar and M. Yurt and L. Karacan and A. Erdem and E. Erdem and T. Cukur},
journal={IEEE Transactions on Medical Imaging},
title={Image Synthesis in Multi-Contrast MRI with Conditional Generative Adversarial Networks},
volume={38},
number={10},
pages={2375--2388},
year={2019}}