Multi-contrast MRI acquisitions of an anatomy enrich the magnitude of information available for diagnosis. Yet, excessive scan times associated with additional contrasts may be a limiting factor. Two mainstream approaches for enhanced scan efficiency are reconstruction of undersampled acquisitions and synthesis of missing acquisitions. In reconstruction, performance decreases towards higher acceleration factors as sampling density diminishes, particularly at high spatial frequencies. In synthesis, the absence of data samples from the target contrast can lead to artefactual sensitivity or insensitivity to image features. Here we propose a new approach for synergistic reconstruction-synthesis of multi-contrast MRI based on conditional generative adversarial networks. The proposed method preserves high-frequency details of the target contrast by relying on the shared high-frequency information available from the source contrast, and prevents feature leakage or loss by relying on the undersampled acquisitions of the target contrast. Demonstrations on brain MRI datasets from healthy subjects and patients indicate the superior performance of the proposed method compared to previous state-of-the-art methods. The proposed method can help improve the quality and scan efficiency of multi-contrast MRI exams.
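
The conditioning scheme described in the abstract, where a conditional GAN receives both the undersampled target-contrast acquisition and a fully sampled source contrast, can be illustrated with a minimal training-step sketch. The following PyTorch-style code is an assumption-laden illustration: the network architectures, loss weighting, and variable names are hypothetical and do not reproduce the authors' implementation.

```python
# Minimal sketch of one conditional-GAN training step for joint
# reconstruction-synthesis. All shapes, weights, and names are
# illustrative assumptions, not the authors' released code.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps [undersampled target, fully sampled source] -> target estimate."""
    def __init__(self, in_channels=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores [candidate target, source] pairs as real or generated."""
    def __init__(self, in_channels=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
adv_loss, pix_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()
lambda_pix = 100.0  # assumed weight of the pixel-wise fidelity term

# Toy batch: source contrast (e.g. T1), zero-filled undersampled target
# (e.g. T2), and the fully sampled target used as supervision.
source = torch.randn(4, 1, 64, 64)
target_us = torch.randn(4, 1, 64, 64)
target_full = torch.randn(4, 1, 64, 64)

# --- Discriminator step ---
fake = G(torch.cat([target_us, source], dim=1)).detach()
d_real = D(torch.cat([target_full, source], dim=1))
d_fake = D(torch.cat([fake, source], dim=1))
loss_d = adv_loss(d_real, torch.ones_like(d_real)) + adv_loss(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# --- Generator step: adversarial term plus pixel-wise fidelity ---
fake = G(torch.cat([target_us, source], dim=1))
d_fake = D(torch.cat([fake, source], dim=1))
loss_g = adv_loss(d_fake, torch.ones_like(d_fake)) + lambda_pix * pix_loss(fake, target_full)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

In this sketch the undersampled target acquisition supplies subject-specific evidence that guards against feature leakage or loss, while the source contrast supplies shared high-frequency detail; both enter the generator as concatenated input channels.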

Type

Internet publication

Publication Date

27/05/2018

Keywords

cs.CV