Multimodal Analysis
Many studies and datasets contain a wealth of information from different MRI modalities (e.g. functional, structural, diffusion imaging) as well as other information (e.g. neuropsychological tests, clinical measures, genetic information, MEG, etc.). We aim to combine these different types of data so that available brain-scanning methods can be used to their full potential, in both research and clinical settings, opening up new possibilities for exploring inter-relations between structure, function and connectivity in the healthy and diseased brain.
Such cross-modal integration (XMI for short) is currently lacking from the majority of analyses, since existing tools typically treat each modality separately. This limits what can be learned about relationships between these data and forgoes the substantial gains in sensitivity that data integration can offer. Tools based on XMI have the potential to expand the range of possible investigations, to enhance understanding of disease mechanisms, to improve diagnostic decision support, and to enrich large cohort studies and their application in the clinic.
The research projects in this area use machine learning and generative models to combine information across modalities. They build on the methodologies behind our existing single-modality tools and aim to be flexible about which modalities are required, so as to be as widely applicable as possible. Our ultimate goal is to provide automated, practical tools that combine information across all MRI modalities and can be used in basic neuroimaging research as well as in large cohort studies, in drug trials and in the clinic.
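To make the idea of cross-modal integration concrete, the sketch below illustrates one simple fusion strategy, joint (concatenated) ICA, which is in the spirit of, though much simpler than, the Bayesian linked ICA implemented in FLICA. Everything here is an illustrative assumption rather than part of any FSL tool: the array shapes, the variable names and the use of scikit-learn's FastICA.

```python
# Minimal sketch of joint (concatenated) ICA across two modalities.
# This is NOT FLICA: linked ICA uses a Bayesian generative model with
# per-modality noise estimates; plain FastICA on concatenated, normalised
# data is used here only to illustrate the idea of shared components.

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

n_subjects = 50
structural = rng.standard_normal((n_subjects, 2000))  # e.g. grey-matter maps (hypothetical shape)
diffusion = rng.standard_normal((n_subjects, 1500))   # e.g. FA skeleton voxels (hypothetical shape)

def normalise(X):
    # Demean each feature and variance-normalise the modality as a whole,
    # so that no single modality dominates the joint decomposition.
    X = X - X.mean(axis=0)
    return X / X.std()

joint = np.hstack([normalise(structural), normalise(diffusion)])

# Decompose: each component has one subject-loading vector shared across
# modalities, plus a spatial map per modality (split from the mixing matrix).
ica = FastICA(n_components=10, max_iter=1000, random_state=0)
subject_loadings = ica.fit_transform(joint)  # (n_subjects, n_components)
maps_structural = ica.mixing_[:2000, :]      # structural spatial maps
maps_diffusion = ica.mixing_[2000:, :]       # diffusion spatial maps
```

The key design point is that each component couples a single subject-loading vector to a spatial map in every modality, so that effects spanning modalities can be detected jointly rather than modality by modality.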
Current Projects
- Surface-based Registration
- Subcortical Segmentation
- Dynamic Discriminative Atlases
FSL Tools
- Linked-ICA (FLICA)
- Multimodal Surface Matching (MSM)
- Multimodal Image Segmentation Tool (MIST)
- White-matter hyperintensity/lesion segmentation (BIANCA)