Longer scans boost prediction and cut costs in brain-wide association studies.
A pervasive dilemma in brain-wide association studies (BWAS) is whether to prioritize functional magnetic resonance imaging (fMRI) scan time or sample size. We derive a theoretical model showing that individual-level phenotypic prediction accuracy increases with sample size and total scan duration (sample size × scan time per participant). The model explains empirical prediction accuracies well across 76 phenotypes from nine resting-fMRI and task-fMRI datasets (R2 = 0.89), spanning diverse scanners, acquisitions, racial groups, disorders and ages. For scans of ≤20 min, accuracy increases linearly with the logarithm of the total scan duration, suggesting that sample size and scan time are initially interchangeable. However, sample size is ultimately more important. Nevertheless, when accounting for the overhead costs of each participant (such as recruitment), longer scans can be substantially cheaper than larger sample sizes for improving prediction performance. To achieve high prediction performance, 10 min scans are cost-inefficient. In most scenarios, the optimal scan time is at least 20 min. On average, 30 min scans are the most cost-effective, yielding 22% savings over 10 min scans. Overshooting the optimal scan time is cheaper than undershooting it, so we recommend a scan time of at least 30 min. Compared with resting-state whole-brain BWAS, the most cost-effective scan time is shorter for task-fMRI and longer for subcortical-to-whole-brain BWAS. In contrast to standard power calculations, our results suggest that jointly optimizing sample size and scan time can boost prediction accuracy while cutting costs. Our empirical reference is available online for future study design (https://thomasyeolab.github.io/OptimalScanTimeCalculator/index.html).
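The cost trade-off in this abstract can be illustrated with a toy model (a sketch under stated assumptions, not the paper's actual derivation): suppose prediction accuracy depends only on an effective quantity N·T/(T + T0), where N is sample size, T is scan time per participant, and the hypothetical constant T0 captures diminishing returns of scan time, while each participant costs a fixed overhead plus a per-minute scan fee. Minimising total cost at fixed accuracy then yields a closed-form optimal scan time.

```python
import math

def optimal_scan_time(overhead_per_participant, cost_per_minute, t0):
    """Cost-minimising scan time (minutes) in a toy accuracy model.

    Assumes accuracy is a function of the effective quantity
    z = N * T / (T + t0), so holding accuracy fixed requires
    N(T) = z * (T + t0) / T participants. Total cost is then
    N(T) * (overhead + cost_per_minute * T); setting its derivative
    with respect to T to zero gives T* = sqrt(overhead * t0 / cost_per_minute).
    """
    return math.sqrt(overhead_per_participant * t0 / cost_per_minute)

# Hypothetical costs: $500 recruitment overhead per participant,
# $10 per scan minute, and t0 = 18 min (all illustrative values).
t_star = optimal_scan_time(500, 10, 18)
print(f"optimal scan time ≈ {t_star:.0f} min")  # prints "optimal scan time ≈ 30 min"
```

With these illustrative numbers the optimum lands at 30 min, matching the abstract's average recommendation; higher per-participant overhead pushes the optimum toward longer scans, which is the qualitative point of the study.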
An fMRI study of initiation and inhibition of manual and spoken responses in people who stutter
Abstract Stuttering is characterised by difficulties initiating speech and frequent interruptions to the flow of speech. Neuroimaging studies of speech production in people who stutter consistently reveal greater activity of the right inferior frontal cortex, an area robustly implicated in stopping manual and spoken responses. This has been linked to an “overactive response suppression mechanism” in people who stutter. Here, we used fMRI to investigate neural differences related to response initiation and inhibition in people who stutter and matched controls (aged 19-45) during performance of the stop-signal task in both the manual and speech domains. We hypothesised that there would be increased activity in an inhibitory network centred on right inferior frontal cortex. Out-of-scanner behavioural testing revealed that people who stutter were slower than controls to respond to ‘go’ stimuli in both the manual and the speech domains, but the groups did not differ in their stop-signal reaction times in either domain. During the fMRI task, both groups activated the expected networks for the manual and speech tasks. Contrary to our hypothesis, we did not observe differences in task-evoked activity between people who stutter and controls during either ‘go’ or ‘stop’ trials. Targeted region-of-interest analyses in the inferior frontal cortex, the supplementary motor area and the putamen bilaterally confirmed that there were no group differences in activity. These results focus on tasks involving button presses and production of single nonwords, and therefore do not preclude inhibitory involvement related specifically to stuttering events. Our findings indicate that people who stutter do not show behavioural or neural differences in response inhibition when making simple manual responses and producing fluent speech, contrary to predictions from the global inhibition hypothesis.
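The stop-signal reaction times compared above are not observed directly; a common way to estimate them is the integration method. A minimal sketch with nearest-rank quantiles and hypothetical toy numbers (the abstract does not specify the estimation procedure used):

```python
def ssrt_integration(go_rts, mean_ssd, p_respond_given_signal):
    """Estimate stop-signal reaction time (SSRT) via the integration method.

    The finishing time of the stop process is taken as the quantile of
    the go-RT distribution at the probability of responding on stop
    trials; SSRT is that quantile minus the mean stop-signal delay (SSD).
    Uses a nearest-rank quantile; real analyses often interpolate.
    """
    rts = sorted(go_rts)
    idx = min(len(rts) - 1, max(0, int(round(p_respond_given_signal * len(rts))) - 1))
    return rts[idx] - mean_ssd

# Toy data: 10 go RTs (ms), half of stop trials failed (p = 0.5),
# mean SSD of 250 ms -- all values hypothetical.
ssrt = ssrt_integration([400, 420, 440, 460, 480, 500, 520, 540, 560, 580], 250, 0.5)
```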
Corrigendum to “Relating TMS measures of GABAergic and Cholinergic signalling to attention” [Brain Stimul 18 (1) (2025) 507–508, (S1935861X24010489), (10.1016/j.brs.2024.12.853)]
The authors regret that some of the authors were omitted from the original publication. The correct list of authors is as presented above. The authors also regret the errors in the abstract text. The corresponding corrections are provided below. The first line of paragraph 3 of the abstract should read: Here we investigated the role of GABA and ACh in healthy vision (n = 35). The last two paragraphs of the abstract should read as follows: We found that higher GABAergic inhibition in the motor cortex relates to better attention allocation, as indicated by a significant correlation between the alerting index of the ANT and SICI-1ms (r = −0.59, p = 0.004). Despite the proposed role of cholinergic signalling, and evidence suggesting cholinergic mechanisms are responsible for successful orienting of attention, we did not find a significant correlation between SAI and any of the attentional indices (alerting, orienting, executive) of the ANT. Our findings suggest that GABAergic inhibition plays an important role in successful attention allocation and have guided the design of our ongoing pharmaco-TMS study investigating the effects of Zolpidem (GABA agonist) and Donepezil (cholinesterase antagonist) on behavioural and neurophysiological indices of attention. The authors would like to apologise for any inconvenience caused.
An atlas of trait associations with resting-state and task-evoked human brain functional organizations in the UK Biobank.
Functional magnetic resonance imaging (fMRI) has been widely used to identify brain regions linked to critical functions, such as language and vision, and to detect tumors, strokes, brain injuries, and diseases. It is now known that large sample sizes are necessary for fMRI studies to detect small effect sizes and produce reproducible results. Here we report a systematic association analysis of 647 traits with imaging features extracted from resting-state and task-evoked fMRI data of more than 40,000 UK Biobank participants. We used a parcellation-based approach to generate 64,620 functional connectivity measures to reveal fine-grained details about cerebral cortex functional organizations. The difference between functional organizations at rest and during task was examined, and we have prioritized important brain regions and networks associated with a variety of human traits and clinical outcomes. For example, depression was most strongly associated with decreased connectivity in the somatomotor network. We have made our results publicly available and developed a browser framework to facilitate the exploration of brain function-trait association results (http://fmriatlas.org/).
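The 64,620 functional connectivity measures mentioned above correspond to the unique region pairs of a 360-parcel cortical parcellation (360 × 359 / 2). A minimal sketch of how such edges are computed from parcel-averaged time series using plain Pearson correlation (illustrative only; the study's actual pipeline is more involved):

```python
from itertools import combinations
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(vx * vy)

def connectivity_edges(parcel_timeseries):
    """Pairwise correlations between parcel time series: the upper
    triangle of the functional connectivity matrix, keyed by pair."""
    return {(i, j): pearson(ts_i, ts_j)
            for (i, ts_i), (j, ts_j)
            in combinations(enumerate(parcel_timeseries), 2)}

# With n parcels there are n * (n - 1) // 2 unique edges;
# a 360-parcel parcellation yields 360 * 359 // 2 = 64,620 measures.
```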
Neurodegenerative disease in C9orf72 repeat expansion carriers: population risk and effect of UNC13A.
The C9orf72 hexanucleotide repeat expansion (HRE) is the most common monogenetic cause of amyotrophic lateral sclerosis (ALS) and frontotemporal dementia (FTD). Neurodegenerative disease incidence in C9orf72 HRE carriers has been studied using cohorts from disease-affected families or by extrapolating from population disease incidence, potentially introducing bias. Age-specific cumulative incidence of ALS and dementia was estimated using Kaplan-Meier and competing risk models in C9orf72 HRE carriers compared to matched controls in UK Biobank. Risk modification by UNC13A genotype was examined. Of 490,331 individuals with valid genetic data, 701 had >100 repeats in C9orf72 (median age 55 [IQR 48–62], follow-up 13.4 years [12.3–14.1]). The cumulative incidence of ALS or dementia was 66% [95% CI 57–73%] by age 80 in C9orf72 HRE carriers versus 5.8% [4.5–7.0%] in controls, or 58% [50–64%] versus 5.1% [4.1–6.4%] accounting for the competing risk of other-cause mortality. Forty-one percent of dementia incidence accrued between ages 75 and 80. C-allele homozygosity at rs12608932 in UNC13A increased ALS or dementia risk in C9orf72 HRE carriers (hazard ratio 1.81 [1.18–2.78]). C9orf72 HRE disease was incompletely penetrant in this population-based cohort, with risk modified by UNC13A genotype. This has implications for counselling at-risk individuals and modelling expected phenoconversion for prevention trials.
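The age-specific cumulative incidence reported above comes from Kaplan-Meier estimation. A minimal sketch of the estimator in plain Python (ignoring the competing-risk adjustment, which would use something like an Aalen-Johansen estimator):

```python
def cumulative_incidence(times, events):
    """1 minus the Kaplan-Meier survival estimate at the last observed time.

    times  : follow-up time per individual
    events : 1 if the event occurred at that time, 0 if censored
    At each event time t the survival curve is multiplied by
    (1 - d_t / n_t), where d_t is the number of events at t and n_t
    the number still at risk just before t.
    """
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv = 1.0
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)      # events at this time
        if d:
            surv *= 1 - d / at_risk
        removed = sum(1 for tt, _ in data if tt == t)  # events + censorings
        at_risk -= removed
        i += removed
    return 1 - surv
```

With all-censored data the estimate is 0; when every individual eventually has the event, it reaches 1.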
Human motor cortical gamma activity relates to GABAergic intracortical inhibition and motor learning
Gamma activity (γ, >30 Hz) is universally demonstrated across brain regions and species. However, the physiological basis and functional role of γ sub-bands (slow-γ, mid-γ, fast-γ) have been predominantly studied in rodent hippocampus; γ activity in the human neocortex is much less well understood. We use electrophysiology, non-invasive brain stimulation, and several motor tasks to examine the properties of sensorimotor γ activity sub-bands and their relationship with both local GABAergic activity and motor learning. Data from three experimental studies are presented. Experiment 1 (N = 33) comprises magnetoencephalography (MEG), transcranial magnetic stimulation (TMS), and a motor learning paradigm; experiment 2 (N = 19) uses MEG and motor learning; and experiment 3 (N = 18) uses EEG and TMS. We characterised two distinct γ sub-bands (slow-γ, mid-γ) which show a movement-related increase in activity during unilateral index finger movements and are characterised by distinct temporal–spectral–spatial profiles. Bayesian correlation analysis revealed strong evidence for a positive relationship between slow-γ (~30–60 Hz) peak frequency and GABAergic intracortical inhibition (as assessed using the TMS-metric short interval intracortical inhibition). There was also moderate evidence for a relationship between the power of the movement-related mid-γ activity (60–90 Hz) and motor learning. These relationships were neurochemical and frequency specific. These data provide new insights into the neurophysiological basis and functional roles of γ activity in human M1 and allow the development of a new theoretical framework for γ activity in the human neocortex.
The GLM-spectrum: A multilevel framework for spectrum analysis with covariate and confound modelling
The frequency spectrum is a central method for representing the dynamics within electrophysiological data. Some widely used spectrum estimators make use of averaging across time segments to reduce noise in the final spectrum. The core of this approach has not changed substantially since the 1960s, though many advances in the field of regression modelling and statistics have been made during this time. Here, we propose a new approach, the General Linear Model (GLM) Spectrum, which reframes time-averaged spectral estimation as multiple regression. This brings several benefits, including the ability to do confound modelling, hierarchical modelling, and significance testing via non-parametric statistics. We apply the approach to a dataset of EEG recordings of participants who alternate between eyes-open and eyes-closed resting state. The GLM-Spectrum can model both conditions, quantify their differences, and perform denoising through confound regression in a single step. This application is scaled up from a single channel to a whole-head recording and, finally, applied to quantify age differences across a large group-level dataset. We show that the GLM-Spectrum lends itself to rigorous modelling of within- and between-subject contrasts as well as their interactions, and that the use of model-projected spectra provides an intuitive visualisation. The GLM-Spectrum is a flexible framework for robust multilevel analysis of power spectra, with adaptive covariate and confound modelling.
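The core reframing can be sketched in a few lines: compute one periodogram per time segment, then fit ordinary least squares across segments at each frequency. With an intercept-only design matrix this reduces exactly to the conventional time-averaged spectrum; conditions and confounds are simply extra columns. This is an illustrative sketch, not the authors' implementation (which adds tapering choices, non-parametric statistics and hierarchical modelling):

```python
import numpy as np

def glm_spectrum(signal, design, seg_len):
    """Regress per-segment periodograms on a design matrix.

    signal : 1-D array, split into non-overlapping segments of seg_len
    design : (n_segments, n_regressors) design matrix
    Returns an (n_regressors, n_freqs) array of GLM coefficients,
    one spectrum per regressor.
    """
    n_seg = len(signal) // seg_len
    segs = signal[:n_seg * seg_len].reshape(n_seg, seg_len)
    segs = segs * np.hanning(seg_len)                 # taper each segment
    psd = np.abs(np.fft.rfft(segs, axis=1)) ** 2      # per-segment periodograms
    beta, *_ = np.linalg.lstsq(design, psd, rcond=None)
    return beta

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
# Intercept-only design: the fitted coefficient is the mean periodogram,
# i.e. an ordinary time-averaged (Welch-style, no overlap) spectrum.
beta = glm_spectrum(x, np.ones((1024 // 128, 1)), 128)
```

A condition regressor (e.g. +1 for eyes-open segments, -1 for eyes-closed) would yield a second coefficient spectrum quantifying the condition difference directly.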
Automated quality control of T1-weighted brain MRI scans for clinical research datasets: methods comparison and design of a quality prediction classifier
T1-weighted (T1w) MRI is widely used in clinical neuroimaging for studying brain structure and its changes, including those related to neurodegenerative diseases, and as an anatomical reference for analysing other modalities. Ensuring high-quality T1w scans is vital as image quality affects the reliability of outcome measures. However, visual inspection can be subjective and time consuming, especially with large datasets. The effectiveness of automated quality control (QC) tools for clinical cohorts remains uncertain. In this study, we used T1w scans from elderly participants within ageing and clinical populations to test the accuracy of existing QC tools with respect to visual QC and to establish a new quality prediction framework for clinical research use. Four datasets acquired from multiple scanners and sites were used (N = 2438; 11 sites; 39 scanner manufacturer models; 3 field strengths: 1.5T, 3T and 2.9T; patients and controls; average age 71 ± 8 years). All structural T1w scans were processed with two standard automated QC pipelines (MRIQC and CAT12). The agreement of the accept–reject ratings was compared between the automated pipelines and with visual QC. We then designed a quality prediction framework that combines the QC measures from the existing automated tools and is trained on clinical research datasets. We tested the classifier performance using cross-validation on data from all sites together, also examining the performance across diagnostic groups. We then tested the generalisability of our approach when leaving one site out and explored how well our approach generalises to data from a different scanner manufacturer and/or field strength from those used for training, as well as on an unseen new dataset of healthy young participants with movement-related artefacts.
Our results show significant agreement between automated QC tools and visual QC (Kappa = 0.30 with MRIQC predictions; Kappa = 0.28 with CAT12’s rating) when considering the entire dataset, but the agreement was highly variable across datasets. Our proposed robust undersampling boost (RUS) classifier achieved 87.7% balanced accuracy on the test data combined from different sites (with 86.6% and 88.3% balanced accuracy on scans from patients and controls, respectively). This classifier was also found to be generalisable across different combinations of training and test datasets (average balanced accuracy of leave-one-site-out = 78.2%; exploratory models on field strengths and manufacturers = 77.7%; movement-related artefact dataset when including 1% of scans in the training set = 88.5%). While existing QC tools may not be robustly applicable to datasets comprising older adults, they produce quality metrics that can be leveraged to train more robust quality control classifiers for ageing and clinical cohorts.
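The two headline metrics quoted above are standard and easy to state precisely. A minimal stdlib sketch (not the authors' pipeline) of Cohen's kappa for chance-corrected rater agreement and balanced accuracy as the mean of per-class recall:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Chance-corrected agreement between two categorical ratings."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb.get(k, 0) for k in ca) / n ** 2
    return (observed - expected) / (1 - expected)

def balanced_accuracy(y_true, y_pred):
    """Mean per-class recall; insensitive to class imbalance, which
    matters when rejected scans are rare relative to accepted ones."""
    recalls = []
    for c in set(y_true):
        idx = [i for i, y in enumerate(y_true) if y == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)
```

Predicting "accept" for every scan in a 90%-accept dataset scores 90% plain accuracy but only 50% balanced accuracy, which is why the latter is reported here.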
A Validated Model to Predict Severe Weight Loss in Amyotrophic Lateral Sclerosis
Abstract Severe weight loss in amyotrophic lateral sclerosis (ALS) is common, multifactorial, and associated with shortened survival. Using longitudinal weight data from over 6000 patients with ALS across three cohorts, we built an accelerated failure time model to predict the risk of future severe (≥10%) weight loss using five single-timepoint clinical predictors: symptom duration, revised ALS Functional Rating Scale, site of onset, forced vital capacity, and age. Model performance and generalisability were evaluated using internal-external cross-validation and random-effects meta-analysis. The overall concordance statistic was 0.71 (95% CI 0.63–0.79), and the calibration slope and intercept were 0.91 (0.69 to 1.13) and 0.05 (−0.11 to 0.21). This study highlights the clinical factors most associated with severe weight loss in ALS and provides the basis for a stratification tool.
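The concordance statistic above measures how often the model ranks pairs of patients correctly; with censoring, only pairs in which the shorter follow-up time ends in an observed event are comparable. A minimal sketch of Harrell's C (illustrative; the study's cross-validation wraps around a computation like this):

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C: fraction of comparable pairs ordered correctly.

    A pair (i, j) is comparable when individual i has an observed
    event before time j. A higher risk score should predict the
    earlier event; ties in risk score count as half-correct.
    """
    num = den = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                den += 1
                if risk_scores[i] > risk_scores[j]:
                    num += 1
                elif risk_scores[i] == risk_scores[j]:
                    num += 0.5
    return num / den
```

A C of 0.5 is chance-level ranking and 1.0 is perfect; the reported 0.71 sits in the range typical of usable clinical prognostic models.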
Lower risk of dementia with AS01-adjuvanted vaccination against shingles and respiratory syncytial virus infections.
AS01-adjuvanted shingles (herpes zoster) vaccination is associated with a lower risk of dementia, but the underlying mechanisms are unclear. In propensity-score matched cohort studies with 436,788 individuals, both the AS01-adjuvanted shingles and respiratory syncytial virus (RSV) vaccines, individually or combined, were associated with reduced 18-month risk of dementia. No difference was observed between the two AS01-adjuvanted vaccines, suggesting that the AS01 adjuvant itself plays a direct role in lowering dementia risk.
Volatility-driven learning in human infants.
Adapting to change is a fundamental feature of human learning, yet its developmental origins remain elusive. We developed an experimental and computational approach to track infants' adaptive learning processes via pupil size, an indicator of tonic and phasic noradrenergic activity. We found that 8-month-old infants' tonic pupil size mirrored trial-by-trial fluctuations in environmental volatility, while phasic pupil responses revealed that infants used this information to dynamically optimize their learning. This adaptive strategy resulted in successful task performance, as evidenced by anticipatory looking toward correct target locations. The ability to estimate volatility varied significantly across infants, and these individual differences were related to infant temperament, indicating early links between cognitive adaptation and emotional responsivity. These findings demonstrate that infants actively adapt to environmental change, and that early differences in this capacity may have profound implications for long-term cognitive and psychosocial development.
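The idea that learners should up-weight new evidence when the environment is volatile can be captured by a toy delta-rule model (purely illustrative; this is not the authors' computational model): the learning rate grows with a running average of recent surprise, so estimates update faster in volatile stretches.

```python
def adaptive_learner(outcomes, base_lr=0.1, vol_lr=0.1):
    """Delta-rule learner whose learning rate tracks recent surprise.

    'volatility' is an exponentially weighted average of absolute
    prediction errors; the effective learning rate is the base rate
    plus this volatility estimate (capped at 1), so the learner
    re-adapts quickly after a contingency reversal.
    """
    estimate, volatility = 0.5, 0.0
    history = []
    for o in outcomes:
        error = o - estimate
        volatility += vol_lr * (abs(error) - volatility)
        lr = min(1.0, base_lr + volatility)
        estimate += lr * error
        history.append(estimate)
    return history

# A stable block of rewarded trials followed by a sudden reversal:
# the learner converges, then re-adapts within a few trials.
trace = adaptive_learner([1] * 20 + [0] * 20)
```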
Designing and Comparing Optimised Pseudo-Continuous Arterial Spin Labelling Protocols for Measurement of Cerebral Blood Flow
Abstract Arterial Spin Labelling (ASL) is a non-invasive, non-contrast perfusion imaging technique which is inherently SNR-limited. It is, therefore, important to carefully design scan protocols to ensure accurate measurements. Many pseudo-continuous ASL (PCASL) protocol designs have been proposed for measuring cerebral blood flow (CBF), but it has not yet been demonstrated which design offers the most accurate and repeatable CBF measurements. In this work, a wide range of literature PCASL protocols, including single-delay, sequential and time-encoded multi-timepoint protocols, and several novel protocol designs, which are hybrids of time-encoded and sequential multi-timepoint protocols, were first optimised using a Cramér-Rao Lower Bound framework and then compared for CBF accuracy and repeatability using Monte Carlo simulations and in vivo experiments. It was found that several multi-timepoint protocols produced more confident, accurate, and repeatable CBF estimates than the single-delay protocol, while also generating maps of arterial transit time. One of the novel hybrid protocols, HybridT1-adj, was found to produce the most confident, accurate and repeatable CBF estimates of all protocols tested in both simulations and in vivo (24%, 47%, and 28% more confident, accurate, and repeatable than single-PLD in vivo). The HybridT1-adj protocol makes use of the best aspects of both time-encoded and sequential multi-timepoint protocols and should be a useful tool for accurately and efficiently measuring CBF.
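The Cramér-Rao Lower Bound framework used for protocol optimisation can be illustrated generically: for a signal model s(t; θ) with i.i.d. Gaussian noise of standard deviation σ, the Fisher information is J(θ) = Σ_t (∂s/∂θ)² / σ², and 1/J lower-bounds the variance of any unbiased estimator of θ. Candidate protocols (sets of sampling times) can then be ranked by the bound they achieve. A one-parameter sketch with a generic linear model, not the ASL kinetic model itself:

```python
def crlb(model, theta, times, sigma, eps=1e-6):
    """Cramér-Rao lower bound on Var(theta_hat) for a one-parameter
    signal model sampled at the given times, assuming i.i.d. Gaussian
    noise of standard deviation sigma.

    The sensitivity ds/dtheta is taken by central finite differences,
    so any callable model(theta, t) can be scored.
    """
    info = 0.0
    for t in times:
        ds = (model(theta + eps, t) - model(theta - eps, t)) / (2 * eps)
        info += ds * ds / sigma ** 2
    return 1.0 / info

# For the linear model s = theta * t the bound is sigma^2 / sum(t^2),
# so adding samples (or sampling where sensitivity is high) tightens it.
linear = lambda theta, t: theta * t
bound = crlb(linear, 2.0, [1.0, 2.0, 3.0], sigma=1.0)
```

Protocol optimisation then amounts to choosing the sampling scheme, under a fixed scan-time budget, that minimises this bound for the parameters of interest (here, CBF and arterial transit time).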