A quick overview of the labels used and generated by this framework follows.
An example annotated output tree is here. This shows the results of a real run of ANTsPyMM on multiple modality MRI (M3RI).
The ANTsPyMM system takes advantage of the templates, core T1 processing and super-resolution described in this paper.
We quantify cortex with the Desikan-Killiany-Tourville
parcellation (DKT).
*dktregions* refers to the segmentation labels taken directly from the deep learning method.
*dktcortex* refers to the segmentation labels restricted to the cortical segmentation provided by the tissue segmentation method.
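As an illustration of the distinction (a hypothetical sketch, not the ANTsPyMM implementation), *dktcortex* can be thought of as the DKT labels masked by the cortex class of the tissue segmentation. The arrays and the assumption that tissue label 2 means cortical gray matter are invented for this toy example:

```python
# Toy 1-D stand-ins for image volumes: DKT labels from the deep learning
# model and a tissue segmentation where (assumed) label 2 = cortical gray matter.
dkt_regions = [1002, 1002, 1035, 1035, 0, 2006]
tissue_seg = [2, 1, 2, 3, 0, 2]

# Keep a DKT label only where the tissue segmentation says cortex.
dkt_cortex = [lab if tis == 2 else 0 for lab, tis in zip(dkt_regions, tissue_seg)]
print(dkt_cortex)  # [1002, 0, 1035, 0, 0, 2006]
```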
Subcortical segmentation, via both deep learning and SyN registration, leverages the CIT168 atlas (10.1101/211201).
A medial temporal lobe parcellation is based on manual labels from M. Yassa’s research group (preprint) and is described here.
A cerebellum parcellation is based on a template from the CoBrALab with additional manual editing by the Tustison family (publication in progress). The parcellation scheme is defined using the Schmahmann nomenclature (reviewed here). The core tool is named cerebellum_morphology and is based on a concatenation of two U-nets.
The brain stem is subdivided into the midbrain, pons and medulla. This segmentation derives from registration between the individual T1w image and the manually labeled CIT168 template. The protocol is based on that used in FreeSurfer.
We segment anatomy related to the basal forebrain and its associated cholinergic neurons with a deep learning method (deepNBM) derived from manual labeling of ADNI data. This approach is based on anatomical landmarks proposed in Liu, A. et al. (2015). The deepNBM method uses super-resolution segmentation to take advantage of manual labels performed on super-resolution training data.
The classic Zaborszky, L. et al. (2008) “Stereotaxic probabilistic maps of the magnocellular cell groups in human basal forebrain.” Neuroimage 42: 1127–1141 also informed the manual labeling.
This work was presented at the Human Brain Mapping conference in 2022.
The Mori JHU white matter atlas provides a parcellation of white matter regions (paper here) that is applied to diffusion-weighted images.
Jonathan Power’s coordinates from “Functional Network Organization of the Human Brain” (10.1016/j.neuron.2011.09.006) were previously used to guide our analysis of resting state functional MRI (rsfMRI). We also based our processing evaluation on these data. However, we now use the homotopic coordinates described below.
Yeo’s homotopic local-global parcellation of the human cerebral cortex from resting-state functional connectivity is the new default that we use for rsfMRI instead of the JP coordinates. Additional details about the parcellation are here. We use the variant with 500 homotopic parcels, i.e., 250 per hemisphere.
These parcels are used to generate inter- and intra-network correlations using the following networks (which are further subdivided such that 17 network names are available):
Visual Network: Processes visual information, including spatial and motion processing.
Somatomotor Network: Involved in the control of movement and processing of somatosensory information.
Dorsal Attention Network: Plays a key role in top-down attentional control and spatial orientation.
Ventral Attention Network: Involved in detecting unexpected stimuli and bottom-up attention.
Limbic Network: Associated with emotion processing, memory, and autonomic functions.
Frontoparietal Network: Important for cognitive functions including working memory and executive control.
Default Mode Network (DMN): Active during rest and involved in self-referential thought, memory retrieval, and planning for the future.
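The inter- and intra-network correlations above can be sketched as follows. This is a simplified illustration, not the ANTsPyMM implementation: the network names, parcel counts, and random time series are all invented, and intra-network correlation is taken as the mean correlation over distinct parcel pairs within a network, inter-network as the mean over all cross-network pairs:

```python
import math
import random

random.seed(42)

def pearson(x, y):
    # Pearson correlation of two equal-length sequences
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (math.sqrt(sum((a - mx) ** 2 for a in x))
           * math.sqrt(sum((b - my) ** 2 for b in y)))
    return num / den

# Toy parcel time series grouped by (hypothetical) network membership.
networks = {
    "Visual": [[random.gauss(0, 1) for _ in range(60)] for _ in range(3)],
    "Default": [[random.gauss(0, 1) for _ in range(60)] for _ in range(3)],
}

def intra_corr(parcels):
    # mean correlation over distinct parcel pairs within one network
    vals = [pearson(parcels[i], parcels[j])
            for i in range(len(parcels)) for j in range(i + 1, len(parcels))]
    return sum(vals) / len(vals)

def inter_corr(parcels_a, parcels_b):
    # mean correlation over all cross-network parcel pairs
    vals = [pearson(a, b) for a in parcels_a for b in parcels_b]
    return sum(vals) / len(vals)

intra_visual = intra_corr(networks["Visual"])
inter_vis_dmn = inter_corr(networks["Visual"], networks["Default"])
```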
The 17-network abbreviations are adapted from Ru Kong’s work.
rsfMRI phenotype variables like “fcnxpro129” indicate that parameter set 129 was used. Currently, 122 is considered “best,” but 129 and 134 are also provided; they differ in the censoring and nuisance variables used in the processing.
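Since the parameter set is encoded in the column name, one way to restrict an output table to a single parameter set is a simple substring filter. The column names below are made up to illustrate the naming pattern:

```python
# Illustrative only: hypothetical column names showing the "fcnxproNNN" token.
columns = [
    "rsfMRI_fcnxpro122_DefaultA_2_DefaultB_1",
    "rsfMRI_fcnxpro129_DefaultA_2_DefaultB_1",
    "rsfMRI_fcnxpro134_VisualA_1_VisualB_3",
]

# Select only the variables produced with the currently recommended set, 122.
best = [c for c in columns if "fcnxpro122" in c]
print(best)  # ['rsfMRI_fcnxpro122_DefaultA_2_DefaultB_1']
```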
We also provide voxel-wise measurements within each parcel, including:
fALFF and ALFF, each divided by the global mean of the brain (mfALFF and mALFF)
percent amplitude of fluctuation (PerAF) divided by the global PerAF mean of the brain (mPerAF)
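These measures can be sketched for a single voxel as below. This is a simplified sketch of common definitions (ALFF as low-frequency amplitude, fALFF as its fraction of total non-DC amplitude, PerAF as mean absolute percent deviation from the temporal mean); the exact formulas, band edges, and TR used by ANTsPyMM may differ:

```python
import numpy as np

rng = np.random.default_rng(0)
tr = 2.0                                  # repetition time in seconds (assumed)
x = 100.0 + rng.normal(0.0, 1.0, 240)     # toy voxel time series

# Amplitude spectrum of the demeaned series
freqs = np.fft.rfftfreq(x.size, d=tr)
amp = np.abs(np.fft.rfft(x - x.mean()))

low = (freqs >= 0.01) & (freqs <= 0.1)    # conventional low-frequency band
alff = amp[low].sum()                     # ALFF: low-frequency amplitude
falff = alff / amp[1:].sum()              # fALFF: fraction of non-DC amplitude

# PerAF: mean absolute percent deviation from the voxel's temporal mean
m = x.mean()
peraf = 100.0 * np.mean(np.abs((x - m) / m))

# The "m" variants (mALFF, mfALFF, mPerAF) then divide each voxel's value
# by the corresponding global mean over the brain.
```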
The ANTsPyMM data dictionary associates each column name with each of the above anatomical references. Table 1 below summarizes the number of variables associated with each modality and anatomical prior space. DTI-derived connectivity yields the largest number of variables because we pairwise map each DKT cortical region as well as the primary CIT168 regions.
|  | DTI (N=709) | Flair (N=46) | Neuromelanin (N=44) | Other (N=409) | restingStatefMRI (N=2143) | T1 DiReCT thickness processing (N=63) | T1 hierarchical processing (N=853) | unique id (N=2) | p |
|---|---|---|---|---|---|---|---|---|---|
| Atlas |  |  |  |  |  |  |  |  | < 0.001 |
| ANTs | 127 (17.9%) | 46 (100.0%) | 28 (63.6%) | 409 (100.0%) | 0 (0.0%) | 1 (1.6%) | 21 (2.5%) | 0 (0.0%) |  |
| BF | 32 (4.5%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 24 (2.8%) | 0 (0.0%) |  |
| CIT168 | 200 (28.2%) | 0 (0.0%) | 16 (36.4%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 189 (22.2%) | 0 (0.0%) |  |
| desikan-killiany-tourville | 252 (35.5%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 62 (98.4%) | 486 (57.0%) | 0 (0.0%) |  |
| johns hopkins white matter | 98 (13.8%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) |  |
| MTL | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 36 (4.2%) | 0 (0.0%) |  |
| quality control metrics | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 25 (2.9%) | 0 (0.0%) |  |
| TustisonCobra | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 72 (8.4%) | 0 (0.0%) |  |
| unique id | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 2 (100.0%) |  |
| yeo_homotopic | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 2143 (100.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) |  |
| Measurement | unique values: 64 | unique values: 64 | unique values: 64 | unique values: 64 | unique values: 64 | unique values: 64 | unique values: 64 | unique values: 64 |  |
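The quadratic growth of connectivity variables with the number of regions can be illustrated with a quick count. The region counts below are assumptions chosen for illustration, not the exact numbers used by ANTsPyMM:

```python
from math import comb

# Illustrative counts (assumptions): DKT cortical regions plus primary
# CIT168 regions included in the connectivity graph.
n_dkt_cortical = 62
n_cit168 = 16
n_nodes = n_dkt_cortical + n_cit168

# Pairwise mapping of every node pair yields O(n^2) connectivity variables.
n_pairs = comb(n_nodes, 2)
print(n_pairs)  # 3003
```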
Automated (blind) QC is part of standard ANTsPyMM processing. QC metrics are therefore present in the standard output tables. Notes on the meaning of these metrics, along with examples of the metrics on images, are here.
The most important QC metric is T1Hier_resnetGrade, which rejects truly bad images. We recommend rejecting images with T1Hier_resnetGrade < 1.01; higher thresholds may be useful in some cases. This grading function uses deep learning to generate a pseudo-continuous measurement of aggregate T1w image quality. Several factors contribute to these scores and it is not a perfect approach, but it is well-defined and was evaluated against manual ratings here.
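Applying the recommended cutoff is a one-line filter over the output table. The subject IDs and grades below are invented for illustration:

```python
# Toy QC rows (values invented); keep only rows at or above the
# recommended T1Hier_resnetGrade cutoff of 1.01.
rows = [
    {"id": "sub-01", "T1Hier_resnetGrade": 1.52},
    {"id": "sub-02", "T1Hier_resnetGrade": 0.74},   # truly bad image
    {"id": "sub-03", "T1Hier_resnetGrade": 1.01},
]
keep = [r for r in rows if r["T1Hier_resnetGrade"] >= 1.01]
print([r["id"] for r in keep])  # ['sub-01', 'sub-03']
```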
The outlierness function produces additional blind QC measurements via automated outlierness calculations. These measurements are denoted by Local Outlier Probability (LoOP, *loop*) and Local Outlier Factor (LOF, *lof*) column names.
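To convey the intuition behind these scores, here is a simplified LOF-style calculation in pure Python: each point's local density (inverse mean distance to its k nearest neighbours) is compared to its neighbours' densities, so points in locally sparse regions score well above 1. This is a pedagogical sketch, not the outlierness implementation, which uses the full LOF and LoOP formulations:

```python
import math

def _neighbors(points, i, k):
    # indices of the k nearest neighbours of point i (brute force)
    order = sorted((j for j in range(len(points)) if j != i),
                   key=lambda j: math.dist(points[i], points[j]))
    return order[:k]

def simplified_lof(points, k=3):
    # local density = inverse of the mean distance to the k nearest neighbours
    nbrs, dens = [], []
    for i in range(len(points)):
        idx = _neighbors(points, i, k)
        nbrs.append(idx)
        mean_d = sum(math.dist(points[i], points[j]) for j in idx) / k
        dens.append(1.0 / max(mean_d, 1e-12))
    # score = neighbours' mean density over the point's own density;
    # values well above 1 flag locally sparse (outlying) points
    return [sum(dens[j] for j in nbrs[i]) / (k * dens[i])
            for i in range(len(points))]

# A tight cluster plus one distant point: the distant point scores highest.
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (5.0, 5.0)]
scores = simplified_lof(pts, k=3)
```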
Time series images may have additional QC metrics such as:
TSNR: temporal signal to noise ratio
SSNR: spatial signal to noise ratio on the mean image (different from TSNR)
DVARS: the spatial root mean square of the data after temporal differencing
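The TSNR and DVARS definitions above can be written directly from the data. A minimal sketch on toy 2-D (time x voxel) data; a real image is 4-D and would be reshaped first:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy data: 100 time points x 500 voxels
data = 1000.0 + rng.normal(0.0, 1.0, (100, 500))

# TSNR: temporal mean divided by temporal standard deviation, per voxel
tsnr = data.mean(axis=0) / data.std(axis=0)

# DVARS: spatial root mean square of the temporally differenced data,
# yielding one value per pair of consecutive frames
diff = np.diff(data, axis=0)
dvars = np.sqrt((diff ** 2).mean(axis=1))
```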
Neuromelanin (NM) has its own set of variables, including:
count: the number of acquisitions at that date
min/max/mean/sd: statistics on the raw image signal after averaging the acquisitions
avg/std_refregion: average and standard deviation of signal in the reference region
avg/std_substantianigra: average and standard deviation of signal in the substantia nigra or a related CIT168 region (determined by deepCIT168)
NM2DMT_NM_substantianigra_z_coordinate: estimate of the normalized (zero to one) z-coordinate of the substantia nigra in a neuromelanin scan; higher values mean higher in the slab.
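A zero-to-one coordinate of this kind amounts to min-max normalization along the slab axis. The sketch below normalizes a slice index; the actual computation may instead use a centroid in physical space:

```python
def normalized_z(slice_index, n_slices):
    # Map an axial slice index to a 0..1 coordinate within the slab;
    # higher values mean higher in the slab. Illustrative only.
    return slice_index / (n_slices - 1)

print(normalized_z(0, 11), normalized_z(10, 11))  # 0.0 1.0
```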
The framewise displacement (FD) values are roughly 10x larger in DTI data than in rsfMRI data. For example, if 0.3 were a reasonable high-motion threshold in rsfMRI, then 3.0 would be a comparable threshold for DTI.
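In practice this means scaling the censoring threshold by modality before flagging high-motion frames. A minimal sketch with invented FD traces:

```python
def frames_to_keep(fd, threshold):
    # True marks frames whose framewise displacement is at or below threshold
    return [f <= threshold for f in fd]

rs_fd = [0.10, 0.25, 0.40, 0.05]    # toy rsfMRI FD trace (mm)
dti_fd = [1.0, 2.5, 4.0, 0.5]       # toy DTI FD trace, ~10x larger scale

keep_rs = frames_to_keep(rs_fd, 0.3)
keep_dti = frames_to_keep(dti_fd, 0.3 * 10)  # scale the threshold for DTI
print(keep_rs, keep_dti)  # [True, True, False, True] [True, True, False, True]
```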
A sortable version of the data dictionary is below (if viewing on a local machine).
We provide an example of how to join derived ANTsPyMM summary data with clinical variables from PPMI:

```r
library(subtyper)
?merge_ppmi_imaging_clinical_demographic_data
```