Brainload API Documentation
Brainload high-level API functions.
brainload.subject(subject_id, surf='white', measure='area', hemi='both', subjects_dir=None, meta_data=None, load_surface_files=True, load_morphometry_data=True)
Load FreeSurfer brain morphometry and/or mesh data for a single subject.
High-level interface to load FreeSurfer brain data for a single subject in native space. This loads the data for the surfaces of this subject. If you want to load data that has been mapped to an average subject like ‘fsaverage’, use subject_avg instead.
Parameters: - subject_id (string) – The subject identifier of the subject. As always, it is assumed that this is the name of the directory containing the subject’s data, relative to subjects_dir. Example: ‘subject33’.
- measure (string, optional) – The measure to load, e.g., ‘area’ or ‘curv’. Defaults to ‘area’.
- surf (string, optional) – The brain surface where the data has been measured, e.g., ‘white’ or ‘pial’. This will become part of the file name that is loaded. Defaults to ‘white’.
- hemi ({'both', 'lh', 'rh'}, optional) – The hemisphere that should be loaded. Defaults to ‘both’.
- subjects_dir (string, optional) – A string representing the full path to a directory. This should be the directory containing all subjects of your study. Defaults to the environment variable SUBJECTS_DIR if omitted. If that is not set, the current working directory (the directory from which the application was executed) is used instead.
- meta_data (dictionary, optional) – A dictionary that should be merged into the return value meta_data. Defaults to the empty dictionary if omitted.
- load_surface_files (boolean, optional) – Whether to load mesh data. If set to False, the first two return values, vert_coords and faces, will be None. Defaults to True.
- load_morphometry_data (boolean, optional) – Whether to load morphometry data. If set to False, the third return value, morphometry_data, will be None. Defaults to True.
Returns: vert_coords (numpy array) – A 2-dimensional array containing the vertices of the mesh(es) of the subject. Each vertex entry contains 3 coordinates. Each coordinate describes a 3D position in a FreeSurfer surface file (e.g., ‘lh.white’), as returned by the nibabel function nibabel.freesurfer.io.read_geometry.
faces (numpy array) – A 2-dimensional array containing the 3-faces of the mesh(es) of the subject. Each face entry contains 3 indices. Each index references the respective vertex in the vert_coords array.
morphometry_data (numpy array) – A numpy array with as many entries as there are vertices in the subject. If you load two hemispheres instead of one, the length doubles. You can get the start indices for data of the hemispheres in the returned meta_data, see meta_data[‘lh.num_vertices’] and meta_data[‘rh.num_vertices’]. The data for the left hemisphere always comes first (if both were loaded). Indices start at 0, so if the left hemisphere has n vertices, its data are at indices 0..n-1, and the data for the right hemisphere start at index n. Note that the two hemispheres do in general NOT have the same number of vertices. A short sketch showing how to split the combined array follows the examples below.
meta_data (dictionary) –
- A dictionary containing detailed information on all files that were loaded and used settings. The following keys are available (depending on the value of the hemi argument, you can replace ?h with ‘lh’ or ‘rh’ or both ‘lh’ and ‘rh’):
- ?h.num_data_points : the number of data points loaded.
- ?h.morphometry_file : the value of the ?h_morphometry_data_file argument (data file that was loaded)
- ?h.morphometry_file_format : the value for format that was used
- ?h.num_vertices : number of vertices in the loaded mesh
- ?h.num_faces : number of faces in the loaded mesh
- ?h.surf_file : the mesh file that was loaded for this hemisphere
- subject_id : the subject id
- subjects_dir : the subjects dir that was used
- surf : the surf that was used, e.g., ‘white’
- measure : the measure that was loaded as morphometry data, e.g., ‘area’
- space : always the string ‘subject’. This means that the data loaded represent morphometry data taken from the subject’s surface (as opposed to data mapped to a common or average subject).
- hemi : the hemi value that was used
Raises: ValueError
– If one of the parameters with a fixed set of values receives a value that is not allowed.
Examples
Load area data for both hemispheres and white surface of subject1 in the directory defined by the environment variable SUBJECTS_DIR:
>>> import brainload as bl
>>> vertices, faces, data, md = bl.subject('subject1')
Here, we are a bit more explicit about what we want to load:
>>> import os
>>> user_home = os.getenv('HOME')
>>> subjects_dir = os.path.join(user_home, 'data', 'my_study_x')
>>> vertices, faces, data, md = bl.subject('subject1', hemi='lh', measure='curv', subjects_dir=subjects_dir)
Sometimes we do not care for the mesh, e.g., we only want the morphometry data:
>>> data, md = bl.subject('subject1', hemi='rh', load_surface_files=False)[2:4]
…or the other way around (mesh only, no morphometry data):
>>> vertices, faces = bl.subject('subject1', hemi='rh', load_morphometry_data=False)[0:2]
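When both hemispheres are loaded, the left-hemisphere values come first in the combined morphometry array, as described in the Returns section above. A minimal sketch of splitting the array into per-hemisphere parts, assuming only the documented meta_data key ‘lh.num_data_points’:
>>> import brainload as bl
>>> vertices, faces, data, md = bl.subject('subject1')
>>> n_lh = md['lh.num_data_points']   # number of left hemisphere values, which come first
>>> lh_data, rh_data = data[:n_lh], data[n_lh:]
>>> lh_data.shape[0] == md['lh.num_data_points']
True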
brainload.subject_avg(subject_id, measure='area', surf='white', display_surf='white', hemi='both', fwhm='10', subjects_dir=None, average_subject='fsaverage', subjects_dir_for_average_subject=None, meta_data=None, load_surface_files=True, load_morphometry_data=True, custom_morphometry_files=None)
Load a subject’s morphometry data that has been mapped to an average subject.
Load data for a single subject that has been mapped to an average subject like the fsaverage subject from FreeSurfer. Can also load the mesh of an arbitrary surface for the average subject.
Parameters: - subject_id (string) – The subject identifier of the subject. As always, it is assumed that this is the name of the directory containing the subject’s data, relative to subjects_dir. Example: ‘subject33’.
- measure (string, optional) – The measure to load, e.g., ‘area’ or ‘curv’. Defaults to ‘area’.
- surf (string, optional) – The brain surface where the data has been measured, e.g., ‘white’ or ‘pial’. This will become part of the file name that is loaded. Defaults to ‘white’.
- hemi ({'both', 'lh', 'rh'}, optional) – The hemisphere that should be loaded. Defaults to ‘both’.
- fwhm (string or None, optional) – The smoothing level (full width at half maximum, in mm) of the standard space data to load. FreeSurfer usually generates standard space files for a number of smoothing settings. Defaults to ‘10’. If None is passed, the .fwhmX part is omitted from the file name completely. Set this to ‘0’ to get the unsmoothed version.
- subjects_dir (string, optional) – A string representing the full path to a directory. This should be the directory containing all subjects of your study. Defaults to the environment variable SUBJECTS_DIR if omitted. If that is not set, the current working directory (the directory from which the application was executed) is used instead.
- average_subject (string, optional) – The name of the average subject to which the data was mapped. Defaults to ‘fsaverage’.
- display_surf (string, optional) – The surface of the average subject for which the mesh should be loaded, e.g., ‘white’, ‘pial’, ‘inflated’, or ‘sphere’. Defaults to ‘white’. Ignored if load_surface_files is False.
- subjects_dir_for_average_subject (string, optional) – A string representing the full path to a directory. This can be used if the average subject is not in the same directory as all your study subjects. Defaults to the setting of subjects_dir.
- meta_data (dictionary, optional) – A dictionary that should be merged into the return value meta_data. Defaults to the empty dictionary if omitted.
- load_surface_files (boolean, optional) – Whether to load mesh data. If set to False, the first two return values, vert_coords and faces, will be None. Defaults to True.
- load_morphometry_data (boolean, optional) – Whether to load morphometry data. If set to False, the third return value, morphometry_data, will be None. Defaults to True.
- custom_morphometry_files (dictionary, optional) – Custom filenames for the left and right hemisphere data files that should be loaded. A dictionary of strings with exactly the following two keys: lh and rh. The value strings must contain hardcoded file names or template strings for them. As always, the files will be loaded relative to the surf/ directory of the respective subject. Example: {‘lh’: ‘lefthemi.nonstandard.mymeasure44.mgh’, ‘rh’: ‘righthemi.nonstandard.mymeasure44.mgh’}.
Returns: vert_coords (numpy array) – A 2-dimensional array containing the vertices of the mesh(es) of the average subject. Each vertex entry contains 3 coordinates. Each coordinate describes a 3D position in a FreeSurfer surface file (e.g., ‘lh.white’), as returned by the nibabel function nibabel.freesurfer.io.read_geometry.
faces (numpy array) – A 2-dimensional array containing the 3-faces of the mesh(es) of the average subject. Each face entry contains 3 indices. Each index references the respective vertex in the vert_coords array.
morphometry_data (numpy array) – A numpy array with as many entries as there are vertices in the average subject. If you load two hemispheres instead of one, the length doubles. You can get the start indices for data of the hemispheres in the returned meta_data, see meta_data[‘lh.num_vertices’] and meta_data[‘rh.num_vertices’]. You can be sure that the data for the left hemisphere will always come first (if both were loaded). Indices start at 0, of course. So if the left hemisphere has n vertices, the data for them are at indices 0..n-1, and the data for the right hemisphere start at index n. In many cases, your average subject will have the same number of vertices for both hemispheres and you will know this number beforehand, so you may not have to worry about this at all.
meta_data (dictionary) –
- A dictionary containing detailed information on all files that were loaded and used settings. The following keys are available (depending on the value of the hemi argument, you can replace ?h with ‘lh’ or ‘rh’ or both ‘lh’ and ‘rh’):
- ?h.num_data_points : the number of data points loaded.
- ?h.morphometry_file : the value of the ?h_morphometry_data_file argument (data file that was loaded)
- ?h.morphometry_file_format : the value for format that was used
- ?h.num_vertices : number of vertices in the loaded mesh
- ?h.num_faces : number of faces in the loaded mesh
- ?h.surf_file : the mesh file that was loaded for this hemisphere
- subject_id : the subject id
- subjects_dir : the subjects dir that was used
- surf : the surf that was used, e.g., ‘white’
- measure : the measure that was loaded as morphometry data, e.g., ‘area’
- space : always the string ‘common’. This means that the data loaded represent morphometry data that has been mapped to a common or average subject.
- hemi : the hemi value that was used
- display_subject : the name of the common or average subject. This is the subject the surface meshes originate from. Usually ‘fsaverage’.
- display_surf : the surface of the common subject that has been loaded. Something like ‘pial’, ‘white’, or ‘inflated’.
Raises: ValueError
– If one of the parameters with a fixed set of values receives a value that is not allowed.
Examples
Load area data for both hemispheres and white surface of subject1 in the directory defined by the environment variable SUBJECTS_DIR, mapped to fsaverage:
>>> import brainload as bl
>>> v, f, data, md = bl.subject_avg('subject1')
>>> print md['surf']
white
Here, we are a bit more picky and explicit about what we want to load:
>>> import os
>>> import brainload as bl
>>> user_home = os.getenv('HOME')
>>> subjects_dir = os.path.join(user_home, 'data', 'my_study_x')
>>> v, f, data, md = bl.subject_avg('subject1', hemi='lh', measure='curv', fwhm='15', display_surf='inflated', subjects_dir=subjects_dir)
Sometimes we do not care for the mesh, e.g., we only want the morphometry data:
>>> import brainload as bl
>>> data, md = bl.subject_avg('subject1', hemi='rh', fwhm='15', load_surface_files=False)[2:4]
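If your standard space files do not follow the default naming scheme, the custom_morphometry_files parameter documented above can point to them directly. A brief sketch using the illustrative file names from the parameter description (the files are looked up relative to the subject’s surf/ directory):
>>> import brainload as bl
>>> custom_files = {'lh': 'lefthemi.nonstandard.mymeasure44.mgh', 'rh': 'righthemi.nonstandard.mymeasure44.mgh'}
>>> v, f, data, md = bl.subject_avg('subject1', custom_morphometry_files=custom_files)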
brainload.group(measure, surf='white', hemi='both', fwhm='10', subjects_dir=None, average_subject='fsaverage', group_meta_data=None, subjects_list=None, subjects_file='subjects.txt', subjects_file_dir=None, custom_morphometry_file_templates=None, subjects_detection_mode='auto')
Load morphometry data for a number of subjects.
Load group data, i.e., morphometry data for all subjects in a study that has already been mapped to standard space and is ready for group analysis. The information given in the parameters measure, surf, hemi, and fwhm is used to construct the file name that will be loaded by default. This function will NOT load the meshes.
Parameters: - measure (string) – The measure to load, e.g., ‘area’ or ‘curv’. Data files for this measure have to exist for all subjects.
- surf (string, optional) – The brain surface where the data has been measured, e.g., ‘white’ or ‘pial’. Defaults to ‘white’.
- hemi ({'both', 'lh', 'rh'}, optional) – The hemisphere that should be loaded. Defaults to ‘both’.
- fwhm (string or None, optional) – The smoothing level (full width at half maximum, in mm) of the standard space data to load. FreeSurfer usually generates standard space files for a number of smoothing settings. Defaults to ‘10’. If None is passed, the .fwhmX part is omitted from the file name completely. Set this to ‘0’ to get the unsmoothed version.
- subjects_dir (string, optional) – A string representing the full path to a directory. Defaults to the environment variable SUBJECTS_DIR if omitted. If that is not set, the current working directory (the directory from which the application was executed) is used instead.
- average_subject (string, optional) – The name of the average subject to which the data was mapped. Defaults to ‘fsaverage’.
- group_meta_data (dictionary, optional) – A dictionary that should be merged into the return value group_meta_data. Defaults to the empty dictionary if omitted.
- subjects_list (list of strings, optional (unless subjects_detection_mode is set to list)) – A list of subject identifiers or directory names that should be loaded from the subjects_dir. Example list: [‘subject1’, ‘subject2’]. Defaults to None. Only allowed if subjects_detection_mode is auto or list. In auto mode, this takes precedence over all other options, i.e., if a subjects_list and the (default or custom) subjects_file are given, the subjects_list will be used.
- subjects_file_dir (string, optional) – A string representing the full path to a directory. This directory must contain the subjects_file (see below). Defaults to the subjects_dir.
- subjects_file (string, optional) – The name of the subjects file, relative to the subjects_file_dir. Defaults to ‘subjects.txt’. The file must be a simple text file that contains one subject_id per line. It can be a CSV file that has other data following, but the subject_id has to be the first item on each line and the separator must be a comma. So a line is allowed to look like this: subject1, 35, center1, 147. No header is allowed. If you have a different format, consider reading the file yourself and pass the result as subjects_list instead.
- custom_morphometry_file_templates (dictionary, optional) –
- Custom filenames for the left and right hemisphere data files that should be loaded. A dictionary of strings with exactly the following two keys: lh and rh. The value strings can contain hardcoded file names or template strings for them. As always, the files will be loaded relative to the surf/ directory of the respective subject. Example for hard-coded files: {‘lh’: ‘lefthemi.nonstandard.mymeasure44.mgh’, ‘rh’: ‘righthemi.nonstandard.mymeasure44.mgh’}. The strings may contain any of the following variables, which will be replaced by what you supplied to the other arguments of this function:
- ${MEASURE} will be replaced with the value of measure.
- ${SURF} will be replaced with the FreeSurfer file name part for the surface surf. This is the empty string if surf is ‘white’, and a dot followed by the value of surf for all other settings of surf. Examples: when surf is ‘pial’, this will be replaced with ‘.pial’ (Note the dot!). If surf is ‘white’, this will be replaced with the empty string.
- ${SURF_RAW} will be replaced with the value of surf.
- ${HEMI} will be replaced with ‘lh’ for the left hemisphere, and with ‘rh’ for the right hemisphere.
- ${FWHM} will be replaced with the value of fwhm, so something like ‘10’.
- ${SUBJECT_ID} will be replaced by the id of the subject that is being loaded, e.g., ‘subject3’.
- ${AVERAGE_SUBJECT} will be replaced by the value of average_subject.
Note that only ${SURF} and ${HEMI} are usually needed, everything else can be hardcoded (or is not part of typical FreeSurfer file names at all, like ${SUBJECT_ID}). Example template string: subj_${SUBJECT_ID}_hemi_${HEMI}.alsononstandard.mgh. Complete example for template strings in dictionary: {‘lh’: ‘subj_${SUBJECT_ID}_hemi_${HEMI}.alsononstandard.mgh’, ‘rh’: ‘subj_${SUBJECT_ID}_hemi_${HEMI}.alsononstandard.mgh’}. A short substitution sketch follows the examples of this function below.
- subjects_detection_mode ({'auto', 'list', 'file', 'search_dir'}, optional) –
- The method used to determine the subjects that should be loaded. Defaults to ‘auto’. You can always see which mode was used by looking at the returned run_meta_data, see run_meta_data[‘subjects_detection_mode’].
- ’auto’: In this mode, all available methods will be tried in the following order: If a subjects_list is given, it is used. Otherwise, the subjects_file is used if it exists. Note that this may be the default file, ‘$SUBJECTS_DIR/subjects.txt’, or another one if it has been explicitly defined by setting subjects_file and/or subjects_file_dir. If the file does not exist, the directory is searched for directories containing FreeSurfer data as defined in the section for ‘search_dir’ mode below. You can always see which method was used in auto mode by looking at the returned run_meta_data, see run_meta_data[‘subjects_detection_mode_auto_used_method’].
- ’list’: In this mode, the given subjects_list is used, and you have to supply one. If not, an error is raised. You are not allowed to supply a subjects_file in this mode, or an error will be raised.
- ’file’: In this mode, the subjects file is used. Note that this may be the default file, ‘$SUBJECTS_DIR/subjects.txt’, or another one if it has been explicitly defined by setting subjects_file and/or subjects_file_dir. If the file does not exist, an error is raised. You can see which file was used by looking at the returned run_meta_data, see run_meta_data[‘subjects_file’]. You are not allowed to supply a subjects_list in this mode, or an error will be raised.
- ’search_dir’: In this mode, the subjects_dir (default or explicitly given) is searched for sub directories which look as if they could contain FreeSurfer data. The latter means that they contain a sub directory named ‘surf’. There is one exception though: if the name of one such directory equals the name of the average_subject, the directory is skipped. You are not allowed to supply a subjects_list in this mode, or an error will be raised.
Returns: - group_morphometry_data (numpy array) – An array filled with the morphometry data for the subjects. The array has shape (n, m) where n is the number of subjects, and m is the number of vertices of the standard subject. (If you load both hemispheres instead of one, m doubles.) To get the subject id for the entries, look at the respective index in the returned subjects_list.
- subjects_list (list of strings) – A list containing the subject identifiers in the same order as the data in group_morphometry_data. (If subjects_detection_mode is ‘list’ or ‘file’, the input order is preserved. In ‘search_dir’ mode, or in ‘auto’ mode when it falls back to ‘search_dir’, this list is what tells you the order: you can use the index of a subject in this list to find its data in group_morphometry_data, as it will have the same index. See the examples below.)
- group_meta_data (dictionary) – A dictionary containing detailed information on all subjects and files that were loaded. Each of its keys is a subject identifier. The data value is another dictionary that contains all meta data for this subject as returned by the subject_avg function.
- run_meta_data (dictionary) – A dictionary containing general information on the settings used when executing the function and determining which subjects to load.
Raises: ValueError
– If one of the parameters with a fixed set of values receives a value that is not allowed.
Examples
Load area data for all subjects in the directory defined by the environment variable SUBJECTS_DIR:
>>> import brainload as bl
>>> data, subjects, group_md, run_md = bl.group('area')
Here, we load curv data for the right hemisphere, computed on the pial surface, with a smoothing of 20:
>>> data, subjects, group_md, run_md = bl.group('curv', hemi='rh', surf='pial', fwhm='20')
We may want to be a bit more explicit about which subjects are loaded from where:
>>> import os
>>> import brainload as bl
>>> subjects_dir = os.path.join(os.getenv('HOME'), 'data', 'my_study_x')
>>> subjects_list = ['subject1', 'subject4', 'subject8']
>>> data, subjects, group_md, run_md = bl.group('curv', fwhm='20', subjects_dir=subjects_dir, subjects_list=subjects_list)
Continuing the last example, we may want to have a look at the curv value of the vertex at index 100000 of the subject ‘subject4’:
>>> subject4_idx = subjects.index('subject4')
>>> print data[subject4_idx][100000]
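To illustrate the custom_morphometry_file_templates parameter documented above, the following sketch expands one of the documented placeholders by hand with Python’s string.Template. This only demonstrates the naming, not the library’s internal mechanics; the file names and the measure name ‘mymeasure’ are hypothetical:
>>> import brainload as bl
>>> from string import Template
>>> templates = {'lh': 'subj_${SUBJECT_ID}_hemi_${HEMI}.alsononstandard.mgh', 'rh': 'subj_${SUBJECT_ID}_hemi_${HEMI}.alsononstandard.mgh'}
>>> Template(templates['lh']).substitute(SUBJECT_ID='subject3', HEMI='lh')   # manual expansion for one subject
'subj_subject3_hemi_lh.alsononstandard.mgh'
>>> data, subjects, group_md, run_md = bl.group('mymeasure', custom_morphometry_file_templates=templates)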
brainload.fsaverage_mesh(subject_id='fsaverage', surf='white', hemi='both', subjects_dir=None, use_freesurfer_home_if_missing=True)
Load a surface mesh of the fsaverage subject.
Convenience function to load a FreeSurfer surface mesh of the fsaverage subject. You could also use this function to load the mesh of any other subject, but in that case, you may want to set use_freesurfer_home_if_missing to False (see below). This function calls subject in the background and shares the relevant arguments and return values with that function.
Parameters: - subject_id (string, optional) – The subject identifier of the subject. Defaults to ‘fsaverage’.
- surf (string, optional) – The brain surface where the data has been measured, e.g., ‘white’ or ‘pial’. This will become part of the file name that is loaded. Defaults to ‘white’.
- hemi ({'both', 'lh', 'rh'}, optional) – The hemisphere that should be loaded. Defaults to ‘both’.
- subjects_dir (string, optional) – A string representing the full path to a directory. This should be the directory containing all subjects of your study. Defaults to the environment variable SUBJECTS_DIR if omitted. If that is not set, the current working directory (the directory from which the application was executed) is used instead.
- use_freesurfer_home_if_missing (boolean, optional) – If set to True, first checks whether the directory for the given subject exists in the subjects_dir. If it does not, it will reset the subjects_dir to ‘${FREESURFER_HOME}/subjects’ before proceeding.
Returns: vert_coords (numpy array) – A 2-dimensional array containing the vertices of the mesh(es) of the subject. Each vertex entry contains 3 coordinates. Each coordinate describes a 3D position in a FreeSurfer surface file (e.g., ‘lh.white’), as returned by the nibabel function nibabel.freesurfer.io.read_geometry.
faces (numpy array) – A 2-dimensional array containing the 3-faces of the mesh(es) of the subject. Each face entry contains 3 indices. Each index references the respective vertex in the vert_coords array.
meta_data (dictionary) –
- A dictionary containing detailed information on all files that were loaded and used settings. The following keys are available (depending on the value of the hemi argument, you can replace ?h with ‘lh’ or ‘rh’ or both ‘lh’ and ‘rh’):
- ?h.num_vertices : number of vertices in the loaded mesh
- ?h.num_faces : number of faces in the loaded mesh
- ?h.surf_file : the mesh file that was loaded for this hemisphere
Raises: ValueError
– If one of the parameters with a fixed set of values receives a value that is not allowed.
Examples
Load the white surface meshes of both hemispheres of the fsaverage subject:
>>> import brainload as bl
>>> verts, faces, meta_data = bl.fsaverage_mesh()
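Since all parameters are optional, a different surface or a single hemisphere can be requested as well; a short sketch using the documented surf and hemi parameters:
>>> import brainload as bl
>>> verts, faces, md = bl.fsaverage_mesh(surf='inflated', hemi='lh')
>>> md['lh.num_vertices']   # vertex count of one hemisphere of the standard fsaverage subject
163842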
brainload.rhi(rh_relative_index, meta_data)
Computes the absolute data index given an index relative to the right hemisphere.
This function only makes sense for a morphometry_data array and associated meta_data that contain data for two hemispheres (even though the morphometry_data array itself is not passed to this function), e.g., the return values of a function like subject() or subject_avg() when called with hemi=’both’. For such data, it computes the absolute index into the data, given an index relative to the right hemisphere. The name is short for ‘right hemisphere index’.
Parameters: - rh_relative_index (int) – An index relative to the right hemisphere. E.g., 0 if you want to get the index of the first vertex of the right hemisphere. Its absolute value must be between 0 and the number of vertices of the right hemisphere. Negative values are allowed, and -1 will get you the last possible index, -2 the second-to-last, and so on.
- meta_data (dictionary) – The meta data dictionary returned for your data. It must contain the keys ‘lh.num_data_points’ and ‘rh.num_data_points’.
Returns: The absolute index into the data for the given rh_relative_index.
Examples
>>> import brainload as bl
>>> morphometry_data, meta_data = bl.subject('heinz', hemi='both')[2:4]
>>> print "rh value at index 10, relative to start of right hemisphere: %d." % morphometry_data[bl.rhi(10, meta_data)]
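Based on the description above, for a non-negative index the absolute index is the requested index offset by the number of left-hemisphere data points, since the left-hemisphere data comes first. A minimal sketch of that relationship (not necessarily how the library implements it internally):
>>> import brainload as bl
>>> morphometry_data, meta_data = bl.subject('heinz', hemi='both')[2:4]
>>> bl.rhi(10, meta_data) == meta_data['lh.num_data_points'] + 10
True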
brainload.rhv(rh_relative_index, morphometry_data, meta_data)
Returns the value in morphometry_data at an index relative to the right hemisphere.
This function only makes sense for a morphometry_data array and associated meta_data that contain data for two hemispheres, e.g., the return values of a function like subject() or subject_avg() when called with hemi=’both’. For such data, it returns the value in morphometry_data at an index given relative to the right hemisphere. The name is short for ‘right hemisphere value’.
Parameters: - rh_relative_index (int) – An index relative to the start of the right hemisphere in the data. E.g., 0 if you want to get the value for the first vertex of the right hemisphere. Its absolute value must be between 0 and the number of vertices of the right hemisphere. Negative values are allowed, and -1 will get you the last possible value, -2 the second-to-last, and so on.
- morphometry_data (numpy array) – The morphometry data array, must represent data for both hemispheres.
- meta_data (dictionary) – The meta data dictionary returned for your data. It must contain the keys ‘lh.num_data_points’ and ‘rh.num_data_points’.
Returns: The value at the given index, which is relative to the start of the right hemisphere in the data.
Examples
>>> import brainload as bl
>>> morphometry_data, meta_data = bl.subject('heinz', hemi='both')[2:4]
>>> print "rh value at index 10, relative to start of right hemisphere: %d." % bl.rhv(10, morphometry_data, meta_data)
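Continuing the example above: as described, rhv looks up the value at the absolute index computed by rhi, so the two calls should agree:
>>> bl.rhv(10, morphometry_data, meta_data) == morphometry_data[bl.rhi(10, meta_data)]
True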
Submodules
brainload.freesurferdata module
Functions for loading FreeSurfer data on different levels.
The high-level functions are available directly in the package namespace. Using the functions in here should not be necessary.
brainload.freesurferdata.fsaverage_mesh(subject_id='fsaverage', surf='white', hemi='both', subjects_dir=None, use_freesurfer_home_if_missing=True)
Load a surface mesh of the fsaverage subject.
Convenience function to load a FreeSurfer surface mesh of the fsaverage subject. You could also use this function to load the mesh of any other subject, but in that case, you may want to set use_freesurfer_home_if_missing to False (see below). This function calls subject in the background and shares the relevant arguments and return values with that function.
Parameters: - subject_id (string, optional) – The subject identifier of the subject. Defaults to ‘fsaverage’.
- surf (string, optional) – The brain surface where the data has been measured, e.g., ‘white’ or ‘pial’. This will become part of the file name that is loaded. Defaults to ‘white’.
- hemi ({'both', 'lh', 'rh'}, optional) – The hemisphere that should be loaded. Defaults to ‘both’.
- subjects_dir (string, optional) – A string representing the full path to a directory. This should be the directory containing all subjects of your study. Defaults to the environment variable SUBJECTS_DIR if omitted. If that is not set, the current working directory (the directory from which the application was executed) is used instead.
- use_freesurfer_home_if_missing (boolean, optional) – If set to True, first checks whether the directory for the given subject exists in the subjects_dir. If it does not, it will reset the subjects_dir to ‘${FREESURFER_HOME}/subjects’ before proceeding.
Returns: vert_coords (numpy array) – A 2-dimensional array containing the vertices of the mesh(es) of the subject. Each vertex entry contains 3 coordinates. Each coordinate describes a 3D position in a FreeSurfer surface file (e.g., ‘lh.white’), as returned by the nibabel function nibabel.freesurfer.io.read_geometry.
faces (numpy array) – A 2-dimensional array containing the 3-faces of the mesh(es) of the subject. Each face entry contains 3 indices. Each index references the respective vertex in the vert_coords array.
meta_data (dictionary) –
- A dictionary containing detailed information on all files that were loaded and used settings. The following keys are available (depending on the value of the hemi argument, you can replace ?h with ‘lh’ or ‘rh’ or both ‘lh’ and ‘rh’):
- ?h.num_vertices : number of vertices in the loaded mesh
- ?h.num_faces : number of faces in the loaded mesh
- ?h.surf_file : the mesh file that was loaded for this hemisphere
Raises: ValueError
– If one of the parameters with a fixed set of values receives a value that is not allowed.
Examples
Load the white surface meshes of both hemispheres of the fsaverage subject:
>>> import brainload as bl
>>> verts, faces, meta_data = bl.fsaverage_mesh()
brainload.freesurferdata.group(measure, surf='white', hemi='both', fwhm='10', subjects_dir=None, average_subject='fsaverage', group_meta_data=None, subjects_list=None, subjects_file='subjects.txt', subjects_file_dir=None, custom_morphometry_file_templates=None, subjects_detection_mode='auto')
Load morphometry data for a number of subjects.
Load group data, i.e., morphometry data for all subjects in a study that has already been mapped to standard space and is ready for group analysis. The information given in the parameters measure, surf, hemi, and fwhm is used to construct the file name that will be loaded by default. This function will NOT load the meshes.
Parameters: - measure (string) – The measure to load, e.g., ‘area’ or ‘curv’. Data files for this measure have to exist for all subjects.
- surf (string, optional) – The brain surface where the data has been measured, e.g., ‘white’ or ‘pial’. Defaults to ‘white’.
- hemi ({'both', 'lh', 'rh'}, optional) – The hemisphere that should be loaded. Defaults to ‘both’.
- fwhm (string or None, optional) – The smoothing level (full width at half maximum, in mm) of the standard space data to load. FreeSurfer usually generates standard space files for a number of smoothing settings. Defaults to ‘10’. If None is passed, the .fwhmX part is omitted from the file name completely. Set this to ‘0’ to get the unsmoothed version.
- subjects_dir (string, optional) – A string representing the full path to a directory. Defaults to the environment variable SUBJECTS_DIR if omitted. If that is not set, the current working directory (the directory from which the application was executed) is used instead.
- average_subject (string, optional) – The name of the average subject to which the data was mapped. Defaults to ‘fsaverage’.
- group_meta_data (dictionary, optional) – A dictionary that should be merged into the return value group_meta_data. Defaults to the empty dictionary if omitted.
- subjects_list (list of strings, optional (unless subjects_detection_mode is set to list)) – A list of subject identifiers or directory names that should be loaded from the subjects_dir. Example list: [‘subject1’, ‘subject2’]. Defaults to None. Only allowed if subjects_detection_mode is auto or list. In auto mode, this takes precedence over all other options, i.e., if a subjects_list and the (default or custom) subjects_file are given, the subjects_list will be used.
- subjects_file_dir (string, optional) – A string representing the full path to a directory. This directory must contain the subjects_file (see below). Defaults to the subjects_dir.
- subjects_file (string, optional) – The name of the subjects file, relative to the subjects_file_dir. Defaults to ‘subjects.txt’. The file must be a simple text file that contains one subject_id per line. It can be a CSV file that has other data following, but the subject_id has to be the first item on each line and the separator must be a comma. So a line is allowed to look like this: subject1, 35, center1, 147. No header is allowed. If you have a different format, consider reading the file yourself and pass the result as subjects_list instead.
- custom_morphometry_file_templates (dictionary, optional) –
- Custom filenames for the left and right hemisphere data files that should be loaded. A dictionary of strings with exactly the following two keys: lh and rh. The value strings can contain hardcoded file names or template strings for them. As always, the files will be loaded relative to the surf/ directory of the respective subject. Example for hard-coded files: {‘lh’: ‘lefthemi.nonstandard.mymeasure44.mgh’, ‘rh’: ‘righthemi.nonstandard.mymeasure44.mgh’}. The strings may contain any of the following variables, which will be replaced by what you supplied to the other arguments of this function:
- ${MEASURE} will be replaced with the value of measure.
- ${SURF} will be replaced with the FreeSurfer file name part for the surface surf. This is the empty string if surf is ‘white’, and a dot followed by the value of surf for all other settings of surf. Examples: when surf is ‘pial’, this will be replaced with ‘.pial’ (Note the dot!). If surf is ‘white’, this will be replaced with the empty string.
- ${SURF_RAW} will be replaced with the value of surf.
- ${HEMI} will be replaced with ‘lh’ for the left hemisphere, and with ‘rh’ for the right hemisphere.
- ${FWHM} will be replaced with the value of fwhm, so something like ‘10’.
- ${SUBJECT_ID} will be replaced by the id of the subject that is being loaded, e.g., ‘subject3’.
- ${AVERAGE_SUBJECT} will be replaced by the value of average_subject.
Note that only ${SURF} and ${HEMI} are usually needed, everything else can be hardcoded (or is not part of typical FreeSurfer file names at all, like ${SUBJECT_ID}). Example template string: subj_${SUBJECT_ID}_hemi_${HEMI}.alsononstandard.mgh. Complete example for template strings in dictionary: {‘lh’: ‘subj_${SUBJECT_ID}_hemi_${HEMI}.alsononstandard.mgh’, ‘rh’: ‘subj_${SUBJECT_ID}_hemi_${HEMI}.alsononstandard.mgh’}.
- subjects_detection_mode ({'auto', 'list', 'file', 'search_dir'}, optional) –
- The method used to determine the subjects that should be loaded. Defaults to ‘auto’. You can always see which mode was used by looking at the returned run_meta_data, see run_meta_data[‘subjects_detection_mode’].
- ’auto’: In this mode, all available methods will be tried in the following order: If a subjects_list is given, it is used. Otherwise, the subjects_file is used if it exists. Note that this may be the default file, ‘$SUBJECTS_DIR/subjects.txt’, or another one if it has been explicitly defined by setting subjects_file and/or subjects_file_dir. If the file does not exist, the directory is searched for directories containing FreeSurfer data as defined in the section for ‘search_dir’ mode below. You can always see which method was used in auto mode by looking at the returned run_meta_data, see run_meta_data[‘subjects_detection_mode_auto_used_method’].
- ’list’: In this mode, the given subjects_list is used, and you have to supply one. If not, an error is raised. You are not allowed to supply a subjects_file in this mode, or an error will be raised.
- ’file’: In this mode, the subjects file is used. Note that this may be the default file, ‘$SUBJECTS_DIR/subjects.txt’, or another one if it has been explicitly defined by setting subjects_file and/or subjects_file_dir. If the file does not exist, an error is raised. You can see which file was used by looking at the returned run_meta_data, see run_meta_data[‘subjects_file’]. You are not allowed to supply a subjects_list in this mode, or an error will be raised.
- ’search_dir’: In this mode, the subjects_dir (default or explicitly given) is searched for sub directories which look as if they could contain FreeSurfer data. The latter means that they contain a sub directory named ‘surf’. There is one exception though: if the name of one such directory equals the name of the average_subject, the directory is skipped. You are not allowed to supply a subjects_list in this mode, or an error will be raised.
Returns: - group_morphometry_data (numpy array) – An array filled with the morphometry data for the subjects. The array has shape (n, m) where n is the number of subjects, and m is the number of vertices of the standard subject. (If you load both hemispheres instead of one, m doubles.) To get the subject id for the entries, look at the respective index in the returned subjects_list.
- subjects_list (list of strings) – A list containing the subject identifiers in the same order as the data in group_morphometry_data. (If subjects_detection_mode is ‘list’ or ‘file’, the input order is preserved. In ‘search_dir’ mode, or in ‘auto’ mode when it falls back to ‘search_dir’, this list is what tells you the order: you can use the index of a subject in this list to find its data in group_morphometry_data, as it will have the same index. See the examples below.)
- group_meta_data (dictionary) – A dictionary containing detailed information on all subjects and files that were loaded. Each of its keys is a subject identifier. The data value is another dictionary that contains all meta data for this subject as returned by the subject_avg function.
- run_meta_data (dictionary) – A dictionary containing general information on the settings used when executing the function and determining which subjects to load.
Raises: ValueError
– If one of the parameters with a fixed set of values receives a value that is not allowed.
Examples
Load area data for all subjects in the directory defined by the environment variable SUBJECTS_DIR:
>>> import brainload as bl
>>> data, subjects, group_md, run_md = bl.group('area')
Here, we load curv data for the right hemisphere, computed on the pial surface, with a smoothing of 20:
>>> data, subjects, group_md, run_md = bl.group('curv', hemi='rh', surf='pial', fwhm='20')
We may want to be a bit more explicit about which subjects are loaded from where:
>>> import os
>>> import brainload as bl
>>> subjects_dir = os.path.join(os.getenv('HOME'), 'data', 'my_study_x')
>>> subjects_list = ['subject1', 'subject4', 'subject8']
>>> data, subjects, group_md, run_md = bl.group('curv', fwhm='20', subjects_dir=subjects_dir, subjects_list=subjects_list)
Continuing the last example, we may want to have a look at the curv value of the vertex at index 100000 of the subject ‘subject4’:
>>> subject4_idx = subjects.index('subject4')
>>> print data[subject4_idx][100000]
brainload.freesurferdata.load_subject_mesh_files(lh_surf_file, rh_surf_file, hemi='both', meta_data=None)
Load mesh files for a subject.
Load one or two mesh files for a subject. Which of the two files lh_surf_file and rh_surf_file are actually loaded is determined by the hemi parameter.
Parameters: - lh_surf_file (string | None) – A string representing an absolute path to a mesh file for the left hemisphere (e.g., the path to ‘lh.white’). If hemi is ‘rh’, this will be ignored and can thus be None.
- rh_surf_file (string | None) – A string representing an absolute path to a mesh file for the right hemisphere (e.g., the path to ‘rh.white’). If hemi is ‘lh’, this will be ignored and can thus be None.
- hemi ({'both', 'lh', 'rh'}, optional) – The hemisphere for which data should actually be loaded. Defaults to ‘both’.
- meta_data (dictionary | None, optional) – Meta data to merge into the output meta_data. Defaults to the empty dictionary.
Returns: vert_coords (numpy array) – A 2D array containing 3 coordinates for each vertex. If the argument hemi was ‘both’, this includes vertices from several meshes. You can check the meta_data return values to get the border between meshes, see meta_data[‘lh.num_vertices’] and meta_data[‘rh.num_vertices’].
faces (numpy array) – A 2D array containing 3 vertex indices per face. Look at the respective indices in vert_coords to get the vertex coordinates. If the argument hemi was ‘both’, this includes faces from several meshes. You can check the meta_data return values to get the border between meshes, see meta_data[‘lh.num_faces’] and meta_data[‘rh.num_faces’].
meta_data (dictionary) –
- Contains detailed information on the data that was loaded. The following keys are available (depending on the value of the hemi argument, you can replace ?h with ‘lh’ or ‘rh’ or both ‘lh’ and ‘rh’):
- ?h.num_vertices : number of vertices in the loaded mesh
- ?h.num_faces : number of faces in the loaded mesh
- ?h.surf_file : the mesh file that was loaded for this hemisphere
Examples
>>> import os
>>> import brainload.freesurferdata as fsd
>>> lh_surf_file = os.path.join('my_subjects_dir', 'subject1', 'surf', 'lh.white')
>>> rh_surf_file = os.path.join('my_subjects_dir', 'subject1', 'surf', 'rh.white')
>>> vert_coords, faces, meta_data = fsd.load_subject_mesh_files(lh_surf_file, rh_surf_file)
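When hemi is ‘both’, the vertices and faces of the two meshes are concatenated. Continuing the example above, a small sketch recovering the left-hemisphere part, assuming (as documented for the morphometry data) that the left hemisphere comes first and using only the documented meta_data keys:
>>> lh_vert_coords = vert_coords[:meta_data['lh.num_vertices']]   # left hemisphere vertices come first
>>> lh_faces = faces[:meta_data['lh.num_faces']]                  # left hemisphere faces come first
>>> lh_vert_coords.shape[0] == meta_data['lh.num_vertices']
True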
brainload.freesurferdata.load_subject_morphometry_data_files(lh_morphometry_data_file, rh_morphometry_data_file, hemi='both', format='curv', meta_data=None)
Load morphometry data files for a subject.
Load one or two morphometry data files for a subject. Which of the two files lh_morphometry_data_file and rh_morphometry_data_file are actually loaded is determined by the hemi parameter.
Parameters: - lh_morphometry_data_file (string | None) – A string representing an absolute path to a morphometry data file for the left hemisphere. If hemi is ‘rh’, this will be ignored and can thus be None.
- rh_morphometry_data_file (string | None) – A string representing an absolute path to a morphometry data file for the right hemisphere. If hemi is ‘lh’, this will be ignored and can thus be None.
- hemi ({'both', 'lh', 'rh'}, optional) – The hemisphere for which data should actually be loaded. Defaults to ‘both’.
- format ({'curv', 'mgh'}, optional) – The file format for the files that are to be loaded. Defaults to ‘curv’.
- meta_data (dictionary | None, optional) – Meta data to merge into the output meta_data. Defaults to the empty dictionary.
Returns: morphometry_data (numpy array) – An array containing the scalar per-vertex data loaded from the file(s).
meta_data (dictionary) –
- Contains detailed information on the data that was loaded. The following keys are available (depending on the value of the hemi argument, you can replace ?h with ‘lh’ or ‘rh’ or both ‘lh’ and ‘rh’):
- ?h.num_data_points : the number of data points loaded.
- ?h.morphometry_file : the value of the ?h_morphometry_data_file argument (data file that was loaded)
- ?h.morphometry_file_format : the value for format that was used
Examples
Load the lh and rh area files for subject1.
>>> import os
>>> import brainload.freesurferdata as fsd
>>> lh_morphometry_file = os.path.join('path', 'to', 'subjects_dir', 'subject1', 'surf', 'lh.area')
>>> rh_morphometry_file = os.path.join('path', 'to', 'subjects_dir', 'subject1', 'surf', 'rh.area')
>>> morphometry_data, meta_data = fsd.load_subject_morphometry_data_files(lh_morphometry_file, rh_morphometry_file)
Now let’s look at the area value for the vertex at index 10:
>>> print "lh value at index 10: %d." % morphometry_data[10]
But what about the value of vertex 10 of the right hemisphere? We loaded 2 hemispheres, so the data is concatenated. But you can use the meta_data to get the correct index relative to the right hemisphere:
>>> print "rh value at index 10: %d." % morphometry_data[fsd.rhi(10, meta_data)]
You could also get the value directly using the rhv function:
>>> print "rh value at index 10: %d." % fsd.rhv(10, morphometry_data, meta_data)
brainload.freesurferdata.merge_morphometry_data(morphometry_data_arrays, dtype=float)
Merge morphometry data horizontally.
Merge morphometry data read from several meshes of the same subject horizontally. This is used to merge data from the left and right hemispheres.
Parameters: - morphometry_data_arrays (2D array) – An array of arrays, each of which represents morphometry data from different hemispheres of the same subject.
- dtype (data type, optional) – Data type for the output numpy array. Defaults to float.
Returns: Horizontally stacked array containing the data from all arrays in the input array.
Return type: numpy array
Examples
Merge some data:
>>> import numpy as np
>>> import brainload.freesurferdata as fsd
>>> lh_morphometry_data = np.array([0.0, 0.1, 0.2, 0.3]) # some fake data
>>> rh_morphometry_data = np.array([0.5, 0.6])
>>> merged_data = fsd.merge_morphometry_data(np.array([lh_morphometry_data, rh_morphometry_data]))
>>> print merged_data.shape
(6,)
Typically, the lh_morphometry_data and rh_morphometry_data come from calls to read_fs_morphometry_data_file_and_record_meta_data as shown here:
>>> lh_morphometry_data, meta_data = read_fs_morphometry_data_file_and_record_meta_data(lh_morphometry_data_file, 'lh')
>>> rh_morphometry_data, meta_data = read_fs_morphometry_data_file_and_record_meta_data(rh_morphometry_data_file, 'rh', meta_data=meta_data)
>>> both_hemis_morphometry_data = merge_morphometry_data(np.array([lh_morphometry_data, rh_morphometry_data]))
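Continuing the fake-data example above: conceptually, the merge is a one-dimensional horizontal concatenation of the per-hemisphere arrays, so the result should match a plain numpy concatenation:
>>> import numpy as np
>>> (merged_data == np.concatenate([lh_morphometry_data, rh_morphometry_data])).all()
True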
brainload.freesurferdata.read_fs_morphometry_data_file_and_record_meta_data(curv_file, hemisphere_label, meta_data=None, format='curv')
Read a morphometry file and record meta data on it.
Read a morphometry file and record meta data on it. A morphometry file is a file containing a scalar value for each vertex on the surface of a FreeSurfer mesh. An example is the file ‘lh.area’, which contains the area values for all vertices of the left hemisphere of the white surface. Such a file can be in two different formats: ‘curv’ or ‘mgh’. The former is used when the data refers to the surface mesh of the original subject, the latter when it has been mapped to a standard subject like fsaverage.
Parameters: - curv_file (string) – A string representing a path to a morphometry file (e.g., the path to ‘lh.area’).
- hemisphere_label ({'lh' or 'rh'}) – A string representing the hemisphere this file belongs to. This is used to write the correct meta data.
- meta_data (dictionary | None, optional) – Meta data to merge into the output meta_data. Defaults to the empty dictionary.
- format ({'curv', 'mgh'}, optional) – The file format for the files that are to be loaded. Defaults to ‘curv’.
Returns: per_vertex_data (numpy array) – A 1D array containing one scalar value per vertex.
meta_data (dictionary) –
- Contains detailed information on the data that was loaded. The following keys are available (replace ?h with the value of the argument hemisphere_label, which must be ‘lh’ or ‘rh’).
- ?h.num_data_points : the number of data points loaded.
- ?h.morphometry_file : the value of the curv_file argument (data file that was loaded)
- ?h.morphometry_file_format : the value for format that was used
Examples
>>> import os
>>> import brainload.freesurferdata as fsd
>>> lh_morphometry_file = os.path.join('my_subjects_dir', 'subject1', 'surf', 'lh.area')
>>> lh_morphometry_data, meta_data = fsd.read_fs_morphometry_data_file_and_record_meta_data(lh_morphometry_file, 'lh')
>>> print meta_data['lh.num_data_points']
121567 # arbitrary number, depends on the subject mesh
>>> print meta_data['lh.morphometry_file']
my_subjects_dir/subject1/surf/lh.area # on UNIX-like systems
brainload.freesurferdata.read_fs_surface_file_and_record_meta_data(surf_file, hemisphere_label, meta_data=None)
Read a surface file and record meta data on it.
Read a surface file and record meta data on it. A surface file is a mesh file in FreeSurfer format, e.g., ‘lh.white’. It contains vertices and 3-faces made out of them.
Parameters: - surf_file (string) – A string representing an absolute path to a surface (or ‘mesh’) file (e.g., the path to ‘lh.white’).
- hemisphere_label ({'lh' or 'rh'}) – A string representing the hemisphere this file belongs to. This is used to write the correct meta data.
- meta_data (dictionary | None, optional) – Meta data to merge into the output meta_data. Defaults to the empty dictionary.
Returns: vert_coords (numpy array) – A 2D array containing 3 coordinates for each vertex in the surf_file.
faces (numpy array) – A 2D array containing 3 vertex indices per face. Look at the respective indices in vert_coords to get the vertex coordinates.
meta_data (dictionary) –
- Contains detailed information on the data that was loaded. The following keys are available (replace ?h with the value of the argument hemisphere_label, which must be ‘lh’ or ‘rh’).
- ?h.num_vertices : number of vertices in the loaded mesh
- ?h.num_faces : number of faces in the loaded mesh
- ?h.surf_file : the value of the surf_file argument (the mesh file that was loaded)
Examples
>>> vert_coords, faces, meta_data = fsd.read_fs_surface_file_and_record_meta_data(surf_file, 'lh')
>>> print meta_data['lh.num_vertices']
121567 # arbitrary number, depends on the subject mesh
brainload.freesurferdata.read_mgh_file(mgh_file_name, collect_meta_data=True)
Read data from a FreeSurfer output file in MGH format.
Read all data from the MGH file and return it as a numpy array. Optionally, collect meta data from the mgh file header.
Parameters: - mgh_file_name (string) – A string representing a full path to a file in FreeSurfer MGH file format.
- collect_meta_data (bool, optional) – Whether or not to collect meta data from the MGH file header. Defaults to True.
Returns: - mgh_data (numpy array) – The data from the MGH file, usually one scalar value per voxel.
- mgh_meta_data (dictionary) – The meta data collected from the header, or an empty dictionary if the argument collect_meta_data was ‘False’. The keys correspond to the names of the respective nibabel function used to retrieve the data. The values are the data as returned by nibabel.
Examples
Read a file in MGH format from the surf dir of a subject:
>>> import os
>>> import brainload.freesurferdata as fsd
>>> mgh_file = os.path.join('my_subjects_dir', 'subject1', 'surf', 'rh.area.fsaverage.mgh')
>>> mgh_data, mgh_meta_data = fsd.read_mgh_file(mgh_file)
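Continuing the example above: for surface data mapped to fsaverage, the MGH file typically stores one value per vertex in a volume with singleton dimensions, so the array may need to be flattened before use as per-vertex data. A hedged sketch; the exact shape depends on how the file was produced:
>>> import numpy as np
>>> per_vertex_data = np.squeeze(mgh_data)   # drop singleton dimensions to get one value per vertex
>>> per_vertex_data.ndim
1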
brainload.freesurferdata.rhi(rh_relative_index, meta_data)
Computes the absolute data index given an index relative to the right hemisphere.
This function only makes sense for a morphometry_data array and associated meta_data that contain data for two hemispheres (even though the morphometry_data array itself is not passed to this function), e.g., the return values of a function like subject() or subject_avg() when called with hemi=’both’. For such data, it computes the absolute index into the data, given an index relative to the right hemisphere. The name is short for ‘right hemisphere index’.
Parameters: - rh_relative_index (int) – An index relative to the right hemisphere. E.g., 0 if you want to get the index of the first vertex of the right hemisphere. Its absolute value must be between 0 and the number of vertices of the right hemisphere. Negative values are allowed, and -1 will get you the last possible index, -2 the second-to-last, and so on.
- meta_data (dictionary) – The meta data dictionary returned for your data. It must contain the keys ‘lh.num_data_points’ and ‘rh.num_data_points’.
Returns: The absolute index into the data for the given rh_relative_index.
Examples
>>> import brainload as bl
>>> morphometry_data, meta_data = bl.subject('heinz', hemi='both')[2:4]
>>> print "rh value at index 10, relative to start of right hemisphere: %d." % morphometry_data[bl.rhi(10, meta_data)]
brainload.freesurferdata.rhv(rh_relative_index, morphometry_data, meta_data)
Returns the value in morphometry_data at an index relative to the right hemisphere.
This function only makes sense for a morphometry_data array and associated meta_data that contain data for two hemispheres, e.g., the return values of a function like subject() or subject_avg() when called with hemi=’both’. For such data, it returns the value in morphometry_data at an index given relative to the right hemisphere. The name is short for ‘right hemisphere value’.
Parameters: - rh_relative_index (int) – An index relative to the start of the right hemisphere in the data. E.g., 0 if you want to get the value for the first vertex of the right hemisphere. Its absolute value must be between 0 and the number of vertices of the right hemisphere. Negative values are allowed, and -1 will get you the last possible value, -2 the second-to-last, and so on.
- morphometry_data (numpy array) – The morphometry data array, must represent data for both hemispheres.
- meta_data (dictionary) – The meta data dictionary returned for your data. It must contain the keys ‘lh.num_data_points’ and ‘rh.num_data_points’.
Returns: The value at the given index, which is relative to the start of the right hemisphere in the data.
Examples
>>> import brainload as bl
>>> morphometry_data, meta_data = bl.subject('heinz', hemi='both')[2:4]
>>> print "rh value at index 10, relative to start of right hemisphere: %d." % bl.rhv(10, morphometry_data, meta_data)
brainload.freesurferdata.subject(subject_id, surf='white', measure='area', hemi='both', subjects_dir=None, meta_data=None, load_surface_files=True, load_morphometry_data=True)
Load FreeSurfer brain morphometry and/or mesh data for a single subject.
High-level interface to load FreeSurfer brain data for a single subject in native space. This loads the data for the surfaces of this subject. If you want to load data that has been mapped to an average subject like ‘fsaverage’, use subject_avg instead.
Parameters: - subject_id (string) – The subject identifier of the subject. As always, it is assumed that this is the name of the directory containing the subject’s data, relative to subjects_dir. Example: ‘subject33’.
- measure (string, optional) – The measure to load, e.g., ‘area’ or ‘curv’. Defaults to ‘area’.
- surf (string, optional) – The brain surface where the data has been measured, e.g., ‘white’ or ‘pial’. This will become part of the file name that is loaded. Defaults to ‘white’.
- hemi ({'both', 'lh', 'rh'}, optional) – The hemisphere that should be loaded. Defaults to ‘both’.
- subjects_dir (string, optional) – A string representing the full path to a directory. This should be the directory containing all subjects of your study. Defaults to the environment variable SUBJECTS_DIR if omitted. If that is not set, the current working directory (the directory from which the application was executed) is used instead.
- meta_data (dictionary, optional) – A dictionary that should be merged into the return value meta_data. Defaults to the empty dictionary if omitted.
- load_surface_files (boolean, optional) – Whether to load mesh data. If set to False, the first two return values, vert_coords and faces, will be None. Defaults to True.
- load_morphometry_data (boolean, optional) – Whether to load morphometry data. If set to False, the third return value, morphometry_data, will be None. Defaults to True.
Returns: vert_coords (numpy array) – A 2-dimensional array containing the vertices of the mesh(es) of the subject. Each vertex entry contains 3 coordinates. Each coordinate describes a 3D position in a FreeSurfer surface file (e.g., ‘lh.white’), as returned by the nibabel function nibabel.freesurfer.io.read_geometry.
faces (numpy array) – A 2-dimensional array containing the 3-faces of the mesh(es) of the subject. Each face entry contains 3 indices. Each index references the respective vertex in the vert_coords array.
morphometry_data (numpy array) – A numpy array with as many entries as there are vertices in the subject. If you load two hemispheres instead of one, the length doubles. You can get the start indices for data of the hemispheres in the returned meta_data, see meta_data[‘lh.num_vertices’] and meta_data[‘rh.num_vertices’]. You can be sure that the data for the left hemisphere will always come first (if both were loaded). Indices start at 0, of course. So if the left hemisphere has n vertices, the data for them are at indices 0..n-1, and the data for the right hemisphere start at index n. Note that the two hemispheres do in general NOT have the same number of vertices.
meta_data (dictionary) –
- A dictionary containing detailed information on all files that were loaded and used settings. The following keys are available (depending on the value of the hemi argument, you can replace ?h with ‘lh’ or ‘rh’ or both ‘lh’ and ‘rh’):
- ?h.num_data_points : the number of data points loaded.
- ?h.morphometry_file : the value of the ?h_morphometry_data_file argument (data file that was loaded)
- ?h.morphometry_file_format : the value for format that was used
- ?h.num_vertices : number of vertices in the loaded mesh
- ?h.num_faces : number of faces in the loaded mesh
- ?h.surf_file : the mesh file that was loaded for this hemisphere
- subject_id : the subject id
- subjects_dir : the subjects dir that was used
- surf : the surf that was used, e.g., ‘white’
- measure : the measure that was loaded as morphometry data, e.g., ‘area’
- space : always the string ‘subject’. This means that the data loaded represent morphometry data taken from the subject’s surface (as opposed to data mapped to a common or average subject).
- hemi : the hemi value that was used
Raises: ValueError
– If one of the parameters with a fixed set of values receives a value that is not allowed.
Examples
Load area data for both hemispheres and white surface of subject1 in the directory defined by the environment variable SUBJECTS_DIR:
>>> import brainload as bl
>>> vertices, faces, data, md = bl.subject('subject1')
Here, we are a bit more explicit about what we want to load:
>>> import os
>>> user_home = os.getenv('HOME')
>>> subjects_dir = os.path.join(user_home, 'data', 'my_study_x')
>>> vertices, faces, data, md = bl.subject('subject1', hemi='lh', measure='curv', subjects_dir=subjects_dir)
Sometimes we do not care about the mesh, e.g., we only want the morphometry data:
>>> data, md = bl.subject('subject1', hemi='rh', load_surface_files=False)[2:4]
…or the other way around (mesh only, no morphometry data):
>>> vertices, faces = bl.subject('subject1', hemi='rh', load_morphometry_data=False)[0:2]
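If both hemispheres were loaded, the per-hemisphere values can be separated using the meta_data keys described above. A minimal sketch, assuming the 'lh.num_data_points' entry holds the number of left hemisphere values and that the left hemisphere data comes first:
>>> data, md = bl.subject('subject1', hemi='both')[2:4]
>>> n_lh = md['lh.num_data_points']    # number of left hemisphere values
>>> lh_data = data[:n_lh]              # left hemisphere data comes first
>>> rh_data = data[n_lh:]              # right hemisphere data follows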
-
brainload.freesurferdata.
subject_avg
(subject_id, measure='area', surf='white', display_surf='white', hemi='both', fwhm='10', subjects_dir=None, average_subject='fsaverage', subjects_dir_for_average_subject=None, meta_data=None, load_surface_files=True, load_morhology_data=True, custom_morphometry_files=None)¶ Load morphometry data that has been mapped to an average subject for a subject.
Load data for a single subject that has been mapped to an average subject like the fsaverage subject from FreeSurfer. Can also load the mesh of an arbitrary surface for the average subject.
Parameters: - subject_id (string) – The subject identifier of the subject. As always, it is assumed that this is the name of the directory containing the subject’s data, relative to subjects_dir. Example: ‘subject33’.
- measure (string, optional) – The measure to load, e.g., ‘area’ or ‘curv’. Defaults to ‘area’.
- surf (string, optional) – The brain surface where the data has been measured, e.g., ‘white’ or ‘pial’. This will become part of the file name that is loaded. Defaults to ‘white’.
- hemi ({'both', 'lh', 'rh'}, optional) – The hemisphere that should be loaded. Defaults to ‘both’.
- fwhm (string or None, optional) – Which averaging version of the data should be loaded. FreeSurfer usually generates different standard space files with a number of smoothing settings. Defaults to ‘10’. If None is passed, the .fwhmX part is omitted from the file name completely. Set this to ‘0’ to get the unsmoothed version.
- subjects_dir (string, optional) – A string representing the full path to a directory. This should be the directory containing all subjects of your study. Defaults to the environment variable SUBJECTS_DIR if omitted. If that is not set, the current working directory is used instead, i.e., the directory from which the application was started.
- average_subject (string, optional) – The name of the average subject to which the data was mapped. Defaults to ‘fsaverage’.
- display_surf (string, optional) – The surface of the average subject for which the mesh should be loaded, e.g., ‘white’, ‘pial’, ‘inflated’, or ‘sphere’. Defaults to ‘white’. Ignored if load_surface_files is False.
- subjects_dir_for_average_subject (string, optional) – A string representing the full path to a directory. This can be used if the average subject is not in the same directory as all your study subjects. Defaults to the setting of subjects_dir.
- meta_data (dictionary, optional) – A dictionary that should be merged into the return value meta_data. Defaults to the empty dictionary if omitted.
- load_surface_files (boolean, optional) – Whether to load mesh data. If set to False, the first two return values vert_coords and faces will be None. Defaults to True.
- load_morphometry_data (boolean, optional) – Whether to load morphometry data. If set to False, the third return value morphometry_data will be None. Defaults to True.
- custom_morphometry_files (dictionary, optional) – Custom filenames for the left and right hemisphere data files that should be loaded. A dictionary of strings with exactly the following two keys: lh and rh. The value strings must contain hardcoded file names or template strings for them. As always, the files will be loaded relative to the surf/ directory of the respective subject. Example: {'lh': 'lefthemi.nonstandard.mymeasure44.mgh', 'rh': 'righthemi.nonstandard.mymeasure44.mgh'}. A usage sketch follows the examples below.
Returns: vert_coords (numpy array) – A 2-dimensional array containing the vertices of the mesh(es) of the average subject. Each vertex entry contains 3 coordinates. Each coordinate describes a 3D position in a FreeSurfer surface file (e.g., ‘lh.white’), as returned by the nibabel function nibabel.freesurfer.io.read_geometry.
faces (numpy array) – A 2-dimensional array containing the 3-faces of the mesh(es) of the average subject. Each face entry contains 3 indices. Each index references the respective vertex in the vert_coords array.
morphometry_data (numpy array) – A numpy array with as many entries as there are vertices in the average subject. If you load two hemispheres instead of one, the length doubles. You can get the start indices for data of the hemispheres in the returned meta_data, see meta_data[‘lh.num_vertices’] and meta_data[‘rh.num_vertices’]. You can be sure that the data for the left hemisphere will always come first (if both were loaded). Indices start at 0, of course. So if the left hemisphere has n vertices, the data for them are at indices 0..n-1, and the data for the right hemisphere start at index n. In many cases, your average subject will have the same number of vertices for both hemispheres and you will know this number beforehand, so you may not have to worry about this at all.
meta_data (dictionary) –
- A dictionary containing detailed information on all files that were loaded and used settings. The following keys are available (depending on the value of the hemi argument, you can replace ?h with ‘lh’ or ‘rh’ or both ‘lh’ and ‘rh’):
- ?h.num_data_points : the number of data points loaded.
- ?h.morphometry_file : the value of the ?h_morphometry_data_file argument (data file that was loaded)
- ?h.morphometry_file_format : the value for format that was used
- ?h.num_vertices : number of vertices in the loaded mesh
- ?h.num_faces : number of faces in the loaded mesh
- ?h.surf_file : the mesh file that was loaded for this hemisphere
- subject_id : the subject id
- subjects_dir : the subjects dir that was used
- surf : the surf that was used, e.g., ‘white’
- measure : the measure that was loaded as morphometry data, e.g., ‘area’
- space : always the string ‘common’. This means that the data loaded represent morphometry data that has been mapped to a common or average subject.
- hemi : the hemi value that was used
- display_subject : the name of the common or average subject. This is the subject the surface meshes originate from. Usually 'fsaverage'.
- display_surf : the surface of the common subject that has been loaded. Something like ‘pial’, ‘white’, or ‘inflated’.
Raises: ValueError
– If one of the parameters with a fixed set of values receives a value that is not allowed.
Examples
Load area data for both hemispheres and white surface of subject1 in the directory defined by the environment variable SUBJECTS_DIR, mapped to fsaverage:
>>> import brainload as bl
>>> v, f, data, md = bl.subject_avg('subject1')
>>> print(md['surf'])
white
Here, we are a bit more picky and explicit about what we want to load:
>>> import os
>>> import brainload as bl
>>> user_home = os.getenv('HOME')
>>> subjects_dir = os.path.join(user_home, 'data', 'my_study_x')
>>> v, f, data, md = bl.subject_avg('subject1', hemi='lh', measure='curv', fwhm='15', display_surf='inflated', subjects_dir=subjects_dir)
Sometimes we do not care about the mesh, e.g., we only want the morphometry data:
>>> import brainload as bl
>>> data, md = bl.subject_avg('subject1', hemi='rh', fwhm='15', load_surface_files=False)[2:4]
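If your per-subject data files do not follow the standard naming scheme, the custom_morphometry_files parameter can be used instead of the measure/fwhm naming. A minimal sketch, using the (hypothetical) file names from the parameter description above:
>>> import brainload as bl
>>> custom_files = {'lh': 'lefthemi.nonstandard.mymeasure44.mgh', 'rh': 'righthemi.nonstandard.mymeasure44.mgh'}
>>> data, md = bl.subject_avg('subject1', custom_morphometry_files=custom_files, load_surface_files=False)[2:4]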
brainload.nitools module¶
Utility functions for loading neuroimaging data.
Most of these functions interact with the filesystem to find data.
-
brainload.nitools.
detect_subjects_in_directory
(subjects_dir, ignore_dir_names=None, required_subdirs_for_hits=None)¶ Search for directories containing FreeSurfer output in a directory and return the subject names.
Given a directory, search its sub directories for FreeSurfer data and return the directory names of all directories in which such data was found. The resulting list can be used to create a subjects.txt file (see the sketch after the example below). This method searches all direct sub directories of the given subjects_dir for the existence of the typical FreeSurfer output directory structure.
Parameters: - subjects_dir (string) – Path to a subjects directory.
- ignore_dir_names (list of strings | None, optional) – A list of directory names that should be ignored, even if they have the required sub directories. This is useful if you do not want to load certain subjects. It is often used to avoid loading the average subject 'fsaverage'. Defaults to a list with the single element 'fsaverage'. You can explicitly pass an empty list if you want to include all subjects.
- required_subdirs_for_hits (list of strings | None) – A sub directory of the given subjects_dir is considered a subject if it contains the typical FreeSurfer directory structure. Which sub directories are required is determined by this argument. If all of them are found under a dir, that dir is added to the output list. This list defaults to a list with the single element 'surf'. If that leads to false positives in your case, you could pass something like ['surf', 'mri', 'label'].
Returns: A list of the subject identifiers (or directories that were considered as such).
Return type: list of strings
Examples
Guess which directories under the current SUBJECTS_DIR contain subject data:
>>> import brainload.nitools as nit
>>> import os
>>> my_subject_dir = os.getenv('SUBJECTS_DIR')
>>> subjects_ids = nit.detect_subjects_in_directory(my_subject_dir, ignore_dir_names=['fsaverage', 'Copy of subject4'])
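As mentioned above, the resulting list can be used to create a subjects.txt file. A minimal sketch, assuming you want one subject id per line in a file named subjects.txt in the current directory:
>>> with open('subjects.txt', 'w') as f:
...     f.write('\n'.join(subjects_ids) + '\n')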
-
brainload.nitools.
do_subject_files_exist
(subjects_list, subjects_dir, filename=None, filename_template=None, sub_dir='surf')¶ Checks for the existence of certain files in each subject directory for a group of subjects.
Checks for the existence of certain files in the subject directory of each subject in a group. This is useful to check whether data you intend to work on exists for all subjects you are interested in.
Parameters: - subjects_list (list of strings) – List of subject ids.
- subjects_dir (string) – Path to a directory that contains the subject data.
- filename (string) – A string representing the file name within the sub_dir sub directory of each subject, as a hardcoded name. You must supply either this or filename_template, but not both.
- filename_template (string) – A string representing the file name within the sub_dir sub directory of each subject, as a template. You must supply either this or filename, but not both. You can use the variable ${SUBJECT_ID} in the template (a sketch follows the example below).
- sub_dir (string | None, optional) – The sub directory to look in. You could set any value, but the typical ones are the default FreeSurfer directories, e.g., 'surf', 'mri', 'scripts' and so on. You can set this to None if you want to look directly in the subject's dir, but FreeSurfer does not seem to store any data there by default. Defaults to 'surf'.
Returns: A dictionary. The keys are subjects that are missing the respective file, and the value is the absolute path of the file that is missing. If no files are missing, the dictionary is empty. If none of the subjects have the file, the length of the dictionary is equal to the length of the input subjects_list.
Return type: dictionary
Examples
Check whether a file exists for all subjects:
>>> import os
>>> import brainload.nitools as nit
>>> subjects_list = ['subject1', 'subject4', 'subject7']
>>> subjects_dir = os.path.join(os.getenv('HOME'), 'data', 'my_study_x')
>>> searched_file = 'lh.area'
>>> missing = nit.do_subject_files_exist(subjects_list, subjects_dir, filename=searched_file)
>>> print("The file '%s' exists for %d of the %d subjects." % (searched_file, len(subjects_list) - len(missing), len(subjects_list)))
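Instead of a hardcoded file name, a template containing the ${SUBJECT_ID} variable can be passed via filename_template, as described above. A sketch, continuing the example and assuming a hypothetical per-subject file naming scheme:
>>> missing = nit.do_subject_files_exist(subjects_list, subjects_dir, filename_template='${SUBJECT_ID}.lh.thickness.mgh')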
-
brainload.nitools.
fill_template_filename
(template_string, substitution_dict)¶ Replace variables in the template with the respective substitution dict entries.
Checks the template_string for variables (i.e., something like '${VAR_NAME}') that are listed as keys in substitution_dict. If such entries are found, they are replaced with the respective values in the substitution_dict. This function simply calls string.Template(template_string).substitute(substitution_dict) in the background.
Parameters: - template_string (string) – A template string, see the string.Template constructor in the standard Python string module. Variable names must be enclosed in ${}. Example: ${SUBJECT_ID}_hardcoded_text.
- substitution_dict (dictionary string, string) – The keys are variable names, values are the replacements. See string.Template.substitute in the standard Python string module. Example: { ‘SUBJECT_ID’ : ‘subject3’ }.
Returns: The result of the replacement.
Return type: string
Examples
Fill in a template string:
>>> import brainload.nitools as nit
>>> template_str = '${HEMI}.white'
>>> substitution_dict = {'HEMI' : 'lh'}
>>> print(nit.fill_template_filename(template_str, substitution_dict))
lh.white
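Templates may contain several variables, which are all replaced in one call. The following sketch builds a per-subject, per-hemisphere file name; the variable names and the file name scheme are just examples:
>>> template_str = '${SUBJECT_ID}.${HEMI}.area.mgh'
>>> print(nit.fill_template_filename(template_str, {'SUBJECT_ID' : 'subject3', 'HEMI' : 'rh'}))
subject3.rh.area.mgh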
-
brainload.nitools.
read_subjects_file
(subjects_file, has_header_line=False, index_of_subject_id_field=0, **kwargs)¶ Read a subjects file in CSV format that has the subject id as the first entry on each line. Arbitrary data may follow in the consecutive fields on each line, and will be ignored. Having nothing but the subject id on the line is also fine, of course.
The file can be a simple text file that contains one subject_id per line. It can also be a CSV file with additional data in the fields that follow; by default, the subject_id has to be the first field on each line, the separator must be a comma, and no header line is expected (see the has_header_line, index_of_subject_id_field and **kwargs parameters if your file deviates from this, and the sketch after the example below). So a line is allowed to look like this: subject1, 35, center1, 147. If you have a different format, consider reading the file yourself and passing the result as subjects_list instead.
Parameters: - subjects_file (string) – Path to a subjects file (see above for format details).
- has_header_line (boolean, optional) – Whether the first line is a header line and should be skipped. Defaults to False.
- index_of_subject_id_field (integer, optional) – The column index of the field that contains the subject id in each row. Defaults to 0. Changing this only makes sense for CSV files.
- **kwargs (any) – Any other named arguments will be passed on to the call to the csv.reader constructor, a class from Python's standard csv module. Example: pass delimiter='\t' if your file is delimited by tabs.
Returns: A list of subject identifiers.
Return type: list of strings
Examples
Load a list of subjects from a simple text file that contains one subject per line.
>>> import brainload.nitools as nit
>>> subjects_ids = nit.read_subjects_file('/home/myuser/data/study5/subjects.txt')
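For a CSV file that has a header line and in which the subject id is not the first field, the optional parameters described above can be combined. A sketch, assuming a hypothetical file demographics.csv whose second column holds the subject id:
>>> subjects_ids = nit.read_subjects_file('/home/myuser/data/study5/demographics.csv', has_header_line=True, index_of_subject_id_field=1)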
brainload.spatial module¶
Simple functions for spatial transformation of 3-dimensional coordinates.
These functions are helpful if you want to rotate, translate, mirror, or scale (brain) meshes. In general, you would use them roughly like this:
>>> import brainload as bl
>>> vert_coords, faces = bl.subject('bert')[0:2]
>>> x, y, z = bl.spatial.coords_a2s(vert_coords)
Now you have the coordinates of the mesh vertices in the required format and can call any function from this module:
>>> xt, yt, zt = bl.spatial.translate_3D_coordinates_along_axes(x, y, z, 5, 0, 0) # or some other function
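If you later need the transformed coordinates as a single array again (for example, to replace the vertex coordinates of the mesh), coords_s2a, documented below, performs the inverse of coords_a2s:
>>> vert_coords_translated = bl.spatial.coords_s2a(xt, yt, zt)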
-
brainload.spatial.
coords_a2s
(coords)¶ Split single array for all 3 coords into 3 separate ones.
Split a 2D array of coordinates with shape (n, 3), i.e., one x, y, z row per vertex, into 3 separate 1D arrays of length n.
Parameters: coords (Numpy 2D array of numbers) – The merged coordinate array.
Returns: - x (Numpy array of numbers) – A 1D array representing x axis coordinates. Has the same length as the y and z arrays.
- y (Numpy array of numbers) – A 1D array representing y axis coordinates. Has the same length as the x and z arrays.
- z (Numpy array of numbers) – A 1D array, representing z axis coordinates. Has the same length as the x and y arrays.
Examples
>>> import brainload.spatial as st
>>> import numpy as np
>>> coords = np.array([[5, 7, 9], [6, 8, 10]])
>>> x, y, z = st.coords_a2s(coords)
>>> print(y[1])
8
-
brainload.spatial.
coords_s2a
(x, y, z)¶ Merge 3 separate x, y and z coordinate arrays into a single coordinate array.
Merge 3 arrays of length n with coordinates (x, y, z values) into a single 2D coordinate array of shape (n, 3).
Parameters: - x (Numpy array of numbers) – A 1D array representing x axis coordinates. Must have the same length as the y and z arrays.
- y (Numpy array of numbers) – A 1D array representing y axis coordinates. Must have the same length as the x and z arrays.
- z (Numpy array of numbers) – A 1D array, representing z axis coordinates. Must have the same length as the x and y arrays.
Returns: The merged coordinate array.
Return type: Numpy 2D array of numbers
Examples
>>> import brainload.spatial as st
>>> import numpy as np
>>> x = np.array([5, 6])
>>> y = np.array([7, 8])
>>> z = np.array([9, 10])
>>> coords = st.coords_s2a(x, y, z)
>>> print(coords[1][2])
10
-
brainload.spatial.
deg2rad
(degrees)¶ Convert an angle given in degrees to radians.
Convert an angle given in degrees to radians. 360 degrees are 2 Pi radians. If negative values or values larger than 360 are passed, the modulo operation is used to bring them into a suitable range first. In other words, passing -90 will be transformed to 360 - 90 = 270 degrees, and will thus return 1.5 Pi radians.
Parameters: degrees (float) – The angle in degrees.
Returns: The angle in radians.
Return type: float
Examples
>>> import brainload.spatial as st
>>> rad = st.deg2rad(180)  # will be Pi
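Based on the wrap-around behaviour described above, a negative angle and its positive equivalent should give the same result (up to floating point precision). A minimal sketch:
>>> import numpy as np
>>> np.isclose(st.deg2rad(-90), st.deg2rad(270))  # both should be 1.5 Pi
True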
-
brainload.spatial.
mirror_3D_coordinates_at_axis
(x, y, z, axis, mirror_at_axis_coordinate=None)¶ Mirror the given 3D coordinates on the given mirror plane.
Mirror or reflect the given 3D coordinates on a plane perpendicular to the given axis, positioned at the axis coordinate mirror_at_axis_coordinate. If mirror_at_axis_coordinate is not given, the smallest coordinate along the mirror axis in the data is used.
Parameters: - x (Numpy array of numbers) – A 1D array representing x axis coordinates. Must have the same length as the y and z arrays.
- y (Numpy array of numbers) – A 1D array representing y axis coordinates. Must have the same length as the x and z arrays.
- z (Numpy array of numbers) – A 1D array, representing z axis coordinates. Must have the same length as the x and y arrays.
- axis (string, one of {'x', 'y', 'z'}) – An axis identifier.
- mirror_at_axis_coordinate (number | None) – The coordinate along the axis axis at which the mirror plane should be created. If you set axis to ‘x’ and specify 5 for this, a yz-plane will be used at x coordinate 5. If not given, it defaults to the minimal axis coordinate for the respective axis in the data.
Returns: - x_mirrored (Numpy array of numbers) – The mirrored x coordinates.
- y_mirrored (Numpy array of numbers) – The mirrored y coordinates.
- z_mirrored (Numpy array of numbers) – The mirrored z coordinates.
Examples
Mirror at the origin of the x axis:
>>> import brainload.spatial as st
>>> import numpy as np
>>> x = np.array([5, 6])
>>> y = np.array([7, 8])
>>> z = np.array([9, 10])
>>> xm, ym, zm = st.mirror_3D_coordinates_at_axis(x, y, z, 'x', 0)
>>> print("%d %d %d" % (xm[0], ym[0], zm[0]))  # -5 7 9
>>> print("%d %d %d" % (xm[1], ym[1], zm[1]))  # -6 8 10
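If mirror_at_axis_coordinate is omitted, the smallest coordinate along the chosen axis in the data is used as the mirror plane position, as described above. A sketch continuing the example (the smallest x value here is 5, so 5 stays at 5 and 6 is reflected to 4):
>>> xm2, ym2, zm2 = st.mirror_3D_coordinates_at_axis(x, y, z, 'x')
>>> print("%d %d" % (xm2[0], xm2[1]))  # 5 4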
-
brainload.spatial.
point_mirror_3D_coordinates
(x, y, z, point_x, point_y, point_z)¶ Point-mirror or reflect the given coordinates at the given point.
Parameters: - x (Numpy array of numbers) – A 1D array representing x axis coordinates. Must have the same length as the y and z arrays.
- y (Numpy array of numbers) – A 1D array representing y axis coordinates. Must have the same length as the x and z arrays.
- z (Numpy array of numbers) – A 1D array, representing z axis coordinates. Must have the same length as the x and y arrays.
- point_x (number) – The x coordinate of the point used for mirroring.
- point_y (number) – The y coordinate of the point used for mirroring.
- point_z (number) – The z coordinate of the point used for mirroring.
Returns: - xm (Numpy array of numbers) – The mirrored x coordinates.
- ym (Numpy array of numbers) – The mirrored y coordinates.
- zm (Numpy array of numbers) – The mirrored z coordinates.
Examples
Mirror at the origin:
>>> import brainload.spatial as st
>>> import numpy as np
>>> x = np.array([5, 6])
>>> y = np.array([7, 8])
>>> z = np.array([9, 10])
>>> xm, ym, zm = st.point_mirror_3D_coordinates(x, y, z, 0, 0, 0)
>>> print("%d %d %d" % (xm[0], ym[0], zm[0]))  # -5 -7 -9
>>> print("%d %d %d" % (xm[1], ym[1], zm[1]))  # -6 -8 -10
-
brainload.spatial.
rad2deg
(rad)¶ Convert an angle given in radians to degrees.
Convert an angle given in radians to degrees. 2 Pi radians are 360 degrees. If negative values or values larger than 2 Pi are passed, the modulo operation is used to bring them into a suitable range first. In other words, passing -0.5 * Pi will be transformed to 2 Pi - 0.5 Pi = 1.5 Pi, and will thus return 270 degrees.
Parameters: rad (float) – The angle in radians.
Returns: The angle in degrees.
Return type: float
Examples
>>> import brainload.spatial as st
>>> import numpy as np
>>> deg = st.rad2deg(2 * np.pi)  # will be 360
-
brainload.spatial.
rotate_3D_coordinates_around_axes
(x, y, z, radians_x, radians_y, radians_z)¶ Rotate coordinates around the 3 axes.
Rotate coordinates around the x, y, and z axes. The rotation values must be given in radians.
Parameters: - x (Numpy array of numbers) – A 1D array representing x axis coordinates. Must have the same length as the y and z arrays. (See coords_a2s if you have a single 2D array containing all 3.)
- y (Numpy array of numbers) – A 1D array representing y axis coordinates. Must have the same length as the x and z arrays. (See coords_a2s if you have a single 2D array containing all 3.)
- z (Numpy array of numbers) – A 1D array, representing z axis coordinates. Must have the same length as the x and y arrays. (See coords_a2s if you have a single 2D array containing all 3.)
- radians_x (number) – A single number, representing the rotation in radians around the x axis.
- radians_y (number) – A single number, representing the rotation in radians around the y axis.
- radians_z (number) – A single number, representing the rotation in radians around the z axis.
Returns: - xr (Numpy array of numbers) – The rotated x coordinates.
- yr (Numpy array of numbers) – The rotated y coordinates.
- zr (Numpy array of numbers) – The rotated z coordinates.
Examples
>>> import brainload.spatial as st
>>> import numpy as np
>>> x = np.array([5, 6])
>>> y = np.array([7, 8])
>>> z = np.array([9, 10])
>>> xr, yr, zr = st.rotate_3D_coordinates_around_axes(x, y, z, np.pi, 0, 0)
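Assuming rotation about the coordinate axes through the origin, a rotation by Pi around the x axis leaves the x coordinates untouched and flips the signs of y and z, so up to floating point precision the result of the example above should be:
>>> print(np.round(xr), np.round(yr), np.round(zr))  # [5. 6.] [-7. -8.] [ -9. -10.]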
-
brainload.spatial.
scale_3D_coordinates
(x, y, z, x_scale_factor, y_scale_factor=None, z_scale_factor=None)¶ Scale coordinates by factors.
Scale the given coordinates by the given scale factor or factors.
Parameters: - x (Numpy array of numbers) – A 1D array representing x axis coordinates. Must have the same length as the y and z arrays.
- y (Numpy array of numbers) – A 1D array representing y axis coordinates. Must have the same length as the x and z arrays.
- z (Numpy array of numbers) – A 1D array, representing z axis coordinates. Must have the same length as the x and y arrays.
- x_scale_factor (number) – A single number, representing the scaling factor along the x axis. If the other factors are not given, this factor is used for all axes.
- y_scale_factor (number | None) – A single number, representing the scaling factor along the y axis. If this is None, the value given for x_scale_factor is used.
- z_scale_factor (number | None) – A single number, representing the scaling factor along the z axis. If this is None, the value given for x_scale_factor is used.
Returns: - x_scaled (Numpy array of numbers) – The scaled x coordinates.
- y_scaled (Numpy array of numbers) – The scaled y coordinates.
- z_scaled (Numpy array of numbers) – The scaled z coordinates.
Examples
>>> import brainload.spatial as st
>>> import numpy as np
>>> x = np.array([5, 6])
>>> y = np.array([7, 8])
>>> z = np.array([9, 10])
>>> xs, ys, zs = st.scale_3D_coordinates(x, y, z, 3.0)
>>> print("%d %d %d" % (xs[0], ys[0], zs[0]))  # 15 21 27
>>> print("%d %d %d" % (xs[1], ys[1], zs[1]))  # 18 24 30
-
brainload.spatial.
translate_3D_coordinates_along_axes
(x, y, z, shift_x, shift_y, shift_z)¶ Translate coordinates along one or more axes.
Translate or shift coordinates along one or more axes.
Parameters: - x (Numpy array of numbers) – A 1D array representing x axis coordinates. Must have the same length as the y and z arrays.
- y (Numpy array of numbers) – A 1D array representing y axis coordinates. Must have the same length as the x and z arrays.
- z (Numpy array of numbers) – A 1D array, representing z axis coordinates. Must have the same length as the x and y arrays.
- shift_x (number) – A single number, representing the shift along the x axis.
- shift_y (number) – A single number, representing the shift along the y axis.
- shift_z (number) – A single number, representing the shift along the z axis.
Returns: - x_shifted (Numpy array of numbers) – The shifted x coordinates.
- y_shifted (Numpy array of numbers) – The shifted y coordinates.
- z_shifted (Numpy array of numbers) – The shifted z coordinates.
Examples
>>> import brainload.spatial as st
>>> import numpy as np
>>> x = np.array([5, 6])
>>> y = np.array([7, 8])
>>> z = np.array([9, 10])
>>> xt, yt, zt = st.translate_3D_coordinates_along_axes(x, y, z, 2, -4, 0)
>>> print("%d %d %d" % (xt[0], yt[0], zt[0]))  # 7 3 9
>>> print("%d %d %d" % (xt[1], yt[1], zt[1]))  # 8 4 10