spacr.io¶
Module Contents¶
- spacr.io.process_non_tif_non_2D_images(folder)[source]¶
Processes all images in the folder and splits them into grayscale channels, preserving bit depth.
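A minimal usage sketch (the folder path is hypothetical):
>>> from spacr.io import process_non_tif_non_2D_images
>>> process_non_tif_non_2D_images('/path/to/raw_images')  # splits images into grayscale channel files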
- class spacr.io.CombineLoaders(train_loaders)[source]¶
A class that combines multiple data loaders into a single iterator.
- Parameters:
train_loaders (list) – A list of data loaders.
- Raises:
StopIteration – If all data loaders have been exhausted.
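A minimal usage sketch (loader_a and loader_b are hypothetical, pre-built PyTorch DataLoader objects; the exact per-iteration yield format is not specified above):
>>> from spacr.io import CombineLoaders
>>> combined = CombineLoaders([loader_a, loader_b])
>>> for item in combined:
...     pass  # iteration ends once every underlying loader is exhausted (StopIteration)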
- class spacr.io.CombinedDataset(datasets, shuffle=True)[source]¶
Bases:
torch.utils.data.Dataset
A dataset that combines multiple datasets into one.
- Parameters:
datasets (list) – A list of datasets to be combined.
shuffle (bool, optional) – Whether to shuffle the combined dataset. Defaults to True.
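A minimal usage sketch (ds_a and ds_b are hypothetical torch.utils.data.Dataset objects):
>>> from spacr.io import CombinedDataset
>>> combined = CombinedDataset([ds_a, ds_b], shuffle=True)
>>> sample = combined[0]  # indexes into the combined, optionally shuffled, dataset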
- class spacr.io.NoClassDataset(data_dir, transform=None, shuffle=True, load_to_memory=False)[source]¶
Bases:
torch.utils.data.Dataset
A custom dataset class for handling image data without class labels.
- Parameters:
data_dir (str) – The directory path where the image files are located.
transform (callable, optional) – A function/transform to apply to the image data. Default is None.
shuffle (bool, optional) – Whether to shuffle the dataset. Default is True.
load_to_memory (bool, optional) – Whether to load all images into memory. Default is False.
- images¶
A list of loaded images (if load_to_memory is True).
- Type:
list
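A minimal usage sketch (the image directory is hypothetical):
>>> from torch.utils.data import DataLoader
>>> from spacr.io import NoClassDataset
>>> dataset = NoClassDataset('/path/to/images', shuffle=True, load_to_memory=False)
>>> loader = DataLoader(dataset, batch_size=8)  # batches of images without class labels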
- class spacr.io.spacrDataset(data_dir, loader_classes, transform=None, shuffle=True, pin_memory=False, specific_files=None, specific_labels=None)[source]¶
Bases:
torch.utils.data.Dataset
An abstract class representing a Dataset.
All datasets that represent a map from keys to data samples should subclass it. All subclasses should overwrite __getitem__(), supporting fetching a data sample for a given key. Subclasses could also optionally overwrite __len__(), which is expected to return the size of the dataset by many Sampler implementations and the default options of DataLoader. Subclasses could also optionally implement __getitems__() to speed up batched sample loading; this method accepts a list of indices for a batch and returns the corresponding list of samples.
Note
DataLoader by default constructs an index sampler that yields integral indices. To make it work with a map-style dataset with non-integral indices/keys, a custom sampler must be provided.
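A minimal usage sketch (the data directory and class names are hypothetical, and the exact items returned per index are not specified above):
>>> from spacr.io import spacrDataset
>>> dataset = spacrDataset('/path/to/data', loader_classes=['nc', 'pc'], shuffle=True)
>>> sample = dataset[0]  # map-style access via __getitem__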
- class spacr.io.spacrDataLoader(*args, preload_batches=1, **kwargs)[source]¶
Bases:
torch.utils.data.DataLoader
Data loader combines a dataset and a sampler, and provides an iterable over the given dataset.
The DataLoader supports both map-style and iterable-style datasets with single- or multi-process loading, customizing loading order and optional automatic batching (collation) and memory pinning.
See the torch.utils.data documentation page for more details.
- Parameters:
dataset (Dataset) – dataset from which to load the data.
batch_size (int, optional) – how many samples per batch to load (default: 1).
shuffle (bool, optional) – set to True to have the data reshuffled at every epoch (default: False).
sampler (Sampler or Iterable, optional) – defines the strategy to draw samples from the dataset. Can be any Iterable with __len__ implemented. If specified, shuffle must not be specified.
batch_sampler (Sampler or Iterable, optional) – like sampler, but returns a batch of indices at a time. Mutually exclusive with batch_size, shuffle, sampler, and drop_last.
num_workers (int, optional) – how many subprocesses to use for data loading. 0 means that the data will be loaded in the main process. (default: 0)
collate_fn (Callable, optional) – merges a list of samples to form a mini-batch of Tensor(s). Used when using batched loading from a map-style dataset.
pin_memory (bool, optional) – If True, the data loader will copy Tensors into device/CUDA pinned memory before returning them. If your data elements are a custom type, or your collate_fn returns a batch that is a custom type, see the example in the torch.utils.data documentation.
drop_last (bool, optional) – set to True to drop the last incomplete batch if the dataset size is not divisible by the batch size. If False and the size of the dataset is not divisible by the batch size, then the last batch will be smaller. (default: False)
timeout (numeric, optional) – if positive, the timeout value for collecting a batch from workers. Should always be non-negative. (default: 0)
worker_init_fn (Callable, optional) – If not None, this will be called on each worker subprocess with the worker id (an int in [0, num_workers - 1]) as input, after seeding and before data loading. (default: None)
multiprocessing_context (str or multiprocessing.context.BaseContext, optional) – If None, the default multiprocessing context of your operating system will be used. (default: None)
generator (torch.Generator, optional) – If not None, this RNG will be used by RandomSampler to generate random indexes and by multiprocessing to generate base_seed for workers. (default: None)
prefetch_factor (int, optional, keyword-only arg) – Number of batches loaded in advance by each worker. 2 means there will be a total of 2 * num_workers batches prefetched across all workers. (The default depends on num_workers: if num_workers=0, the default is None; if num_workers > 0, the default is 2.)
persistent_workers (bool, optional) – If True, the data loader will not shut down the worker processes after a dataset has been consumed once. This allows the workers' Dataset instances to stay alive. (default: False)
pin_memory_device (str, optional) – the device to pin_memory to if pin_memory is True.
Warning
If the spawn start method is used, worker_init_fn cannot be an unpicklable object, e.g., a lambda function. See the PyTorch notes on multiprocessing best practices for more details related to multiprocessing in PyTorch.
Warning
The len(dataloader) heuristic is based on the length of the sampler used. When dataset is an IterableDataset, it instead returns an estimate based on len(dataset) / batch_size, with proper rounding depending on drop_last, regardless of the multi-process loading configuration. This represents the best guess PyTorch can make, because PyTorch trusts the user's dataset code to handle multi-process loading correctly and avoid duplicate data.
However, if sharding results in multiple workers having incomplete last batches, this estimate can still be inaccurate, because (1) an otherwise complete batch can be broken into multiple ones and (2) more than one batch worth of samples can be dropped when drop_last is set. Unfortunately, PyTorch cannot detect such cases in general.
See the PyTorch documentation on dataset types for more details on these two types of datasets and how IterableDataset interacts with multi-process data loading.
Warning
See the PyTorch notes on reproducibility, DataLoader worker random seeds, and data-loading randomness for questions related to random seeds.
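A minimal usage sketch (dataset is assumed to be an existing torch.utils.data.Dataset; preload_batches is the only argument specific to spacrDataLoader, with the remaining keywords forwarded to torch's DataLoader):
>>> from spacr.io import spacrDataLoader
>>> loader = spacrDataLoader(dataset, batch_size=32, shuffle=True, preload_batches=2)
>>> for batch in loader:
...     pass  # iterate as with a standard DataLoader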
- class spacr.io.TarImageDataset(tar_path, transform=None)[source]¶
Bases:
torch.utils.data.Dataset
An abstract class representing a Dataset.
All datasets that represent a map from keys to data samples should subclass it. All subclasses should overwrite __getitem__(), supporting fetching a data sample for a given key. Subclasses could also optionally overwrite __len__(), which is expected to return the size of the dataset by many Sampler implementations and the default options of DataLoader. Subclasses could also optionally implement __getitems__() to speed up batched sample loading; this method accepts a list of indices for a batch and returns the corresponding list of samples.
Note
DataLoader by default constructs an index sampler that yields integral indices. To make it work with a map-style dataset with non-integral indices/keys, a custom sampler must be provided.
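A minimal usage sketch (the archive path is hypothetical):
>>> from spacr.io import TarImageDataset
>>> dataset = TarImageDataset('/path/to/images.tar', transform=None)
>>> sample = dataset[0]  # fetches an image from the tar archive by index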
- spacr.io.delete_empty_subdirectories(folder_path)[source]¶
Deletes all empty subdirectories in the specified folder.
- Parameters:
folder_path (str) – The path to the folder in which to look for empty subdirectories.
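A minimal usage sketch (the folder path is hypothetical):
>>> from spacr.io import delete_empty_subdirectories
>>> delete_empty_subdirectories('/path/to/folder')  # only empty subdirectories are removed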
- spacr.io.convert_numpy_to_tiff(folder_path, limit=None)[source]¶
Converts all NumPy (.npy) files in a folder to TIFF format and saves them in a 'tiff' subdirectory.
- Parameters:
folder_path (str) – The path to the folder containing the NumPy files.
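A minimal usage sketch (the folder path is hypothetical):
>>> from spacr.io import convert_numpy_to_tiff
>>> convert_numpy_to_tiff('/path/to/npy_folder')  # output is written to a 'tiff' subdirectory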
- spacr.io.parse_gz_files(folder_path)[source]¶
Parses the .fastq.gz files in the specified folder path and returns a dictionary containing the sample names and their corresponding file paths.
- Parameters:
folder_path (str) – The path to the folder containing the .fastq.gz files.
- Returns:
A dictionary where the keys are the sample names and the values are dictionaries containing the file paths for the ‘R1’ and ‘R2’ read directions.
- Return type:
dict
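A minimal usage sketch illustrating the dictionary shape described above (the folder path and sample names are hypothetical):
>>> from spacr.io import parse_gz_files
>>> samples = parse_gz_files('/path/to/fastq_folder')
>>> # e.g. {'sampleA': {'R1': '/path/to/fastq_folder/sampleA_R1.fastq.gz',
>>> #                   'R2': '/path/to/fastq_folder/sampleA_R2.fastq.gz'}}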
- spacr.io.generate_loaders(src, mode='train', image_size=224, batch_size=32, classes=['nc', 'pc'], n_jobs=None, validation_split=0.0, pin_memory=False, normalize=False, channels=[1, 2, 3], augment=False, verbose=False)[source]¶
Generate data loaders for training and validation/test datasets.
- Parameters:
src (str) – The source directory containing the data.
mode (str) – The mode of operation. Options are 'train' or 'test'.
image_size (int) – The size of the input images.
batch_size (int) – The batch size for the data loaders.
classes (list) – The list of classes to consider.
n_jobs (int) – The number of worker threads for data loading.
validation_split (float) – The fraction of data to use for validation.
pin_memory (bool) – Whether to pin memory for faster data transfer.
normalize (bool) – Whether to normalize the input images.
channels (list) – The list of channels to retain. Options are [1, 2, 3] for all channels, [1, 2] for blue and green, etc.
verbose (bool) – Whether to print additional information and show images.
- Returns:
train_loaders (list) – List of data loaders for training datasets.
val_loaders (list) – List of data loaders for validation datasets.
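A minimal usage sketch (the source directory is hypothetical; the two return values are the lists described above):
>>> from spacr.io import generate_loaders
>>> train_loaders, val_loaders = generate_loaders(
...     '/path/to/data', mode='train', image_size=224, batch_size=32,
...     classes=['nc', 'pc'], validation_split=0.1, normalize=True)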
- spacr.io.training_dataset_from_annotation(db_path, dst, annotation_column='test', annotated_classes=(1, 2))[source]¶
- spacr.io.training_dataset_from_annotation_metadata(db_path, dst, annotation_column='test', annotated_classes=(1, 2), metadata_type_by='columnID', class_metadata=['c1', 'c2'])[source]¶