---
title: TimeSeriesLoader
keywords: fastai
sidebar: home_sidebar
summary: "Data Loader for Time Series data"
description: "Data Loader for Time Series data"
nb_path: "nbs/data__tsloader.ipynb"
---

A `DataLoader` inherited from PyTorch.

{% raw %}

class TimeSeriesLoader[source]

TimeSeriesLoader(*args, **kwds) :: DataLoader

Data loader. Combines a dataset and a sampler, and provides an iterable over the given dataset.

The :class:`~torch.utils.data.DataLoader` supports both map-style and iterable-style datasets with single- or multi-process loading, customizing loading order and optional automatic batching (collation) and memory pinning.

See the :py:mod:`torch.utils.data` documentation page for more details.

Args:
    dataset (Dataset): dataset from which to load the data.
    batch_size (int, optional): how many samples per batch to load (default: `1`).
    shuffle (bool, optional): set to `True` to have the data reshuffled at every epoch (default: `False`).
    sampler (Sampler or Iterable, optional): defines the strategy to draw samples from the dataset. Can be any `Iterable` with `__len__` implemented. If specified, :attr:`shuffle` must not be specified.
    batch_sampler (Sampler or Iterable, optional): like :attr:`sampler`, but returns a batch of indices at a time. Mutually exclusive with :attr:`batch_size`, :attr:`shuffle`, :attr:`sampler`, and :attr:`drop_last`.
    num_workers (int, optional): how many subprocesses to use for data loading. `0` means the data will be loaded in the main process. (default: `0`)
    collate_fn (callable, optional): merges a list of samples to form a mini-batch of Tensor(s). Used for batched loading from a map-style dataset.
    pin_memory (bool, optional): if `True`, the data loader will copy Tensors into CUDA pinned memory before returning them. If your data elements are a custom type, or your :attr:`collate_fn` returns a batch that is a custom type, see the example below.
    drop_last (bool, optional): set to `True` to drop the last incomplete batch if the dataset size is not divisible by the batch size. If `False` and the size of the dataset is not divisible by the batch size, then the last batch will be smaller. (default: `False`)
    timeout (numeric, optional): if positive, the timeout value for collecting a batch from workers. Should always be non-negative. (default: `0`)
    worker_init_fn (callable, optional): if not `None`, this will be called on each worker subprocess with the worker id (an int in `[0, num_workers - 1]`) as input, after seeding and before data loading. (default: `None`)
    prefetch_factor (int, optional, keyword-only arg): number of samples loaded in advance by each worker. `2` means there will be a total of `2 * num_workers` samples prefetched across all workers. (default: `2`)
    persistent_workers (bool, optional): if `True`, the data loader will not shut down the worker processes after a dataset has been consumed once. This keeps the workers' `Dataset` instances alive. (default: `False`)
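
For example, a typical configuration of the core arguments on a plain PyTorch `DataLoader` (standard `torch.utils.data` usage, shown for illustration only):

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # Toy map-style dataset: 100 samples with 8 features and 1 target.
    ds = TensorDataset(torch.randn(100, 8), torch.randn(100, 1))
    loader = DataLoader(ds,
                        batch_size=16,    # samples per batch
                        shuffle=True,     # reshuffle every epoch
                        num_workers=2,    # load with 2 worker subprocesses
                        pin_memory=True,  # page-locked memory for faster GPU copies
                        drop_last=True)   # discard the final incomplete batch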

.. warning:: If the `spawn` start method is used, :attr:`worker_init_fn` cannot be an unpicklable object, e.g., a lambda function. See :ref:`multiprocessing-best-practices` for more details related to multiprocessing in PyTorch.

.. warning:: The `len(dataloader)` heuristic is based on the length of the sampler used. When :attr:`dataset` is an :class:`~torch.utils.data.IterableDataset`, it instead returns an estimate based on `len(dataset) / batch_size`, with proper rounding depending on :attr:`drop_last`, regardless of multi-process loading configurations. This represents the best guess PyTorch can make because PyTorch trusts user :attr:`dataset` code in correctly handling multi-process loading to avoid duplicate data.

         However, if sharding results in multiple workers having incomplete last batches,
         this estimate can still be inaccurate, because (1) an otherwise complete batch can
         be broken into multiple ones and (2) more than one batch worth of samples can be
         dropped when :attr:`drop_last` is set. Unfortunately, PyTorch can not detect such
         cases in general.

         See `Dataset Types`_ for more details on these two types of datasets and how
         :class:`~torch.utils.data.IterableDataset` interacts with
         `Multi-process data loading`_.
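
As a quick check of the `drop_last` rounding described above (plain PyTorch, for illustration):

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    ds = TensorDataset(torch.arange(10))  # 10 samples
    len(DataLoader(ds, batch_size=3, drop_last=False))  # 4 batches; the last holds 1 sample
    len(DataLoader(ds, batch_size=3, drop_last=True))   # 3 batches; the short batch is dropped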

.. warning:: See the :ref:`reproducibility`, :ref:`dataloader-workers-random-seed`, and :ref:`data-loading-randomness` notes for random seed related questions.
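
A common way to handle worker seeding, following the pattern recommended in the PyTorch reproducibility notes (shown for illustration; `seed_worker` is a user-defined helper, not part of this library):

    import random
    import numpy as np
    import torch
    from torch.utils.data import DataLoader, TensorDataset

    def seed_worker(worker_id):
        # Derive per-library seeds from the worker's torch seed.
        worker_seed = torch.initial_seed() % 2**32
        np.random.seed(worker_seed)
        random.seed(worker_seed)

    g = torch.Generator()
    g.manual_seed(0)

    ds = TensorDataset(torch.arange(10))
    loader = DataLoader(ds, batch_size=4, num_workers=2,
                        worker_init_fn=seed_worker, generator=g)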

{% endraw %}

Tests for WindowsDataset and TimeSeriesDataset

{% raw %}
import torch as t

from nixtlats.data.tsdataset import TimeSeriesDataset, WindowsDataset
from nixtlats.data.utils import create_synthetic_tsdata
{% endraw %} {% raw %}
def test_eq_batch_size(dataset, batch_size, loader_class):
    # Check that every batch returned by the loader has exactly batch_size samples
    loader = loader_class(dataset=dataset, batch_size=batch_size, eq_batch_size=True)
    sizes = [batch['Y'].size(0) == batch_size for batch in loader]
    
    assert all(sizes), 'Unexpected batch sizes.'
{% endraw %} {% raw %}
def test_eq_batch_size_order(dataset, batch_size, loader_class):
    # This test only works for the TimeSeriesDataset class
    loader = loader_class(dataset=dataset, batch_size=batch_size, eq_batch_size=True)
    for batch in loader:
        idxs = batch['idxs']
        dataset_batch = dataset[idxs.numpy().tolist()]
        for key in batch.keys():
            assert t.equal(batch[key], dataset_batch[key]), (
                f'Batch and dataset batch differ, key {key}'
            )
{% endraw %}

Complete time series dataset

{% raw %}
Y_df, X_df, S_df = create_synthetic_tsdata(sort=True)
dataset = TimeSeriesDataset(S_df=S_df, Y_df=Y_df, X_df=X_df,
                            input_size=5,
                            output_size=2)
dataloader = TimeSeriesLoader(dataset=dataset, batch_size=12, 
                              eq_batch_size=False, shuffle=True)

# Iterate once over the loader; each batch is a dict of tensors
for batch in dataloader:
    batch
{% endraw %} {% raw %}
test_eq_batch_size(dataset, 32, TimeSeriesLoader)
{% endraw %} {% raw %}
test_eq_batch_size_order(dataset, 32, TimeSeriesLoader)
{% endraw %}

Windowed time series dataset

{% raw %}
Y_df, X_df, S_df = create_synthetic_tsdata(sort=True)
dataset = WindowsDataset(S_df=S_df, Y_df=Y_df, X_df=X_df,
                         input_size=5,
                         output_size=2,
                         sample_freq=1)
dataloader = TimeSeriesLoader(dataset=dataset, batch_size=12, 
                              eq_batch_size=False, shuffle=True)

for batch in dataloader:
    batch
{% endraw %} {% raw %}
test_eq_batch_size(dataset, 32, TimeSeriesLoader)
{% endraw %}

Faster implementation

{% raw %}

class FastTimeSeriesLoader[source]

FastTimeSeriesLoader(dataset:TimeSeriesDataset, batch_size:int=32, eq_batch_size:bool=False, shuffle:bool=False)

A DataLoader-like object for a set of tensors that can be much faster than `TensorDataset` + `DataLoader`, because the default `DataLoader` grabs individual indices of the dataset and calls `cat` to collate them (slow). Source: https://discuss.pytorch.org/t/dataloader-much-slower-than-manual-batching/27014/6

Notes

[1] Adapted from https://github.com/hcarlens/pytorch-tabular/blob/master/fast_tensor_data_loader.py.

{% endraw %} {% raw %}

FastTimeSeriesLoader.__iter__[source]

FastTimeSeriesLoader.__iter__()

{% endraw %} {% raw %}

FastTimeSeriesLoader.__next__[source]

FastTimeSeriesLoader.__next__()

{% endraw %} {% raw %}

FastTimeSeriesLoader.__len__[source]

FastTimeSeriesLoader.__len__()

{% endraw %}
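
A minimal sketch of the idea behind these three methods, adapted from the pattern in [1]: draw one (possibly shuffled) index permutation per epoch in `__iter__`, slice a `batch_size` chunk of indices per `__next__`, and index the dataset once per batch instead of once per sample. The class below is a hypothetical illustration, not the library's implementation; it assumes the dataset accepts a list of indices and returns an already-collated batch, as the `TimeSeriesDataset` tests earlier on this page rely on.

{% raw %}
import torch as t

class SketchFastLoader:
    def __init__(self, dataset, batch_size=32, shuffle=False):
        self.dataset = dataset
        self.batch_size = batch_size
        self.shuffle = shuffle
        self.n = len(dataset)

    def __iter__(self):
        # Fresh permutation each epoch when shuffling, else sequential order.
        self.idxs = t.randperm(self.n) if self.shuffle else t.arange(self.n)
        self.i = 0
        return self

    def __next__(self):
        if self.i >= self.n:
            raise StopIteration
        # One slice of indices -> one dataset call per batch (no per-sample cat).
        batch_idxs = self.idxs[self.i:self.i + self.batch_size]
        self.i += self.batch_size
        return self.dataset[batch_idxs.tolist()]

    def __len__(self):
        # Number of batches, counting a final short batch.
        return (self.n + self.batch_size - 1) // self.batch_size
{% endraw %}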

Tests for WindowsDataset and TimeSeriesDataset

Complete time series dataset

{% raw %}
Y_df, X_df, S_df = create_synthetic_tsdata(sort=True)
dataset = TimeSeriesDataset(S_df=S_df, Y_df=Y_df, X_df=X_df,
                            input_size=5,
                            output_size=2)
dataloader = FastTimeSeriesLoader(dataset=dataset, batch_size=12, 
                                  eq_batch_size=False, shuffle=True)

for batch in dataloader:
    batch
{% endraw %} {% raw %}
test_eq_batch_size(dataset, 32, FastTimeSeriesLoader)
{% endraw %} {% raw %}
test_eq_batch_size_order(dataset, 32, FastTimeSeriesLoader)
{% endraw %}

Windowed time series dataset

{% raw %}
Y_df, X_df, S_df = create_synthetic_tsdata(sort=True)
dataset = WindowsDataset(S_df=S_df, Y_df=Y_df, X_df=X_df,
                         input_size=5,
                         output_size=2,
                         sample_freq=1)
dataloader = FastTimeSeriesLoader(dataset=dataset, batch_size=12, 
                                  eq_batch_size=False, shuffle=True)
{% endraw %} {% raw %}
for batch in dataloader:
    batch
{% endraw %} {% raw %}
test_eq_batch_size(dataset, 32, FastTimeSeriesLoader)
{% endraw %}

Performance comparison

{% raw %}
dataloader = TimeSeriesLoader(dataset=dataset, batch_size=12, shuffle=True)
fast_dataloader = FastTimeSeriesLoader(dataset=dataset, batch_size=12, shuffle=True)
{% endraw %} {% raw %}
%timeit -n 50 -r 3 [batch for batch in dataloader]
{% endraw %} {% raw %}
%timeit -n 50 -r 3 [batch for batch in fast_dataloader]
{% endraw %}