kedro.io.PickleS3DataSet
class kedro.io.PickleS3DataSet(filepath, bucket_name, credentials=None, load_args=None, save_args=None, version=None)[source]

Bases: kedro.io.core.AbstractDataSet, kedro.io.core.S3PathVersionMixIn

PickleS3DataSet loads and saves a Python object to a pickle file on S3. The underlying functionality is supported by the pickle library, so it supports all allowed options for loading and saving pickle files.

Example:
    from kedro.io import PickleS3DataSet
    import pandas as pd

    dummy_data = pd.DataFrame({'col1': [1, 2],
                               'col2': [4, 5],
                               'col3': [5, 6]})
    data_set = PickleS3DataSet(filepath="data.pkl",
                               bucket_name="test_bucket",
                               load_args=None,
                               save_args=None)
    data_set.save(dummy_data)
    reloaded = data_set.load()
Methods

PickleS3DataSet.__init__(filepath, bucket_name)    Creates a new instance of PickleS3DataSet pointing to a concrete file on S3.
PickleS3DataSet.exists()    Checks whether a data set's output already exists by calling the provided _exists() method.
PickleS3DataSet.from_config(name, config[, …])    Create a data set instance using the configuration provided.
PickleS3DataSet.load()    Loads data by delegation to the provided load method.
PickleS3DataSet.save(data)    Saves data by delegation to the provided save method.
__init__(filepath, bucket_name, credentials=None, load_args=None, save_args=None, version=None)[source]

Creates a new instance of PickleS3DataSet pointing to a concrete file on S3. PickleS3DataSet uses the pickle backend to serialise objects to disk:

    pickle.dumps: https://docs.python.org/3/library/pickle.html#pickle.dumps

and to load serialised objects into memory:

    pickle.loads: https://docs.python.org/3/library/pickle.html#pickle.loads

Parameters:
- filepath (str) – path to a pkl file.
- bucket_name (str) – S3 bucket name.
- credentials (Optional[Dict[str, Any]]) – Credentials to access the S3 bucket, such as aws_access_key_id and aws_secret_access_key.
- load_args (Optional[Dict[str, Any]]) – Options for loading pickle files. Refer to the help file of pickle.loads for options.
- save_args (Optional[Dict[str, Any]]) – Options for saving pickle files. Refer to the help file of pickle.dumps for options.
- version (Optional[Version]) – If specified, should be an instance of kedro.io.core.Version. If its load attribute is None, the latest version will be loaded. If its save attribute is None, the save version will be autogenerated.

Return type: None
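A minimal construction sketch follows. It assumes that save_args are forwarded to pickle.dumps as described above, and it uses placeholder bucket, credential and path values that are not part of this documentation:

    from kedro.io import PickleS3DataSet
    from kedro.io.core import Version

    # Placeholder bucket name, credentials and path; replace with real values.
    data_set = PickleS3DataSet(
        filepath="folder/data.pkl",
        bucket_name="my-example-bucket",
        credentials={
            "aws_access_key_id": "<your-access-key-id>",
            "aws_secret_access_key": "<your-secret-access-key>",
        },
        save_args={"protocol": 4},              # assumed to be passed to pickle.dumps
        version=Version(load=None, save=None),  # load latest, autogenerate save version
    )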
exists()

Checks whether a data set's output already exists by calling the provided _exists() method.

Return type: bool
Returns: Flag indicating whether the output already exists.
Raises: DataSetError – when the underlying exists method raises an error.
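For example, exists() can guard a load call; this sketch reuses the data_set instance constructed above:

    # Only attempt a load if the pickle file is already present on S3.
    if data_set.exists():
        reloaded = data_set.load()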
classmethod from_config(name, config, load_version=None, save_version=None)

Create a data set instance using the configuration provided.

Parameters:
- name (str) – Data set name.
- config (Dict[str, Any]) – Data set config dictionary.
- load_version (Optional[str]) – Version string to be used for the load operation if the data set is versioned. Has no effect on the data set if versioning was not enabled.
- save_version (Optional[str]) – Version string to be used for the save operation if the data set is versioned. Has no effect on the data set if versioning was not enabled.

Return type: AbstractDataSet
Returns: An instance of an AbstractDataSet subclass.
Raises: DataSetError – When the function fails to create the data set from its config.
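A sketch of creating the same data set from a config dictionary. The assumption here is that the config mirrors the constructor arguments plus a type entry identifying the data set class, as in catalog configuration entries; how the type string is resolved depends on your kedro version:

    from kedro.io import PickleS3DataSet

    config = {
        "type": "PickleS3DataSet",   # assumed to resolve to this class
        "filepath": "data.pkl",
        "bucket_name": "test_bucket",
    }
    data_set = PickleS3DataSet.from_config(name="pickled_data", config=config)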
load()

Loads data by delegation to the provided load method.

Return type: Any
Returns: Data returned by the provided load method.
Raises: DataSetError – When the underlying load method raises an error.
save(data)

Saves data by delegation to the provided save method.

Parameters: data (Any) – the value to be saved by the provided save method.
Raises: DataSetError – when the underlying save method raises an error.
Return type: None
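Since both load and save wrap failures of the underlying pickle and S3 calls in DataSetError, callers can handle them in one place; a sketch, assuming DataSetError is importable from kedro.io and reusing data_set and dummy_data from the examples above:

    from kedro.io import DataSetError

    try:
        data_set.save(dummy_data)
        reloaded = data_set.load()
    except DataSetError as exc:
        print("Saving or loading the S3 pickle file failed: {}".format(exc))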