kedro.contrib.io.pyspark.SparkDataSet
class kedro.contrib.io.pyspark.SparkDataSet(filepath, file_format='parquet', load_args=None, save_args=None)

Bases: kedro.io.core.AbstractDataSet, kedro.io.core.ExistsMixin

SparkDataSet loads and saves Spark data frames.

Example:
    from pyspark.sql import SparkSession
    from pyspark.sql.types import (StructField, StringType,
                                   IntegerType, StructType)

    from kedro.contrib.io.pyspark import SparkDataSet

    schema = StructType([StructField("name", StringType(), True),
                         StructField("age", IntegerType(), True)])

    data = [('Alex', 31), ('Bob', 12), ('Clarke', 65), ('Dave', 29)]

    spark_df = SparkSession.builder.getOrCreate() \
                           .createDataFrame(data, schema)

    data_set = SparkDataSet(filepath="test_data")
    data_set.save(spark_df)
    reloaded = data_set.load()

    reloaded.take(4)
Methods

- SparkDataSet.__init__(filepath[, …]) – Creates a new instance of SparkDataSet.
- SparkDataSet.exists() – Checks whether a data set’s output already exists by calling the provided _exists() method.
- SparkDataSet.from_config(name, config[, …]) – Create a data set instance using the configuration provided.
- SparkDataSet.load() – Loads data by delegation to the provided load method.
- SparkDataSet.save(data) – Saves data by delegation to the provided save method.
__init__(filepath, file_format='parquet', load_args=None, save_args=None)

Creates a new instance of SparkDataSet.

Parameters:
- filepath (str) – Path to a Spark data frame.
- file_format (str) – File format used during load and save operations. These are the formats supported by the running SparkContext and include parquet and csv. For a list of supported formats, please refer to the Apache Spark documentation at https://spark.apache.org/docs/latest/sql-programming-guide.html
- load_args (Optional[Dict[str, Any]]) – Load args passed to the Spark DataFrameReader load method. They depend on the selected file format. You can find a list of read options for each supported format in the Spark DataFrame read documentation: https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame
- save_args (Optional[Dict[str, Any]]) – Save args passed to the Spark DataFrame write options. As with load_args, they depend on the selected file format. You can pass mode and partitionBy to specify your overwrite mode and partitioning respectively; a usage sketch follows this list. You can find a list of options for each format in the Spark DataFrame write documentation: https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame

Return type: None
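As a brief, non-authoritative sketch of how these arguments combine, the following constructs a CSV-backed data set. The file path and option values are illustrative assumptions; the options themselves are standard Spark DataFrameReader/DataFrameWriter options:

    from kedro.contrib.io.pyspark import SparkDataSet

    # Hypothetical CSV data set; "data/cars.csv" is an illustrative path.
    cars = SparkDataSet(
        filepath="data/cars.csv",
        file_format="csv",
        # Standard Spark read options: treat the first row as a header
        # and infer column types from the data.
        load_args={"header": True, "inferSchema": True},
        # Standard Spark write options: overwrite any existing output
        # and write a header row.
        save_args={"mode": "overwrite", "header": True},
    )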
exists()

Checks whether a data set’s output already exists by calling the provided _exists() method.

Return type: bool
Returns: Flag indicating whether the output already exists.
Raises: DataSetError – when the underlying exists method raises an error.
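As an illustrative sketch (reusing the data_set and spark_df names from the class example above), exists() can serve as a guard before writing:

    # Only save if no output is present at the target path yet.
    if not data_set.exists():
        data_set.save(spark_df)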
classmethod from_config(name, config, load_version=None, save_version=None)

Create a data set instance using the configuration provided.

Parameters:
- name (str) – Data set name.
- config (Dict[str, Any]) – Data set config dictionary.
- load_version (Optional[str]) – Version string to be used for the load operation if the data set is versioned. Has no effect on the data set if versioning was not enabled.
- save_version (Optional[str]) – Version string to be used for the save operation if the data set is versioned. Has no effect on the data set if versioning was not enabled.

Return type: AbstractDataSet
Returns: An instance of an AbstractDataSet subclass.
Raises: DataSetError – When the function fails to create the data set from its config.
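A minimal sketch of config-driven instantiation, assuming the config dictionary follows the usual data catalog shape with a type entry plus the constructor arguments; the data set name and file path below are illustrative:

    from kedro.contrib.io.pyspark import SparkDataSet

    config = {
        "type": "kedro.contrib.io.pyspark.SparkDataSet",
        "filepath": "data/example_data",  # illustrative path
        "file_format": "parquet",
    }
    # Builds a SparkDataSet from the dictionary above; raises
    # DataSetError if the config cannot be turned into a data set.
    data_set = SparkDataSet.from_config("example_data", config)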
load()

Loads data by delegation to the provided load method.

Return type: Any
Returns: Data returned by the provided load method.
Raises: DataSetError – When the underlying load method raises an error.
save(data)

Saves data by delegation to the provided save method.

Parameters: data (Any) – the value to be saved by the provided save method.
Raises: DataSetError – when the underlying save method raises an error.
Return type: None
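Finally, a sketch of how save_args set at construction time shape what save(data) writes. The target path and partition column are assumptions, with spark_df as created in the class example:

    from kedro.contrib.io.pyspark import SparkDataSet

    # Write parquet output partitioned by the "age" column,
    # overwriting anything already at the target path.
    partitioned = SparkDataSet(
        filepath="data/people",
        save_args={"mode": "overwrite", "partitionBy": ["age"]},
    )
    partitioned.save(spark_df)
    reloaded = partitioned.load()  # reads the partitioned data back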