abacusai.batch_prediction_version

Module Contents

Classes

BatchPredictionVersion

Batch Prediction Version

class abacusai.batch_prediction_version.BatchPredictionVersion(client, batchPredictionVersion=None, batchPredictionId=None, status=None, deploymentId=None, modelId=None, modelVersion=None, predictionsStartedAt=None, predictionsCompletedAt=None, globalPredictionArgs=None, totalPredictions=None, failedPredictions=None, databaseConnectorId=None, databaseOutputConfiguration=None, explanations=None, fileConnectorOutputLocation=None, fileOutputFormat=None, connectorType=None, legacyInputLocation=None, error=None, csvInputPrefix=None, csvPredictionPrefix=None, csvExplanationsPrefix=None, batchInputs={})

Bases: abacusai.return_class.AbstractApiClass

Batch Prediction Version

Parameters
  • client (ApiClient) – An authenticated API Client instance

  • batchPredictionVersion (str) – The unique identifier of the batch prediction version

  • batchPredictionId (str) – The unique identifier of the batch prediction

  • status (str) – The current status of the batch prediction

  • deploymentId (str) – The deployment used to make the predictions

  • modelId (str) – The model used to make the predictions

  • modelVersion (str) – The model version used to make the predictions

  • predictionsStartedAt (str) – Predictions start date and time

  • predictionsCompletedAt (str) – Predictions completion date and time

  • globalPredictionArgs (dict) – Argument(s) passed to every prediction call

  • totalPredictions (int) – Number of predictions performed in this batch prediction job

  • failedPredictions (int) – Number of predictions that failed

  • databaseConnectorId (str) – The database connector to write the results to

  • databaseOutputConfiguration (dict) – The configuration describing where in the database connector the batch predictions are written

  • explanations (bool) – If true, explanations for each prediction were created

  • fileConnectorOutputLocation (str) – The file connector location where the batch prediction output is written

  • fileOutputFormat (str) – The format of the batch prediction output (CSV or JSON)

  • connectorType (str) – Null if writing to internal console, else FEATURE_GROUP | FILE_CONNECTOR | DATABASE_CONNECTOR

  • legacyInputLocation (str) – The location of the input data

  • error (str) – Relevant error if the status is FAILED

  • csvInputPrefix (str) – A prefix to prepend to the input columns, only applies when output format is CSV

  • csvPredictionPrefix (str) – A prefix to prepend to the prediction columns, only applies when output format is CSV

  • csvExplanationsPrefix (str) – A prefix to prepend to the explanation columns, only applies when output format is CSV

  • batchInputs (PredictionInput) – Inputs to the batch prediction
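
Example (a minimal sketch of constructing and inspecting an instance; it assumes an authenticated ApiClient, an existing batch prediction version ID, that the client exposes describe_batch_prediction_version, and that attributes follow the snake_case form of the constructor parameters):

    from abacusai import ApiClient

    client = ApiClient(api_key='YOUR_API_KEY')  # assumed authentication flow
    # describe_batch_prediction_version is assumed to return a BatchPredictionVersion
    bp_version = client.describe_batch_prediction_version('YOUR_BATCH_PREDICTION_VERSION_ID')
    print(bp_version.status, bp_version.total_predictions)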

__repr__(self)

Return repr(self).

to_dict(self)

Get a dict representation of the parameters in this class

Returns

The dict value representation of the class parameters

Return type

dict
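
Example (a minimal sketch of serializing the version metadata, e.g. for logging; the exact key names of the returned dict are not specified in this section):

    info = bp_version.to_dict()
    print(info)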

download_batch_prediction_result_chunk(self, offset=0, chunk_size=10485760)

Returns a stream containing the batch prediction results

Parameters
  • offset (int) – The offset to read from

  • chunk_size (int) – The maximum amount of data to read, in bytes
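
Example (a minimal sketch of streaming the results in chunks to a local file; it assumes the returned stream exposes a read() method and that an empty chunk signals the end of the data):

    offset = 0
    chunk_size = 10 * 1024 * 1024  # matches the 10485760-byte default
    with open('/tmp/results', 'wb') as out:
        while True:
            stream = bp_version.download_batch_prediction_result_chunk(offset=offset, chunk_size=chunk_size)
            data = stream.read()  # read() on the returned stream is an assumption
            if not data:
                break
            out.write(data)
            offset += len(data)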

get_batch_prediction_connector_errors(self)

Returns a stream containing the batch prediction database connection write errors, if any writes failed to the database connector

Parameters

batch_prediction_version (str) – The unique identifier of the batch prediction job to get the errors for
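
Example (a minimal sketch of inspecting database connector write errors; it assumes the returned stream exposes a read() method):

    errors = bp_version.get_batch_prediction_connector_errors()
    print(errors.read())  # read() on the returned stream is an assumption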

refresh(self)

Calls describe and refreshes the current object’s fields

Returns

The current object

Return type

BatchPredictionVersion
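
Example (a minimal sketch of refreshing the object in place; refresh returns the current object, so it can also be chained):

    bp_version.refresh()
    print(bp_version.status)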

describe(self)

Describes a batch prediction version

Parameters

batch_prediction_version (str) – The unique identifier of the batch prediction version

Returns

The batch prediction version.

Return type

BatchPredictionVersion

download_result_to_file(self, file)

Downloads the batch prediction version results to a local file.

Parameters

file (file object) – A file object opened in binary mode, e.g., file=open('/tmp/output', 'wb').
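
Example (a minimal sketch mirroring the parameter description: open the target file in binary mode and pass the file object):

    with open('/tmp/output', 'wb') as f:
        bp_version.download_result_to_file(f)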

wait_for_predictions(self, timeout=1200)

A waiting call until the batch prediction version is ready.

Parameters

timeout (int, optional) – The maximum time, in seconds, to wait for the batch prediction version to finish; if it does not finish within the allocated time, the call is considered to have timed out. Defaults to 1200 seconds.
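
Example (a minimal sketch of blocking until the predictions finish; the timeout value here is an arbitrary illustration):

    bp_version.wait_for_predictions(timeout=3600)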

get_status(self)

Gets the status of the batch prediction version.

Returns

A string describing the status of the batch prediction version, e.g., pending, complete.

Return type

str
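
Example (a minimal sketch of polling the status without blocking; the exact status strings are not enumerated in this section, and 'FAILED' is taken from the error parameter description above):

    status = bp_version.get_status()
    if status == 'FAILED':
        print(bp_version.error)  # error attribute assumed from the constructor parameters
    else:
        print(status)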