rocelib.robustness_evaluations package

Submodules

rocelib.robustness_evaluations.DeltaRobustnessEvaluator module

class rocelib.robustness_evaluations.DeltaRobustnessEvaluator.DeltaRobustnessEvaluator(ct)[source]

Bases: ModelChangesRobustnessEvaluator

A robustness evaluator that uses a Mixed-Integer Linear Programming (MILP) approach to evaluate the robustness of a model’s predictions when perturbations are applied.

This class inherits from ModelChangesRobustnessEvaluator and uses the Gurobi optimizer to determine if the model’s prediction remains stable under perturbations.

task

The task to solve, inherited from ModelChangesRobustnessEvaluator.

Type: Task

opt

An optimizer instance for setting up and solving the MILP problem.

Type: OptSolver

evaluate(instance, desired_output=1, delta=0.5, bias_delta=0, M=1000000000, epsilon=0.0001)[source]

Evaluates whether the model’s prediction for a given instance is robust to changes in the input.

@param instance: The instance to evaluate.
@param desired_output: The desired output for the model (0 or 1). The evaluation checks whether the model’s output matches this value.
@param delta: The maximum allowable perturbation in the input features.
@param bias_delta: Additional bias to apply to the delta changes.
@param M: A large constant used in the MILP formulation to model constraints.
@param epsilon: A small constant used to ensure numerical stability.
@return: A boolean indicating whether the model’s prediction is robust given the desired output.
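The MILP check itself requires Gurobi and also covers non-linear (e.g. ReLU) networks, but for a plain linear model the worst case over all bounded perturbations has a closed form, which illustrates what `evaluate` decides. The sketch below is a hypothetical helper, not part of the rocelib API; since the base class concerns model changes, it treats `delta` as bounding per-weight perturbations and `bias_delta` as bounding the bias shift.

```python
# Illustrative sketch only: for score(x) = w . x + b, the worst case
# over all perturbations |dw_i| <= delta, |db| <= bias_delta is found
# by shifting each weight against its feature's sign.
# `is_delta_robust` is a hypothetical name, not the rocelib API.

def is_delta_robust(x, w, b, desired_output=1, delta=0.5, bias_delta=0.0):
    """True if the predicted class survives every perturbed model
    within the delta/bias_delta box."""
    if desired_output == 1:
        # Minimise the score: each weight moves to shrink its
        # contribution, and the bias is pushed down.
        worst = sum(min((wi - delta) * xi, (wi + delta) * xi)
                    for wi, xi in zip(w, x)) + (b - bias_delta)
        return worst > 0
    # For the negative class, maximise the score instead.
    worst = sum(max((wi - delta) * xi, (wi + delta) * xi)
                for wi, xi in zip(w, x)) + (b + bias_delta)
    return worst <= 0
```

For example, with w = [3, 1], x = [1, 2], b = 0 the prediction is robust for delta = 0.5 (worst-case score 3.5) but not for delta = 3 (worst-case score -4). For linear models this closed form agrees exactly with the MILP optimum; for networks with ReLU activations the MILP formulation (using the big-M constant) is genuinely needed.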

rocelib.robustness_evaluations.ModelChangesRobustnessEvaluator module

class rocelib.robustness_evaluations.ModelChangesRobustnessEvaluator.ModelChangesRobustnessEvaluator(ct)[source]

Bases: ABC

Abstract base class for evaluating the robustness of model predictions with respect to model changes.

This class defines an interface for evaluating how robust a model’s predictions are when the model parameters are changed.

abstract evaluate(instance, neg_value=0)[source]

Abstract method to evaluate the robustness of a model’s prediction on a given instance.

Must be implemented by subclasses.

@param instance: The instance for which to evaluate robustness. This could be a single data point for the model.
@param neg_value: The value considered negative in the target variable. Used to determine if the counterfactual flips the prediction.
@return: Result of the robustness evaluation. The return type should be defined by the subclass.
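A subclass only has to supply `evaluate`. The self-contained sketch below mirrors the abstract interface locally (so it runs without rocelib) and implements a brute-force evaluator for a linear model: it tests every corner of the weight-perturbation box, which suffices because the score is linear in the weights. All class and parameter names here are illustrative, not rocelib’s.

```python
from abc import ABC, abstractmethod
from itertools import product

class RobustnessEvaluatorSketch(ABC):
    """Local stand-in mirroring ModelChangesRobustnessEvaluator."""

    @abstractmethod
    def evaluate(self, instance, neg_value=0):
        """Return True if the prediction on `instance` is robust."""

class CornerEvaluator(RobustnessEvaluatorSketch):
    """Checks a linear model against every corner of the +/- delta
    weight-perturbation box; corners are the worst cases because the
    score is linear in the weights."""

    def __init__(self, weights, bias, delta):
        self.weights, self.bias, self.delta = weights, bias, delta

    def evaluate(self, instance, neg_value=0):
        def predict(ws):
            s = sum(w * x for w, x in zip(ws, instance)) + self.bias
            return 1 if s > 0 else neg_value

        base = predict(self.weights)
        for corner in product((-self.delta, self.delta),
                              repeat=len(self.weights)):
            perturbed = [w + d for w, d in zip(self.weights, corner)]
            if predict(perturbed) != base:
                return False  # some admissible model change flips the class
        return True
```

Note the corner enumeration is exponential in the number of weights, which is one reason rocelib formulates the check as a MILP instead.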

rocelib.robustness_evaluations.ModelChangesRobustnessScorer module

class rocelib.robustness_evaluations.ModelChangesRobustnessScorer.ModelChangesRobustnessScorer(ct)[source]

Bases: ABC

Abstract base class for scoring the robustness of model predictions with respect to counterfactuals.

This class defines an interface for assigning a robustness score to a model’s predictions when the model parameters are changed.

abstract score(instance, neg_value=0)[source]

Abstract method to calculate the robustness score for a model’s prediction on a given instance.

Must be implemented by subclasses.

@param instance: The instance for which to calculate the robustness score. This could be a single data point for the model.
@param neg_value: The value considered negative in the target variable. Used to determine if the counterfactual flips the prediction.
@return: The calculated robustness score. The return type should be defined by the subclass.
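Where an evaluator returns a yes/no verdict, a scorer returns a number. A hypothetical scorer (self-contained, not the rocelib API) might report the fraction of admissible model changes that preserve the prediction, approximated here for a linear model by enumerating the corners of the weight-perturbation box:

```python
from itertools import product

class CornerRobustnessScorer:
    """Scores a linear model's prediction as the fraction of +/- delta
    weight-box corners that preserve the predicted class. Illustrative
    sketch, not the rocelib interface."""

    def __init__(self, weights, bias, delta):
        self.weights, self.bias, self.delta = weights, bias, delta

    def score(self, instance, neg_value=0):
        def predict(ws):
            s = sum(w * x for w, x in zip(ws, instance)) + self.bias
            return 1 if s > 0 else neg_value

        base = predict(self.weights)
        corners = list(product((-self.delta, self.delta),
                               repeat=len(self.weights)))
        kept = sum(
            predict([w + d for w, d in zip(self.weights, c)]) == base
            for c in corners)
        return kept / len(corners)  # 1.0 means every corner keeps the class
```

A score of 1.0 corresponds to the evaluator’s “robust” verdict on the corners; fractional values grade how close to flipping the prediction is.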

rocelib.robustness_evaluations.ModelChangesRobustnessSetEvaluator module

class rocelib.robustness_evaluations.ModelChangesRobustnessSetEvaluator.ModelChangesRobustnessSetEvaluator(ct, evaluator=<class 'rocelib.robustness_evaluations.ModelChangesRobustnessEvaluator.ModelChangesRobustnessEvaluator'>)[source]

Bases: object

Class for evaluating the robustness of model predictions over a set of instances.

This class uses a specified evaluator to assess the robustness of model predictions for multiple instances.

task

The task for which robustness is being evaluated.

Type: Task

evaluator

An instance of a robustness evaluator used to assess each instance.

Type: ModelChangesRobustnessEvaluator

evaluate(instances, neg_value=0)[source]

Evaluates the robustness of model predictions for a set of instances.

@param instances: A DataFrame containing the instances to evaluate.
@param neg_value: The value considered negative in the target variable, used to evaluate the robustness of the model’s prediction.
@return: A DataFrame containing the robustness evaluation results for each instance.
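The set evaluator is essentially a loop that delegates to a per-instance evaluator. The sketch below shows that delegation pattern; plain lists stand in for the pandas DataFrames rocelib uses, and both class names are illustrative.

```python
class SetEvaluatorSketch:
    """Applies a per-instance robustness evaluator to a batch of
    instances, returning one verdict per instance, in order."""

    def __init__(self, evaluator):
        # `evaluator` is anything exposing .evaluate(instance, neg_value).
        self.evaluator = evaluator

    def evaluate(self, instances, neg_value=0):
        return [self.evaluator.evaluate(inst, neg_value=neg_value)
                for inst in instances]

class ThresholdEvaluator:
    """Toy per-instance evaluator: 'robust' iff the feature sum is
    positive. Stands in for e.g. DeltaRobustnessEvaluator."""

    def evaluate(self, instance, neg_value=0):
        return sum(instance) > 0
```

For example, `SetEvaluatorSketch(ThresholdEvaluator()).evaluate([[1.0, 2.0], [-3.0, 1.0]])` yields one boolean per instance, matching the per-row results the DataFrame version collects.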

Module contents