sensortoolkit.param._targets.ParameterTargets
- class ParameterTargets(param)[source]
Bases:
object
Assign and retrieve parameter performance metrics and target values for the evaluation of air sensor performance for devices measuring the indicated parameter.
Preset performance metrics and target values are included for sensors measuring either fine particulate matter (PM2.5) or ozone (O3), utilizing U.S. EPA’s recommended performance metrics and target values for devices measuring these pollutants.
- Parameters
param (sensortoolkit.Parameter object) – The parameter against which air sensor performance is evaluated using the metrics and target values included in this module.
Methods
- get_all_metrics() – Returns all performance metrics and target values associated with the Parameter object.
- get_metric(metric_name) – Return details about a single performance metric (description, target range, goal value, metric units).
- set_metric(metric_category, metric_name, **kwargs) – Assign a new performance metric to an existing metric category.
- set_metric_category(metric_category, metric_names=None) – Assign a new metric category.
- get_all_metrics()[source]
Returns all performance metrics and target values associated with the Parameter object.
- Returns
A dictionary containing the performance metric categories, metric names, and target values.
- Return type
dict
- get_metric(metric_name)[source]
Return details about a single performance metric (description, target range, goal value, metric units).
- Parameters
metric_name (str) – The name of the metric to return information about. Must be contained within the list of metrics indicated by get_all_metrics().
- Raises
KeyError – Raised if the passed metric name is not in the dictionary of configured metrics.
- Returns
A dictionary containing a textual description of the metric (key ‘description’), the target range for the metric (key ‘bounds’), the goal/ideal achievable performance metric value (key ‘goal’), and the units associated with the metric, if applicable (key ‘metric_units’).
- Return type
metric (dict)
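The lookup described above amounts to a keyed dictionary access that raises KeyError for unconfigured names. Below is a minimal self-contained sketch of that behavior, not sensortoolkit’s actual implementation; the class name, metric entry, and target values are illustrative placeholders only:

```python
# Hypothetical stand-in illustrating the get_metric() lookup behavior.
# The 'RMSE' entry and its values are placeholders, not sensortoolkit's
# preset targets.

class TargetsSketch:
    def __init__(self):
        # Metric categories map metric names to detail dictionaries.
        self._metrics = {
            'Error': {'RMSE': {'description': 'Root mean square error',
                               'bounds': (0, 7.0),
                               'goal': 0,
                               'metric_units': 'ug/m^3'}}
        }

    def get_all_metrics(self):
        return self._metrics

    def get_metric(self, metric_name):
        # Search each category for the named metric; raise KeyError
        # if it is not among the configured metrics.
        for category in self._metrics.values():
            if metric_name in category:
                return category[metric_name]
        raise KeyError(f'{metric_name} not in list of configured metrics')

targets = TargetsSketch()
rmse = targets.get_metric('RMSE')
print(rmse['bounds'])  # (0, 7.0)
```

Because every metric detail dictionary carries the same four keys (‘description’, ‘bounds’, ‘goal’, ‘metric_units’), callers can consume the return value uniformly regardless of which category the metric lives in.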
- set_metric(metric_category, metric_name, **kwargs)[source]
Assign a new performance metric to an existing metric category.
- Parameters
metric_category (str) – The name of an existing performance metric category within the dictionary of metric values accessed via get_all_metrics() (e.g., ‘Bias’ or ‘Error’).
metric_name (str) – The name of a new performance metric that will be added to the indicated metric category.
- Keyword Arguments
description (str) – A textual description of the metric.
bounds (Two-element tuple) – The target range (lower bound, upper bound) for the metric.
goal (int or float) – The goal/ideal achievable performance metric value.
metric_units (str) – The units associated with the metric (if applicable).
- Raises
KeyError – Raised if the passed metric category is not in the list of metric categories (keys) in the dictionary of metric values accessed via get_all_metrics().
- Returns
None.
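The behavior above can be sketched as a guarded dictionary update: the category must already exist, and the keyword arguments populate the new metric’s detail dictionary. This is a minimal illustrative sketch, not sensortoolkit’s implementation; the function and the ‘NRMSE’ values are hypothetical:

```python
# Hypothetical stand-in for set_metric(): attach a new metric (with
# optional description/bounds/goal/metric_units keyword arguments) to an
# existing category, raising KeyError for an unknown category.

def set_metric(metrics, metric_category, metric_name, **kwargs):
    if metric_category not in metrics:
        raise KeyError(f'{metric_category} not in list of metric categories')
    # Build the detail dictionary from the recognized keyword arguments.
    metrics[metric_category][metric_name] = {
        'description': kwargs.get('description'),
        'bounds': kwargs.get('bounds'),
        'goal': kwargs.get('goal'),
        'metric_units': kwargs.get('metric_units'),
    }

metrics = {'Bias': {}, 'Linearity': {}, 'Error': {}, 'Precision': {}}
set_metric(metrics, 'Error', 'NRMSE',
           description='Normalized root mean square error',
           bounds=(0, 30), goal=0, metric_units='%')
print(metrics['Error']['NRMSE']['bounds'])  # (0, 30)
```

Note that this method only extends an existing category; adding a brand-new category is the job of set_metric_category(), described next.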
- set_metric_category(metric_category, metric_names=None)[source]
Assign a new metric category. Optionally, users can also add new metrics and associated target values to this new category.
Example
Say we are working with a parameter (pollutant) that is neither PM25 nor O3, so the existing set of performance targets for these pollutants (metrics and target values recommended by U.S. EPA) is not utilized. Let’s also assume that the name of our parameter object is param_obj. After instantiating the parameter object, we may call the get_all_metrics() method to display all of the performance metrics and target values for our parameter. Since no preset metrics were specified, we will see the following printed to the console:

>>> param_obj.PerformanceTargets.get_all_metrics()
{'Bias': {}, 'Linearity': {}, 'Error': {}, 'Precision': {}}
We can add a new performance evaluation metric category, as well as any metrics and associated target values we may wish to include in the new category, using the set_metric_category() method. For instance, say we wish to add a category ‘Data Quality’ and a metric within this category called ‘Uptime’ with a target value of 70% uptime or greater. This new category and metric can be added via the following:

>>> param_obj.PerformanceTargets.set_metric_category(
...     metric_category='Data Quality',
...     metric_names={'Uptime': {'description': 'Measurement uptime',
...                              'bounds': (70, 100),
...                              'goal': 100,
...                              'metric_units': '%'}
...                   }
...     )
If we again call the get_all_metrics() method, we will see that the dictionary has been updated to include the new category and metric name.

>>> param_obj.PerformanceTargets.get_all_metrics()
{'Bias': {},
 'Linearity': {},
 'Error': {},
 'Precision': {},
 'Data_Quality': {'Uptime': {'description': 'Measurement uptime',
                             'bounds': (70, 100),
                             'goal': 100,
                             'metric_units': '%'}
                  }
 }
- Parameters
metric_category (str) – The name of the new performance metric category to add.
metric_names (dict, optional) – A dictionary of metrics (dictionary keys) and a description of each metric (sub-dictionary for each metric containing ‘description’ - a textual description of the metric, ‘bounds’ - the lower and upper bounds of the target range for the specified metric, ‘goal’ - the goal/ideal achievable performance metric value, and ‘metric_units’ - the units associated with the metric if applicable). Defaults to None.
- Raises
TypeError – Raised if metric_names is not a dictionary.
- Returns
None.