Uncertainty Quantification Analysis

This analysis evaluates how well the model quantifies uncertainty in its predictions. Well-calibrated uncertainty estimates produce prediction intervals whose empirical coverage matches their nominal confidence level, so the reported intervals and confidence values can be relied on.
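As a point of reference for the metrics below, interval calibration can be checked by comparing how often the true values fall inside their prediction intervals against the nominal confidence level. The sketch below is illustrative only; the helper and the array names (`y_true`, `lower`, `upper`) are assumptions, not part of the reporting pipeline.

```python
import numpy as np

def empirical_coverage(y_true, lower, upper):
    """Fraction of true values that fall inside their [lower, upper] prediction interval."""
    y_true, lower, upper = map(np.asarray, (y_true, lower, upper))
    return float(np.mean((y_true >= lower) & (y_true <= upper)))

# Intervals built for a nominal 90% confidence level (alpha = 0.1)
alpha = 0.1
coverage = empirical_coverage(y_true=[1.0, 2.5, 3.0, 4.2],
                              lower=[0.6, 2.0, 3.2, 3.9],
                              upper=[1.5, 3.1, 3.8, 4.6])
print(coverage, 1 - alpha)  # for a well-calibrated model these two values are close
```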

Key Findings

Overall Uncertainty Quality
{% if uncertainty_score >= 0.9 -%}
Excellent ({{ (uncertainty_score * 100)|round|int }}%)
{%- elif uncertainty_score >= 0.8 -%}
Good ({{ (uncertainty_score * 100)|round|int }}%)
{%- elif uncertainty_score >= 0.7 -%}
Moderate ({{ (uncertainty_score * 100)|round|int }}%)
{%- elif uncertainty_score >= 0.6 -%}
Fair ({{ (uncertainty_score * 100)|round|int }}%)
{%- else -%}
Limited ({{ (uncertainty_score * 100)|round|int }}%)
{%- endif %}
Coverage Performance
{% if coverage_gap < 0.02 -%}
Excellent ({{ (coverage_gap * 100)|round(1) }}% gap)
{%- elif coverage_gap < 0.05 -%}
Good ({{ (coverage_gap * 100)|round(1) }}% gap)
{%- elif coverage_gap < 0.1 -%}
Moderate ({{ (coverage_gap * 100)|round(1) }}% gap)
{%- elif coverage_gap < 0.15 -%}
Fair ({{ (coverage_gap * 100)|round(1) }}% gap)
{%- else -%}
Poor ({{ (coverage_gap * 100)|round(1) }}% gap)
{%- endif %}
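The coverage gap is read here as the absolute difference between empirical and nominal coverage, |coverage − (1 − α)|. A minimal, self-contained sketch of how such gaps could be computed per confidence level (the input names are illustrative, and the aggregation behind the single reported `coverage_gap` is not specified in this section):

```python
import numpy as np

def coverage_gaps_by_alpha(y_true, intervals_by_alpha):
    """Coverage gap per alpha; intervals_by_alpha maps alpha -> (lower, upper) arrays."""
    y_true = np.asarray(y_true)
    gaps = {}
    for alpha, (lower, upper) in intervals_by_alpha.items():
        covered = np.mean((y_true >= np.asarray(lower)) & (y_true <= np.asarray(upper)))
        gaps[alpha] = abs(float(covered) - (1 - alpha))
    return gaps
```

A single summary gap could then be the mean or the worst of these per-alpha values, depending on how conservative the report needs to be.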
Calibration Quality
{% if calibration_error < 0.02 -%}
Excellent ({{ calibration_error|round(3) }} error)
{%- elif calibration_error < 0.05 -%}
Good ({{ calibration_error|round(3) }} error)
{%- elif calibration_error < 0.1 -%}
Moderate ({{ calibration_error|round(3) }} error)
{%- elif calibration_error < 0.15 -%}
Fair ({{ calibration_error|round(3) }} error)
{%- else -%}
Poor ({{ calibration_error|round(3) }} error)
{%- endif %}
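The exact definition behind `calibration_error` is not spelled out in this section; one common choice for regression-style uncertainty is the mean absolute deviation between each nominal quantile level and the observed fraction of targets at or below the predicted quantile. A sketch under that assumption (`quantile_preds` and `quantile_levels` are illustrative names):

```python
import numpy as np

def quantile_calibration_error(y_true, quantile_preds, quantile_levels):
    """Mean |observed - nominal| coverage across predicted quantiles.

    quantile_preds: array of shape (n_samples, n_quantiles), one column per level.
    """
    y_true = np.asarray(y_true, dtype=float)[:, None]
    observed = np.mean(np.asarray(quantile_preds) >= y_true, axis=0)
    return float(np.mean(np.abs(observed - np.asarray(quantile_levels))))
```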
Critical Alpha Level
α = {{ critical_alpha }} ({{ (critical_alpha_gap * 100)|round(1) }}% gap)
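The critical alpha level flags the confidence level at which the intervals deviate most from their nominal coverage, i.e. where miscalibration is worst. Assuming per-alpha gaps computed as in the earlier sketch, it could be identified with a hypothetical helper like this (not the pipeline's actual code):

```python
def critical_alpha(gaps_by_alpha):
    """Return (alpha, gap) for the alpha level with the largest coverage gap."""
    worst = max(gaps_by_alpha, key=gaps_by_alpha.get)
    return worst, gaps_by_alpha[worst]
```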

Recommendations