{% block content %}
This overview provides a high-level assessment of the model's resilience when data drifts from the baseline distribution to a target distribution.
Performance gap chart data will display here.
Distribution shift chart data will display here.
Feature impact chart data will display here.
{% if performance_gap is defined %}The model experiences a {% if performance_gap > 0.2 %}significant{% elif performance_gap > 0.1 %}moderate{% else %}minor{% endif %} impact when exposed to distribution shifts, with an average performance gap of {{ (performance_gap * 100)|round(1) }}%.{% else %}Information about model performance impact is not available from test results.{% endif %}
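The performance gap reported here is typically the drop in an evaluation metric between the baseline and shifted datasets. A minimal sketch, where the function and metric names are illustrative rather than the report pipeline's actual API:

```python
def performance_gap(metric_fn, baseline_eval, shifted_eval):
    """Drop in a metric between baseline and shifted evaluations.

    Each *_eval is a (y_true, y_pred) pair; a positive gap means the
    model degrades under the shift.
    """
    return metric_fn(*baseline_eval) - metric_fn(*shifted_eval)

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy example: perfect on baseline, half wrong after the shift.
gap = performance_gap(accuracy,
                      ([1, 0, 1, 1], [1, 0, 1, 1]),
                      ([1, 0, 1, 1], [1, 0, 0, 0]))
print(f"{gap * 100:.1f}%")  # 50.0%
```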
{% if avg_dist_shift is defined %}The dataset exhibits {% if avg_dist_shift > 0.5 %}significant{% elif avg_dist_shift > 0.2 %}moderate{% else %}minor{% endif %} distribution shifts, with an average distance metric of {{ avg_dist_shift|round(2) }}.{% else %}Information about distribution shift magnitude is not available from test results.{% endif %}
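The average distance metric can be any divergence between baseline and target feature distributions; the Population Stability Index (PSI) is one common choice. A self-contained sketch, where the binning scheme and the epsilon floor are assumptions of this example:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two samples of one feature.

    Bin edges are derived from the expected (baseline) sample; empty
    bins are floored at a small epsilon so the log stays defined.
    """
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[0], edges[-1] = float("-inf"), float("inf")  # catch out-of-range values

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            for i in range(bins):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [float(i) for i in range(100)]
print(round(psi(baseline, baseline), 4))                 # identical data -> 0.0
print(psi(baseline, [x + 50 for x in baseline]) > 0.25)  # strong shift -> True
```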
{% if sensitive_features|length > 0 %}{{ sensitive_features|length }} features show high sensitivity to distribution shifts.{% else %}No high-sensitivity features were identified in the analysis.{% endif %}
{% if shift_scenarios|length > 0 %}{{ shift_scenarios|length }} different shift scenarios were analyzed to evaluate the model's resilience under various conditions.{% else %}No shift scenarios were available for analysis.{% endif %}
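One simple way to construct a shift scenario is weighted resampling of the existing evaluation set, which skews a feature's distribution without fabricating records. An illustrative sketch; the function name and the power-law weighting rule are assumptions of this example, not the analysis pipeline's method:

```python
import random

def resample_shift(rows, feature, strength=2.0, k=None, seed=0):
    """Skew a dataset toward high values of one feature by weighted
    resampling with replacement: existing records only change in
    frequency, so labels and feature correlations stay intact.
    """
    rng = random.Random(seed)
    weights = [max(r[feature], 0.0) ** strength for r in rows]
    return rng.choices(rows, weights=weights, k=k or len(rows))

rows = [{"x": float(i)} for i in range(1, 11)]      # x uniform over 1..10, mean 5.5
shifted = resample_shift(rows, "x", k=200)
print(sum(r["x"] for r in shifted) / len(shifted))  # mean pulled well above 5.5
```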
Analyze the distribution patterns of the most sensitive features to understand sources of instability.
Enhance training data with examples from underrepresented regions of the input space to improve model performance under distribution shifts.
Set up ongoing monitoring of feature distributions in production to detect shifts early.
Consider feature engineering techniques that are more robust to distribution changes.
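The monitoring recommendation above can be prototyped with a two-sample Kolmogorov-Smirnov check that compares a live window of a feature against its baseline sample. A minimal stdlib sketch; the 0.1 alert threshold is an illustrative assumption to be tuned per feature and window size:

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    the two empirical CDFs, checked on both sides of every jump."""
    a, b = sorted(sample_a), sorted(sample_b)
    gap = 0.0
    for v in a + b:
        for rank in (bisect.bisect_right, bisect.bisect_left):
            gap = max(gap, abs(rank(a, v) / len(a) - rank(b, v) / len(b)))
    return gap

def drift_alert(baseline, live, threshold=0.1):
    """Flag a monitored feature when its live window drifts from the
    baseline sample beyond the threshold."""
    return ks_statistic(baseline, live) > threshold

baseline = [float(i) for i in range(100)]
print(drift_alert(baseline, baseline))                      # no drift -> False
print(drift_alert(baseline, [x + 30.0 for x in baseline]))  # shifted -> True
```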