{{ meta.report_title }}{{ meta.report_subtitle }}
Generated on {{ report_creation_datetime.strftime("%d %b %Y, %H:%M") }} ● {{ "{:,d}".format(meta.rows_original) }} original samples, {{ "{:,d}".format(meta.rows_synthetic) }} synthetic samples
{% if is_model_report %}
Accuracy
{{html_assets['info.svg']}}
{{ "{:.1%}".format(metrics.accuracy.overall) }}
({{ "{:.1%}".format(metrics.accuracy.overall_max) }})
|
Similarity
{{html_assets['info.svg']}}
|
Distances
{{html_assets['info.svg']}}
|
Correlations
{{ correlation_matrix_html_chart }}
Univariate Distributions
{% for uni_plots_row in univariate_html_charts | batch(3, ' ') %}
{% for uni_plot in uni_plots_row %}
{{ uni_plot }}
{% endfor %}
{% endfor %}
Bivariate Distributions
{% for biv_plots_row in bivariate_html_charts_tgt | batch(3, ' ') %}
{% for biv_plot in biv_plots_row %}
{{ biv_plot }}
{% endfor %}
{% endfor %}
Bivariate Distributions for Context
{% for biv_plots_row in bivariate_html_charts_ctx | batch(3, ' ') %}
{% for biv_plot in biv_plots_row %}
{{ biv_plot }}
{% endfor %}
{% endfor %}
Coherence: Auto-correlations
{% for biv_plots_row in bivariate_html_charts_nxt | batch(3, ' ') %}
{% for biv_plot in biv_plots_row %}
{{ biv_plot }}
{% endfor %}
{% endfor %}
Coherence: Sequences per Distinct Category
{% for seq_per_cat_plots_row in sequences_per_distinct_category_html_charts | batch(3, ' ') %}
{% for seq_per_cat_plot in seq_per_cat_plots_row %}
{{ seq_per_cat_plot }}
{% endfor %}
{% endfor %}
Coherence: Distinct Categories per Sequence
{% for cats_per_seq_plots_row in distinct_categories_per_sequence_html_charts | batch(3, ' ') %}
{% for cats_per_seq_plot in cats_per_seq_plots_row %}
{{ cats_per_seq_plot }}
{% endfor %}
{% endfor %}
Accuracy
Column | Univariate | {% if 'bivariate' in accuracy_table_by_column %}Bivariate | {% endif %} {% if 'trivariate' in accuracy_table_by_column %}Trivariate | {% endif %} {% if 'coherence' in accuracy_table_by_column %}Coherence | {% endif %}
---|---|{% if 'bivariate' in accuracy_table_by_column %}---|{% endif %}{% if 'trivariate' in accuracy_table_by_column %}---|{% endif %}{% if 'coherence' in accuracy_table_by_column %}---|{% endif %}
{# assuming accuracy_table_by_column is a pandas DataFrame with one row per column #}
{% for _, row in accuracy_table_by_column.iterrows() %}
{{ row['column'] }} | {{ "{:.1%}".format(row['univariate']) }} | {% if 'bivariate' in accuracy_table_by_column %}{{ "{:.1%}".format(row['bivariate']) }} | {% endif %} {% if 'trivariate' in accuracy_table_by_column %}{{ "{:.1%}".format(row['trivariate']) }} | {% endif %} {% if 'coherence' in accuracy_table_by_column %}{{ "{:.1%}".format(row['coherence']).replace('nan%', '-') }} | {% endif %}
{% endfor %}
Total |
{{ "{:.1%}".format(metrics.accuracy.univariate) }} ({{ "{:.1%}".format(metrics.accuracy.univariate_max) }}) |
{% if 'bivariate' in accuracy_table_by_column %}
{{ "{:.1%}".format(metrics.accuracy.bivariate) }} ({{ "{:.1%}".format(metrics.accuracy.bivariate_max) }}) |
{% endif %}
{% if 'trivariate' in accuracy_table_by_column %}
{{ "{:.1%}".format(metrics.accuracy.trivariate) }} ({{ "{:.1%}".format(metrics.accuracy.trivariate_max) }}) |
{% endif %}
{% if 'coherence' in accuracy_table_by_column %}
{{ "{:.1%}".format(metrics.accuracy.coherence) }} ({{ "{:.1%}".format(metrics.accuracy.coherence_max) }}) |
{% endif %}
{{ accuracy_matrix_html_chart }}
Explainer
Accuracy of synthetic data is assessed by comparing the distributions of the synthetic data (shown in green) and the original data (shown in gray).
For each distribution plot, we sum up the deviations across all categories to obtain the so-called total variation distance (TVD). The accuracy is then reported as 100% - TVD.
These accuracies are calculated for all univariate, bivariate and trivariate distributions. The final accuracy score is the average across all of these.
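To illustrate the metric described above, the sketch below computes a univariate accuracy as 1 - TVD for a single categorical column. It is a minimal sketch, not the report's implementation: it assumes pandas Series inputs, uses the standard definition of TVD as half the summed absolute frequency deviations, and omits the binning of numeric columns.

```python
# Minimal sketch (not the report's implementation): univariate accuracy
# as 1 - TVD between original and synthetic category distributions.
import pandas as pd

def univariate_accuracy(original: pd.Series, synthetic: pd.Series) -> float:
    # empirical category frequencies of each column
    p = original.value_counts(normalize=True)
    q = synthetic.value_counts(normalize=True)
    # align on the union of categories; categories absent on one side count as 0
    p, q = p.align(q, fill_value=0.0)
    # total variation distance: half the sum of absolute frequency deviations
    tvd = (p - q).abs().sum() / 2
    return 1.0 - tvd

# toy example: one sample shifted from category "a" to "b"
original = pd.Series(["a", "a", "b", "c"])
synthetic = pd.Series(["a", "b", "b", "c"])
print(f"{univariate_accuracy(original, synthetic):.1%}")  # 75.0%
```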
Similarity
{{ similarity_pca_html_chart }}
Explainer
These plots show the first three principal components of the training samples, synthetic samples, and (if available) holdout samples within the embedding space. The black dots mark the centroids of the respective sample sets.
The similarity metric measures the cosine similarity between these centroids. We expect it to be close to 1, indicating that the synthetic samples are as similar to the training samples as the holdout samples are.
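As a rough sketch of the centroid comparison described above (and not the exact embedding or PCA pipeline behind these plots), the snippet below computes the cosine similarity between the centroids of two sample sets that are assumed to be already embedded as numeric arrays.

```python
# Minimal sketch, assuming samples are already embedded as numeric arrays of
# shape (n_samples, n_dims); the embedding model and the PCA projection used
# for the plots above are not reproduced here.
import numpy as np

def centroid_similarity(training: np.ndarray, synthetic: np.ndarray) -> float:
    # centroid = mean embedding vector of each sample set
    c_trn = training.mean(axis=0)
    c_syn = synthetic.mean(axis=0)
    # cosine similarity between the two centroids
    return float(c_trn @ c_syn / (np.linalg.norm(c_trn) * np.linalg.norm(c_syn)))

# toy example: both sets drawn from the same distribution -> similarity near 1
rng = np.random.default_rng(42)
training = rng.normal(loc=1.0, size=(1_000, 8))
synthetic = rng.normal(loc=1.0, size=(1_000, 8))
print(round(centroid_similarity(training, synthetic), 4))
```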
Distances
Metric | Synthetic vs. Training Data | {% if metrics.distances.ims_holdout is not none %}Synthetic vs. Holdout Data | Training vs. Holdout Data |{% endif %}
---|---|{% if metrics.distances.ims_holdout is not none %}---|---|{% endif %}
Identical Matches | {{ "{:.1%}".format(metrics.distances.ims_training) }} | {% if metrics.distances.ims_holdout is not none %}{{ "{:.1%}".format(metrics.distances.ims_holdout) }} | {{ "{:.1%}".format(metrics.distances.ims_trn_hol) if metrics.distances.ims_trn_hol is not none else "N/A" }} | {% endif %}
DCR Average | {{ "{:.3f}".format(metrics.distances.dcr_training) }} | {% if metrics.distances.dcr_holdout is not none %}{{ "{:.3f}".format(metrics.distances.dcr_holdout) }} | {{ "{:.3f}".format(metrics.distances.dcr_trn_hol) if metrics.distances.dcr_trn_hol is not none else "N/A" }} | {% endif %}
DCR Share | {{ "{:.1%}".format(metrics.distances.dcr_share) }} | {{ "{:.1%}".format(1 - metrics.distances.dcr_share) }} | |
NNDR Min10 | {{ "{:.2e}".format(metrics.distances.nndr_training) if metrics.distances.nndr_training < 0.01 else "{:.3f}".format(metrics.distances.nndr_training) }} | {% if metrics.distances.nndr_holdout is not none %}{{ "{:.2e}".format(metrics.distances.nndr_holdout) if metrics.distances.nndr_holdout < 0.01 else "{:.3f}".format(metrics.distances.nndr_holdout) }} | {% endif %} |
{{ distances_dcr_html_chart }}
Explainer
Synthetic data should be as close to the original training samples as it is to the original holdout samples, which serve as a reference.
This can be assessed empirically by measuring the distances from synthetic samples to their closest original samples, where the training and holdout sets are sampled to be of equal size.
A green line that lies significantly to the left of the dark gray line implies that synthetic samples are closer to the training samples than to the holdout samples, indicating that the model has overfitted to the training data.
A green line that overlaps the dark gray line confirms that the trained model captures the general patterns that are found in the training samples just as well as in the holdout samples.
The DCR share indicates the proportion of synthetic samples that are closer to a training sample than to a holdout sample. Ideally, this value should not significantly exceed 50%, as a higher value could indicate overfitting.
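To make the DCR-based reasoning above concrete, here is a minimal sketch that computes the distance of each synthetic record to its closest training and holdout record and derives the DCR share from those distances. It assumes records are already encoded as numeric arrays and uses plain Euclidean distances; the encoding, distance metric, and tie handling used for this report may differ (ties are split evenly here, which is an assumption, so that a perfectly symmetric result lands at 50%).

```python
# Minimal sketch, assuming records are encoded as numeric arrays of shape
# (n_records, n_features); the report's encoding and distance metric may differ.
import numpy as np

def dcr(synthetic: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Distance of each synthetic record to its closest reference record."""
    # pairwise Euclidean distances, then the minimum per synthetic record
    diffs = synthetic[:, None, :] - reference[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1)).min(axis=1)

def dcr_share(synthetic: np.ndarray, training: np.ndarray, holdout: np.ndarray) -> float:
    """Share of synthetic records closer to a training record than to a holdout record."""
    dcr_trn = dcr(synthetic, training)
    dcr_hol = dcr(synthetic, holdout)
    closer = (dcr_trn < dcr_hol).sum()
    ties = (dcr_trn == dcr_hol).sum()
    # ties are split evenly (an assumption), so symmetric data yields ~50%
    return float((closer + 0.5 * ties) / len(synthetic))

# toy example: training and holdout drawn from the same distribution
rng = np.random.default_rng(7)
training = rng.normal(size=(500, 4))
holdout = rng.normal(size=(500, 4))
synthetic = rng.normal(size=(500, 4))
print(f"{dcr_share(synthetic, training, holdout):.1%}")  # expected to be around 50%
```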