sconce.models package

Submodules

sconce.models.basic_autoencoder module

class sconce.models.basic_autoencoder.BasicAutoencoder(image_height, image_width, hidden_size, latent_size)[source]

Bases: sconce.models.base.Model

A basic 2D image autoencoder built up of fully connected layers, three each in the encoder and the decoder.

Loss:
This model uses binary cross-entropy for the loss.
Metrics:
None
Parameters:
  • image_height (int) – image height in pixels.
  • image_width (int) – image width in pixels.
  • hidden_size (int) – the number of activations in each of the 4 hidden layers.
  • latent_size (int) – the number of activations in the latent representation (encoder output).
calculate_loss(inputs, outputs, **kwargs)[source]
decode(x_latent)[source]
encode(inputs, **kwargs)[source]
forward(inputs, **kwargs)[source]
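The constructor parameters above fix the shapes of the six fully connected layers. The following is a minimal sketch in plain Python of how the sizes presumably chain together (the function name `layer_shapes` is illustrative, not sconce API; the exact layer layout is an assumption based on the description of three layers each in the encoder and decoder):

```python
def layer_shapes(image_height, image_width, hidden_size, latent_size):
    """Return (in_features, out_features) pairs for each fully connected
    layer of an autoencoder with the layout described above (assumed)."""
    input_size = image_height * image_width  # flattened single-channel image

    # Encoder: input -> hidden -> hidden -> latent (3 layers).
    encoder = [
        (input_size, hidden_size),
        (hidden_size, hidden_size),
        (hidden_size, latent_size),  # latent representation
    ]
    # Decoder mirrors the encoder: latent -> hidden -> hidden -> input (3 layers).
    decoder = [
        (latent_size, hidden_size),
        (hidden_size, hidden_size),
        (hidden_size, input_size),  # reconstruction
    ]
    return encoder, decoder

# For 28x28 MNIST-style images with hidden_size=200 and latent_size=50:
encoder, decoder = layer_shapes(28, 28, 200, 50)
```

Note the 4 hidden layers of `hidden_size` activations referenced in the parameter list: two inside the encoder and two inside the decoder, with the latent layer between them.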

sconce.models.basic_classifier module

class sconce.models.basic_classifier.BasicClassifier(image_height, image_width, image_channels, convolutional_layer_kwargs, fully_connected_layer_kwargs, num_categories=10)[source]

Bases: sconce.models.base.Model

A basic 2D image classifier built up of some number of convolutional layers followed by some number of densely connected layers.

Loss:
This model uses cross-entropy for the loss.
Metrics:
classification_accuracy: [0.0, 1.0] the fraction of correctly predicted labels.
Parameters:
  • image_height (int) – image height in pixels.
  • image_width (int) – image width in pixels.
  • image_channels (int) – number of channels in the input images.
  • convolutional_layer_kwargs (list[dict]) – a list of dictionaries describing the convolutional layers. See Convolution2dLayer for details.
  • fully_connected_layer_kwargs (list[dict]) – a list of dictionaries describing the fully connected layers. See FullyConnectedLayer for details.
  • num_categories (int) – [2, inf) the number of different image classes.
calculate_loss(targets, outputs, **kwargs)[source]
calculate_metrics(targets, outputs, **kwargs)[source]
forward(inputs, **kwargs)[source]
freeze_batchnorm_layers()[source]
layers
classmethod new_from_yaml_file(yaml_file)[source]

Construct a new BasicClassifier from a yaml file.

Parameters: yaml_file (file) – a file-like object that yaml contents can be read from.

Example yaml file contents:

---
# Values for MNIST and FashionMNIST
image_height: 28
image_width: 28
image_channels: 1
num_categories: 10

# Remaining values are not related to the dataset
convolutional_layer_attributes: ["out_channels", "stride", "padding", "kernel_size"]
convolutional_layer_values:  [ # ==============  ========  =========  =============
                                [16,             1,        4,         9],
                                [8,              2,        1,         3],
                                [8,              2,        1,         3],
                                [8,              2,        1,         3],
                                [8,              2,        1,         3],
]

fully_connected_layer_attributes: ['out_size', 'dropout']
fully_connected_layer_values:  [ # ======      =========
                                  [100,        0.4],
                                  [100,        0.8],
]
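The paired `*_attributes` / `*_values` lists in the yaml are presumably zipped row by row into the list-of-dicts form that `convolutional_layer_kwargs` and `fully_connected_layer_kwargs` expect. A hedged sketch of that transformation in plain Python (the helper name `rows_to_kwargs` is illustrative, not part of sconce):

```python
def rows_to_kwargs(attributes, values):
    """Combine a list of attribute names and rows of values into the
    list-of-dicts form expected by the layer kwargs parameters."""
    return [dict(zip(attributes, row)) for row in values]

# First two rows of the convolutional example above:
attributes = ["out_channels", "stride", "padding", "kernel_size"]
values = [
    [16, 1, 4, 9],
    [8, 2, 1, 3],
]
layer_kwargs = rows_to_kwargs(attributes, values)
# layer_kwargs[0] == {'out_channels': 16, 'stride': 1, 'padding': 4, 'kernel_size': 9}
```

Keeping the yaml as parallel name/value tables rather than repeating keys on every row makes the per-layer configuration easy to scan as columns.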
classmethod new_from_yaml_filename(yaml_filename)[source]

Construct a new BasicClassifier from a yaml file, specified by filename.

Parameters: yaml_filename (path) – the filename of a yaml file. See new_from_yaml_file() for more details.
unfreeze_batchnorm_layers()[source]

sconce.models.multilayer_perceptron module

class sconce.models.multilayer_perceptron.MultilayerPerceptron(image_height, image_width, image_channels, layer_kwargs, num_categories=10)[source]

Bases: sconce.models.base.Model

A basic 2D image multi-layer perceptron built up of a number of densely connected layers.

Loss:
This model uses cross-entropy for the loss.
Metrics:
classification_accuracy: [0.0, 1.0] the fraction of correctly predicted labels.
Parameters:
  • image_height (int) – image height in pixels.
  • image_width (int) – image width in pixels.
  • image_channels (int) – number of channels in the input images.
  • layer_kwargs (list[dict]) – a list of dictionaries describing layers. See FullyConnectedLayer for details.
  • num_categories (int) – [2, inf) the number of different image classes.
calculate_loss(targets, outputs, **kwargs)[source]
calculate_metrics(targets, outputs, **kwargs)[source]
forward(inputs, **kwargs)[source]
classmethod new_from_yaml_file(yaml_file)[source]

Construct a new MultilayerPerceptron from a yaml file.

Parameters: yaml_file (file) – a file-like object that yaml contents can be read from.

Example yaml file contents:

---
# Values for MNIST and FashionMNIST
image_height: 28
image_width: 28
image_channels: 1
num_categories: 10

layer_attributes: ['out_size', 'dropout', 'with_batchnorm']
layer_values:  [ # ======      =========  ================
                  [100,        0.4,       true],
                  [100,        0.8,       true],
]
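Because the perceptron is built only from fully connected layers, its first layer sees the flattened image, so its in-feature count should be `image_height * image_width * image_channels` (an assumption consistent with the constructor parameters; the helper name below is illustrative). A quick check with the MNIST values from the yaml above:

```python
def flattened_input_size(image_height, image_width, image_channels):
    """Number of input features after flattening one image, as a
    fully connected image model would presumably see it."""
    return image_height * image_width * image_channels

# MNIST / FashionMNIST values from the yaml example above:
size = flattened_input_size(28, 28, 1)  # 784 input features
```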
classmethod new_from_yaml_filename(yaml_filename)[source]

Construct a new MultilayerPerceptron from a yaml file, specified by filename.

Parameters: yaml_filename (path) – the filename of a yaml file. See new_from_yaml_file() for more details.

sconce.models.wide_resnet_image_classifier module

class sconce.models.wide_resnet_image_classifier.WideResnetBlock_3x3(in_channels, out_channels, stride)[source]

Bases: torch.nn.modules.module.Module

forward(x_in)[source]
class sconce.models.wide_resnet_image_classifier.WideResnetGroup_3x3(in_channels, out_channels, stride, num_blocks)[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]
class sconce.models.wide_resnet_image_classifier.WideResnetImageClassifier(image_channels=1, depth=28, widening_factor=10, num_categories=10)[source]

Bases: sconce.models.base.Model

A wide resnet image classifier, based on the paper Wide Residual Networks (Zagoruyko & Komodakis, 2016).

Loss:
This model uses cross-entropy for the loss.
Metrics:
classification_accuracy: [0.0, 1.0] the fraction of correctly predicted labels.
Parameters:
  • image_channels (int) – number of channels in the input images.
  • depth (int) – total number of convolutional layers in the network. This should be of the form 6n + 4, where n is a positive integer.
  • widening_factor (int) – [1, inf) determines how many convolutional channels are in the network (see paper above for details).
  • num_categories (int) – [2, inf) the number of different image classes.
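The depth constraint comes from the wide-resnet layout: each of the three groups holds n blocks of two 3x3 convolutions, plus 4 other layers, giving depth = 6n + 4. A small validity check, assuming that standard relation (the function name is illustrative, not sconce API):

```python
def wide_resnet_num_blocks(depth):
    """Given a total depth of the form 6n + 4, return n (the number of
    blocks per group); raise ValueError for an invalid depth."""
    n, remainder = divmod(depth - 4, 6)
    if remainder != 0 or n < 1:
        raise ValueError(f"depth must equal 6n + 4 for positive n, got {depth}")
    return n

# The default depth of 28 gives n = 4 blocks per group (WRN-28).
blocks_per_group = wide_resnet_num_blocks(28)
```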
calculate_loss(targets, outputs, **kwargs)[source]
calculate_metrics(targets, outputs, **kwargs)[source]
forward(inputs, **kwargs)[source]