sconce package

Submodules

sconce.trainer module

class sconce.trainer.Trainer(*, model, validation_feed=None, training_feed=None, test_data_generator=None, training_data_generator=None, monitor=None)[source]

Bases: object

A Class that is used to train sconce models.

Keyword Arguments:
 
  • model (Model) – the sconce model to be trained. See sconce.models for examples.
  • training_data_generator (DataGenerator) – DEPRECATED, use training_feed argument instead.
  • training_feed (DataFeed) – used during training to provide inputs and targets.
  • test_data_generator (DataGenerator) – DEPRECATED, use validation_feed argument instead.
  • validation_feed (DataFeed) – used during validation to provide inputs and targets. These are never used for back-propagation.
  • monitor (Monitor, optional) – the sconce monitor that records data during training. This data can be sent to external systems during training, or kept until training completes, allowing you to analyze the training run or make plots. If None, a composite monitor consisting of a StdoutMonitor and a DataframeMonitor will be created for you and used.
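
A minimal construction sketch; MyModel, training_feed, and validation_feed are placeholders for the sconce Model and DataFeed objects you would build yourself (see sconce.models for real model examples):

>>> from sconce.trainer import Trainer
>>> model = MyModel()  # a hypothetical sconce Model
>>> trainer = Trainer(model=model,
...                   training_feed=training_feed,      # a DataFeed of training data
...                   validation_feed=validation_feed)  # a DataFeed of held-out data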
checkpoint(filename=None)[source]

Save model state and retain filename for a later call to restore().

Parameters:filename (path, optional) – the filename to save the model state to.
get_num_steps(num_epochs, data_generator=None, feed=None, batch_multiplier=1)[source]
load_model_state(filename)[source]

Restore model state from a file.

Parameters:filename (path) – the filename to where the model’s state was saved.
multi_train(*, num_cycles, cycle_length=1, cycle_multiplier=2.0, **kwargs)[source]

Runs multiple training sessions one after another.

Parameters:
  • num_cycles (int) – [1, inf) the number of cycles to train for.
  • cycle_length (float) – (0.0, inf) the length (in epochs) of the first cycle.
  • cycle_multiplier (float) – (0.0, inf) a factor used to determine the length of a cycle. The length of a cycle is equal to the length of the previous cycle (or cycle_length if it is the first cycle) multiplied by cycle_multiplier.
Keyword Arguments:
 
  • **kwargs – passed to the underlying train() method.
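
For example, with cycle_length=1 and cycle_multiplier=2.0, three cycles run for 1, 2, and 4 epochs respectively. A minimal sketch, assuming a trainer constructed as above:

>>> trainer.multi_train(num_cycles=3, cycle_length=1, cycle_multiplier=2.0)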

num_trainable_parameters

The number of trainable parameters that the model has.

restore()[source]

Restore model to previously checkpointed state. See also checkpoint().
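
A round-trip sketch using checkpoint() and restore() together; the training call in between is illustrative:

>>> trainer.checkpoint()         # snapshot the current model state
>>> trainer.train(num_epochs=1)  # an experiment that may hurt the model
>>> trainer.restore()            # roll the model back to the snapshot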

save_model_state(filename=None)[source]

Save model state to a file.

Parameters:filename (path, optional) – the filename to save the model state to. If None, a system dependent temporary location will be chosen.
Returns:the passed in filename, or the temporary filename chosen if None was passed in.
Return type:filename (path)
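
A sketch of saving and later reloading model state, relying only on the documented return value:

>>> path = trainer.save_model_state()  # no filename: a temporary one is chosen
>>> trainer.load_model_state(path)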
survey_learning_rate(*, num_epochs=1.0, min_learning_rate=1e-12, max_learning_rate=10, monitor=None, batch_multiplier=1, stop_factor=10)[source]

Checkpoints the model, runs a learning rate survey, then restores the model to its checkpointed state.

Keyword Arguments:
 
  • num_epochs (float, optional) – (0.0, inf) the number of epochs to train the model for.
  • min_learning_rate (float, optional) – (0.0, inf) the minimum learning rate used in the survey.
  • max_learning_rate (float, optional) – (0.0, inf) the maximum learning rate used in the survey.
  • monitor (Monitor, optional) – the sconce monitor that records data during the learning rate survey. If None, a composite monitor consisting of a StdoutMonitor and a DataframeMonitor will be created for you and used.
  • batch_multiplier (int, optional) – [1, inf) determines how often parameter updates will occur during training. If greater than 1, this simulates large batch sizes without increasing memory usage. For example, if the batch size were 100 and batch_multiplier=10, the effective batch size would be 1,000, but the memory usage would be that of a batch size of 100.
  • stop_factor (float, optional) – (1.0, inf) determines early stopping. If the training loss rises by more than this factor from its minimum value, the survey will stop.
Returns:the monitor used during this learning rate survey.
Return type:monitor (Monitor)
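
A typical survey sketch; the learning rate bounds here are illustrative, and the returned monitor can be inspected afterwards:

>>> monitor = trainer.survey_learning_rate(min_learning_rate=1e-5,
...                                        max_learning_rate=10)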

test(*, monitor=None)[source]

Run all samples of self.validation_feed through the model in test (inference) mode.

Parameters:monitor (Monitor, optional) – the sconce monitor that records data during this testing. If None, a composite monitor consisting of a StdoutMonitor and a DataframeMonitor will be created for you and used.
Returns:the monitor used during this testing.
Return type:monitor (Monitor)

Note

This method has been deprecated since 1.2.0. Please use the validate() method instead.

train(*, num_epochs, monitor=None, test_to_train_ratio=None, validation_to_train_ratio=None, batch_multiplier=1)[source]

Train the model for a given number of epochs.

Parameters:
  • num_epochs (float) – the number of epochs to train the model for.
  • monitor (Monitor, optional) – a monitor to use for this training session. If None, then self.monitor will be used.
  • test_to_train_ratio (float, optional) – DEPRECATED, use the validation_to_train_ratio argument instead.
  • validation_to_train_ratio (float, optional) – [0.0, 1.0] determines how often (relative to training samples) validation samples are run through the model during training. If None, then the relative size of the training and validation datasets is used. For example, for MNIST with 60,000 training samples and 10,000 validation samples, the value would be 1/6th.
  • batch_multiplier (int, optional) – [1, inf) determines how often parameter updates will occur during training. If greater than 1, this simulates large batch sizes without increasing memory usage. For example, if the batch size were 100 and batch_multiplier=10, the effective batch size would be 1,000, but the memory usage would be that of a batch size of 100.
Returns:the monitor used during training.
Return type:monitor (Monitor)
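
For example, to train for three epochs while simulating a 10x larger batch size (a sketch, assuming a trainer constructed as above):

>>> monitor = trainer.train(num_epochs=3, batch_multiplier=10)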

validate(*, monitor=None)[source]

Run all samples of self.validation_feed through the model in test (inference) mode.

Parameters:monitor (Monitor, optional) – the sconce monitor that records data during this validation. If None, a composite monitor consisting of a StdoutMonitor and a DataframeMonitor will be created for you and used.
Returns:the monitor used during this validation.
Return type:monitor (Monitor)
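
A minimal sketch; validate() takes no required arguments and returns the monitor it used:

>>> monitor = trainer.validate()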

sconce.transforms module

class sconce.transforms.NHot(size)[source]

Bases: object

Converts a list of indices to an n-hot encoded vector.

Parameters:size (int) – the size of the returned array

Example

>>> l = [3, 7, 2, 1]
>>> t = NHot(size=10)
>>> t(l)
array([0., 1., 1., 1., 0., 0., 0., 1., 0., 0.])

sconce.utils module

class sconce.utils.Progbar(target, width=30, verbose=1, interval=0.05, stateful_metrics=None, alpha=0.05)[source]

Bases: object

Displays a progress bar.

Parameters:
  • target (int) – Total number of steps expected, None if unknown.
  • width (int) – Progress bar width on screen.
  • verbose (int) – Verbosity mode: 0 (silent), 1 (verbose), 2 (semi-verbose).
  • stateful_metrics (list of string) – Iterable of string names of metrics that should not be averaged over time. Metrics in this list will be displayed as-is. All others will be averaged by the progbar before display.
  • interval (float) – Minimum visual progress update interval (in seconds).
  • alpha (float) – The coefficient for exponentially weighted moving averages.
add(n, values=None)[source]
update(current, values=None)[source]

Updates the progress bar.

Parameters:
  • current (int) – Index of current step.
  • values (list of tuples) – List of tuples like (name, value_for_last_step).

Note

If name is in stateful_metrics, value_for_last_step will be displayed as-is. Else, an average of the metric over time will be displayed.
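
A usage sketch built only from the documented constructor and update() signature; the loss values are stand-ins:

>>> from sconce.utils import Progbar
>>> bar = Progbar(target=100)
>>> for step in range(100):
...     loss = 1.0 / (step + 1)  # stand-in metric value
...     bar.update(step + 1, values=[('loss', loss)])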

sconce.parameter_group module

class sconce.parameter_group.ParameterGroup(parameters, name, is_active=True)[source]

Bases: sconce.schedules.base.ScheduledMixin

A parameter group is the way that sconce models organize nn.Module parameters and their associated optimizers.

Parameters:
  • parameters (iterable of torch.nn.Parameter) – the parameters you want to group together.
  • name (string) – your name for this group.
  • is_active (bool, optional) – should this group be considered active (used during training)?
freeze()[source]

Set requires_grad = False for all parameters in this group.

set_learning_rate(desired_learning_rate)[source]
set_momentum(desired_momentum)[source]
set_optimizer(optimizer_class, *args, **kwargs)[source]

Set an optimizer on this parameter group. If this parameter group is active (has is_active=True) then this optimizer will be used during training.

Parameters:optimizer_class (one of the torch.optim classes) – the class of optimizer to set.

Note

All other arguments and keyword arguments are delivered to the optimizer_class’s constructor.

set_weight_decay(desired_weight_decay)[source]
unfreeze()[source]

Set requires_grad = True for all parameters in this group.
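
A sketch combining the methods above; the torch.nn.Linear layer is just an illustrative source of parameters:

>>> import torch
>>> from sconce.parameter_group import ParameterGroup
>>> layer = torch.nn.Linear(10, 2)
>>> group = ParameterGroup(list(layer.parameters()), name='head')
>>> group.set_optimizer(torch.optim.SGD, lr=0.1, momentum=0.9)
>>> group.freeze()    # sets requires_grad = False on all parameters
>>> group.unfreeze()  # sets requires_grad = True again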