power_cogs.base package

Submodules

power_cogs.base.base_torch_dataset module

class power_cogs.base.base_torch_dataset.BaseTorchDataset[source]

Bases: torch.utils.data.dataset.Dataset

sample(batch_size: int) → Dict[str, Any][source]

Randomly sample from the dataset.

Args:
    batch_size (int): batch size to sample

Returns:
    typing.Dict[str, typing.Any]: dict of outputs, e.g. {"data": subset, "targets": targets}
to_device(device: torch.device) → None[source]
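
A minimal sketch of a concrete subclass, assuming only the sample/to_device interface documented above; the RandomTensorDataset name and its tensor attributes are illustrative, not part of the library:

    import torch
    from typing import Any, Dict

    from power_cogs.base.base_torch_dataset import BaseTorchDataset


    class RandomTensorDataset(BaseTorchDataset):
        """Illustrative dataset serving random (data, target) pairs."""

        def __init__(self, size: int = 1000, dim: int = 8):
            self.data = torch.randn(size, dim)
            self.targets = torch.randint(0, 2, (size,))

        def __len__(self) -> int:
            return self.data.shape[0]

        def __getitem__(self, idx: int):
            return self.data[idx], self.targets[idx]

        def sample(self, batch_size: int) -> Dict[str, Any]:
            # Draw a random subset, matching the documented return shape.
            idx = torch.randint(0, len(self), (batch_size,))
            return {"data": self.data[idx], "targets": self.targets[idx]}

        def to_device(self, device: torch.device) -> None:
            # Move the underlying tensors onto the requested device.
            self.data = self.data.to(device)
            self.targets = self.targets.to(device)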

power_cogs.base.base_torch_model module

class power_cogs.base.base_torch_model.BaseTorchModel[source]

Bases: torch.nn.modules.module.Module

summary(input_shape)[source]
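
A hedged usage sketch; TinyMLP is a hypothetical subclass, and the assumption that summary() takes the per-sample input shape (without the batch dimension) is not confirmed by the source:

    import torch.nn as nn

    from power_cogs.base.base_torch_model import BaseTorchModel


    class TinyMLP(BaseTorchModel):
        """Illustrative two-layer MLP built on BaseTorchModel."""

        def __init__(self, in_dim: int = 8, hidden: int = 32, out_dim: int = 2):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, hidden),
                nn.ReLU(),
                nn.Linear(hidden, out_dim),
            )

        def forward(self, x):
            return self.net(x)


    model = TinyMLP()
    # Assumed call pattern: per-sample input shape, no batch dimension.
    model.summary((8,))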

power_cogs.base.base_torch_trainer module

class power_cogs.base.base_torch_trainer.BaseTorchTrainer(name: Optional[str] = None, pretrained_path: Optional[str] = None, visualize_output: bool = False, use_cuda: bool = False, device_id: int = 0, early_stoppage: bool = False, loss_threshold: float = -inf, batch_size: int = 32, epochs: int = 100, checkpoint_interval: int = 100, num_samples: Optional[int] = None, model_config: Dict[str, Any] = {}, dataset_config: Dict[str, Any] = {}, optimizer_config: Dict[str, Any] = {}, scheduler_config: Dict[str, Any] = {}, logging_config: Dict[str, Any] = {}, dataloader_config: Dict[str, Any] = {}, tune_config: Dict[str, Any] = {}, config: Dict[str, Any] = {})[source]

Bases: object

classmethod from_config(config_path: Optional[str] = None, config: Dict[str, Any] = {}) → power_cogs.base.base_torch_trainer.BaseTorchTrainer[source]
load(pretrained_path)[source]
load_model(checkpoint_path: str, load_optimizer_and_scheduler: bool = True)[source]
log_epoch(train_metrics, epoch)[source]
make_checkpoint(path: str)[source]
post_dataset_setup()[source]
post_model_setup()[source]
post_optimizer_setup()[source]
post_setup()[source]
post_train()[source]
post_train_iter(train_output: Dict[Any, Any])[source]
pre_dataset_setup()[source]
pre_model_setup()[source]
pre_optimizer_setup()[source]
pre_train()[source]
pre_train_iter()[source]
save(base_path: Optional[str] = None, step: Optional[int] = None, path_name: Optional[str] = None) → str[source]
save_config(path) → str[source]
setup()[source]
setup_dataloader()[source]
setup_dataset()[source]
setup_device()[source]
setup_logging_and_checkpoints()[source]
setup_model()[source]
setup_optimizer()[source]
setup_trainer()[source]
to_device(device)[source]
train(batch_size=None, epochs=None, checkpoint_interval=None, visualize=None) → Dict[str, Any][source]

Main training function; should call train_iter.

train_iter(batch_size: int, iteration: Optional[int] = None) → Dict[Any, Any][source]

Training iteration; specify the learning process here.

tune(tune_config: Dict[str, Any] = {}, trainer_overrides: Dict[str, Any] = {})[source]
visualize(*args, **kwargs)[source]

Visualize output
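
A hedged sketch of how a subclass might define its learning step and be driven from a config. MyTrainer, the config keys passed to from_config, and the self.dataset / self.model / self.optimizer attributes (presumed to be created by the setup_* hooks) are assumptions for illustration, not confirmed API:

    from typing import Any, Dict, Optional

    import torch

    from power_cogs.base.base_torch_trainer import BaseTorchTrainer


    class MyTrainer(BaseTorchTrainer):
        """Hypothetical trainer that defines one optimization step."""

        def train_iter(self, batch_size: int, iteration: Optional[int] = None) -> Dict[Any, Any]:
            # self.dataset / self.model / self.optimizer are assumed to be
            # populated by setup_dataset(), setup_model(), setup_optimizer().
            batch = self.dataset.sample(batch_size)
            preds = self.model(batch["data"])
            loss = torch.nn.functional.cross_entropy(preds, batch["targets"])

            self.optimizer.zero_grad()
            loss.backward()
            self.optimizer.step()
            return {"loss": loss.item()}


    # Construct from a config dict (keys shown here are illustrative).
    trainer = MyTrainer.from_config(config={"batch_size": 64, "epochs": 10})
    metrics = trainer.train()  # repeatedly calls train_iter() and returns a metrics dict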

Module contents

class power_cogs.base.BaseTorchDataset[source]

Bases: torch.utils.data.dataset.Dataset

sample(batch_size: int) → Dict[str, Any][source]

Randomly sample from the dataset.

Args:
    batch_size (int): batch size to sample

Returns:
    typing.Dict[str, typing.Any]: dict of outputs, e.g. {"data": subset, "targets": targets}
to_device(device: torch.device) → None[source]
class power_cogs.base.BaseTorchModel[source]

Bases: torch.nn.modules.module.Module

summary(input_shape)[source]
class power_cogs.base.BaseTorchTrainer(name: Optional[str] = None, pretrained_path: Optional[str] = None, visualize_output: bool = False, use_cuda: bool = False, device_id: int = 0, early_stoppage: bool = False, loss_threshold: float = -inf, batch_size: int = 32, epochs: int = 100, checkpoint_interval: int = 100, num_samples: Optional[int] = None, model_config: Dict[str, Any] = {}, dataset_config: Dict[str, Any] = {}, optimizer_config: Dict[str, Any] = {}, scheduler_config: Dict[str, Any] = {}, logging_config: Dict[str, Any] = {}, dataloader_config: Dict[str, Any] = {}, tune_config: Dict[str, Any] = {}, config: Dict[str, Any] = {})[source]

Bases: object

classmethod from_config(config_path: Optional[str] = None, config: Dict[str, Any] = {}) → power_cogs.base.base_torch_trainer.BaseTorchTrainer[source]
load(pretrained_path)[source]
load_model(checkpoint_path: str, load_optimizer_and_scheduler: bool = True)[source]
log_epoch(train_metrics, epoch)[source]
make_checkpoint(path: str)[source]
post_dataset_setup()[source]
post_model_setup()[source]
post_optimizer_setup()[source]
post_setup()[source]
post_train()[source]
post_train_iter(train_output: Dict[Any, Any])[source]
pre_dataset_setup()[source]
pre_model_setup()[source]
pre_optimizer_setup()[source]
pre_train()[source]
pre_train_iter()[source]
save(base_path: Optional[str] = None, step: Optional[int] = None, path_name: Optional[str] = None) → str[source]
save_config(path) → str[source]
setup()[source]
setup_dataloader()[source]
setup_dataset()[source]
setup_device()[source]
setup_logging_and_checkpoints()[source]
setup_model()[source]
setup_optimizer()[source]
setup_trainer()[source]
to_device(device)[source]
train(batch_size=None, epochs=None, checkpoint_interval=None, visualize=None) → Dict[str, Any][source]

Main training function; should call train_iter.

train_iter(batch_size: int, iteration: Optional[int] = None) → Dict[Any, Any][source]

Training iteration; specify the learning process here.

tune(tune_config: Dict[str, Any] = {}, trainer_overrides: Dict[str, Any] = {})[source]
visualize(*args, **kwargs)[source]

Visualize output