Learner

PyTorch modules for training models on sequential data.

from torch import nn
from tsfast.basics import *   # SimpleRNN, Learner, loss functions (module path assumed)
from tsfast.datasets import create_dls_test

dls = create_dls_test()
model = SimpleRNN(1,1)

Loss Functions


source

ignore_nan

 ignore_nan (func)

Removes NaN values from the tensors before the wrapped function executes. The tensors are reduced to flat arrays, so it should only be applied to element-wise reduction functions such as mse.

import torch, numpy as np
from fastcore.test import test_close

n = 1000
y_t = torch.ones(32,n,6)
y_t[:,20] = np.nan              # corrupt one time step with NaNs
y_p = torch.ones(32,n,6)*1.1    # constant offset of 0.1 -> MSE of 0.01
(~torch.isnan(y_t)).shape
torch.Size([32, 1000, 6])
y_t.shape
torch.Size([32, 1000, 6])
assert torch.isnan(mse(y_p,y_t))     # plain mse is poisoned by the NaNs
test_close(mse_nan(y_p,y_t),0.01)    # mse_nan masks them out
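
The decorator can be sketched in a few lines. The following is a hypothetical re-implementation (tsfast's actual source may differ), masking every position where the target is NaN:

import torch

def ignore_nan_sketch(func):
    def _inner(inp, targ):
        mask = ~torch.isnan(targ)           # True wherever the target is valid
        return func(inp[mask], targ[mask])  # boolean indexing flattens both tensors
    return _inner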

source

float64_func

 float64_func (func)

Computes the wrapped function internally in float64 and casts the result back to the original dtype, avoiding precision issues in the loss computation.

Learner(dls,model,loss_func=float64_func(nn.MSELoss())).fit(1)
epoch train_loss valid_loss time
0 0.057472 0.059530 00:01
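
A minimal sketch of such a wrapper, assuming a two-argument loss function (the real implementation may handle arbitrary arguments):

def float64_func_sketch(func):
    def _inner(inp, targ):
        res = func(inp.double(), targ.double())  # compute in float64
        return res.to(inp.dtype)                 # cast the result back
    return _inner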

source

SkipNLoss

 SkipNLoss (fn, n_skip=0)

Loss-function modifier that skips the first n_skip samples of every sequence.

Learner(dls,model,loss_func=SkipNLoss(nn.MSELoss(),n_skip=30)).fit(1)
epoch train_loss valid_loss time
0 0.051443 0.050879 00:00
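
Conceptually this is a slice along the time axis. A sketch, assuming batch-first data of shape (batch, seq_len, features):

def skip_n_loss_sketch(fn, n_skip=0):
    def _inner(inp, targ):
        return fn(inp[:, n_skip:], targ[:, n_skip:])  # drop the warm-up samples
    return _inner

Skipping the first samples is useful for recurrent models, whose early predictions are unreliable until the hidden state has washed out its initial value.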

source

CutLoss

 CutLoss (fn, l_cut=0, r_cut=None)

Loss-function modifier that evaluates the loss only on the slice [l_cut:r_cut] of every sequence.

Learner(dls,model,loss_func=CutLoss(nn.MSELoss(),l_cut=30)).fit(1)
epoch train_loss valid_loss time
0 0.028662 0.018038 00:00
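
A sketch under the same batch-first assumption; SkipNLoss corresponds to the special case r_cut=None:

def cut_loss_sketch(fn, l_cut=0, r_cut=None):
    def _inner(inp, targ):
        return fn(inp[:, l_cut:r_cut], targ[:, l_cut:r_cut])  # keep only the chosen window
    return _inner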

source

weighted_mae

 weighted_mae (input, target)

Learner(dls,model,loss_func=SkipNLoss(weighted_mae,n_skip=30)).fit(1)
epoch train_loss valid_loss time
0 0.093035 0.078785 00:00

source

RandSeqLenLoss

 RandSeqLenLoss (fn, min_idx=1, max_idx=None, mid_idx=None)

Loss-function modifier that randomly truncates every sequence in the minibatch to an individual length. Currently slow for very large batch sizes.

Learner(dls,model,loss_func=RandSeqLenLoss(nn.MSELoss())).fit(1)
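A hypothetical re-implementation that shows the idea, and why the per-sample Python loop makes it slow for large batches (tsfast's version may differ, e.g. in how mid_idx shapes the length distribution):

import torch

def rand_seq_len_loss_sketch(fn, min_idx=1, max_idx=None):
    def _inner(inp, targ):
        hi = max_idx if max_idx is not None else targ.shape[1]
        losses = []
        for b in range(targ.shape[0]):  # one random cut per sample -> Python loop
            cut = int(torch.randint(min_idx, hi, (1,)))
            losses.append(fn(inp[b,:cut], targ[b,:cut]))
        return torch.stack(losses).mean()
    return _inner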

source

fun_rmse

 fun_rmse (inp, targ)

RMSE loss defined as a plain function rather than an AccumMetric, so it can be used directly as a loss function or wrapped by the modifiers above.

Learner(dls,model,loss_func=nn.MSELoss(),metrics=SkipNLoss(fun_rmse,n_skip=30)).fit(1)
epoch train_loss valid_loss fun_rmse time
0 0.012053 0.011027 0.055338 00:00
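
A sketch of such a function:

import torch
import torch.nn.functional as F

def fun_rmse_sketch(inp, targ):
    return torch.sqrt(F.mse_loss(inp, targ))  # square root of the mean squared error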

source

cos_sim_loss

 cos_sim_loss (inp, targ)

Loss based on the cosine similarity between predicted and target sequences.

Learner(dls,model,loss_func=cos_sim_loss,metrics=SkipNLoss(fun_rmse,n_skip=30)).fit(1)
epoch train_loss valid_loss fun_rmse time
0 0.238007 0.257800 0.055498 00:01
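
A plausible formulation is one minus the mean cosine similarity along the time axis, so that identical sequences yield zero loss; the exact definition in tsfast may differ:

import torch.nn.functional as F

def cos_sim_loss_sketch(inp, targ):
    return (1 - F.cosine_similarity(inp, targ, dim=1)).mean()  # dim=1 is the time axis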

source

cos_sim_loss_pow

 cos_sim_loss_pow (inp, targ)

Variant of cos_sim_loss with the similarity term raised to a power.

Learner(dls,model,loss_func=cos_sim_loss_pow,metrics=SkipNLoss(fun_rmse,n_skip=30)).fit(1)
epoch train_loss valid_loss fun_rmse time
0 0.508688 0.569000 0.062938 00:00

source

nrmse

 nrmse (inp, targ)

RMSE loss scaled by the variance of each target variable.

dls.one_batch()[0].shape  # batch layout: (batch, seq_len, features)
torch.Size([64, 100, 1])
Learner(dls,model,loss_func=nn.MSELoss(),metrics=SkipNLoss(nrmse,n_skip=30)).fit(1)
epoch train_loss valid_loss nrmse time
0 0.010482 0.010386 0.199243 00:00
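
A sketch matching this description, normalising a per-channel RMSE by the per-channel target variance (batch-first layout assumed; tsfast's implementation may differ in detail):

import torch

def nrmse_sketch(inp, targ):
    rmse = torch.sqrt(((inp - targ)**2).mean(dim=(0,1)))  # RMSE per target channel
    return (rmse / targ.var(dim=(0,1))).mean()            # normalise by channel variance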

source

nrmse_std

 nrmse_std (inp, targ)

RMSE loss scaled by the standard deviation of each target variable.

Learner(dls,model,loss_func=nn.MSELoss(),metrics=SkipNLoss(nrmse_std,n_skip=30)).fit(1)
epoch train_loss valid_loss nrmse_std time
0 0.010121 0.010095 0.088533 00:00
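
Relative to the nrmse sketch above, only the denominator changes:

def nrmse_std_sketch(inp, targ):
    rmse = torch.sqrt(((inp - targ)**2).mean(dim=(0,1)))
    return (rmse / targ.std(dim=(0,1))).mean()  # std instead of variance -> dimensionless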

source

mean_vaf

 mean_vaf (inp, targ)

Mean variance accounted for (VAF) over the target variables, a metric commonly used in system identification.

Learner(dls,model,loss_func=nn.MSELoss(),metrics=SkipNLoss(mean_vaf,n_skip=30)).fit(1)
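A sketch of the standard VAF definition in percent, averaged over the target channels (tsfast's exact scaling may differ):

import torch

def mean_vaf_sketch(inp, targ):
    vaf = 1 - (targ - inp).var(dim=(0,1)) / targ.var(dim=(0,1))  # 1 for a perfect fit
    return (100 * vaf).mean()                                    # percent, averaged over channels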