```python
from tsfast.datasets import create_dls_test
from tsfast.models import SimpleRNN  # import path assumed
from fastai.learner import Learner
from torch import nn

dls = create_dls_test()
model = SimpleRNN(1,1)
```
## Loss Functions

### ignore_nan

`ignore_nan (func)`

Removes NaN values from the tensors before the wrapped function executes. The masking reduces the tensors to flat arrays, so apply it only to shape-agnostic functions such as `mse`.
```python
import numpy as np
import torch

n = 1000
y_t = torch.ones(32,n,6)
y_t[:,20] = np.nan
y_p = torch.ones(32,n,6)*1.1

(~torch.isnan(y_t)).shape
```

```
torch.Size([32, 1000, 6])
```
```python
y_t.shape
```

```
torch.Size([32, 1000, 6])
```
```python
from fastai.metrics import mse
from fastcore.test import test_close

assert torch.isnan(mse(y_p,y_t))
test_close(mse_nan(y_p,y_t), 0.01)
```
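The `mse_nan` tested above is presumably `ignore_nan` applied to `mse`. A minimal sketch of the wrapping idea, assuming a boolean mask over the target's NaN positions (`ignore_nan_sketch` is an illustrative name, not the library's internals):

```python
import torch

def ignore_nan_sketch(func):
    "Wrap `func` so that NaN positions in the target are dropped first."
    def wrapper(inp, targ):
        mask = ~torch.isnan(targ)           # True wherever the target is valid
        return func(inp[mask], targ[mask])  # boolean indexing flattens both tensors
    return wrapper

mse_nan_sketch = ignore_nan_sketch(lambda i, t: ((i - t) ** 2).mean())
```

Since the masked tensors come out flat, only reductions that ignore shape, such as `mse`, remain meaningful after wrapping.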
### float64_func

`float64_func (func)`

Calculates the function internally in float64 and converts the result back.
```python
Learner(dls,model,loss_func=float64_func(nn.MSELoss())).fit(1)
```

| epoch | train_loss | valid_loss | time |
|---|---|---|---|
| 0 | 0.057472 | 0.059530 | 00:01 |
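A minimal sketch of the idea, assuming the wrapper simply casts both arguments up and the result back down (illustrative, not the library's code):

```python
import torch

def float64_func_sketch(func):
    "Evaluate `func` in float64 and cast the result back to the input dtype."
    def wrapper(inp, targ):
        res = func(inp.double(), targ.double())  # compute in higher precision
        return res.to(inp.dtype)                 # convert back for the training loop
    return wrapper
```

This helps when a loss accumulates many small terms and float32 rounding becomes visible.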
### SkipNLoss

`SkipNLoss (fn, n_skip=0)`

Loss-function modifier that skips the first `n_skip` samples of sequential data.
```python
Learner(dls,model,loss_func=SkipNLoss(nn.MSELoss(),n_skip=30)).fit(1)
```

| epoch | train_loss | valid_loss | time |
|---|---|---|---|
| 0 | 0.051443 | 0.050879 | 00:00 |
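Assuming batch-first sequences of shape `[batch, seq_len, features]`, as produced by the DataLoaders above, the modifier plausibly amounts to slicing before the wrapped loss runs:

```python
def skip_n_loss_sketch(fn, n_skip=0):
    "Apply `fn` after dropping the first `n_skip` time steps of each sequence."
    def wrapper(inp, targ):
        return fn(inp[:, n_skip:], targ[:, n_skip:])
    return wrapper
```

Skipping the first samples lets a recurrent model wash out its arbitrary initial hidden state before being scored.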
### CutLoss

`CutLoss (fn, l_cut=0, r_cut=None)`

Loss-function modifier that cuts sequential data to the window between `l_cut` and `r_cut` before the loss is computed.
```python
Learner(dls,model,loss_func=CutLoss(nn.MSELoss(),l_cut=30)).fit(1)
```

| epoch | train_loss | valid_loss | time |
|---|---|---|---|
| 0 | 0.028662 | 0.018038 | 00:00 |
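Under the same batch-first assumption, `CutLoss` plausibly generalizes the slice to both ends of the sequence:

```python
def cut_loss_sketch(fn, l_cut=0, r_cut=None):
    "Apply `fn` only to the time steps between `l_cut` and `r_cut`."
    def wrapper(inp, targ):
        return fn(inp[:, l_cut:r_cut], targ[:, l_cut:r_cut])
    return wrapper
```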
### weighted_mae

`weighted_mae (input, target)`
```python
Learner(dls,model,loss_func=SkipNLoss(weighted_mae,n_skip=30)).fit(1)
```

| epoch | train_loss | valid_loss | time |
|---|---|---|---|
| 0 | 0.093035 | 0.078785 | 00:00 |
### RandSeqLenLoss

`RandSeqLenLoss (fn, min_idx=1, max_idx=None, mid_idx=None)`

Loss-function modifier that randomly truncates the sequence length of every sequence in the minibatch individually. Currently slow for very large batch sizes.
```python
Learner(dls,model,loss_func=RandSeqLenLoss(nn.MSELoss())).fit(1)
```

| epoch | train_loss | valid_loss | time |
|---|---|---|---|
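A sketch of how the per-sequence random truncation could work; the Python loop over the batch also suggests why it is slow for large batch sizes (`mid_idx` is omitted here since its role is not documented):

```python
import torch

def rand_seq_len_loss_sketch(fn, min_idx=1, max_idx=None):
    "Score every sequence of the minibatch on an individually drawn prefix length."
    def wrapper(inp, targ):
        hi = max_idx if max_idx is not None else targ.shape[1]
        losses = []
        for i in range(targ.shape[0]):  # per-sequence loop, O(batch) calls to `fn`
            end = int(torch.randint(min_idx, hi + 1, (1,)))
            losses.append(fn(inp[i:i+1, :end], targ[i:i+1, :end]))
        return torch.stack(losses).mean()
    return wrapper
```

Training on randomly shortened sequences exposes the model to many different effective horizon lengths.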
### fun_rmse

`fun_rmse (inp, targ)`

RMSE loss function defined as a plain function rather than an AccumMetric.
```python
Learner(dls,model,loss_func=nn.MSELoss(),metrics=SkipNLoss(fun_rmse,n_skip=30)).fit(1)
```

| epoch | train_loss | valid_loss | fun_rmse | time |
|---|---|---|---|---|
| 0 | 0.012053 | 0.011027 | 0.055338 | 00:00 |
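fastai's built-in `rmse` is an `AccumMetric`, which the modifiers above cannot wrap; a plain-function RMSE plausibly reduces to:

```python
import torch
import torch.nn.functional as F

def fun_rmse_sketch(inp, targ):
    "RMSE as a plain function, so modifiers like SkipNLoss can wrap it."
    return torch.sqrt(F.mse_loss(inp, targ))
```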
### cos_sim_loss

`cos_sim_loss (inp, targ)`

Loss function based on the cosine similarity between `inp` and `targ`.
```python
Learner(dls,model,loss_func=cos_sim_loss,metrics=SkipNLoss(fun_rmse,n_skip=30)).fit(1)
```

| epoch | train_loss | valid_loss | fun_rmse | time |
|---|---|---|---|---|
| 0 | 0.238007 | 0.257800 | 0.055498 | 00:01 |
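A plausible reading of the name, assuming similarity is taken along the time axis and converted to a loss as one minus the mean similarity (both the axis and the reduction are assumptions):

```python
import torch.nn.functional as F

def cos_sim_loss_sketch(inp, targ):
    "One minus the mean cosine similarity between prediction and target."
    return 1 - F.cosine_similarity(inp, targ, dim=1).mean()
```

A cosine-based loss scores the shape of the trajectory rather than its amplitude, which fits the much larger `valid_loss` but comparable `fun_rmse` in the run above.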
### cos_sim_loss_pow

`cos_sim_loss_pow (inp, targ)`

Variant of `cos_sim_loss` with the similarity term raised to a higher power.
```python
Learner(dls,model,loss_func=cos_sim_loss_pow,metrics=SkipNLoss(fun_rmse,n_skip=30)).fit(1)
```

| epoch | train_loss | valid_loss | fun_rmse | time |
|---|---|---|---|---|
| 0 | 0.508688 | 0.569000 | 0.062938 | 00:00 |
### nrmse

`nrmse (inp, targ)`

RMSE loss function scaled by the variance of each target variable.
```python
dls.one_batch()[0].shape
```

```
torch.Size([64, 100, 1])
```
```python
Learner(dls,model,loss_func=nn.MSELoss(),metrics=SkipNLoss(nrmse,n_skip=30)).fit(1)
```

| epoch | train_loss | valid_loss | nrmse | time |
|---|---|---|---|---|
| 0 | 0.010482 | 0.010386 | 0.199243 | 00:00 |
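Following the description, a sketch that computes the RMSE per target channel and scales it by that channel's variance, assuming batch-first data and a final average over channels:

```python
import torch

def nrmse_sketch(inp, targ):
    "RMSE per target variable, scaled by that variable's variance."
    rmse = torch.sqrt(((inp - targ) ** 2).mean(dim=(0, 1)))  # one value per channel
    return (rmse / targ.var(dim=(0, 1))).mean()
```

`nrmse_std` below presumably differs only in dividing by `targ.std(dim=(0, 1))` instead of the variance.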
### nrmse_std

`nrmse_std (inp, targ)`

RMSE loss function scaled by the standard deviation of each target variable.
```python
Learner(dls,model,loss_func=nn.MSELoss(),metrics=SkipNLoss(nrmse_std,n_skip=30)).fit(1)
```

| epoch | train_loss | valid_loss | nrmse_std | time |
|---|---|---|---|---|
| 0 | 0.010121 | 0.010095 | 0.088533 | 00:00 |
### mean_vaf

`mean_vaf (inp, targ)`

Mean Variance Accounted For (VAF) over the target variables.
```python
Learner(dls,model,loss_func=nn.MSELoss(),metrics=SkipNLoss(mean_vaf,n_skip=30)).fit(1)
```

| epoch | train_loss | valid_loss | mean_vaf | time |
|---|---|---|---|---|
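Variance Accounted For is a common system-identification score; assuming the usual definition, averaged over the target variables:

```python
import torch

def mean_vaf_sketch(inp, targ):
    "Mean VAF in percent: 100 * (1 - Var(targ - inp) / Var(targ)) per channel."
    vaf = 1 - (targ - inp).var(dim=(0, 1)) / targ.var(dim=(0, 1))
    return 100 * vaf.mean()
```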