(!) Max diff 0.026240
MyModule[__main__](float32_0<1,3,256,256>) -> float32_8<1,8,128,128>
│ Conv2d[torch.nn.modules.conv](float32_0<1,3,256,256>) -> float32_3<1,16,128,128>
│ └· conv2d[torch.nn.functional](float32_0<1,3,256,256>, float32_1<16,3,3,3>, float32_2<16>, (2, 2), (1, 1), (1, 1), 1) -> float32_3<1,16,128,128>
│ (!) Max diff 0.038028
│ Hardsigmoid[torch.nn.modules.activation](float32_3<1,16,128,128>) -> float32_4<1,16,128,128>
│ │ (!) Max diff 0.038028
│ └· hardsigmoid[torch.nn.functional](float32_3<1,16,128,128>, False) -> float32_4<1,16,128,128>
│ __getitem__[torch.Tensor](float32_4<1,16,128,128>, (:, ::2)) -> float32_5<1,8,128,128>
│ __getitem__[torch.Tensor](float32_4<1,16,128,128>, (:, 1::2)) -> float32_6<1,8,128,128>
│ __mul__[torch.Tensor](float32_5<1,8,128,128>, float32_6<1,8,128,128>) -> float32_7<1,8,128,128>
└· __rsub__[torch.Tensor](float32_7<1,8,128,128>, 1) -> float32_8<1,8,128,128>
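
The trace above can be read back into a module sketch: a strided 3→16 conv, a hardsigmoid, an even/odd channel split, an elementwise product, and a final `1 - x`. The class name `MyModule` and the printed call arguments (stride `(2, 2)`, padding `(1, 1)`, dilation `(1, 1)`, groups `1`) come straight from the trace; everything else (attribute names, weight initialization) is a guess, not recovered source code.

```python
# Hypothetical reconstruction of the traced module. Hyperparameters are
# taken from the printed conv2d arguments; attribute names are assumptions.
import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        # conv2d(weight<16,3,3,3>, bias<16>, (2, 2), (1, 1), (1, 1), 1):
        # 3 -> 16 channels, 256x256 -> 128x128 spatial (stride 2, padding 1)
        self.conv = nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1)
        # hardsigmoid(..., False) in the trace means inplace=False
        self.act = nn.Hardsigmoid()

    def forward(self, x):
        x = self.act(self.conv(x))  # float32_4 <1,16,128,128>
        a = x[:, ::2]               # even channels -> <1,8,128,128>
        b = x[:, 1::2]              # odd channels  -> <1,8,128,128>
        return 1 - a * b            # __mul__ then __rsub__(..., 1)

m = MyModule()
out = m(torch.randn(1, 3, 256, 256))
print(out.shape)  # torch.Size([1, 8, 128, 128])
```

Note that hardsigmoid outputs lie in [0, 1], so the product `a * b` can shrink the per-node max diff, which is consistent with the top-level diff (0.026240) being smaller than the hardsigmoid diff (0.038028).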