Legend:
Green — conversion successful
Yellow — conversion imprecise
Red — conversion failed
Red — no converter found
Bold — conversion applied directly
* — subgraph reused
Tensor — this output is not dependent on any of the subgraph's input tensors
Tensor — this input is a parameter / constant
Tensor — this tensor is useless
MyModule[__main__](float32_0<1,3,256,256>) -> float32_8<1,8,128,128>
├· Conv2d[torch.nn.modules.conv](float32_0<1,3,256,256>) -> float32_3<1,16,128,128>
│  └· conv2d[torch.nn.functional](float32_0<1,3,256,256>, float32_1<16,3,3,3>, float32_2<16>, (2, 2), (1, 1), (1, 1), 1) -> float32_3<1,16,128,128>
├· Hardsigmoid[torch.nn.modules.activation](float32_3<1,16,128,128>) -> float32_4<1,16,128,128>
│  └· hardsigmoid[torch.nn.functional](float32_3<1,16,128,128>, False) -> float32_4<1,16,128,128>
├· __getitem__[torch.Tensor](float32_4<1,16,128,128>, (:, ::2)) -> float32_5<1,8,128,128>
├· __getitem__[torch.Tensor](float32_4<1,16,128,128>, (:, 1::2)) -> float32_6<1,8,128,128>
├· __mul__[torch.Tensor](float32_5<1,8,128,128>, float32_6<1,8,128,128>) -> float32_7<1,8,128,128>
└· __rsub__[torch.Tensor](float32_7<1,8,128,128>, 1) -> float32_8<1,8,128,128>
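For reference, here is a minimal PyTorch sketch of a module that would produce the trace above. It is reconstructed purely from the recorded shapes and arguments; the `MyModule` name appears in the trace, but the attribute names and hyperparameters are inferred, not taken from real source:

```python
import torch
import torch.nn as nn


class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        # Weight <16,3,3,3> and bias <16> imply 3 input / 16 output channels;
        # the trailing conv2d args are stride (2, 2), padding (1, 1),
        # dilation (1, 1), groups 1.
        self.conv = nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1)
        self.act = nn.Hardsigmoid()  # traced as hardsigmoid(x, False)

    def forward(self, x):           # float32_0<1,3,256,256>
        x = self.act(self.conv(x))  # float32_4<1,16,128,128>
        even = x[:, ::2]            # __getitem__ (:, ::2)  -> <1,8,128,128>
        odd = x[:, 1::2]            # __getitem__ (:, 1::2) -> <1,8,128,128>
        return 1 - even * odd       # __mul__ then __rsub__ -> <1,8,128,128>


out = MyModule()(torch.randn(1, 3, 256, 256))
assert out.shape == (1, 8, 128, 128)
```

Each traced line maps onto one operation: `nn.Conv2d` dispatches to `torch.nn.functional.conv2d` with its weight, bias, and `(stride, padding, dilation, groups)` arguments, channel slicing is recorded as `__getitem__`, and `1 - t` is recorded as `t.__rsub__(1)`.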