Description
Environment:
- RTX 3090
- torch 1.7.0+cu110
- torchvision 0.8.1
- Python 3.8
File:
```python
import torch
# `models`, `args`, and `proc_nodes_module` are defined elsewhere in the script.

if __name__ == '__main__':
    # main()
    img_l = torch.randn(1, 3, 512, 256).cuda()
    img_r = torch.randn(1, 3, 512, 256).cuda()
    f = r"../results/pretrained_anynet/checkpoint.tar"
    checkpoint = torch.load(f)
    checkpoint['state_dict'] = proc_nodes_module(checkpoint, 'state_dict')
    model = models.anynet.AnyNet(args).cuda()
    model.load_state_dict(checkpoint['state_dict'])
    model.eval()
    torch.onnx.export(model, (img_l, img_r), "anynet.onnx", verbose=False,
                      input_names=["img_l", "img_r"],
                      output_names=["stage1", "stage2", "stage3"],
                      opset_version=11)
```
Problem:
```
RuntimeError: Unsupported: ONNX export of Pad in opset 9. The sizes of the padding must be constant. Please try opset version 11.
```
I have already set `opset_version=11`, but the error still complains about opset 9. Does anybody know how to solve this problem? Any help would be appreciated.
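For context while debugging: this message is often reported when the sizes passed to `F.pad` (or a padding layer) are computed from tensor shapes at trace time, so the exporter sees dynamic rather than constant pad sizes. A possible workaround, assuming that is the cause here, is to compute the pad amounts as plain Python ints before calling `F.pad`. A minimal sketch (the helper below is hypothetical, not part of AnyNet):

```python
def pad_amounts(h, w, multiple=16):
    """Hypothetical helper: round spatial dims H and W up to the nearest
    multiple and return the number of extra rows/cols to pad as plain
    Python ints, so the ONNX exporter records constant pad sizes."""
    pad_h = (multiple - h % multiple) % multiple
    pad_w = (multiple - w % multiple) % multiple
    return int(pad_h), int(pad_w)

# Inside the model one would then pad with constants, e.g.:
#   pad_h, pad_w = pad_amounts(int(x.shape[2]), int(x.shape[3]))
#   x = F.pad(x, (0, pad_w, 0, pad_h))
# rather than deriving the pad sizes from tensor operations.
```

With a fixed 512x256 input this yields `(0, 0)` for a multiple of 16, so the padding is a no-op and constant for the exporter.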