11. Appendix.01: Reference for converting models to ONNX format
This chapter provides a reference for converting PyTorch, TensorFlow, and PaddlePaddle models to ONNX format. You can also refer to the model conversion tutorials provided by the official ONNX repository: https://github.com/onnx/tutorials. All operations in this chapter are carried out in the Docker container; for the environment configuration, refer to Chapter 2.
11.1. PyTorch model to ONNX
This section uses a simple self-built PyTorch model as an example of ONNX conversion.
11.1.1. Step 0: Create a working directory
Create and enter the torch_model directory using the command line.
$ mkdir torch_model
$ cd torch_model
11.1.2. Step 1: Build and save the model
Create a script named simple_net.py in this directory and run it. The content of the script is as follows:
#!/usr/bin/env python3
import torch

# Build a simple nn model
class SimpleModel(torch.nn.Module):

    def __init__(self):
        super(SimpleModel, self).__init__()
        self.m1 = torch.nn.Conv2d(3, 8, 3, 1, 0)
        self.m2 = torch.nn.Conv2d(8, 8, 3, 1, 1)

    def forward(self, x):
        y0 = self.m1(x)
        y1 = self.m2(y0)
        y2 = y0 + y1
        return y2

# Create a SimpleModel and save its weight in the current directory
model = SimpleModel()
torch.save(model.state_dict(), "weight.pth")
Run the script with the following command:
$ python simple_net.py
After running the script, we get a weight.pth weight file in the current directory.
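Optionally, you can confirm the weights were saved correctly by loading them back and printing each parameter's shape. This check is not part of the original flow, just a minimal sketch:

#!/usr/bin/env python3
import torch

# Load the saved state dict and list its parameters
state_dict = torch.load("weight.pth", map_location="cpu")
for name, tensor in state_dict.items():
    print(name, tuple(tensor.shape))

For SimpleModel this should list the two Conv2d weights and biases, e.g. m1.weight with shape (8, 3, 3, 3).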
11.1.3. Step 2: Export ONNX model
Create another script named export_onnx.py in the same directory and run it. The content of the script is as follows:
#!/usr/bin/env python3
import torch
from simple_net import SimpleModel

# Load the pretrained model and export it as onnx
model = SimpleModel()
model.eval()
checkpoint = torch.load("weight.pth", map_location="cpu")
model.load_state_dict(checkpoint)

# Prepare input tensor
input = torch.randn(1, 3, 16, 16, requires_grad=True)

# Export the torch model as onnx
torch.onnx.export(model,
                  input,
                  'model.onnx',  # name of the exported onnx model
                  opset_version=13,
                  export_params=True,
                  do_constant_folding=True)
After running the script, we get an onnx model named model.onnx in the current directory.
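Optionally, you can verify the exported model before moving on. The sketch below assumes the onnx and onnxruntime Python packages are available in the container; it checks the graph structurally and runs one inference on a random input. It deliberately avoids importing simple_net, because that script re-saves random weights at import time and would overwrite weight.pth:

#!/usr/bin/env python3
import numpy as np
import onnx
import onnxruntime as ort

# Structural validity check of the exported graph
onnx.checker.check_model(onnx.load("model.onnx"))

# Run one inference on a random NCHW input
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
x = np.random.rand(1, 3, 16, 16).astype(np.float32)
out = sess.run(None, {input_name: x})[0]
print(out.shape)  # (1, 8, 14, 14): the unpadded 3x3 conv shrinks 16x16 to 14x14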
11.2. TensorFlow model to ONNX
In this section, we use the mobilenet_v1_0.25_224 model provided in the TensorFlow official repository as a conversion example.
11.2.1. Step 0: Create a working directory
Create and enter the tf_model directory using the command line.
$ mkdir tf_model
$ cd tf_model
11.2.2. Step 1: Prepare and convert the model
Download the model with the following commands and use the tf2onnx tool to export it as an ONNX model:
$ wget -nc http://download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_0.25_224.tgz
# tar to get "*.pb" model def file
$ tar xzf mobilenet_v1_0.25_224.tgz
$ python -m tf2onnx.convert --graphdef mobilenet_v1_0.25_224_frozen.pb \
    --output mnet_25.onnx --inputs input:0 \
    --inputs-as-nchw input:0 \
    --outputs MobilenetV1/Predictions/Reshape_1:0
After running all commands, we get an onnx model named mnet_25.onnx in the current directory.
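As an optional sanity check (assuming the onnxruntime Python package is installed), you can run the converted model once. The tensor names below are the ones passed to tf2onnx above, and the input is NCHW because of --inputs-as-nchw:

#!/usr/bin/env python3
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("mnet_25.onnx", providers=["CPUExecutionProvider"])
# --inputs-as-nchw input:0 makes the graph expect NCHW input
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
out = sess.run(["MobilenetV1/Predictions/Reshape_1:0"], {"input:0": x})[0]
print(out.shape)  # expected (1, 1001): per-class probabilities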
11.3. PaddlePaddle model to ONNX
This section uses the SqueezeNet1_1 model provided in the official PaddlePaddle repository as a conversion example. It also requires installing openssl 1.1 (the libssl1.1 package; Ubuntu 22.04 provides openssl 3.0.2 by default).
11.3.1. Step 0: Install openssl 1.1
$ wget http://nz2.archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.1f-1ubuntu2.19_amd64.deb
$ sudo dpkg -i libssl1.1_1.1.1f-1ubuntu2.19_amd64.deb
If the link has expired, check http://nz2.archive.ubuntu.com/ubuntu/pool/main/o/openssl/?C=M;O=D for a valid one.
11.3.2. Step 1: Create a working directory
Create and enter the pp_model directory using the command line.
$ mkdir pp_model
$ cd pp_model
11.3.3. Step 2: Prepare the model
Download the model with the following commands:
$ wget https://bj.bcebos.com/paddlehub/fastdeploy/SqueezeNet1_1_infer.tgz
$ tar xzf SqueezeNet1_1_infer.tgz
$ cd SqueezeNet1_1_infer
In addition, use the paddle_infer_shape.py script from the PaddleUtils repository to perform shape inference on the model. Here the input shape is set to [1,3,224,224] in NCHW format:
$ wget https://raw.githubusercontent.com/jiangjiajun/PaddleUtils/main/paddle/paddle_infer_shape.py
$ python paddle_infer_shape.py --model_dir . \
    --model_filename inference.pdmodel \
    --params_filename inference.pdiparams \
    --save_dir new_model \
    --input_shape_dict="{'inputs':[1,3,224,224]}"
After running all commands, we are still in the SqueezeNet1_1_infer directory, and a new_model directory has been generated inside it.
11.3.4. Step 3: Convert the model
Install the paddle2onnx tool with the following commands, and use it to convert the PaddlePaddle model to the ONNX format:
$ pip install paddle2onnx
$ paddle2onnx --model_dir new_model \
    --model_filename inference.pdmodel \
    --params_filename inference.pdiparams \
    --opset_version 13 \
    --save_file squeezenet1_1.onnx
After running all the above commands, we get an onnx model named squeezenet1_1.onnx.
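Optionally, you can verify the result with the onnx and onnxruntime Python packages (assumed to be installed), using the fixed input shape and the 'inputs' tensor name set by --input_shape_dict in Step 2:

#!/usr/bin/env python3
import numpy as np
import onnx
import onnxruntime as ort

# Structural check, then one inference on a random input
onnx.checker.check_model(onnx.load("squeezenet1_1.onnx"))
sess = ort.InferenceSession("squeezenet1_1.onnx", providers=["CPUExecutionProvider"])
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
out = sess.run(None, {"inputs": x})[0]
print(out.shape)  # expected (1, 1000) class scores for SqueezeNet1_1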