Face Recognition: ONNX to TensorRT conversion for Arcface model problem?
Source: https://www.datatobiz.com/
I failed to run TensorRT inference on Jetson Nano because the PReLU activation function is not supported in TensorRT 5.1; the channel-wise PReLU operator only became available in TensorRT 6. In this blog post, I will explain the steps required to convert the model from ONNX to TensorRT, and why those steps failed to run TensorRT inference on Jetson Nano.
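For context, PReLU behaves like ReLU for positive inputs but multiplies negative inputs by a learned slope, and the channel-wise variant that TensorRT 6 adds uses one slope per channel. A minimal sketch in plain Python (the function names are mine, for illustration only):

```python
def prelu(x, alpha):
    """Parametric ReLU: identity for positive inputs, slope alpha otherwise."""
    return x if x > 0 else alpha * x

def prelu_channelwise(feature_map, alphas):
    """Apply PReLU with one learned slope per channel.

    feature_map: list of channels, each a flat list of activations.
    alphas: one learned slope per channel.
    """
    return [[prelu(v, a) for v in channel]
            for channel, a in zip(feature_map, alphas)]

print(prelu_channelwise([[1.0, -2.0], [-3.0, 4.0]], [0.5, 0.25]))
# [[1.0, -1.0], [-0.75, 4.0]]
```

ArcFace's ResNet-100 backbone uses this activation throughout, which is why the unsupported operator blocks the whole conversion.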
1. The first step is to import the model, which includes loading it from a saved file on disk and converting it to a TensorRT network from its native framework or format. Our example loads the ArcFace face-recognition model in ONNX format.
2. Next, an optimized TensorRT engine is built based on the input model, target GPU platform, and other configuration parameters specified.
3. The last step is to provide input data to the TensorRT engine to perform inference. The sample uses input data bundled with the model from the ONNX model zoo.
The TensorRT module is pre-installed on Jetson Nano; the release shipped by the NVIDIA JetPack SDK at the time of writing is TensorRT 5.1. Firstly, ensure that ONNX is installed on Jetson Nano by running the command below. We will then convert the downloaded ONNX model into a TensorRT engine, arcface_trt.engine.
1. Check that the ONNX Python package imports:
import onnx
If this command gives an error, ONNX is not installed on Jetson Nano. Follow these steps to install ONNX and its build dependencies:
sudo apt-get install cmake
sudo apt-get install protobuf-compiler
sudo apt-get install libprotoc-dev
pip install --no-binary onnx 'onnx==1.5.0'
Now, ONNX is ready to run on Jetson Nano, satisfying all the dependencies.
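A quick way to verify the dependency without triggering an ImportError is to probe for the package before importing it; `check_module` below is a helper name of my own, built only on the standard library:

```python
import importlib.util

def check_module(name):
    """Return True if the named package is importable in this environment."""
    return importlib.util.find_spec(name) is not None

if check_module("onnx"):
    print("onnx is installed")
else:
    print("onnx is missing; run the install steps above")
```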
2. Now, download the ONNX model using the following command:
wget https://s3.amazonaws.com/onnx-model-zoo/arcface/resnet100/resnet100.onnx
3. Simply run the following script as a next step:
We are using the TensorRT Python API for the conversion.
import tensorrt as trt

batch_size = 1
TRT_LOGGER = trt.Logger()

def build_engine_onnx(model_file):
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network() as network, \
         trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = 1 << 30
        builder.max_batch_size = batch_size
        # Load the ONNX model and parse it in order to populate the
        # TensorRT network.
        with open(model_file, 'rb') as model:
            parser.parse(model.read())
        return builder.build_cuda_engine(network)

# Path to the downloaded ArcFace model
onnx_file_path = './resnet100.onnx'
engine = build_engine_onnx(onnx_file_path)

# Serialize the built engine to disk.
engine_file_path = './arcface_trt.engine'
with open(engine_file_path, "wb") as f:
    f.write(engine.serialize())
After running the script, we get the error "Segmentation fault (core dumped)". After a lot of research, we found that there is no issue with the script itself; the problem has another cause, namely that the PReLU operator used throughout the ArcFace model is not supported by the TensorRT 5.1 ONNX parser on Jetson Nano.
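One way to confirm a diagnosis like this before building an engine is to list the operator types that appear in the ONNX graph (e.g. via `onnx.load` and the graph's nodes) and compare them against the operators the installed TensorRT release supports. The comparison itself is just a set difference; the sketch below uses hand-written operator lists, since the exact ArcFace graph contents and the TensorRT support table are assumptions here:

```python
def find_unsupported_ops(model_op_types, supported_op_types):
    """Return op types present in the model but missing from the runtime's
    supported set, preserving first-seen order without duplicates."""
    supported = set(supported_op_types)
    unsupported = []
    for op in model_op_types:
        if op not in supported and op not in unsupported:
            unsupported.append(op)
    return unsupported

# Illustrative excerpt only: ArcFace is PReLU-heavy, and TensorRT 5.1's
# ONNX parser does not handle PRelu.
model_ops = ["Conv", "BatchNormalization", "PRelu", "Conv", "PRelu", "Gemm"]
trt_5_1_ops = ["Conv", "BatchNormalization", "Gemm", "Relu"]
print(find_unsupported_ops(model_ops, trt_5_1_ops))  # ['PRelu']
```

Any operator this check flags will have to wait for a newer TensorRT release (channel-wise PReLU arrives in TensorRT 6), be replaced in the model, or be implemented as a custom plugin.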