PyTorch-ONNX-TensorRT

How to convert a model from PyTorch to TensorRT and speed up inference

The blog post is here: https://www.learnopencv.com/how-to-convert-a-model-from-pytorch-to-tensorrt-and-speed-up-inference/

To run the PyTorch part:

python3 -m pip install -r requirements.txt
python3 pytorch_model.py
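
For orientation, the core of the PyTorch step is exporting the trained model to ONNX so TensorRT can parse it. The snippet below is a minimal sketch of that export; the choice of ResNet-50, the input shape, and the file name resnet50.onnx are illustrative assumptions and may differ from what pytorch_model.py actually does.

import torch
import torchvision.models as models

# Load a pretrained model (architecture is an assumption) and switch to inference mode
model = models.resnet50(pretrained=True)
model.eval()

# Dummy input with the expected shape: (batch, channels, height, width)
dummy_input = torch.randn(1, 3, 224, 224)

# Export the traced model to ONNX; TensorRT will parse this file later
torch.onnx.export(model, dummy_input, "resnet50.onnx",
                  export_params=True,
                  input_names=["input"], output_names=["output"])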

To run the TensorRT part:

  1. Download and install NVIDIA CUDA 10.0 or later, following the official instructions: link
  2. Download and extract the cuDNN library for your CUDA version (login required): link
  3. Download and extract the NVIDIA TensorRT library for your CUDA version (login required): link. The minimum required version is 6.0.1.5. Follow the Installation Guide for your system and don't forget to install the Python bindings.
  4. Add the absolute paths to the CUDA, TensorRT, and cuDNN libraries to the PATH or LD_LIBRARY_PATH environment variable.
  5. Install PyCUDA.

Then run:

python3 trt_inference.py
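
As a rough guide to what this step involves, here is a minimal sketch of parsing the exported ONNX file, building a TensorRT engine, and running inference through PyCUDA. It assumes the TensorRT 7.x Python API with a single input and a single output binding; the file name resnet50.onnx and the input shape are assumptions carried over from the export sketch above, not taken from trt_inference.py.

import numpy as np
import pycuda.autoinit  # creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path):
    # Explicit batch is required when parsing ONNX models in TensorRT >= 6
    explicit_batch = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network(explicit_batch) as network, \
         trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = 1 << 30  # 1 GB of workspace for the builder
        with open(onnx_path, "rb") as f:
            if not parser.parse(f.read()):
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                return None
        return builder.build_cuda_engine(network)

engine = build_engine("resnet50.onnx")
context = engine.create_execution_context()

# Allocate page-locked host buffers and device buffers for one input and one output
h_input = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(0)), dtype=np.float32)
h_output = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(1)), dtype=np.float32)
d_input = cuda.mem_alloc(h_input.nbytes)
d_output = cuda.mem_alloc(h_output.nbytes)
stream = cuda.Stream()

# A real script would preprocess an image here; random data keeps the sketch self-contained
np.copyto(h_input, np.random.rand(h_input.size).astype(np.float32))

# Copy input to the GPU, run inference asynchronously, and copy the result back
cuda.memcpy_htod_async(d_input, h_input, stream)
context.execute_async_v2(bindings=[int(d_input), int(d_output)], stream_handle=stream.handle)
cuda.memcpy_dtoh_async(h_output, d_output, stream)
stream.synchronize()

print("Predicted class:", int(np.argmax(h_output)))

The actual script measures and compares inference time against plain PyTorch, as described in the blog post linked above.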

AI Courses by OpenCV

Want to become an expert in AI? AI Courses by OpenCV is a great place to start.