A Guide to using TensorRT on the Nvidia Jetson Nano
- Note: This guide assumes that you are using Ubuntu 18.04. If you are using Windows, refer to these instructions on how to set up your computer to use TensorRT.
Step 1: Setup TensorRT on Ubuntu Machine
Follow the instructions here.
Make sure you use the tar file instructions unless you have previously installed CUDA using .deb files.
Step 2: Setup TensorRT on your Jetson Nano
- Setup some environment variables so `nvcc` is on `$PATH`. Add the following lines to your `.bashrc` file.

```shell
# Add this to your .bashrc file
# Adds the CUDA compiler to the PATH
export PATH=/usr/local/cuda/bin:$PATH
# Adds the libraries
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
```
- Test the changes to your `.bashrc` by running `source ~/.bashrc` and then `nvcc --version`. You should see something like:

```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on ...
Cuda compilation tools, release 10.0, Vxxxxx
```
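If `nvcc` is not found, a small helper like the one below can make debugging easier (a sketch; `check_nvcc` is a hypothetical name, not part of donkeycar):

```python
import shutil
import subprocess

def check_nvcc():
    """Return the `nvcc --version` banner, or a hint if nvcc is not on PATH."""
    path = shutil.which("nvcc")
    if path is None:
        return "nvcc not found on PATH; re-check the exports in your .bashrc"
    result = subprocess.run([path, "--version"], stdout=subprocess.PIPE,
                            universal_newlines=True)
    return result.stdout

print(check_nvcc())
```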
- Switch to your `virtualenv` and install PyCUDA.

```shell
# This takes a while.
pip install pycuda
```
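To verify that PyCUDA can actually talk to the GPU, a guarded check like the following can help (a sketch; `check_pycuda` is a hypothetical helper, and it degrades gracefully on machines without CUDA):

```python
def check_pycuda():
    """Return the name of the first CUDA device, or the reason PyCUDA failed."""
    try:
        import pycuda.driver as cuda
        cuda.init()
        return cuda.Device(0).name()
    except Exception as exc:  # ImportError, or no CUDA driver present
        return "PyCUDA not usable here: %s" % exc

print(check_pycuda())
```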
- After this you will also need to set up `PYTHONPATH` so that the `dist-packages` are included as part of your `virtualenv`. Add this to your `.bashrc`. This needs to be done because the Python bindings to `tensorrt` are available in `dist-packages`, and this folder is usually not visible to your `virtualenv`. To make them visible, we add it to `PYTHONPATH`.
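The exact line depends on where JetPack put the bindings; assuming the stock Python 3.6 on the Nano, it would look something like:

```shell
# Add this to your .bashrc file (path assumes JetPack's Python 3.6)
export PYTHONPATH=/usr/lib/python3.6/dist-packages:$PYTHONPATH
```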
- Test this change by switching to your `virtualenv` and importing `tensorrt`.

```python
> import tensorrt as trt
> # This import should succeed
```
Step 3: Train, Freeze and Export your model to TensorRT format (`uff`)
After you train the linear model you end up with a file with a `.h5` extension.

```shell
# You end up with a Linear.h5 in the models folder
python manage.py train --model=./models/Linear.h5 --tub=./data/tub_1_19-06-29,...
# (optional) copy './models/Linear.h5' from your desktop computer to your Jetson Nano in your working dir (~/mycar/models/)
# Freeze model using freeze_model.py in donkeycar/scripts; the frozen model is stored as protocol buffers.
# This command also exports some metadata about the model which is saved in ./models/Linear.metadata
python ~/projects/donkeycar/scripts/freeze_model.py --model=~/mycar/models/Linear.h5 --output=~/mycar/models/Linear.pb
# Convert the frozen model to UFF. The command below creates a file ./models/Linear.uff
python convert_to_uff.py ~/mycar/models/Linear.pb
```
Now copy the converted `uff` model and the `metadata` to your Jetson Nano. In your `myconfig.py` pick the model type as `tensorrt_linear`.

```python
DEFAULT_MODEL_TYPE = 'tensorrt_linear'
```
- Finally you can do:

```shell
# After you scp your `uff` model to the Nano
python manage.py drive --model=./models/Linear.uff
```
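For the curious, loading a `.uff` model into a TensorRT engine looks roughly like the sketch below (legacy TensorRT 5/6 UFF API; `img_in`, `outputs`, and the input shape are placeholders here — the real names and shapes come from your `Linear.metadata`):

```python
def build_engine(uff_path, input_name="img_in", input_shape=(3, 120, 160),
                 output_name="outputs"):
    """Parse a .uff file and build a TensorRT engine (returns None off-device)."""
    try:
        import tensorrt as trt
    except ImportError:
        return None  # TensorRT is only available on the Nano / CUDA machines
    logger = trt.Logger(trt.Logger.WARNING)
    with trt.Builder(logger) as builder, builder.create_network() as network, \
            trt.UffParser() as parser:
        parser.register_input(input_name, input_shape)
        parser.register_output(output_name)
        parser.parse(uff_path, network)
        builder.max_workspace_size = 1 << 25  # 32 MB is plenty for this model
        return builder.build_cuda_engine(network)

engine = build_engine("./models/Linear.uff")
print("engine built" if engine else "TensorRT unavailable or build failed")
```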