This repository contains several applications which invoke DNN inference with the TensorFlow Lite GPU Delegate and visualize the results with OpenGL ES.
Target platforms: Linux PC / NVIDIA Jetson / Raspberry Pi.
$ sudo apt install libgles2-mesa-dev
$
$ wget https://github.com/bazelbuild/bazel/releases/download/2.0.0/bazel-2.0.0-installer-linux-x86_64.sh
$ chmod 755 bazel-2.0.0-installer-linux-x86_64.sh
$ sudo ./bazel-2.0.0-installer-linux-x86_64.sh
$ cd ~/work
$ git clone https://github.com/terryky/tflite_gles_app.git
$ ./tflite_gles_app/tools/scripts/tf2.2/build_libtflite_r2.2.sh
(The TensorFlow configure script will start after a while. Answer the prompts according to your environment.)
$
$ ln -s tensorflow_r2.2 ./tensorflow
$ cd ~/work/tflite_gles_app/gl2handpose
$ make -j4
$ cd ~/work/tflite_gles_app/gl2handpose
$ ./gl2handpose
(HostPC)$ wget https://github.com/bazelbuild/bazel/releases/download/2.0.0/bazel-2.0.0-installer-linux-x86_64.sh
(HostPC)$ chmod 755 bazel-2.0.0-installer-linux-x86_64.sh
(HostPC)$ sudo ./bazel-2.0.0-installer-linux-x86_64.sh
(HostPC)$
(HostPC)$ cd ~/work
(HostPC)$ git clone https://github.com/terryky/tflite_gles_app.git
(HostPC)$ ./tflite_gles_app/tools/scripts/tf2.2/build_libtflite_r2.2_with_gpu_delegate_aarch64.sh
(The TensorFlow configure script will start after a while. Answer the prompts according to your environment.)
(HostPC)$ scp ~/work/tensorflow_r2.2/tensorflow/lite/tools/make/gen/linux_aarch64/lib/libtensorflow-lite.a jetson@192.168.11.11:/home/jetson/
(Jetson)$ cd ~/work
(Jetson)$ git clone https://github.com/tensorflow/tensorflow.git
(Jetson)$ cd tensorflow
(Jetson)$ git checkout r2.2
(Jetson)$ ./tensorflow/lite/tools/make/download_dependencies.sh
(Jetson)$ cd ~/work
(Jetson)$ git clone https://github.com/terryky/tflite_gles_app.git
(Jetson)$ cd ~/work/tflite_gles_app/gl2handpose
(Jetson)$ cp ~/libtensorflow-lite.a .
(Jetson)$ make -j4 TARGET_ENV=jetson_nano TFLITE_DELEGATE=GPU_DELEGATEV2
(Jetson)$ cd ~/work/tflite_gles_app/gl2handpose
(Jetson)$ ./gl2handpose
On Jetson Nano, display sync to vblank (VSYNC) is enabled by default to avoid tearing. To enable or disable VSYNC, run the app with the following commands.
# enable VSYNC (default).
(Jetson)$ export __GL_SYNC_TO_VBLANK=1; ./gl2handpose
# disable VSYNC. Framerate improves, but tearing may occur.
(Jetson)$ export __GL_SYNC_TO_VBLANK=0; ./gl2handpose
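The two invocations above differ only in the value of `__GL_SYNC_TO_VBLANK`, so they can be wrapped in a tiny launcher. This is a hypothetical helper (the `vsync_env` function name is ours, not part of the repository):

```shell
# Hypothetical helper: map "on"/"off" to the NVIDIA driver's VSYNC variable.
vsync_env () {
  case "$1" in
    off) echo "__GL_SYNC_TO_VBLANK=0" ;;   # higher framerate, tearing possible
    *)   echo "__GL_SYNC_TO_VBLANK=1" ;;   # default: sync to vblank
  esac
}

# Usage (on the Jetson): env "$(vsync_env off)" ./gl2handpose
```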
(HostPC)$ wget https://github.com/bazelbuild/bazel/releases/download/2.0.0/bazel-2.0.0-installer-linux-x86_64.sh
(HostPC)$ chmod 755 bazel-2.0.0-installer-linux-x86_64.sh
(HostPC)$ sudo ./bazel-2.0.0-installer-linux-x86_64.sh
(HostPC)$
(HostPC)$ cd ~/work
(HostPC)$ git clone https://github.com/terryky/tflite_gles_app.git
(HostPC)$ ./tflite_gles_app/tools/scripts/tf2.2/build_libtflite_r2.2_with_gpu_delegate_rpi.sh
(The TensorFlow configure script will start after a while. Answer the prompts according to your environment.)
(HostPC)$ scp ~/work/tensorflow_r2.2/tensorflow/lite/tools/make/gen/rpi_armv7l/lib/libtensorflow-lite.a pi@192.168.11.11:/home/pi/
(Raspi)$ sudo apt update
(Raspi)$ sudo apt upgrade
(Raspi)$ sudo apt install libgles2-mesa-dev libegl1-mesa-dev xorg-dev
(Raspi)$ cd ~/work
(Raspi)$ git clone https://github.com/tensorflow/tensorflow.git
(Raspi)$ cd tensorflow
(Raspi)$ git checkout r2.2
(Raspi)$ ./tensorflow/lite/tools/make/download_dependencies.sh
(Raspi)$ cd ~/work
(Raspi)$ git clone https://github.com/terryky/tflite_gles_app.git
(Raspi)$ cd ~/work/tflite_gles_app/gl2handpose
(Raspi)$ cp ~/libtensorflow-lite.a .
(Raspi)$ make -j4 TARGET_ENV=raspi4    # GPU Delegate disabled (recommended)
# enable the GPU Delegate (causes low performance on Raspberry Pi 4):
(Raspi)$ make -j4 TARGET_ENV=raspi4 TFLITE_DELEGATE=GPU_DELEGATEV2
(Raspi)$ cd ~/work/tflite_gles_app/gl2handpose
(Raspi)$ ./gl2handpose
For more detailed information, please refer to this article.
Both a live camera and a video file are supported as input sources.
- UVC (USB Video Class) camera capture is supported.
- Use the v4l2-ctl command to configure the capture resolution. The lower the resolution, the higher the framerate.
(Target)$ sudo apt-get install v4l-utils
# confirm current resolution settings
(Target)$ v4l2-ctl --all
# query available resolutions
(Target)$ v4l2-ctl --list-formats-ext
# set capture resolution (160x120)
(Target)$ v4l2-ctl --set-fmt-video=width=160,height=120
# set capture resolution (640x480)
(Target)$ v4l2-ctl --set-fmt-video=width=640,height=480
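If you switch resolutions often, the `--set-fmt-video` argument can be composed from a width/height pair. A minimal sketch; the `fmt_arg` helper is hypothetical, not part of v4l-utils or this repository:

```shell
# Hypothetical helper: build the v4l2-ctl format argument from width and height.
fmt_arg () {
  echo "--set-fmt-video=width=$1,height=$2"
}

# Usage: v4l2-ctl "$(fmt_arg 160 120)"
```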
- Currently, only the YUYV pixel format is supported.
- If you see error messages like the following:
-------------------------------
capture_devie : /dev/video0
capture_devtype: V4L2_CAP_VIDEO_CAPTURE
capture_buftype: V4L2_BUF_TYPE_VIDEO_CAPTURE
capture_memtype: V4L2_MEMORY_MMAP
WH(640, 480), 4CC(MJPG), bpl(0), size(341333)
-------------------------------
ERR: camera_capture.c(87): pixformat(MJPG) is not supported.
ERR: camera_capture.c(87): pixformat(MJPG) is not supported.
...
please try changing your camera settings to use the YUYV pixel format with the following commands:
$ sudo apt-get install v4l-utils
$ v4l2-ctl --set-fmt-video=width=640,height=480,pixelformat=YUYV --set-parm=30
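To check programmatically which pixel format the camera currently delivers, the 4CC code can be extracted from v4l2-ctl output. This sketch assumes the usual `Pixel Format : 'YUYV'` line layout that v4l2-ctl prints; the `pixfmt_of` helper is hypothetical:

```shell
# Hypothetical helper: pull the 4CC code out of a "Pixel Format : 'XXXX'" line.
pixfmt_of () {
  sed -n "s/.*Pixel Format *: *'\([A-Z0-9]*\)'.*/\1/p"
}

# Usage: v4l2-ctl --get-fmt-video | pixfmt_of
```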
- If your camera doesn't support YUYV, run the apps in camera-disabled mode with the -x argument:
$ ./gl2handpose -x
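That fallback can be scripted: pass -x whenever the detected format is not YUYV. A hypothetical sketch (the `app_args` helper is ours):

```shell
# Hypothetical helper: pick app arguments from the detected pixel format.
app_args () {
  if [ "$1" = "YUYV" ]; then
    echo ""       # camera format supported: no extra flags
  else
    echo "-x"     # fall back to camera-disabled mode
  fi
}

# Usage: ./gl2handpose $(app_args "$detected_fmt")
```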
- FFmpeg (libav) video decoding is supported.
- If you want to use a recorded video file instead of a live camera, follow these steps:
# set up dependent libraries.
(Target)$ sudo apt install libavcodec-dev libavdevice-dev libavfilter-dev libavformat-dev libavresample-dev libavutil-dev
# build the app with the ENABLE_VDEC option
(Target)$ cd ~/work/tflite_gles_app/gl2facemesh
(Target)$ make -j4 ENABLE_VDEC=true
# run the app with a video file name as an argument.
(Target)$ ./gl2facemesh -v assets/sample_video.mp4
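The choice between a camera build and a video-decode build comes down to one make flag; a hypothetical wrapper restating the commands above (the `build_cmd` name is ours, `ENABLE_VDEC` comes from the repository's Makefiles):

```shell
# Hypothetical helper: compose the make invocation for camera vs. video input.
build_cmd () {
  if [ "$1" = "video" ]; then
    echo "make -j4 ENABLE_VDEC=true"   # FFmpeg video-decode build
  else
    echo "make -j4"                    # default camera build
  fi
}
```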
You can select the target platform by editing Makefile.env.
- Linux PC (X11)
- NVIDIA Jetson Nano (X11)
- NVIDIA Jetson TX2 (X11)
- RaspberryPi4 (X11)
- RaspberryPi3 (Dispmanx)
- Coral EdgeTPU Devboard (Wayland)
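The per-platform make flags used throughout this README can be summarized in one lookup. The sketch below only restates commands shown above; the target shorthand and the `make_flags` helper are ours, and Makefile.env defaults still apply for anything not listed:

```shell
# Hypothetical helper: restate this README's per-target make flags.
make_flags () {
  case "$1" in
    jetson_nano) echo "TARGET_ENV=jetson_nano TFLITE_DELEGATE=GPU_DELEGATEV2" ;;
    raspi4)      echo "TARGET_ENV=raspi4" ;;   # GPU Delegate off (recommended)
    *)           echo "" ;;                    # Linux PC: build with defaults
  esac
}

# Usage: make -j4 $(make_flags jetson_nano)
```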