Build TensorFlow-addons for Jetson Xavier, with TensorFlow 1.15.5 and 2.8.0.

Paul Xiong
4 min read · May 13, 2022


TensorFlow-addons is not always compatible with every combination of TensorFlow version and hardware. Here is my environment:

  • Jetson AGX Xavier
  • TensorFlow 1.15.5 or TensorFlow 2.8.0 (from NVIDIA's prebuilt Docker images; see my other post for more information). You can confirm which one you have with the check below.
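
A quick way to confirm which TensorFlow the image actually ships:

$ python3 -c "import tensorflow as tf; print(tf.__version__)"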

If you don’t want to build from source code, here is the TensorFlow-addons whl I built and tested: tensorflow_addons-0.17.0.dev0-cp38-cp38-linux_aarch64.whl

Install Bazel

Bazel has to be bootstrapped from its release distribution archive (the plain GitHub source tarball lacks the generated files that compile.sh needs):

$ apt-get update
$ sudo apt-get install openjdk-8-jdk unzip
$ wget https://github.com/bazelbuild/bazel/releases/download/0.24.1/bazel-0.24.1-dist.zip
$ unzip bazel-0.24.1-dist.zip -d bazel-0.24.1
$ cd bazel-0.24.1

On newer glibc (2.30 and later) the bootstrap fails because Bazel 0.24.1's bundled gRPC defines its own gettid(), which collides with the gettid() that glibc now declares. One line needs to be modified for the compile and build to pass (revert the change once Bazel is built):

# file: /usr/include/aarch64-linux-gnu/bits/unistd_ext.h, line 34, change
extern __pid_t gettid (void) __THROW;
# to
extern __pid_t sys_gettid (void) __THROW;
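
If you would rather script that edit, a sed one-liner along these lines should work (swap the two patterns to revert it after the build):

$ sudo sed -i 's/extern __pid_t gettid/extern __pid_t sys_gettid/' /usr/include/aarch64-linux-gnu/bits/unistd_ext.h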

Then build it:

env EXTRA_BAZEL_ARGS="--host_javabase=@local_jdk//:jdk" bash ./compile.sh

and install it:

$ sudo cp output/bazel /usr/local/bin/
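
To confirm the bootstrap worked, check the version; it should report something like the label below:

$ bazel version
Build label: 0.24.1- (@non-git)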

Get the TensorFlow 1.15.5 source for Jetson Xavier

I have TF 1.15.5 on my Jetson Xavier AGX, so to get the source code matching it, follow these steps:

$ git clone https://github.com/tensorflow/tensorflow.git

Find the matching tag and check it out, then make sure the CUDA libraries are on the loader path:

$ cd tensorflow
$ git ls-remote | grep v1.15.5
$ git checkout refs/tags/v1.15.5
$ sh -c "echo '/usr/local/cuda/lib64' >> /etc/ld.so.conf.d/nvidia-tegra.conf"

Replace lines 1351 and 1352 of configure.py with the following (on Jetson the script fails to detect the cuDNN version, so it is hard-coded):

config = {}
for line in proc.stdout:
    parameter_split = line.decode('ascii').rstrip().split(': ')
    if len(parameter_split) == 1:
        # the script cannot detect the cuDNN version here, so hard-code it
        config[parameter_split[0][:-1]] = '8.1'
    else:
        config[parameter_split[0]] = parameter_split[1]
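
The hard-coded '8.1' should match the cuDNN version that ships with your JetPack. One way to check it (the header path may differ between releases):

$ grep -A 2 'define CUDNN_MAJOR' /usr/include/cudnn_version.h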

Then run:

$ ldconfig
$ ./configure

Answer the prompts as follows (my session transcript):

# ./configure
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
You have bazel 0.24.1- (@non-git) installed.
Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python3
Found possible Python library paths:
/usr/lib/python3.8/dist-packages
/usr/local/lib/python3.8/dist-packages
/usr/lib/python3/dist-packages
Please input the desired Python library path to use. Default is [/usr/lib/python3.8/dist-packages]
Do you wish to build TensorFlow with XLA JIT support? [Y/n]:
XLA JIT support will be enabled for TensorFlow.
Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]:
No OpenCL SYCL support will be enabled for TensorFlow.
Do you wish to build TensorFlow with ROCm support? [y/N]:
No ROCm support will be enabled for TensorFlow.
Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.
Do you wish to build TensorFlow with TensorRT support? [y/N]: y
TensorRT support will be enabled for TensorFlow.
Please specify a list of comma-separated CUDA compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size, and that TensorFlow only supports compute capabilities >= 3.5 [Default is: 3.5,7.0]:
Do you want to use clang as CUDA compiler? [y/N]:
nvcc will be used as CUDA compiler.
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:
Do you wish to build TensorFlow with MPI support? [y/N]:
No MPI support will be enabled for TensorFlow.
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native -Wno-sign-compare]:
Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]:
Not configuring the WORKSPACE for Android builds.
Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
--config=mkl # Build with MKL support.
--config=monolithic # Config for mostly static monolithic build.
--config=gdr # Build with GDR support.
--config=verbs # Build with libverbs support.
--config=ngraph # Build with Intel nGraph support.
--config=numa # Build with NUMA support.
--config=dynamic_kernels # (Experimental) Build kernels into separate shared objects.
--config=v2 # Build TensorFlow 2.x instead of 1.x.
Preconfigured Bazel build configs to DISABLE default on features:
--config=noaws # Disable AWS S3 filesystem support.
--config=nogcp # Disable GCP support.
--config=nohdfs # Disable HDFS support.
--config=noignite # Disable Apache Ignite support.
--config=nokafka # Disable Apache Kafka support.
--config=nonccl # Disable NVIDIA NCCL support.
Configuration finished

Get the addons:

$ git clone --depth=1 https://github.com/tensorflow/addons.git
$ cd addons
$ wget https://raw.githubusercontent.com/Qengineering/TensorFlow-Addons-Jetson-Nano/main/configure.py
$ wget https://github.com/Qengineering/TensorFlow-Addons-Jetson-Nano/raw/main/build_pip_pkg.sh
$ chmod 755 build_pip_pkg.sh
$ mv build_pip_pkg.sh ./build_deps
$ ln -s /usr/local/lib/python3.8/dist-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so /usr/lib/lib_pywrap_tensorflow_internal.so
$ python3 ./configure.py
$ bazel clean
$ bazel build build_pip_pkg
$ ./bazel-bin/build_pip_pkg ./tmp/tensorflow_addons_pkg
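
The wheel lands in the output directory passed to build_pip_pkg, so you should see something like:

$ ls ./tmp/tensorflow_addons_pkg
tensorflow_addons-0.17.0.dev0-cp38-cp38-linux_aarch64.whl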

The whl file is ready to be installed now:

$ pip3 install tensorflow_addons-0.17.0.dev0-cp38-cp38-linux_aarch64.whl

Though the wheel above was built against TensorFlow 1.15, it also works with TensorFlow 2.8.0 (tested).
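
As a quick smoke test that the addons import and an op actually runs (mish is just an arbitrary example op):

$ python3 - <<'EOF'
import tensorflow as tf
import tensorflow_addons as tfa

print('TF :', tf.__version__)
print('TFA:', tfa.__version__)
# run one addons op as a sanity check; under TF 1.x this prints a
# symbolic tensor unless you evaluate it in a session
x = tf.constant([-1.0, 0.0, 1.0])
print(tfa.activations.mish(x))
EOF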
