Convert PyTorch model to tensorflowjs

Introduction

I want to make it easy to run machine learning models in a browser, so I’m working on a library that runs various models in tensorflowjs (see this repository). However, as the graph below shows, PyTorch has recently been used in 75% of papers at major conferences, which makes it hard to run the newest and most interesting models in tensorflowjs as-is.

State of AI Report 2020 (site)

Approach

Several things did not work in my particular environment, and after much trial and error I finally managed to reproduce the conversion using Docker. The conversion path is PyTorch → ONNX → OpenVINO IR → TensorFlow SavedModel → tensorflowjs. In the following, I will assume we are working inside Docker.

Build Environment

First, start the Docker container. I used an Ubuntu 18.04 image with CUDA 10.1 and cuDNN 7; the commands after docker run are executed inside the container. Note that the OpenVINO toolkit installer (l_openvino_toolkit_p_2021.1.110.tgz) has been downloaded into the shared directory (/home/whoami/docker_share) beforehand.

$ docker run --gpus all  -v /home/whoami/docker_share:/work --name converter -ti nvidia/cuda:10.1-cudnn7-devel-ubuntu18.04
$ apt install -y python3.8 python3-pip emacs mlocate pciutils cpio git sudo curl
$ python3 -m pip install pip --upgrade
$ pip3 install tensorflow==2.3.1 --upgrade
$ pip3 install openvino2tensorflow --upgrade
$ pip3 install torch==1.7.0+cu101 torchvision==0.8.1+cu101 torchaudio==0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
$ pip3 install onnxruntime onnx-simplifier openvino-python --upgrade
$ pip3 install networkx defusedxml test-generator==0.1.1 tensorflow_datasets tensorflowjs
$ cp /work/l_openvino_toolkit_p_2021.1.110.tgz ./
$ tar xvfz l_openvino_toolkit_p_2021.1.110.tgz
$ cd l_openvino_toolkit_p_2021.1.110
$ ./install.sh
$ cd -
$ cd /opt/intel/openvino_2021/install_dependencies
$ ./install_openvino_dependencies.sh
$ source /opt/intel/openvino_2021/bin/setupvars.sh
$ cd -
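Before moving on, it can save time to confirm that all of the Python packages installed above are actually importable. A minimal stdlib-only check (the package list is an assumption based on the pip commands in this section):

```python
# Sanity check: are the packages installed above importable?
# (The package list is an assumption based on the pip commands in this section.)
import importlib.util

required = ["tensorflow", "torch", "onnx", "onnxruntime", "tensorflowjs"]
missing = [name for name in required if importlib.util.find_spec(name) is None]
print("missing:", missing or "none")
```

If anything is reported missing, re-run the corresponding pip3 install line before continuing.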

Let’s Convert

Now, let’s see whether the conversion actually works.
Let’s convert the U²-Net semantic segmentation model, following PINTO’s article.

$ git clone https://github.com/NathanUA/U-2-Net.git
$ cd U-2-Net/
$ mkdir ./saved_models/u2netp/
$ curl -sc /tmp/cookie "https://drive.google.com/uc?export=download&id=1rbSTGKAE-MTxBYHd-51l2hMOQPT_7EPy" > /dev/null
$ CODE="$(awk '/_warning_/ {print $NF}' /tmp/cookie)"
$ curl -Lb /tmp/cookie "https://drive.google.com/uc?export=download&confirm=${CODE}&id=1rbSTGKAE-MTxBYHd-51l2hMOQPT_7EPy" -o saved_models/u2netp/u2netp.pth
$ export PYTHONPATH=/U-2-Net
$ SIZE=512
$ python3 /opt/intel/openvino_2021/deployment_tools/tools/model_downloader/pytorch_to_onnx.py \
--import-module model.u2net \
--model-name U2NETP \
--input-shape 1,3,${SIZE},${SIZE} \
--weights saved_models/u2netp/u2netp.pth \
--output-file u2netp_${SIZE}x${SIZE}.onnx --input-names "x" \
--output-names "a/F.sigmoid(d0)"
<snip...>
ONNX check passed successfully.
$ python3 -m onnxsim u2netp_${SIZE}x${SIZE}.onnx u2netp_${SIZE}x${SIZE}_opt.onnx
<snip...>
Checking 2/3...
Ok!
$ python3 /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py \
--input_model u2netp_${SIZE}x${SIZE}_opt.onnx \
--input_shape [1,3,${SIZE},${SIZE}] \
--output_dir openvino/${SIZE}x${SIZE}/FP32 \
--data_type FP32
<snip...>
[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /U-2-Net/openvino/512x512/FP32/u2netp_512x512_opt.xml
[ SUCCESS ] BIN file: /U-2-Net/openvino/512x512/FP32/u2netp_512x512_opt.bin
[ SUCCESS ] Total execution time: 35.76 seconds.
[ SUCCESS ] Memory consumed: 809 MB.
$ openvino2tensorflow \
--model_path openvino/${SIZE}x${SIZE}/FP32/u2netp_${SIZE}x${SIZE}_opt.xml \
--model_output_path saved_model_${SIZE}x${SIZE} \
--output_saved_model True
<snip...>
All the conversion process is finished! =============================================
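A key part of what openvino2tensorflow does here is converting tensors from the NCHW layout used by PyTorch and OpenVINO to the NHWC layout TensorFlow expects. The layout change itself is just a transpose, sketched below with numpy (the array is a stand-in for real activations):

```python
# NCHW (PyTorch/OpenVINO) -> NHWC (TensorFlow) layout change,
# the core transformation openvino2tensorflow applies to the graph.
import numpy as np

nchw = np.random.rand(1, 3, 512, 512).astype(np.float32)  # batch, channels, H, W
nhwc = np.transpose(nchw, (0, 2, 3, 1))                   # batch, H, W, channels
print(nhwc.shape)
```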
$ tensorflowjs_converter \
--input_format=tf_saved_model \
--output_format=tfjs_graph_model \
--signature_name=serving_default \
--saved_model_tags=serve \
saved_model_${SIZE}x${SIZE} \
tfjs_model_u2netp_${SIZE}x${SIZE}

Use It

Let’s use the converted model to actually segment an image. The result is shown below: the segmentation works, though it is a little blurry in places; I think this can be improved by setting the threshold value well.
I’ve heard from many people that this model is quite lightweight, but in my environment it took 2–3 minutes on the CPU. On the GPU, even a GTX 1660 with 6 GB of RAM ran out of memory. (The same machine also has a 2080 Ti, but Chrome only uses the GTX 1660, and I don’t know how to switch it…)
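The "set the threshold well" step mentioned above can be as simple as binarizing the sigmoid output mask. A sketch with a random array standing in for the model output (the 0.5 cutoff is an arbitrary assumption to tune per image):

```python
# Binarize a U^2-Net-style sigmoid output mask with a threshold.
# A random array stands in for the real model output; 0.5 is arbitrary.
import numpy as np

mask = np.random.rand(512, 512).astype(np.float32)   # sigmoid values in [0, 1]
threshold = 0.5
binary = (mask > threshold).astype(np.uint8) * 255   # 0 = background, 255 = foreground
print(binary.shape, binary.dtype)
```

Raising the threshold trims the blurry edges at the cost of eroding the foreground; lowering it does the opposite.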

I am very thirsty!!

Reference

I used the images from the following pages.

