Migrating deep learning models across platforms

May 02, 2021

Developing, training, and testing deep learning models is usually done within a single platform, whether it is TensorFlow, PyTorch, Keras, MATLAB, or one of the many other platforms out there. In some cases, however, a model must be migrated between platforms. One such example is moving a model from a complete training framework (such as TensorFlow or PyTorch) to a lightweight, inference-only framework (such as TensorRT or OpenCV Deep Neural Networks). Another recurring example comes from transfer learning: in many cases a desired pre-trained model is available only in certain frameworks, and using it (without replacing your existing framework) requires migrating the model between frameworks.

Importing and exporting models boils down to finding a format that both frameworks can read and write. Such formats include the Keras H5 format, the TensorFlow SavedModel format, ONNX, and others. Once a common format is found, any nonstandard layers have to be converted manually so that they are implemented correctly in both frameworks. Finally, network inference is tested in both frameworks. In this blog we will walk through such a migration process, along with code examples.
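
For example, a Keras model can be exported to ONNX and then loaded by an ONNX-capable runtime. The snippet below is a minimal sketch, assuming the third-party tf2onnx converter is installed; the model choice, opset, and file name are illustrative only:

import tensorflow as tf
import tf2onnx  # assumed third-party converter (pip install tf2onnx)

# take a pre-trained Keras model as an example
model = tf.keras.applications.ResNet50(weights='imagenet')
# convert to ONNX and write it to disk (the opset is an illustrative choice)
onnx_model, _ = tf2onnx.convert.from_keras(model, opset=13, output_path='resnet50.onnx')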

Migrating a TensorFlow 2.1 model to OpenCV DNN (Deep Neural Networks):

It is common for production code to run natively in C++, so the ability to perform model inference using OpenCV instead of TensorFlow (TF) results in a much simpler integration and improved run time. Unfortunately, the TF 2.1 default format (the SavedModel format) is currently not supported by OpenCV DNN, but a rather simple solution exists: by converting all of the network weights to constants (freezing), it is possible to save the entire model in a single *.pb file supported by OpenCV DNN (similar to the *.pb format in TF 1.x).

For example, using this procedure we can export a pre-trained ResNet50 model from TF to OpenCV.

(Full code and environment can be found at https://github.com/orinoked1/TF2_to_OpenCV)

 

Loading the pre-trained network:

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2
import os

# get a pre-trained ResNet50 model
base_model = tf.keras.applications.ResNet50(weights='imagenet')
model_input_shape = base_model.input_shape

Saving it in TF native SavedModel format locally:

# save model in SavedModel format
SavedModel_path = os.path.join('model', 'SavedModel_folder')
base_model.save(SavedModel_path, save_format='tf')
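
Before freezing, it can help to confirm the serving signature and its input keyword (used as input_1 in the next step). The snippet below is a minimal sketch that just prints them; the variable names are ours:

# optional: inspect the SavedModel signatures to find the input keyword and output names
inspected_model = tf.saved_model.load(SavedModel_path)
serving_fn = inspected_model.signatures['serving_default']
print(serving_fn.structured_input_signature)  # e.g. shows the 'input_1' keyword argument
print(serving_fn.structured_outputs)          # shows the output tensor spec(s)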

 

Saving it as a frozen graph:

# load saved model and freeze it
loaded_model = tf.saved_model.load(SavedModel_path)
infer = loaded_model.signatures['serving_default']
concrete_function = tf.function(infer).get_concrete_function(input_1=tf.TensorSpec(shape=model_input_shape,
                    dtype=tf.float32))

wrapped_fun = convert_variables_to_constants_v2(concrete_function)
graph_def = wrapped_fun.graph.as_graph_def()
# save again in a *.pb file including weights as constants
model_frozen_pb_full_path = os.path.join('model', 'frozenGraph_folder')
os.makedirs(model_frozen_pb_full_path, exist_ok=True)
with tf.io.gfile.GFile(os.path.join(model_frozen_pb_full_path,'frozenGraph.pb'), 'wb') as f:
    f.write(graph_def.SerializeToString())
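
It can also be useful to print the frozen graph's input and output tensor names, for example to know which layer to request from OpenCV later on. A minimal sketch, reusing the objects created above:

# print the frozen function's input/output tensor names (values depend on the model)
print([t.name for t in wrapped_fun.inputs])
print([t.name for t in wrapped_fun.outputs])
# the node names are also stored in the GraphDef itself
print([node.name for node in graph_def.node][:5])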

 

Then we can perform inference on the same random input in both frameworks (note that in TF the input is channels last, while in OpenCV DNN it is channels first):

import cv2 as cv
import numpy as np
# generate random input
image_shape = np.asarray(model_input_shape[1:])
input_shape = np.insert(image_shape, 0, 1, axis=0)  # add observation dim
test_input_c_last = np.random.standard_normal(input_shape).astype(np.float32)
test_input_c_first = np.moveaxis(test_input_c_last, -1, 1)
# inference in tensorflow
out_tf = base_model.predict(test_input_c_last)
# inference in openCV DNN
net = cv.dnn.readNet(os.path.join(model_frozen_pb_full_path,'frozenGraph.pb'))
net.setInput(test_input_c_first)
out_dnn = net.forward()

 

And finally, compare the two feature vectors:

# compare feature vectors
np.testing.assert_allclose(out_tf, out_dnn, rtol=1e-03, atol=1e-05)
max_abs_diff = np.max(np.abs(out_tf-out_dnn))
max_rel_diff = np.max(np.abs((out_tf-out_dnn)/out_tf))
print('max absolute difference is {:e} max relative difference is {:e}'.format(max_abs_diff,max_rel_diff))

 

Next, we can use OpenCV to perform inference in a C++ project. First, we save the TF model results along with the randomly generated input image:

# save TF result to compare to Cpp
test_image_folder = 'img_folder'
os.makedirs(test_image_folder, exist_ok=True)
np.save(os.path.join(test_image_folder, 'TF_feature_vector.npy'), out_tf)
# export the test image to a Cpp project (using openCV *.xml file format)
# save the float array with shape [rows X cols X channels]
fs = cv.FileStorage(os.path.join(test_image_folder, 'test_img.xml'), cv.FILE_STORAGE_WRITE)
fs.write("test_img", np.squeeze(test_input_c_last))
fs.release()

 

Then we can open a Visual Studio project to perform inference in C++:

#include <windows.h>
#include <opencv2/dnn.hpp>
using namespace cv;
using namespace std;
using namespace dnn;

int main(int argc, char** argv)
{
    // define net path & test file path
    string netPath = "C:\\git_repos\\TF2_to_OpenCV\\py\\model\\frozenGraph_folder\\frozenGraph.pb";
    // load network
    Net resNet50 = readNet(netPath);
    // load test image
    string inputFilePath = "C:\\git_repos\\TF2_to_OpenCV\\py\\img_folder\\test_img.xml";
    FileStorage inputFile;
    Mat inputImage;
    inputFile.open(inputFilePath, FileStorage::READ);
    inputFile["test_img"] >> inputImage;
    inputFile.release();
    // reshape image to blob [rows X cols X channels] -> [observation(1) X channels X rows X cols]
    Mat blob;
    blobFromImage(inputImage, blob, 1.0, Size(inputImage.cols, inputImage.rows), Scalar(0.0), false, false);
    // forward pass
    Mat featureVector;
    resNet50.setInput(blob);
    featureVector = resNet50.forward();
    // save output
    string outputFilePath = "C:\\git_repos\\TF2_to_OpenCV\\py\\img_folder\\cpp_feature_vector.xml";
    FileStorage outputFile;
    outputFile.open(outputFilePath, FileStorage::WRITE);
    outputFile << "feature_vector" << featureVector;
    outputFile.release();
}

And finally, compare the C++ feature vector to the TF feature vector:

import cv2 as cv
import numpy as np
import os

test_image_folder = 'img_folder'

tf_out = np.load(os.path.join(test_image_folder, 'TF_feature_vector.npy'))
openCV_file = cv.FileStorage(os.path.join(test_image_folder, 'cpp_feature_vector.xml'), cv.FILE_STORAGE_READ)
openCV_out = openCV_file.getFirstTopLevelNode().mat()
np.testing.assert_allclose(tf_out, openCV_out, rtol=1e-03, atol=1e-05)
max_abs_diff = np.max(np.abs(tf_out-openCV_out))
max_rel_diff = np.max(np.abs((tf_out-openCV_out)/tf_out))
print('max absolute difference is {:e} max relative difference is {:e}'.format(max_abs_diff,max_rel_diff))

 

