Using TensorFlow from C++ for inference and custom ML integration
TensorFlow offers a C++ API, primarily for inference. While models are almost always trained in Python, the C++ API lets you deploy a trained model efficiently in performance-critical environments such as robotics, embedded systems, or game engines.
To use TensorFlow from C++:

Note: TensorFlow does not distribute precompiled C++ libraries (the prebuilt libtensorflow packages cover only the C API), so you must build libtensorflow_cc from source.
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
./configure   # Follow the prompts to configure CUDA, XLA, etc.
bazel build --config=opt //tensorflow:libtensorflow_cc.so //tensorflow:libtensorflow_framework.so
After the build finishes, the shared libraries will be in bazel-bin/tensorflow.
Link your C++ application with the built shared libraries.
g++ -std=c++17 my_app.cpp -I/path/to/tensorflow/include \
-L/path/to/tensorflow/lib -ltensorflow_cc -ltensorflow_framework
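Before wiring up a real model, it can help to confirm that the headers and both shared libraries link correctly. Below is a minimal smoke test that only constructs a tensor and prints it, so it needs no model; compile it with the same g++ command above (substituting the file name).

#include <iostream>

#include "tensorflow/core/framework/tensor.h"

int main() {
  // Build a 2x2 float tensor and zero it out.
  tensorflow::Tensor t(tensorflow::DT_FLOAT, tensorflow::TensorShape({2, 2}));
  t.flat<float>().setZero();
  // If this prints a tensor summary, the build and link setup is working.
  std::cout << t.DebugString() << std::endl;
  return 0;
}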
Load and run inference on a model saved in the SavedModel format:
#include "tensorflow/cc/saved_model/loader.h"
#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/platform/env.h"
int main() {
tensorflow::SavedModelBundle bundle;
tensorflow::SessionOptions session_options;
tensorflow::RunOptions run_options;
std::string model_dir = "model/my_saved_model";
TF_CHECK_OK(tensorflow::LoadSavedModel(session_options, run_options, model_dir,
{"serve"}, &bundle));
tensorflow::Tensor input_tensor(tensorflow::DT_FLOAT, tensorflow::TensorShape({1, 10}));
auto input_data = input_tensor.flat().data();
for (int i = 0; i < 10; ++i) input_data[i] = i * 1.0f;
std::vector> inputs = {
{"serving_default_input:0", input_tensor}
};
std::vector outputs;
TF_CHECK_OK(bundle.session->Run(inputs, {"StatefulPartitionedCall:0"}, {}, &outputs));
std::cout << "Output: " << outputs[0].DebugString() << std::endl;
return 0;
}
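TF_CHECK_OK aborts the process on failure, which is fine for a demo but rarely what you want in production. A sketch of the alternative, checking tensorflow::Status explicitly (LoadModelOrLog is a hypothetical helper name, not part of the TensorFlow API):

#include <iostream>
#include <string>

#include "tensorflow/cc/saved_model/loader.h"

// Returns true on success; logs the error instead of aborting the process.
bool LoadModelOrLog(const std::string& model_dir, tensorflow::SavedModelBundle* bundle) {
  tensorflow::SessionOptions session_options;
  tensorflow::RunOptions run_options;
  tensorflow::Status status = tensorflow::LoadSavedModel(
      session_options, run_options, model_dir, {"serve"}, bundle);
  if (!status.ok()) {
    // The Status object carries the failure message from the loader.
    std::cerr << "Failed to load model: " << status.ToString() << std::endl;
    return false;
  }
  return true;
}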
Make sure the input and output tensor names match your actual model. You can inspect the model's signatures with the saved_model_cli tool that ships with TensorFlow's Python package:

saved_model_cli show --dir model/my_saved_model --all
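If you would rather not depend on the CLI, the same information is available at runtime from the loaded bundle's MetaGraphDef. A minimal sketch, assuming the bundle was loaded as above, that prints every input and output tensor name per signature:

#include <iostream>

#include "tensorflow/cc/saved_model/loader.h"

void PrintSignatures(const tensorflow::SavedModelBundle& bundle) {
  // The MetaGraphDef maps signature names (e.g. "serving_default") to
  // SignatureDefs listing the tensor names to pass to Session::Run().
  for (const auto& entry : bundle.meta_graph_def.signature_def()) {
    std::cout << "Signature: " << entry.first << std::endl;
    for (const auto& input : entry.second.inputs()) {
      std::cout << "  input  " << input.first << " -> " << input.second.name() << std::endl;
    }
    for (const auto& output : entry.second.outputs()) {
      std::cout << "  output " << output.first << " -> " << output.second.name() << std::endl;
    }
  }
}

The names printed on the right-hand side (such as serving_default_input:0) are the ones Session::Run expects as feed and fetch names.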