TensorFlow C++ Guide

Using TensorFlow from C++ for inference and custom ML integration

Overview

TensorFlow offers a C++ API primarily for inference. While models are typically trained in Python, the C++ API lets you deploy trained models efficiently in performance-critical environments such as robotics, embedded systems, and game engines.

Requirements

To use TensorFlow C++:

  • Bazel (build system)
  • C++17 compiler
  • TensorFlow source code
  • Trained SavedModel or frozen graph

Installation (Building TensorFlow C++)

Note: TensorFlow does not distribute precompiled C++ libraries, so you must build from source.

git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
./configure    # Follow the prompts to configure CUDA, XLA, etc.
bazel build --config=opt //tensorflow:libtensorflow_cc.so

After the build completes, the shared libraries (libtensorflow_cc.so and libtensorflow_framework.so) will be in bazel-bin/tensorflow.

C++ Project Setup

Link your C++ application with the built shared libraries.

g++ -std=c++17 my_app.cpp -o my_app \
    -I/path/to/tensorflow/include \
    -L/path/to/tensorflow/lib -ltensorflow_cc -ltensorflow_framework
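
Before wiring up a model, it helps to verify that the headers and libraries resolve at all. The sketch below is a minimal sanity check: it creates an empty session, which forces a real link against the TensorFlow libraries, and prints the version.

#include <iostream>

#include "tensorflow/core/public/session.h"
#include "tensorflow/core/public/version.h"

int main() {
  // Creating a session exercises the linker against libtensorflow_cc
  // and libtensorflow_framework; a header-only check would not.
  tensorflow::Session* session = nullptr;
  tensorflow::Status status =
      tensorflow::NewSession(tensorflow::SessionOptions(), &session);
  std::cout << "TensorFlow " << TF_VERSION_STRING << ": "
            << (status.ok() ? "session created" : status.ToString())
            << std::endl;
  delete session;
  return status.ok() ? 0 : 1;
}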

Basic Inference Example

Load and run inference on a model saved in the SavedModel format.

#include "tensorflow/cc/saved_model/loader.h"
#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/platform/env.h"

int main() {
  tensorflow::SavedModelBundle bundle;
  tensorflow::SessionOptions session_options;
  tensorflow::RunOptions run_options;

  std::string model_dir = "model/my_saved_model";
  TF_CHECK_OK(tensorflow::LoadSavedModel(session_options, run_options, model_dir,
              {"serve"}, &bundle));

  tensorflow::Tensor input_tensor(tensorflow::DT_FLOAT, tensorflow::TensorShape({1, 10}));
  auto input_data = input_tensor.flat().data();
  for (int i = 0; i < 10; ++i) input_data[i] = i * 1.0f;

  std::vector> inputs = {
    {"serving_default_input:0", input_tensor}
  };

  std::vector outputs;
  TF_CHECK_OK(bundle.session->Run(inputs, {"StatefulPartitionedCall:0"}, {}, &outputs));

  std::cout << "Output: " << outputs[0].DebugString() << std::endl;
  return 0;
}

Make sure the input/output tensor names match your actual model; you can inspect them with the saved_model_cli show tool that ships with the TensorFlow Python package.
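
You can also inspect the signatures from C++ once the bundle is loaded, since the bundle's MetaGraphDef carries them. A minimal sketch (PrintSignatures is a hypothetical helper, not a TensorFlow API):

#include <iostream>

#include "tensorflow/cc/saved_model/loader.h"

// Hypothetical helper: dumps every signature with the tensor names
// that Session::Run expects for its inputs and fetch names.
void PrintSignatures(const tensorflow::SavedModelBundle& bundle) {
  for (const auto& sig : bundle.meta_graph_def.signature_def()) {
    std::cout << "Signature: " << sig.first << std::endl;
    for (const auto& input : sig.second.inputs())
      std::cout << "  input  " << input.first << " -> " << input.second.name() << std::endl;
    for (const auto& output : sig.second.outputs())
      std::cout << "  output " << output.first << " -> " << output.second.name() << std::endl;
  }
}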

Tips and Troubleshooting

  1. Use saved_model_cli to inspect the model signature, e.g. saved_model_cli show --dir model/my_saved_model --all.
  2. If your application uses protobuf directly, make sure its version matches the one TensorFlow was built against; mismatches lead to link or runtime errors.
  3. The C++ API has no eager execution like Python; graphs are either loaded from a trained model or built statically with the C++ ops API (see the sketch below).
  4. For integration into large projects, use CMake with the Bazel-built shared libraries as imported targets.
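
Regarding point 3: static graph construction is available through the C++ ops API. A minimal sketch that builds and runs a small graph, multiplying two constant matrices:

#include <iostream>
#include <vector>

#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/core/framework/tensor.h"

int main() {
  // Build a graph that multiplies a 1x2 matrix by a 2x1 matrix.
  tensorflow::Scope root = tensorflow::Scope::NewRootScope();
  auto a = tensorflow::ops::Const(root, {{1.0f, 2.0f}});
  auto b = tensorflow::ops::Const(root, {{3.0f}, {4.0f}});
  auto product = tensorflow::ops::MatMul(root, a, b);

  // ClientSession runs the graph held by the scope.
  tensorflow::ClientSession session(root);
  std::vector<tensorflow::Tensor> outputs;
  TF_CHECK_OK(session.Run({product}, &outputs));
  std::cout << outputs[0].DebugString() << std::endl;  // expect [[11]]
  return 0;
}

The graph is still static once built; what the C++ API lacks compared to Python is eager, op-by-op execution.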