This document covers the C++ inference system for PaddleOCR, including the CMake build configuration, dependency management, and compilation process. The C++ inference system provides high-performance OCR inference capabilities by directly interfacing with the Paddle Inference API, offering superior performance compared to Python-based inference for production deployments.
For Python-based inference patterns, see page 5.1. For high-performance optimization techniques applicable to both Python and C++, see page 5.2. For service deployment options, see page 5.4.
Sources: deploy/cpp_infer/CMakeLists.txt:1-278
The C++ inference system uses CMake as its build system generator, with the main configuration located at deploy/cpp_infer/CMakeLists.txt. The build system produces an executable named ppocr that can perform OCR inference operations on images and documents.
Sources: deploy/cpp_infer/CMakeLists.txt:1-11, deploy/cpp_infer/CMakeLists.txt:14-17
The build system exposes several compile-time options to customize the build for different deployment scenarios.
| Option | Description | Default | Impact |
|---|---|---|---|
| WITH_MKL | Use Intel MKL for BLAS operations instead of OpenBLAS | ON | Enables optimized CPU matrix operations via MKL |
| WITH_GPU | Enable GPU inference support | OFF | Requires CUDA and cuDNN libraries |
| WITH_STATIC_LIB | Link against static libraries | ON | Produces a standalone executable without runtime dependencies |
| USE_FREETYPE | Enable FreeType for text rendering | OFF | Requires OpenCV compiled with the opencv_freetype module |
Sources: deploy/cpp_infer/CMakeLists.txt:9-12
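A minimal sketch of how such compile-time options are typically declared in CMake follows; the option names and defaults match the table above, but the help strings are illustrative and may differ from the actual CMakeLists.txt:

```cmake
# Hedged sketch of the compile-time option declarations.
option(WITH_MKL        "Use Intel MKL for BLAS operations instead of OpenBLAS" ON)
option(WITH_GPU        "Enable GPU inference support (requires CUDA/cuDNN)"    OFF)
option(WITH_STATIC_LIB "Link against the static Paddle Inference library"      ON)
option(USE_FREETYPE    "Enable FreeType text rendering via opencv_freetype"    OFF)
```

Each option can be overridden on the command line, e.g. `cmake .. -DWITH_GPU=ON`.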
The build system supports two BLAS backends, selected by the WITH_MKL option: Intel MKL (WITH_MKL=ON, the default) and OpenBLAS (WITH_MKL=OFF).
The MKL path is configured at deploy/cpp_infer/CMakeLists.txt:136-161 with platform-specific library paths:
- Windows: ${PADDLE_LIB}/third_party/install/mklml/lib/mklml.lib
- Linux: ${PADDLE_LIB}/third_party/install/mklml/lib/libmklml_intel.so

Sources: deploy/cpp_infer/CMakeLists.txt:29-31, deploy/cpp_infer/CMakeLists.txt:136-161
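A hedged sketch of the platform-specific MKL library selection; the actual logic at deploy/cpp_infer/CMakeLists.txt:136-161 also handles additional runtime libraries such as libiomp5:

```cmake
# Sketch: select the MKL library path per platform when WITH_MKL is enabled.
if(WITH_MKL)
  set(MATH_LIB_PATH "${PADDLE_LIB}/third_party/install/mklml")
  include_directories("${MATH_LIB_PATH}/include")
  if(WIN32)
    set(MATH_LIB "${MATH_LIB_PATH}/lib/mklml.lib")
  else()
    set(MATH_LIB "${MATH_LIB_PATH}/lib/libmklml_intel.so")
  endif()
endif()
```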
The build system manages dependencies through multiple mechanisms: Paddle Inference bundled libraries, system-provided libraries (OpenCV), and auto-downloaded third-party packages.
Sources: deploy/cpp_infer/CMakeLists.txt:109-134
The build system configures include paths for both the Paddle Inference API header (${PADDLE_LIB}/paddle/include) and all bundled third-party headers (protobuf, glog, gflags, xxhash, zlib, onnxruntime, paddle2onnx, yaml-cpp, openvino, tbb, boost, eigen3). Corresponding link directories are set for each of those libraries as well.
See deploy/cpp_infer/CMakeLists.txt109-134 for the full list of include_directories and link_directories calls.
Sources: deploy/cpp_infer/CMakeLists.txt:109-134
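The pattern is a series of paired include_directories/link_directories calls, one pair per bundled library. A hedged sketch showing a few of them (the full list at deploy/cpp_infer/CMakeLists.txt:109-134 covers all the libraries named above):

```cmake
# Sketch: include and link directories for Paddle Inference and a subset
# of its bundled third-party libraries.
include_directories("${PADDLE_LIB}/paddle/include")
include_directories("${PADDLE_LIB}/third_party/install/protobuf/include")
include_directories("${PADDLE_LIB}/third_party/install/glog/include")
include_directories("${PADDLE_LIB}/third_party/install/gflags/include")

link_directories("${PADDLE_LIB}/paddle/lib")
link_directories("${PADDLE_LIB}/third_party/install/protobuf/lib")
link_directories("${PADDLE_LIB}/third_party/install/glog/lib")
link_directories("${PADDLE_LIB}/third_party/install/gflags/lib")
```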
The build system automatically downloads and decompresses required third-party packages:
The download logic is implemented at deploy/cpp_infer/CMakeLists.txt:229-249. Packages are downloaded from:
https://paddle-model-ecology.bj.bcebos.com/paddlex/cpp/libs/${PKG}.tgz
After download, the packages are integrated:
- abseil-cpp: compiled as a subdirectory at deploy/cpp_infer/CMakeLists.txt:251
- clipper_ver6.4.2/cpp: compiled as a subdirectory at deploy/cpp_infer/CMakeLists.txt:252
- nlohmann: header-only JSON library

Sources: deploy/cpp_infer/CMakeLists.txt:228-258
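A hedged sketch of the download-and-decompress pattern; the function name and error handling are illustrative, and the actual logic at deploy/cpp_infer/CMakeLists.txt:229-249 may differ:

```cmake
# Sketch: download a third-party package tarball and unpack it into the
# build tree, skipping the download if the package already exists.
function(download_and_extract PKG)
  set(URL "https://paddle-model-ecology.bj.bcebos.com/paddlex/cpp/libs/${PKG}.tgz")
  set(ARCHIVE "${CMAKE_CURRENT_BINARY_DIR}/${PKG}.tgz")
  if(NOT EXISTS "${CMAKE_CURRENT_BINARY_DIR}/${PKG}")
    file(DOWNLOAD "${URL}" "${ARCHIVE}" SHOW_PROGRESS)
    execute_process(COMMAND ${CMAKE_COMMAND} -E tar xzf "${ARCHIVE}"
                    WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}")
  endif()
endfunction()

download_and_extract(abseil-cpp)
add_subdirectory("${CMAKE_CURRENT_BINARY_DIR}/abseil-cpp" abseil-build)
```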
The build system adapts to Windows and Linux platforms with different compiler flags and library naming conventions.
| Aspect | Windows | Linux |
|---|---|---|
| Compiler Flags | /bigobj /MT /openmp | -g -O3 -fopenmp -std=c++11 |
| Static Library Prefix | (default) | (empty string) |
| Library Extension | .lib / .dll | .a / .so |
| Math Library Threading | /openmp (MSVC) | -fopenmp (GCC) |
| Runtime Library | /MT (static), /MD (dynamic) | (N/A) |
Windows Configuration:
deploy/cpp_infer/CMakeLists.txt:70-87 sets the Windows-specific flags. The /bigobj flag is required for large object files, and /MT specifies static runtime linking.
Linux Configuration:
deploy/cpp_infer/CMakeLists.txt:88-95 sets the Linux-specific flags: -g for debug symbols, -O3 for optimization, -fopenmp for OpenMP threading, and -std=c++11 for the language standard.
Sources: deploy/cpp_infer/CMakeLists.txt:70-95
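The platform split can be sketched as a single conditional; this is illustrative only, using exactly the flags listed in the table above:

```cmake
# Sketch: platform-specific compiler flags.
if(WIN32)
  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /bigobj /MT /openmp")
else()
  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -g -O3 -fopenmp -std=c++11")
endif()
```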
OpenCV configuration differs by platform:
Sources: deploy/cpp_infer/CMakeLists.txt:42-67
When WITH_GPU=ON, the build system requires CUDA and cuDNN library paths.
Key behavioral differences between platforms (deploy/cpp_infer/CMakeLists.txt:97-107):
| Behavior | Linux | Windows |
|---|---|---|
| CUDA_LIB required | Yes | Yes |
| CUDNN_LIB validated | Yes (fatal error if missing) | No (used in linking but not validated) |
| add_definitions(-DWITH_GPU) | Yes | No |
GPU library linking is configured at deploy/cpp_infer/CMakeLists.txt:210-220. On Linux, libcudart and libcudnn are linked as shared libraries using ${CMAKE_SHARED_LIBRARY_SUFFIX}. On Windows, cudart, cublas, and cudnn are linked as static libraries using ${CMAKE_STATIC_LIBRARY_SUFFIX}.
Sources: deploy/cpp_infer/CMakeLists.txt:97-107, deploy/cpp_infer/CMakeLists.txt:210-220
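A hedged sketch of the GPU linking logic described above; the DEPS variable name is an assumption, and the actual code at deploy/cpp_infer/CMakeLists.txt:210-220 may link additional libraries:

```cmake
# Sketch: link CUDA/cuDNN libraries per platform when WITH_GPU is enabled.
if(WITH_GPU)
  if(WIN32)
    set(DEPS ${DEPS}
        "${CUDA_LIB}/cudart${CMAKE_STATIC_LIBRARY_SUFFIX}"
        "${CUDA_LIB}/cublas${CMAKE_STATIC_LIBRARY_SUFFIX}"
        "${CUDNN_LIB}/cudnn${CMAKE_STATIC_LIBRARY_SUFFIX}")
  else()
    add_definitions(-DWITH_GPU)
    set(DEPS ${DEPS}
        "${CUDA_LIB}/libcudart${CMAKE_SHARED_LIBRARY_SUFFIX}"
        "${CUDNN_LIB}/libcudnn${CMAKE_SHARED_LIBRARY_SUFFIX}")
  endif()
endif()
```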
The build process follows a standard CMake workflow with platform-specific configuration.
The deploy/cpp_infer/tools/build.sh script provides a template for building the project:
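A hedged sketch of what such a build script typically looks like; all paths are placeholders that must point at local installations, and the actual deploy/cpp_infer/tools/build.sh may differ:

```shell
#!/bin/bash
# Sketch of a typical build script. OPENCV_DIR, LIB_DIR, CUDA_LIB_DIR and
# CUDNN_LIB_DIR are placeholders.
OPENCV_DIR=/path/to/opencv
LIB_DIR=/path/to/paddle_inference
CUDA_LIB_DIR=/usr/local/cuda/lib64
CUDNN_LIB_DIR=/usr/lib/x86_64-linux-gnu

mkdir -p build
cd build
cmake .. \
    -DPADDLE_LIB=${LIB_DIR} \
    -DWITH_MKL=ON \
    -DWITH_GPU=OFF \
    -DWITH_STATIC_LIB=ON \
    -DOPENCV_DIR=${OPENCV_DIR} \
    -DCUDA_LIB=${CUDA_LIB_DIR} \
    -DCUDNN_LIB=${CUDNN_LIB_DIR}
make -j$(nproc)
```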
Sources: deploy/cpp_infer/tools/build.sh:1-23
Sources: deploy/cpp_infer/CMakeLists.txt:1-278, deploy/cpp_infer/tools/build.sh:1-23
The build system compiles C++ source files into a single executable.
The executable target is defined at deploy/cpp_infer/CMakeLists.txt:263-266:
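A hedged sketch of the target definition; the source file variable and dependency list are assumptions, as the actual calls at deploy/cpp_infer/CMakeLists.txt:263-266 may differ:

```cmake
# Sketch: define the ppocr executable and link its dependencies.
set(DEMO_NAME "ppocr")
add_executable(${DEMO_NAME} ${SRCS})
target_link_libraries(${DEMO_NAME} ${DEPS})
```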
Here ${DEMO_NAME} is "ppocr", as defined at deploy/cpp_infer/CMakeLists.txt:4.
Sources: deploy/cpp_infer/CMakeLists.txt:263-266, deploy/cpp_infer/CMakeLists.txt:255-257
The build system supports both static and shared library linking modes, controlled by the WITH_STATIC_LIB option.
The library selection logic is at deploy/cpp_infer/CMakeLists.txt:164-180. CMake suffix variables (${CMAKE_STATIC_LIBRARY_SUFFIX} and ${CMAKE_SHARED_LIBRARY_SUFFIX}) are used rather than hardcoded extensions, so the actual file extensions (.lib/.a for static, .dll/.so for shared) are resolved automatically per platform.
| Mode | Windows | Linux |
|---|---|---|
| Static | paddle_inference.lib | libpaddle_inference.a |
| Shared | paddle_inference.dll | libpaddle_inference.so |
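The selection can be sketched with a single conditional; the DEPS variable name is an assumption, and the actual logic at deploy/cpp_infer/CMakeLists.txt:164-180 may differ:

```cmake
# Sketch: pick the static or shared Paddle Inference library using
# platform-resolved prefix/suffix variables.
if(WITH_STATIC_LIB)
  set(DEPS "${PADDLE_LIB}/paddle/lib/${CMAKE_STATIC_LIBRARY_PREFIX}paddle_inference${CMAKE_STATIC_LIBRARY_SUFFIX}")
else()
  set(DEPS "${PADDLE_LIB}/paddle/lib/${CMAKE_SHARED_LIBRARY_PREFIX}paddle_inference${CMAKE_SHARED_LIBRARY_SUFFIX}")
endif()
```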
Static Library Benefits:

- Produces a standalone executable with no runtime dependency on the Paddle Inference shared library; the -DSTATIC_LIB definition is added at deploy/cpp_infer/CMakeLists.txt:81

Shared Library Benefits:

- Smaller executable, with the Paddle Inference library loaded at runtime
Sources: deploy/cpp_infer/CMakeLists.txt:79-83, deploy/cpp_infer/CMakeLists.txt:164-180
On Windows builds with MKL support, the build system automatically copies required DLL files to the output directory.
The post-build step at deploy/cpp_infer/CMakeLists.txt:268-277 copies MKL runtime libraries next to the built executable using copy_if_different. The three DLLs copied are:
| DLL | Source Path |
|---|---|
| mklml.dll | ${PADDLE_LIB}/third_party/install/mklml/lib/ |
| libiomp5md.dll | ${PADDLE_LIB}/third_party/install/mklml/lib/ |
| mkldnn.dll | ${PADDLE_LIB}/third_party/install/onednn/lib/ |
Each DLL is copied to both the build directory and the release/ subdirectory. This ensures the executable can locate its runtime dependencies without manually setting PATH.
Sources: deploy/cpp_infer/CMakeLists.txt:268-277
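A hedged sketch of such a post-build copy step; the actual command at deploy/cpp_infer/CMakeLists.txt:268-277 also copies each DLL into the release/ subdirectory:

```cmake
# Sketch: after building on Windows with MKL, copy the runtime DLLs next to
# the executable so it can start without PATH changes.
if(WIN32 AND WITH_MKL)
  add_custom_command(TARGET ${DEMO_NAME} POST_BUILD
    COMMAND ${CMAKE_COMMAND} -E copy_if_different
            "${PADDLE_LIB}/third_party/install/mklml/lib/mklml.dll"
            "$<TARGET_FILE_DIR:${DEMO_NAME}>"
    COMMAND ${CMAKE_COMMAND} -E copy_if_different
            "${PADDLE_LIB}/third_party/install/mklml/lib/libiomp5md.dll"
            "$<TARGET_FILE_DIR:${DEMO_NAME}>"
    COMMAND ${CMAKE_COMMAND} -E copy_if_different
            "${PADDLE_LIB}/third_party/install/onednn/lib/mkldnn.dll"
            "$<TARGET_FILE_DIR:${DEMO_NAME}>")
endif()
```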
The following table summarizes all build configuration variables:
| CMake Variable | Purpose | Required | Default |
|---|---|---|---|
| PADDLE_LIB | Path to Paddle Inference library | Yes | None |
| OPENCV_DIR | Path to OpenCV installation | Yes | None |
| WITH_MKL | Enable Intel MKL math library | No | ON |
| WITH_GPU | Enable GPU inference support | No | OFF |
| WITH_STATIC_LIB | Use static library linking | No | ON |
| USE_FREETYPE | Enable FreeType text rendering | No | OFF |
| CUDA_LIB | Path to CUDA libraries | If WITH_GPU=ON | None |
| CUDNN_LIB | Path to cuDNN libraries | If WITH_GPU=ON (Linux) | None |
Example Configuration:
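A hedged example of a CPU-only configure step; all paths are placeholders for local installations:

```shell
# Example: configure a CPU-only static build from a build/ directory.
cmake .. \
    -DPADDLE_LIB=/path/to/paddle_inference \
    -DOPENCV_DIR=/path/to/opencv \
    -DWITH_MKL=ON \
    -DWITH_GPU=OFF \
    -DWITH_STATIC_LIB=ON
```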
Sources: deploy/cpp_infer/CMakeLists.txt:9-17, deploy/cpp_infer/tools/build.sh:1-23