Introduction to Apache TVM


    Edge deployment of deep models has been a popular topic since the rapid advancement of deep learning. However, because the training of deep models relies heavily on Python-based frameworks with utilities for backpropagation, such as PyTorch and TensorFlow, there remains a gap between the Pythonic implementation of models and edge deployment, which usually depends on pure C code. While chip manufacturers such as ST and Renesas ship their own C code compilers for deep models, tuned for their own products, these utilities are generally hard to use across different edge platforms and for increasingly complex models.

    This is where Apache TVM comes in: an open-source, universal deep learning compiler that aims to generate deployable models everywhere. It follows the principles of Python-first development and universal deployment: users first write and train their models in Python, then convert them to whatever format they need with TVM.

    In this article, we present a beginner's tutorial on TVM, covering its installation, basic concepts, and a toy simulation.

    Installation

    Notice: this blog assumes readers are familiar with Anaconda, PyPI, basic shell commands and CMake syntax.

    Preparation

    To some extent, installing TVM can be the most frustrating stage. TVM's official tutorial provides three approaches to deploy the software locally: 1) building from source, 2) installing via PyPI and 3) deploying Docker containers. For beginners, it is highly recommended to install TVM directly via PyPI:

    
    pip install apache-tvm
                

    This is clean and beginner-friendly, but you will not find a suitable prebuilt version unless you are on x86 Linux/macOS. x86 Windows users can deploy a Linux virtual machine with WSL and install TVM in the virtual system. For those working on non-x86 platforms, deploying a Docker container can also be a good choice. For everyone else, the only (potentially) feasible approach is to build TVM from source.

    To build TVM, readers can refer to TVM's official tutorial; however, the process may be difficult due to the lack of key details. So next, I replicate the installation of TVM from source step by step, hoping that my blog will be of help.

    Before we get into the installation: the version of TVM that I built is 0.23.0 (Nov 3, 2025). If you are working with another version, different problems might occur during the installation. The first thing to do is to verify that the build dependencies are properly installed, including Git, CMake and a C++ compiler that supports C++17. Anaconda is also strongly recommended, to avoid corrupting your local Python environment during the installation. Windows users can additionally install Visual Studio, which provides a compiler tuned specifically for Windows (MSVC). Make sure the Git, CMake and Anaconda executables are added to your PATH environment variable, and restart the terminal for the changes to take effect:

    
    conda init
    conda create -n tvm -c conda-forge "llvmdev>=15" libxml2-devel zlib python=3.11
    conda activate tvm
                

    Remember to install both libxml2-devel and zlib, which are required but not mentioned in TVM's official tutorial. Next, clone the TVM repository from GitHub. Remember to add the --recursive flag, as many third-party dependencies must be built at the same time:

    
    git clone --recursive https://github.com/apache/tvm tvm
                

    Build

    Edit the CMake configuration: the next step is to configure the compilation options. Enter the source directory and copy the configuration template to the build directory:

    
    cd tvm 
    mkdir -p build && cd build 
    cp ../cmake/config.cmake . 
                

    Add the following recommended flags to the end of the CMake configuration:

    
    set(CMAKE_BUILD_TYPE RelWithDebInfo)
    set(USE_LLVM "llvm-config")  # no need to add --ignore-libllvm --link-static
    set(HIDE_PRIVATE_SYMBOLS ON)
                

    For those who are working with CUDA, or who need BLAS support, adjust the following flags accordingly (change OFF to ON):

    
    set(USE_CUDA    OFF)
    set(USE_METAL   OFF)
    set(USE_VULKAN  OFF)
    set(USE_OPENCL  OFF)
    set(USE_CUBLAS  OFF)
    set(USE_CUDNN   OFF)
    set(USE_CUTLASS OFF)
                

    Build TVM: now we can start building the program. First, run CMake with the library and include paths specified manually. For Linux users, run

    
    sudo cmake .. \
      -DCMAKE_LIBRARY_PATH="$CONDA_PREFIX/lib" \
      -DCMAKE_INCLUDE_PATH="$CONDA_PREFIX/include"
                

    For Windows users, the above paths should be changed to

    
    cmake .. `
      -DCMAKE_LIBRARY_PATH="$env:CONDA_PREFIX/Library/lib" `
      -DCMAKE_INCLUDE_PATH="$env:CONDA_PREFIX/Library/include"
    

    Next, build TVM (specify sudo if needed):

    
    cmake --build . --config RelWithDebInfo --parallel 8 --clean-first
                

    Adjust --config accordingly if you chose a different build type. It is also recommended not to use $(nproc) to specify the number of worker threads; 4 or 8 is already ideal for compiling TVM. During this process, you might encounter errors about missing key libraries such as XML2 or LLVM. In that case, review the previous steps and make sure those libraries were installed with Anaconda. To check whether a library has been installed, use:

    
    ls $CONDA_PREFIX/lib | grep <lib_name>  # Linux
    ls $env:CONDA_PREFIX/Library/lib | grep <lib_name>  # Windows
                
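    As a cross-platform alternative to the shell commands above, you can glob the conda library directories from Python. The following is a small sketch of my own (the lib_installed helper and its pattern are not part of TVM or conda):

```python
import glob
import os

def lib_installed(prefix: str, name: str) -> bool:
    """Return True if a library file named lib<name>* exists under prefix.

    Checks both <prefix>/lib (Linux/macOS conda layout) and
    <prefix>/Library/lib (Windows conda layout).
    """
    for subdir in ("lib", os.path.join("Library", "lib")):
        pattern = os.path.join(prefix, subdir, f"lib{name}*")
        if glob.glob(pattern):
            return True
    return False

if __name__ == "__main__":
    # Example: check the active conda env for libxml2 and zlib before building
    prefix = os.environ.get("CONDA_PREFIX", "")
    for name in ("xml2", "z"):
        print(name, lib_installed(prefix, name))
```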

    With that, TVM itself is built. The next step is to install its Python FFI package, tvm-ffi. This is done in a separate directory, where that package is defined:

    
    cd ../3rdparty/tvm-ffi
    pip install .
                

    If you are working on macOS, you might encounter issues such as undefined symbols, or the compiler trying to compile source files tailored for Windows. For these problems, please refer to the "Possible issues on macOS" section below.

    Final stage: go into the python directory and install the full API with pip:

    
    cd ../..  # use /path/to/tvm/root 
    cd python
    pip install .  # no -e flag for TVM users
                

    Next, copy the runtime libraries to the conda root:

    
    cp -a ./build/RelWithDebInfo/. "$CONDA_PREFIX"
                

    or if you are working on Windows:

    
    :: for cmd (note %CONDA_PREFIX%, not $env:CONDA_PREFIX)
    xcopy ".\build\RelWithDebInfo\*" "%CONDA_PREFIX%" /E /I /Y
    # for PowerShell
    Copy-Item `
      -Path ".\build\RelWithDebInfo\*" `
      -Destination "$env:CONDA_PREFIX" `
      -Recurse -Force
                

    Replace RelWithDebInfo with the name of your build configuration (such as Release or Debug, if that is what you chose). This fixes the problem of runtime libraries not being found when importing TVM in Python. Additionally, although not explicitly mentioned in the official tutorial, numpy and psutil are required by TVM:

    
    pip install numpy psutil
                

    Verify the installation: once all of the above completes, we can import TVM in Python:

    
    python -c "import tvm; print(tvm.__version__);"
                

    If you see 0.23.dev0 (the version I installed), or another version string indicating a successful build, the installation is complete.

    Extra tip: to show highlighted logs for TVM, install the following packages:

    
    pip install "Pygments>=2.4.0" packaging --upgrade --user
                

    Possible issues on macOS: it is worth noting that when installing tvm-ffi on macOS, two errors might occur: 1) the build might fail when compiling backtrace_win.cc, and 2) pip might report an undefined symbol __hash_memory on newer versions of macOS. The first error is caused by unconditionally including the Windows-specific source file in tvm-ffi's CMakeLists.txt:

    
    set(_tvm_ffi_objs_sources
        "${CMAKE_CURRENT_SOURCE_DIR}/src/ffi/backtrace.cc"  # <- here
        "${CMAKE_CURRENT_SOURCE_DIR}/src/ffi/backtrace_win.cc"  # <- here
        ...
    )
                

    To fix 1), comment out these two lines and add the right source file behind platform guards:

    
    set(_tvm_ffi_objs_sources
    #   "${CMAKE_CURRENT_SOURCE_DIR}/src/ffi/backtrace.cc"  # <- comment out
    #   "${CMAKE_CURRENT_SOURCE_DIR}/src/ffi/backtrace_win.cc"  # <- comment out
        ...
    )
    if (WIN32)  # add Windows tailored source file here
        list(APPEND _tvm_ffi_objs_sources 
             "${CMAKE_CURRENT_SOURCE_DIR}/src/ffi/backtrace_win.cc")
    else ()  # for other platforms
        list(APPEND _tvm_ffi_objs_sources 
             "${CMAKE_CURRENT_SOURCE_DIR}/src/ffi/backtrace.cc")
    endif ()
                

    Problem 2) is a bit trickier to solve, since we need an implementation of __hash_memory whose signature exactly matches what libc++ expects. Create a new source file src/ffi/hash_memory.cc:

    
    // hash_memory.cc — libc++ ABI-compatible
    #include <cstddef>
    #include <cstdint>
    namespace std {
    inline namespace __1 {
    __attribute__((visibility("default")))  // display the symbol in library
    std::size_t __hash_memory(const void* ptr, std::size_t len) noexcept {
        // FNV-1a 64-bit
        const unsigned char* p = static_cast<const unsigned char*>(ptr);
        std::size_t h = 14695981039346656037ULL;  // FNV-1a 64-bit offset basis
        for (std::size_t i = 0; i < len; ++i) {
            h ^= static_cast<std::size_t>(p[i]);
            h *= 1099511628211ULL;  // FNV-1a 64-bit prime
        }
        }
        return h;
    }
    } // namespace __1
    } // namespace std
                

    Then add hash_memory.cc to the CMake source list:

    
    list(APPEND _tvm_ffi_objs_sources 
             "${CMAKE_CURRENT_SOURCE_DIR}/src/ffi/hash_memory.cc")
                

    These tricks allow tvm-ffi to be installed correctly on macOS. Afterwards, run pip install . in tvm-ffi and continue the installation (you still need to install tvm itself).
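    As a sanity check on the hashing shim, here is a minimal pure-Python sketch of 64-bit FNV-1a using the standard parameters (offset basis 14695981039346656037, prime 1099511628211); the fnv1a_64 name is my own, and this is only a reference for the algorithm, not part of TVM:

```python
def fnv1a_64(data: bytes) -> int:
    """64-bit FNV-1a over a byte string (reference implementation)."""
    h = 14695981039346656037  # FNV-1a 64-bit offset basis
    for b in data:
        h ^= b
        h = (h * 1099511628211) % (1 << 64)  # multiply by FNV prime, wrap to 64 bits
    return h

if __name__ == "__main__":
    # Hashing an empty input returns the offset basis by definition
    print(hex(fnv1a_64(b"")))   # -> 0xcbf29ce484222325
    print(hex(fnv1a_64(b"apache-tvm")))
```

    The `% (1 << 64)` emulates the unsigned 64-bit overflow that the C++ multiplication performs implicitly.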

    Quick Reference

    1. Install Apache TVM with Docker (see tutorial).
    2. Build Apache TVM from source (see tutorial).