Inferflow: an Efficient and Highly Configurable Inference Engine for Large Language Models

Shuming Shi1 , Enbo Zhao1, Deng Cai1, Leyang Cui1, Xinting Huang1, Huayang Li1,2*
1 Tencent AI Lab 2 Nara Institute of Science and Technology
*Work was done during an internship at Tencent AI Lab.

Abstract

We present Inferflow, an efficient and highly configurable inference engine for large language models (LLMs). With Inferflow, users can serve most common transformer models by simply modifying a few lines in the corresponding configuration files, without writing a single line of source code. Compared with most existing inference engines, Inferflow has several key features. First, by implementing a modular framework of atomic building blocks and technologies, Inferflow is compositionally generalizable to new models. Second, 3.5-bit quantization is introduced in Inferflow as a tradeoff between 3-bit and 4-bit quantization. Third, hybrid model partitioning for multi-GPU inference is introduced in Inferflow to achieve a better balance between inference speed and throughput than the commonly adopted partition-by-layer and partition-by-tensor strategies.

Inferflow

We list major requirements for an LLM inference engine and possible technologies to address them.

(Figure: Implementation status of key technologies in Inferflow)

Main Features

  • Extensible and highly configurable: The typical way to serve a new model with Inferflow is to edit a model specification file, rather than adding or editing source code. Inferflow implements a modular framework of atomic building blocks and technologies, making it compositionally generalizable to new models. A new model can be served by Inferflow as long as the atomic building blocks and technologies it uses are already "known" to Inferflow.
  • 3.5-bit quantization: Inferflow implements 2-bit, 3-bit, 3.5-bit, 4-bit, 5-bit, 6-bit, and 8-bit quantization. Among these schemes, 3.5-bit quantization is a new one introduced by Inferflow (see the packing sketch after this list).
  • Hybrid model partitioning for multi-GPU inference: Inferflow supports multi-GPU inference with three model partitioning strategies to choose from: partition-by-layer, partition-by-tensor, and hybrid partitioning. Hybrid partitioning is seldom supported by other inference engines (see the partitioning sketch after this list).
  • Wide file format support (and safe loading of pickle data): Inferflow supports loading models in multiple file formats directly, without relying on an external converter. Supported formats include pickle, safetensors, llama.cpp gguf, etc. It is well known that reading pickle files with Python code poses security risks. By implementing a simplified pickle parser in C++, Inferflow supports safely loading models from pickle data.
  • Wide network type support: Supporting three types of transformer models: decoder-only models, encoder-only models, and encoder-decoder models.
  • GPU/CPU hybrid inference: Supporting GPU-only, CPU-only, and GPU/CPU hybrid inference.
  • Configurable network modules: Many key modules of the model network can be specified by configuration, including layer normalization functions, activation functions, position embedding algorithms, tensor names, etc.
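
To make the 3.5-bit idea concrete: one natural way to spend 3.5 bits per weight is to let every two weights share 7 bits. If each weight is quantized to one of 11 levels (more than the 8 levels of 3-bit, fewer than the 16 levels of 4-bit), a pair of weights can be encoded as a single value below 11 × 11 = 121, which fits in 7 bits. The C++ sketch below illustrates this packing for blocks of 16 weights (8 pairs = 56 bits = 7 bytes); the block size, level count, and symmetric per-block scaling are illustrative assumptions, not necessarily Inferflow's exact 3.5-bit format.

    // Minimal sketch of 3.5-bit packing: two weights share 7 bits.
    // Assumptions (not Inferflow's actual layout): 16 weights per block,
    // 11 quantization levels per weight, symmetric per-block scale.
    #include <algorithm>
    #include <cmath>
    #include <cstdint>

    struct Block16_3p5 {
        float scale;        // per-block scale factor
        uint8_t data[7];    // 8 pairs * 7 bits = 56 bits = 3.5 bits/weight
    };

    Block16_3p5 quantize_block(const float *w) {
        Block16_3p5 blk{};
        float max_abs = 0.f;
        for (int i = 0; i < 16; ++i) max_abs = std::max(max_abs, std::fabs(w[i]));
        blk.scale = max_abs > 0.f ? max_abs / 5.f : 1.f;  // levels 0..10, centered at 5

        uint64_t bits = 0;
        for (int p = 0; p < 8; ++p) {
            auto level = [&](float x) {
                int q = (int)std::lround(x / blk.scale) + 5;
                return (uint64_t)std::clamp(q, 0, 10);
            };
            // Encode the pair as level_a * 11 + level_b, which is at most 120 < 128.
            uint64_t code = level(w[2 * p]) * 11 + level(w[2 * p + 1]);
            bits |= code << (7 * p);
        }
        for (int i = 0; i < 7; ++i) blk.data[i] = (uint8_t)(bits >> (8 * i));
        return blk;
    }

    void dequantize_block(const Block16_3p5 &blk, float *out) {
        uint64_t bits = 0;
        for (int i = 0; i < 7; ++i) bits |= (uint64_t)blk.data[i] << (8 * i);
        for (int p = 0; p < 8; ++p) {
            uint64_t code = (bits >> (7 * p)) & 0x7F;
            out[2 * p]     = ((int)(code / 11) - 5) * blk.scale;
            out[2 * p + 1] = ((int)(code % 11) - 5) * blk.scale;
        }
    }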

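The difference between the three partitioning strategies can also be sketched as a simple placement plan: GPUs are organized into groups; whole layers are divided among the groups (the partition-by-layer part), and within a group each layer's tensors are sliced across that group's GPUs (the partition-by-tensor part). The C++ sketch below illustrates only this planning idea; the names and logic are hypothetical and do not reflect Inferflow's actual interfaces.

    // Minimal sketch of a hybrid partitioning plan (illustrative names only):
    // GPUs form groups; layers are split across groups, and each layer's
    // tensors are split across the GPUs inside the owning group.
    #include <cstdio>
    #include <vector>

    struct LayerPlacement {
        int layer;                 // transformer layer index
        int group;                 // GPU group that owns this layer
        std::vector<int> gpus;     // GPUs holding tensor slices of this layer
    };

    std::vector<LayerPlacement> plan_hybrid(int num_layers, int num_gpus, int group_size) {
        int num_groups = num_gpus / group_size;
        std::vector<LayerPlacement> plan;
        for (int layer = 0; layer < num_layers; ++layer) {
            // Partition-by-layer: contiguous layer ranges go to consecutive groups.
            int group = layer * num_groups / num_layers;
            LayerPlacement p{layer, group, {}};
            // Partition-by-tensor: slice this layer's tensors across the group's GPUs.
            for (int i = 0; i < group_size; ++i)
                p.gpus.push_back(group * group_size + i);
            plan.push_back(p);
        }
        return plan;
    }

    int main() {
        // Example: 32 layers on 4 GPUs arranged as 2 groups of 2 GPUs each.
        for (const auto &p : plan_hybrid(32, 4, 2))
            std::printf("layer %2d -> group %d (gpus %d, %d)\n",
                        p.layer, p.group, p.gpus[0], p.gpus[1]);
        return 0;
    }

Setting group_size to 1 degenerates into pure partition-by-layer, while a single group containing all GPUs degenerates into pure partition-by-tensor; hybrid partitioning sits between the two, trading pipeline depth for tensor-level parallelism within each group.
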
Comparison

Model | New Model Support | Supported File Formats | Network Structures | Quantization Bits | Hybrid Parallelism for Multi-GPU Inference | Programming Languages
Huggingface Transformers | Adding/editing source code | pickle (unsafe), safetensors | decoder-only, encoder-decoder, encoder-only | 4b, 8b | | Python
vLLM | Adding/editing source code | pickle (unsafe), safetensors | decoder-only | 4b, 8b | | Python
TensorRT-LLM | Adding/editing source code | | decoder-only, encoder-decoder, encoder-only | 4b, 8b | | C++, Python
DeepSpeed-MII | Adding/editing source code | pickle (unsafe), safetensors | decoder-only | - | | Python
llama.cpp | Adding/editing source code | gguf | decoder-only | 2b, 3b, 4b, 5b, 6b, 8b | | C/C++
llama2.c | Adding/editing source code | llama2.c | decoder-only | - | | C
LMDeploy | Adding/editing source code | pickle (unsafe), TurboMind | decoder-only | 4b, 8b | | C++, Python
Inferflow | Editing configuration files | pickle (safe), safetensors, gguf, llama2.c | decoder-only, encoder-decoder, encoder-only | 2b, 3b, 3.5b, 4b, 5b, 6b, 8b | Yes | C++

Comparison between Inferflow and other inference engines

Getting Started

Get started by exploring our GitHub repository.

BibTeX

    @misc{shi2024inferflow,
        title={Inferflow: an Efficient and Highly Configurable Inference Engine for Large Language Models},
        author={Shuming Shi and Enbo Zhao and Deng Cai and Leyang Cui and Xinting Huang and Huayang Li},
        year={2024},
        eprint={2401.08294},
        archivePrefix={arXiv},
        primaryClass={cs.CL}
    }
    

Acknowledgement

The CPU inference part of Inferflow is based on the amazing ggml library and llama.cpp. The FP16 data type in the CPU-only version of Inferflow is from the Half-precision floating-point library. We express our sincere gratitude to the maintainers and implementers of these codebases and tools.

This website template is borrowed from the PandaGPT project and the Textbind project; it is adapted from Nerfies, which is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.