Welcome to the Anakin GitHub.
Anakin is a cross-platform, high-performance inference engine, originally developed by Baidu engineers and deployed at scale in industrial products.
Please refer to our release announcement to track the latest features of Anakin.
- Flexibility: Anakin supports a wide range of neural network architectures and different hardware platforms. It is easy to run Anakin on GPU, x86, and ARM.
- High performance: To make full use of the hardware, we optimize forward prediction at several levels:
  - Automatic graph fusion. The goal of every performance optimization under a given algorithm is to keep the ALU as busy as possible; operator fusion effectively reduces memory accesses and keeps the ALU fed (see the fusion sketch after this list).
  - Memory reuse. Forward prediction is a one-way computation, so memory can be reused between the inputs and outputs of different operators, reducing overall memory overhead (see the buffer-reuse sketch after this list).
  - Assembly-level optimization. Saber, the DNN library underlying Anakin, is deeply optimized at the assembly level. For a performance comparison of Anakin, TensorRT, and TensorFlow Lite, please refer to the benchmark tests.
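To make the fusion idea concrete, here is a minimal, self-contained C++ sketch (not Anakin's actual code): fusing a scale, a bias, and a ReLU into one loop replaces three full passes over memory with one, so each element is loaded and stored exactly once.

```cpp
// Illustrative sketch of operator fusion, not Anakin's implementation.
#include <algorithm>
#include <vector>

// Unfused: three passes, each reading and writing the whole buffer.
void scale_bias_relu_unfused(std::vector<float>& x, float s, float b) {
    for (float& v : x) v *= s;                 // pass 1: scale
    for (float& v : x) v += b;                 // pass 2: bias
    for (float& v : x) v = std::max(v, 0.0f);  // pass 3: ReLU
}

// Fused: one pass; each element is loaded and stored exactly once,
// so the ALU spends less time waiting on memory.
void scale_bias_relu_fused(std::vector<float>& x, float s, float b) {
    for (float& v : x) v = std::max(v * s + b, 0.0f);
}
```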
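And a minimal sketch of the memory-reuse idea, again illustrative rather than Anakin's actual allocator: because forward prediction flows one way, a linear chain of operators can ping-pong between just two buffers instead of allocating a fresh output per operator. The `Op` alias and `run_chain` helper are hypothetical, and the sketch assumes each operator writes an output of the same size as its input.

```cpp
// Illustrative sketch of memory reuse in a one-way (feed-forward) graph.
#include <functional>
#include <utility>
#include <vector>

// An operator reads its input buffer and writes its output buffer.
using Op = std::function<void(const std::vector<float>&, std::vector<float>&)>;

// Run a linear chain of operators using only two buffers in total,
// regardless of how many operators the chain contains.
void run_chain(const std::vector<Op>& ops, std::vector<float>& data) {
    std::vector<float> a = data;        // current input
    std::vector<float> b(data.size());  // current output, reused every step
    for (const Op& op : ops) {
        op(a, b);          // read from a, write into b
        std::swap(a, b);   // O(1) swap: the old input's storage is reused
    }
    data = a;              // output of the last operator
}
```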
It is recommended to check out the Docker installation guide before looking into the build-from-source guide.
For ARM, please refer to the run on arm guide.
It is recommended to check out the benchmark README.
We provide English and Chinese documentation.
- Developer guide: If you want to learn more about Anakin's internals and help improve it, please refer to how to add custom devices and how to add custom device operators.
- User guide: You can learn how the project works, the C++ interface description, and code examples from here. You can also learn about the model converter here. A brief sketch of the C++ inference flow follows below.
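As a taste of what the user guide covers, here is a hedged sketch of the C++ inference flow, adapted from published Anakin examples. The template parameters (`NV` for NVIDIA GPU, `AK_FLOAT`, `Precision::FP32`), the method names, and the model path `model.anakin.bin` are assumptions that may differ between versions; treat the user guide as authoritative.

```cpp
// Hedged sketch of Anakin's C++ inference flow: names follow published
// examples, but exact signatures may differ between Anakin versions.
#include "graph.h"
#include "net.h"

using namespace anakin;

int main() {
    // NV targets NVIDIA GPUs; AK_FLOAT / Precision::FP32 select FP32 inference.
    graph::Graph<NV, AK_FLOAT, Precision::FP32> graph;
    graph.load("model.anakin.bin");  // placeholder path to a converted model
    graph.Optimize();                // applies graph fusion and other rewrites

    // Build an executable net from the optimized graph and run one pass.
    Net<NV, AK_FLOAT, Precision::FP32> net(graph, true);
    // ... fill input tensors (e.g. via net.get_in("input_0")) before running ...
    net.prediction();
    return 0;
}
```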
We appreciate your contributions! You are welcome to submit questions and bug reports as GitHub Issues.
Anakin is provided under the Apache-2.0 license.