Commit 71c0bbb (1 parent: 32a83da) — hardware plugin blog post (#57)
Authored by MengqingCao and youkaichao (per commit sign-offs).

_posts/2025-05-12-hardware-plugin.md

Lines changed: 120 additions & 0 deletions
---
layout: post
title: "Introducing vLLM Hardware Plugin, Best Practice from Ascend NPU"
author: "The Ascend Team on vLLM"
image: /assets/logos/vllm-logo-only-light.png
---
Since December 2024, through the joint efforts of the vLLM community and the Ascend team on vLLM, we have completed the [Hardware Pluggable RFC](https://github.com/vllm-project/vllm/issues/11162). This proposal allows hardware to be integrated into vLLM in a decoupled manner, enabling rapid and modular support for different hardware platforms.

---
## Why vLLM Hardware Plugin?

vLLM already supports multiple backends. However, as the number of backends continues to grow, several challenges have emerged:

- **Increased Code Complexity**: Each hardware backend has its own `Executor`, `Worker`, `Runner`, and `Attention` components. This increases the complexity of the vLLM codebase, with backend-specific code scattered throughout the project.
- **High Maintenance Costs**: Maintaining backends is costly, not only for backend developers but also for the vLLM community. Community contributor resources are scarce, which makes it hard to add new features efficiently when backend maintainers are unavailable.
- **Lack of Extensibility**: Although vLLM's layered design implements backends through `Executor`, `Worker`, `Runner`, and `Attention`, supporting new hardware often requires invasive modifications or patching rather than dynamic registration. This makes adding new backends cumbersome.

Recognizing the need for a flexible and modular approach to integrating hardware backends, we proposed hardware plugins as a solution:

- **Decoupled Codebase**: Hardware backend plugin code remains independent, keeping the vLLM core cleaner.
- **Reduced Maintenance Burden**: vLLM developers can focus on generic features without being overwhelmed by backend-specific implementation differences.
- **Faster, More Independent Integration**: New backends can be integrated with less effort and can evolve independently.

---
## What is the vLLM Hardware Plugin?

Before introducing the vLLM Hardware Plugin, let's first look at two prerequisite RFCs:

- [[RFC] vLLM Plugin System](https://github.com/vllm-project/vllm/issues/7131): This RFC introduces a plugin-based approach to various customization requirements, allowing users to define custom models, executors, schedulers, and more.
- [[RFC] Make vLLM Device-Agnostic for Diverse Hardware Support](https://github.com/vllm-project/vllm/issues/9268) (and [vllm-project/vllm#6080](https://github.com/vllm-project/vllm/pull/6080)): This RFC introduces the **platform** submodule, which centralizes hardware-related implementations to reduce conditional logic in the main codebase and lays the foundation for modularization.

Based on these RFCs, we proposed [[RFC] Hardware Pluggable](https://github.com/vllm-project/vllm/issues/11162), which integrates the `Platform` module into vLLM as a plugin. As part of this work, we also refactored `Executor`, `Worker`, `ModelRunner`, `AttentionBackend`, and `Communicator` to support hardware plugins more flexibly.

The vLLM community has now implemented the `Platform` module introduced in the RFC, and its functionality is validated through the [vllm-project/vllm-ascend](https://github.com/vllm-project/vllm-ascend) and [vllm-project/vllm-spyre](https://github.com/vllm-project/vllm-spyre) projects. Using this plugin mechanism, we have successfully integrated vLLM with the Ascend NPU and IBM Spyre backends.

---
## How to Integrate a New Backend via the vLLM Hardware Plugin Mechanism

This section dives into integrating a new backend via the hardware plugin mechanism, from both the developer and the user perspective.
### Developer Perspective

To integrate a new backend into vLLM using the hardware plugin mechanism, follow these steps:

#### Step 1: Create a New Project and Initialize the Platform

Start by creating a Python project for the new backend and adding a `platform.py` file. Then, import the `Platform` class from `vllm.platforms` and implement the required attributes and methods.

You can refer to [`platform.py`](https://github.com/vllm-project/vllm-ascend/blob/72a43a61d8d2193dddbfcc60578fd642008225a5/vllm_ascend/platform.py#L52) in the vLLM Ascend project for an example.
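As a shape-only sketch of this step (the class and attribute names below are illustrative, and a trivial stand-in base class is stubbed in so the snippet runs without vLLM installed; a real plugin subclasses `vllm.platforms.Platform` instead):

```python
# Stand-in for vllm.platforms.Platform, stubbed only so this sketch is
# self-contained; a real plugin imports the class from vLLM instead.
class Platform:
    device_name: str = ""
    device_type: str = ""

# Hypothetical out-of-tree platform for a fictional "my_npu" device.
class MyPlatform(Platform):
    device_name = "my_npu"
    device_type = "npu"

    @classmethod
    def get_device_name(cls, device_id: int = 0) -> str:
        # Real implementations may dispatch on device_id; this sketch
        # returns a fixed name.
        return cls.device_name
```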
#### Step 2: Implement Custom Worker, Model Runner, Attention Backend, and Communicator Modules

Depending on the new backend's requirements, implement the following modules:

```python
from vllm.worker.worker_base import WorkerBase
from vllm.worker.model_runner_base import ModelRunnerBase
from vllm.attention.backends.abstract import AttentionBackend
from vllm.distributed.device_communicators.base_communicator import CommunicatorBase
```

Each of these classes has a corresponding base class in vLLM. Again, you can refer to [vLLM Ascend's implementation](https://github.com/vllm-project/vllm-ascend/tree/main/vllm_ascend) for an example.
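For instance, a backend worker typically overrides hardware-specific hooks such as device initialization. The sketch below is illustrative only: `WorkerBase` is stubbed locally so the snippet runs standalone, whereas a real plugin would subclass the base class imported from vLLM.

```python
# Stand-in for vllm.worker.worker_base.WorkerBase, stubbed only so the
# sketch is self-contained; a real plugin subclasses the vLLM class.
class WorkerBase:
    def init_device(self) -> None:
        raise NotImplementedError

# Hypothetical backend worker: override the hooks the hardware needs
# while inheriting generic behavior from the base class.
class MyWorker(WorkerBase):
    def __init__(self, device_id: int) -> None:
        self.device_id = device_id
        self.device_ready = False

    def init_device(self) -> None:
        # Real code would e.g. select the accelerator card and set up
        # memory pools here.
        self.device_ready = True
```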
#### Step 3: Register the Plugin

Register the plugin in `setup.py` using Python's entry-point mechanism:

```python
setup(
    entry_points={'vllm.platform_plugins': ["{your_platform_name} = {code_path}:{register_function}"]}
)
```

- `{your_platform_name}`: The name of the new backend (can be arbitrary).
- `{code_path}`: The path to the main Python module.
- `{register_function}`: The register function, which returns the path of the `Platform` class defined in Step 1.

Refer to [`setup.py`](https://github.com/vllm-project/vllm-ascend/blob/72a43a61d8d2193dddbfcc60578fd642008225a5/setup.py#L102) in vLLM Ascend for a practical example.
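The register function itself can be a one-liner. A hypothetical example (the module and class names are made up; an entry point such as `my_platform = my_plugin:register` would point at it):

```python
# Hypothetical register function referenced by an entry point like
# "my_platform = my_plugin:register". It returns the fully qualified
# path of the Platform subclass created in Step 1.
def register() -> str:
    return "my_plugin.platform.MyPlatform"
```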
---

### User Perspective

Users only need to install vLLM and the backend plugin before running. Taking [vllm-ascend](https://github.com/vllm-project/vllm-ascend) as an example:

```bash
pip install vllm vllm-ascend
```

On startup, you will see logs like the following, which indicate that the backend plugin is working properly:

```bash
INFO 02-06 15:49:01 __init__.py:30] Available plugins for group vllm.platform_plugins:
INFO 02-06 15:49:01 __init__.py:32] name=ascend, value=vllm_ascend:register
… …
INFO 02-06 15:49:01 __init__.py:44] plugin ascend loaded.
INFO 02-06 15:49:01 __init__.py:181] Platform plugin ascend is activated
```
---

## What's Next?

Moving forward, we will continue collaborating with developers in the vLLM community on the following:

1. Continuous enhancements to the V1 Engine and VLMs.
2. Expanding plugin support to more modules and features, such as the scheduler, graph mode, and custom operators.
3. Better user experience and higher performance.
4. Maintenance and enhancement of a stable plugin architecture for the supported hardware platforms.

We encourage everyone to try out this new feature! If you have any questions, join the [vLLM Slack](https://slack.vllm.ai) and participate in the **#sig-extensible-hardware** channel for discussions. 🚀

## Acknowledgements

This flexible hardware backend plugin mechanism would not have been possible without the efforts of many vLLM contributors. We are deeply grateful to the vLLM maintainers [Kaichao You](https://github.com/youkaichao), [Simon Mo](https://github.com/simon-mo), [Cyrus Leung](https://github.com/DarkLight1337), [Robert Shaw](https://github.com/robertgshaw2-redhat), [Michael Goin](https://github.com/mgoin), and [Jie Li](https://github.com/jeejeelee) for the related refactoring, deep discussions, and quick reviews; to [Xiyuan Wang](https://github.com/wangxiyuan), [Shanshan Shen](https://github.com/shen-shanshan), [Chenguang Li](https://github.com/noemotiovon), and [Mengqing Cao](https://github.com/MengqingCao) from the Ascend team on vLLM for the mechanism design and implementation; to [Joe Runde](https://github.com/joerunde) and [Yannick Schnider](https://github.com/yannicks1) from the Spyre team on vLLM for the pluggable scheduler design and implementation; and to other contributors, including [yancong](https://github.com/ice-tong) for the extendable quantization method design and implementation and [Aviv Keshet](https://github.com/akeshet) for the extendable `SamplingParams`.
