Hi vLLM developers @WoosukKwon @zhuohan123. I noticed in the roadmap that there is a plan to find alternative frameworks for distributed inference. I think RPyC may be a good option for the single-machine scenario. RPyC is a concise RPC framework with essentially no extra dependencies, and its APIs are quite handy. It is adopted by lightllm, a project parallel to vLLM, as its distributed backend, and this is how it uses RPyC.
As mentioned in some issues, Ray brings heavy overhead for single-machine serving, and RPyC may be a lightweight alternative. If you are interested, I can help add RPyC support: create a new PR and implement it.
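To illustrate the kind of lightweight single-machine RPC being proposed, here is a minimal sketch using only the Python standard library (`multiprocessing.connection`) as a stand-in for RPyC; the function names and port are illustrative, not vLLM or RPyC APIs.

```python
# Minimal single-machine RPC sketch using the standard library as a
# stand-in for a lightweight framework like RPyC. ADDRESS, AUTHKEY,
# serve_once, and call_add are all hypothetical names for illustration.
import threading
import time
from multiprocessing.connection import Listener, Client

ADDRESS = ("localhost", 6001)  # arbitrary local port for the sketch
AUTHKEY = b"demo"

def serve_once():
    # "Worker" side: accept one connection, answer one request, exit.
    with Listener(ADDRESS, authkey=AUTHKEY) as listener:
        with listener.accept() as conn:
            method, args = conn.recv()   # receive a pickled request
            if method == "add":
                conn.send(sum(args))     # send back a pickled result

def call_add(values):
    # "Engine" side: start the worker, issue one remote call, collect result.
    t = threading.Thread(target=serve_once)
    t.start()
    conn = None
    for _ in range(50):  # wait briefly for the listener to come up
        try:
            conn = Client(ADDRESS, authkey=AUTHKEY)
            break
        except ConnectionRefusedError:
            time.sleep(0.05)
    conn.send(("add", list(values)))
    result = conn.recv()
    conn.close()
    t.join()
    return result
```

This only shows the shape of a low-overhead per-call path; RPyC additionally provides transparent object proxying (netrefs), which is what makes its API handy for this use case.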
I tried substituting Ray with RPyC in #1318, but wasn't able to get better performance. There are certainly more things one could do to speed up RPyC, though I don't really have time to work on it. I feel there are a few quirks in RPyC that make it slower, e.g. the API it provides for sending Python objects between processes, and a tcp_nodelay setting that I had to patch in/expose.
I think it could still be worth looking into if people have the time.
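For context on the tcp_nodelay setting mentioned above, here is what the raw socket option looks like; how RPyC exposes (or fails to expose) it internally is not shown here, this is only the underlying mechanism.

```python
# TCP_NODELAY disables Nagle's algorithm, so small RPC messages are sent
# immediately instead of being coalesced into larger segments. For chatty
# request/response traffic, leaving it off adds latency per round trip.
import socket

def make_low_latency_socket():
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Send small writes right away rather than buffering them.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return sock

sock = make_low_latency_socket()
print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))
sock.close()
```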