Over 10 years of experience in server architecture design and optimization, proficient in networking, caching, and memory
- xiaomi
- Beijing
- UTC +08:00
Pinned
- vllm-project/vllm: A high-throughput and memory-efficient inference and serving engine for LLMs
- flashinfer-ai/flashinfer: FlashInfer, a kernel library for LLM serving
- pytorch/pytorch: Tensors and dynamic neural networks in Python with strong GPU acceleration
- sgl-project/sglang: SGLang, a fast serving framework for large language models and vision language models