kvcache.ai (@kvcache-ai)

KVCache.AI is a joint research project between MADSys and top industry collaborators, focusing on efficient LLM serving.

Pinned repositories

  1. Mooncake (Public)

    Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI.

    C++ · 4.2k stars · 413 forks

  2. ktransformers (Public)

    A Flexible Framework for Experiencing Cutting-edge LLM Inference Optimizations

    Python · 15.3k stars · 1.1k forks

  3. TrEnv-X (Public)

    Go · 64 stars · 1 fork

Repositories (9 total)

  • sglang (Public, forked from sgl-project/sglang)

    SGLang is a fast serving framework for large language models and vision language models.

    Python · 2 stars · Apache-2.0 license · 3,236 forks · 0 open issues · 1 open pull request · Updated Oct 31, 2025
  • Mooncake (Public)

    Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI.

    C++ · 4,180 stars · Apache-2.0 license · 413 forks · 172 open issues (9 need help) · 49 open pull requests · Updated Oct 31, 2025
  • ktransformers (Public)

    A Flexible Framework for Experiencing Cutting-edge LLM Inference Optimizations

    Python · 15,258 stars · Apache-2.0 license · 1,100 forks · 631 open issues · 18 open pull requests · Updated Oct 31, 2025
  • sglang_awq (Public, forked from sgl-project/sglang)

    SGLang is a fast serving framework for large language models and vision language models.

    Python · 0 stars · Apache-2.0 license · 3,233 forks · 0 open issues · 0 open pull requests · Updated Oct 31, 2025
  • TrEnv-X (Public)

    Go · 64 stars · Apache-2.0 license · 1 fork · 0 open issues · 0 open pull requests · Updated Sep 15, 2025
  • sglang-npu (Public, forked from sgl-project/sglang)

    SGLang is a fast serving framework for large language models and vision language models.

    Python · 0 stars · Apache-2.0 license · 3,236 forks · 0 open issues · 0 open pull requests · Updated Aug 12, 2025
  • DeepEP_fault_tolerance (Public, forked from deepseek-ai/DeepEP)

    DeepEP: an efficient expert-parallel communication library that supports fault tolerance

    CUDA · 2 stars · MIT license · 976 forks · 0 open issues · 0 open pull requests · Updated Jul 31, 2025
  • custom_flashinfer (Public, forked from flashinfer-ai/flashinfer)

    FlashInfer: Kernel Library for LLM Serving

    CUDA · 5 stars · Apache-2.0 license · 550 forks · 0 open issues · 0 open pull requests · Updated Jul 24, 2025
  • vllm (Public, forked from vllm-project/vllm)

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 14 stars · Apache-2.0 license · 11,031 forks · 0 open issues · 0 open pull requests · Updated Mar 27, 2025
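
As a concrete illustration of the serving engines listed above, here is a minimal sketch of offline batched inference using the Python API of upstream vLLM, the project this vllm fork tracks. It is an illustrative example rather than code from the kvcache-ai fork; the model name, prompts, and sampling settings are placeholders.

    from vllm import LLM, SamplingParams

    # Placeholder model; any Hugging Face causal LM that vLLM supports works here.
    llm = LLM(model="facebook/opt-125m")

    prompts = [
        "KV cache reuse matters for LLM serving because",
        "Disaggregated prefill and decode means",
    ]
    sampling = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

    # generate() batches the prompts and returns one RequestOutput per prompt.
    for output in llm.generate(prompts, sampling):
        print(output.prompt, "->", output.outputs[0].text)

The same prompts-in, completions-out pattern is what the serving-layer projects above (SGLang, Mooncake, ktransformers) aim to make more efficient, for example through KV cache reuse and transfer across requests.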
