From 02fdaa9d8ec6ec1b5506308dc8d7ac95eec261d5 Mon Sep 17 00:00:00 2001
From: Saliya Ekanayake
Date: Fri, 12 Jul 2024 14:21:26 -0700
Subject: [PATCH] fix minor typo

---
 README.md             | 2 +-
 docs/source/index.rst | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 53285356adc75..6927c4a0dc71b 100644
--- a/README.md
+++ b/README.md
@@ -58,7 +58,7 @@ vLLM is flexible and easy to use with:
 
 - Seamless integration with popular Hugging Face models
 - High-throughput serving with various decoding algorithms, including *parallel sampling*, *beam search*, and more
-- Tensor parallelism and pipieline parallelism support for distributed inference
+- Tensor parallelism and pipeline parallelism support for distributed inference
 - Streaming outputs
 - OpenAI-compatible API server
 - Support NVIDIA GPUs, AMD CPUs and GPUs, Intel CPUs and GPUs, PowerPC CPUs
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 174d91b8d6a01..2691805ed97a4 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -38,7 +38,7 @@ vLLM is flexible and easy to use with:
 
 * Seamless integration with popular HuggingFace models
 * High-throughput serving with various decoding algorithms, including *parallel sampling*, *beam search*, and more
-* Tensor parallelism and pipieline parallelism support for distributed inference
+* Tensor parallelism and pipeline parallelism support for distributed inference
 * Streaming outputs
 * OpenAI-compatible API server
 * Support NVIDIA GPUs and AMD GPUs
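
For context, the line this patch corrects describes vLLM's tensor-parallel and pipeline-parallel support for distributed inference. Below is a minimal sketch, not part of the patch, of how one might enable both from the offline Python API. It assumes a vLLM version from around this patch's date (v0.5.x, where pipeline parallelism was newly added and may additionally require the Ray distributed executor backend), a host with 4 GPUs (2 tensor-parallel shards x 2 pipeline stages), and a placeholder model name chosen purely for illustration.

# Illustrative sketch only; model name and parallel sizes are assumptions,
# not taken from the patch.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # placeholder model
    tensor_parallel_size=2,     # shard each layer's weights across 2 GPUs
    pipeline_parallel_size=2,   # split the layer stack into 2 pipeline stages
)

sampling = SamplingParams(temperature=0.8, max_tokens=64)
outputs = llm.generate(["What is pipeline parallelism?"], sampling)
print(outputs[0].outputs[0].text)

The two strategies compose: tensor parallelism splits each weight matrix across devices within a stage, while pipeline parallelism assigns contiguous groups of layers to different devices, so the total GPU count is the product of the two sizes.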