llama_index serving integration documentation (#6973)
Co-authored-by: pavanmantha <[email protected]>
pavanjava and pavanmantha authored Aug 14, 2024
1 parent f55a9ae commit 22b39e1
Showing 2 changed files with 28 additions and 0 deletions.
1 change: 1 addition & 0 deletions docs/source/serving/integrations.rst
@@ -12,3 +12,4 @@ Integrations
   deploying_with_lws
   deploying_with_dstack
   serving_with_langchain
+  serving_with_llamaindex
27 changes: 27 additions & 0 deletions docs/source/serving/serving_with_llamaindex.rst
@@ -0,0 +1,27 @@
.. _run_on_llamaindex:

Serving with llama_index
============================

vLLM is also available via `llama_index <https://github.com/run-llama/llama_index>`_.

To install LlamaIndex, run:

.. code-block:: console

    $ pip install llama-index-llms-vllm -q

To run inference on one or more GPUs, use the ``Vllm`` class from LlamaIndex.

.. code-block:: python

    from llama_index.llms.vllm import Vllm

    llm = Vllm(
        model="microsoft/Orca-2-7b",
        tensor_parallel_size=4,  # shard the model across 4 GPUs
        max_new_tokens=100,
        # extra arguments forwarded to the underlying vLLM engine
        vllm_kwargs={"swap_space": 1, "gpu_memory_utilization": 0.5},
    )
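
Once constructed, the ``Vllm`` object can be used like any other LlamaIndex LLM. A minimal usage sketch (the model choice and prompt here are illustrative, assuming a single-GPU setup):

.. code-block:: python

    from llama_index.llms.vllm import Vllm

    # Single-GPU instantiation; any vLLM-supported Hugging Face model works here.
    llm = Vllm(model="microsoft/Orca-2-7b", max_new_tokens=100)

    # complete() runs one synchronous generation and returns a CompletionResponse.
    response = llm.complete("What is vLLM?")
    print(response.text)
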
Please refer to this `tutorial <https://docs.llamaindex.ai/en/latest/examples/llm/vllm/>`_ for more details.
