Pushing changes to GitHub Pages.
docs-build committed Mar 26, 2024
1 parent 5e0daa4 commit f7bd214
Showing 2 changed files with 1 addition and 43 deletions.
42 changes: 0 additions & 42 deletions main/index.html
@@ -295,48 +295,6 @@ <h2>Developer RAG Examples<a class="headerlink" href="#developer-rag-examples" t
</section>
<section id="open-source-connectors">
<h2>Open Source Connectors<a class="headerlink" href="#open-source-connectors" title="Permalink to this headline"></a></h2>
<p>These open source connectors for NVIDIA-hosted and self-hosted API endpoints are maintained and tested by NVIDIA engineers.</p>
<table class="colwidths-auto docutils align-default">
<thead>
<tr class="row-odd"><th class="head"><p>Name</p></th>
<th class="head"><p>Framework</p></th>
<th class="head"><p>Chat</p></th>
<th class="head"><p>Text Embedding</p></th>
<th class="head"><p>Python</p></th>
<th class="head"><p>Description</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p><a class="reference external" href="https://python.langchain.com/docs/integrations/providers/nvidia">NVIDIA AI Foundation Endpoints</a></p></td>
<td><p><a class="reference external" href="https://www.langchain.com/">LangChain</a></p></td>
<td><p><a class="reference external" href="https://python.langchain.com/docs/integrations/chat/nvidia_ai_endpoints">YES</a></p></td>
<td><p><a class="reference external" href="https://python.langchain.com/docs/integrations/text_embedding/nvidia_ai_endpoints">YES</a></p></td>
<td><p><a class="reference external" href="https://pypi.org/project/langchain-nvidia-ai-endpoints/">YES</a></p></td>
<td><p>Easy access to NVIDIA-hosted models. Supports chat, embedding, code generation, SteerLM, multimodal, and RAG.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference external" href="https://github.com/langchain-ai/langchain/tree/master/libs/partners/nvidia-trt">NVIDIA Triton + TensorRT-LLM</a></p></td>
<td><p><a class="reference external" href="https://www.langchain.com/">LangChain</a></p></td>
<td><p><a class="reference external" href="https://github.com/langchain-ai/langchain-nvidia/blob/main/libs/trt/docs/llms.ipynb">YES</a></p></td>
<td><p><a class="reference external" href="https://github.com/langchain-ai/langchain-nvidia/blob/main/libs/trt/docs/llms.ipynb">YES</a></p></td>
<td><p><a class="reference external" href="https://pypi.org/project/langchain-nvidia-trt/">YES</a></p></td>
<td><p>This connector lets LangChain interact remotely with a Triton Inference Server over gRPC or HTTP for optimized LLM inference.</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference external" href="https://docs.llamaindex.ai/en/stable/examples/llm/nvidia_triton.html">NVIDIA Triton Inference Server</a></p></td>
<td><p><a class="reference external" href="https://www.llamaindex.ai/">LlamaIndex</a></p></td>
<td><p>YES</p></td>
<td><p>YES</p></td>
<td><p>NO</p></td>
<td><p>Triton Inference Server provides API access to hosted LLMs over gRPC.</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference external" href="https://docs.llamaindex.ai/en/stable/examples/llm/nvidia_tensorrt.html">NVIDIA TensorRT-LLM</a></p></td>
<td><p><a class="reference external" href="https://www.llamaindex.ai/">LlamaIndex</a></p></td>
<td><p>YES</p></td>
<td><p>YES</p></td>
<td><p>NO</p></td>
<td><p>TensorRT-LLM provides a Python API to build TensorRT engines with state-of-the-art optimizations for LLM inference on NVIDIA GPUs.</p></td>
</tr>
</tbody>
</table>
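As a minimal sketch of the first connector in the table above, the snippet below shows how the LangChain `langchain-nvidia-ai-endpoints` package might be used for chat and text embedding. It assumes `pip install langchain-nvidia-ai-endpoints` and an `NVIDIA_API_KEY` environment variable; the helper names and the default model string are illustrative placeholders, not part of the documented API.

```python
def ask_nvidia(prompt: str, model: str = "mixtral_8x7b") -> str:
    """Send a chat prompt to an NVIDIA-hosted model via LangChain.

    Sketch only: the model name is a placeholder; authentication happens
    through the NVIDIA_API_KEY environment variable.
    """
    # Imported lazily so this sketch loads even without the package installed.
    from langchain_nvidia_ai_endpoints import ChatNVIDIA

    llm = ChatNVIDIA(model=model)
    return llm.invoke(prompt).content  # invoke() returns a message; .content is the reply text


def embed_docs(texts: list[str]) -> list[list[float]]:
    """Embed a batch of documents with the NVIDIA text-embedding connector."""
    from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings

    return NVIDIAEmbeddings().embed_documents(texts)
```

Keeping the imports inside the functions means the module can be loaded, and the connector swapped for a self-hosted Triton endpoint, without pulling in the package at import time.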
<div class="toctree-wrapper compound">
</div>
<div class="toctree-wrapper compound">
