
Merge pull request #7 from qdrant/dev
mcp-server-qdrant 0.5.2
kacperlukawski authored Dec 13, 2024
2 parents 701db26 + 5011332 commit a05ce72
Showing 5 changed files with 106 additions and 11 deletions.
42 changes: 42 additions & 0 deletions .github/workflows/pypi-publish.yaml
@@ -0,0 +1,42 @@
# This workflow will upload a Python Package using Twine when a release is created
# For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries

# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.

name: PyPI Publish

on:
  workflow_dispatch:
  push:
    # Pattern matched against refs/tags
    tags:
      - 'v*' # Push events to every version tag

jobs:
  deploy:

    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2

      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.10.x'

      - name: Install dependencies
        run: |
          python -m pip install uv
          uv sync
      - name: Build package
        run: uv build

      - name: Publish package
        run: uv publish
        env:
          UV_PUBLISH_TOKEN: ${{ secrets.PYPI_API_TOKEN }}
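
The workflow above publishes only on tags matching the `v*` filter. As a rough sanity check, Python's `fnmatch` approximates that glob (the `triggers_publish` helper is hypothetical, and GitHub's tag-filter rules differ in edge cases):

```python
from fnmatch import fnmatch

def triggers_publish(tag: str) -> bool:
    # Approximates the `tags: ['v*']` filter in the workflow above.
    # Hypothetical helper; not part of the repository.
    return fnmatch(tag, "v*")
```

So a push of tag `v0.5.2` would trigger the publish job, while a bare `0.5.2` would not.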
37 changes: 35 additions & 2 deletions README.md
@@ -1,4 +1,5 @@
# mcp-server-qdrant: A Qdrant MCP server
[![smithery badge](https://smithery.ai/badge/mcp-server-qdrant)](https://smithery.ai/protocol/mcp-server-qdrant)

> The [Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) is an open protocol that enables seamless integration between LLM applications and external data sources and tools. Whether you’re building an AI-powered IDE, enhancing a chat interface, or creating custom AI workflows, MCP provides a standardized way to connect LLMs with the context they need.
@@ -38,6 +39,14 @@ uv run mcp-server-qdrant \
--fastembed-model-name "sentence-transformers/all-MiniLM-L6-v2"
```

### Installing via Smithery

To install Qdrant MCP Server for Claude Desktop automatically via [Smithery](https://smithery.ai/protocol/mcp-server-qdrant):

```bash
npx @smithery/cli install mcp-server-qdrant --client claude
```

## Usage with Claude Desktop

To use this server with the Claude Desktop app, add the following configuration to the "mcpServers" section of your `claude_desktop_config.json`:
@@ -69,14 +78,38 @@ By default, the server will use the `sentence-transformers/all-MiniLM-L6-v2` emb
For the time being, only [FastEmbed](https://qdrant.github.io/fastembed/) models are supported; you can select a different one
by passing the `--fastembed-model-name` argument to the server.

-### Environment Variables
+### Using the local mode of Qdrant

To use Qdrant in local mode, specify the path to the database using the `--qdrant-local-path` argument:

```json
{
  "qdrant": {
    "command": "uvx",
    "args": [
      "mcp-server-qdrant",
      "--qdrant-local-path",
      "/path/to/qdrant/database",
      "--collection-name",
      "your_collection_name"
    ]
  }
}
```

This runs Qdrant in local mode inside the same process as the MCP server; it is not recommended for production use.
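
The choice between the two modes can be pictured as a small dispatch over client keyword arguments. A minimal sketch, assuming a hypothetical `make_client_kwargs` helper (the real connector passes its settings to `AsyncQdrantClient` directly):

```python
def make_client_kwargs(qdrant_url=None, qdrant_local_path=None):
    # Hypothetical helper: local mode uses an on-disk `path`,
    # remote mode uses a `location` URL.
    if qdrant_local_path:
        return {"path": qdrant_local_path}
    return {"location": qdrant_url}
```
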

## Environment Variables

The server can also be configured using environment variables:

-- `QDRANT_URL`: URL of the Qdrant server
+- `QDRANT_URL`: URL of the Qdrant server, e.g. `http://localhost:6333`
- `QDRANT_API_KEY`: API key for the Qdrant server
- `COLLECTION_NAME`: Name of the collection to use
- `FASTEMBED_MODEL_NAME`: Name of the FastEmbed model to use
- `QDRANT_LOCAL_PATH`: Path to the local Qdrant database

You cannot provide `QDRANT_URL` and `QDRANT_LOCAL_PATH` at the same time.
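
The server enforces this mutual exclusion at startup with an XOR check; a standalone sketch of that rule, using a hypothetical `validate_location` helper:

```python
def validate_location(qdrant_url=None, qdrant_local_path=None):
    # Hypothetical helper mirroring the CLI's startup check:
    # exactly one of the two settings may be provided (XOR).
    if not (bool(qdrant_url) ^ bool(qdrant_local_path)):
        raise ValueError(
            "Exactly one of QDRANT_URL or QDRANT_LOCAL_PATH must be provided"
        )
```
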

## License

2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "mcp-server-qdrant"
-version = "0.5.1"
+version = "0.5.2"
description = "MCP server for retrieving context from a Qdrant vector database"
readme = "README.md"
requires-python = ">=3.10"
14 changes: 10 additions & 4 deletions src/mcp_server_qdrant/qdrant.py
@@ -9,23 +9,25 @@ class QdrantConnector:
    :param qdrant_api_key: The API key to use for the Qdrant server.
    :param collection_name: The name of the collection to use.
    :param fastembed_model_name: The name of the FastEmbed model to use.
+    :param qdrant_local_path: The path to the storage directory for the Qdrant client, if local mode is used.
    """

    def __init__(
        self,
-        qdrant_url: str,
+        qdrant_url: Optional[str],
        qdrant_api_key: Optional[str],
        collection_name: str,
        fastembed_model_name: str,
+        qdrant_local_path: Optional[str] = None,
    ):
-        self._qdrant_url = qdrant_url.rstrip("/")
+        self._qdrant_url = qdrant_url.rstrip("/") if qdrant_url else None
        self._qdrant_api_key = qdrant_api_key
        self._collection_name = collection_name
        self._fastembed_model_name = fastembed_model_name
        # For the time being, FastEmbed models are the only supported ones.
        # A list of all available models can be found here:
        # https://qdrant.github.io/fastembed/examples/Supported_Models/
-        self._client = AsyncQdrantClient(qdrant_url, api_key=qdrant_api_key)
+        self._client = AsyncQdrantClient(location=qdrant_url, api_key=qdrant_api_key, path=qdrant_local_path)
        self._client.set_model(fastembed_model_name)

async def store_memory(self, information: str):
@@ -40,10 +42,14 @@ async def store_memory(self, information: str):

    async def find_memories(self, query: str) -> list[str]:
        """
-        Find memories in the Qdrant collection.
+        Find memories in the Qdrant collection. If there are no memories found, an empty list is returned.
        :param query: The query to use for the search.
        :return: A list of memories found.
        """
+        collection_exists = await self._client.collection_exists(self._collection_name)
+        if not collection_exists:
+            return []

        search_results = await self._client.query(
            self._collection_name,
            query_text=query,
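
The new guard in `find_memories` can be exercised without a running Qdrant instance; a sketch using a hypothetical `FakeClient` stand-in for `AsyncQdrantClient`:

```python
import asyncio

class FakeClient:
    """Minimal stand-in for AsyncQdrantClient (assumption: only the
    two methods the guard touches are needed)."""

    def __init__(self, collections):
        self._collections = set(collections)

    async def collection_exists(self, name):
        return name in self._collections

    async def query(self, collection_name, query_text):
        return [f"memory matching {query_text!r}"]

async def find_memories(client, collection_name, query):
    # Same guard as in the diff: a missing collection yields an empty
    # list instead of an error from the query call.
    if not await client.collection_exists(collection_name):
        return []
    return await client.query(collection_name, query_text=query)
```
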
22 changes: 18 additions & 4 deletions src/mcp_server_qdrant/server.py
@@ -12,22 +12,24 @@


def serve(
-    qdrant_url: str,
+    qdrant_url: Optional[str],
    qdrant_api_key: Optional[str],
    collection_name: str,
    fastembed_model_name: str,
+    qdrant_local_path: Optional[str] = None,
) -> Server:
    """
    Instantiate the server and configure tools to store and find memories in Qdrant.
    :param qdrant_url: The URL of the Qdrant server.
    :param qdrant_api_key: The API key to use for the Qdrant server.
    :param collection_name: The name of the collection to use.
    :param fastembed_model_name: The name of the FastEmbed model to use.
+    :param qdrant_local_path: The path to the storage directory for the Qdrant client, if local mode is used.
    """
    server = Server("qdrant")

    qdrant = QdrantConnector(
-        qdrant_url, qdrant_api_key, collection_name, fastembed_model_name
+        qdrant_url, qdrant_api_key, collection_name, fastembed_model_name, qdrant_local_path
    )

    @server.list_tools()
@@ -112,7 +114,7 @@ async def handle_tool_call(
@click.option(
    "--qdrant-url",
    envvar="QDRANT_URL",
-    required=True,
+    required=False,
    help="Qdrant URL",
)
@click.option(
@click.option(
@@ -134,19 +136,31 @@ async def handle_tool_call(
    help="FastEmbed model name",
    default="sentence-transformers/all-MiniLM-L6-v2",
)
+@click.option(
+    "--qdrant-local-path",
+    envvar="QDRANT_LOCAL_PATH",
+    required=False,
+    help="Qdrant local path",
+)
def main(
-    qdrant_url: str,
+    qdrant_url: Optional[str],
    qdrant_api_key: str,
    collection_name: Optional[str],
    fastembed_model_name: str,
+    qdrant_local_path: Optional[str],
):
+    # XOR of url and local path, since we accept only one of them
+    if not (bool(qdrant_url) ^ bool(qdrant_local_path)):
+        raise ValueError("Exactly one of qdrant-url or qdrant-local-path must be provided")

    async def _run():
        async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
            server = serve(
                qdrant_url,
                qdrant_api_key,
                collection_name,
                fastembed_model_name,
+                qdrant_local_path,
            )
            await server.run(
                read_stream,
