74 changes: 74 additions & 0 deletions DEVELOP.md
@@ -22,6 +22,80 @@ for MacOS `brew install protobuf`

You can also install `chromadb` the `pypi` package locally and in editable mode with `pip install -e .`.

### Python-only dev setup (Windows)

If you want to work on the Python package only and you're on Windows (PowerShell), this minimal setup gets you started quickly.

Prerequisites

- Windows with Python 3.8+ installed and on PATH
- Git (optional, for cloning/forking)

Quick steps (PowerShell)

1. Create and activate a virtual environment:

```powershell
python -m venv .venv
.\.venv\Scripts\Activate.ps1
```

2. Install runtime and dev dependencies and the package in editable mode:

```powershell
pip install -r .\requirements.txt
pip install -r .\requirements_dev.txt
pip install -e .
```

3. (Optional) If you plan to build or develop the Rust bindings later, install `maturin` and build:

```powershell
pip install maturin
maturin develop
```

Notes

- If you don't want to build the native Rust bindings, install the PyPI wheel instead:

```powershell
pip install chromadb
```

- Install the repository's pre-commit hooks to get consistent styling and checks:

```powershell
pre-commit install
```

This minimal flow keeps Python-only development fast on Windows. If you want to run the distributed system or use Tilt/Docker, follow the "Local dev setup for distributed chroma" section below.
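To confirm the editable install worked, a quick dependency-free check can be run from the activated environment. This is an illustrative helper (the `is_installed` name is ours, not part of Chroma):

```python
import importlib.util

def is_installed(package_name: str) -> bool:
    """Return True if `package_name` can be imported in this environment."""
    return importlib.util.find_spec(package_name) is not None

# "json" (stdlib) is shown as a baseline that is always importable;
# "chromadb" should report True once `pip install -e .` has succeeded.
print(is_installed("json"))
print(is_installed("chromadb"))
```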

### Quick local embedding examples

If you want to exercise embedding functions locally without downloading models or using API keys, the repository includes a tiny dependency-free embedding function called `local_simple_hash` in `chromadb.utils.embedding_functions`.

Run the example script (PowerShell):

```powershell
# install in editable mode (if not already installed)
python -m pip install -e .

# run the example script that demonstrates direct and config-based usage
python examples/local_simple_hash_example.py
```

What the example demonstrates:

- Constructing `SimpleHashEmbeddingFunction` directly and generating embeddings for a list of inputs (including non-strings).
- Using `config_to_embedding_function` to build an embedding function from a config dict (useful for dynamic configs loaded from JSON/YAML).
- Using the `examples/local_simple_hash_example.py` output as a quick smoke test in CI or developer workflows.

Notes:

- `local_simple_hash` is deterministic and very fast; it's intended for testing, not production semantic search.
- For higher-quality local embeddings, use `SentenceTransformerEmbeddingFunction` (requires installing `sentence_transformers`).
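For readers who want to see the idea without installing anything, here is a self-contained sketch of the same hashing scheme. This is an illustrative re-implementation for explanation only, not the packaged `SimpleHashEmbeddingFunction`:

```python
import math

def simple_hash_embed(text: str, dim: int = 16) -> list:
    """Deterministically map text to a unit-length vector of size `dim`."""
    v = [0.0] * dim
    for i, ch in enumerate(text):
        # Fold character ordinals into a fixed-size accumulator.
        v[i % dim] += (ord(ch) % 256) / 256.0
    norm = math.sqrt(sum(x * x for x in v))
    # Normalize to unit length; the empty string stays a zero vector.
    return [x / norm for x in v] if norm > 0 else v

a = simple_hash_embed("hello world")
b = simple_hash_embed("hello world")
print(a == b)  # → True: the same input always yields the same vector
```

Because the output depends only on the input text and `dim`, this kind of function is safe to use in CI where reproducibility matters more than semantic quality.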

## Local dev setup for distributed chroma

We use tilt for providing local dev setup. Tilt is an open source project
25 changes: 15 additions & 10 deletions README.md
@@ -28,6 +28,8 @@ pip install chromadb # python client
# for client-server mode, chroma run --path /chroma_db_path
```

Windows developers: for a minimal Python-only Windows (PowerShell) dev setup see the "Python-only dev setup (Windows)" section in `DEVELOP.md`.

## Chroma Cloud

Our hosted service, Chroma Cloud, powers serverless vector and full-text search. It's extremely fast, cost-effective, scalable and painless. Create a DB and try it out in under 30 seconds with $5 of free credits.
@@ -65,15 +67,17 @@ results = collection.query(
Learn about all features on our [Docs](https://docs.trychroma.com)

## Features
- __Simple__: Fully-typed, fully-tested, fully-documented == happiness
- __Integrations__: [`🦜️🔗 LangChain`](https://blog.langchain.dev/langchain-chroma/) (python and js), [`🦙 LlamaIndex`](https://twitter.com/atroyn/status/1628557389762007040) and more soon
- __Dev, Test, Prod__: the same API that runs in your python notebook, scales to your cluster
- __Feature-rich__: Queries, filtering, regex and more
- __Free & Open Source__: Apache 2.0 Licensed

## Use case: ChatGPT for ______
- **Simple**: Fully-typed, fully-tested, fully-documented == happiness
- **Integrations**: [`🦜️🔗 LangChain`](https://blog.langchain.dev/langchain-chroma/) (python and js), [`🦙 LlamaIndex`](https://twitter.com/atroyn/status/1628557389762007040) and more soon
- **Dev, Test, Prod**: the same API that runs in your python notebook, scales to your cluster
- **Feature-rich**: Queries, filtering, regex and more
- **Free & Open Source**: Apache 2.0 Licensed

## Use case: ChatGPT for **\_\_**

For example, the `"Chat your data"` use case:

1. Add documents to your database. You can pass in your own embeddings, embedding function, or let Chroma embed them for you.
2. Query relevant documents with natural language.
3. Compose documents into the context window of an LLM like `GPT4` for additional summarization or analysis.
@@ -83,16 +87,17 @@ For example, the `"Chat your data"` use case:
What are embeddings?

- [Read the guide from OpenAI](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings)
- __Literal__: Embedding something turns it from image/text/audio into a list of numbers. 🖼️ or 📄 => `[1.2, 2.1, ....]`. This process makes documents "understandable" to a machine learning model.
- __By analogy__: An embedding represents the essence of a document. This enables documents and queries with the same essence to be "near" each other and therefore easy to find.
- __Technical__: An embedding is the latent-space position of a document at a layer of a deep neural network. For models trained specifically to embed data, this is the last layer.
- __A small example__: If you search your photos for "famous bridge in San Francisco". By embedding this query and comparing it to the embeddings of your photos and their metadata - it should return photos of the Golden Gate Bridge.
- **Literal**: Embedding something turns it from image/text/audio into a list of numbers. 🖼️ or 📄 => `[1.2, 2.1, ....]`. This process makes documents "understandable" to a machine learning model.
- **By analogy**: An embedding represents the essence of a document. This enables documents and queries with the same essence to be "near" each other and therefore easy to find.
- **Technical**: An embedding is the latent-space position of a document at a layer of a deep neural network. For models trained specifically to embed data, this is the last layer.
- **A small example**: If you search your photos for "famous bridge in San Francisco". By embedding this query and comparing it to the embeddings of your photos and their metadata - it should return photos of the Golden Gate Bridge.

Embeddings databases (also known as **vector databases**) store embeddings and allow you to search by nearest neighbors rather than by substrings like a traditional database. By default, Chroma uses [Sentence Transformers](https://docs.trychroma.com/guides/embeddings#default:-all-minilm-l6-v2) to embed for you but you can also use OpenAI embeddings, Cohere (multilingual) embeddings, or your own.
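The nearest-neighbor search described above can be shown in a few lines of plain Python. The vectors here are toy 3-dimensional stand-ins (real model embeddings typically have hundreds of dimensions), and the document names are invented for illustration:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Pretend document embeddings keyed by document name.
store = {
    "golden gate bridge": [0.9, 0.1, 0.0],
    "cat photo": [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # pretend embedding of "famous bridge in San Francisco"

# Nearest neighbor = the stored vector most similar to the query.
best = max(store, key=lambda doc: cosine_similarity(store[doc], query))
print(best)  # → golden gate bridge
```

A vector database does essentially this, but with approximate indexes (such as HNSW) so the search stays fast at millions of vectors.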

## Get involved

Chroma is a rapidly developing project. We welcome PR contributors and ideas for how to improve the project.

- [Join the conversation on Discord](https://discord.gg/MMeYNTmh3x) - `#contributing` channel
- [Review the 🛣️ Roadmap and contribute your ideas](https://docs.trychroma.com/roadmap)
- [Grab an issue and open a PR](https://github.com/chroma-core/chroma/issues) - [`Good first issue tag`](https://github.com/chroma-core/chroma/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22)
71 changes: 71 additions & 0 deletions chromadb/test/test_simple_hash_embedding.py
@@ -0,0 +1,71 @@
import numpy as np
import pytest

from chromadb.utils.embedding_functions.simple_hash_embedding_function import (
SimpleHashEmbeddingFunction,
)
from chromadb.utils.embedding_functions import config_to_embedding_function


def test_simple_hash_basic() -> None:
ef = SimpleHashEmbeddingFunction(dim=16)

docs = ["hello", "world", ""]
embeddings = ef(docs)

assert isinstance(embeddings, list)
assert len(embeddings) == 3

# first two should be non-zero, third (empty string) should be zero vector
assert isinstance(embeddings[0], np.ndarray)
assert embeddings[0].dtype == np.float32
assert embeddings[0].shape == (16,)
assert np.linalg.norm(embeddings[0]) > 0
assert np.linalg.norm(embeddings[1]) > 0
assert np.allclose(embeddings[2], np.zeros(16, dtype=np.float32))


def test_embed_query_and_determinism() -> None:
ef = SimpleHashEmbeddingFunction(dim=8)
q = ["test query"]
a = ef(q)
b = ef.embed_query(q)
# same content -> same embedding
assert len(a) == len(b) == 1
assert np.allclose(a[0], b[0])

# deterministic across calls
c = ef(["test query"])
assert np.allclose(a[0], c[0])


def test_config_integration() -> None:
cfg = {"name": "local_simple_hash", "config": {"dim": 12}}
ef = config_to_embedding_function(cfg)
assert isinstance(ef, SimpleHashEmbeddingFunction)
out = ef(["x"])
assert len(out) == 1 and out[0].shape == (12,)


def test_edge_cases_long_and_non_string_inputs() -> None:
ef = SimpleHashEmbeddingFunction(dim=20)

# Very long string should produce a stable vector and not error
long_text = "a" * 10000
long_emb = ef([long_text])[0]
assert long_emb.shape == (20,)
assert np.linalg.norm(long_emb) > 0

# Non-string inputs should be accepted and converted via str()
samples = [123, None, 45.6, b"bytes"]
# Convert samples to strings before passing to the embedding function to satisfy typing
stringified = [str(s) for s in samples]
emb = ef(stringified)
assert len(emb) == 4
for v in emb:
assert isinstance(v, np.ndarray)
assert v.shape == (20,)

# Empty input list is not allowed at the public API layer; wrapper validates non-empty
with pytest.raises(ValueError):
ef([])
2 changes: 2 additions & 0 deletions chromadb/utils/embedding_functions/__init__.py
@@ -150,6 +150,7 @@ def get_builtins() -> Set[str]:
"cloudflare_workers_ai": CloudflareWorkersAIEmbeddingFunction,
"together_ai": TogetherAIEmbeddingFunction,
"chroma-cloud-qwen": ChromaCloudQwenEmbeddingFunction,
"local_simple_hash": SimpleHashEmbeddingFunction,
}

sparse_known_embedding_functions: Dict[str, Type[SparseEmbeddingFunction]] = { # type: ignore
@@ -276,4 +277,5 @@ def config_to_embedding_function(config: Dict[str, Any]) -> EmbeddingFunction:
"register_embedding_function",
"config_to_embedding_function",
"known_embedding_functions",
"SimpleHashEmbeddingFunction",
]
81 changes: 81 additions & 0 deletions chromadb/utils/embedding_functions/simple_hash_embedding_function.py
@@ -0,0 +1,81 @@
from typing import List, Dict, Any

import numpy as np

from chromadb.api.types import EmbeddingFunction, Embeddings, Documents, Space


class SimpleHashEmbeddingFunction(EmbeddingFunction[Documents]):
"""A tiny, dependency-free local embedding function used for tests and examples.

It deterministically converts each input string into a fixed-size float32 vector
by hashing character codes. This is intentionally simple so it works in CI
without external model or network dependencies.
"""

def __init__(self, dim: int = 32):
if dim <= 0:
raise ValueError("dim must be a positive integer")

self.dim = int(dim)

def _embed_one(self, text: str) -> np.ndarray:
# Simple deterministic embedding: accumulate character ordinals into a fixed-size vector
v = np.zeros(self.dim, dtype=np.float32)
if not text:
return v

for i, ch in enumerate(text):
idx = i % self.dim
v[idx] += (ord(ch) % 256) / 256.0

# Normalize to unit length to be more embedding-like
norm = np.linalg.norm(v)
if norm > 0:
v = v / norm

return v

def __call__(self, input: Documents) -> Embeddings:
if not input:
raise ValueError("Input documents list cannot be empty")

return [self._embed_one(str(d)) for d in list(input)]

def embed_query(self, input: Documents) -> Embeddings:
# For this simple embedding, queries use the same function
return self.__call__(input)

@staticmethod
def name() -> str:
return "local_simple_hash"

def default_space(self) -> Space:
return "cosine"

def supported_spaces(self) -> List[Space]:
return ["cosine", "l2", "ip"]

@staticmethod
def build_from_config(config: Dict[str, Any]) -> "EmbeddingFunction[Documents]":
dim = config.get("dim", 32)
return SimpleHashEmbeddingFunction(dim=dim)

def get_config(self) -> Dict[str, Any]:
return {"dim": self.dim}

def validate_config_update(
self, old_config: Dict[str, Any], new_config: Dict[str, Any]
) -> None:
# Changing dim is allowed for this toy function
return

@staticmethod
def validate_config(config: Dict[str, Any]) -> None:
# Very small validation
if "dim" in config:
dim = config["dim"]
if not isinstance(dim, int) or dim <= 0:
raise ValueError("dim must be a positive integer")
@@ -26,20 +26,20 @@ Chroma provides lightweight wrappers around popular embedding providers, making

For TypeScript users, Chroma provides packages for a number of embedding model providers. The `chromadb` Python package ships with all embedding functions included.

| Provider | Embedding Function Package
| ---------- | -------------------------
| All (installs all packages) | [@chroma-core/all](https://www.npmjs.com/package/@chroma-core/all)
| Cloudflare Workers AI | [@chroma-core/cloudflare-worker-ai](https://www.npmjs.com/package/@chroma-core/cloudflare-worker-ai)
| Cohere | [@chroma-core/cohere](https://www.npmjs.com/package/@chroma-core/cohere)
| Google Gemini | [@chroma-core/google-gemini](https://www.npmjs.com/package/@chroma-core/google-gemini)
| Hugging Face Server | [@chroma-core/huggingface-server](https://www.npmjs.com/package/@chroma-core/huggingface-server)
| Jina | [@chroma-core/jina](https://www.npmjs.com/package/@chroma-core/jina)
| Mistral | [@chroma-core/mistral](https://www.npmjs.com/package/@chroma-core/mistral)
| Morph | [@chroma-core/morph](https://www.npmjs.com/package/@chroma-core/morph)
| Ollama | [@chroma-core/ollama](https://www.npmjs.com/package/@chroma-core/ollama)
| OpenAI | [@chroma-core/openai](https://www.npmjs.com/package/@chroma-core/openai)
| Together AI | [@chroma-core/together-ai](https://www.npmjs.com/package/@chroma-core/together-ai)
| Voyage AI | [@chroma-core/voyageai](https://www.npmjs.com/package/@chroma-core/voyageai)

We welcome pull requests to add new Embedding Functions to the community.

@@ -237,6 +237,38 @@ await collection.query({ queryEmbeddings: embeddings });

## Custom Embedding Functions

## Lightweight local embeddings (quick & dependency-free)

For quick smoke tests, CI, or environments without model downloads or API keys, Chroma includes or supports several lightweight/local embedding options. These are useful for running examples or unit tests that must be deterministic and fast.

- `local_simple_hash` (recommended for CI and examples): a tiny, dependency-free deterministic embedding included in the Python package. It's implemented as a small hash-based function that converts text into a fixed-size float32 vector without external packages.
- `sentence_transformer` (local, requires `sentence_transformers`): a higher-quality local embedding using the SentenceTransformers library (requires model download or offline model).

Python — use `local_simple_hash` directly:

```python
from chromadb.utils.embedding_functions import config_to_embedding_function

# Option A: construct directly
from chromadb.utils.embedding_functions.simple_hash_embedding_function import SimpleHashEmbeddingFunction
ef = SimpleHashEmbeddingFunction(dim=16)
embs = ef(["hello world"])

# Option B: load by config (useful for reading configs)
cfg = {"name": "local_simple_hash", "config": {"dim": 16}}
ef2 = config_to_embedding_function(cfg)
embs2 = ef2(["hello world"])

print(embs)
```

Notes

- `local_simple_hash` is deterministic and dependency-free. It's intended for tests and examples, not high-quality semantic embeddings.
- If you need higher-quality local embeddings, use `SentenceTransformerEmbeddingFunction` (requires `pip install sentence_transformers`). For CI or quick validation prefer `local_simple_hash`.
- You can register custom embedding functions using `register_embedding_function` or add them to configs and call `config_to_embedding_function`.
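The name-to-class lookup that `config_to_embedding_function` performs can be sketched in isolation. This is an illustrative re-implementation of the registry pattern with hypothetical names (`REGISTRY`, `register`, `ToyConstantEmbedding`, `build_from_config`), not Chroma's actual internals:

```python
from typing import Any, Callable, Dict

# Hypothetical registry: maps a config "name" to an embedding-function class.
REGISTRY: Dict[str, Callable[..., Any]] = {}

def register(name: str):
    """Decorator that records an embedding-function class under `name`."""
    def wrap(cls):
        REGISTRY[name] = cls
        return cls
    return wrap

@register("toy_constant")
class ToyConstantEmbedding:
    def __init__(self, dim: int = 4):
        self.dim = dim

    def __call__(self, docs):
        # Returns the same constant vector for every document (toy behavior).
        return [[1.0] * self.dim for _ in docs]

def build_from_config(cfg: Dict[str, Any]):
    """Look up cfg['name'] in the registry and instantiate it with cfg['config'] kwargs."""
    return REGISTRY[cfg["name"]](**cfg.get("config", {}))

ef = build_from_config({"name": "toy_constant", "config": {"dim": 2}})
print(ef(["a", "b"]))  # → [[1.0, 1.0], [1.0, 1.0]]
```

Driving construction through a plain config dict is what lets embedding choices live in JSON/YAML files rather than code.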


You can create your own embedding function to use with Chroma; it just needs to implement `EmbeddingFunction`.

{% TabbedCodeBlock %}