
Conversation

@jairad26 (Contributor) commented Oct 28, 2025

Description of changes

Summarize the changes made by this PR.

  • Improvements & Bug fixes
    • This PR adds a ColBERT embedding function to the Python client. Since Chroma accepts one embedding per document, the embedding function applies MuVERA to collapse ColBERT's multi-vector output into a single embedding per document/query (see the usage sketch after this list).
  • New functionality
    • ...
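
A minimal usage sketch of the new embedding function. The constructor argument `model_name` and the checkpoint shown are assumptions for illustration, not confirmed parts of the final API:

```python
# Illustrative only: the exact constructor signature of
# PylateColBERTEmbeddingFunction is an assumption.
import chromadb
from chromadb.utils.embedding_functions.pylate_colbert_embedding_function import (
    PylateColBERTEmbeddingFunction,
)

# ColBERT model served via pylate; the model name is a placeholder.
ef = PylateColBERTEmbeddingFunction(model_name="lightonai/colbertv2.0")

client = chromadb.Client()
collection = client.create_collection(name="colbert_demo", embedding_function=ef)

# Each document is expanded by ColBERT into many token vectors; the embedding
# function applies MuVERA so Chroma still stores one vector per document.
collection.add(ids=["1", "2"], documents=["first document", "second document"])
results = collection.query(query_texts=["a short query"], n_results=1)
```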

Test plan

How are these changes tested?

  • [x] Tests pass locally with `pytest` for Python, `yarn test` for JS, and `cargo test` for Rust

Migration plan

Are there any migrations, or any forwards/backwards compatibility changes needed in order to make sure this change deploys reliably?

Observability plan

What is the plan to instrument and monitor this change?

Documentation Changes

Are all docstrings for user-facing APIs updated if required? Do we need to make documentation changes in the docs section?

@github-actions bot

Reviewer Checklist

Please leverage this checklist to ensure your code review is thorough before approving

Testing, Bugs, Errors, Logs, Documentation

  • Can you think of any use case in which the code does not behave as intended? Have they been tested?
  • Can you think of any inputs or external events that could break the code? Is user input validated and safe? Have they been tested?
  • If appropriate, are there adequate property based tests?
  • If appropriate, are there adequate unit tests?
  • Should any logging, debugging, tracing information be added or removed?
  • Are error messages user-friendly?
  • Have all documentation changes needed been made?
  • Have all non-obvious changes been commented?

System Compatibility

  • Are there any potential impacts on other parts of the system or backward compatibility?
  • Does this change intersect with any items on our roadmap, and if so, is there a plan for fitting them together?

Quality

  • Is this code of an unexpectedly high quality (readability, modularity, intuitiveness)?

@jairad26 (Contributor, Author) commented Oct 28, 2025

This stack of pull requests is managed by Graphite. Learn more about stacking.

@jairad26 changed the base branch from jai/nomic-ef to graphite-base/5744 on October 28, 2025 01:05
@jairad26 changed the base branch from graphite-base/5744 to main on October 28, 2025 01:06
@jairad26 changed the title from [ENH] Add muvera support to [ENH] Add muvera and colBERT support to python client on October 28, 2025
@jairad26 marked this pull request as ready for review on October 28, 2025 01:11
@propel-code-bot (Contributor) commented Oct 28, 2025

Add Muvera Fixed-Dimensional Encoding & ColBERT multivector support

Introduces chromadb/utils/muvera.py, a full NumPy implementation of the MuVERA Fixed-Dimensional Encoding (FDE) algorithm for collapsing ColBERT multivector outputs into single dense embeddings. The PR wires this encoder into the Python client via a new PylateColBERTEmbeddingFunction and extends JinaEmbeddingFunction with an optional return_multivector path that also funnels through MuVERA. Registry, JSON schema, and unit tests are updated accordingly.

Key Changes

• Added a 700-line implementation of MuVERA FDE (chromadb/utils/muvera.py) with document/query and batch helpers (see the illustrative sketch after this list)
• New PylateColBERTEmbeddingFunction supporting models served by pylate.ColBERT; registers under pylate_colbert
• Extended JinaEmbeddingFunction to accept multivector responses (return_multivector flag) and convert via MuVERA
• Updated embedding-function registry (__init__.py) and builtin-list test to include the new function
• Added JSON schema schemas/embedding_functions/pylate_colbert.json for config validation
• Unit-test adjustments to expect new builtin; minor refactors and extra config fields (return_multivector) in Jina EF
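
To make the first item concrete, here is a minimal NumPy sketch of the FDE idea: SimHash bucketing per repetition, per-bucket aggregation, and concatenation across repetitions. It is illustrative only; parameter names and defaults are assumptions, and the fill-empty-partitions and inner-projection steps of the real chromadb/utils/muvera.py are omitted.

```python
import numpy as np

def fde_sketch(token_vectors: np.ndarray, k_sim: int = 4, reps: int = 10,
               is_query: bool = False, seed: int = 0) -> np.ndarray:
    """Collapse a (num_tokens, dim) multivector into one fixed-dimensional vector.

    Per repetition: route each token vector into one of 2**k_sim buckets by the
    sign pattern of k_sim random projections (SimHash), sum the vectors in each
    bucket (queries) or average them (documents), then concatenate the bucket
    vectors from all repetitions.
    """
    rng = np.random.default_rng(seed)
    num_tokens, dim = token_vectors.shape
    num_buckets = 2 ** k_sim
    parts = []
    for _ in range(reps):
        planes = rng.standard_normal((dim, k_sim))
        bits = (token_vectors @ planes > 0).astype(np.int64)  # (num_tokens, k_sim)
        bucket_ids = bits @ (1 << np.arange(k_sim))           # SimHash bucket per token
        buckets = np.zeros((num_buckets, dim))
        counts = np.zeros(num_buckets)
        np.add.at(buckets, bucket_ids, token_vectors)
        np.add.at(counts, bucket_ids, 1)
        if not is_query:                                       # documents: mean per bucket
            nonempty = counts > 0
            buckets[nonempty] /= counts[nonempty, None]
        parts.append(buckets.reshape(-1))
    return np.concatenate(parts)                               # length = reps * 2**k_sim * dim

# Example: 32 ColBERT token vectors of dim 128 collapse into one dense embedding.
doc_fde = fde_sketch(np.random.randn(32, 128))
```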

Affected Areas

chromadb/utils/muvera.py (new core algorithm)
chromadb/utils/embedding_functions/ (ColBERT, Jina, registry)
• JSON schema validation layer
• EF builtin enumeration and tests

This summary was automatically generated by @propel-code-bot

Comment on lines +185 to +186

dims = len(multi_embeddings[0][0])

[CriticalError]

Potential unhandled IndexError: Line 186 accesses multi_embeddings[0][0] without checking whether multi_embeddings or its first embedding is empty. If the Jina API returns empty multivector embeddings, this will raise an IndexError. This is a known pattern in ChromaDB; similar issues were addressed in PR #3183 for empty-embedding-list validation.

```python
if not multi_embeddings or not multi_embeddings[0]:
    raise RuntimeError("Invalid multivector embeddings format from Jina API")
dims = len(multi_embeddings[0][0])
```

Note: ChromaDB has established patterns for defensive programming against empty embedding arrays, and this follows the same principle.


File: chromadb/utils/embedding_functions/jina_embedding_function.py
Line: 186
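
Related to the multivector path reviewed here, a hedged sketch of how the new return_multivector flag might be used. Only the flag itself is described in this PR; the remaining constructor arguments are assumptions based on the existing Jina EF and may differ in the final API:

```python
from chromadb.utils.embedding_functions.jina_embedding_function import (
    JinaEmbeddingFunction,
)

# `return_multivector=True` is the new flag described in the PR summary; the
# other arguments are assumptions, not confirmed parts of this change.
jina_ef = JinaEmbeddingFunction(
    api_key_env_var="CHROMA_JINA_API_KEY",  # assumed: API key read from the environment
    model_name="jina-colbert-v2",           # assumed multivector-capable model
    return_multivector=True,                # multivector response collapsed via MuVERA
)

# Still one dense embedding per input document, since MuVERA collapses the
# token-level vectors before they are returned to Chroma.
embeddings = jina_ef(["some document text"])
```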

Comment on lines +44 to +46
return create_fdes(
multivec,
dims=len(multivec[0][0]),

[CriticalError]

Potential unhandled IndexError: Line 46 accesses multivec[0][0] without checking whether multivec or its first vector is empty. If the pylate model returns empty multivectors, this will raise an IndexError. This follows the same vulnerability pattern that ChromaDB addressed in its normalization code (PR #3183).

```python
if not multivec or not multivec[0]:
    raise ValueError("Model returned empty multivector embeddings")
return create_fdes(
    multivec,
    dims=len(multivec[0][0]),
    is_query=False,
    fill_empty_partitions=True,
)
```

Note: Pylate/ColBERT models generate multivector embeddings where each document produces multiple vectors, making this validation critical.


File: chromadb/utils/embedding_functions/pylate_colbert_embedding_function.py
Line: 46

Comment on lines +62 to +64
return create_fdes(
multivec,
dims=len(multivec[0][0]),

[CriticalError]

Potential unhandled IndexError: Same issue as in the __call__ method; line 64 accesses multivec[0][0] without validation and will fail if the model returns empty embeddings. The fix should be consistent with the document embedding method for maintainability.

```python
if not multivec or not multivec[0]:
    raise ValueError("Model returned empty multivector embeddings")
return create_fdes(
    multivec,
    dims=len(multivec[0][0]),
    is_query=True,
    fill_empty_partitions=False,
)
```

Note: Query and document embedding methods should use consistent validation patterns to prevent similar IndexErrors across the codebase.


File: chromadb/utils/embedding_functions/pylate_colbert_embedding_function.py
Line: 64

Comment on lines +357 to +361
distances = [
_distance_to_simhash_partition(sketches[j], i)
for j in range(num_points)
]
nearest_point_idx = np.argmin(distances)

[PerformanceOptimization]

Performance concern: the nested loop on lines 357-361 recomputes distances to all num_points for every empty partition, which can be expensive for large point clouds.

Consider caching distance calculations or using vectorized operations:

```python
# Pre-calculate all distances once per repetition
if config.fill_empty_partitions and num_points > 0:
    all_distances = np.array([
        [_distance_to_simhash_partition(sketches[j], i)
         for i in range(num_partitions)]
        for j in range(num_points)
    ])
```

File: chromadb/utils/muvera.py
Line: 361
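
If the precomputation suggested above were adopted, the fill loop could then read one column of the cached matrix per empty partition instead of recomputing per-point distances. A rough continuation of that snippet; every name other than all_distances is hypothetical and stands in for whatever muvera.py actually uses:

```python
# Hypothetical continuation of the suggestion above: `empty_partition_ids`,
# `fde_block`, and `points` are placeholders; the point is only the
# argmin-over-a-cached-column lookup.
for i in empty_partition_ids:
    nearest_point_idx = int(np.argmin(all_distances[:, i]))  # nearest point to partition i
    fde_block[i] = points[nearest_point_idx]                  # fill empty partition with that point's vector
```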
