Chroma Similarity Backend Integration #4520
base: develop
Conversation
Walkthrough

The updates enhance FiftyOne with a new similarity-search backend, ChromaDB. The changes introduce classes and methods to manage configurations, compute similarities, and interact with similarity indexes. Additionally, tests are added to validate the image and patch similarity workflows.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant FiftyOne
    participant ChromaSimilarityConfig
    participant ChromaSimilarity
    participant ChromaSimilarityIndex
    User->>FiftyOne: Initialize ChromaDB Similarity
    FiftyOne->>ChromaSimilarityConfig: Load configuration
    ChromaSimilarityConfig->>FiftyOne: Return config object
    FiftyOne->>ChromaSimilarity: Initialize with config
    FiftyOne->>ChromaSimilarityIndex: Create/Manage collection
    User->>FiftyOne: Perform similarity search
    FiftyOne->>ChromaSimilarityIndex: Query collection with embeddings
    ChromaSimilarityIndex->>FiftyOne: Return similarity results
    FiftyOne->>User: Display results
```
I don't know where to put the file to have it be imported after I have set up the brain_config.json (as attached). Any advice would be appreciated.
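For reference, FiftyOne discovers custom similarity backends via `~/.fiftyone/brain_config.json`. A sketch of what such a config might look like for this PR is below; the `similarity_backends` / `config_cls` structure follows how FiftyOne's docs describe custom backends, but the exact module path and URL here are assumptions:

```json
{
    "similarity_backends": {
        "chroma": {
            "config_cls": "fiftyone.utils.chroma_fiftyone.ChromaSimilarityConfig",
            "url": "http://localhost:8000"
        }
    }
}
```

With this in place, the module referenced by `config_cls` must be importable in the active environment (i.e. on `sys.path`) for FiftyOne to resolve it.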
Actionable comments posted: 3
Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Files selected for processing (2)
- fiftyone/utils/chroma_fiftyone.py (1 hunks)
- fiftyone/utils/tests_ch.py (1 hunks)
Additional comments not posted (5)
fiftyone/utils/tests_ch.py (2)
Lines 1-4: Approved import statements.

The imports are well-organized and relevant to the test functionalities being implemented.
Lines 6-10: Verify the dataset deletion in the fixture.

The `dataset` fixture correctly sets up and tears down the dataset, but ensure that the deletion of the dataset does not affect other tests or data that should be retained.

```shell
#!/bin/bash
# Description: Verify that dataset deletion in the fixture does not affect
# other tests or necessary data.
# Test: Search for other usages of the dataset. Expect: no adverse effects.
rg --type py -F 'dataset.delete()'
```

fiftyone/utils/chroma_fiftyone.py (3)
Lines 1-14: Approved initial setup and imports.

The initial setup and imports are correctly structured for the functionality of the ChromaDB integration.
Lines 122-494: Optimize and secure `ChromaSimilarityIndex` interactions.

- The error handling in `_initialize` and other methods should be robust against various failure modes.
- Ensure that the handling of embeddings and sample IDs is secure and efficient, especially in the context of batch operations.

Refactor suggestion:

```diff
- self._client = chromadb.HttpClient(host=self.config._url, ssl=False)
+ self._client = chromadb.HttpClient(host=self.config._url, ssl=True)  # Enable SSL for security
```
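On the error-handling point: a minimal sketch of what more robust client initialization could look like, retrying transient connection failures with exponential backoff before surfacing a clear error. `connect` is a hypothetical stand-in for the `chromadb.HttpClient` constructor; none of these names come from the PR itself.

```python
import time

def initialize_client(connect, retries=3, base_delay=0.01):
    """Call ``connect()`` with retries; raise a clear error on exhaustion."""
    last_err = None
    for attempt in range(retries):
        try:
            return connect()
        except ConnectionError as e:
            last_err = e
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff

    raise RuntimeError(
        "Failed to connect to ChromaDB after %d attempts" % retries
    ) from last_err

# Example: a fake connector that fails twice, then succeeds
calls = {"n": 0}

def flaky_connect():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("server not ready")
    return "client"

client = initialize_client(flaky_connect)  # succeeds on the third attempt
```

The real `_initialize` would pass a closure over `chromadb.HttpClient(...)` as `connect`, keeping the retry policy separate from the ChromaDB-specific construction.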
Lines 111-117: Ensure proper package requirements in `ChromaSimilarity`.

Verify that the `chromadb` package is correctly ensured by both requirements methods to prevent runtime errors.

```shell
#!/bin/bash
# Description: Verify the presence and correct version of the `chromadb` package.
# Test: Check package installation and version.
pip show chromadb
```
```python
def test_image_similarity_backend(dataset):
    backend = "chroma"
    prompt = "kites high in the air"
    brain_key = "clip_" + backend

    index = fob.compute_similarity(
        dataset,
        model="clip-vit-base32-torch",
        metric="euclidean",
        embeddings=False,
        backend=backend,
        brain_key=brain_key,
    )

    embeddings, sample_ids, _ = index.compute_embeddings(dataset)

    index.add_to_index(embeddings, sample_ids)
    assert index.total_index_size == 200
    assert index.index_size == 200
    assert index.missing_size is None

    sim_view = dataset.sort_by_similarity(prompt, k=10, brain_key=brain_key)
    assert len(sim_view) == 10

    del index
    dataset.clear_cache()

    assert dataset.get_brain_info(brain_key) is not None

    index = dataset.load_brain_results(brain_key)
    assert index.total_index_size == 200

    embeddings2, sample_ids2, _ = index.get_embeddings()
    assert embeddings2.shape == (200, 512)
    assert sample_ids2.shape == (200,)

    ids = sample_ids2[:100]
    embeddings2, sample_ids2, _ = index.get_embeddings(sample_ids=ids)
    assert embeddings2.shape == (100, 512)
    assert sample_ids2.shape == (100,)

    index.remove_from_index(sample_ids=ids)

    assert index.total_index_size == 100

    index.cleanup()
    dataset.delete_brain_run(brain_key)
```
Fix logical issues in `test_image_similarity_backend`.

- The `assert` statements should check against dynamic values rather than hard-coded ones, to be more robust and adaptable to changes in dataset size or configuration.
- The deletion of the index and clearing of the cache should be verified to ensure they do not unintentionally affect other parts of the application.

```diff
- assert index.total_index_size == 200
- assert index.index_size == 200
+ # Suggested: replace with dynamic checks based on expected dataset configurations
```
Committable suggestion was skipped due to low confidence.
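The dynamic-check suggestion might look like the following in the test body. `FakeIndex` is a minimal stand-in so the pattern can be shown without a live dataset; in the real test, `n` would come from `len(dataset)` rather than a literal.

```python
# Sketch of the reviewer's suggestion: derive expected sizes from the
# dataset instead of hard-coding 200. FakeIndex is a stand-in for the
# real similarity index object.
class FakeIndex:
    def __init__(self, n):
        self.total_index_size = n
        self.index_size = n
        self.missing_size = None

n = 200  # in the real test: n = len(dataset)
index = FakeIndex(n)

assert index.total_index_size == n
assert index.index_size == n
assert index.missing_size is None
```

This way the assertions survive changes to the zoo dataset's sample count with no edits to the test.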
```python
def test_patch_similarity_backend(dataset):
    backend = "chroma"
    view = dataset.to_patches("ground_truth")

    prompt = "cute puppies"
    brain_key = "gt_clip_" + backend

    index = fob.compute_similarity(
        dataset,
        patches_field="ground_truth",
        model="clip-vit-base32-torch",
        metric="euclidean",
        embeddings=False,
        backend=backend,
        brain_key=brain_key,
    )

    embeddings, sample_ids, label_ids = index.compute_embeddings(dataset)

    index.add_to_index(embeddings, sample_ids, label_ids=label_ids)
    assert index.total_index_size == 1232
    assert index.index_size == 1232
    assert index.missing_size is None

    sim_view = view.sort_by_similarity(prompt, k=10, brain_key=brain_key)
    assert len(sim_view) == 10

    del index
    dataset.clear_cache()

    assert dataset.get_brain_info(brain_key) is not None

    index = dataset.load_brain_results(brain_key)
    assert index.total_index_size == 1232

    embeddings2, sample_ids2, label_ids2 = index.get_embeddings()
    assert embeddings2.shape == (1232, 512)
    assert sample_ids2.shape == (1232,)
    assert label_ids2.shape == (1232,)

    ids = label_ids2[:100]
    embeddings2, sample_ids2, label_ids2 = index.get_embeddings(label_ids=ids)
    assert embeddings2.shape == (100, 512)
    assert sample_ids2.shape == (100,)
    assert label_ids2.shape == (100,)

    index.remove_from_index(label_ids=ids)

    assert index.total_index_size == 1132

    index.cleanup()
    dataset.delete_brain_run(brain_key)
```
Review and optimize `test_patch_similarity_backend`.

- Similar to `test_image_similarity_backend`, replace hard-coded assert values with dynamic checks.
- Verify that the deletion and cache clearing operations are isolated and do not impact other functionalities.

```diff
- assert index.total_index_size == 1232
+ # Replace with dynamic values based on dataset configurations
```
Committable suggestion was skipped due to low confidence.
```python
class ChromaSimilarityConfig(SimilarityConfig):
    """Configuration for the ChromaDB similarity backend.

    Args:
        embeddings_field (None): the sample field containing the embeddings,
            if one was provided
        model (None): the :class:`fiftyone.core.models.Model` or name of the
            zoo model that was used to compute embeddings, if known
        patches_field (None): the sample field defining the patches being
            analyzed, if any
        supports_prompts (None): whether this run supports prompt queries
        collection_name (None): the name of a ChromaDB collection to use or
            create. If none is provided, a new collection will be created
        metric (None): the embedding distance metric to use when creating a
            new index. Supported values are
            ``("cosine", "dotproduct", "euclidean")``
        url (None): a ChromaDB server URL to use
    """

    def __init__(
        self,
        embeddings_field=None,
        model=None,
        patches_field=None,
        supports_prompts=None,
        collection_name=None,
        metric=None,
        url=None,
        settings=None,
        **kwargs,
    ):
        if metric is not None and metric not in _SUPPORTED_METRICS:
            raise ValueError(
                "Unsupported metric '%s'. Supported values are %s"
                % (metric, tuple(_SUPPORTED_METRICS.keys()))
            )

        super().__init__(
            embeddings_field=embeddings_field,
            model=model,
            patches_field=patches_field,
            supports_prompts=supports_prompts,
            **kwargs,
        )

        self.collection_name = collection_name
        self.metric = metric

        # store privately so these aren't serialized
        self._url = url

    @property
    def method(self):
        return "chromadb"

    @property
    def url(self):
        return self._url

    @url.setter
    def url(self, value):
        self._url = value

    @property
    def max_k(self):
        return None

    @property
    def supports_least_similarity(self):
        return False

    @property
    def supported_aggregations(self):
        return ("mean",)

    def load_credentials(self, url=None):
        self._load_parameters(url=url)


class ChromaSimilarity(Similarity):
```
Review and refine `ChromaSimilarityConfig`.

- Ensure that the error message for unsupported metrics is clear and actionable.
- Verify the URL handling to ensure it supports various ChromaDB configurations.

```diff
- raise ValueError("Unsupported metric '%s'. Supported values are %s" % (metric, tuple(_SUPPORTED_METRICS.keys())))
+ raise ValueError(f"Unsupported metric '{metric}'. Supported values are {tuple(_SUPPORTED_METRICS.keys())}")
```
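For context on `_SUPPORTED_METRICS`, whose definition sits outside this diff: ChromaDB collections select their distance function via the `hnsw:space` metadata key, whose documented values are `l2`, `ip`, and `cosine`. The mapping plausibly looks like the sketch below; the exact dict in this PR is an assumption.

```python
# Plausible shape of the _SUPPORTED_METRICS mapping, translating FiftyOne
# metric names to ChromaDB's hnsw:space identifiers.
_SUPPORTED_METRICS = {
    "cosine": "cosine",
    "dotproduct": "ip",  # ChromaDB calls inner product "ip"
    "euclidean": "l2",   # ChromaDB calls euclidean distance "l2"
}

def validate_metric(metric):
    """Validate a metric name and return the ChromaDB space name, or None."""
    if metric is not None and metric not in _SUPPORTED_METRICS:
        raise ValueError(
            "Unsupported metric '%s'. Supported values are %s"
            % (metric, tuple(_SUPPORTED_METRICS.keys()))
        )

    return None if metric is None else _SUPPORTED_METRICS[metric]
```

The index would then pass the resolved value when creating a collection, e.g. `metadata={"hnsw:space": validate_metric(metric)}`.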
What changes are proposed in this pull request?
Builds out an integration that enables ChromaDB to be used as a backend for FiftyOne's similarity functionality.
How is this patch tested? If it is not, please explain why.
Trying to do that right now; it has passed unit tests on my end, but I can't get FiftyOne to import the module in order to run some tests from @brimoor.
Release Notes
Is this a user-facing change that should be mentioned in the release notes?
Yes.
Chroma can now be used as a backend for the image similarity function.
What areas of FiftyOne does this PR affect?
- `fiftyone` Python library changes

Summary by CodeRabbit

- New Features
- Tests