
Add async capability to OpensearchVectorStore #11513

Merged: 3 commits into run-llama:main on Mar 1, 2024

Conversation

@ahs8w (Contributor) commented Feb 29, 2024

Description

OpensearchVectorStore does not currently have async capability. This limits the effectiveness of BatchEvalRunner when using OpenSearch as a vector store.

Added async options (modeled after the elasticsearch integration) to the OpensearchVectorStore and OpensearchVectorClient classes. Also included tests, an associated docker-compose.yml for spinning up a local OpenSearch instance, and an updated opensearch pyproject.toml. A usage sketch is included below.

Fixes #10360
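
For illustration, here is a minimal usage sketch of the async path. The constructor arguments and the async method names (`async_add`, `aquery`) are assumptions based on the standard llama-index vector-store interface rather than the exact merged code, and the endpoint/index names are placeholders.

```python
# Minimal sketch: async write and query against a local OpenSearch instance.
# Method names and constructor parameters are assumptions; check the merged API.
import asyncio

from llama_index.core.schema import TextNode
from llama_index.core.vector_stores.types import VectorStoreQuery
from llama_index.vector_stores.opensearch import (
    OpensearchVectorClient,
    OpensearchVectorStore,
)


async def main() -> None:
    # Assumes OpenSearch is running locally, e.g. via the included docker-compose.yml.
    client = OpensearchVectorClient(
        endpoint="http://localhost:9200",
        index="llama-index-async-demo",
        dim=3,  # must match the dimension of the embeddings you store
    )
    vector_store = OpensearchVectorStore(client)

    # Write without blocking the event loop.
    node = TextNode(text="hello async opensearch", embedding=[0.1, 0.2, 0.3])
    await vector_store.async_add([node])

    # Query asynchronously as well.
    query = VectorStoreQuery(query_embedding=[0.1, 0.2, 0.3], similarity_top_k=1)
    result = await vector_store.aquery(query)
    print([n.get_content() for n in (result.nodes or [])])


asyncio.run(main())
```

Because both the write and query paths are awaitable, callers such as BatchEvalRunner can issue many requests concurrently instead of serializing them.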

Type of Change

  • New feature (non-breaking change which adds functionality)

How Has This Been Tested?

Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration.

  • Added new unit/integration tests (a sketch of a representative async test follows this list)
  • Added new notebook (that tests end-to-end)
  • I stared at the code and made sure it makes sense
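
For reference, a hypothetical integration-test sketch in the spirit of the added tests is below. It assumes OpenSearch is reachable at http://localhost:9200 (e.g. via the included docker-compose.yml), that pytest-asyncio is installed, and that the async method names mirror the usage sketch above; it is not the exact test code from this PR.

```python
# Hypothetical integration-test sketch; not the exact tests added in this PR.
import pytest

from llama_index.core.schema import TextNode
from llama_index.core.vector_stores.types import VectorStoreQuery
from llama_index.vector_stores.opensearch import (
    OpensearchVectorClient,
    OpensearchVectorStore,
)


@pytest.mark.asyncio()
async def test_async_add_and_query() -> None:
    client = OpensearchVectorClient(
        endpoint="http://localhost:9200", index="test-async-index", dim=3
    )
    store = OpensearchVectorStore(client)

    # Round-trip a single node through the async write and read paths.
    node = TextNode(text="async round trip", embedding=[0.1, 0.2, 0.3])
    ids = await store.async_add([node])
    assert len(ids) == 1

    result = await store.aquery(
        VectorStoreQuery(query_embedding=[0.1, 0.2, 0.3], similarity_top_k=1)
    )
    assert result.nodes is not None
    assert result.nodes[0].get_content() == "async round trip"
```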

Suggested Checklist:

  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have added Google Colab support for the newly added notebooks.
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • I ran make format; make lint to appease the lint gods

@dosubot dosubot bot added the size:L This PR changes 100-499 lines, ignoring generated files. label Feb 29, 2024
@bwhartlove (Contributor) commented:

I believe this PR will resolve the issue I posted, #10719.

@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Mar 1, 2024
@logan-markewich logan-markewich merged commit d3724db into run-llama:main Mar 1, 2024
8 of 9 checks passed
anoopshrma pushed a commit to anoopshrma/llama_index that referenced this pull request Mar 2, 2024
bdonkey added a commit to bdonkey/gpt_index that referenced this pull request Mar 5, 2024
* main: (2881 commits)
  Feature: Improve batch embedding generation throughput for Cohere in Bedrock (run-llama#11572)
  tqdm: add tdqm.gather (run-llama#11562)
  Fix URLs in Prompts documentation (run-llama#11571)
  Corrected colab links (run-llama#11577)
  add syntatic sugar to create chat prompt / chat message more easily  (run-llama#11583)
  Fix Issue 11565 - The MilvusVectorStore MetaDataFilters FilterCondition.OR is ignored (run-llama#11566)
  docs: fixes LangfuseCallbackHandler link (run-llama#11576)
  GHA: Add Check for repo source (run-llama#11575)
  add raptor (run-llama#11527)
  Logan/v0.10.15 (run-llama#11551)
  feat: adds langfuse callback handler (run-llama#11324)
  fixed storage context update & service context issue (run-llama#11475)
  Add async capability to OpensearchVectorStore (run-llama#11513)
  Logan/fix publish (run-llama#11549)
  Prevent async_response_gen from Stalling with asyncio Timeout (run-llama#11548)
  VideoDB Integration for Retrievers (run-llama#11463)
  fix import error in CLI (run-llama#11544)
  Updated the simple fusion to handle duplicate nodes (run-llama#11542)
  Add mixedbread reranker cookbook (run-llama#11536)
  Fixed some minor gramatical issues (run-llama#11530)
  ...
Izukimat pushed a commit to Izukimat/llama_index that referenced this pull request Mar 29, 2024
Labels
  • lgtm: This PR has been approved by a maintainer
  • size:L: This PR changes 100-499 lines, ignoring generated files.
3 participants