
Update LlamaCloud integration #14254

Merged

11 commits merged into main from javier/llamacloud-framework-integration on Jun 22, 2024

Conversation

@Javtor (Member) commented Jun 19, 2024

Description

Updates LlamaCloud integration

Version Bump?

Did I bump the version in the pyproject.toml file of the package I am updating? (Except for the llama-index-core package)

  • Yes
  • No

Type of Change

  • Breaking change (fix or feature that would cause existing functionality to not work as expected)

How Has This Been Tested?

  • Added new unit/integration tests
  • I stared at the code and made sure it makes sense

@dosubot dosubot bot added the size:XL This PR changes 500-999 lines, ignoring generated files. label Jun 19, 2024
@Javtor Javtor marked this pull request as draft June 19, 2024 22:41
@Javtor Javtor marked this pull request as ready for review June 21, 2024 03:58
f"pipelines/execution?id={pipeline_execution.id}"
)

return pipeline_execution.id
Collaborator comment:

Ah, so nice to remove this actually haha

@@ -62,6 +86,110 @@ def __init__(
self._service_context = None
self._callback_manager = callback_manager or Settings.callback_manager

def _wait_for_pipeline_ingestion(
Collaborator comment:

Note to self: At some point, this probably needs to be async
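As a rough illustration of the synchronous wait this note refers to, a polling loop along these lines could sit behind `_wait_for_pipeline_ingestion` (the function name aside, the status strings, callback, and timeout here are assumptions for the sketch, not the actual LlamaCloud implementation):

```python
import time


def wait_for_ingestion(get_status, interval=1.0, timeout=300.0):
    """Poll get_status() until it reports success, failure, or a timeout.

    Hypothetical sketch: get_status is assumed to return one of
    "PENDING", "SUCCESS", or "ERROR" for the pipeline ingestion job.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == "SUCCESS":
            return True
        if status == "ERROR":
            raise RuntimeError("pipeline ingestion failed")
        time.sleep(interval)
    raise TimeoutError("pipeline ingestion timed out")
```

Making this async would mostly mean swapping `time.sleep` for `await asyncio.sleep` and polling with an async client, which is presumably what the note anticipates.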

**kwargs: Any,
) -> "LlamaCloudIndex":
-    """Build a Vectara index from a sequence of documents."""
+    """Build a LlamaCloud managed index from a sequence of documents."""
Collaborator comment:

lol

@logan-markewich (Collaborator) left a comment:

Nice, this looks super good to me!

@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Jun 21, 2024
return [
SentenceSplitter(),
OpenAIEmbedding(),
]


def get_client(
api_key: Optional[str] = None,
base_url: Optional[str] = None,
app_url: Optional[str] = None,
Contributor comment:

What's app_url for? It seems like it's not being used?

Collaborator comment:

Previously, this was used to print a link to the webapp.
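For context, a hypothetical sketch of how an `app_url` base could be combined with the execution path seen earlier in the diff to print such a webapp link (the base URL value and exact path format are assumptions, not the library's actual behavior):

```python
# Hypothetical helper: join an app_url base with the
# "pipelines/execution?id=..." path shown in the diff above.
def execution_url(app_url: str, execution_id: str) -> str:
    return f"{app_url.rstrip('/')}/pipelines/execution?id={execution_id}"


# Example with a placeholder base URL and id:
print(execution_url("https://cloud.llamaindex.ai", "1234"))
```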

@logan-markewich (Collaborator):

Besides some nits, I think this is good to merge. Don't want to hold anything up, so going to merge.

@logan-markewich logan-markewich merged commit b517486 into main Jun 22, 2024
8 checks passed
@logan-markewich logan-markewich deleted the javier/llamacloud-framework-integration branch June 22, 2024 05:12