
Fix: Enable Usage of Local LLM in ChatEngine when chat_mode = context #11444

Merged
merged 2 commits into main on Feb 28, 2024

Conversation

Thomas-AH-Heller
Contributor

Description

This PR adds an optional llm parameter, defaulting to None, to the ContextChatEngine constructor in context.py. This allows other LLMs (for example, a locally hosted model) to be used instead of relying solely on the default OpenAI LLM.

Changes:

Introduced llm: Optional[LLM] = None parameter to the ContextChatEngine constructor in context.py.
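
For illustration, a minimal usage sketch of what this enables, assuming the Ollama LLM integration is installed; the model name and data path are placeholders and are not part of this PR:

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms.ollama import Ollama

# Placeholder local model; any non-OpenAI LLM works the same way.
local_llm = Ollama(model="mistral", request_timeout=120.0)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

# With this fix, the llm kwarg is passed through to ContextChatEngine
# instead of being ignored in favor of the default OpenAI LLM.
chat_engine = index.as_chat_engine(chat_mode="context", llm=local_llm)
print(chat_engine.chat("What do these documents cover?"))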

Fixes # (issue)

Type of Change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)

How Has This Been Tested?

Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration

  • I stared at the code and made sure it makes sense

Suggested Checklist:

  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have added Google Colab support for the newly added notebooks.
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • I ran make format; make lint to appease the lint gods

@dosubot dosubot bot added the size:XS This PR changes 0-9 lines, ignoring generated files. label Feb 27, 2024
@@ -69,6 +69,7 @@ def from_defaults(
prefix_messages: Optional[List[ChatMessage]] = None,
node_postprocessors: Optional[List[BaseNodePostprocessor]] = None,
context_template: Optional[str] = None,
llm: Optional[LLM] = None,
Collaborator


below, we should also add llm = llm or llm_from_settings_or_context(...)
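
A rough sketch of that suggested fallback inside from_defaults (assumed shape based on the reviewer's comment, not the verbatim merged code):

# Prefer an explicitly passed llm; otherwise resolve it from the global
# Settings or the legacy ServiceContext rather than defaulting to OpenAI.
llm = llm or llm_from_settings_or_context(Settings, service_context)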

Contributor Author


Good catch, thanks!

@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Feb 28, 2024
@logan-markewich logan-markewich merged commit 272685f into run-llama:main Feb 28, 2024
8 checks passed
Dominastorm pushed a commit to uptrain-ai/llama_index that referenced this pull request Feb 28, 2024
…run-llama#11444)

* Add initialization of llm parameter to None in ContextChatEngine constructor

* Refactor llm initialization
anoopshrma pushed a commit to anoopshrma/llama_index that referenced this pull request Mar 2, 2024
…run-llama#11444)

* Add initialization of llm parameter to None in ContextChatEngine constructor

* Refactor llm initialization
Izukimat pushed a commit to Izukimat/llama_index that referenced this pull request Mar 29, 2024
…run-llama#11444)

* Add initialization of llm parameter to None in ContextChatEngine constructor

* Refactor llm initialization
Labels
lgtm This PR has been approved by a maintainer size:XS This PR changes 0-9 lines, ignoring generated files.
2 participants