
[Question]: LlamaIndex, FireworksAI Known Models Error. LlamaIndex doesn't recognize firefunction-v2, even though it is a valid fireworks model. #14469

Open
BenShaener opened this issue Jun 29, 2024 · 3 comments
Labels: question (Further information is requested)

Comments

@BenShaener

Question Validation

  • I have searched both the documentation and Discord for an answer.

Question

When trying to use the fireworks model: firefunction-v2, I get the following error:

File "/usr/local/lib/python3.9/site-packages/llama_index/core/indices/base.py", line 407, in as_query_engine
    return RetrieverQueryEngine.from_args(
  File "/usr/local/lib/python3.9/site-packages/llama_index/core/query_engine/retriever_query_engine.py", line 110, in from_args
    response_synthesizer = response_synthesizer or get_response_synthesizer(
  File "/usr/local/lib/python3.9/site-packages/llama_index/core/response_synthesizers/factory.py", line 73, in get_response_synthesizer
    llm.metadata,
  File "/usr/local/lib/python3.9/site-packages/llama_index/llms/fireworks/base.py", line 84, in metadata
    context_window=fireworks_modelname_to_contextsize(self.model),
  File "/usr/local/lib/python3.9/site-packages/llama_index/llms/fireworks/utils.py", line 61, in fireworks_modelname_to_contextsize
    raise ValueError(
ValueError: Unknown model: accounts/fireworks/models/firefunction-v2. Please provide a valid Fireworks model name.Known models are: accounts/fireworks/models/llama-v2-7b-chat, accounts/fireworks/models/llama-v2-13b-chat, accounts/fireworks/models/llama-v2-70b-chat, accounts/fireworks/models/llama-v2-34b-code-instruct, accounts/fireworks/models/llamaguard-7b, accounts/fireworks/models/llama-v3-8b-instruct, accounts/fireworks/models/llama-v3-70b-instruct, accounts/fireworks/models/mistral-7b-instruct-4k, accounts/fireworks/models/mixtral-8x7b-instruct, accounts/fireworks/models/firefunction-v1, accounts/fireworks/models/mixtral-8x22b-instruct

Is there any way to override this error, or is something wrong that's causing it? firefunction-v2 is supported by Fireworks and should have no problem working with LlamaIndex.

@BenShaener added the question label Jun 29, 2024

dosubot bot commented Jun 29, 2024

Hi @BenShaener! I'm here to help you with any bugs, questions, or contributions you have for the repository. Let's tackle this issue together!

The error occurs because the firefunction-v2 model is not listed in the ALL_AVAILABLE_MODELS dictionary in llama_index/llms/fireworks/utils.py. To fix this, you can add firefunction-v2 to the MISTRAL_MODELS dictionary there (which is merged into ALL_AVAILABLE_MODELS) with an appropriate context size. Here's how:

MISTRAL_MODELS = {
    "accounts/fireworks/models/mistral-7b-instruct-4k": 16384,
    "accounts/fireworks/models/mixtral-8x7b-instruct": 32768,
    "accounts/fireworks/models/firefunction-v1": 32768,
    "accounts/fireworks/models/mixtral-8x22b-instruct": 65536,
    "accounts/fireworks/models/firefunction-v2": 32768,  # Add this line
}

ALL_AVAILABLE_MODELS = {
    **LLAMA_MODELS,
    **MISTRAL_MODELS,
}

By adding the firefunction-v2 model to the dictionary, the fireworks_modelname_to_contextsize function will recognize it as a valid model and return the appropriate context size, thus preventing the error [1].
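As a quick sanity check (a sketch, assuming you have added the entry to the installed utils.py as shown above), the lookup that previously raised should now succeed:

from llama_index.llms.fireworks.utils import fireworks_modelname_to_contextsize

# should print the context size you registered instead of raising ValueError
print(fireworks_modelname_to_contextsize("accounts/fireworks/models/firefunction-v2"))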

Additionally, you can configure the model directly in your code as shown below:

from llama_index.llms.fireworks import Fireworks

llm = Fireworks(
    model="accounts/fireworks/models/firefunction-v2",
    api_key="YOUR_API_KEY"
)

This ensures that the firefunction-v2 model is used correctly in your application [2].
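For example, you can pass this LLM through to the query engine that produced the traceback above (a sketch, assuming index is a VectorStoreIndex you have already built):

# `index` is assumed to be an existing VectorStoreIndex; passing llm
# explicitly so the response synthesizer reads context_window from the
# firefunction-v2 entry instead of failing the lookup
query_engine = index.as_query_engine(llm=llm)
response = query_engine.query("Your question here")
print(response)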


@BenShaener (Author)

@dosu if I make a dictionary called MISTRAL_MODELS locally, will it still change the allowed models?


dosubot bot commented Jun 29, 2024

No, creating a dictionary called MISTRAL_MODELS locally will not change the allowed models in LlamaIndex. The allowed models are defined in the MISTRAL_MODELS and ALL_AVAILABLE_MODELS dictionaries inside the installed package, in llama_index/llms/fireworks/utils.py. A local dictionary of the same name will not affect the one that LlamaIndex uses internally: fireworks_modelname_to_contextsize reads the package's own dictionary to map model names to context sizes and validate them [1][2].
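If you would rather not edit the installed package files, a minimal runtime workaround is to patch the module-level dictionary before building your index or query engine (a sketch, assuming ALL_AVAILABLE_MODELS in llama_index/llms/fireworks/utils.py is the dictionary fireworks_modelname_to_contextsize consults, as the traceback suggests; 8192 is an assumed context window, so verify it against Fireworks' documentation):

import llama_index.llms.fireworks.utils as fw_utils

# register the missing model at runtime; mutating the dict in place means
# fireworks_modelname_to_contextsize sees the new entry on its next call
fw_utils.ALL_AVAILABLE_MODELS["accounts/fireworks/models/firefunction-v2"] = 8192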

