I am trying to get the system to recognize a custom embedding endpoint so I can use a special embedding model. The server is OpenAI API-compliant and exposes the /v1/embeddings path. My full embedding endpoint is: http://embedder.example.com:8000/v1/embeddings.
In the LlamaIndex embedding settings I have tried setting the provider to openai and changing OPENAI_API_BASE to http://embedder.example.com:8000, and also tried appending /v1 and /v1/embeddings. It always times out with a connection error, and the embedder log shows no incoming requests at all.
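For reference, this is roughly the configuration I would expect to work in code (a minimal sketch, assuming the llama-index-embeddings-openai-like package; the model name and API key are placeholders):

```python
from llama_index.core import Settings
from llama_index.embeddings.openai_like import OpenAILikeEmbedding

# api_base should end at /v1, not the full /v1/embeddings path --
# the client appends /embeddings itself.
Settings.embed_model = OpenAILikeEmbedding(
    model_name="my-custom-model",  # placeholder model name
    api_base="http://embedder.example.com:8000/v1",
    api_key="dummy",  # many OpenAI-compatible servers ignore the key
)
```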
Otherwise I have no problem using this endpoint directly from Python, for instance.
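For example, hitting it with the official openai client works fine (a minimal sketch; the model name is a placeholder):

```python
from openai import OpenAI

# Point the official client at the custom server; base_url ends at /v1
# because the client appends /embeddings on its own.
client = OpenAI(base_url="http://embedder.example.com:8000/v1", api_key="dummy")

resp = client.embeddings.create(
    model="my-custom-model",  # placeholder model name
    input=["hello world"],
)
print(len(resp.data[0].embedding))
```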
Should I be taking another approach?
It should work; Llama 3.1 works on my machine. Just select the ollama provider in the Indexes -> Embeddings settings and set the model_name keyword argument to the model name.
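In code, the equivalent configuration looks roughly like this (a sketch assuming the llama-index-embeddings-ollama package and a local Ollama server; swap in your own model name and base URL):

```python
from llama_index.core import Settings
from llama_index.embeddings.ollama import OllamaEmbedding

# model_name must match a model already pulled into Ollama.
Settings.embed_model = OllamaEmbedding(
    model_name="llama3.1",              # assumed model name
    base_url="http://localhost:11434",  # default Ollama endpoint
)
```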