
E5-mistral-7b-instruct embedding support #2936

Closed

DavidPeleg6 opened this issue Feb 20, 2024 · 7 comments

Comments

@DavidPeleg6

DavidPeleg6 commented Feb 20, 2024

Hi :)
I noticed in the roadmap that embedding support is planned, and was wondering whether it includes LLMs such as Mistral as well.

Specifically, e5-mistral has the added benefit of shipping only the adapter in its HF repo, so in this case we could deploy a single pod for both inference and truly SOTA embeddings without added cost.

I assume it would be relatively easy to implement, since decoder-only architectures are already supported.
For e5-mistral, I think the tweak would be to add a function to LLMEngine that returns the last hidden state rather than sampling from the output, yes? If so, I could try to add the PR myself.

Please let me know if there's anything I can do to help.
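
For illustration only, a minimal sketch of the pooling idea described above, using plain transformers rather than vLLM internals (the model name comes from this thread; last-token pooling and L2 normalization are assumptions about how such embedding models are typically used):

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_name = "intfloat/e5-mistral-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Mistral-style tokenizers often lack a pad token; reuse EOS so batching works.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

texts = ["Hello my name is",
         "The best thing about vLLM is that it supports many different models"]
batch = tokenizer(texts, padding=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq_len, hidden_size)

# Take the hidden state of the last non-padding token instead of sampling.
last_token_idx = batch["attention_mask"].sum(dim=1) - 1
embeddings = hidden[torch.arange(hidden.size(0)), last_token_idx]
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # (2, 4096)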

@alboimDor

Very interesting and important feature!

@Labmem009

Looking forward to this feature too!

@Opdoop

Opdoop commented May 20, 2024

Any update on this?

@alboimDor

@Opdoop it was merged to the main branch last week via issue 3737

@DarkLight1337
Member

DarkLight1337 commented Jun 3, 2024

Closed as completed by #3734.

@palash-fin

I started this model using a DaemonSet over my GPU node pool on AKS.

When I used the example code from https://docs.vllm.ai/en/latest/getting_started/examples/openai_embedding_client.html to hit the API:

from openai import OpenAI

# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://:8000/v1"

client = OpenAI(
    # defaults to os.environ.get("OPENAI_API_KEY")
    api_key=openai_api_key,
    base_url=openai_api_base,
)

models = client.models.list()
print(models)
model = "intfloat/e5-mistral-7b-instruct"

responses = client.embeddings.create(
    input=["Hello my name is",
           "The best thing about vLLM is that it supports many different models"],
    model=model,
)

for data in responses.data:
    print(data.embedding)  # list of floats of length 4096

I am facing this error:

File "E:\o_env\lib\site-packages\openai\_base_client.py", line 1030, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'object': 'error', 'message': 'base64 encoding is not currently supported', 'type': 'BadRequestError', 'param': None, 'code': 400}

What should I pass to the client to get a correct response?

@DarkLight1337
Member


base64 embedding is not supported yet, so for now you need to pass encoding_format="float". It will be supported in the upcoming release.
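
For reference, only the embeddings call from the snippet above needs the extra argument (client setup as in the earlier comment):

responses = client.embeddings.create(
    input=["Hello my name is",
           "The best thing about vLLM is that it supports many different models"],
    model="intfloat/e5-mistral-7b-instruct",
    encoding_format="float",  # vLLM does not accept the default base64 encoding yet
)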
