
llm server for lm-eval. #82

Merged

Conversation

lkk12014402 (Collaborator)
Description

Initialize the model with lm-eval (https://github.com/opea-project/GenAIEval), which can support multiple devices, and start a server with FastAPI.

lkk12014402 (Collaborator, Author) commented May 22, 2024

Usage:

export MODEL="hf"
export MODEL_ARGS="pretrained=Intel/neural-chat-7b-v3-3"
export DEVICE="cpu"
python self_hosted_hf.py

Corresponds to this PR.
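lm-eval conventionally encodes model arguments as comma-separated `key=value` pairs, which is the format the `MODEL_ARGS` export above uses. A minimal sketch of how `self_hosted_hf.py` might read those environment variables at startup (the `parse_model_args` helper is illustrative, not code from this PR):

```python
import os

def parse_model_args(args_str: str) -> dict:
    """Parse lm-eval style 'key=value,key=value' model args into a dict."""
    result = {}
    for pair in filter(None, args_str.split(",")):
        key, _, value = pair.partition("=")
        result[key.strip()] = value.strip()
    return result

# Read the same environment variables the usage example exports,
# falling back to the documented defaults.
model = os.environ.get("MODEL", "hf")
model_args = parse_model_args(
    os.environ.get("MODEL_ARGS", "pretrained=Intel/neural-chat-7b-v3-3")
)
device = os.environ.get("DEVICE", "cpu")

print(model, model_args, device)
```

The parsed dict would then typically be handed to the lm-eval model constructor, with the FastAPI app exposing the evaluation entry point on top.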

@lkk12014402 lkk12014402 force-pushed the enable_lm_eval_with_microservice branch from e32eebc to a3511b0 Compare May 30, 2024 04:30
@chensuyue chensuyue merged commit ed311f5 into opea-project:main May 30, 2024
6 checks passed
ganesanintel pushed a commit to ganesanintel/GenAIComps that referenced this pull request Jun 3, 2024
lkk12014402 pushed a commit that referenced this pull request Aug 8, 2024