Issues: huggingface/lighteval
[FT] JudgeLLM should support litellm backend
feature request
#474 opened Dec 22, 2024 by JoelNiklaus
[BUG] Issue with the scorer Attribute Initialization in ROUGE
bug
#470 opened Dec 20, 2024 by ryan-minato
[FT] Rerun evaluations with new metrics based on completions saved in details file
feature request
#467 opened Dec 19, 2024 by JoelNiklaus
[BUG] Issue with LightevalTaskConfig.stop_sequence Attribute When Unset
bug
#462 opened Dec 19, 2024 by ryan-minato
[BUG] Issue with CACHE_DIR Default Value in Accelerate Pipeline
bug
#460 opened Dec 19, 2024 by ryan-minato
[FT] remove openai endpoint and only use litellm
feature request
#458 opened Dec 18, 2024 by NathanHB
[BUG] how to eval large scale model use 1dp+8pp?
bug
#447 opened Dec 13, 2024 by mxjmtxrm
[FT] Align parameter names in config files and config classes
feature request
#439 opened Dec 12, 2024 by albertvillanova
[FT] Fail faster when passing unsupported metrics to InferenceEndpointModel
feature request
#436 opened Dec 11, 2024 by albertvillanova
[FT] Enable the evaluation of any function
feature request
#430 opened Dec 10, 2024 by JoelNiklaus
[FT] Adding caching for each dataset run
feature request
#417 opened Dec 2, 2024 by JoelNiklaus
[FT] Add System Prompt field in LightevalTaskConfig that can be used by model clients
feature request
#410 opened Nov 28, 2024 by JoelNiklaus
[FT] The word "pretrained" is required in model_args but not in model_config_path
feature request
New feature/request
#405
opened Nov 25, 2024 by
albertvillanova
[FT] Support llama.cpp inference
feature request
#402 opened Nov 22, 2024 by JoelNiklaus
[FT] Is it possible to save the predictions to prevent rerunning expensive inference
feature request
#396 opened Nov 19, 2024 by JoelNiklaus
[BUG] Can't use lighteval to evaluate the nanotron
bug
#395 opened Nov 19, 2024 by alexchen4ai
[FT] Evaluation using a multi-document RAG based on statistical tools and LLM as judge
feature request
#379 opened Oct 30, 2024 by louisbrulenaudet
[EVAL]: Add more African Benchmarks
good first issue, help wanted, new task
#373 opened Oct 24, 2024 by dadelani
[FT] More general approach than output_regex to model answer extraction
feature request
#360 opened Oct 14, 2024 by sadra-barikbin
[FT] Single token completion loglikelihood auto-detection
feature request, low prio
#355 opened Oct 10, 2024 by hynky1999
[BUG] assertion error assert text[: len(left)] == left on MATH with Qwen-Math-2.5
bug
#345 opened Oct 7, 2024 by d1shs0ap