
Update crag eval with benchmark results #209


Triggered via pull request December 6, 2024 00:45
Status Success
Total duration 5m 23s
Artifacts 2

model_test_cpu.yml

on: pull_request
Matrix: Evaluation-Workflow
Genreate-Report (15s)
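The contents of model_test_cpu.yml are not shown on this page. A minimal sketch of a workflow with this shape is given below; the trigger, job IDs, and artifact names come from the run summary above, while the matrix values, model name, and step commands are assumptions for illustration only.

```yaml
# Hypothetical sketch of model_test_cpu.yml. Job IDs and artifact names
# are taken from the run summary; everything else is assumed.
name: model_test_cpu

on: pull_request

jobs:
  Evaluation-Workflow:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Assumed matrix entry, inferred from the artifact name
        # cpu-text-generation-opt-125m-lambada_openai.
        task: [lambada_openai]
    steps:
      - uses: actions/checkout@v4
      - name: Run evaluation (assumed command)
        run: python eval.py --model facebook/opt-125m --task ${{ matrix.task }}
      - name: Upload per-task results
        uses: actions/upload-artifact@v4
        with:
          name: cpu-text-generation-opt-125m-${{ matrix.task }}
          path: results/

  Genreate-Report:
    needs: Evaluation-Workflow
    runs-on: ubuntu-latest
    steps:
      - name: Collect evaluation artifacts
        uses: actions/download-artifact@v4
      - name: Build final report (assumed command)
        run: python generate_report.py --out FinalReport
      - name: Upload final report
        uses: actions/upload-artifact@v4
        with:
          name: FinalReport
          path: FinalReport
```

The `needs: Evaluation-Workflow` dependency matches the run graph, where the report job starts only after the matrix jobs finish.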

Annotations

1 warning
Genreate-Report
ubuntu-latest pipelines will use ubuntu-24.04 soon. For more details, see https://github.com/actions/runner-images/issues/10636

Artifacts

Produced during runtime
Name                                          Size
FinalReport                                   1.55 KB
cpu-text-generation-opt-125m-lambada_openai   5.51 KB