Update crag eval with benchmark results #214
Triggered via: pull request, December 6, 2024 00:45
Status: Success
Total duration: 3m 47s
Artifacts: 2

Workflow file: model_test_hpu.yml
Trigger: on: pull_request
Matrix: Evaluation-Workflow
Job: Genreate-Report (11s)
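The run graph shows a matrix job (Evaluation-Workflow) feeding a report job (Genreate-Report). In GitHub Actions that shape is typically declared along these lines; this is a minimal sketch, not the repository's actual model_test_hpu.yml — the matrix axes, step commands, and checkout version are all assumptions:

```yaml
name: model_test_hpu
on: pull_request

jobs:
  Evaluation-Workflow:
    # Hypothetical matrix axes; the real axes are not visible on the run page.
    strategy:
      matrix:
        model: [opt-125m]
        task: [lambada_openai]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run evaluation
        # Placeholder step standing in for the real HPU evaluation command.
        run: echo "evaluate ${{ matrix.model }} on ${{ matrix.task }}"

  Genreate-Report:
    # Runs only after every matrix leg of Evaluation-Workflow succeeds.
    needs: Evaluation-Workflow
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Generate report
        # Placeholder step standing in for the real report aggregation.
        run: echo "collect evaluation results into FinalReport"
```

The `needs:` key is what produces the two-stage graph shown in the run view: the report job is held until all matrix legs finish.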

Annotations

1 warning (Genreate-Report): ubuntu-latest pipelines will use ubuntu-24.04 soon. For more details, see https://github.com/actions/runner-images/issues/10636
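The warning above disappears if the job pins a concrete runner image instead of tracking the `ubuntu-latest` alias, which GitHub periodically re-points to newer releases. A sketch; whether this project should stay on 22.04 or move to 24.04 is an assumption to be decided by its maintainers:

```yaml
jobs:
  Genreate-Report:
    # Pin an explicit image so the job does not change behavior
    # when ubuntu-latest is re-pointed to ubuntu-24.04.
    runs-on: ubuntu-22.04
```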

Artifacts

Produced during runtime:

Name                                         | Size
FinalReport                                  | 1.55 KB
hpu-text-generation-opt-125m-lambada_openai  | 6.02 KB