# Auto-merge updates from auto-update branch

mlcommons-bot committed Jan 29, 2025
2 parents efa3923 + 42dee28, commit 73bd244

Showing 24 changed files with 9,254 additions and 0 deletions.

| Model | Scenario | Accuracy (ROUGE1, ROUGE2, ROUGEL, TOKENS_PER_SAMPLE) | Throughput (samples/s) | Latency (ms) |
|---------------|------------|------------------------------------|--------------|-------------------|
| llama2-70b-99 | offline | (61.7021, 37.9679, 39.3617, 610.0) | 0.383 | - |
*Check [CM MLPerf docs](https://docs.mlcommons.org/inference) for more details.*

## Host platform

* OS version: Linux-6.8.0-51-generic-x86_64-with-glibc2.35
* CPU version: x86_64
* Python version: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0]
* MLC version: unknown

## CM Run Command

See [CM installation guide](https://docs.mlcommons.org/inference/install/).

```bash
pip install -U mlcflow

mlc rm cache -f

mlc pull repo gateoverflow@mlperf-automations --checkout=92ea05b829908f4cf0c7d028c19875020cd116ae
```
*Note that if you want to use the [latest automation recipes](https://docs.mlcommons.org/inference) for MLPerf,
you should reload gateoverflow@mlperf-automations without the `--checkout` option and clean the MLC cache as follows:*

```bash
mlc rm repo gateoverflow@mlperf-automations
mlc pull repo gateoverflow@mlperf-automations
mlc rm cache -f
```

## Results

Platform: gh_action-reference-cpu-pytorch_v2.6.0-default_config

Model Precision: fp32

### Accuracy Results
* `ROUGE1`: `61.7021` (required accuracy for closed division: `>= 43.98689`)
* `ROUGE2`: `37.9679` (required accuracy for closed division: `>= 21.81485`)
* `ROUGEL`: `39.3617` (required accuracy for closed division: `>= 28.33004`)
* `TOKENS_PER_SAMPLE`: `610.0` (required accuracy for closed division: `>= 265.005` and `<= 323.895`)
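
Since the closed-division check is a simple bounds comparison, a minimal Python sketch (not part of the MLPerf tooling; values hardcoded from the run above) that validates these metrics might look like this:

```python
# Minimal sketch: compare the measured accuracy metrics against the
# closed-division bounds listed above. None means "no upper bound".
results = {"ROUGE1": 61.7021, "ROUGE2": 37.9679,
           "ROUGEL": 39.3617, "TOKENS_PER_SAMPLE": 610.0}
bounds = {
    "ROUGE1": (43.98689, None),
    "ROUGE2": (21.81485, None),
    "ROUGEL": (28.33004, None),
    "TOKENS_PER_SAMPLE": (265.005, 323.895),
}
for metric, value in results.items():
    low, high = bounds[metric]
    ok = value >= low and (high is None or value <= high)
    print(f"{metric}: {value} -> {'PASS' if ok else 'FAIL'}")
```

All four metrics here pass, matching the summary table at the top of the commit.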

### Performance Results
`Samples per second`: `0.383226`
```
INFO:datasets:PyTorch version 2.6.0+cpu available.
Loading dataset...
Finished loading dataset.
Loading checkpoint shards: 100%|██████████| 15/15 [00:33<00:00, 2.20s/it]
INFO:Llama-70B-MAIN:Starting Benchmark run
/home/mlcuser/venv/mlc/lib/python3.10/site-packages/transformers/generation/configuration_utils.py:628: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.6` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`.
warnings.warn(
/home/mlcuser/venv/mlc/lib/python3.10/site-packages/transformers/generation/configuration_utils.py:633: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.9` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`.
warnings.warn(
/home/mlcuser/venv/mlc/lib/python3.10/site-packages/transformers/generation/configuration_utils.py:676: UserWarning: `num_beams` is set to 1. However, `early_stopping` is set to `True` -- this flag is only used in beam-based generation modes. You should set `num_beams>1` or unset `early_stopping`.
warnings.warn(

No warnings encountered during test.

No errors encountered during test.
INFO:Llama-70B-MAIN:Run Completed!
INFO:Llama-70B-MAIN:Destroying SUT...
INFO:Llama-70B-MAIN:Destroying QSL...
Loaded model
Loaded tokenizer
IssueQuery started with 1 samples
IssueQuery done
Saving outputs to run_outputs/q0.pkl
Samples run: 1
BatchMaker time: 0.006089687347412109
Inference time: 769.5999689102173
Postprocess time: 0.02928757667541504
==== Total time: 769.6353461742401
```
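
The `UserWarning` lines in this log appear because the checkpoint's generation config sets sampling-only knobs (`temperature`, `top_p`, `early_stopping`) while the run uses greedy decoding (`do_sample=False`). As an illustration using the Hugging Face `transformers` API (a sketch only, not the MLPerf reference harness; the `max_new_tokens` value is made up, not taken from this run), one way to silence them is to pass an override that leaves those knobs at their defaults:

```python
# Sketch only: override the checkpoint's generation config so greedy
# decoding carries no unused sampling knobs.
from transformers import GenerationConfig

gen_cfg = GenerationConfig(
    do_sample=False,       # greedy decoding; temperature/top_p stay at defaults
    num_beams=1,
    early_stopping=False,  # only meaningful when num_beams > 1
    max_new_tokens=1024,   # illustrative value
)
# outputs = model.generate(input_ids, generation_config=gen_cfg)
```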
```json
{
"MLC_HOST_CPU_WRITE_PROTECT_SUPPORT": "yes",
"MLC_HOST_CPU_MICROCODE": "0x2b000603",
"MLC_HOST_CPU_FPU_SUPPORT": "yes",
"MLC_HOST_CPU_FPU_EXCEPTION_SUPPORT": "yes",
"MLC_HOST_CPU_BUGS": "spectre_v1 spectre_v2 spec_store_bypass swapgs eibrs_pbrsb bhi",
"MLC_HOST_CPU_TLB_SIZE": "Not Found",
"MLC_HOST_CPU_CFLUSH_SIZE": "64",
"MLC_HOST_CPU_ARCHITECTURE": "x86_64",
"MLC_HOST_CPU_TOTAL_CORES": "48",
"MLC_HOST_CPU_ON_LINE_CPUS_LIST": "0-47",
"MLC_HOST_CPU_VENDOR_ID": "GenuineIntel",
"MLC_HOST_CPU_MODEL_NAME": "Intel(R) Xeon(R) w7-2495X",
"MLC_HOST_CPU_FAMILY": "6",
"MLC_HOST_CPU_THREADS_PER_CORE": "2",
"MLC_HOST_CPU_PHYSICAL_CORES_PER_SOCKET": "24",
"MLC_HOST_CPU_SOCKETS": "1",
"MLC_HOST_CPU_MAX_MHZ": "4800.0000",
"MLC_HOST_CPU_L1D_CACHE_SIZE": "1.1 MiB (24 instances)",
"MLC_HOST_CPU_L1I_CACHE_SIZE": "768 KiB (24 instances)",
"MLC_HOST_CPU_L2_CACHE_SIZE": "48 MiB (24 instances)",
"MLC_HOST_CPU_L3_CACHE_SIZE": "45 MiB (1 instance)",
"MLC_HOST_CPU_NUMA_NODES": "1",
"MLC_HOST_CPU_TOTAL_LOGICAL_CORES": "48",
"MLC_HOST_MEMORY_CAPACITY": "192G",
"MLC_HOST_DISK_CAPACITY": "6.8T"
}
```
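
A quick way to sanity-check this record is to verify that sockets × physical cores × threads per core equals the reported logical core count (1 × 24 × 2 = 48). A minimal sketch, assuming the JSON above is saved as `mlc-host-cpu.json` (a hypothetical filename):

```python
# Minimal sketch: load the host-CPU record and cross-check the core counts.
import json

with open("mlc-host-cpu.json") as f:
    cpu = json.load(f)

logical = (int(cpu["MLC_HOST_CPU_SOCKETS"])
           * int(cpu["MLC_HOST_CPU_PHYSICAL_CORES_PER_SOCKET"])
           * int(cpu["MLC_HOST_CPU_THREADS_PER_CORE"]))
assert logical == int(cpu["MLC_HOST_CPU_TOTAL_LOGICAL_CORES"])  # 1 * 24 * 2 == 48
print(f"{cpu['MLC_HOST_CPU_MODEL_NAME']}: {logical} logical cores")
```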
```json
{
"starting_weights_filename": "https://github.com/mlcommons/cm4mlops/blob/b18ff890ff559e21d2e27a3b54cd26467ac1fd9e/script/get-ml-model-llama2/_cm.json#L51",
"retraining": "no",
"input_data_types": "fp32",
"weight_data_types": "fp32",
"weight_transformations": "no"
}
```
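The commit also adds a Mermaid dependency graph, apparently generated by the MLC automation; each edge runs from a script (name, UID, and variation tags) to a dependency it invoked during this run:
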
```mermaid
graph TD
app-mlperf-inference,d775cac873ee4231_(_reference,_llama2-70b-99,_pytorch,_cpu,_test,_r5.0-dev_default,_bfloat16,_offline_) --> detect,os
app-mlperf-inference,d775cac873ee4231_(_reference,_llama2-70b-99,_pytorch,_cpu,_test,_r5.0-dev_default,_bfloat16,_offline_) --> get,sys-utils-cm
app-mlperf-inference,d775cac873ee4231_(_reference,_llama2-70b-99,_pytorch,_cpu,_test,_r5.0-dev_default,_bfloat16,_offline_) --> get,python
get-mlperf-inference-src,4b57186581024797 --> detect,os
get-mlperf-inference-src,4b57186581024797 --> get,python3
get-mlperf-inference-src,4b57186581024797 --> get,git,repo,_branch.master,_repo.https://github.com/mlcommons/inference
app-mlperf-inference,d775cac873ee4231_(_reference,_llama2-70b-99,_pytorch,_cpu,_test,_r5.0-dev_default,_bfloat16,_offline_) --> get,mlcommons,inference,src
pull-git-repo,c23132ed65c4421d --> detect,os
app-mlperf-inference,d775cac873ee4231_(_reference,_llama2-70b-99,_pytorch,_cpu,_test,_r5.0-dev_default,_bfloat16,_offline_) --> pull,git,repo
get-mlperf-inference-src,4b57186581024797 --> detect,os
get-mlperf-inference-src,4b57186581024797 --> get,python3
get-mlperf-inference-src,4b57186581024797 --> get,git,repo,_branch.master,_repo.https://github.com/mlcommons/inference
get-mlperf-inference-utils,e341e5f86d8342e5 --> get,mlperf,inference,src
app-mlperf-inference,d775cac873ee4231_(_reference,_llama2-70b-99,_pytorch,_cpu,_test,_r5.0-dev_default,_bfloat16,_offline_) --> get,mlperf,inference,utils
app-mlperf-inference-mlcommons-python,ff149e9781fc4b65_(_llama2-70b-99,_offline,_cpu,_pytorch,_bfloat16_) --> detect,os
detect-cpu,586c8a43320142f7 --> detect,os
app-mlperf-inference-mlcommons-python,ff149e9781fc4b65_(_llama2-70b-99,_offline,_cpu,_pytorch,_bfloat16_) --> detect,cpu
app-mlperf-inference-mlcommons-python,ff149e9781fc4b65_(_llama2-70b-99,_offline,_cpu,_pytorch,_bfloat16_) --> get,sys-utils-cm
app-mlperf-inference-mlcommons-python,ff149e9781fc4b65_(_llama2-70b-99,_offline,_cpu,_pytorch,_bfloat16_) --> get,python
get-generic-python-lib,94b62a682bc44791_(_torch_) --> get,python3
app-mlperf-inference-mlcommons-python,ff149e9781fc4b65_(_llama2-70b-99,_offline,_cpu,_pytorch,_bfloat16_) --> get,generic-python-lib,_torch
get-generic-python-lib,94b62a682bc44791_(_torchvision_) --> get,python3
app-mlperf-inference-mlcommons-python,ff149e9781fc4b65_(_llama2-70b-99,_offline,_cpu,_pytorch,_bfloat16_) --> get,generic-python-lib,_torchvision
app-mlperf-inference-mlcommons-python,ff149e9781fc4b65_(_llama2-70b-99,_offline,_cpu,_pytorch,_bfloat16_) --> get,ml-model,llama2,raw,_pytorch
get-preprocessed-dataset-openorca,5614c39cb1564d72_(_validation,_mlcommons_) --> get,sys-utils-cm
get-preprocessed-dataset-openorca,5614c39cb1564d72_(_validation,_mlcommons_) --> get,python3
get-generic-python-lib,94b62a682bc44791_(_package.pyarrow_) --> get,python3
get-preprocessed-dataset-openorca,5614c39cb1564d72_(_validation,_mlcommons_) --> get,generic-python-lib,_package.pyarrow
get-generic-python-lib,94b62a682bc44791_(_package.fastparquet_) --> get,python3
get-preprocessed-dataset-openorca,5614c39cb1564d72_(_validation,_mlcommons_) --> get,generic-python-lib,_package.fastparquet
get-generic-python-lib,94b62a682bc44791_(_package.transformers_) --> get,python3
get-preprocessed-dataset-openorca,5614c39cb1564d72_(_validation,_mlcommons_) --> get,generic-python-lib,_package.transformers
get-preprocessed-dataset-openorca,5614c39cb1564d72_(_validation,_mlcommons_) --> download-and-extract,_rclone,_url.mlc-inference:mlcommons-inference-wg-public/open_orca
app-mlperf-inference-mlcommons-python,ff149e9781fc4b65_(_llama2-70b-99,_offline,_cpu,_pytorch,_bfloat16_) --> get,preprocessed,dataset,openorca,_validation,_mlcommons
generate-mlperf-inference-user-conf,3af4475745964b93 --> detect,os
detect-cpu,586c8a43320142f7 --> detect,os
generate-mlperf-inference-user-conf,3af4475745964b93 --> detect,cpu
generate-mlperf-inference-user-conf,3af4475745964b93 --> get,python
get-mlperf-inference-src,4b57186581024797 --> detect,os
get-mlperf-inference-src,4b57186581024797 --> get,python3
get-mlperf-inference-src,4b57186581024797 --> get,git,repo,_branch.master,_repo.https://github.com/mlcommons/inference
generate-mlperf-inference-user-conf,3af4475745964b93 --> get,mlcommons,inference,src
get-mlperf-inference-sut-configs,c2fbf72009e2445b --> get,cache,dir,_name.mlperf-inference-sut-configs
generate-mlperf-inference-user-conf,3af4475745964b93 --> get,sut,configs
app-mlperf-inference-mlcommons-python,ff149e9781fc4b65_(_llama2-70b-99,_offline,_cpu,_pytorch,_bfloat16_) --> generate,user-conf,mlperf,inference
app-mlperf-inference-mlcommons-python,ff149e9781fc4b65_(_llama2-70b-99,_offline,_cpu,_pytorch,_bfloat16_) --> get,loadgen
get-mlperf-inference-src,4b57186581024797 --> detect,os
get-mlperf-inference-src,4b57186581024797 --> get,python3
get-mlperf-inference-src,4b57186581024797 --> get,git,repo,_branch.master,_repo.https://github.com/mlcommons/inference
app-mlperf-inference-mlcommons-python,ff149e9781fc4b65_(_llama2-70b-99,_offline,_cpu,_pytorch,_bfloat16_) --> get,mlcommons,inference,src
app-mlperf-inference-mlcommons-python,ff149e9781fc4b65_(_llama2-70b-99,_offline,_cpu,_pytorch,_bfloat16_) --> get,mlcommons,inference,src
get-generic-python-lib,94b62a682bc44791_(_package.psutil_) --> get,python3
app-mlperf-inference-mlcommons-python,ff149e9781fc4b65_(_llama2-70b-99,_offline,_cpu,_pytorch,_bfloat16_) --> get,generic-python-lib,_package.psutil
get-generic-python-lib,94b62a682bc44791_(_package.transformers_) --> get,python3
app-mlperf-inference-mlcommons-python,ff149e9781fc4b65_(_llama2-70b-99,_offline,_cpu,_pytorch,_bfloat16_) --> get,generic-python-lib,_package.transformers
get-generic-python-lib,94b62a682bc44791_(_package.datasets_) --> get,python3
app-mlperf-inference-mlcommons-python,ff149e9781fc4b65_(_llama2-70b-99,_offline,_cpu,_pytorch,_bfloat16_) --> get,generic-python-lib,_package.datasets
get-generic-python-lib,94b62a682bc44791_(_package.sentencepiece_) --> get,python3
app-mlperf-inference-mlcommons-python,ff149e9781fc4b65_(_llama2-70b-99,_offline,_cpu,_pytorch,_bfloat16_) --> get,generic-python-lib,_package.sentencepiece
get-generic-python-lib,94b62a682bc44791_(_package.protobuf_) --> get,python3
app-mlperf-inference-mlcommons-python,ff149e9781fc4b65_(_llama2-70b-99,_offline,_cpu,_pytorch,_bfloat16_) --> get,generic-python-lib,_package.protobuf
get-generic-python-lib,94b62a682bc44791_(_package.accelerate_) --> get,python3
app-mlperf-inference-mlcommons-python,ff149e9781fc4b65_(_llama2-70b-99,_offline,_cpu,_pytorch,_bfloat16_) --> get,generic-python-lib,_package.accelerate
get-generic-python-lib,94b62a682bc44791_(_package.absl-py_) --> get,python3
app-mlperf-inference-mlcommons-python,ff149e9781fc4b65_(_llama2-70b-99,_offline,_cpu,_pytorch,_bfloat16_) --> get,generic-python-lib,_package.absl-py
get-generic-python-lib,94b62a682bc44791_(_package.evaluate_) --> get,python3
app-mlperf-inference-mlcommons-python,ff149e9781fc4b65_(_llama2-70b-99,_offline,_cpu,_pytorch,_bfloat16_) --> get,generic-python-lib,_package.evaluate
get-generic-python-lib,94b62a682bc44791_(_package.nltk_) --> get,python3
app-mlperf-inference-mlcommons-python,ff149e9781fc4b65_(_llama2-70b-99,_offline,_cpu,_pytorch,_bfloat16_) --> get,generic-python-lib,_package.nltk
get-generic-python-lib,94b62a682bc44791_(_package.numpy_) --> get,python3
app-mlperf-inference-mlcommons-python,ff149e9781fc4b65_(_llama2-70b-99,_offline,_cpu,_pytorch,_bfloat16_) --> get,generic-python-lib,_package.numpy
get-generic-python-lib,94b62a682bc44791_(_package.rouge-score_) --> get,python3
app-mlperf-inference-mlcommons-python,ff149e9781fc4b65_(_llama2-70b-99,_offline,_cpu,_pytorch,_bfloat16_) --> get,generic-python-lib,_package.rouge-score
get-generic-python-lib,94b62a682bc44791_(_package.more-itertools_) --> get,python3
app-mlperf-inference-mlcommons-python,ff149e9781fc4b65_(_llama2-70b-99,_offline,_cpu,_pytorch,_bfloat16_) --> get,generic-python-lib,_package.more-itertools
get-generic-python-lib,94b62a682bc44791_(_package.compressed_tensors_) --> get,python3
app-mlperf-inference-mlcommons-python,ff149e9781fc4b65_(_llama2-70b-99,_offline,_cpu,_pytorch,_bfloat16_) --> get,generic-python-lib,_package.compressed_tensors
detect-cpu,586c8a43320142f7 --> detect,os
benchmark-program,19f369ef47084895 --> detect,cpu
benchmark-program-mlperf,cfff0132a8aa4018 --> benchmark-program,program
app-mlperf-inference-mlcommons-python,ff149e9781fc4b65_(_llama2-70b-99,_offline,_cpu,_pytorch,_bfloat16_) --> benchmark-mlperf
```