Merge branch 'master' into zixuan/sdxl
violetch24 authored Jul 30, 2024
2 parents 78b49b9 + 41244d3 commit 7e967c4
Showing 22 changed files with 523 additions and 350 deletions.
4 changes: 2 additions & 2 deletions .azure-pipelines/model-test-3x.yml
@@ -112,7 +112,7 @@ stages:
displayName: "Publish report"
- script: |
if [ $(is_perf_reg) == 'true' ]; then
echo "[Performance Regression] Some model performance regression occurred, please check artifacts and reports."
echo "Some benchmark regression occurred or the reference data need to be updated, please check artifacts and reports."
exit 1
fi
displayName: "Specify performance regression"
displayName: "Specify regression"
32 changes: 2 additions & 30 deletions .azure-pipelines/model-test.yml
@@ -40,33 +40,20 @@ parameters:
displayName: Run ONNX models?
type: boolean
default: true
- name: MXNet_Model
displayName: Run MXNet models?
type: boolean
default: false

- name: TensorFlowModelList
type: object
default:
- resnet50v1.5
- ssd_resnet50_v1
# - ssd_mobilenet_v1_ckpt
# - inception_v1
# - darknet19
# - resnet-101
- name: PyTorchModelList
type: object
default:
# - resnet18
- resnet18_fx
- name: ONNXModelList
type: object
default:
- resnet50-v1-12
- name: MXNetModelList
type: object
default:
- resnet50v1

stages:
- stage: TensorFlowModels
@@ -114,21 +101,6 @@ stages:
modelName: ${{ model }}
framework: "onnxrt"

- stage: MXNetModels
displayName: Run MXNet Model
pool: MODEL_PERF_TEST
dependsOn: []
condition: and(succeeded(), eq('${{ parameters.MXNet_Model }}', 'true'))
jobs:
- ${{ each model in parameters.MXNetModelList }}:
- job:
displayName: ${{ model }}
steps:
- template: template/model-template.yml
parameters:
modelName: ${{ model }}
framework: "mxnet"

- stage: GenerateLogs
displayName: Generate Report
pool:
@@ -191,7 +163,7 @@ stages:
displayName: "Publish report"
- script: |
if [ $(is_perf_reg) == 'true' ]; then
echo "[Performance Regression] Some model performance regression occurred, please check artifacts and reports."
echo "Some benchmark regression occurred or the reference data need to be updated, please check artifacts and reports."
exit 1
fi
displayName: "Specify performance regression"
displayName: "Specify regression"
10 changes: 3 additions & 7 deletions .azure-pipelines/scripts/models/generate_report.sh
@@ -245,13 +245,9 @@ function generate_html_core {
if((new_result == nan && previous_result == nan) || new_result == "unknown"){
printf("<td class=\"col-cell col-cell3\" colspan=2></td>");
} else{
if(new_result == nan) {
job_status = "fail"
status_png = "background-color:#FFD2D2";
printf("<td style=\"%s\" colspan=2></td>", status_png);
} else{
printf("<td class=\"col-cell col-cell3\" colspan=2></td>");
}
job_status = "fail"
status_png = "background-color:#FFD2D2";
printf("<td style=\"%s\" colspan=2></td>", status_png);
}
}
}
@@ -72,6 +72,7 @@ FRAMEWORK="pytorch"
source /neural-compressor/.azure-pipelines/scripts/fwk_version.sh 'latest'
if [[ "${inc_new_api}" == "3x"* ]]; then
FRAMEWORK_VERSION="latest"
export LD_LIBRARY_PATH=/usr/local/lib/:$LD_LIBRARY_PATH
else
FRAMEWORK_VERSION=${pytorch_version}
TORCH_VISION_VERSION=${torchvision_version}
1 change: 1 addition & 0 deletions .azure-pipelines/scripts/ut/3x/run_3x_pt.sh
@@ -5,6 +5,7 @@ echo "${test_case}"

# install requirements
echo "set up UT env..."
export LD_LIBRARY_PATH=/usr/local/lib/:$LD_LIBRARY_PATH
pip install -r /neural-compressor/test/3x/torch/requirements.txt
pip install pytest-cov
pip install pytest-html
1 change: 1 addition & 0 deletions .azure-pipelines/scripts/ut/run_itrex.sh
@@ -6,6 +6,7 @@ echo "run itrex ut..."

# install inc 3x deps
pip install -r /neural-compressor/requirements_pt.txt
export LD_LIBRARY_PATH=/usr/local/lib/:$LD_LIBRARY_PATH

# prepare itrex
git clone https://github.com/intel/intel-extension-for-transformers.git /intel-extension-for-transformers
4 changes: 0 additions & 4 deletions .azure-pipelines/ut-itrex.yml
@@ -13,10 +13,6 @@ pr:
- requirements.txt
- .azure-pipelines/scripts/ut/run_itrex.sh
- .azure-pipelines/ut-itrex.yml
exclude:
- neural_compressor/common
- neural_compressor/torch
- neural_compressor/tensorflow

pool: MODEL_PERF_TEST

9 changes: 9 additions & 0 deletions docs/source/faq.md
@@ -17,3 +17,12 @@ ImportError: libGL.so.1: cannot open shared object file: No such file or directory
#### Issue 4:
A dependency conflict for the conda package *neural-compressor-full* (this binary is only available from v1.13 to v2.1.1) may leave the conda installation pending for a long time.
**Solution:** run *conda install sqlalchemy=1.4.27 alembic=1.7.7 -c conda-forge* before installing *neural-compressor-full*.
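For reference, here is the workaround as a short shell session; this is only a sketch, and the install channel for *neural-compressor-full* in the last command is an assumption that may differ in your environment.
```shell
# Pin the two conflicting packages first so the conda solver can finish
# instead of hanging on the dependency conflict described above.
conda install sqlalchemy=1.4.27 alembic=1.7.7 -c conda-forge
# Then install neural-compressor-full; the channel below is an assumption,
# use whichever channel you normally install it from.
conda install neural-compressor-full -c intel
```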
#### Issue 5:
If you run the 3X torch extension API inside a Docker container, you may encounter the following error:
```shell
ValueError: No threading layer could be loaded.
HINT:
Intel TBB is required, try:
$ conda/pip install tbb
```
**Solution:** TBB is already installed by `requirements_pt.txt`; you only need to set it up with `export LD_LIBRARY_PATH=/usr/local/lib/:$LD_LIBRARY_PATH`.
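As a quick sanity check, the following sketch assumes TBB was placed under `/usr/local/lib/` by the `requirements_pt.txt` install; adjust the prefix if your container puts it elsewhere.
```shell
# Confirm the TBB shared library is actually present in the expected prefix
# (the /usr/local/lib/ location is an assumption based on the setup above).
ls /usr/local/lib/ | grep -i tbb
# Make the loader pick it up before running the 3X torch extension API.
export LD_LIBRARY_PATH=/usr/local/lib/:$LD_LIBRARY_PATH
```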