Description
🐛 Describe the bug
Failing scenario: torchbench_amp_fp16_training, xpu train timm_efficientdet
Traceback (most recent call last):
File "/home/sdp/actions-runner/_work/torch-xpu-ops/pytorch/benchmarks/dynamo/common.py", line 4177, in run
) = runner.load_model(
File "/home/sdp/actions-runner/_work/torch-xpu-ops/pytorch/benchmarks/dynamo/torchbench.py", line 320, in load_model
benchmark = benchmark_cls(
File "/home/sdp/actions-runner/_work/torch-xpu-ops/benchmark/torchbenchmark/util/model.py", line 39, in __call__
obj = type.__call__(cls, *args, **kwargs)
File "/home/sdp/actions-runner/_work/torch-xpu-ops/benchmark/torchbenchmark/models/timm_efficientdet/__init__.py", line 55, in __init__
raise NotImplementedError("The original model code forces the use of CUDA.")
NotImplementedError: The original model code forces the use of CUDA.
model_fail_to_load
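The failure happens before any training runs: the model's `__init__` raises `NotImplementedError` whenever the backend is not CUDA, so XPU cannot load it at all. A minimal sketch of the device-agnostic fallback pattern such a check could use instead (the function name and the boolean availability flags are hypothetical, standing in for calls like `torch.cuda.is_available()` / `torch.xpu.is_available()`):

```python
def pick_backend(preferred: str, cuda_ok: bool, xpu_ok: bool) -> str:
    """Hypothetical helper: return the preferred accelerator when it is
    available, otherwise fall back to another backend or CPU, rather than
    raising NotImplementedError as the original model code does."""
    if preferred == "cuda" and cuda_ok:
        return "cuda"
    if preferred == "xpu" and xpu_ok:
        return "xpu"
    # Fall back to any available accelerator, then CPU.
    if cuda_ok:
        return "cuda"
    if xpu_ok:
        return "xpu"
    return "cpu"

# On an XPU-only machine (no CUDA), a CUDA request degrades gracefully:
print(pick_backend("cuda", cuda_ok=False, xpu_ok=True))  # → xpu
```

With logic like this, the benchmark would report a (possibly slower) result on XPU instead of `model_fail_to_load`.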
Versions
torch-xpu-ops: 31c4001
pytorch: 0f81473d7b4a1bf09246410712df22541be7caf3 + PRs: 127277,129120
device: PVC 1100, 803.61, 0.5.1