.xmodel file generation #1507

Open
ravi54116 opened this issue Feb 15, 2025 · 1 comment

@ravi54116

I extracted the .xmodel and .prototxt from the model-zoo entry below (model.yaml), but I didn't get any quantized.h5 file; I directly got the compiled model file. When I run it, I get the following error in the smartcam docker.

name: yolov3_coco_416_tf2
type: xmodel
board: zcu102 & zcu104 & kv260
download link: https://www.xilinx.com/bin/public/openDownload?filename=yolov3_coco_416_tf2-zcu102_zcu104_kv260-r2.5.0.tar.gz
checksum: ae417567b1462c7d5b6708285643f140


[I 2025-02-15 17:51:28.097 ServerApp] Kernel restarted: 127238e8-e26c-4698-bb71-2edcbd07c57e
WARNING: Logging before InitGoogleLogging() is written to STDERR
W0215 17:51:48.107272 146 dpu_runner_base_imp.cpp:676] CHECK fingerprint fail ! model_fingerprint 0x101000016010407 dpu_fingerprint 0x101000016010406
F0215 17:51:48.107425 146 dpu_runner_base_imp.cpp:648] fingerprint check failure.

Hello, I am trying to generate an .xmodel file from quantized.h5 and arch.json. I used Vitis AI 2.5 for this, because the newer Vitis AI model zoo no longer has tf2_yolov3_coco_416_416_65.9G_2.5. I am facing the following error. Please help me resolve it.

(vitis-ai-tensorflow2) Vitis-AI /workspace > vai_c_tensorflow2 -m /workspace/tf2_yolov3_coco_416_416_65.9G_2.5/quantized/quantized.h5 -a /workspace/arch.json -o /workspace -n yolov3_coco_416_tf2


  • VITIS_AI Compilation - Xilinx Inc.

[INFO] Namespace(batchsize=1, inputs_shape=None, layout='NHWC', model_files=['/workspace/tf2_yolov3_coco_416_416_65.9G_2.5/quantized/quantized.h5'], model_type='tensorflow2', named_inputs_shape=None, out_filename='/tmp/yolov3_coco_416_tf2_0x101000016010406_org.xmodel', proto=None)
[INFO] tensorflow2 model: /workspace/tf2_yolov3_coco_416_416_65.9G_2.5/quantized/quantized.h5
[INFO] keras version: 2.6.0
[INFO] Tensorflow Keras model type: functional
[INFO] parse raw model : 0%| | 0/181 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow2/bin/xnnc-run", line 33, in
    sys.exit(load_entry_point('xnnc==2.5.0', 'console_scripts', 'xnnc-run')())
  File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow2/lib/python3.7/site-packages/xnnc/main.py", line 49, in main
    runner.normal_run(args)
  File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow2/lib/python3.7/site-packages/xnnc/runner.py", line 123, in normal_run
    target=target,
  File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow2/lib/python3.7/site-packages/xnnc/xconverter.py", line 145, in run
    model_files, model_type, _layout, in_shapes, batchsize
  File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow2/lib/python3.7/site-packages/xnnc/core.py", line 123, in make_xmodel
    model_type=model_t,
  File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow2/lib/python3.7/site-packages/xnnc/translator/tensorflow_translator.py", line 107, in to_xmodel
    model_type,
  File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow2/lib/python3.7/site-packages/xnnc/translator/tensorflow_translator.py", line 177, in create_xmodel
    name, layers, layout, in_shapes, batchsize
  File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow2/lib/python3.7/site-packages/xnnc/translator/tensorflow_translator.py", line 458, in __create_xmodel_from_tf2
    ), f"[ERROR] Invalid shape of input layer: shape: {shape} (N,H,W,C), name: {xnode.op_name}"
AssertionError: [ERROR] Invalid shape of input layer: shape: [1, None, None, 3] (N,H,W,C), name: image_input
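The assertion fires because the quantized model's input signature has dynamic height and width ([1, None, None, 3]), while the compiler requires every N,H,W,C dimension to be concrete. A minimal sketch of that check (a hypothetical simplification, not xnnc's actual code):

```python
def input_shape_ok(shape):
    """An NHWC input shape is acceptable only if all four dims are concrete ints."""
    return len(shape) == 4 and all(isinstance(d, int) for d in shape)

print(input_shape_ok([1, None, None, 3]))  # dynamic H/W -> False, triggers the AssertionError
print(input_shape_ok([1, 416, 416, 3]))    # fully concrete shape -> True
```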

@jimheaton

Please see the following from the VAI-2.5 User Guide, I believe this describes what you are seeing:

Sometimes, the TensorFlow model does not contain input tensor shape information, which might cause the compilation to fail. You can specify the input tensor shape with an extra option like --options '{"input_shape": "1,224,224,3"}'.
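Applied to the command in this issue, the fix would look like the sketch below. The shape 1,416,416,3 is my assumption based on the model name (a 416x416 input); verify it against your model before use.

```shell
# Hypothetical re-run of the failing compile, adding the input-shape hint
# described in the VAI-2.5 User Guide. Adjust "1,416,416,3" if your model's
# input differs.
vai_c_tensorflow2 \
  -m /workspace/tf2_yolov3_coco_416_416_65.9G_2.5/quantized/quantized.h5 \
  -a /workspace/arch.json \
  -o /workspace \
  -n yolov3_coco_416_tf2 \
  --options '{"input_shape": "1,416,416,3"}'
```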
