polygraphy run: saving per-layer outputs #2055
@pranavm-nvidia Do we have documentation about the saved output format from Polygraphy? My current best guess is that it's a Base64-encoded numpy.ndarray instance. Is that correct?
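The guess above can be illustrated with a minimal round-trip sketch. To be clear, this is not Polygraphy's actual serialization code (the real format is handled by `polygraphy.json`, as discussed below); it only demonstrates the general base64-ndarray-in-JSON idea, and the array contents here are made up.

```python
import base64
import json

import numpy as np

# Hypothetical example: serialize an ndarray into JSON by base64-encoding
# its raw bytes alongside the dtype and shape needed to rebuild it.
arr = np.arange(6, dtype=np.float32).reshape(2, 3)
payload = json.dumps({
    "dtype": str(arr.dtype),
    "shape": arr.shape,
    "data": base64.b64encode(arr.tobytes()).decode("ascii"),
})

# Decoding reverses the steps to recover an identical array.
obj = json.loads(payload)
decoded = np.frombuffer(
    base64.b64decode(obj["data"]), dtype=obj["dtype"]
).reshape(obj["shape"])

assert np.array_equal(arr, decoded)
```

This is why the raw `lst` field in the saved file looks like an opaque string: the array bytes are only meaningful once decoded back with the matching dtype and shape.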
thanks,

```python
import json
f = open('onnx_out.json', 'rb')
print('onnx:', len(info_onnx.getitem(runners_onnx[0])[0]))
for layer in info_onnx.getitem(runners_onnx[0])[0]:
```

How can I read trt_out.json or onnx_out.json, or is there some other way to solve my problem? I want to locate the issue in the onnx --> tensorrt step; I am sure the pytorch --> onnx step is OK.
I have solved the problem!
@ymzx Could you share how you solved it? It may benefit others as well. Thanks
Yes, it would be my honor. I'll share a link in this issue once I've organized a complete write-up.
For reference, you can use:

```python
from polygraphy.comparator import RunResults

results = RunResults.load("onnx_out.json")
for runner_name, iterations in results.items():
    for iteration in iterations:
        for tensor_name, value in iteration.items():
            print(tensor_name, value)
```

@ymzx Also note that you can use `polygraphy run --load-outputs onnx_out.json trt_out.json` to compare the two sets of saved results.
Closing due to >14 days without activity. Please feel free to reopen if the issue still exists. Thanks
I've summarized it in this article: https://mp.weixin.qq.com/s/hrOZYw6eFVU9EMl8aa3tQg
Thank you for raising this issue, it's really helpful! I have some suggestions that could help prevent potential bugs. I found that using polygraphy's json module (`polygraphy.json`) in combination with the built-in `json` functionality lets you recover the type and shape of the numpy arrays directly. Doing so can also speed up load times.

```python
import numpy as np
import json
from polygraphy import json as pjson

f = open('onnx_out.json')
info_onnx = json.load(f)
f = open('trt_out.json')
info_trt = json.load(f)
f.close()

onnx_outputs, trt_outputs = info_onnx['lst'][0][1][0], info_trt['lst'][0][1][0]
onnx_layers_outputs, trt_layers_outputs = onnx_outputs['outputs'], trt_outputs['outputs']
print('onnx node count:', len(onnx_layers_outputs.keys()), ',', 'trt node count:', len(trt_layers_outputs.keys()))

trouble_layers, ok_layers = [], []
for layer, value in onnx_layers_outputs.items():
    if layer in trt_layers_outputs.keys():
        onnx_out = pjson.from_json(json.dumps(value)).arr
        trt_out = pjson.from_json(json.dumps(value)).arr
        print(np.size(onnx_out), np.size(trt_out), layer)
        np.testing.assert_allclose(onnx_out, trt_out, rtol=0.001, atol=0.001)
```
Why are you using the same `value` for both `onnx_out` and `trt_out`?
I think in this way, the comparison would always pass, since the ONNX output is being compared against itself; `trt_out` should come from `trt_layers_outputs[layer]` instead.
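A corrected version of that comparison loop can be sketched generically. Synthetic dicts stand in here for the decoded per-layer arrays (in the real script they would come from `pjson.from_json(...)` on the ONNX and TRT results), and the layer names are made up:

```python
import numpy as np

# Stand-ins for the decoded per-layer outputs of each runner.
onnx_layers = {"conv1": np.ones((2, 3)), "relu1": np.full((2, 3), 2.0)}
trt_layers = {"conv1": np.ones((2, 3)), "relu1": np.full((2, 3), 2.5)}

trouble_layers, ok_layers = [], []
for layer, onnx_out in onnx_layers.items():
    if layer not in trt_layers:
        continue
    trt_out = trt_layers[layer]  # the TRT value, not the ONNX one
    try:
        np.testing.assert_allclose(onnx_out, trt_out, rtol=0.001, atol=0.001)
        ok_layers.append(layer)
    except AssertionError:
        trouble_layers.append(layer)

print("ok:", ok_layers, "trouble:", trouble_layers)
```

Collecting failures into `trouble_layers` instead of letting the first `assert_allclose` raise makes it easy to see every mismatched node in one pass.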
The .pth and ONNX results are consistent, but the TRT results differ. What could be going on?
Hi, my model is very large, and I added an output to each node. As a result, system memory (32 GB) and swap (16 GB) are not enough when I save the results. How can I optimize this? @nvpohanh @zerollzeng
You can mark only some tensors as outputs, instead of all of them at the same time.
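Concretely, using the same CLI flags already shown in this thread, marking only a subset might look like the following. This is a sketch, not an official recipe, and the tensor names are hypothetical placeholders for real node output names in your model:

```shell
# Save outputs for only two named tensors instead of "--onnx-outputs mark all",
# which keeps the saved JSON (and peak memory) much smaller.
polygraphy run yolov5s-face.onnx --onnxrt \
    --onnx-outputs /model/conv1/output /model/conv2/output \
    --save-results=onnx_subset.json
```

A practical workflow is to bisect: mark a few outputs spread through the network, find the first region where ONNX and TRT diverge, then re-run marking only the tensors inside that region.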
A question: when using the TensorRT tool polygraphy to save the output of every node of an ONNX model, I ran polygraphy run yolov5s-face.onnx --onnxrt --onnx-outputs mark all --save-results=onnx_out.json. The resulting onnx_out.json is about 500 MB. onnx_out is essentially a dictionary with only two keys, ['lst', 'polygraphy_class']; onnx_out['polygraphy_class'] is RunResults, and onnx_out['lst'] = '4Z2JLu95tlLzlQoa5jdQlur5Hcb602EW+u3KWvWL6UbvmXBRAPmIlvtDjAr...................'. Can the per-layer outputs be recovered from this? Does it need to be decoded again? The official TensorRT documentation uses exactly this command to dump per-layer results and does not mention any decoding step, so I'm confused. Is this related to the TRT version, or something else? I hope someone familiar with this can help. Thanks.