Error in concat function: cannot get 2 branches of result #87

Open

ShahrinShahpar opened this issue Dec 7, 2022 · 0 comments

@ShahrinShahpar
I have 2 branches of output, and a snippet of the program is given below:

def forward(self, samples: NestedTensor):
    # get the backbone features
    features = self.backbone(samples)
    # forward the feature pyramid
    features_fpn = self.fpn([features[1], features[2], features[3]])

    batch_size = features[0].shape[0]
    # run the regression and classification branch
    regression = self.regression(features_fpn[1]) * 100  # 8x
    classification = self.classification(features_fpn[1])
    anchor_points = self.anchor_points(samples)
    # decode the points as prediction
    output_coord = regression + anchor_points
    output_class = classification
    out = torch.cat((output_class, output_coord))

    return out

When I try to compile this, it shows me this error:

[UNILOG][FATAL][XCOM_SIZE_UNMATCH][The object's size is not not matching the requirement.] xir::Op{name = P2PNet__P2PNet_3645, type = concat}'s axis is error. It's 0
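For reference, I think the axis the error points at is the default dim=0 of torch.cat, which here concatenates the two branches along the batch dimension. A minimal sketch, with shapes assumed from the outputs printed further down:

import torch

# Shapes assumed from the printed outputs: each branch is (batch, num_points, 2).
output_class = torch.zeros(1, 51200, 2)
output_coord = torch.zeros(1, 51200, 2)

# torch.cat defaults to dim=0, the batch axis, giving (2, 51200, 2);
# this is the axis 0 the compiler complains about.
print(torch.cat((output_class, output_coord)).shape)         # torch.Size([2, 51200, 2])

# Concatenating along dim=1 stacks the branches per sample, giving
# (1, 102400, 2), which matches the CPU result shown below.
print(torch.cat((output_class, output_coord), dim=1).shape)  # torch.Size([1, 102400, 2])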

I updated the code to this:

def forward(self, samples: NestedTensor):
    # get the backbone features
    features = self.backbone(samples)
    # forward the feature pyramid
    features_fpn = self.fpn([features[1], features[2], features[3]])
    batch_size = features[0].shape[0]
    # run the regression and classification branch
    regression = self.regression(features_fpn[1]) * 100  # 8x
    classification = self.classification(features_fpn[1])
    anchor_points = self.anchor_points(samples)
    # decode the points as prediction
    output_coord = regression + anchor_points
    output_class = classification
    outt = torch.cat((output_class, output_coord), 1)
    print(outt)
    return outt

When I try to compile this, it shows me these warnings:

[UNILOG][WARNING] xir::Op{name = P2PNet__P2PNet_3872, type = eltwise-fix} has been assigned to CPU: [DPU only supports positive "input_channel"(0)].

[UNILOG][WARNING] xir::Op{name = P2PNet__P2PNet_3875, type = concat-fix} has been assigned to CPU: [Input xir::Op{name = P2PNet__P2PNet_3872, type = eltwise-fix} is not in DPU subgraph. And output dimension is not 4.].

[UNILOG][WARNING] xir::Op{name = P2PNet__P2PNet_3870_new, type = const-fix} has been assigned to CPU: [Has no fanout or at least one fanout is out of DPU subgraph.].

[UNILOG][WARNING] xir::Op{name = P2PNet__P2PNet_3875, type = concat-fix} has been assigned to CPU: [Input xir::Op{name = P2PNet__P2PNet_3872, type = eltwise-fix} is not in DPU subgraph. And output dimension is not 4.].
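If I read these warnings correctly, the element-wise add (regression + anchor_points) and the concat seem to be pushed to the CPU because the tensors at that point are 3-D (the concat warning explicitly says the output dimension is not 4). One workaround I am considering, only a sketch and not verified, is to cut the decoding out of forward() and do it on the host; assuming the anchor points depend only on the input size, they can be precomputed there:

def forward(self, samples: NestedTensor):
    # get the backbone features
    features = self.backbone(samples)
    # forward the feature pyramid
    features_fpn = self.fpn([features[1], features[2], features[3]])
    # run the regression and classification branch
    regression = self.regression(features_fpn[1]) * 100  # 8x
    classification = self.classification(features_fpn[1])
    # Return the raw branches; adding the (precomputed) anchor points and
    # concatenating the branches would be done on the host after the DPU run,
    # so only 4-D, DPU-friendly ops stay in the compiled graph.
    return classification, regression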

With the updated code, the xmodel is created, but I am not getting the expected result.

My CPU result for the code is:

output dimension: (1, 102400, 2)

tensor([[[ 4.3518, -6.3351],
[ 5.4877, -5.5600],
[ 4.8831, -5.5953],
...,
[1278.6753, 631.6872],
[1271.6644, 639.0435],
[1277.7587, 632.8604]]], grad_fn=)

but on the DPU I am getting:

output_ndim: (1, 51200, 2)
[array([[ 1.5 , -1.5 ],
[-8.5 , -3. ],
[-0.75, -1.25],
...,
[ 1.25, -2.25],
[-2.25, 0.75],
[-0.5 , -5. ]], dtype=float32)]
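The DPU result has exactly half the points of the CPU result (51200 vs 102400), so it looks like I am only reading back one of the two concatenated branches. A rough sketch of what I mean by reading both output tensors and stitching them together on the host with the VART Python API (the .xmodel file name is a placeholder, and dtype/fix-point scaling is glossed over):

import numpy as np
import vart
import xir

graph = xir.Graph.deserialize("p2pnet.xmodel")  # placeholder file name
subgraphs = [s for s in graph.get_root_subgraph().toposort_child_subgraph()
             if s.has_attr("device") and s.get_attr("device").upper() == "DPU"]
runner = vart.Runner.create_runner(subgraphs[0], "run")

input_tensors = runner.get_input_tensors()
output_tensors = runner.get_output_tensors()
# Buffers sized from the tensor dims; real code must match the quantized dtype.
input_data = [np.zeros(tuple(t.dims), dtype=np.float32, order="C") for t in input_tensors]
output_data = [np.zeros(tuple(t.dims), dtype=np.float32, order="C") for t in output_tensors]

job_id = runner.execute_async(input_data, output_data)
runner.wait(job_id)

# If classification and regression come back as two (1, 51200, 2) tensors,
# concatenating them on the host reproduces the (1, 102400, 2) CPU shape.
out = np.concatenate(output_data, axis=1)
print(out.shape)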

Can you tell me how to fix this?

To get some understanding of what I am doing, please visit the link below. The work is similar but not the same:

https://blog.csdn.net/wjytbest/article/details/124188661
