
the computational consumption may be underestimated #5

Closed
lixcli opened this issue Sep 20, 2022 · 4 comments

lixcli commented Sep 20, 2022

Hi,

I tried to migrate the model to TensorFlow 1 and used a 512*512*3 image to calculate the FLOPs and parameters of the TF1 model with TF1's profiling toolkits. Here is the result:

output = ESDNet(img,48,32,64,32,1) # normal size
======================End of Report==========================
FLOPs: 219.042771678G;    Trainable params: 27.338274M
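
For reference, here is a minimal sketch of how FLOPs and params can be counted with TF1's built-in profiler (`tf.profiler`). `ESDNet` here stands for the TF1 port of the model; this is an illustrative assumption, not necessarily the exact script used:

```python
import tensorflow as tf  # TensorFlow 1.x

g = tf.Graph()
with g.as_default():
    img = tf.placeholder(tf.float32, shape=[1, 512, 512, 3])
    output = ESDNet(img, 48, 32, 64, 32, 1)  # normal size, as above

    # Count total float operations in the graph. Note that this also
    # includes resize/up-sample ops, which matters for the point below.
    flops = tf.profiler.profile(
        g, options=tf.profiler.ProfileOptionBuilder.float_operation())
    # Count trainable parameters.
    params = tf.profiler.profile(
        g,
        options=tf.profiler.ProfileOptionBuilder
                .trainable_variables_parameter())
    print('FLOPs:', flops.total_float_ops,
          'Trainable params:', params.total_parameters)
```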

For comparison, I built a DMCNN model with TensorFlow 1. Here are its FLOPs and parameters:

======================End of Report==========================
FLOPs: 101.802540544G;    Trainable params: 2.377423M

which is smaller than ESDNet's (while ESDNet has smaller MACs and params than DMCNN as shown in the paper).

TF1 counts the computation cost of up-sampling operations (e.g. tf.image.resize_bilinear), while most PyTorch profiling toolkits (e.g. thop) ignore the cost of operations (e.g. F.interpolate) for which no counting rule has been pre-defined.

Since every SAM module contains more than one F.interpolate, I think the computational consumption is underestimated, and this cost cannot be ignored for a fair comparison.
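
To make the difference concrete, here is a minimal, self-contained sketch. `FunctionalUp`, `ModuleUp`, and the ~11 ops-per-output-element rule are illustrative assumptions for this example, not the paper's code; depending on the thop version, nn.Upsample may or may not have a default rule, so `custom_ops` makes the counting explicit:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from thop import profile

def count_upsample_bilinear(m, x, y):
    # Rough assumption: ~11 float ops per output element for bilinear
    # interpolation (coefficient computation + 4 muls + 3 adds). This
    # constant is an estimate, not an official value.
    m.total_ops += torch.DoubleTensor([int(11 * y.numel())])

class FunctionalUp(nn.Module):
    # Up-sampling via F.interpolate inside forward(): thop attaches
    # hooks to modules, so this functional call is invisible to it.
    def forward(self, x):
        return F.interpolate(x, scale_factor=2, mode='bilinear',
                             align_corners=False)

class ModuleUp(nn.Module):
    # The same op as an nn.Upsample module: thop can hook it and apply
    # the counting rule registered via custom_ops.
    def __init__(self):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode='bilinear',
                              align_corners=False)
    def forward(self, x):
        return self.up(x)

x = torch.randn(1, 32, 256, 256)
ops_f, _ = profile(FunctionalUp(), inputs=(x,))
ops_m, _ = profile(ModuleUp(), inputs=(x,),
                   custom_ops={nn.Upsample: count_upsample_bilinear})
print(ops_f, ops_m)  # ops_f is 0; ops_m reflects the resize cost
```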

The performance of this work is very impressive. Thanks for sharing! :)

XinYu-Andy (Member) commented


Thank you very much!
We used thop to calculate the cost for all the methods.
Regarding "ESDNet has smaller MACs and params than DMCNN as shown in the paper": in fact, the paper shows ESDNet has more params than DMCNN. As for the MACs, I am not familiar with how the cost is calculated in TF, but maybe, as you said, there are some built-in issues in thop. In fact, in our early experiments we could reduce some channel numbers to get a lower cost while the visual quality stayed relatively similar (still far better than DMCNN).
In our work, we mainly aim to build an effective demoireing method for 4K images, which is the first principle, and then try to reduce the computational cost.


lixcli commented Sep 21, 2022

That's right. ESDNet still has a small cost and much better performance than other methods, even when calculated with TensorFlow 1 toolkits. Again, thanks for sharing your code.
I will close this issue. :)

lixcli closed this as completed Sep 21, 2022

lixcli commented Sep 24, 2022

I checked my TF code again. There was a different kernel size in the SAM module. Here is the result after the correction.

======================End of Report==========================
FLOPs: 135.26723124G;    Trainable params: 6.650316M

XinYu-Andy (Member) commented


Yup! This result looks more normal.
