The use of BlurPool #1

Open · WW2401 opened this issue Oct 25, 2019 · 3 comments

WW2401 commented Oct 25, 2019

Great work, and thanks for sharing.
When I use BlurPool, do I only need to change caffe.proto and replace the original base_conv_layer.cpp? You also mentioned that Caffe uses zero padding instead of other types; can you explain the drawback of using zero padding?

ricky40403 (Owner) commented

1. Yes, and you should freeze the blur convolution weights during training, or they will change during backpropagation (see the sketch after this list).
2. The only drawback should be that it cannot increase the resolution. Note also that the default padding is set to 'reflect' in the original PyTorch repository.
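
For reference, here is a minimal PyTorch sketch of both points (the class name, filter size, and defaults are my own illustration, not the exact code of either repository): the blur kernel is registered as a buffer so backpropagation never touches it, and the input is padded with 'reflect' rather than zeros.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from math import comb


class BlurPool2d(nn.Module):
    """Blur with a fixed binomial kernel, then subsample by `stride`.

    Sketch only: assumes an odd filter size and stride-2 downsampling.
    """

    def __init__(self, channels: int, filt_size: int = 3, stride: int = 2):
        super().__init__()
        assert filt_size % 2 == 1, "this sketch assumes an odd filter size"
        self.stride = stride
        self.channels = channels
        self.pad = (filt_size - 1) // 2
        # 1-D binomial coefficients, e.g. [1, 2, 1] for filt_size=3;
        # their outer product gives the normalized 2-D blur kernel.
        a = torch.tensor([comb(filt_size - 1, k) for k in range(filt_size)],
                         dtype=torch.float32)
        kernel = a[:, None] * a[None, :]
        kernel = kernel / kernel.sum()
        # Registered as a buffer, not a Parameter, so the optimizer never
        # updates it -- the PyTorch counterpart of freezing the blur
        # convolution in Caffe (e.g. param { lr_mult: 0 } on that layer).
        self.register_buffer("kernel", kernel.repeat(channels, 1, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # 'reflect' padding, matching the original repository's default;
        # stock Caffe only provides zero padding here.
        x = F.pad(x, [self.pad] * 4, mode="reflect")
        # Depthwise convolution with the fixed kernel, then stride-2
        # subsampling, both done by a single grouped conv2d call.
        return F.conv2d(x, self.kernel, stride=self.stride,
                        groups=self.channels)
```

In the paper's recipe, a stride-2 pooling (or strided convolution) is split into its stride-1 version followed by this blur-and-subsample step, for example:

```python
pool = nn.Sequential(nn.MaxPool2d(kernel_size=2, stride=1),
                     BlurPool2d(channels=64))
print(pool(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 16, 16])
```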

WW2401 commented Nov 2, 2019

Thanks for your reply. Have you trained a network with BlurPool? How is its performance in terms of speed and accuracy?

ricky40403 commented Nov 4, 2019

Yes, I have trained it on my custom dataset, and the performance is indeed better: both the accuracy and the shifting problem improve at inference time. But I also trained with sparsity, so I cannot tell how the speed would be for a general model.

By the way, I was training a segmentation model, not a classification one.
