Hello, I trained BiSeNetv2 on my custom dataset and found that the probability maps have a grid-like appearance, which looks very unnatural.
After looking more closely, I found that you use PixelShuffle in the last upsampling layer, which causes the grid artifacts (it seems you also use it in v1). In the BiSeNetv2 paper, however, the authors use plain bilinear interpolation to upsample the results, and that gives smooth probability maps in my experiments. Why use PixelShuffle here?
So what is the idea behind using PixelShuffle to upsample the final results? I also don't see an obvious improvement in the validation score from using it.
By the way, there are also some differences between your implementation of the Gather-and-Expansion Layer and the paper's description. I cannot tell whether this affects the training results significantly.
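To make the comparison concrete, here is a minimal sketch of the two kinds of output head I mean. The class names, channel counts, and upsampling factor are placeholders for illustration, not the actual code in this repo:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelShuffleHead(nn.Module):
    """Upsample logits with PixelShuffle: predict up_factor**2 copies of the
    class channels, then rearrange them into a higher-resolution map."""
    def __init__(self, in_chan, n_classes, up_factor=8):
        super().__init__()
        self.conv_out = nn.Conv2d(in_chan, n_classes * up_factor ** 2, kernel_size=1)
        self.up = nn.PixelShuffle(up_factor)

    def forward(self, x):
        return self.up(self.conv_out(x))

class BilinearHead(nn.Module):
    """Upsample logits with plain bilinear interpolation, as described in the paper."""
    def __init__(self, in_chan, n_classes, up_factor=8):
        super().__init__()
        self.conv_out = nn.Conv2d(in_chan, n_classes, kernel_size=1)
        self.up_factor = up_factor

    def forward(self, x):
        logits = self.conv_out(x)
        return F.interpolate(logits, scale_factor=self.up_factor,
                             mode='bilinear', align_corners=False)

# quick shape check
feat = torch.randn(1, 128, 64, 64)
print(PixelShuffleHead(128, 19)(feat).shape)  # torch.Size([1, 19, 512, 512])
print(BilinearHead(128, 19)(feat).shape)      # torch.Size([1, 19, 512, 512])
```

Swapping the first head for the second removed the grid pattern from the probability maps in my runs.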