@wangzheallen Thanks for sharing your interesting work!
I have a little question about the feature dimension of the final SVM classifier.
According to your paper and code, the dimension of the VSAD code (i.e. the feature fed to the SVM classifier) is 2*len(f)*len(p), where f denotes descriptors from the scene-PatchNet's feature layer (with dimension 100, reduced from 1024) and p denotes codewords from the object-PatchNet's softmax layer (i.e. probabilities with dimension 256, reduced from 1000).
Even with this dimension reduction, the feature dimension for the SVM is still very high (2×100×256 = 51200 according to your paper). Will this cause any problems for the final classification? What do you think? (Let me know if I am wrong somewhere!)
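For concreteness, here is a rough sketch of how I understand the 2×100×256 layout arises: two D-dimensional statistics per codeword, concatenated over all K codewords. This is only a generic super-vector style aggregation with soft assignments; the exact statistics in the VSAD formulation may differ, and all names and sizes below are my own placeholders.

```python
import numpy as np

# Assumed sizes (my reading of the paper; adjust if wrong):
D = 100   # scene-PatchNet descriptor dim after reduction (1024 -> 100)
K = 256   # object-PatchNet codewords after reduction (1000 -> 256)
N = 150   # number of local patches from one image (arbitrary here)

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(N, D))            # f: per-patch scene features
assignments = rng.dirichlet(np.ones(K), size=N)  # p: per-patch softmax probabilities
centers = rng.normal(size=(K, D))                # codeword means (learned offline)

# Two D-dim statistics per codeword, concatenated -> 2 * D * K dims.
first_order = np.zeros((K, D))
second_order = np.zeros((K, D))
for k in range(K):
    w = assignments[:, k:k + 1]            # soft weight of each patch for codeword k
    residual = descriptors - centers[k]    # difference to codeword k
    first_order[k] = (w * residual).sum(axis=0)
    second_order[k] = (w * residual ** 2).sum(axis=0)

vsad_like = np.concatenate([first_order.ravel(), second_order.ravel()])
print(vsad_like.shape)  # (51200,) == 2 * 100 * 256
```

So, if I understand correctly, the SVM sees one 51200-dimensional vector per image.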
Thanks in advance!
Thanks for your interest in our work!
Based on my experiments, I have not found any problem with performance at such a high dimension. But I have never tried applying PCA directly to the 51200-dimensional feature.
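For anyone who wants to try that, a minimal sketch of applying PCA to the 51200-dimensional VSAD features before the linear SVM, using scikit-learn. This is only an illustration under assumed names and sizes, not part of the released pipeline:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_images, vsad_dim = 500, 2 * 100 * 256                        # 51200-dim code per image
X = rng.normal(size=(n_images, vsad_dim)).astype(np.float32)   # placeholder VSAD features
y = rng.integers(0, 67, size=n_images)                         # placeholder labels (e.g. 67 scene classes)

# PCA keeps at most min(n_samples, n_features) components, so with few
# images the reduced dimension is bounded by the number of samples.
pca = PCA(n_components=256, whiten=True)
X_reduced = pca.fit_transform(X)

clf = LinearSVC(C=1.0)
clf.fit(X_reduced, y)
print(X_reduced.shape)  # (500, 256)
```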