Your work is amazing, but I have some questions about the limited descriptions in your paper of the experiments on PubFig & CelebA and Tiny-ImageNet & Caltech-256.
How many classes of the CelebA dataset are used for training? And are the experimental settings for PubFig & CelebA and Tiny-ImageNet & Caltech-256 the same as for the CIFAR-10/Tiny-ImageNet experiment: the POOD augmentation for the surrogate-model training stage, the number of training epochs during the attack phase, the optimizer parameters, the random seeds, and so on?
I would appreciate it if you could share the code for these datasets or a more specific description of the experimental setup.
I use 200 classes from CelebA to train the model, but if you want to include more classes, that should be fine as well. For the second question: yes, the pipeline is the same across all the experiments.
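For reference, selecting a fixed number of identities from an identity-labeled dataset like CelebA can be sketched as below. This is only an illustrative assumption of how such a subset might be built (here, keeping the most frequent identities and remapping them to contiguous class indices), not the authors' actual code; the helper name and the toy label array are hypothetical.

```python
from collections import Counter

import numpy as np


def select_top_identities(labels, n_classes=200):
    """Keep only samples whose identity is among the n_classes most
    frequent ones, and remap those identities to 0..n_classes-1."""
    counts = Counter(labels)
    keep = [ident for ident, _ in counts.most_common(n_classes)]
    remap = {ident: new for new, ident in enumerate(keep)}
    # indices of the samples to keep, and their remapped labels
    idx = [i for i, y in enumerate(labels) if y in remap]
    new_labels = [remap[labels[i]] for i in idx]
    return np.array(idx), np.array(new_labels)


# toy example: 5 identities, keep the 3 most frequent
labels = [7, 7, 7, 3, 3, 9, 9, 9, 9, 1, 5]
idx, y = select_top_identities(labels, n_classes=3)
```

The returned index array can then be passed to a `torch.utils.data.Subset` (or similar) to build the 200-class training split.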