Suspected bug about best permutation return #2

Open
persistz opened this issue Apr 6, 2021 · 5 comments

Comments

@persistz

persistz commented Apr 6, 2021

In the paper, the authors state that they find the best permutation for subsequent iterations, and the implementation reuses part of the Auto-Attack code.
However, the key step of ‘finding the best permutation’ appears to be missing from the implementation.

Take MultiTargetedAttack as an example. At line 1365 of attack_ops.py, the function run_once returns only x_best_adv, instead of returning x_best as Auto-Attack does.
Tracing the assignments to x_best_adv, it is not hard to see that this causes only random noise to be returned for the examples on which the attack fails.
The same kind of error also occurs in the returned now_p.
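For reference, here is a minimal sketch of the bookkeeping pattern being referred to. It is not the repository's run_once and not Auto-Attack's exact APGD; it just shows, with a plain PGD step and cross-entropy loss as simplified placeholders, what tracking and returning x_best in addition to x_best_adv looks like:

```python
import torch
import torch.nn.functional as F

def run_once_sketch(model, x, y, eps=8 / 255, step=2 / 255, n_iter=100):
    # random start inside the eps-ball
    x_adv = (x + eps * torch.empty_like(x).uniform_(-1, 1)).clamp(0.0, 1.0)
    x_best = x_adv.clone()       # best-loss iterate, updated every iteration
    x_best_adv = x.clone()       # updated only when the model is actually fooled
    loss_best = torch.full((x.shape[0],), -float("inf"), device=x.device)

    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        loss = F.cross_entropy(logits, y, reduction="none")

        with torch.no_grad():
            # keep the iterate with the highest loss seen so far, per example
            improved = loss > loss_best
            x_best[improved] = x_adv.detach()[improved]
            loss_best[improved] = loss.detach()[improved]
            # keep adversarial examples only where the attack has succeeded
            fooled = logits.argmax(dim=1) != y
            x_best_adv[fooled] = x_adv.detach()[fooled]

        grad = torch.autograd.grad(loss.sum(), x_adv)[0]
        with torch.no_grad():
            # one plain PGD step, projected back into the eps-ball around x
            x_adv = (x_adv + step * grad.sign()).clamp(0.0, 1.0)
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).detach()

    # Returning only x_best_adv hands back the untouched initialization for every
    # example that was never fooled; also returning x_best preserves the strongest
    # iterate for later restarts / the "best permutation" selection.
    return x_best, x_best_adv
```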

Because the codebase is large, I cannot be sure whether this is really a bug, or whether some other mechanism elsewhere addresses the problem. If my understanding is wrong, please point it out. Thanks.

@hanwei0912

I fully agree with @persistz. I am also confused: the paper says that NSGA-II is used to solve the optimization problem, but I cannot find it anywhere in the code. I would kindly ask @vtddggg to help us understand the code better.

Besides, I also have a small question about the paper: I have difficulty understanding what the multiple objectives actually are. The paper states that eq. 3 is used as the evaluation function, but that is only a single objective, and I cannot find a clearer explanation of any other objectives. Since NSGA-II is an algorithm for multi-objective problems, my guess is that the objectives correspond to the different target classes, although I found nothing to support this assumption. I tried to find the answer in the code, but there it is even less clear to me. I am sorry I cannot work it out; could you please tell me the answer? @vtddggg

Thank you in advance.

Hanwei

@vtddggg
Owner

vtddggg commented Jun 29, 2023

Hi, Hanwei

For NSGA-II, please refer to #3 (comment). This repository only contains the implementation of the final searched attacks.

Eq. 3 has two objectives: 1) the first term optimizes attack strength; 2) the second term optimizes the number of attack steps (complexity). They are balanced by $\alpha$. Sorry that you cannot find this in the code; that is because we did not release the NSGA-II search part.
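For readers without the paper at hand, a two-term objective of the kind described here can be written schematically as follows (the notation is illustrative, not the exact form of eq. 3):

$$\min_{a}\;\underbrace{\operatorname{acc}(f,\mathcal{D},a)}_{\text{attack strength}}\;+\;\alpha\,\underbrace{T(a)}_{\text{attack steps}}$$

where $a$ is a candidate attack, $\operatorname{acc}(f,\mathcal{D},a)$ is the robust accuracy the model $f$ retains on data $\mathcal{D}$ under $a$, and $T(a)$ is the number of attack steps.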

Thanks for your attention!!

@hanwei0912

Thank you very much! Now I understand better. @vtddggg

It is interesting. Just for discussion: if you use $\alpha$ to balance the two objectives, you effectively merge them into a single objective, so you do not really need multi-objective optimization; single-objective optimization is good enough. If you instead define a series of different values of $\alpha$ during the search, it becomes closer to another algorithm, MOEA/D. NSGA-II is a multi-objective optimization algorithm: you give it the two objectives separately and it returns a Pareto set, within which you cannot say which solution is best with respect to both objectives at once. If you are using NSGA-II, how do you then choose a single best solution from that set (or do you keep the whole set)? Conversely, if you actually optimize eq. 3 with a fixed $\alpha$, it is a single-objective problem, so how do you solve it with NSGA-II?
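For concreteness, a common way to reduce a returned Pareto set to a single solution (not necessarily what is done in this work) is to normalize the objectives and scalarize them with a weight, as in this small sketch:

```python
# Illustrative only: pick one solution from a Pareto front F of shape
# (n_solutions, 2), where both objectives are to be minimized.
import numpy as np

def pick_from_pareto(F, alpha=0.5):
    F = np.asarray(F, dtype=float)
    F_norm = (F - F.min(axis=0)) / (np.ptp(F, axis=0) + 1e-12)  # scale each column to [0, 1]
    scores = (1 - alpha) * F_norm[:, 0] + alpha * F_norm[:, 1]   # alpha-weighted sum
    return int(scores.argmin())
```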

Thank you in advance,
Hanwei

@vtddggg
Owner

vtddggg commented Jun 29, 2023

Hi, Hanwei

We use the NSGA-II implementation in pymoo, and we follow this to define our optimization problem. As you can see at lines 90, 92 and 94 there, three objectives (f1, f2, f3) are defined. Similarly, in our case, f1 is accuracy and f2 is complexity.

By feeding f1 and f2 into NSGA2 (from pymoo.algorithms.nsga2 import NSGA2), the problem is optimized with respect to multiple objectives. For the detailed implementation of NSGA2 you can also refer to https://github.com/anyoptimization/pymoo/blob/main/pymoo/algorithms/moo/nsga2.py
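For completeness, here is a minimal self-contained sketch of such a two-objective pymoo setup, assuming a recent pymoo (≥ 0.6, where NSGA2 lives under pymoo.algorithms.moo.nsga2). evaluate_attack is a hypothetical placeholder, not code from this repository:

```python
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2  # older pymoo: pymoo.algorithms.nsga2
from pymoo.optimize import minimize


def evaluate_attack(params):
    # Hypothetical placeholder: decode `params` into an attack, run it, and
    # return (robust_accuracy, number_of_steps). Faked here for illustration.
    return float(np.sum(params ** 2)), float(np.sum(np.abs(params)))


class AttackSearchProblem(ElementwiseProblem):
    def __init__(self):
        # e.g. 5 continuous genes encoding an attack configuration
        super().__init__(n_var=5, n_obj=2, xl=0.0, xu=1.0)

    def _evaluate(self, x, out, *args, **kwargs):
        f1, f2 = evaluate_attack(x)  # f1: accuracy (attack strength), f2: complexity
        out["F"] = [f1, f2]


res = minimize(AttackSearchProblem(), NSGA2(pop_size=20), ("n_gen", 10), seed=1, verbose=False)
print(res.F)  # Pareto front over (accuracy, complexity)
```

minimize then returns the Pareto front in res.F, from which a final attack still has to be selected.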

@hanwei0912

Ah ha! Got it! Thank you for your informative reply. @vtddggg
Hanwei
