
fix: merge review from @robvanderveer #2

Open
shsingh opened this issue Apr 27, 2023 · 1 comment
Labels: issues/triage (Issues that need further analysis), review needed (Issues that need reviewing)
Milestone: v0.4

Comments

shsingh (Collaborator) commented Apr 27, 2023

The following is an initial review taken from Slack logs: https://owasp.slack.com/archives/C04PESBUWRZ/p1677192099712519

by @robvanderveer


Dear all,
I did a first scan through the list, mainly to look at taxonomy. Here are my remarks.

1. ML01: In the literature the term 'adversarial' is often used for input manipulation attacks, but also for data poisoning, model extraction, etc. To avoid confusion it is probably better to rename the ML01 adversarial attack entry to input manipulation.

2. It is worth considering adding 'model evasion', a.k.a. black-box input manipulation, to your top 10. Or do you prefer to keep a single entry for input manipulation altogether?

3. ML03: It is not clear to me how scenarios 1 and 2 work; I must be missing something. Usually model inversion is explained as manipulating synthesized faces until the algorithm behaves as if it recognizes the face (see the first sketch after this list).

4. ML04: It is not clear to me how scenario 1 works. Standard methods against overtraining are missing from the 'how to prevent' part (see the second sketch after this list). Instead the advice is to reduce the training set size, which typically increases the overfitting problem.

5. ML05: Model stealing describes a scenario where an attacker steals model parameters, but generally this attack is carried out black-box: gathering input-output pairs and training a new model on them (see the third sketch after this list).

6. ML07: I don't understand exactly how the presented scenario should work. I do know about the scenario where a pre-trained model is obtained that has been altered by an attacker; that matches the description.

7. ML08: Isn't model skewing the same as data poisoning? If there is a difference, it is not apparent to me from the scenario and description.

8. ML10 is called Neural net reprogramming, but I guess the attack of changing parameters will work on any type of algorithm, not just neural networks (see the fourth sketch after this list). The description also mentions changing the training data, but perhaps that is better left out to avoid confusion with data poisoning.
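
To make the mechanisms mentioned in items 3, 4, 5 and 8 more concrete, below are minimal illustrative sketches. All model names, data and parameter values in them are made-up assumptions for illustration; they are not taken from the ML Top 10 documents.

A sketch of the model-inversion mechanism described in item 3, assuming a hypothetical `victim_model` classifier: starting from noise, the input is optimised until the model assigns high confidence to a chosen identity.

```python
# Model inversion sketch: synthesize an input the model "recognises" as TARGET_CLASS.
# victim_model is a stand-in; a real attack would target an actual face classifier.
import torch

victim_model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 10))
victim_model.eval()
TARGET_CLASS = 3

x = torch.randn(1, 1, 64, 64, requires_grad=True)   # synthesized input, starts as noise
optimizer = torch.optim.Adam([x], lr=0.05)

for _ in range(200):
    optimizer.zero_grad()
    logits = victim_model(x)
    # maximise the target-class probability == minimise its negative log-probability
    loss = -torch.log_softmax(logits, dim=1)[0, TARGET_CLASS]
    loss.backward()
    optimizer.step()
# x now approximates an input that the model treats as TARGET_CLASS
```

A sketch of the "standard methods against overtraining" referred to in item 4, assuming scikit-learn and synthetic data: regularisation and early stopping on a validation split, rather than shrinking the training set.

```python
# Overfitting countermeasures sketch: L2 regularisation + early stopping.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X = rng.random((1000, 20))
y = (X[:, 0] + X[:, 1] > 1).astype(int)   # toy labels

clf = SGDClassifier(
    loss="log_loss",
    alpha=1e-3,               # L2 regularisation strength
    early_stopping=True,      # hold out a validation split, stop when it stops improving
    validation_fraction=0.2,
    n_iter_no_change=5,
)
clf.fit(X, y)
```

A sketch of black-box model stealing as described in item 5: the attacker never reads the victim's parameters, only queries it and trains a surrogate on the observed input-output pairs. The victim model here is a stand-in trained on random data.

```python
# Model stealing sketch: query the victim, then fit a surrogate on (query, label) pairs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X_train = rng.random((500, 5))
y_train = (X_train.sum(axis=1) > 2.5).astype(int)
victim = RandomForestClassifier().fit(X_train, y_train)   # stand-in for the target model

queries = rng.random((2000, 5))               # attacker-chosen inputs
labels = victim.predict(queries)              # the only access the attacker needs
surrogate = DecisionTreeClassifier().fit(queries, labels)  # functional copy of the victim
```

A sketch for item 8, showing that parameter tampering is not specific to neural networks: here the stored coefficients of a plain logistic regression are altered after training, flipping its decisions.

```python
# Parameter tampering sketch on a non-neural model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.random((200, 3))
y = (X[:, 0] > 0.5).astype(int)
model = LogisticRegression().fit(X, y)

model.coef_ = -model.coef_              # an attacker with write access inverts the weights
model.intercept_ = -model.intercept_
# the model now predicts (approximately) the opposite of what its owner trained
```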

@shsingh shsingh added the issues/triage Issues that need further analysis label Jul 23, 2023
@shsingh shsingh added this to the v0.2 milestone Jul 23, 2023
@shsingh shsingh changed the title from "initial review" to "review from @robvanderveer" Jul 24, 2023
@shsingh shsingh changed the title from "review from @robvanderveer" to "fix: merge review from @robvanderveer" Jul 25, 2023
@shsingh shsingh modified the milestones: v0.2, v0.3 Aug 2, 2023
shsingh (Collaborator, Author) commented Sep 29, 2023

> ML01: In the literature the term 'adversarial' is often used for input manipulation attacks, but also for data poisoning, model extraction, etc. To avoid confusion it is probably better to rename the ML01 adversarial attack entry to input manipulation.

addressed in: #110

> It is worth considering adding 'model evasion', a.k.a. black-box input manipulation, to your top 10. Or do you prefer to keep a single entry for input manipulation altogether?

> ML03: It is not clear to me how scenarios 1 and 2 work; I must be missing something. Usually model inversion is explained as manipulating synthesized faces until the algorithm behaves as if it recognizes the face.

> ML04: It is not clear to me how scenario 1 works. Standard methods against overtraining are missing from the 'how to prevent' part. Instead the advice is to reduce the training set size, which typically increases the overfitting problem.

> ML05: Model stealing describes a scenario where an attacker steals model parameters, but generally this attack is carried out black-box: gathering input-output pairs and training a new model on them.

> ML07: I don't understand exactly how the presented scenario should work. I do know about the scenario where a pre-trained model is obtained that has been altered by an attacker; that matches the description.

> ML08: Isn't model skewing the same as data poisoning? If there is a difference, it is not apparent to me from the scenario and description.

> ML10 is called Neural net reprogramming, but I guess the attack of changing parameters will work on any type of algorithm, not just neural networks. The description also mentions changing the training data, but perhaps that is better left out to avoid confusion with data poisoning.

addressed in: #104

@shsingh shsingh modified the milestones: v0.3, v0.4 Sep 29, 2023
@shsingh shsingh pinned this issue Oct 30, 2023
@shsingh shsingh added the review needed Issues that need reviewing label Oct 31, 2023