
Experiments for paper ModSec-AdvLearn: Countering Adversarial SQL Injections with Robust Machine Learning

pralab/modsec-advlearn

ModSec-AdvLearn

How to cite us

If you want to cite this work, please use the following BibTeX reference:

Getting started

Setup

  1. Compile and install ModSecurity v3.0.10
  2. Install pymodsecurity
  3. Clone the OWASP CoreRuleSet
  4. Run experiments

Compile ModSecurity v3.0.10

First of all, you will need to install ModSecurity v3.0.10 on your system. Currently this is a tricky process, since you will need to build ModSecurity v3.0.10 from source (although some distributions may already ship ModSecurity v3.0.10 in their package registries).
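As a rough sketch of the from-source build (package names and the default install prefix are assumptions for a Debian/Ubuntu system; refer to the official ModSecurity compilation documentation for your platform):

```shell
# Install common build dependencies (Debian/Ubuntu package names assumed)
sudo apt-get install -y git g++ make automake autoconf libtool pkg-config \
    libpcre2-dev libxml2-dev libcurl4-openssl-dev libyajl-dev

# Fetch the ModSecurity sources at the v3.0.10 tag
git clone https://github.com/owasp-modsecurity/ModSecurity.git
cd ModSecurity
git checkout v3.0.10
git submodule init && git submodule update

# Build and install libmodsecurity
./build.sh
./configure
make
sudo make install   # installs under /usr/local by default
```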

Install pymodsecurity

In modsec-learn, ModSecurity methods are accessed via pymodsecurity. Since development on the official repository stopped at ModSecurity v3.0.3, the current workaround is to clone this fork and build it from source.
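A minimal sketch of this workaround (replace `<fork-url>` with the fork linked above; the `pip install .` step assumes the fork ships a standard Python build setup and that libmodsecurity v3.0.10 is already installed):

```shell
# Clone the fork of pymodsecurity that supports ModSecurity v3.0.10
git clone <fork-url> pymodsecurity
cd pymodsecurity

# Build and install the Python bindings from source
# (requires libmodsecurity v3.0.10 to be installed first)
pip install .
```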

Clone the OWASP CoreRuleSet

To detect incoming payloads, you need a rule set. The de facto standard is the OWASP CoreRuleSet (CRS), but you can choose any rule set you want, or customize the OWASP CRS.

To run the recommended settings, just clone the OWASP CRS in the project folder:

git clone --branch v4.0.0 [email protected]:coreruleset/coreruleset.git

Run experiments

All experiments can be executed with the Python scripts in the scripts folder. Run them from the project's root:

python3 scripts/<script_name>.py
