⚔️ This is a little gem, made for HPC. Fully functional framework that runs on PFNano for neural network training, includes fancy adversarial techniques with unique evaluation scripts, reweighting etc., does not need any container, but loads a conda environment. Some theses have been completed with it, some people copied it and gave it a new name.
AI Safety 2021

Code for Master's thesis "Investigation of robustness of b-Tagging algorithms for the CMS Experiment" at RWTH Aachen University, Physics Institute III A


Annika Stein, last updated: 25.10.2021

This repository contains the code necessary to reproduce the results presented in the thesis, except for the additional studies conducted with the scale factor framework (Data / MC agreement and its dependence on adversarial training / attacks), which is hosted separately on github.com.

You can fork the repository and then clone your personal copy to use the code on the HPC, or just download the relevant scripts and auxiliary files manually. If you go for the git clone variant, take a look at the instructions for setting up an ssh key between the HPC and the RWTH GitLab. After forking your own copy of the repo (possible right from the web interface via the button at the top right), run something like git clone [email protected]:your-name/ai_safety_2021.git in the place where you want to start your AI safety studies, e.g. on the HPC, where the code can be placed somewhere in your personal home directory /home/<username>. Note that $HOME is backed up regularly, in contrast to $WORK.
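The fork-and-clone setup described above can be sketched as follows. This is a minimal sketch, not an official recipe: the key type, key filename and GitLab settings path are common defaults and may differ on your cluster; only the clone URL pattern is taken from the text above.

```shell
# Sketch of the fork-and-clone setup (adapt paths and your fork name).

# 1. On the HPC, generate an ssh key if you do not have one yet:
ssh-keygen -t ed25519 -C "hpc -> RWTH GitLab"

# 2. Print the public key and add it to your GitLab profile
#    (Preferences -> SSH Keys in the web interface):
cat ~/.ssh/id_ed25519.pub

# 3. Clone your fork into $HOME (backed up regularly, unlike $WORK):
cd "$HOME"
git clone [email protected]:your-name/ai_safety_2021.git
cd ai_safety_2021
```

After this, all later path adjustments mentioned below happen inside this checkout.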

No matter how you set up the repository, you will need to update basically all paths (to files, code, etc.) to your own; this becomes especially relevant when using your own inputs for the training. The relevant places where modifications are necessary are explained later.

Over the last couple of months, several revisions of the preprocessing, training and evaluation have led to the creation of a separate directory for each version, named like month_21. Additionally, some scripts were originally placed in this main directory, which can be a bit confusing. The most up-to-date versions live inside june_21, although a few important files remain in may_21 or elsewhere, where no changes were necessary since their initial creation. I suggest starting with the README for the may_21 version of the code, then moving on to june_21; additional resources are quoted wherever relevant. The most important steps come with individual tutorials that introduce the concepts and how they have been implemented in Python. The files and tutorials can be viewed on the web or modified after downloading / cloning, so that you actually work with and learn from the code yourself.
