Experiment maximal_attack-v4_tabular_q_learning

This is an experiment in the maximal_attack-v4 environment, in which the attacker follows the attack_maximal attack policy. The attack_maximal policy means that the attacker always attacks the attribute with the maximum value among all of its neighbors. The defender is implemented with a random defense policy.
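
To make the policy concrete, below is a minimal sketch of the attack_maximal selection rule described above. The data layout (a dict from neighbor node id to an array of attack attribute values) and the function name are assumptions for illustration only, not the library's API:

import numpy as np

def attack_maximal(neighbor_attack_values):
    # Pick the (neighbor node, attack type) pair with the maximum attack
    # attribute value among all neighbors, as described above.
    best_node, best_type, best_val = None, None, -np.inf
    for node_id, values in neighbor_attack_values.items():
        attack_type = int(np.argmax(values))
        if values[attack_type] > best_val:
            best_node, best_type, best_val = node_id, attack_type, values[attack_type]
    return best_node, best_type

# Example: neighbor 3 has the highest attack value (7) at attack type 2.
action = attack_maximal({2: np.array([1, 4, 0]), 3: np.array([2, 5, 7])})
# -> (3, 2)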

This experiment trains a defender agent with tabular Q-learning to act optimally in the given environment and detect the attacker.

The network configuration of the environment is as follows:

  • num_layers=4 (number of layers between the start and end nodes)
  • num_servers_per_layer=5
  • num_attack_types=10
  • max_value=9

The starting state for each node in the environment is initialized as follows (with some randomness for where the vulnerabilities are placed); a minimal sketch of such a state is shown after the list.

  • defense_val=2
  • attack_val=0
  • num_vulnerabilities_per_node=1 (the defense type that is vulnerable at each node is selected randomly when the environment is initialized)
  • det_val=2
  • vulnerability_val=0
  • num_vulnerabilities_per_layer=5
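
As an illustration of the values above, the sketch below builds one node's initial attributes as plain numpy arrays. It is only a reading of the description in this README; the actual state representation used by gym-idsgame may differ:

import numpy as np

num_attack_types = 10  # from the network configuration above
max_value = 9          # attribute values are capped at max_value

# One node's initial attributes, taken directly from the values listed above.
defense_values = np.full(num_attack_types, 2)          # defense_val=2 per defense type
attack_values = np.zeros(num_attack_types, dtype=int)  # attack_val=0 per attack type
detection_value = 2                                     # det_val=2

# num_vulnerabilities_per_node=1: one randomly chosen defense type starts at
# vulnerability_val=0 instead of defense_val=2.
vulnerable_type = np.random.randint(num_attack_types)
defense_values[vulnerable_type] = 0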

The environment has sparse rewards (+1/-1 rewards are given only at the terminal state of each episode). The environment is partially observed: the attacker can only see the attack attributes of neighboring nodes, and the defender can only see the defense attributes.

Environment

  • Env: maximal_attack-v4

Algorithm

  • Tabular Q-learning with linear exploration annealing (a minimal sketch of the update and annealing step is shown below)
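
The defender is trained with the standard tabular Q-learning update combined with epsilon-greedy exploration that is annealed over the course of training. The sketch below shows these two pieces in isolation, using the alpha, gamma, epsilon, epsilon_decay, and min_epsilon values from the configuration further down; it is a simplified illustration, not the agent implementation in gym-idsgame:

import numpy as np

# Hyperparameters taken from the configuration below (config.json / run.py).
alpha, gamma = 0.05, 0.999
epsilon, epsilon_decay, min_epsilon = 1.0, 0.9999, 0.01

num_states, num_actions = 100, 10   # placeholder sizes, for illustration only
Q = np.zeros((num_states, num_actions))

def select_action(state):
    # Epsilon-greedy exploration: random action with probability epsilon.
    if np.random.random() < epsilon:
        return np.random.randint(num_actions)
    return int(np.argmax(Q[state]))

def q_update(state, action, reward, next_state, done):
    # Tabular Q-learning update:
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    target = reward if done else reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (target - Q[state, action])

# After each episode the exploration rate is annealed towards min_epsilon,
# here via the multiplicative epsilon_decay factor from the configuration.
epsilon = max(min_epsilon, epsilon * epsilon_decay)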

Instructions

To configure the experiment, use the config.json file. Alternatively, you can delete the config file and edit the configuration directly in run.py (the configuration will then be overridden on the next run).

Example configuration in config.json:

{
    "attacker_type": 0,
    "defender_type": 0,
    "env_name": "idsgame-maximal_attack-v4",
    "idsgame_config": null,
    "initial_state_path": null,
    "logger": null,
    "mode": 1,
    "output_dir": "/home/kim/storage/workspace/gym-idsgame/experiments/training/v4/maximal_attack/tabular_q_learning",
    "py/object": "gym_idsgame.config.client_config.ClientConfig",
    "q_agent_config": {
        "alpha": 0.05,
        "attacker": false,
        "defender": true,
        "epsilon": 1,
        "epsilon_decay": 0.9999,
        "eval_episodes": 100,
        "eval_frequency": 5000,
        "eval_log_frequency": 1,
        "eval_render": false,
        "eval_sleep": 0.9,
        "gamma": 0.999,
        "gif_dir": "/home/kim/storage/workspace/gym-idsgame/experiments/training/v4/maximal_attack/tabular_q_learning/gifs",
        "gifs": true,
        "load_path": null,
        "logger": null,
        "min_epsilon": 0.01,
        "num_episodes": 40000,
        "py/object": "gym_idsgame.agents.tabular_q_learning.q_agent_config.QAgentConfig",
        "render": false,
        "save_dir": "/home/kim/storage/workspace/gym-idsgame/experiments/training/v4/maximal_attack/tabular_q_learning/data",
        "train_log_frequency": 100,
        "video": true,
        "video_dir": "/home/kim/storage/workspace/gym-idsgame/experiments/training/v4/maximal_attack/tabular_q_learning/videos",
        "video_fps": 5,
        "video_frequency": 101
    },
    "simulation_config": null,
    "title": "AttackMaximalAttacker vs TrainingQAgent"
}
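
The py/object entries in the example above indicate that the configuration object is serialized with jsonpickle. Assuming that is the case, the file can be loaded back into a Python object roughly as follows (the usage here is an illustration, not the project's own loading code):

import jsonpickle

# Deserialize config.json back into the ClientConfig object it was written from
# (assuming it was produced with jsonpickle, as the py/object keys suggest).
with open("config.json", "r") as f:
    client_config = jsonpickle.decode(f.read())

print(client_config.env_name)                     # idsgame-maximal_attack-v4
print(client_config.q_agent_config.num_episodes)  # 40000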

Example configuration in run.py:

# The module paths for ClientConfig and QAgentConfig below are taken from the
# py/object entries in config.json; AgentType, RunnerMode, and default_output_dir()
# are imported/defined elsewhere in run.py.
from gym_idsgame.config.client_config import ClientConfig
from gym_idsgame.agents.tabular_q_learning.q_agent_config import QAgentConfig

q_agent_config = QAgentConfig(gamma=0.999, alpha=0.05, epsilon=1, render=False, eval_sleep=0.9,
                              min_epsilon=0.01, eval_episodes=100, train_log_frequency=100,
                              epsilon_decay=0.9999, video=True, eval_log_frequency=1,
                              video_fps=5, video_dir=default_output_dir() + "/videos", num_episodes=40000,
                              eval_render=False, gifs=True, gif_dir=default_output_dir() + "/gifs",
                              eval_frequency=5000, attacker=False, defender=True, video_frequency=101,
                              save_dir=default_output_dir() + "/data")
env_name = "idsgame-maximal_attack-v4"
client_config = ClientConfig(env_name=env_name, defender_type=AgentType.TABULAR_Q_AGENT.value,
                             mode=RunnerMode.TRAIN_DEFENDER.value,
                             q_agent_config=q_agent_config, output_dir=default_output_dir(),
                             title="AttackMaximalAttacker vs TrainingQAgent")

After the experiment has finished, the results are written to the following sub-directories:

  • /data: CSV file with metrics per episode for train and eval, e.g. avg_episode_rewards, avg_episode_steps, etc. (see the snippet after this list for loading it).
  • /gifs: If the gifs configuration flag is set to true, the experiment renders the game during evaluation and saves gif files to this directory. The frequency of evaluation is controlled with the configuration parameter eval_frequency and the frequency of video/gif recording during evaluation with the parameter video_frequency.
  • /hyperparameters: CSV file with the hyperparameters of the experiment.
  • /logs: Log files from the experiment.
  • /plots: Basic plots of the results.
  • /videos: If the video configuration flag is set to true, the experiment renders the game during evaluation and saves video files to this directory. The evaluation and recording frequencies are controlled with the same eval_frequency and video_frequency parameters as above.
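
The metrics written to /data are plain CSV files and can be inspected with standard tooling. The sketch below loads one of them with pandas and plots the avg_episode_rewards column mentioned above; the file name is an assumption here, so adjust it to the CSV actually produced by your run:

import pandas as pd
import matplotlib.pyplot as plt

# Load per-episode training metrics from the /data directory.
# "train_results.csv" is a placeholder name; use the file your run actually wrote.
df = pd.read_csv("data/train_results.csv")

# Plot one of the columns mentioned above against the episode index.
df.plot(y="avg_episode_rewards", legend=False)
plt.xlabel("episode")
plt.ylabel("avg episode reward")
plt.show()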

Example Results

The experiment produces the following result plots:

  • Hack probability (train and eval)
  • Episode lengths (train and eval)
  • Exploration rate
  • Cumulative rewards: attacker (train) and defender (train)
  • Policy inspection: evaluation after 0, 5000, and 40000 training episodes

Commands

Below is a list of commands for running the experiment:

Run

Option 1:

./run.sh

Option 2:

make all

Option 3:

python run.py

Run Server (Without Display)

Option 1:

./run_server.sh

Option 2:

make run_server

Clean

make clean