To add your own use case, several things should be considered:
- Where to generate the use case-specific P4 code?
- What is the input data?
- How to generate use case-specific testing procedures?
Among these, only the first one, the use case-specific P4 code, is compulsory. To realise it, we need to create the file `common_p4.py` under a new folder `<use_case_name>` under the directory `./src/use_cases`. The second and third are optional and depend on whether we want to use Planter to test our design. For these two optional changes, the preprocessing logic in the M/A pipeline makes the training data different from the testing data. Therefore, both the input data in the file `<name>_dataset.py` under the directory `./src/load_data` and the testing procedure in the file `test_model.py` under the directory `./src/targets/<target_name>/<test_name>` need to be changed.
If this is still too complex, a good example is to look at the difference between `software` and `software_ASCII` under `src/targets/Tofino`.
In `common_p4.py`, we can define functions that write use case related parsing and M/A pipeline logic into the P4 files. This file requires the following key functions:
- The overview of the `common_headers(*)` function:

  ```python
  def common_headers(fname, config):
      # 1. Write use case related headers.
      with open(fname, 'a') as file:
          file.write("...\n")
      return
  ```

- The input of the `common_headers(*)` function:

  ```python
  fname   # str   # P4 file directory  # generated by p4_generator.py
  config  # dict  # P4 generator's configs - Planter_config['p4 config']  # generated by function load_config(*)
  ```
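As a concrete illustration, a filled-in `common_headers(*)` might append an Ethernet header plus a per-feature use-case header. This is a hypothetical sketch: the header layout and the `'number of features'` config key are assumptions for the example, not part of Planter's fixed interface.

```python
import os
import tempfile

def common_headers(fname, config):
    # Append use-case-specific P4 header definitions to the generated file.
    # The field layout and the 'number of features' key are illustrative.
    with open(fname, 'a') as file:
        file.write("header ethernet_h {\n"
                   "    bit<48> dst_addr;\n"
                   "    bit<48> src_addr;\n"
                   "    bit<16> ether_type;\n"
                   "}\n")
        file.write("header planter_h {\n")
        for i in range(config['number of features']):
            file.write("    bit<32> feature{};\n".format(i))
        file.write("}\n")

# Demo: append the headers to a temporary P4 file.
fd, p4_file = tempfile.mkstemp(suffix='.p4')
os.close(fd)
common_headers(p4_file, {'number of features': 3})
with open(p4_file) as f:
    generated = f.read()
```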
- The overview of the `common_metadata(*)` function:

  ```python
  def common_metadata(fname, config):
      # 1. Write use case related metadata.
      with open(fname, 'a') as file:
          file.write("...\n")
      return
  ```

- The input of the `common_metadata(*)` function:

  ```python
  fname   # str   # P4 file directory  # generated by p4_generator.py
  config  # dict  # P4 generator's configs - Planter_config['p4 config']  # generated by function load_config(*)
  ```
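A matching sketch for `common_metadata(*)`: it could emit one metadata field per extracted feature plus a slot for the model's result. Again hypothetical; the struct name, field names, and config key are assumptions for the example.

```python
import os
import tempfile

def common_metadata(fname, config):
    # Append use-case metadata carried through the ingress pipeline:
    # one field per extracted feature plus a slot for the model's result.
    # The names and the 'number of features' key are illustrative.
    with open(fname, 'a') as file:
        file.write("struct metadata_t {\n")
        for i in range(config['number of features']):
            file.write("    bit<32> feature{};\n".format(i))
        file.write("    bit<8>  result;\n")
        file.write("}\n")

# Demo on a temporary P4 file.
fd, p4_file = tempfile.mkstemp(suffix='.p4')
os.close(fd)
common_metadata(p4_file, {'number of features': 2})
with open(p4_file) as f:
    generated = f.read()
```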
- The overview of the `common_parser(*)` function:

  ```python
  def common_parser(fname, config):
      # 1. Write use case related parsers.
      with open(fname, 'a') as file:
          file.write("...\n")
      return
  ```

- The input of the `common_parser(*)` function:

  ```python
  fname   # str   # P4 file directory  # generated by p4_generator.py
  config  # dict  # P4 generator's configs - Planter_config['p4 config']  # generated by function load_config(*)
  ```
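A hypothetical `common_parser(*)` could then emit the parser states that extract those headers, branching on an EtherType value taken from the config. The state names, header instances, and the `'ether type'` key are assumptions for the example.

```python
import os
import tempfile

def common_parser(fname, config):
    # Append parser states that extract the use-case headers written by
    # common_headers(*). The EtherType value comes from an illustrative
    # 'ether type' config key.
    with open(fname, 'a') as file:
        file.write("state parse_ethernet {\n"
                   "    pkt.extract(hdr.ethernet);\n"
                   "    transition select(hdr.ethernet.ether_type) {\n"
                   "        " + config['ether type'] + " : parse_planter;\n"
                   "        default : accept;\n"
                   "    }\n"
                   "}\n"
                   "state parse_planter {\n"
                   "    pkt.extract(hdr.planter);\n"
                   "    transition accept;\n"
                   "}\n")

# Demo on a temporary P4 file.
fd, p4_file = tempfile.mkstemp(suffix='.p4')
os.close(fd)
common_parser(p4_file, {'ether type': '0x1234'})
with open(p4_file) as f:
    generated = f.read()
```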
- The overview of the `common_tables(*)` function:

  ```python
  def common_tables(fname, config):
      # 1. Write use case related tables, actions, and definitions in ingress.
      with open(fname, 'a') as file:
          file.write("...\n")
      return
  ```

- The input of the `common_tables(*)` function:

  ```python
  fname   # str   # P4 file directory  # generated by p4_generator.py
  config  # dict  # P4 generator's configs - Planter_config['p4 config']  # generated by function load_config(*)
  ```
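For `common_tables(*)`, a sketch might append one action and one match/action table for the ingress block. The action name, table name, key, and `'table size'` config key are all illustrative assumptions, not Planter's interface.

```python
import os
import tempfile

def common_tables(fname, config):
    # Append a use-case action and match/action table for the ingress block.
    # Names and the 'table size' key are illustrative.
    with open(fname, 'a') as file:
        file.write("    action set_result(bit<8> label) {\n"
                   "        meta.result = label;\n"
                   "    }\n"
                   "    table decision {\n"
                   "        key = { meta.result : exact; }\n"
                   "        actions = { set_result; NoAction; }\n"
                   "        size = %d;\n"
                   "        default_action = NoAction();\n"
                   "    }\n" % config['table size'])

# Demo on a temporary P4 file.
fd, p4_file = tempfile.mkstemp(suffix='.p4')
os.close(fd)
common_tables(p4_file, {'table size': 64})
with open(p4_file) as f:
    generated = f.read()
```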
- The overview of the `common_feature_extraction(*)` function:

  ```python
  def common_feature_extraction(fname, config):
      # 1. Write use case related apply() logic (and feature extraction) in the
      #    ingress pipeline, before the execution of the in-network ML model.
      with open(fname, 'a') as file:
          file.write("...\n")
      return
  ```

- The input of the `common_feature_extraction(*)` function:

  ```python
  fname   # str   # P4 file directory  # generated by p4_generator.py
  config  # dict  # P4 generator's configs - Planter_config['p4 config']  # generated by function load_config(*)
  ```
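A minimal sketch of `common_feature_extraction(*)`: copy header fields into the metadata features the generated model tables will match on. The field naming scheme and the `'number of features'` key are assumptions for the example.

```python
import os
import tempfile

def common_feature_extraction(fname, config):
    # Append apply() logic that runs before the generated ML model:
    # copy header fields into the metadata the model tables match on.
    # Naming and the 'number of features' key are illustrative.
    with open(fname, 'a') as file:
        for i in range(config['number of features']):
            file.write("        meta.feature{0} = hdr.planter.feature{0};\n"
                       .format(i))

# Demo on a temporary P4 file.
fd, p4_file = tempfile.mkstemp(suffix='.p4')
os.close(fd)
common_feature_extraction(p4_file, {'number of features': 2})
with open(p4_file) as f:
    generated = f.read()
```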
- The overview of the `common_logics(*)` function:

  ```python
  def common_logics(fname, config):
      # 1. Write use case related apply() logic in the ingress pipeline,
      #    after the execution of the in-network ML model.
      with open(fname, 'a') as file:
          file.write("...\n")
      return
  ```

- The input of the `common_logics(*)` function:

  ```python
  fname   # str   # P4 file directory  # generated by p4_generator.py
  config  # dict  # P4 generator's configs - Planter_config['p4 config']  # generated by function load_config(*)
  ```
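Finally, a hypothetical `common_logics(*)` could act on the model's prediction after inference, for example writing the class back into the packet and bouncing it to its ingress port. The header field and intrinsic-metadata names are illustrative assumptions.

```python
import os
import tempfile

def common_logics(fname, config):
    # Append apply() logic that runs after the ML model has produced a
    # class: write the result into the packet and reflect it to the
    # ingress port. Field names are illustrative.
    with open(fname, 'a') as file:
        file.write("        hdr.planter.result = meta.result;\n"
                   "        ig_tm_md.ucast_egress_port = "
                   "ig_intr_md.ingress_port;\n")

# Demo on a temporary P4 file.
fd, p4_file = tempfile.mkstemp(suffix='.p4')
os.close(fd)
common_logics(p4_file, {})
with open(p4_file) as f:
    generated = f.read()
```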
In `test_model.py`, the packet header and testing data need to be customised for the use case before testing. Not all functions in `test_model.py` need to be updated. Based on the reference tester `src/targets/<target_name>/software/test_model.py`, only the function `write_common_test_<model_type>(*)` needs to be updated. In `write_common_test_<model_type>(*)`, we need to make sure the header is customised for the use case (aligned with what is generated by `common_p4.py`) and focus on how to generate raw data without preprocessing (different from `test_X`).
- The overview of the `write_common_test_<model_type>(*)` function:

  ```python
  def write_common_test_<model_type>(fname, config):
      # 1. Use case related packet formulation (including use case customised packets).
      # 2. Load the proper input data from '/src/temp/Test_Data.json'.
      # 3. Send the generated packets, receive packets, and calculate the accuracy.
      with open(fname, 'a') as file:
          file.write("...\n")
      return
  ```

- The input of the `write_common_test_<model_type>(*)` function:

  ```python
  fname   # str   # P4 file directory  # generated by p4_generator.py
  config  # dict  # P4 generator's configs - Planter_config['p4 config']  # generated by function load_config(*)
  ```
- By using the function `write_common_test_<model_type>(*)`, files like `src/test/test_switch_model_Tofino_software.py` can be generated.
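To make steps 1 and 2 above concrete: the generated tester essentially has to turn each raw (unpreprocessed) row saved under `original_test_X` into a packet whose layout matches the header written by `common_p4.py`. A hypothetical sketch of that packet formulation, assuming one 32-bit big-endian field per feature (a real tester would additionally send/receive the packets, typically with Scapy):

```python
import json
import os
import struct
import tempfile

def build_test_packet(features):
    # Pack one 32-bit big-endian field per raw feature, mirroring the
    # (hypothetical) use-case header layout generated by common_p4.py.
    return b''.join(struct.pack('!I', int(f)) for f in features)

# Demo: mimic loading the raw samples saved under 'original_test_X'
# (dummy values; in Planter this file is /src/temp/Test_Data.json).
fd, data_file = tempfile.mkstemp(suffix='.json')
os.close(fd)
with open(data_file, 'w') as f:
    json.dump({'original_test_X': [[1, 2, 3], [4, 5, 6]]}, f)
with open(data_file) as f:
    samples = json.load(f)['original_test_X']
packets = [build_test_packet(row) for row in samples]
```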
In `<name>_dataset.py`, compared to the standard one, the new one saves the raw data for testing and also returns the preprocessed data for training.
- The overview of the `load_data(*)` function:

  ```python
  def load_data(num_features, data_dir):
      # 1. Load the data.
      # 2. Save the original data Test_Data['original_test_X'] to '/src/temp/Test_Data.json'
      #    and the number of original input features to
      #    Planter_config['data config']['number of original features'].
      # 3. Preprocess the original data to simulate the P4 feature extraction and generate
      #    train_X, train_y, test_X, test_y.
      return train_X, train_y, test_X, test_y, used_features
  ```

- The input of the `load_data(*)` function:

  ```python
  num_features  # int  # number of used features  # input from Planter.py
  data_dir      # str  # data file directory      # generated by Planter.py
  ```
- The output (return) of the `load_data(*)` function:

  ```python
  train_X        # data frame    # preprocessed training data
  train_y        # ndarray/list  # training labels
  test_X         # data frame    # preprocessed testing data
  test_y         # ndarray/list  # testing labels
  used_features  # list          # list of feature names
  ```
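Putting the three steps together, here is a minimal runnable sketch of such a `load_data(*)`. It uses synthetic data, plain lists instead of data frames, and an illustrative 8-bit clipping as the "preprocessing"; a real implementation would read the dataset from `data_dir`, save to `/src/temp/Test_Data.json`, and mirror the actual P4 feature extraction.

```python
import json
import os
import tempfile

def load_data(num_features, data_dir):
    # 1. Load the data (synthetic here; a real version reads from data_dir).
    raw_X = [[i + j for j in range(num_features)] for i in range(8)]
    raw_y = [i % 2 for i in range(8)]
    # 2. Save the original, unpreprocessed test rows so the generated
    #    tester can replay them (Planter uses '/src/temp/Test_Data.json').
    with open(os.path.join(data_dir, 'Test_Data.json'), 'w') as f:
        json.dump({'original_test_X': raw_X[4:]}, f)
    # 3. Preprocess to mirror the P4 feature extraction (illustrative:
    #    clip each feature to 8 bits) and split into train/test.
    proc_X = [[v & 0xFF for v in row] for row in raw_X]
    used_features = ['feature{}'.format(j) for j in range(num_features)]
    return proc_X[:4], raw_y[:4], proc_X[4:], raw_y[4:], used_features

# Demo with a temporary data directory.
data_dir = tempfile.mkdtemp()
train_X, train_y, test_X, test_y, used_features = load_data(3, data_dir)
```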