https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2812140
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2996930
python swaption_test.py --calibrate-history --model=g2++
Command-line flags:
model
: Financial model to use: hullwhite, g2++, g2++_local (for g2++ calibrated with a local optimizer)
python swaption_test.py --training-data=250000 --model=g2++ --seed=111 --error-adjusted --history-start=0 --history-end=132
Command-line flags:
training-data
: Size of sample to produce
model
: Financial model to use: hullwhite, g2++, g2++_local (for g2++ calibrated with a local optimizer), g2++_vae (sample with a Variational Autoencoder)
seed
: If present, the number is used to seed the random number generator
error-adjusted
: If present, samples are adjusted with errors taken from observed results. For a model that does not fit the data perfectly, error adjustment is important
history-start
: If present, sets the start date from which the sample will be trained. The value is the row index of the available dates. Must be given together with history-end
history-end
: If present, sets the end date up to which the sample will be trained. The value is the row index of the available dates. Must be given together with history-start
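The idea behind error adjustment can be sketched in a few lines: a calibrated model rarely reproduces observed prices exactly, so each synthetic sample is shifted by a residual drawn from the errors observed when the model was fitted to history. All names, shapes, and distributions below are illustrative assumptions, not the project's actual data.

```python
import numpy as np

rng = np.random.default_rng(111)   # --seed=111

# Placeholder stand-ins: 1000 model-generated price vectors over 8 instruments,
# and per-date fit residuals for 132 historical dates (--history-end=132)
model_prices = rng.uniform(0.01, 0.05, size=(1000, 8))
observed_residuals = rng.normal(0.0, 0.002, size=(132, 8))

# Shift each synthetic sample by a randomly chosen observed residual vector
idx = rng.integers(0, len(observed_residuals), size=len(model_prices))
adjusted = model_prices + observed_residuals[idx]   # error-adjusted training set
print(adjusted.shape)
```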
python swaption_test.py --run-fnn=0.5 --model=g2++ --seed=111 --sample-size=500000 --error-adjusted --history-start=0 --history-end=264 --epochs=750 --residual-cells=1 --lr=0.0001 --exponent=6 --layers=9 --dropout=0.2 --earlyStopPatience=41 --reduceLRFactor=0.5 --reduceLRPatience=10 --reduceLRMinLR=0.000005 --prefix="SWO GBP " --save --compare-history
Command-line flags:
run-fnn
: Fraction of the sample to use (a training set may have been produced with a large sample in order to compare how sample size affects training). Must be a number in (0, 1]
model
: Financial model to use: hullwhite, g2++, g2++_local (for g2++ calibrated with a local optimizer), g2++_vae (sample with a Variational Autoencoder)
seed
: If present, the number is used to seed the random number generator
sample-size
: Size of the sample in the file. Used to automatically build the file name to look for
error-adjusted
: If present, samples are adjusted with errors taken from observed results. Used to automatically build the file name to look for
history-start
: If present, sets the start date from which the sample will be trained. Used to automatically build the file name to look for
history-end
: If present, sets the end date up to which the sample will be trained. Used to automatically build the file name to look for
prefix
: String to prepend to the model name. Affects the name of the file to which the model may be saved. Default is empty
postfix
: String to append to the model name. Affects the name of the file to which the model may be saved. Default is empty
save
: Save the model to a file, from which it can later be reloaded
compare-history
: Indicates that after training, the model should be used to compare predictions with the observed history
load
: Name of the file where a model was saved. The model will be loaded but not retrained. Intended to allow compare-history to be run at a later time
epochs
: Number of epochs to train for. If not present, the default is 200
residual-cells
: If present, sets the number of layers spanned by each residual (skip) connection, e.g. 1 means the input of a layer is added to its own output, 2 means the input of layer K is added to the output of layer K+1. If not present, the default is 1
lr
: Learning rate of the optimizer (Nadam). If not present, the default is 1e-3
exponent
: The number of neurons per hidden layer will be 2**exponent. If not present, the default is 6 (so 64 neurons)
layers
: Number of hidden layers. If not present, the default is 4
dropout
: Fraction of cells randomly dropped during each epoch of training, which avoids overfitting by effectively fitting an ensemble of models. If not present, the default is 1/2
earlyStopPatience
: Number of epochs after which training is stopped if the validation-set score has not improved. The default is not to stop
reduceLRFactor
: Factor by which to reduce the learning rate if the validation-set score has not improved after a number of epochs. The default is not to reduce
reduceLRPatience
: Number of epochs after which the learning rate is reduced if the validation-set score has not improved. The default is not to reduce
reduceLRMinLR
: Minimum level to which the learning rate may be reduced, in case it is being reduced
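The residual wiring described by residual-cells can be sketched as a plain NumPy forward pass. This is only an illustration of the skip-connection semantics under stated assumptions (ELU activations, equal-width hidden layers); the actual network implementation may differ.

```python
import numpy as np

rng = np.random.default_rng(111)            # --seed=111
exponent, layers, residual_cells = 6, 9, 1  # from --exponent/--layers/--residual-cells
width = 2 ** exponent                       # --exponent=6 -> 64 neurons per layer

def elu(x):
    # Assumed activation, for illustration only
    return np.where(x > 0, x, np.exp(np.minimum(x, 0)) - 1.0)

def residual_mlp(x, weights, biases, residual_cells=1):
    """Forward pass where every `residual_cells` hidden layers the block
    input is added back to the block output (a skip connection).  With
    residual_cells=1 each layer's input is added to its own output; with
    residual_cells=2 the input of layer K is added to the output of
    layer K+1."""
    h = x
    block_input = h
    for k, (W, b) in enumerate(zip(weights, biases), start=1):
        h = elu(h @ W + b)
        if k % residual_cells == 0:
            h = h + block_input     # skip connection closes the block
            block_input = h
    return h

# Hidden-to-hidden weights only, so all shapes match for the skip additions
weights = [rng.normal(0.0, 0.05, (width, width)) for _ in range(layers)]
biases = [np.zeros(width) for _ in range(layers)]

x = rng.normal(size=(4, width))             # a dummy batch of 4 inputs
y = residual_mlp(x, weights, biases, residual_cells)
print(y.shape)                              # (4, 64)
```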
python swaption_test.py --gbp-to-h5