diff --git a/04 Strategy Library/00 Strategy Library/01 Strategy Library.php b/04 Strategy Library/00 Strategy Library/01 Strategy Library.php index 963636c..77d790e 100644 --- a/04 Strategy Library/00 Strategy Library/01 Strategy Library.php +++ b/04 Strategy Library/00 Strategy Library/01 Strategy Library.php @@ -705,6 +705,15 @@ ], 'description' => "Analyzes the news releases of drug manufacturers and places intraday trades for the stocks with positive news.", 'tags'=>'Equities, NLP, News Sentiment, Drug Manufacturers, Tiingo, Intraday' + ], + [ + 'name' => 'Deep Learning Portfolio Optimization', + 'link' => 'strategy-library/deep-learning-portfolio-optimization', + 'sources' => [ + 'arXiv' => 'https://arxiv.org/abs/1812.04199' + ], + 'description' => "Uses a deep neural network to determine the ETF allocation ratios that maximize the portfolio's Sharpe Ratio.", + 'tags'=>'ETFs, Deep Learning, LSTM, Portfolio Optimization, Sharpe Ratio' + ] ]; diff --git a/04 Strategy Library/1035 Deep Learning Portfolio Optimization/01 Abstract.html b/04 Strategy Library/1035 Deep Learning Portfolio Optimization/01 Abstract.html new file mode 100644 index 0000000..5711df2 --- /dev/null +++ b/04 Strategy Library/1035 Deep Learning Portfolio Optimization/01 Abstract.html @@ -0,0 +1,3 @@ +

+ In this tutorial, we apply Deep Learning to optimize a portfolio of various ETFs. +

diff --git a/04 Strategy Library/1035 Deep Learning Portfolio Optimization/02 Introduction.html b/04 Strategy Library/1035 Deep Learning Portfolio Optimization/02 Introduction.html new file mode 100644 index 0000000..d09fcce --- /dev/null +++ b/04 Strategy Library/1035 Deep Learning Portfolio Optimization/02 Introduction.html @@ -0,0 +1,2 @@ +

Portfolio Optimization is the process of choosing asset allocation ratios that produce the best portfolio according to some objective. For example, an investor looking to minimize portfolio risk can set out to minimize a risk metric such as Beta or Value at Risk. In our Optimal Pairs Trading algorithm, the objective was to maximize the fit of a portfolio of GLD and SLV to an Ornstein-Uhlenbeck process, which we measured using Maximum Likelihood Estimation. The strategy we discuss today seeks to maximize the Sharpe Ratio in order to achieve desirable risk-adjusted returns. However, investors should note that since the metric is fit to past data, there is no guarantee that the portfolio will behave similarly in the future. This can be partially addressed by choosing metrics and a combination of assets that lead to strong autocorrelation in the metric values, but we leave this as a suggestion for future research.
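As a quick illustration of this objective (a standalone sketch, not part of the strategy code), the Sharpe Ratio of a return series can be computed as the mean return divided by the standard deviation of the returns, with the risk-free rate assumed to be zero, as in the loss function we define later:

+import numpy as np
+
+# hypothetical daily portfolio returns, for illustration only
+returns = np.array([0.001, -0.002, 0.003, 0.0005, -0.001])
+
+# Sharpe Ratio with a risk-free rate of zero
+sharpe = returns.mean() / returns.std()
+print(sharpe)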

+

Moving on, there are a few ways to determine the optimal asset allocations. The simplest is to test all possible combinations of asset allocation ratios, discretized to values between 0 and 1. To illustrate, say we discretize the allocation ratios using an increment of 0.01 and are testing two assets, A1 and A2: we'd first test 1.00 for A1 and 0.00 for A2 and compute the objective, then test 0.99 for A1 and 0.01 for A2 and compute the objective, and repeat until the objective values for all combinations have been calculated. Then, we choose the allocation ratios that yield the best objective value. However, this brute-force method is quite slow, and the runtime grows exponentially with each added asset. Thus, several alternative methods have been developed to tackle optimization, such as Newton and Quasi-Newton algorithms. In our strategy, we leverage a Deep Neural Network with the Adam optimizer to determine the asset allocations that optimize the Sharpe Ratio.
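To make the brute-force approach concrete, here is a minimal sketch (for illustration only, with made-up return data) that enumerates allocation ratios for two assets in increments of 0.01 and keeps the pair that yields the best Sharpe Ratio:

+import numpy as np
+
+np.random.seed(1)
+# hypothetical daily returns for two assets, A1 and A2
+returns = np.random.normal(0.0005, 0.01, size=(250, 2))
+
+best_sharpe, best_weights = -np.inf, None
+for w1 in np.arange(0, 1.01, 0.01):   # discretized allocation for A1
+    w2 = 1 - w1                       # remaining allocation goes to A2
+    portfolio_returns = returns @ np.array([w1, w2])
+    sharpe = portfolio_returns.mean() / portfolio_returns.std()
+    if sharpe > best_sharpe:
+        best_sharpe, best_weights = sharpe, (w1, w2)
+
+print(best_weights, best_sharpe)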

diff --git a/04 Strategy Library/1035 Deep Learning Portfolio Optimization/03 Method.html b/04 Strategy Library/1035 Deep Learning Portfolio Optimization/03 Method.html new file mode 100644 index 0000000..048cc55 --- /dev/null +++ b/04 Strategy Library/1035 Deep Learning Portfolio Optimization/03 Method.html @@ -0,0 +1,127 @@ +

+ Let’s start by importing the necessary packages. We use Keras to build the Model, and we require NumPy for a few manipulations of our data, so our imports are the following: +

+ +
+
+import numpy as np
+np.random.seed(1)
+import tensorflow as tf
+from tensorflow.keras.layers import LSTM, Flatten, Dense
+from tensorflow.keras.models import Sequential
+import tensorflow.keras.backend as K
+
+
+ +

np.random.seed(1) isn't actually an import; we call it early, before the TensorFlow and Keras imports, to help make our results reproducible.

+ +

We initialize our Model class with the following:

+ +
+
+class Model:
+    def __init__(self):
+        self.data = None
+        self.model = None
+
+
+ +

self.model is where we will store our Keras model, and self.data will be explained later; for now, think of it as a matrix of our asset prices stored in a TensorFlow Tensor object.

+ +

Now inside our Model’s __build_model function, we first build the Keras Neural Network:

+ +
+
+def __build_model(self, input_shape, outputs):
+    model = Sequential([
+                LSTM(64, input_shape=input_shape),
+                Flatten(),
+                Dense(outputs, activation='softmax')
+            ])
+
+
+ +

The outputs of our Neural Network are the allocation ratios of the assets. Our first layer is an LSTM, as LSTMs work well with sequential data such as financial time-series. Since our input is not flat, we add a Flatten layer before the output layer. The output layer is a standard Dense layer with Softmax activation, so that the allocation ratios are non-negative and add up to 1.
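As a quick sanity check (a standalone sketch with made-up dimensions, not part of the tutorial code), we can build the same stack of layers on dummy data and confirm that the softmax outputs behave like allocation ratios that sum to 1:

+import numpy as np
+from tensorflow.keras.layers import LSTM, Flatten, Dense
+from tensorflow.keras.models import Sequential
+
+# dummy input: 1 sample of 50 time steps with 8 features (e.g. 4 assets, prices + returns)
+dummy = np.random.rand(1, 50, 8).astype(np.float32)
+
+model = Sequential([
+            LSTM(64, input_shape=(50, 8)),
+            Flatten(),
+            Dense(4, activation='softmax')   # one allocation ratio per asset
+        ])
+
+allocations = model.predict(dummy)[0]
+print(allocations, allocations.sum())   # non-negative ratios summing to ~1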

+

Then, since we are optimizing the Sharpe Ratio, we need to create a custom loss function that computes the Sharpe Ratio:

+ +
+
+def sharpe_loss(_, y_pred):
+    # make all time-series start at 1
+    data = tf.divide(self.data, self.data[0])
+
+    # value of the portfolio after allocations applied
+    portfolio_values = tf.reduce_sum(tf.multiply(data, y_pred), axis=1)
+
+    portfolio_returns = (portfolio_values[1:] - portfolio_values[:-1]) / portfolio_values[:-1]  # % change formula
+
+    sharpe = K.mean(portfolio_returns) / K.std(portfolio_returns)
+
+    # since we want to maximize Sharpe, while gradient descent minimizes the loss,
+    #   we can negate Sharpe (the min of a negated function is its max)
+    return -sharpe
+
+
+ +

This uses the values of y_pred as the allocation ratios for the assets, weights the time-series data according to those allocations, and computes the portfolio values. Then, we calculate the Sharpe Ratio from the resulting portfolio returns. We negate the Sharpe value before returning it because gradient descent minimizes the loss, so minimizing the negative of our function maximizes it. Note that we do not use the first argument, the “true y values” parameter, because there are no true labels here; the network is not being trained to predict anything, only to produce allocations that maximize the Sharpe Ratio.
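To see what the loss computes numerically, here is a small NumPy re-implementation of the same steps with made-up prices and a fixed allocation (a sketch for illustration only):

+import numpy as np
+
+# made-up closing prices for two assets over five days
+prices = np.array([[100.0, 50.0],
+                   [101.0, 49.5],
+                   [102.0, 50.5],
+                   [100.5, 51.0],
+                   [103.0, 51.5]])
+
+weights = np.array([0.6, 0.4])   # a hypothetical allocation that sums to 1
+
+data = prices / prices[0]                        # make all time-series start at 1
+portfolio_values = (data * weights).sum(axis=1)  # value of the weighted portfolio
+portfolio_returns = np.diff(portfolio_values) / portfolio_values[:-1]
+
+sharpe = portfolio_returns.mean() / portfolio_returns.std()
+print(-sharpe)   # the loss is the negated Sharpe Ratio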

+ +

Finally, we need to compile our Neural Network using our custom-defined loss along with the Adam optimizer:

+ +
+
+model.compile(loss=sharpe_loss, optimizer='adam')
+return model
+
+
+ +

Then, we need to get the ratios computed from this model when we feed in data, and we put this functionality inside the get_allocations method, where data is a pandas DataFrame of closing prices for the different assets:

+ +
+
+def get_allocations(self, data):
+
+
+ +

The features for the model are the original closing price time-series for each of the assets, as well as the daily returns computed from this data:

+ +
+
+data_w_ret = np.concatenate([ data.values[1:], data.pct_change().values[1:] ], axis=1)
+
+
+ +

Computing the returns causes the first row to be NaNs, so we skip the first row of the returns data; to keep the two arrays the same shape, we skip the first row of the original price data as well.
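For example (an illustration with made-up prices), pct_change leaves NaNs in the first row because there is no prior price to compare against:

+import pandas as pd
+
+prices = pd.DataFrame({'A': [100.0, 101.0, 99.0], 'B': [50.0, 50.5, 51.0]})
+print(prices.pct_change())
+#           A         B
+# 0       NaN       NaN
+# 1  0.010000  0.010000
+# 2 -0.019802  0.009901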

+ +

Then, we also need to save the closing prices to self.data so we can use it to compute the Sharpe Ratio in the custom loss function we defined earlier:

+ +
+
+data = data.iloc[1:]
+self.data = tf.cast(tf.constant(data), float)
+
+
+ +

To remain consistent with the features above, we remove the first row of the data. When we store this data inside self.data, we convert it into a TensorFlow Tensor and cast it to float so it is compatible with the matrix operations we perform inside the Sharpe loss function.

+

Then, to call the function we created earlier to build and compile our model, we use:

+ +
+
+if self.model is None:
+    self.model = self.__build_model(data_w_ret.shape, len(data.columns))
+
+
+ +

We delay building and compiling the Neural Network until the first call to get_allocations because the model parameters, namely the input shape and the number of outputs, can then be determined directly from the data.

+

Then to compute and return the allocation ratios, we use the following:

+ +
+
+fit_predict_data = data_w_ret[np.newaxis,:]
+self.model.fit(fit_predict_data, np.zeros((1, len(data.columns))), epochs=20, shuffle=False)
+return self.model.predict(fit_predict_data)[0]
+
+
+ +

Note that we pass in zeros for the “true y values”. As long as the size of a row matches the size of the outputs, the values we pass in for this don’t matter, because as we explained earlier, we don’t use these values in our custom loss function.

+


The rest of the algorithm is quite simple. We pass a DataFrame of the past 51 closing prices of the different assets to our Model's get_allocations method and call SetHoldings(asset symbol, allocation) on each asset with its computed allocation.
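For context, the following is a minimal sketch of how the model might be wired into a QCAlgorithm. The ETF tickers, rebalancing schedule, and class name here are illustrative assumptions rather than the exact tutorial implementation:

+class DeepLearningPortfolioOptimization(QCAlgorithm):
+    def Initialize(self):
+        self.SetStartDate(2015, 10, 1)
+        self.SetEndDate(2020, 10, 1)
+        self.SetCash(100000)
+
+        # illustrative ETF universe; the tutorial's exact tickers may differ
+        self.symbols = [self.AddEquity(ticker, Resolution.Daily).Symbol
+                        for ticker in ["SPY", "TLT", "GLD"]]
+        self.model = Model()
+
+        # rebalance weekly, shortly after the market opens (an assumed schedule)
+        self.Schedule.On(self.DateRules.WeekStart("SPY"),
+                         self.TimeRules.AfterMarketOpen("SPY", 30),
+                         self.Rebalance)
+
+    def Rebalance(self):
+        # past 51 daily closing prices, one column per asset
+        history = self.History(self.symbols, 51, Resolution.Daily)
+        if history.empty:
+            return
+        closes = history['close'].unstack(level=0)
+
+        allocations = self.model.get_allocations(closes)
+        for symbol, allocation in zip(closes.columns, allocations):
+            self.SetHoldings(symbol, float(allocation))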

+ diff --git a/04 Strategy Library/1035 Deep Learning Portfolio Optimization/04 Algorithm.html b/04 Strategy Library/1035 Deep Learning Portfolio Optimization/04 Algorithm.html new file mode 100644 index 0000000..3c1c807 --- /dev/null +++ b/04 Strategy Library/1035 Deep Learning Portfolio Optimization/04 Algorithm.html @@ -0,0 +1,6 @@ +
+
+
+ +
+
diff --git a/04 Strategy Library/1035 Deep Learning Portfolio Optimization/05 Results.html b/04 Strategy Library/1035 Deep Learning Portfolio Optimization/05 Results.html new file mode 100644 index 0000000..ec078e2 --- /dev/null +++ b/04 Strategy Library/1035 Deep Learning Portfolio Optimization/05 Results.html @@ -0,0 +1,3 @@ +

+ We benchmark the performance of this algorithm against the S&P 500 index, which we track using SPY. Over the five years from October 2015 to October 2020, the algorithm achieved a Sharpe Ratio of 1.032, while SPY achieved a Sharpe Ratio of 0.769 over the same period. Furthermore, our algorithm had a relatively small drawdown during the March 2020 selloff. +

\ No newline at end of file diff --git a/04 Strategy Library/1035 Deep Learning Portfolio Optimization/06 References.html b/04 Strategy Library/1035 Deep Learning Portfolio Optimization/06 References.html new file mode 100644 index 0000000..b25a29c --- /dev/null +++ b/04 Strategy Library/1035 Deep Learning Portfolio Optimization/06 References.html @@ -0,0 +1,5 @@ +
    +
  1. + Zihao Zhang, Stefan Zohren, & Stephen Roberts. (2020). Deep Learning for Portfolio Optimisation. Online Copy. +
  2. +
\ No newline at end of file