The main goal of this would be to allow backtesting runs on a GPU, and in particular to allow running backtests on online Jupyter instances that offer GPU computing, like Colab or Kaggle.
Since we mostly target online notebooks, and they all provide NVIDIA GPUs, we would use CUDA.jl.
Making the backtest GPU-compatible mostly means building structure adaptors for these types:

- `Strategy`
- `Exchange`
- `AssetInstance`
- `DataFrame`
- `Context`
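As a rough illustration of what such an adaptor could look like, here is a minimal sketch (all field and function names are assumptions for illustration, not the real backtest types): the idea is to replace heap-heavy, GPU-hostile structures (a `DataFrame`, dictionaries, Python references) with flat structs of dense arrays that CUDA.jl can move to the device.

```julia
# Illustrative only: GPUAssetData and to_gpu_data are hypothetical names,
# not types from the actual codebase.
# A GPU adaptor replaces heap-heavy types (DataFrame columns, Dicts,
# Python object references) with a flat struct of dense numeric arrays.
struct GPUAssetData{A<:AbstractVector{Float64}}
    open::A
    high::A
    low::A
    close::A
end

# Build the flat form from plain columns (in practice these would come
# from the DataFrame backing an AssetInstance). With CUDA.jl loaded, one
# would then move each vector to the device, e.g. via Adapt.jl/`CuArray`.
to_gpu_data(open, high, low, close) =
    GPUAssetData(collect(open), collect(high), collect(low), collect(close))

d = to_gpu_data(rand(8), rand(8), rand(8), rand(8))
```

Keeping the adaptor a parametric struct over the array type means the same definition works for host `Vector`s and device `CuArray`s.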
The `Exchange` type is a wrapper around a ccxt exchange. Calling Python from a strategy is already discouraged, and the backtest itself never calls any Python code, so this is not an issue per se; however, it should be made abundantly clear in the documentation that a `GPUStrategy` cannot use Python code.
This is, however, not enough: to achieve parallelism we have to run one backtest per GPU core, which means the strategy has to be implemented as a kernel. In particular, the main `ping!` function, and whatever functions it calls, need to be kernel-compatible. Of course, we should provide example implementations and warn about the complexities of GPU programming.
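To make the constraints concrete, here is a hedged sketch of what a per-thread backtest body might look like (the SMA-threshold strategy and the `backtest_pnl` name are made up for illustration; this is not the real `ping!`). The key point is that the body must be allocation-free and type-stable, since that is what CUDA.jl kernels require; each GPU thread would then run one parameter combination.

```julia
# Hypothetical per-thread backtest body. Each GPU thread would run one
# full backtest for one parameter combination. The loop is written to be
# allocation-free and type-stable, as CUDA.jl device code requires.
# The strategy (buy above a rolling mean, sell below it) is illustrative.
function backtest_pnl(prices::AbstractVector{Float64}, window::Int)
    cash = 1000.0   # starting balance
    units = 0.0     # asset held
    s = 0.0         # running sum for the rolling mean
    @inbounds for i in eachindex(prices)
        s += prices[i]
        i > window && (s -= prices[i - window])  # keep a `window`-wide sum
        i < window && continue
        ma = s / window
        if prices[i] > ma && units == 0.0        # buy signal
            units = cash / prices[i]; cash = 0.0
        elseif prices[i] < ma && units > 0.0     # sell signal
            cash = units * prices[i]; units = 0.0
        end
    end
    return cash + units * prices[end]            # mark-to-market PnL
end

# On the GPU, the loop over parameter sets becomes the thread grid,
# launched with something like `@cuda threads=... kernel!(results, ...)`.
# Here we emulate it serially on the CPU:
prices = abs.(cumsum(randn(500))) .+ 100.0
results = [backtest_pnl(prices, w) for w in 5:5:50]
```

Anything the kernel touches (the price arrays, the context, the parameter grid) must already be in the flat, device-resident form the adaptors provide; any dynamic dispatch, allocation, or Python call inside this body would fail GPU compilation.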