Simkit

A generalized framework for generative and probabilistic modelling for training reinforcement learning agents in TensorFlow 2.

Many pricing and decision-making problems at the core of Grab's ride-hailing and deliveries business can be formulated as reinforcement learning problems, involving interactions among millions of passengers, drivers, and merchants across more than 65 cities in Southeast Asia.

Usage

from data.synthetic import get_normal_data, plot_data
from model.gmm import GMM

# Generate 4 clusters of 1000 normally distributed synthetic data points
X, y = get_normal_data(1000, plot=True)

# Fit a Gaussian Mixture Density Network
gmm = GMM(x_features=2,
          y_features=1,
          n_components=32,
          n_hidden=32)
gmm.fit(X, y, epochs=20000)

# Predict y given X
y_hat = gmm.predict(X)
plot_data(X, y_hat)

Models

Conditional Generative Feature Models

Feature models are used in reinforcement learning to generate the features that represent the environment state during agent-environment interactions.

Gaussian Mixture Density Network

The Gaussian Mixture Density Network uses a neural network to predict the parameters (mixture weights, means, and variances) that define a Gaussian mixture model over the target variable.
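A minimal sketch of this idea using TensorFlow Probability, where a small network outputs per-component logits, means, and scales; the layer sizes and names are illustrative, not Simkit's actual implementation:

import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

def make_mdn(x_features=2, n_components=32, n_hidden=32):
    inputs = tf.keras.Input(shape=(x_features,))
    h = tf.keras.layers.Dense(n_hidden, activation="relu")(inputs)
    # One logit, mean, and scale per mixture component.
    logits = tf.keras.layers.Dense(n_components)(h)
    means = tf.keras.layers.Dense(n_components)(h)
    scales = tf.keras.layers.Dense(n_components, activation="softplus")(h)
    return tf.keras.Model(inputs, [logits, means, scales])

def mdn_distribution(logits, means, scales):
    # Mixture of univariate Gaussians; training minimises the
    # negative log-likelihood -log p(y | x) under this distribution.
    return tfd.MixtureSameFamily(
        mixture_distribution=tfd.Categorical(logits=logits),
        components_distribution=tfd.Normal(loc=means, scale=scales))

Sampling from the returned distribution generates target values conditioned on the input features.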

Conditional Generative Adversarial Network

The Conditional Generative Adversarial Network consists of a generator network that produces candidate features and a discriminator network that evaluates them, both conditioned on parent features and trained adversarially against each other.
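A minimal sketch of the generator/discriminator pair in Keras, with both networks receiving the conditioning features as an extra input; dimensions and names are illustrative assumptions:

import tensorflow as tf

def make_generator(noise_dim, cond_dim, out_dim, n_hidden=64):
    noise = tf.keras.Input(shape=(noise_dim,))
    cond = tf.keras.Input(shape=(cond_dim,))
    h = tf.keras.layers.Concatenate()([noise, cond])
    h = tf.keras.layers.Dense(n_hidden, activation="relu")(h)
    out = tf.keras.layers.Dense(out_dim)(h)  # candidate features
    return tf.keras.Model([noise, cond], out)

def make_discriminator(x_dim, cond_dim, n_hidden=64):
    x = tf.keras.Input(shape=(x_dim,))
    cond = tf.keras.Input(shape=(cond_dim,))
    h = tf.keras.layers.Concatenate()([x, cond])
    h = tf.keras.layers.Dense(n_hidden, activation="relu")(h)
    # Probability that the (sample, condition) pair is real.
    p = tf.keras.layers.Dense(1, activation="sigmoid")(h)
    return tf.keras.Model([x, cond], p)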

Probabilistic Response Models

Response models are used in reinforcement learning to model rewards as distributions rather than point estimates, enabling stable learning of the agent when responses are spiky.

Bayesian Neural Network

The Bayesian Neural Network is a neural network whose weights are assigned probability distributions to estimate uncertainty; it is trained using variational inference.
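A minimal sketch using TensorFlow Probability's DenseFlipout layers, which learn a variational posterior over the weights; the layer sizes are assumptions, not Simkit's implementation:

import tensorflow as tf
import tensorflow_probability as tfp

def make_bnn(x_features, n_hidden=32):
    model = tf.keras.Sequential([
        tfp.layers.DenseFlipout(n_hidden, activation="relu",
                                input_shape=(x_features,)),
        tfp.layers.DenseFlipout(1),
    ])
    # Each DenseFlipout layer adds the KL divergence between the weight
    # posterior and prior to model.losses, so fitting with a negative
    # log-likelihood loss performs variational inference.
    return model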

Monte Carlo Dropout

Monte Carlo Dropout keeps dropout active at prediction time; averaging many stochastic forward passes has been shown to approximate Bayesian inference.
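A minimal sketch, assuming a Keras model that contains dropout layers: run the model with training=True so each pass samples a different dropout mask, then summarise the passes:

import numpy as np

def mc_dropout_predict(model, X, n_samples=100):
    # Each forward pass samples a different dropout mask.
    preds = np.stack([model(X, training=True).numpy()
                      for _ in range(n_samples)])
    # Mean prediction and per-point uncertainty estimate.
    return preds.mean(axis=0), preds.std(axis=0)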

Deep Ensemble

The Deep Ensemble is an ensemble of randomly initialised neural networks, which has been reported to outperform Bayesian neural networks in practice.
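A minimal sketch, assuming a hypothetical make_model factory that returns a fresh compiled Keras model: train several independently initialised copies and aggregate their predictions:

import numpy as np

def fit_ensemble(make_model, X, y, n_members=5, epochs=100):
    members = []
    for _ in range(n_members):
        m = make_model()  # fresh random initialisation per member
        m.fit(X, y, epochs=epochs, verbose=0)
        members.append(m)
    return members

def ensemble_predict(members, X):
    preds = np.stack([m.predict(X) for m in members])
    # Ensemble mean and disagreement across members as uncertainty.
    return preds.mean(axis=0), preds.std(axis=0)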

Utilities

Performance Metrics

The performance metrics are the Kullback-Leibler divergence and the Jensen-Shannon divergence, computed by splitting the data into histogram bins, as sketched below.

Kullback-Leibler Divergence

Jensen-Shannon Divergence
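Both divergences compare binned empirical distributions: KL(P || Q) = sum_i p_i * log(p_i / q_i), and JS(P, Q) = 0.5 * KL(P || M) + 0.5 * KL(Q || M) with M = 0.5 * (P + Q). A minimal sketch assuming shared bins over both samples, not Simkit's actual implementation:

import numpy as np

def histogram_divergences(p_samples, q_samples, bins=50, eps=1e-10):
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p, _ = np.histogram(p_samples, bins=bins, range=(lo, hi))
    q, _ = np.histogram(q_samples, bins=bins, range=(lo, hi))
    # Smooth and normalise so every bin has non-zero probability.
    p = (p + eps) / (p + eps).sum()
    q = (q + eps) / (q + eps).sum()
    kl = np.sum(p * np.log(p / q))  # KL(P || Q)
    m = 0.5 * (p + q)
    js = 0.5 * np.sum(p * np.log(p / m)) + 0.5 * np.sum(q * np.log(q / m))
    return kl, js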

Performance Visualisation

The visualisation tools implemented include the probability density surface plot, which visualises the probability density at each coordinate, and the grid violin relative density plot, which uses histograms to compare the relative densities of the actual data and the data generated by the fitted model.
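A minimal sketch of a probability density surface plot with matplotlib, assuming a hypothetical density_fn callable that returns the model's density at each 2D coordinate:

import numpy as np
import matplotlib.pyplot as plt

def plot_density_surface(density_fn, lim=5.0, n=200):
    xs = np.linspace(-lim, lim, n)
    ys = np.linspace(-lim, lim, n)
    gx, gy = np.meshgrid(xs, ys)
    coords = np.stack([gx.ravel(), gy.ravel()], axis=1)
    z = density_fn(coords).reshape(n, n)  # density at each grid point
    plt.contourf(gx, gy, z, levels=50, cmap="viridis")
    plt.colorbar(label="probability density")
    plt.show()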

Hyperparameter Optimisation using Ax

Hyperparameter optimisation is implemented with Bayesian optimisation in the Ax framework. From noisy observations of previous rounds of parameterizations, Ax builds a smooth Gaussian-process surrogate model of outcomes, predicts performance at unobserved parameterizations, and tunes parameters in fewer iterations than grid search or other global optimisation techniques.
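A minimal sketch of Ax's managed optimisation loop; train_and_score is a hypothetical helper that trains a model with the given hyperparameters and returns the divergence to minimise, and the search space is an illustrative assumption:

from ax.service.managed_loop import optimize

def evaluate(params):
    # `train_and_score` is hypothetical: it fits a model with these
    # hyperparameters and returns a divergence metric to minimise.
    score = train_and_score(n_components=params["n_components"],
                            n_hidden=params["n_hidden"])
    # Ax expects {metric_name: (mean, standard_error)}.
    return {"divergence": (score, 0.0)}

best_parameters, values, experiment, model = optimize(
    parameters=[
        {"name": "n_components", "type": "range", "bounds": [4, 64]},
        {"name": "n_hidden", "type": "range", "bounds": [8, 128]},
    ],
    evaluation_function=evaluate,
    objective_name="divergence",
    minimize=True,
)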

References

  1. Lin, Mei, and Christopher William Dula. "GrabTaxi: Navigating new frontiers." (2016).
  2. Sutton, Richard S., and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 2018.
  3. Bishop, Christopher M. "Mixture density networks." (1994).
  4. Mirza, Mehdi, and Simon Osindero. "Conditional generative adversarial nets." arXiv preprint arXiv:1411.1784 (2014).
  5. Blundell, Charles, et al. "Weight uncertainty in neural networks." arXiv preprint arXiv:1505.05424 (2015).
  6. Gal, Yarin, and Zoubin Ghahramani. "Dropout as a Bayesian approximation: Representing model uncertainty in deep learning." International Conference on Machine Learning, 2016.
  7. Lakshminarayanan, Balaji, Alexander Pritzel, and Charles Blundell. "Simple and scalable predictive uncertainty estimation using deep ensembles." Advances in Neural Information Processing Systems, 2017.
  8. Fort, Stanislav, Huiyi Hu, and Balaji Lakshminarayanan. "Deep ensembles: A loss landscape perspective." arXiv preprint arXiv:1912.02757 (2019).
  9. Chang, Daniel T. "Bayesian hyperparameter optimization with BoTorch, GPyTorch and Ax." arXiv preprint arXiv:1912.05686 (2019).
  10. Dataset at https://www.kaggle.com/aungpyaeap/supermarket-sales.
  11. Dataset at https://www.kaggle.com/binovi/wholesale-customers-data-set.
  12. Dillon, Joshua V., et al. "TensorFlow distributions." arXiv preprint arXiv:1711.10604 (2017).
