From 5a8614792ae983207cda1a724611a4684231279c Mon Sep 17 00:00:00 2001
From: Filippo Castelli <37574218+filippocastelli@users.noreply.github.com>
Date: Fri, 1 Feb 2019 14:42:55 +0100
Subject: [PATCH] Update README.md

---
 README.md | 53 +++++++++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 47 insertions(+), 6 deletions(-)

diff --git a/README.md b/README.md
index bd19889..504f12c 100644
--- a/README.md
+++ b/README.md
@@ -1,13 +1,16 @@
 # GPnet
 Gaussian Process Regression and Classification on Graphs
 
-Code still in alpha concept proofing and is based on the code from Chapter 18 of Machine Learning: An Algorithmic Perspective (2nd Edition) by Stephen Marsland, implements the algorithms in Chapters 2 and 3 of Gaussian Processes for Machine Learning by C.E. Rasmussen.
+The code is loosely inspired by Chapter 18 of Machine Learning: An Algorithmic Perspective (2nd Edition) by Stephen Marsland and implements the algorithms in Chapters 2 and 3 of Gaussian Processes for Machine Learning by C.E. Rasmussen.
 
 `GPnet` at the moment exposes two classes: `GPnetRegressor` and `GPnetClassifier`
 
 * **GPnetRegressor** provides basic regression functionality for functions defined on graph nodes
-* **GPnetClassfier** provides classification of test nodes, given a set of -1/+1 labels for the training nodes
+* **GPnetClassifier** provides classification of test nodes, given a set of -1/+1 labels for the training nodes
 
-The only kernel available at the moment is a _squared exponential_ kernel, more kernels and custom kernel composition will be added in the future.
+The only kernel available at the moment is a _squared exponential_ kernel; more kernels and custom kernel composition will be added in the future.
+
+**Update**:
+kernel composition is already possible via custom function definitions (see the sketches under Examples below); "soft" kernel composition is yet to be implemented.
 
 Some basic parameter optimization is provided by `scipy.optimize`.
 
@@ -15,6 +18,44 @@ Some basic parameter optimization is provided by `scipy.optimize`.
-GPnet demos require `networkx`, `pandas` , `numpy`, `matplotlib`, `scipy.optimize`, and `random` to work: nothing too exotic.
+GPnet demos require `networkx`, `pandas`, `numpy`, `matplotlib`, `scipy.optimize`, and `random` to work: nothing too exotic.
 
 ## What's the future of this repo?
-There's still much work to do, and that includes
-* including support for multiple and custom-defined kernels
+Possible future work directions include:
+* support for multiple and custom-defined kernels
 * fixing numerical issues with covariance matrix estimation and parameter optimization
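+
+## Examples
+A minimal, library-agnostic sketch of kernel composition via custom function definitions is shown below; the choice of a kernel signature `k(d)` acting on a matrix of node-to-node distances is an assumption made for illustration, not necessarily GPnet's actual interface.
+
+```python
+import numpy as np
+
+# Squared exponential kernel over a matrix d of node-to-node distances
+# (e.g. shortest-path lengths on the graph).
+def sq_exp(d, lengthscale=1.0, variance=1.0):
+    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)
+
+# Composition via a plain function definition: sums and positive scalings
+# of valid kernels are themselves valid kernels.
+def composed_kernel(d):
+    return sq_exp(d, lengthscale=0.5) + 2.0 * sq_exp(d, lengthscale=3.0)
+```
+
+A function like `composed_kernel` can then be passed wherever a kernel function is expected; "soft" composition would instead treat the mixture weights as hyperparameters to be optimized.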
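+
+Parameter optimization with `scipy.optimize` typically amounts to minimizing the negative log marginal likelihood of the training data (Rasmussen & Williams, Algorithm 2.1). The generic sketch below uses variable names chosen for illustration rather than taken from GPnet's internals.
+
+```python
+import numpy as np
+from scipy.optimize import minimize
+
+# Negative log marginal likelihood for theta = log(lengthscale, variance, noise);
+# D is the matrix of pairwise node distances, y the vector of training targets.
+def neg_log_marginal_likelihood(theta, D, y):
+    lengthscale, variance, noise = np.exp(theta)  # log-parameters stay positive
+    K = variance * np.exp(-0.5 * (D / lengthscale) ** 2) + noise * np.eye(len(y))
+    L = np.linalg.cholesky(K)                            # K = L @ L.T
+    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # alpha = K^-1 y
+    return (0.5 * y @ alpha + np.log(np.diag(L)).sum()
+            + 0.5 * len(y) * np.log(2 * np.pi))
+
+# theta_opt = minimize(neg_log_marginal_likelihood, np.zeros(3), args=(D, y)).x
+```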