`graphdatascience` is a Python client for operating and working with the Neo4j Graph Data Science (GDS) library.
It enables users to write pure Python code to project graphs, run algorithms, and define and use machine learning pipelines in GDS.
The API is designed to mimic the GDS Cypher procedure API in Python code. It abstracts the necessary operations of the Neo4j Python driver to offer a simpler surface.
Please leave any feedback as issues on the source repository. Happy coding!
This is a work in progress and some GDS features are known to be missing or not working properly (see Known limitations below). Further, this library targets GDS versions 2.0+ (not yet released) and as such may not work with older versions.
To install the latest deployed version of `graphdatascience`, simply run:

```bash
pip install graphdatascience
```
What follows is a high-level description of some of the operations supported by `graphdatascience`.
For extensive documentation of all capabilities, please refer to the Python client chapter of the GDS Manual.
Extensive end-to-end examples, as ready-to-run Jupyter notebooks, can be found in the `examples` source directory.
The library wraps the Neo4j Python driver with a `GraphDataScience` object through which most calls to GDS will be made.

```python
from graphdatascience import GraphDataScience

# Use Neo4j URI and credentials according to your setup
gds = GraphDataScience("bolt://localhost:7687", auth=None)
```
There's also a method `GraphDataScience.from_neo4j_driver` for instantiating the `gds` object directly from a Neo4j driver object.
If we don't want to use the default database of our DBMS, we can specify which one to use:
```python
gds.set_database("my-db")
```
If you are connecting the client to an AuraDS instance, you can get recommended non-default configuration settings of the Python Driver applied automatically.
To achieve this, set the constructor argument `aura_ds=True`:

```python
from graphdatascience import GraphDataScience

# Configures the driver with AuraDS-recommended settings
gds = GraphDataScience(
    "neo4j+s://my-aura-ds.databases.neo4j.io:7687",
    auth=("neo4j", "my-password"),
    aura_ds=True,
)
```
Supposing that we have some graph data in our Neo4j database, we can project the graph into memory.
```python
# Optionally we can estimate memory of the operation first
res = gds.graph.project.estimate("*", "*")
assert res["bytesMax"] < 1e12

G, res = gds.graph.project("graph", "*", "*")
assert res["projectMillis"] >= 0
```
The `G` that is returned here is a `Graph`, which on the client side represents the projection on the server side.
The analogous calls `gds.graph.project.cypher{,.estimate}` for Cypher-based projection are also supported.
We can take a projected graph, represented to us by a `Graph` object named `G`, and run algorithms on it.
```python
# Optionally we can estimate memory of the operation first (if the algo supports it)
res = gds.pageRank.mutate.estimate(G, tolerance=0.5, mutateProperty="pagerank")
assert res["bytesMax"] < 1e12

res = gds.pageRank.mutate(G, tolerance=0.5, mutateProperty="pagerank")
assert res["nodePropertiesWritten"] == G.node_count()
```
These calls take one positional argument and a number of keyword arguments depending on the algorithm.
The first (positional) argument is a `Graph`, and the keyword arguments map directly to the algorithm's configuration map.
The other algorithm execution modes, `stats`, `stream`, and `write`, are also supported via analogous calls. The `stream` mode call returns a list of dictionaries (with contents depending on the algorithm), which we can think of as a table, just as when using the Neo4j Python driver directly. The `mutate`, `stats`, and `write` mode calls, however, return a dictionary with metadata about the algorithm execution.
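For instance, a `stream` mode PageRank call could look like the following sketch (assuming a projected `Graph` named `G`; the tolerance value is arbitrary):

```python
# Stream PageRank results as a list of dictionaries, one per node
res = gds.pageRank.stream(G, tolerance=0.5)
for row in res:
    print(row["nodeId"], row["score"])
```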
The methods for doing topological link prediction are a bit different. Just like in the GDS procedure API, they do not take a graph as an argument, but rather two node references as positional arguments. And they simply return the similarity score of the prediction just made as a float, not a list of dictionaries.
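A hedged sketch, assuming the alpha tier topological link prediction functions are exposed the same way as in Cypher, and with made-up `Person` nodes:

```python
# Look up the two input nodes (see gds.find_node_id below)
node1 = gds.find_node_id(["Person"], {"name": "Alice"})
node2 = gds.find_node_id(["Person"], {"name": "Bob"})

# Returns the prediction's similarity score as a float
score = gds.alpha.linkprediction.adamicAdar(node1, node2)
```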
Some of the methods for computing similarity are also different.
These functions take two positional `List[float]` vectors as input and return a similarity score.
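For example, assuming the client mirrors the GDS `gds.similarity` Cypher functions:

```python
# Cosine similarity of two vectors, returned as a float
score = gds.similarity.cosine([0.5, 1.0, 1.5], [1.0, 2.0, 3.0])
```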
The procedures that don't take a graph name as input (but only a configuration map) in the GDS API are represented by methods that only take keyword arguments mapping to the keys of their GDS configuration map.
In this library, graphs projected onto server-side memory are represented by `Graph` objects.
There are convenience methods on the `Graph` object that let us extract information about our projected graph.
Some examples are (where `G` is a `Graph`):
```python
# Get the graph's node count
n = G.node_count()

# Get a list of all relationship properties present on
# relationships of the type "myRelType"
rel_props = G.relationship_properties("myRelType")

# Drop the projection represented by G
G.drop()
```
In GDS, you can train machine learning models.
When doing this using `graphdatascience`, you can get a model object returned directly in the client.
The model object allows for convenient access to details about the model via Python methods.
It also offers the ability to directly compute predictions using the appropriate GDS procedure for that model.
This includes support for models trained using pipelines (for Link Prediction and Node Classification) as well as GraphSAGE models.
There's native support for Link prediction pipelines and Node classification pipelines. Apart from the call to create a pipeline, the GDS native pipeline calls are represented by methods on pipeline Python objects. In addition to the standard GDS calls, there are several methods for querying the pipeline for information about it.
Below is a minimal example for node classification (supposing we have a graph `G` with a property "myClass"):
```python
pipe, _ = gds.beta.pipeline.nodeClassification.create("myPipe")
assert pipe.type() == "Node classification training pipeline"

pipe.addNodeProperty("degree", mutateProperty="rank")
pipe.selectFeatures("rank")
steps = pipe.feature_properties()
assert len(steps) == 1
assert steps[0]["feature"] == "rank"

model, res = pipe.train(G, modelName="myModel", targetProperty="myClass", metrics=["ACCURACY"])
assert model.metrics()["ACCURACY"]["test"] > 0
assert res["trainMillis"] >= 0

res = model.predict_stream(G)
assert len(res) == G.node_count()
```
Link prediction works the same way, just with different method names for calls specific to that pipeline. Please see the GDS documentation for more on the pipelines' procedure APIs.
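As a hedged sketch, with made-up pipeline, property, and model names (the method names follow the GDS Link prediction pipeline procedures):

```python
pipe, _ = gds.beta.pipeline.linkPrediction.create("myLpPipe")

# Link features are combinations of node properties
pipe.addNodeProperty("degree", mutateProperty="rank")
pipe.addFeature("l2", nodeProperties=["rank"])

model, res = pipe.train(G, modelName="myLpModel")
```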
Assuming we have a graph `G` with node property `x`, we can do the following:
```python
model, res = gds.beta.graphSage.train(G, modelName="myModel", featureProperties=["x"])
assert len(model.metrics()["epochLosses"]) == model.metrics()["ranEpochs"]
assert res["trainMillis"] >= 0

res = model.predict_stream(G)
assert len(res) == G.node_count()
```
Note that with GraphSAGE we call the `train` method directly and supply all training configuration.
All procedures from the GDS Graph catalog are supported with `graphdatascience`.
Some examples are (where `G` is a `Graph`):
```python
res = gds.graph.list()
assert len(res) == 1  # Exactly one graph is projected

res = gds.graph.streamNodeProperties(G, "rank")
assert len(res) == G.node_count()
```
Further, there's a call named `gds.graph.get` (`graphdatascience` only).
It takes a graph name as input and returns a `Graph` object, if a graph projection of that name exists in the user's graph catalog.
The idea is to have a way of creating `Graph`s for already projected graphs, without having to do a new projection.
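For example, assuming a projection named "my-graph" already exists in the graph catalog:

```python
# Construct a client-side Graph object for an existing projection
G = gds.graph.get("my-graph")
assert G.name() == "my-graph"
```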
All procedures from the GDS Pipeline catalog are supported with `graphdatascience`.
Some examples are (where `pipe` is a machine learning training pipeline object):
```python
res = gds.beta.pipeline.list()
assert len(res) == 1  # Exactly one pipeline is in the catalog

res = gds.beta.pipeline.drop(pipe)
assert res["pipelineName"] == pipe.name()
```
Further, there's a call named `gds.pipeline.get` (`graphdatascience` only).
It takes a pipeline name as input and returns a training pipeline object, if a pipeline of that name exists in the user's pipeline catalog.
The idea is to have a way of creating pipeline objects for already existing pipelines, without having to create them again.
All procedures from the GDS Model catalog are supported with `graphdatascience`.
Some examples are (where `model` is a machine learning model object):
```python
res = gds.beta.model.list()
assert len(res) == 1  # Exactly one model is loaded

res = gds.beta.model.drop(model)
assert res["modelInfo"]["modelName"] == model.name()
```
Further, there's a call named `gds.model.get` (`graphdatascience` only).
It takes a model name as input and returns a model object, if a model of that name exists in the user's model catalog.
The idea is to have a way of creating model objects for already loaded models, without having to create them again.
When calling path finding or topological link prediction algorithms one has to provide specific nodes as input arguments.
When using the GDS procedure API directly to call such algorithms, Cypher `MATCH` statements are typically used to find valid representations of the input nodes of interest; see e.g. this example in the GDS docs.
To simplify this, `graphdatascience` provides a utility function, `gds.find_node_id`, for finding nodes without using Cypher.
Below is an example of how this can be done (supposing `G` is a projected `Graph` with `City` nodes having `name` properties):
```python
# gds.find_node_id takes a list of labels and a dictionary of
# property key-value pairs
source_id = gds.find_node_id(["City"], {"name": "New York"})
target_id = gds.find_node_id(["City"], {"name": "Philadelphia"})

res = gds.shortestPath.dijkstra.stream(G, sourceNode=source_id, targetNode=target_id)
assert res[0]["totalCost"] == 100
```
The nodes found by `gds.find_node_id` are those that have all labels specified and fully match all property key-value pairs given.
Note that exactly one node per method call must be matched.
For more advanced filtering we recommend users do matching via Cypher's `MATCH`.
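If the client version at hand exposes a `run_cypher` helper for arbitrary Cypher (otherwise the underlying Neo4j driver can be used directly), such a lookup could be sketched as follows; the label, property, and filter are illustrative:

```python
# Hypothetical: find all matching node ids with an arbitrary Cypher MATCH
rows = gds.run_cypher(
    "MATCH (c:City) WHERE c.name STARTS WITH $prefix RETURN id(c) AS id",
    {"prefix": "New"},
)
```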
Operations known to not yet work with `graphdatascience`:
- Numeric utility functions (will never be supported)
- Cypher on GDS (might be supported in the future)
`graphdatascience` is licensed under the Apache Software License version 2.0.
All content is copyright © Neo4j Sweden AB.
This work has been inspired by the great work done in the following libraries:
- `pygds` by stellasia
- `gds-python` by moxious