The overlays branch loads the full metadata into memory, which means high memory overhead and long startup times for large (100K+ node) structures. A more scalable approach would be to store the expanded metadata in chunks of roughly 1000 nodes and fetch them only when needed (see the sketch at the end of this note).
The current full metadata (`linked.js`) entry for a sample node in the overlays branch has roughly this shape (the `in`/`out` contents below are illustrative placeholders):
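```js
{d:"bbc.co.uk", x:260.00223, y:-341.51917, r:5.0,
 in:[ /* ids of nodes linking to this one */ ],
 out:[ /* ids of nodes this one links to */ ]}
```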
where both `in` and `out` can be quite large. The average entry for a node in a 200K-node sample graph was about 500 bytes.
The minimum needed to support clicking and searching is

```js
{d:"bbc.co.uk", x:260.00223, y:-341.51917, r:5.0}
```
which is closer to 50 bytes. Roughly speaking, switching to on-demand loading of the extra metadata used for overlay generation would cut per-node memory by about 10× (500 bytes → 50 bytes), and so allow roughly a 10× increase in maximum scale, measured in node count.
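As a rough illustration of the chunked approach, here is a minimal sketch in JavaScript. The chunk file naming (`linked-0.json`, `linked-1.json`, …), the JSON payload, and the function names are assumptions for illustration, not code from the branch:

```js
// Minimal sketch of on-demand metadata chunks. File naming, JSON
// layout, and function names are assumptions, not the branch's code.
const CHUNK_SIZE = 1000;
const chunkCache = new Map(); // chunkId -> Promise resolving to an array of full entries

function loadChunk(chunkId) {
  if (!chunkCache.has(chunkId)) {
    // Fetch each chunk at most once; concurrent callers share the promise.
    chunkCache.set(
      chunkId,
      fetch(`linked-${chunkId}.json`).then((res) => res.json())
    );
  }
  return chunkCache.get(chunkId);
}

// Resolve the full entry (including in/out) for one node only when an
// overlay actually needs it, e.g. when the node is clicked.
async function fullMetadata(nodeIndex) {
  const chunk = await loadChunk(Math.floor(nodeIndex / CHUNK_SIZE));
  return chunk[nodeIndex % CHUNK_SIZE];
}
```

With this split, only the ~50-byte minimal entries would be loaded at startup, and a chunk's full entries would be fetched the first time a node in that chunk is used for overlay generation.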