As mentioned in today's Cadence Implementation Sync, I think we can resolve this by creating an optional keys-only register (a map key index) for atree maps when the required thresholds/conditions are satisfied.
One approach is for Atree to provide callback functions that determine whether an index should be created or deleted. If the caller (e.g. Cadence) provides different callback functions, those are used instead of the defaults.
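One possible shape for those callbacks, sketched in Go. All names here (`KeyIndexPolicy`, `DefaultKeyIndexPolicy`, the thresholds) are hypothetical illustrations, not Atree's actual API:

```go
package main

import "fmt"

// KeyIndexPolicy holds hypothetical policy callbacks. Atree would invoke
// these to decide when to create or drop the keys-only index; a caller
// like Cadence could supply its own instead of the defaults.
type KeyIndexPolicy struct {
	// ShouldCreateIndex reports whether a keys-only index should be
	// created for a map with the given element count.
	ShouldCreateIndex func(elementCount uint64) bool
	// ShouldDeleteIndex reports whether an existing index should be dropped.
	ShouldDeleteIndex func(elementCount uint64) bool
}

// DefaultKeyIndexPolicy creates the index only for large maps and drops it
// once the map shrinks below a lower threshold (hysteresis avoids churn
// when the element count hovers around a single threshold).
func DefaultKeyIndexPolicy(createAt, deleteAt uint64) KeyIndexPolicy {
	return KeyIndexPolicy{
		ShouldCreateIndex: func(n uint64) bool { return n >= createAt },
		ShouldDeleteIndex: func(n uint64) bool { return n < deleteAt },
	}
}

func main() {
	p := DefaultKeyIndexPolicy(10_000, 5_000)
	fmt.Println(p.ShouldCreateIndex(20_000)) // true
	fmt.Println(p.ShouldDeleteIndex(20_000)) // false
	fmt.Println(p.ShouldDeleteIndex(1_000))  // true
}
```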
We can:
- Create the index only for large atree maps.
- Store the index in its own register(s), using an atree array or map under the hood for scalability.
- Use the index for map key iteration when it exists.
- Deploy this feature using HCU (no spork).
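The iteration rule in the list above can be sketched with toy stand-in types (the real Atree `OrderedMap`, slab handling, and error paths differ):

```go
package main

import "fmt"

// entry is a toy stand-in for a map element whose key and value are
// stored together, as in Atree's OrderedMap.
type entry struct {
	key   string
	value string
}

// OrderedMap is a minimal mock: elements plus an optional keys-only index.
type OrderedMap struct {
	elements []entry  // key and value co-located (iterating loads values too)
	keyIndex []string // optional keys-only register; nil when not created
}

// IterateKeys uses the keys-only index when it exists, so only key data
// is loaded; otherwise it falls back to scanning the full key/value elements.
func (m *OrderedMap) IterateKeys(fn func(key string) error) error {
	if m.keyIndex != nil {
		for _, k := range m.keyIndex {
			if err := fn(k); err != nil {
				return err
			}
		}
		return nil
	}
	for _, e := range m.elements {
		if err := fn(e.key); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	m := &OrderedMap{
		elements: []entry{{"a", "1"}, {"b", "2"}},
		keyIndex: []string{"a", "b"},
	}
	_ = m.IterateKeys(func(k string) error {
		fmt.Println(k)
		return nil
	})
}
```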
Maintaining the index adds overhead in both storage size and execution speed. We need to try to reduce this overhead.
TODO:
- @j1010001 will find out whether this is needed in Q4 2024 or later
vishalchangrani changed the title from "Create optional map key index to optimize map key iteration" to "Create optional map key index to optimize map key iteration 🆕" on Dec 9, 2024
Atree `OrderedMap` stores each element's key and value together to reduce the number of touched slabs for write operations. However, this can be inefficient for map key iteration, since both keys and values are loaded when only keys are needed.
For very large maps, interaction limits can be reached before iteration completes. See issues:
- `sliceKeys` method on dictionary types (cadence#3544)
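As a rough back-of-the-envelope illustration of why a keys-only index reduces touched slabs during key iteration (the slab capacities below are hypothetical, not Atree's real slab sizes):

```go
package main

import "fmt"

func main() {
	const entries = 1_000_000
	const pairsPerSlab = 8  // hypothetical: co-located key/value pairs per data slab
	const keysPerSlab = 64  // hypothetical: keys per keys-only index slab

	// Ceiling division: number of slabs touched to visit every element.
	fullScanSlabs := (entries + pairsPerSlab - 1) / pairsPerSlab
	indexScanSlabs := (entries + keysPerSlab - 1) / keysPerSlab

	fmt.Println(fullScanSlabs)  // 125000
	fmt.Println(indexScanSlabs) // 15625
}
```

Under these assumed capacities, iterating keys through the index touches roughly an order of magnitude fewer slabs than scanning the co-located key/value elements.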