Hi,
We're using the https://github.com/frolvanya/near-lake-framework-py repo to download and index NEAR data we need.
We've found the download to be quite slow: it takes about 80-90 seconds to download 100 blocks (without any processing at all, just downloading).
Additionally, the download gets completely stuck from time to time, and nothing gets downloaded for a whole minute.
Because of these two issues, our indexer never seems to catch up, and the gap between our last indexed block and the chain tip keeps growing.
What can we do to increase the speed?
Maybe we can use S3 transfer acceleration?
Maybe make use of compression?
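
To make the question concrete, here is a rough sketch (not our actual indexer) of fetching a range of blocks directly with boto3 and a thread pool, which is the kind of parallel download we're wondering whether the framework could do internally. The bucket name, region, and key layout follow the public NEAR Lake S3 documentation; the start height and pool size are arbitrary values for illustration:

```python
# Illustrative only: time how long raw parallel GETs take for 100 blocks.
# Bucket/region/key layout are taken from the public NEAR Lake docs;
# START_BLOCK and max_workers are arbitrary values for this sketch.
import time
from concurrent.futures import ThreadPoolExecutor

import boto3
from botocore.exceptions import ClientError

BUCKET = "near-lake-data-mainnet"   # public NEAR Lake bucket (requester pays)
START_BLOCK = 100_000_000           # arbitrary example height
NUM_BLOCKS = 100

s3 = boto3.client("s3", region_name="eu-central-1")

def fetch_block(height: int):
    # Keys are zero-padded block heights, e.g. 000100000000/block.json
    key = f"{height:012d}/block.json"
    try:
        obj = s3.get_object(Bucket=BUCKET, Key=key, RequestPayer="requester")
        return obj["Body"].read()
    except ClientError:
        # Some heights are skipped on-chain, so the key may not exist.
        return None

start = time.monotonic()
with ThreadPoolExecutor(max_workers=20) as pool:
    heights = range(START_BLOCK, START_BLOCK + NUM_BLOCKS)
    blocks = [b for b in pool.map(fetch_block, heights) if b is not None]
print(f"{len(blocks)} blocks in {time.monotonic() - start:.1f}s")
```

If raw parallel GETs like this turn out to be fast while the framework is slow, the bottleneck would seem to be sequential fetching rather than S3 throughput itself, and transfer acceleration or compression probably wouldn't help much.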
Thanks