provider: duplicated CIDs sent to provide queue #901
Providing is happening 3X
In the case of folders they are provided 3x too, as is the wrapping folder that is discarded when not wrapping. And instead of a buffered provide, the last provide happens when closing the MFS tree.
Ok, so we have multiple issues with the way things work right now:

This means we enqueue to announce:

When adding, in particular, the root CID and the temp-MFS root CID are submitted several times to the DAG service. If the DAG has folders, they are also submitted several times due to `cacheSync` operations on MFS (for example, printing added directories while adding requires reading nodes, which in turn syncs the cache to disk). The issue is not so bad when adding big files (internal blocks are written only once). All of this happens regardless of whether our providing strategy is "all", "roots", or (soon) "mfs", so we will be providing all added blocks even when not intended.

Potential approach

Seeing that most of the issue when adding comes from the temp-MFS, I believe …
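One direction this suggests is deduplicating before CIDs ever reach the queue, for example with a recently-enqueued set on the enqueue side, so repeated submissions of the same temp-MFS root collapse into one entry. The sketch below is a hypothetical illustration (not boxo's implementation) using string CIDs in place of `cid.Cid` values:

```go
package main

import "fmt"

// dedupQueue wraps a provide queue with a seen-set so the same CID
// (e.g. the temp-MFS root submitted repeatedly by cache syncs) is
// enqueued only once. All names here are illustrative, not boxo API.
type dedupQueue struct {
	seen  map[string]struct{}
	queue []string
}

func newDedupQueue() *dedupQueue {
	return &dedupQueue{seen: make(map[string]struct{})}
}

// Enqueue adds a CID to the queue unless it was already enqueued.
func (q *dedupQueue) Enqueue(c string) {
	if _, ok := q.seen[c]; ok {
		return // duplicate submission: skip it at the source
	}
	q.seen[c] = struct{}{}
	q.queue = append(q.queue, c)
}

func main() {
	q := newDedupQueue()
	// The root is submitted three times, the directory once.
	for _, c := range []string{"bafyRoot", "bafyRoot", "bafyDir", "bafyRoot"} {
		q.Enqueue(c)
	}
	fmt.Println(q.queue) // → [bafyRoot bafyDir]
}
```

A real version would need eviction (the set otherwise grows unboundedly) and would have to survive queue persistence, which is part of why dedup-on-read is the simpler default.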
The provide queue is getting many duplicate CIDs. It turns out that normal `ipfs add` and `mfs` are both adding blocks to the `blockservice`, causing duplicated CIDs to be provided. This may be tolerable since the CIDs are deduplicated after being read from the queue, but it does mean the queue has to hold double the CIDs that are actually provided.
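The read-side deduplication mentioned above can be pictured as a first-seen-wins filter over whatever was drained from the queue. A minimal sketch with string CIDs standing in for `cid.Cid` (the helper name is hypothetical):

```go
package main

import "fmt"

// dedupe returns the input with duplicate CID strings removed,
// preserving first-seen order. This mirrors deduplicating after
// reading from the provide queue: duplicates enqueued by both the
// add path and the MFS path collapse to a single provide.
func dedupe(cids []string) []string {
	seen := make(map[string]struct{}, len(cids))
	var out []string
	for _, c := range cids {
		if _, ok := seen[c]; ok {
			continue // same CID enqueued by the other call path
		}
		seen[c] = struct{}{}
		out = append(out, c)
	}
	return out
}

func main() {
	// The root CID arrives from both call paths but is provided once.
	fmt.Println(dedupe([]string{"bafyRoot", "bafyChild", "bafyRoot"}))
	// → [bafyRoot bafyChild]
}
```

Note this only saves provide work, not queue space: the duplicates still occupy the queue until they are read, which is the cost the issue describes.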
In cases where things are added in a way that follows both call paths, it may be worth passing a parameter that suppresses providing in one of the paths.
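Such a suppression parameter could look like the following sketch, where one call path opts out of enqueueing while the block write itself still happens. Every name here (`Provider`, `addOpts`, `putBlock`, `suppressProvide`) is a hypothetical illustration, not the actual boxo API:

```go
package main

import "fmt"

// Provider is a minimal stand-in for the component that enqueues
// CIDs to be announced.
type Provider struct{ queued []string }

func (p *Provider) Provide(c string) { p.queued = append(p.queued, c) }

// addOpts carries a hypothetical flag one call path (e.g. the MFS
// flush) could set so only the other path enqueues the CID.
type addOpts struct{ suppressProvide bool }

// putBlock sketches the shared blockservice write: it stores the
// block and, unless suppressed, enqueues its CID for providing.
func putBlock(p *Provider, c string, o addOpts) {
	// ... store the block in the blockstore ...
	if !o.suppressProvide {
		p.Provide(c)
	}
}

func main() {
	p := &Provider{}
	putBlock(p, "bafyRoot", addOpts{})                      // ipfs add path: provides
	putBlock(p, "bafyRoot", addOpts{suppressProvide: true}) // MFS path: suppressed
	fmt.Println(p.queued) // → [bafyRoot]
}
```

The trade-off is that the suppressing path must be certain the other path always runs, otherwise a CID could end up never provided at all.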
Call stack from normal ipfs-add path:
Call stack from mfs path: