Investigate what we do with data in Filecoin deals at renewal time (considering deletions, etc.) #23
I split this out of #7.

The idea I had was to spin up a separate deployment of the filecoin pipeline with a different service DID. We could duct-tape our code to divert data from nft.storage into that pipeline instead. I think it could be cheap and dirty, but it would have the following benefits:
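A minimal sketch of what that diversion could look like, assuming a second pipeline deployment reachable under its own service DID. All names below (`PIPELINE_DIDS`, `Piece`, `submitToPipeline`) are illustrative stand-ins, not real w3up/w3filecoin APIs:

```ts
// Hypothetical sketch only: two pipeline deployments, keyed by service DID.
const PIPELINE_DIDS = {
  default: 'did:web:web3.storage', // existing filecoin pipeline deployment
  nft: 'did:web:nft.storage',      // assumed separate deployment for nft.storage
} as const

interface Piece {
  link: string  // piece CID
  group: string // originating service, e.g. 'nft.storage'
}

// The "duct tape": choose the pipeline by looking at where the data came from.
function pipelineFor(piece: Piece): string {
  return piece.group === 'nft.storage' ? PIPELINE_DIDS.nft : PIPELINE_DIDS.default
}

// Stand-in for however pieces are actually offered to an aggregator
// (e.g. an invocation against the chosen service DID).
async function submitToPipeline(serviceDid: string, piece: Piece): Promise<void> {
  console.log(`offering ${piece.link} to ${serviceDid}`)
}

async function submit(piece: Piece): Promise<void> {
  await submitToPipeline(pipelineFor(piece), piece)
}
```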
One other alternative (which was my original thinking on the subject, before it was decided nft.storage is not going to rebase) is to update the ucanto routes that …
@Gozala Is the goal to have nft.storage in a separate PoDSI aggregate and deal from all other w3up data? If so, wdyt of treating this as an add-on feature that customers can pay for if they want it: they get their own PoDSI aggregate and probably more control over Filecoin dealmaking for those aggregates. Spaces/accounts that don't need it don't get it unless they pay for it; if you don't pay, your data is intermixed in the 'default aggregate' for as long as we can offer persistence to Filecoin without an add-on fee.
Please see https://hackmd.io/0K9xlcQRTp6pLps1bffxHQ for my original proposal. It already includes a big chunk of what has been discussed here, except that we use w3up instead of a lane directly into w3filecoin.

In short, we would need to build storacha/w3filecoin-infra#49 for the aggregator. Setting up new infra could be easier on its own, but it adds extra overhead everywhere else. A simpler way to implement grouping-aware piece buffering would be to set up a new, separate buffering queue for nft things (hardcoded, obviously; not a generic solution that spins up new infra per group at this point), and write to a different queue at the start depending on the group. Only the buffering part needs to be different; the rest can go down the same lane.
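To make the separate-buffering-queue idea concrete, here is a rough sketch assuming the pipeline entry point can see each piece's group. The `Queue` interface and the queue wiring are hypothetical placeholders for whatever the buffering step actually uses (SQS queues in w3filecoin-infra), not its real API:

```ts
// Hypothetical sketch: one hardcoded extra buffering queue for nft pieces,
// with fan-out at the start of the pipeline.
interface Queue<T> {
  send(message: T): Promise<void>
}

interface PieceMessage {
  piece: string // piece CID
  group: string // e.g. 'nft.storage' or 'web3.storage'
}

// Hardcoded on purpose: not a generic mechanism that provisions infra per group.
function makeBufferRouter(
  defaultBuffer: Queue<PieceMessage>,
  nftBuffer: Queue<PieceMessage>
) {
  return {
    // Only this buffering step differs per group; aggregation and dealmaking
    // downstream stay on the same lane for both queues.
    async enqueue(message: PieceMessage): Promise<void> {
      const queue = message.group === 'nft.storage' ? nftBuffer : defaultBuffer
      await queue.send(message)
    },
  }
}
```

The appeal of this shape is that the fan-out decision lives in exactly one place at the front of the pipeline, so nothing downstream has to know groups exist.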