feat: de-duplicate events at the database level #88
Merged
Conversation
…-macros

# Conflicts:
#	lib/workload/stateful/filemanager/Cargo.lock
#	lib/workload/stateful/filemanager/filemanager/src/events/aws/mod.rs
mmalenic commented on Jan 23, 2024

Comment on lines +1 to +31
-- Bulk insert of s3 objects.
insert into s3_object (
    s3_object_id,
    object_id,
    bucket,
    key,
    -- We default the created date to a value even if this is a deleted event,
    -- as we are expecting this to get updated.
    created_date,
    deleted_date,
    last_modified_date,
    e_tag,
    storage_class,
    version_id,
    deleted_sequencer
)
values (
    unnest($1::uuid[]),
    unnest($2::uuid[]),
    unnest($3::text[]),
    unnest($4::text[]),
    unnest($5::timestamptz[]),
    unnest($6::timestamptz[]),
    unnest($7::timestamptz[]),
    unnest($8::text[]),
    unnest($9::storage_class[]),
    unnest($10::text[]),
    unnest($11::text[])
) on conflict on constraint deleted_sequencer_unique do update
set number_duplicate_events = s3_object.number_duplicate_events + 1
returning object_id, number_duplicate_events;
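For readers unfamiliar with the pattern: each `$n::type[]` parameter is bound to a Postgres array, and multi-argument `unnest` zips those arrays into aligned rows, one per object, so the whole batch is inserted in a single statement. A minimal illustration (not part of this diff):

-- Illustrative only: multi-argument unnest turns parallel arrays into
-- aligned rows, which is what drives the bulk insert above.
select * from unnest(
    array['bucket-1', 'bucket-2']::text[],
    array['key-1', 'key-2']::text[]
) as t(bucket, key);

--   bucket  |  key
-- ----------+-------
--  bucket-1 | key-1
--  bucket-2 | key-2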
This isn't being used yet; however, I think it will be for #73.
mmalenic added the filemanager (an issue relating to the filemanager), bug (Something isn't working) and feature (New feature) labels, and removed the bug label, on Jan 23, 2024
brainstorm approved these changes on Jan 24, 2024
Closes #72
This was more complex than I expected, especially because there are a lot of caveats in Postgres related to upserts and concurrency errors. I think the `on conflict` solution works well to avoid these.

Ignore the commits related to macros. I was experimenting with writing Postgres functions in Rust using plrust, and with converting regular functions in code into plrust functions using attribute macros. However, I don't think this is useful for this PR.
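To make the caveat concrete, here is a sketch (not code from this PR, columns trimmed for brevity) of the race a manual upsert invites, assuming `deleted_sequencer_unique` covers the duplicate fields listed under Changes below:

-- Race-prone manual upsert: another transaction can insert the same event
-- between the "not exists" check and the insert, causing a unique violation.
insert into s3_object (bucket, key, version_id, deleted_sequencer)
select $1, $2, $3, $4
where not exists (
    select 1 from s3_object
    where bucket = $1 and key = $2 and version_id = $3 and deleted_sequencer = $4
);

-- The atomic alternative: the conflict check and the duplicate-counter
-- update happen inside a single statement, so concurrent writers cannot
-- race between a check and a write.
insert into s3_object (bucket, key, version_id, deleted_sequencer)
values ($1, $2, $3, $4)
on conflict on constraint deleted_sequencer_unique
do update set number_duplicate_events = s3_object.number_duplicate_events + 1;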
Changes

- Duplicate events are detected at the database level using `insert ... on conflict`.
- `on conflict` avoids any concurrency issues related to manually upserting objects.
- `bucket` and `key` are pushed from `object` to `s3_object`. This helps with de-duplication, and also makes more sense if considering an object which is the same but living in two different places.
- `checksum` and `size` are in `object`, but this could be changed if two objects that are the same have different checksums, for example, different kinds of compression. If this is the case, I think it would make sense to have another table one level up which links physical objects with the same checksums/sizes to `logical` or `semantic` objects (sketched after this list).
- Events with the same `key`, `bucket`, `version_id` and `sequencer` values are considered duplicates (see the sketch below).
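End to end, the de-duplication behaves as sketched below. The constraint definition isn't part of this diff, so its exact shape is an assumption inferred from the duplicate criteria above, and the inserted values are illustrative.

-- Assumed shape of the constraint (not shown in this diff), inferred from
-- the duplicate criteria above.
alter table s3_object
    add constraint deleted_sequencer_unique
    unique (bucket, key, version_id, deleted_sequencer);

-- Run the same delete event through twice: the first statement inserts the
-- row; the second hits the constraint and bumps the counter, returning
-- number_duplicate_events = 1.
insert into s3_object (s3_object_id, object_id, bucket, key, version_id, deleted_sequencer)
values (gen_random_uuid(), gen_random_uuid(), 'bucket-1', 'key-1', 'v-1', 'seq-1')
on conflict on constraint deleted_sequencer_unique
do update set number_duplicate_events = s3_object.number_duplicate_events + 1
returning object_id, number_duplicate_events;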
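The "one level up" table floated in the `checksum`/`size` item might look something like this; it is purely hypothetical and not part of this PR:

-- Hypothetical: groups physical objects that carry the same content under
-- different checksums/sizes (e.g. different compression) into one logical
-- object. Not part of this PR.
create table logical_object (
    logical_object_id uuid not null primary key
);

alter table object
    add column logical_object_id uuid references logical_object (logical_object_id);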