Edits on caching docs #34

Merged: 1 commit merged into main from cache-edits on Sep 4, 2024

Conversation

willcosgrove (Collaborator)

I know it doesn't look like much, but unseen in this PR, I implemented an LRU cache to prove that it was in fact slower.


If you think about it, with an LRU cache you need to keep track of the order in which keys were accessed. That means every read becomes an expensive write. And because those writes can happen from multiple threads at once, you would also need to use a `Mutex` to make sure only one thread is accessing the cache at a time.
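
To make that concrete, here is a minimal, hypothetical Ruby sketch (not the implementation from this PR) of an LRU cache built on Ruby's insertion-ordered Hash. Note that `read` has to delete and re-insert the key just to mark it as most recently used, and every operation has to take the lock:

```ruby
# Hypothetical sketch only: not the cache implemented in this PR.
class NaiveLRUCache
  def initialize(max_size)
    @max_size = max_size
    @store = {}          # Ruby Hashes preserve insertion order
    @lock = Mutex.new    # one lock around every operation
  end

  def read(key)
    @lock.synchronize do
      return nil unless @store.key?(key)

      value = @store.delete(key) # the read mutates the hash...
      @store[key] = value        # ...to record it as most recently used
      value
    end
  end

  def write(key, value)
    @lock.synchronize do
      @store.delete(key)
      @store[key] = value
      @store.shift if @store.size > @max_size # evict the least recently used entry
      value
    end
  end
end
```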

It is possible to minimise the overhead of an LRU cache using a B-tree, but this is quite a complex data structure and it’s not built into Ruby. Additionally, to avoid readers blocking writers, you would probably need some kind of write-ahead log.
Collaborator Author

I removed this line because a) it seems like it may be too much detail, and b) Claude actually said that a hash table with a doubly linked list would be faster than using a B-tree.
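
For reference, here is a hypothetical sketch of that hash-table plus doubly-linked-list layout (again, not the code from this PR): the hash gives O(1) lookup and the list gives O(1) move-to-front, so there is no tree to rebalance, although every read is still a write and still needs the lock.

```ruby
# Hypothetical sketch only: hash for O(1) lookup, doubly linked list for O(1) reordering.
class LinkedLRUCache
  Node = Struct.new(:key, :value, :prev_node, :next_node)

  def initialize(max_size)
    @max_size = max_size
    @index = {}        # key => Node
    @head = nil        # most recently used
    @tail = nil        # least recently used
    @lock = Mutex.new
  end

  def read(key)
    @lock.synchronize do
      node = @index[key]
      return nil unless node

      move_to_front(node) # constant-time pointer updates, but still a mutation
      node.value
    end
  end

  def write(key, value)
    @lock.synchronize do
      if (node = @index[key])
        node.value = value
        move_to_front(node)
      else
        node = Node.new(key, value, nil, @head)
        @head.prev_node = node if @head
        @head = node
        @tail ||= node
        @index[key] = node
        evict_least_recently_used if @index.size > @max_size
      end
      value
    end
  end

  private

  def move_to_front(node)
    return if node.equal?(@head)

    # Unlink the node from its current position.
    node.prev_node.next_node = node.next_node
    node.next_node.prev_node = node.prev_node if node.next_node
    @tail = node.prev_node if node.equal?(@tail)

    # Relink it at the head of the list.
    node.prev_node = nil
    node.next_node = @head
    @head.prev_node = node
    @head = node
  end

  def evict_least_recently_used
    node = @tail
    @tail = node.prev_node
    @tail.next_node = nil if @tail
    @head = nil unless @tail
    @index.delete(node.key)
  end
end
```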

@joeldrapper merged commit 43bcb74 into main on Sep 4, 2024
1 check failed
@willcosgrove deleted the cache-edits branch on September 4, 2024 at 14:25