Heavy slow-down depending on number of keys #57
Yes, the whole dictionary is deserialized on every access; shared memory doesn't provide a way to notify readers about changes. One way to solve this that comes to mind is to store a version counter at the start of the block, read that first, and compare it with the last fetched one. If it is unchanged, the cached value can still be used and no deserialization has to be done.
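The version-counter idea above could be sketched roughly as follows. This is a hypothetical, minimal illustration (not code from shared-memory-dict): the first 8 bytes of the block hold a counter, and a reader only re-deserializes when the counter has changed since its last read.

```python
import pickle
from multiprocessing import shared_memory

class VersionedDict:
    """Sketch: cache the deserialized dict until the version counter changes."""

    def __init__(self, name, size=4096):
        try:
            self.shm = shared_memory.SharedMemory(name=name)
        except FileNotFoundError:
            self.shm = shared_memory.SharedMemory(name=name, create=True, size=size)
            self.write({})
        self._cached_version = -1
        self._cached_dict = {}

    def write(self, d):
        data = pickle.dumps(d)
        # Write the payload first, then bump the version counter.
        self.shm.buf[8:8 + len(data)] = data
        version = int.from_bytes(self.shm.buf[:8], "little") + 1
        self.shm.buf[:8] = version.to_bytes(8, "little")

    def read(self):
        version = int.from_bytes(self.shm.buf[:8], "little")
        if version != self._cached_version:
            # Only pay the unpickling cost when something actually changed;
            # pickle stops at its STOP opcode, so trailing zero bytes are ignored.
            self._cached_dict = pickle.loads(bytes(self.shm.buf[8:]))
            self._cached_version = version
        return self._cached_dict
```

A real implementation would additionally need locking around the write path so a reader cannot observe a new version with a half-written payload.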
Hmm, not sure about the serializers. I've hacked together a different approach that uses a stream of updates instead of serializing the whole dict. It only serializes the whole dict when necessary, i.e. when the stream buffer is full. Feel free to check it out: https://github.com/ronny-rentner/UltraDict It's not a real package yet, just a hack.
I have tried using numpy to convert dictionaries to numpy arrays for faster access and then storing them in shared memory. Reading a numpy array written to shared memory works fine within the same process, but when reading across processes, the reading process crashes and cannot generate a dump file. If the need arises, I will look into debugging it with gdb. Dictionary and array interchange:
The part of the array that will be written to memory:
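(The poster's actual snippets are not shown in the thread.) For reference, a minimal working pattern for backing a numpy array with a `multiprocessing.shared_memory` block looks like this; all names here are illustrative:

```python
import numpy as np
from multiprocessing import shared_memory

def demo_shared_array():
    # Create a block big enough for four int64 values.
    shm = shared_memory.SharedMemory(create=True, size=4 * 8)
    arr = np.ndarray((4,), dtype=np.int64, buffer=shm.buf)
    arr[:] = [10, 20, 30, 40]

    # A second process would attach to the same block by name:
    #   peer = shared_memory.SharedMemory(name=shm.name)
    #   view = np.ndarray((4,), dtype=np.int64, buffer=peer.buf)
    view = np.ndarray((4,), dtype=np.int64, buffer=shm.buf)
    result = view.tolist()

    # Release all numpy views before closing, otherwise close() raises BufferError.
    del arr, view
    shm.close()
    shm.unlink()
    return result
```

A common cause of cross-process crashes with this pattern is one side unlinking or closing the block while the other still holds a view on it.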
@ronny-rentner On my side, I was thinking of using locks to force each read to be refreshed when communicating through shared memory, and leaving caching to the processes themselves, e.g. via intra-process pipelines.
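The lock-based approach described above could be sketched as follows, again as a hypothetical illustration rather than the library's code: every read takes the lock and deserializes fresh, trading speed for consistency.

```python
import pickle
from multiprocessing import Lock, shared_memory

class LockedDict:
    """Sketch: always-fresh reads guarded by a lock, no caching."""

    def __init__(self, size=4096):
        # The Lock must be passed to or inherited by child processes
        # (e.g. via a Process constructor argument) to actually be shared.
        self.lock = Lock()
        self.shm = shared_memory.SharedMemory(create=True, size=size)
        self.write({})

    def write(self, d):
        with self.lock:
            data = pickle.dumps(d)
            self.shm.buf[:len(data)] = data

    def read(self):
        # Every read deserializes under the lock: always consistent,
        # but it pays the full unpickling cost on each access.
        with self.lock:
            return pickle.loads(bytes(self.shm.buf))

    def close(self):
        self.shm.close()
        self.shm.unlink()
```

This guarantees readers never see a partial write, but it does not address the slowdown this issue reports, since each access still unpickles the whole dict.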
@wnark I don't know what your specific use case is, but since I created UltraDict, I no longer need shared-memory-dict.
Tried on Debian, Python 3.9, main branch.
Is this the intended or expected behavior?
It looks to me like it is deserializing the whole dict on every single get of a value, even when nothing has ever changed.
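The reported slowdown is what one would expect if every access unpickles the full dict. A generic timing sketch (plain pickle, not the library's exact code path) shows how the per-access cost grows with the number of keys:

```python
import pickle
import timeit

# If every get deserializes the whole dict, each access costs O(total size),
# so larger dicts make every single lookup slower.
for n in (10, 1000, 100000):
    blob = pickle.dumps({i: str(i) for i in range(n)})
    t = timeit.timeit(lambda: pickle.loads(blob), number=100)
    print(f"{n:>6} keys: {t:.4f}s for 100 full deserializations")
```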