outbuf.write is too slow #17
Comments
Hi there! I'm the original author but no longer at Pluralsight. It's been a while since I looked at this code. The idea was to keep things in the streaming interface (file-like) and limit memory consumption. You can write directly to a binary file or a stream like stdout. Ultimately you likely want to output the serialized Avro to something other than a BytesIO, and just passing in the file or stream lets you go directly. I would be curious whether there is any time savings in collecting all of the data in memory and then doing a single write to the output buffer. I can't remember if I tested this or not. My hunch is that the complexity of managing your own buffer with pre-allocation, resizing, reuse, etc. wouldn't buy you much, but I could be totally wrong. Pluralsight doesn't seem to be maintaining this repo, so I made a fork under my personal GitHub if you want to fork and play with the latest version -- https://github.com/mikepk/spavro
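For anyone reading along, here is a rough sketch of what "passing in the file or stream" directly looks like, assuming spavro keeps an avro-python style API; the parse / DatumWriter / BinaryEncoder names below are assumptions rather than verified spavro identifiers, so check the actual modules before copying this.

```python
# Illustrative only: write encoded records straight to a file object instead of
# buffering them in an intermediate BytesIO. API names are assumed, not verified.
import spavro.schema
from spavro.io import DatumWriter, BinaryEncoder  # assumed names

SCHEMA = spavro.schema.parse(
    '{"type": "record", "name": "Example",'
    ' "fields": [{"name": "id", "type": "long"}]}'
)

writer = DatumWriter(SCHEMA)
with open("records.bin", "wb") as out:       # any file-like object (or sys.stdout.buffer)
    encoder = BinaryEncoder(out)             # the encoder wraps the stream directly
    for i in range(1000):
        writer.write({"id": i}, encoder)     # bytes go straight to the file
```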
Hi @mikepk ! Thank you for the response. I've done some redesign (using cdef functions for writing base-type values and a Cython array.array instead of BytesIO) and gained about 40% performance in my app. The link to the MR is above. I've done some manual regression testing and it looks good, but I'm having trouble running benchmark.py:
If you are interested and could help with the performance testing, I would be glad to open an MR against your repo. Thanks!
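For anyone curious what that kind of redesign could look like, here is a minimal Cython sketch of a growable byte buffer with cdef-level writes and a single conversion to bytes at the end. The ByteBuffer class and its method names are illustrative only; they are not the actual fast_binary.pyx internals or the code in the MR.

```cython
# buffer_sketch.pyx -- illustrative sketch, not the real fast_binary.pyx code.
from cpython.bytes cimport PyBytes_FromStringAndSize
from libc.stdlib cimport malloc, realloc, free
from libc.string cimport memcpy

cdef class ByteBuffer:
    cdef char* data
    cdef Py_ssize_t size
    cdef Py_ssize_t capacity

    def __cinit__(self, Py_ssize_t initial=65536):
        self.data = <char*>malloc(initial)
        self.size = 0
        self.capacity = initial

    def __dealloc__(self):
        free(self.data)

    cdef inline void reserve(self, Py_ssize_t extra):
        # Grow by doubling only when the pending write would not fit.
        if self.size + extra <= self.capacity:
            return
        while self.capacity < self.size + extra:
            self.capacity *= 2
        self.data = <char*>realloc(self.data, self.capacity)

    cdef inline void write_bytes(self, const char* src, Py_ssize_t n):
        # cdef-level write: no Python-level call overhead per field.
        self.reserve(n)
        memcpy(self.data + self.size, src, n)
        self.size += n

    cpdef bytes getvalue(self):
        # One conversion to a Python bytes object at the very end,
        # instead of one BytesIO.write call per schema field.
        return PyBytes_FromStringAndSize(self.data, self.size)
```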
Hi there! I'll take a look! Unfortunately I can't merge the pull request since I'm not at Pluralsight anymore and not the maintainer.
Hi!
Thanks a lot for this exciting Cython extension for Avro serialization! It makes my code roughly 2x faster.
But the next round of profiling shows a bottleneck in io.py (BytesIO.write), which is used during serialization. Perhaps I'm using it the wrong way (please correct me if so; maybe I need to use something other than BytesIO):
If this is the correct usage, maybe you could suggest how to use native Cython data structures in fast_binary.pyx (I'm still a newbie in Cython). Then I could create my own fork and try to implement it to avoid using BytesIO.
My own take on this "problem" is that the .write() method is invoked for each field of the schema. If the schema is quite complex, this leads to many .write() invocations (in my profiling report it accounts for 50% of execution time). It's probably possible to fill some internal Cython data structure (maybe a char*) and convert it to BytesIO once at the end.
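As a quick sanity check of that idea (independent of spavro's Cython encoding path), here is a small pure-Python comparison of one BytesIO.write call per field versus collecting everything in a bytearray and writing once at the end; the 100-field payload is made up purely for the timing.

```python
# Compares the buffering strategies only, not spavro itself.
import io
import timeit

FIELDS = [b"\x02" * 8] * 100  # stand-in for 100 encoded schema fields

def per_field_writes():
    buf = io.BytesIO()
    for chunk in FIELDS:
        buf.write(chunk)          # one Python-level call per field
    return buf.getvalue()

def single_write():
    scratch = bytearray()
    for chunk in FIELDS:
        scratch += chunk          # append to an in-memory buffer
    buf = io.BytesIO()
    buf.write(bytes(scratch))     # one call at the end
    return buf.getvalue()

print("per-field :", timeit.timeit(per_field_writes, number=10_000))
print("single    :", timeit.timeit(single_write, number=10_000))
```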
I would be happy to hear any answer from you!
Thank you!