Is there any built-in or otherwise recommended way to track the amount of memory allocated by libyang, ideally without LD_PRELOAD or similar?
For example, libxml provides an API to change the default allocators, which lets the user implement this however they want. That said, even a basic, semi-accurate method would be helpful.
The use case here is wrapping libyang in Ruby, a garbage-collected scripting language. Ruby's garbage collector does not know how much memory is used by wrapped objects. Nokogiri, a common library for XML, wraps libxml using the method above.
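For reference, the libxml hook is `xmlMemSetup()`, which swaps in user-supplied malloc/realloc/free/strdup functions. A rough sketch of how a wrapper could use it to count live bytes; the size-header trick is just one hypothetical accounting scheme, not anything libxml provides:

```c
/* Sketch only: count live bytes by prefixing each block with its size so
 * frees can be accounted for. A real version would keep the header
 * max_align_t-aligned. */
#include <stdlib.h>
#include <string.h>
#include <libxml/xmlmemory.h>

static size_t tracked_bytes; /* total bytes currently allocated via libxml */

static void *tracking_malloc(size_t size)
{
    size_t *p = malloc(sizeof(size_t) + size);
    if (!p)
        return NULL;
    *p = size;
    tracked_bytes += size;
    return p + 1;
}

static void tracking_free(void *mem)
{
    if (!mem)
        return;
    size_t *p = (size_t *)mem - 1;
    tracked_bytes -= *p;
    free(p);
}

static void *tracking_realloc(void *mem, size_t size)
{
    if (!mem)
        return tracking_malloc(size);
    size_t *p = (size_t *)mem - 1;
    size_t old = *p;
    size_t *np = realloc(p, sizeof(size_t) + size);
    if (!np)
        return NULL;
    *np = size;
    tracked_bytes -= old;
    tracked_bytes += size;
    return np + 1;
}

static char *tracking_strdup(const char *str)
{
    size_t len = strlen(str) + 1;
    char *copy = tracking_malloc(len);
    if (copy)
        memcpy(copy, str, len);
    return copy;
}

int setup_tracking(void)
{
    /* must run before libxml makes its first allocation */
    return xmlMemSetup(tracking_free, tracking_malloc,
                       tracking_realloc, tracking_strdup);
}
```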
Thanks.
There is no allocated-memory tracking in libyang; it has not been needed so far. As for custom allocators, I would be worried about possible performance penalties even if none are set. So if you have another, simpler idea, please share it; I am not against adding something like this.
I do not understand why one might need memory accounting or custom allocators for GC. Anyway, if you are looking for a library which implements reachability tracking on top of libyang, have a look at libyang-cpp (which is also used for these Python bindings).
If you decide to implement any safe wrapper on top of libyang, I suggest keeping in mind that these are basically very mutable tree data structures with plenty of internal cross-references, so you have to keep track of which wrapper "uses" what part of the original data tree. It gets rather involved very quickly when modifying the tree (which includes operations like validation, because these in fact do change the data tree).
Thank you all for the thoughts, will look through those implementations. Right now we use the priv data field of the node and reference counting to track nodes, but you're right it can be quite complex.
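For what it's worth, a minimal sketch of that priv-plus-refcount approach, assuming a wrapper struct hung off `lyd_node->priv` (the user-data pointer libyang leaves alone); the `node_wrap` type and helper names are made up for illustration:

```c
#include <stdlib.h>
#include <libyang/libyang.h>

struct node_wrap {
    struct lyd_node *node;  /* wrapped libyang data node */
    unsigned int refcount;  /* number of wrapper objects referencing it */
};

/* fetch or create the wrapper record for a node and take a reference */
static struct node_wrap *wrap_get(struct lyd_node *node)
{
    struct node_wrap *w = node->priv;
    if (!w) {
        w = calloc(1, sizeof *w);
        if (!w)
            return NULL;
        w->node = node;
        node->priv = w;
    }
    w->refcount++;
    return w;
}

/* drop a reference; the tree itself is freed elsewhere, e.g. when the
 * wrapper holding the tree root is collected */
static void wrap_put(struct node_wrap *w)
{
    if (--w->refcount == 0) {
        w->node->priv = NULL;
        free(w);
    }
}
```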
I do not understand why one might need memory accounting or custom allocators for GC.
Perhaps this is more specific to Ruby. If interested, there is more detail here linked in this discussion. The TL;DR is that the garbage collector does either lazy or full collection. One trigger for this is the number of objects allocated; the other is the amount of memory allocated. If we load a tree that is, say, ~1MB, Ruby may account for it as only 20 bytes. We can easily stack many of these up, and they are not cleaned up frequently. The result looks like a memory leak to the system, while Ruby happily chugs along. This either reaches some equilibrium within available memory, or starts causing OOM (either within Ruby or the system).
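To make that concrete: assuming there were some way to estimate a tree's size (`estimate_tree_size()` below is a hypothetical placeholder, since libyang does not expose this today), a Ruby extension could report it to the GC roughly like this, via the `dsize` hook and `rb_gc_adjust_memory_usage()`:

```c
#include <ruby.h>
#include <libyang/libyang.h>

/* hypothetical helper; this is exactly the accounting being asked about */
extern size_t estimate_tree_size(const struct lyd_node *tree);

static void tree_dfree(void *ptr)
{
    struct lyd_node *tree = ptr;
    /* balancing adjustment when the wrapper is collected */
    rb_gc_adjust_memory_usage(-(ssize_t)estimate_tree_size(tree));
    lyd_free_all(tree);
}

static size_t tree_dsize(const void *ptr)
{
    /* lets ObjectSpace.memsize_of (and GC heuristics) see the real cost */
    return estimate_tree_size(ptr);
}

static const rb_data_type_t tree_type = {
    .wrap_struct_name = "libyang_data_tree",
    .function = { .dmark = NULL, .dfree = tree_dfree, .dsize = tree_dsize },
    .flags = RUBY_TYPED_FREE_IMMEDIATELY,
};

static VALUE wrap_tree(struct lyd_node *tree)
{
    /* tell the GC this Ruby object pins a chunk of off-heap memory */
    rb_gc_adjust_memory_usage((ssize_t)estimate_tree_size(tree));
    return TypedData_Wrap_Struct(rb_cObject, &tree_type, tree);
}
```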