Here is a minimal sample to reproduce the memory leak. When I run the example, a new block of memory is allocated roughly every few seconds.
```cpp
#include "marl/scheduler.h"
#include "marl/waitgroup.h"

#include <chrono>
#include <thread>

using namespace std::chrono_literals;

int main() {
  marl::Scheduler scheduler(marl::Scheduler::Config::allCores());
  scheduler.bind();

  while (true) {
    marl::WaitGroup wg(1);
    marl::schedule([wg] {
      std::this_thread::sleep_for(10ms);  // some delay to emulate real world load

      marl::WaitGroup nestedWaitGroup(1);
      marl::schedule([nestedWaitGroup] {
        std::this_thread::sleep_for(10ms);  // some delay to emulate real world load
        nestedWaitGroup.done();
      });
      nestedWaitGroup.wait();

      wg.done();
    });
    wg.wait();
  }
}

/* Measurements provided with TrackedAllocator, taken a few frames apart:

   stats_ {byUsage={ size=6 }} marl::TrackedAllocator::Stats
     byUsage { size=6 } std::array<marl::TrackedAllocator::UsageStats,6>
       [0] {count=0   bytes=0     }
       [1] {count=0   bytes=0     }
       [2] {count=34  bytes=10816 }
       [3] {count=0   bytes=0     }
       [4] {count=1   bytes=232   }
       [5] {count=140 bytes=5984  }

   stats_ {byUsage={ size=6 }} marl::TrackedAllocator::Stats
     byUsage { size=6 } std::array<marl::TrackedAllocator::UsageStats,6>
       [0] {count=0   bytes=0     }
       [1] {count=0   bytes=0     }
       [2] {count=38  bytes=11072 }
       [3] {count=0   bytes=0     }
       [4] {count=1   bytes=232   }
       [5] {count=146 bytes=6192  }

   stats_ {byUsage={ size=6 }} marl::TrackedAllocator::Stats
     byUsage { size=6 } std::array<marl::TrackedAllocator::UsageStats,6>
       [0] {count=0   bytes=0     }
       [1] {count=0   bytes=0     }
       [2] {count=38  bytes=11072 }
       [3] {count=0   bytes=0     }
       [4] {count=1   bytes=232   }
       [5] {count=146 bytes=6192  }
*/
```
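For reference, here is a minimal sketch of how the TrackedAllocator numbers above can be collected. This wiring is not part of the reproduction sample itself; it assumes the pattern from marl's headers of wrapping marl::Allocator::Default in a marl::TrackedAllocator, handing it to the scheduler through Scheduler::Config (the setAllocator() hook is my assumption), and polling stats() periodically:

```cpp
// Minimal sketch (assumptions noted above): wrap the default allocator in a
// marl::TrackedAllocator so scheduler/fiber allocations are counted, then
// dump the per-usage statistics.
#include "marl/memory.h"
#include "marl/scheduler.h"

#include <cstdio>

int main() {
  // TrackedAllocator forwards to the wrapped allocator and records counts.
  marl::TrackedAllocator tracked(marl::Allocator::Default);

  auto cfg = marl::Scheduler::Config::allCores();
  cfg.setAllocator(&tracked);  // assumed hook for passing a custom allocator

  marl::Scheduler scheduler(cfg);
  scheduler.bind();

  // ... schedule the same WaitGroup workload as in the sample above ...

  // byUsage has one entry per allocation usage category (6 in total),
  // which matches the "size=6" arrays in the measurements.
  const auto stats = tracked.stats();
  for (std::size_t i = 0; i < stats.byUsage.size(); i++) {
    std::printf("[%zu] count=%zu bytes=%zu\n", i, stats.byUsage[i].count,
                stats.byUsage[i].bytes);
  }

  scheduler.unbind();
  return 0;
}
```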
Could you explain whether my use case is incorrect, or whether this is a potential memory leak in the library's code?