[SharedCache] Improvements and fixes for MMappedFileAccessor stuff #6318

Open · wants to merge 1 commit into base: dev
Conversation

WeiN76LQh


`MMappedFileAccessor::Open` wasn't behaving as intended. It used `fileAccessorSemaphore` to limit the number of files Binary Ninja could open concurrently, but it never confirmed that it had actually acquired a reference on `fileAccessorSemaphore`, while always releasing one back when a `MMappedFileAccessor` was deallocated. This could inflate the count of `fileAccessorSemaphore`, resulting in a higher effective file limit than originally set. Additionally, using a worker to deallocate a `MMappedFileAccessor` on another thread felt like a hack and made it harder than necessary to synchronize dropping a reference and acquiring one on `fileAccessorSemaphore`.
This led to a partial rewrite of `MMappedFileAccessor::Open` to mitigate these issues. It now respects the file limit set at startup, though it can temporarily exceed it in extreme circumstances that I don't expect to occur in practice.

Additionally, some accesses to shared data structures were insufficiently locked, causing very intermittent crashes. I believe this commit cleans those up.
@bdash (Contributor) commented Jan 14, 2025
As an FYI, #6316 also changes `MMappedFileAccessor::Open` and will conflict with this. It shouldn't be too bad to resolve, as none of the behavior it changes is relevant to what you're fixing here.

@WeiN76LQh (Author)
Yes, I hadn't seen your PRs until after I pushed this one. As you say, it doesn't really conflict with any of your changes. If an updated PR is required in the future, I can do that.

@WeiN76LQh (Author) commented Jan 15, 2025
I've identified another bug in this area of the code, related to the `blockedSessionIDs` variable. If a user opens a DSC using `Open Selected Files With Options...`, changes a setting (I think any setting will do), and then clicks `Open`, the first `DSCView` is destroyed in the background and a new one is created. Destroying the first one calls `MMappedFileAccessor::CloseAll`, which adds the current session ID to the `blockedSessionIDs` list. However, the newly created `DSCView` has the same session ID, so during the initial load it constantly creates and destroys `MMappedFileAccessor`s, because this check prevents them from being added to `fileAccessorReferenceHolder`. The performance degradation is severe, causing the initial load to take minutes, during which Binary Ninja is unresponsive, so the average user will likely just kill the process after seeing nothing happen for a while.

I would fix this in an additional commit on this PR, but I'm unsure how to proceed because I don't fully understand the mechanics of everything in this context or the levers available. The existence of `blockedSessionIDs` feels like a hack for a problem I don't fully understand. If the check is still required, another solution will be needed; otherwise I guess the code can just be removed?
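The failure mode above can be reduced to a small sketch. This is not the actual Binary Ninja code; the names and types here are hypothetical stand-ins to show why a blocklist keyed on a reused session ID defeats the cache:

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <set>
#include <string>

// Illustrative reduction of the reported bug: CloseAll blocklists the
// session ID, but the replacement view reuses that same ID, so every
// subsequent cache insert is refused and the accessor must be recreated
// on each lookup.
std::set<uint64_t> blockedSessionIDs;
std::map<uint64_t, std::string> fileAccessorCache; // sessionID -> accessor

void CloseAll(uint64_t sessionID) {
	fileAccessorCache.erase(sessionID);
	blockedSessionIDs.insert(sessionID);
}

bool TryCacheAccessor(uint64_t sessionID, const std::string& accessor) {
	if (blockedSessionIDs.count(sessionID))
		return false; // refused: caller destroys and later remakes the accessor
	fileAccessorCache[sessionID] = accessor;
	return true;
}
```

Once `CloseAll` has run for a session ID, any new view that reuses it can never populate the cache again, which matches the constant create/destroy churn seen during the second load.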
