
Add no_std once primitive for stdlib deserialization #1463

Merged (23 commits, Sep 6, 2024)

Conversation

@sergerad (Contributor) commented Aug 21, 2024

Based on comment here:

One thing I've been thinking about is how to make it so that we deserialize things like miden-stdlib only once. We could have a static variable and use LazyLock in std context, but not sure what's the best way to handle it in no-std context.

  • Adds miden_core::utils::sync::racy_lock module for no_std environments.
  • Updates miden_core::utils to re-export std::sync::LazyLock or racy_lock::RacyLock as LazyLock for std and no_std, respectively.
  • Moves current sync module to rw_lock module.
  • Updates StdLibrary::default() to use the above-mentioned LazyLock.
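The deserialize-once pattern described in the bullets above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: `StdLibrary`'s contents and `load_stdlib` are placeholder stand-ins for the real deserialization of the stdlib's MAST from embedded bytes.

```rust
// Illustrative sketch of "deserialize once, clone thereafter".
// `StdLibrary(Vec<u8>)` and `load_stdlib` are hypothetical stand-ins.
use std::sync::LazyLock;

#[derive(Clone, Debug, PartialEq)]
struct StdLibrary(Vec<u8>);

fn load_stdlib() -> StdLibrary {
    // The expensive deserialization runs here, at most once per process.
    StdLibrary(vec![1, 2, 3])
}

// The static is initialized lazily on first access.
static STDLIB: LazyLock<StdLibrary> = LazyLock::new(load_stdlib);

impl Default for StdLibrary {
    fn default() -> Self {
        // Subsequent calls only pay for a clone, never re-deserialization.
        STDLIB.clone()
    }
}

fn main() {
    let a = StdLibrary::default();
    let b = StdLibrary::default();
    assert_eq!(a, b);
}
```

Under `no_std`, the same `LazyLock` name would resolve to the crate's own race-based implementation, keeping call sites identical.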

@sergerad (Contributor Author)

@bobbinth I could only find these spin-lock based solutions for a no_std lock:

https://docs.rs/lock_api/0.4.12/lock_api/
https://docs.rs/spin/latest/spin/

Given that the deserialization took ~2ms on my M1, I think whether a spin lock is appropriate here depends on the type of access expected from StdLibrary::default(). For example, if we expect a handful of calls on start up only, that would be fine. But if we expect many calls repeatedly throughout the runtime of the vm, then a spin lock might be undesirable. I imagine that the usage is more like the former, however. WDYT?

I have muddied the benchmarks somewhat for now. Just out of interest, I wanted to see how it performed through the lock + clone:

deserialize_std_lib/read_from_bytes
                        time:   [643.94 µs 648.79 µs 654.15 µs]
                        change: [-97.572% -97.527% -97.483%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 1 outliers among 100 measurements (1.00%)
  1 (1.00%) high mild

Given the performance there, we might be fine with the clone. LMK if you would rather I turned it into a shared reference. Also LMK if I should keep the benchmarks as before, without the lock.

LMK any other thoughts you have on this. 🙏

@bobbinth (Contributor)

Thank you! Not a review yet but a few answers/comments:

Given the performance there, we might be fine with the clone. LMK if you would rather I turned it into a shared reference.

I think cloning is fine, and also once #1465 is merged, we'll be using shared references for the underlying MastForest - and so, the cloning overhead will be negligible.

Given that the deserialization took ~2ms on my M1, I think whether a spin lock is appropriate here depends on the type of access expected from StdLibrary::default(). For example, if we expect a handful of calls on start up only, that would be fine. But if we expect many calls repeatedly throughout the runtime of the vm, then a spin lock might be undesirable. I imagine that the usage is more like the former, however. WDYT?

I think the expected access pattern is that we'd make relatively few calls to StdLibrary::default() - and so the overhead of the lock can be considered acceptable. Also, the time it takes to deserialize the standard library will probably grow in the future. After #1454 and #1461 the time is already closer to 5 ms and if we go with more aggressive inlining, it can go as high as 10 ms. This is not to mention that we'll probably add more code to the library in the future - and so, it may very well end up requiring 20+ ms to deserialize.

A few other thoughts about the locks:

In the std context, I'd probably prefer to use something directly from Rust's standard library (e.g., LazyLock). But what to use in the no-std context is still an open question in my mind.

  • We could use the spin-based approach as you have here.
  • We also have our own implementation of a SpinLock and I wonder if we should try to use it here somehow.
  • Or maybe we should not use spinlocks at all and instead use something more like a race-based approach. I think it will work pretty well for our use case.

Overall, if the race-based approach can be implemented relatively easily, I think this would be a better alternative to using spinlocks (caveat, my knowledge of spinlocks is very limited and is mostly based on this article). But also curious what @bitwalker and @plafer think.

@sergerad (Contributor Author)

That's great context, thanks. I hadn't noticed the existing spin lock impl!

Here is a commit swapping out the spin lock with once_cell::race::OnceBox. Seemingly the same latency as the spin lock based on the benchmarks.

@bitwalker (Contributor)

In the std context, I'd probably prefer to use something directly from Rust's standard library (e.g., LazyLock). But what to use in the no-std context is still an open question in my mind.

+1 to this, no reason not to use LazyLock, etc., when available.

We also have our own implementation of a SpinLock and I wonder if we should try to use it here somehow.

This is what we should use in no-std environments for most things, and is one of the reasons why it was added in the core crate to begin with, specifically to solve the sorts of issues for VM crates that needed synchronization - or at least the appearance of it. As mentioned previously, we are not targeting multi-threaded Wasm runtimes currently, so the "spinlock" here is basically never actually exercised with contention.

I think this would be a better alternative to using spinlocks (caveat, my knowledge of spinlocks is very limited and is mostly based on this article).

Legitimate arguments against spinlocks, IMO, only apply when you have better locking primitives available you should be using instead (the bulk of the argument in that blog post, and not applicable here, as we will use better primitives when building for std), or when you have some form of OS that you should be coordinating with instead (to avoid things like priority inversion). None of the arguments brought up in that article are relevant for our no-std builds, because on Wasm we don't have an OS, we don't have other synchronization primitives available, and we don't have hardware interrupts that can preempt us - and that's setting aside the fact that we'll never contend the lock anyway, since on Wasm we're going to be executing single-threaded for the foreseeable future. We could in theory implement the lock_api::RawMutex interface to be effectively a no-op, but it's probably better to be able to catch unintentionally introduced deadlock uses in Wasm (which will happen if we acquire the lock, and then attempt to acquire it again without releasing it, since the second acquisition will deadlock in a single-threaded environment).

Using a race-style approach for something like OnceCell is fine, because it is effectively what happens even if you are using a lock, the difference being that you can usually avoid computing the initialization of the cell by detecting that the lock was acquired, but that in and of itself is inherently racy, unless you hold the lock while that computation runs, rather than just to store the result.

Anyway, I don't have a strong opinion about which thing we use here, the RwLock primitive implemented in miden-core may not be the best choice for a OnceCell-like use case, which is probably better served by the race approach.

@plafer (Contributor) left a comment

The current approach looks fine to me for no_std environments, specifically since IIUC in the worst case, multiple threads that need the StdLibrary at the beginning will all "redo" the work of deserializing it - but all subsequent calls to StdLibrary::default() will see no synchronization overhead. I can't think of scenarios where this would be undesirable.

So if I followed the thread correctly, we could keep this implementation in no_std environments, and use std::sync::LazyLock in std environments?

miden/benches/deserialize_std_lib.rs (outdated, resolved)
stdlib/Cargo.toml (outdated, resolved)
@bobbinth (Contributor)

Depending on how complex it is, I think my preferred approach would be to do something similar for LazyLock as we did for RwLock. That is:

  1. Export LazyLock from miden-core::utils::sync.
  2. Under std this would just re-export std::sync::LazyLock.
  3. Under no-std, we'd re-export our own implementation of LazyLock which would be based on the race-based methodology. Assuming it is not too much code, I'd probably just copy the relevant parts from once_cell (while making sure we include the link to the original source code).

Then, we can use miden-core::utils::sync::LazyLock in miden-stdlib.

The biggest uncertainty here is how much effort it would take to implement our own version of LazyLock for no-std.

@plafer (Contributor) commented Aug 27, 2024

Assuming it is not too much code, I'd probably just copy the relevant parts from once_cell (while making sure we include the link to the original source code).

@bobbinth why copy code instead of importing the library? I'm against copying code primarily because if they find a bug and fix it, we don't get the fix (and don't get notified that there's a bug). I'm also not comfortable implementing a LazyLock interface (or reviewing an implementation) without extensive testing on many different platforms. These implementations can be tricky, especially since different platforms can have quirks that typically libraries such as once_cell deal with (although here we're dealing with no_std only so it's probably not as bad but I'm not sure). That's also why such libraries are so useful - everybody uses them, so they get tested on many different platforms, and receive all the bug fixes.

Note: I also didn't know about our SpinLock implementation, and I also would have pushed back against it. We have a bunch of unsafe code there, including the use of "C++ atomics" which are notoriously hard to get right on all platforms. Perhaps @bitwalker is an expert in their use, but probably no one else on the team is. My point is that by bringing in code about locks in our codebase, it's now on us to maintain and get right on all platforms that our code will run on, and I don't think we have the resources for that. I wouldn't want miden-node to start malfunctioning in 2 years, for us to ultimately find that there was a bug in our spin lock implementation in the VM.

All this to say that I actually prefer to see

#[cfg(not(feature = "std"))]
// use of `once_cell`

#[cfg(feature = "std")]
// use of `LazyLock`

than a "nicer" miden-core::utils::sync::LazyLock that hides the complexity and is more bug-prone. The only exception to this is if it's actually easy to implement the LazyLock interface in no_std using primitives from once_cell.

@bitwalker (Contributor) commented Aug 27, 2024

I'm also not comfortable implementing a LazyLock interface (or reviewing an implementation) without extensive testing on many different platforms. These implementations can be tricky, especially since different platforms can have quirks that typically libraries such as once_cell deal with (although here we're dealing with no_std only so it's probably not as bad but I'm not sure).

While I'm 100% in agreement that we should just use once_cell here, we don't need to support the gamut of possible no-std platforms in our own implementation, only WebAssembly. This significantly reduces the complexity of the implementation, and the implementation of something like LazyLock using a "first writer wins" strategy on top of atomics or a mutex is trivial to begin with. WebAssembly doesn't have any threading, and the extensions to Wasm that do support threading are not supported in the browser and will always remain an optional target feature - our need for synchronization is effectively to appease the compiler's requirement that this static be Sync; we could just as easily use UnsafeCell to implement this without any actual risks.

Note: I also didn't know about our SpinLock implementation, and I also would have pushed back against it. We have a bunch of unsafe code there, including the use of "C++ atomics" which are notoriously hard to get right on all platforms. Perhaps @bitwalker is an expert in their use, but probably no one else on the team is.

First - there is virtually no unsafe in that entire implementation, except for two trait methods, which are marked unsafe because our implementation of those methods must uphold the documented guarantees of that trait, but all of the code is using safe APIs. If you are talking about some other notion of safety, that's fine, but you should state that explicitly. Saying something is "unsafe" in a Rust context has a specific meaning, and this comes across as quite misleading IMO.

I mean, yes, I may be way more familiar with the intricacies of implementing things like this than the rest of the team, but I think you are also overstating the difficulty of implementing a primitive like that. Especially because in our specific implementation we are able to rely on simplifying assumptions about what no-std environments we support (i.e. just Wasm, where there is no OS, no threading, and no hardware interrupts).

The reason why these primitives have a reputation for being hard/tricky has more to do with how they are used than their actual implementation, and even on the implementation side, it isn't because the essential complexity is high, but because of the desire to aggressively reduce the overhead of synchronization in the general case that implementations can get very hairy, very quick. We don't need to care about any overhead, since on Wasm, atomics are lowered to regular loads and stores anyway, and as a result our implementation can remain very simple.

The "C++" memory model used by Rust atomics is confusing to most people, because if your target environment has a relaxed memory model (primarily ARM and RISC-V, as x86 and Wasm are both strict), it requires you to understand the degree to which the compiler and the hardware can modify the original order of the program, namely with regard to loads and stores, without the use of any barriers or synchronization. However, this stuff is extremely well documented, for obvious reasons, and there are great testing tools available for it to boot. Our implementation is actually run through a form of model checker that tries different valid interleavings of the various atomic operations, to ensure that our implementation does not violate the desired semantics (such as letting two callers acquire a lock, or a deadlock).

In any case, here I think there is little reason to implement our own LazyLock when once_cell is a great library for this use case and is well maintained; it will save us time to rely on it instead.

My point is that by bringing in code about locks in our codebase, it's now on us to maintain and get right on all platforms that our code will run on, and I don't think we have the resources for that.

I don't disagree with this in principle, but again, we're not talking about implementing our own primitives for all platforms we support, just the one no-std environment out of those we do support (WebAssembly), so I don't find this to be a compelling reason to avoid implementing things ourselves in general. Bringing in dependencies also imposes an obligation on us, and one that can be misleading in the effort required. Many libraries are maintained by one person, and barely that, and vetting the implementations can be difficult (if not already well vetted, like once_cell), because they can't make the same simplifying assumptions we can by only needing to support a narrower range of targets. But it is absolutely a tradeoff, and I do agree that we should favor using well-established, well-tested libraries like once_cell to help us move faster, we can always do our own implementation later if we find it necessary to do so.

To be clear, I agree in this instance that once_cell is the preferable choice, I'm just pushing back on this reasoning in general. I would rather maintain our own code, than maintain a third-party crate we didn't build, if it comes down to it - ideally we pick deps that don't require us to do that, but sometimes it's a choice between the one decent option that exists, or doing it yourself.

I wouldn't want miden-node to start malfunctioning in 2 years, for us to ultimately find that there was a bug in our spin lock implementation in the VM.

This is what testing is for, IMO; this same reasoning could be used just as easily to reject the use of a dependency (we have no defense against a maintainer merging a change that introduces a bug after we start depending on it). You have to plan defensively either way.

TL;DR: I agree with @plafer that we should just use once_cell for no-std builds, I don't think there is much benefit to debating the pros and cons of implementing this ourselves, and IMO the important thing is that implementing it ourselves will take time away from more important things, so we should just use the thing that lets us solve the problem in the most direct and convenient way possible.

@plafer (Contributor) commented Aug 27, 2024

we don't need to support the gamut of possible no-std platforms in our own implementation, only WebAssembly.

I was not aware of this, and knowing this changes my reaction by a margin. AFAIK, this is not documented anywhere. I think this should be documented loud and clear in the README that we only officially support and test on WASM for no_std.

First - there is virtually no unsafe in that entire implementation, except for two trait methods, which are marked unsafe because our implementation of those methods must uphold the documented guarantees of that trait, but all of the code is using safe APIs. If you are talking about some other notion of safety, that's fine, but you should state that explicitly.

You're right; I only glanced at it pretty quickly. I didn't mean it to be the main focus of my argument, and certainly was not implying that we ended up using unsafe code just for the fun of it.

I mean, yes, I may be way more familiar with the intricacies of implementing things like this than the rest of the team, but I think you are also overstating the difficulty of implementing a primitive like that. Especially because in our specific implementation we are able to rely on simplifying assumptions about what no-std environments we support

As mentioned in the first point, indeed if we only care about an arguably simple WASM environment, then we can even use these abstractions incorrectly and they'll end up being compiled down to correct code (since as you mentioned the WASM environment doesn't try to be fancy with the "multicore guarantees" it provides). And we get to test that code extensively, so once again, most of what I described doesn't apply anymore.

TLDR:

  1. I created chore: Add warning about no_std environment #1471 that adds a warning in the main README that the only no_std environment that we officially support is WASM
  2. I still think we should use once_cell for no_std in this particular case, but I am less concerned if we don't due to (1)

@bobbinth (Contributor)

Great discussion! A few points from me:

  1. I think exposing a LazyLock from miden-core::utils::sync is beneficial because then we can use it in multiple places (and don't have to think about how to do it every time). For example, we have a similar need in 0xPolygonMiden/miden-base#377.
  2. Now that things like LazyLock and OnceLock are available in the Rust standard library, I would expect that usage of crates like lazy_static and once_cell will taper off, and it may very well end up that these crates will be deprecated/unmaintained in a couple of years (e.g., see this discussion about lazy_static).
  3. Generally, if we need some very narrow functionality, I prefer to bring in the code rather than introduce another dependency. This is for sure a trade off, but if the code is simple enough and would require virtually no maintenance in the future, I think it is worth taking.
  4. Here, my thinking was that building no_std version of LazyLock (assuming WASM-target), would be a pretty simple task (both to implement and review). But if not, I do agree that we should go with a different approach.

Overall, for this specific issue, I think the most important point for me is point 1 - i.e., having a uniform interface for initializing static content in both std and no-std contexts (ideally relying on LazyLock from the Rust standard library in the std context). Then we can use this interface in many places and it would be pretty easy to evolve it in the future.

@sergerad (Contributor Author)

Thanks guys. Got a bit on ATM so I'll aim to have something for review next week.

@sergerad (Contributor Author) commented Sep 2, 2024

Based on my reading of the comments, I understand that @plafer and @bitwalker would prefer to import once_cell::race rather than implement it ourselves. Whereas @bobbinth wanted to explore a minimal implementation using the approach from once_cell::race but with the interface of LazyLock.

The latest changes show an implementation of LazyLock's interface using the primitives behind once_cell::race (atomic ptr and CAS).
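The "first writer wins" CAS approach described above can be sketched roughly like this. `RaceCell` and all its details are illustrative stand-ins for the PR's `RacyLock`, not the actual implementation: a null `AtomicPtr` is filled via compare-and-swap, the first writer wins, and losing threads free their candidate value.

```rust
// Illustrative sketch of a race-based once cell, in the spirit of
// `once_cell::race::OnceBox`. Names and details are hypothetical.
use std::ptr;
use std::sync::atomic::{AtomicPtr, Ordering};

struct RaceCell<T> {
    inner: AtomicPtr<T>,
}

impl<T> RaceCell<T> {
    const fn new() -> Self {
        Self { inner: AtomicPtr::new(ptr::null_mut()) }
    }

    fn get_or_init(&self, init: impl FnOnce() -> T) -> &T {
        let mut p = self.inner.load(Ordering::Acquire);
        if p.is_null() {
            // Racing threads may each run `init`; only the first CAS wins.
            let candidate = Box::into_raw(Box::new(init()));
            match self.inner.compare_exchange(
                ptr::null_mut(),
                candidate,
                Ordering::AcqRel,
                Ordering::Acquire,
            ) {
                Ok(_) => p = candidate,
                Err(existing) => {
                    // Another thread won the race; free our candidate.
                    drop(unsafe { Box::from_raw(candidate) });
                    p = existing;
                }
            }
        }
        // SAFETY: `p` is non-null and points to a value that is never
        // mutated or freed while `&self` is live.
        unsafe { &*p }
    }
}

impl<T> Drop for RaceCell<T> {
    fn drop(&mut self) {
        let p = *self.inner.get_mut();
        if !p.is_null() {
            // SAFETY: `&mut self` guarantees exclusive access, so no
            // other thread can observe or free this pointer.
            drop(unsafe { Box::from_raw(p) });
        }
    }
}

fn main() {
    let cell = RaceCell::new();
    assert_eq!(*cell.get_or_init(|| 42), 42);
    // Second call returns the stored value; `init` is not re-run.
    assert_eq!(*cell.get_or_init(|| 7), 42);
}
```

Note that losers of the race do redo the initialization work, which matches the trade-off discussed earlier in the thread: no blocking, at the cost of possibly-duplicated startup work.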

This has the same interface for std and no_std:

#[cfg(feature = "std")]
pub use std::sync::LazyLock;
#[cfg(not(feature = "std"))]
pub use racy_lock::RacyLock as LazyLock;

The alternative would look something like this (different interface for std vs. no_std):

#[cfg(feature = "std")]
pub use std::sync::LazyLock;
#[cfg(not(feature = "std"))]
pub use once_cell::race::OnceBox;

The PR has some fundamental unit tests and is still WIP until we reach consensus on importing vs. implementing.

Thanks!

@bobbinth (Contributor) left a comment

Looks great! Thank you! I left a couple of small comments inline. But would also be great to get reviews from @plafer and @bitwalker.

Also, I would probably put LazyLock into the sync module. The overall structure could look something like this:

utils
└── sync
    ├── mod.rs
    ├── racy_lock.rs
    └── rw_lock.rs

In the above, rw_lock.rs would basically be the current utils/sync.rs.

stdlib/src/lib.rs (resolved)
/// Thread-safe, non-blocking, "first one wins" flavor of `once_cell::sync::OnceCell`
/// with the same interface as `std::sync::LazyLock`.
///
/// The underlying implementation is based on `once_cell::sync::race::OnceBox` which relies on
Contributor

I would probably include a link here (I think https://github.com/matklad/once_cell/blob/c48d3c2c01de926228aea2ac1d03672b4ce160c1/src/race.rs#L294 is the right one?).

Contributor

I would also add an # Acknowledgements section to core/README.md in which we explicitly state that core/src/utils/racy_lock.rs has some code taken from once_cell.

Contributor

also nit: cargo doc suggests we add angle brackets to the hyperlink (<https://github.com/matklad/once_cell/blob/v1.19.0/src/race.rs#L294>), for which an actual link is generated in the docs

@bitwalker (Contributor) left a comment

LGTM, nice work!

@sergerad sergerad marked this pull request as ready for review September 4, 2024 04:31
@plafer (Contributor) left a comment

I might have found a data race in the Drop implementation - which is a bit concerning, since it was copied from OnceBox. Could you all please double-check this? @bitwalker @bobbinth @sergerad

core/src/utils/sync/racy_lock.rs (resolved)
Comment on lines 110 to 129
let ptr = *self.inner.get_mut();
if !ptr.is_null() {
    drop(unsafe { Box::from_raw(ptr) });
}
Contributor

I see that this was copy/pasted from OnceBox::drop(), but isn't there a possible double-free here? Basically, 2 threads could enter the !ptr.is_null() branch and both call drop on the Box.

I'm not sure what would be the safe way to drop yet, but at least if we look at how it's done with Arc, it's a lot more complex (and clearly takes thread synchronization into consideration).

@plafer (Contributor) commented Sep 4, 2024

Ah, but this lock is not Clone! So thinking of it like an Arc is wrong. IIUC, the only case where 2 threads can race is if RacyLock is used in a static variable, but in the case of a static variable, drop() is never called. Hence, drop() is only ever called when a single thread has access to the lock, and therefore synchronization is not needed.

If this is correct, can we add this explanation (cleaned up) to a Safety comment above the unsafe block?
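The reasoning above leans on a general Rust guarantee: drop() never runs for values in static variables. A small self-contained illustration of that fact (the Noisy type and DROPS counter are hypothetical names, not part of the PR):

```rust
// Demonstrates that statics are never dropped, while locals are.
use std::sync::atomic::{AtomicUsize, Ordering};

// Counts how many times `Noisy::drop` has run.
static DROPS: AtomicUsize = AtomicUsize::new(0);

struct Noisy;

impl Drop for Noisy {
    fn drop(&mut self) {
        DROPS.fetch_add(1, Ordering::Relaxed);
    }
}

// `drop()` is never invoked for this value: it lives until process exit.
static FOREVER: Noisy = Noisy;

fn main() {
    {
        let _local = Noisy; // dropped at the end of this scope
    }
    // Only the local was dropped; the static never will be.
    assert_eq!(DROPS.load(Ordering::Relaxed), 1);
}
```

So in the one scenario where multiple threads could share the same underlying pointer (a static RacyLock), drop() never runs at all, which is what makes the unsynchronized drop() sound.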

@bitwalker (Contributor) commented Sep 4, 2024

Drop requires a mutable reference, so no other references can exist, and thus there can be no race.

+100 on adding a Safety note on unsafe blocks, but in this case I don't think it is necessary, since the unsafety here is due to the requirements of Box::from_raw, not Drop. I would like to see Safety docs only when our implementation of something assumes some safety invariant, rather than when we're interacting with documented safety invariants (unless it is not clear from the context how those invariants are being upheld). I'll always err on the side of more, rather than less, documentation though, so I'll defer to @plafer here.

Contributor

This is true of the RacyLock value, but my concern was about a data race on the pointer wrapped in self.inner (i.e. what's manipulated in the unsafe block, and then explicitly dropped).

I still think it's worth adding a Safety comment, as the reason why we don't need any synchronization in drop() is nontrivial.

Contributor

Agreed!

@sergerad (Contributor Author) commented Sep 5, 2024

@plafer I have updated the relevant comments in this commit. I struggled to formulate the # Safety section because it didn't quite add up to me. Strictly speaking I don't think this definition for when a # Safety block is required applies here:

Unsafe functions should be documented with a "Safety" section that explains all invariants that the caller is responsible for upholding to use the function correctly.

I say this because AFAICT the current implementation fulfils all the necessary invariants without any responsibility on the caller. I think we are more interested in explaining why the function is safe, rather than guiding users as to how to use it safely. So I have added explanatory comments to that effect.

Regarding the addition of Clone:

The reasoning being that e.g. if we were to implement Clone on RacyLock, it would break the entire drop() implementation

This is assuming we would be implementing Clone in a manner which performs a shallow copy right? So that the drop() would cause a double free? I'm not sure it would ever make sense for us to do this. Note that core::sync::AtomicPtr is not Clone.

In any case, that is a hypothetical that is outside of the current implementation's invariants. So, again, it feels awkward to explain that under # Safety.

LMK any further changes you would make around this. I was just worried the # Safety block would add confusion rather than provide help. LMK if I am mistaken anywhere! 🙏

Contributor

Agreed that the # Safety block in the method docstring could mislead users of the method.

My only goal here is to point future reviewers of the implementation in the right direction as to why the drop() implementation will never result in a double-free. The assumption is that most reviewers (myself included) will probably not use AtomicPtr often. For example, if you follow my stream of consciousness at the beginning of this thread, I mistakenly assumed AtomicPtr worked like an Arc (i.e. new owners of the underlying pointer can be created by cloning the AtomicPtr), and so in that model, the drop() implementation could lead to a double-free. I only later realized that my model of what an AtomicPtr is was incorrect - the only scenario where we can have multiple owners of the underlying pointer is when the AtomicPtr is used in a static variable. Otherwise, e.g. when used as a stack variable, it can be moved to other threads, but never cloned, so you'll only ever have one owner (at least that's my current understanding). But static variables never have drop() called, and so in the only scenario where there could be multiple owners, drop() is never called - hence, you can assume a single owner in drop() and forego any synchronization.

To reformulate, I want to make it easier for future reviewers (myself included) to get to that reasoning, assuming they don't regularly think about how AtomicPtrs work, whether or not they are cloneable, and how that affects the correctness of the drop() implementation.

So taking everything into consideration, I would go back to my previous suggestion of adding a safety block above this line and give a more concise version of the above explanation as to why that unsafe block is used safely. For example,

// SAFETY: for any given value of `ptr`, we are guaranteed to have at most a single instance of
// `RacyLock` holding that value. Hence, synchronizing threads in `drop()` is not necessary, and we
// are guaranteed never to double-free. In short, since `RacyLock` doesn't implement `Clone`, the
// only scenario where there can be multiple instances of `RacyLock` across multiple threads
// referring to the same `ptr` value is when `RacyLock` is used in a static variable - but `drop()`
// is never called for static variables.
drop(unsafe { Box::from_raw(ptr) });

I personally prefer to be a bit too wordy rather than not, since a bit more context is likely to help the reviewer that's not very familiar with AtomicPtr and lock implementations, while the seasoned reviewer can just skip the comment.

Contributor Author

Updated, TY. I omitted the last sentence about the static variables because (purely hypothetically) even if a static variable's drop() were invoked, it would not be an issue, since there is only ever a single such variable.

Contributor

Yup, agreed, your comment is indeed more accurate.

However I noticed that you put it in the method's docstring instead of over the unsafe{} block, and in your previous comment you mentioned that you were concerned that a # Safety block in the docstring might be confusing to users (which I agree with you). Feel free to move it inside the method if you prefer that - I'll approve the PR regardless. Up to you!

Contributor Author

Fixed, thanks!

@plafer (Contributor) left a comment

LGTM! Thank you for this great work, and thank you for bearing with me 🙂

@bobbinth (Contributor) left a comment

All looks good! Thank you for the great work!

@bobbinth merged commit 0ce2fe5 into 0xPolygonMiden:next on Sep 6, 2024
9 checks passed