[js-api] Fix up missing preconditions on allocations #1793
Conversation
Thanks; some small comments below
@@ -819,14 +822,14 @@ Each {{Table}} object has a \[[Table]] internal slot, which is a [=table address=]
1. [=Throw=] a {{TypeError}} exception.
1. Let |initial| be |descriptor|["initial"].
1. If |descriptor|["maximum"] [=map/exists=], let |maximum| be |descriptor|["maximum"]; otherwise, let |maximum| be empty.
1. If |maximum| is not empty and |maximum| < |initial|, throw a {{RangeError}} exception.
1. Let |type| be the [=table type=] { **min** |initial|, **max** |maximum| } |elementType|.
1. If |type| is not [=valid tabletype|valid=], throw a {{RangeError}} exception.
Since IDL already enforces most of this, I would prefer keeping the existing check, and adding an "Assert: type is valid" line after it
The IDL cannot enforce semantic well-formedness of Wasm types, which is what's needed here.
I don't understand your comment
Not every syntactically expressible table/memory (or other) type is actually legal. Hence, the Wasm semantics has to validate types itself, pretty much the same way it validates instructions. Where the JS API allows expressing types in JS, it must hence validate them according to Wasm rules before using them.
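For illustration, the core spec's limits-validation rule that this hinges on could be sketched as follows (hypothetical helper name; the spec itself expresses this as inference rules, not code): limits {min, max?} are valid within a range k iff min ≤ k and, if max is present, max ≤ k and min ≤ max.

```javascript
// Sketch of the core spec's "limits valid within range k" rule.
// `limitsValid` is a hypothetical name for illustration only.
function limitsValid(limits, k) {
  if (limits.min > k) return false;
  if (limits.max !== undefined) {
    if (limits.max > k) return false;     // upper bound exceeds the range
    if (limits.min > limits.max) return false; // ill-ordered limits
  }
  return true;
}

// A memtype is valid iff its limits are valid within 2**16 (pages);
// a tabletype iff its limits are valid within 2**32 - 1.
console.log(limitsValid({ min: 1, max: 2 }, 2 ** 16)); // true
console.log(limitsValid({ min: 2, max: 1 }, 2 ** 16)); // false
console.log(limitsValid({ min: 70000 }, 2 ** 16));     // false
```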
I'm not sure I quite understand which semantic issues @rossberg is referring to, but to me it seems like it comes down to whether `[EnforceRange] unsigned long` is equivalent enough to wasm's `u32`. To me they do seem generally equivalent enough not to cause problems. After all, `n <= k`, `n <= m`, and `m <= k` should all evaluate exactly the same whether you operate on EnforceRange'd JS numbers or on `u32` values.
Still, to me it feels more natural to leverage the wasm spec's notion of validity here instead of duplicating the checks in the JS API. For one, wasm engines already have validation code around, and for another, calling out to the wasm spec is more "future-proof" in that updates to validation in the core spec will be implicitly reflected in the JS API spec. (This is especially on my mind as I work to finalize the spec for memory64.)
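The layering being discussed here is observable from JS today: values outside `[0, 2**32 - 1]` are rejected by the IDL's `[EnforceRange]` conversion with a TypeError before the JS API algorithm runs, while in-range but ill-ordered limits are rejected by the JS API itself with a RangeError. A quick check, assuming a standards-conforming engine (runs under Node.js or any browser):

```javascript
// IDL layer: [EnforceRange] unsigned long rejects out-of-range values
// with a TypeError before the constructor's algorithm even starts.
let idlError;
try {
  new WebAssembly.Table({ element: "anyfunc", initial: 2 ** 32 });
} catch (e) {
  idlError = e;
}
console.log(idlError instanceof TypeError); // true

// JS API layer: in-range but ill-ordered limits (maximum < initial)
// fail the validity check with a RangeError.
let apiError;
try {
  new WebAssembly.Table({ element: "anyfunc", initial: 2, maximum: 1 });
} catch (e) {
  apiError = e;
}
console.log(apiError instanceof RangeError); // true
```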
More concretely, given the current state of the core spec, what possible way could there be for a table type created from u32 values min and max, with min <= max, and reftype either funcref or externref, to fail to be valid?
I could understand an argument about the robustness to potential changes in the core spec (though the assertion I suggest would cover that fine). I just don't understand your suggestion that the current rules are impossible to verify from "the outside".
As an alternative, I'd at least add a note that the `0 <= n < 2 ** 32` checks cannot fail due to the conversion in the IDL layer.
While type validity still is relatively simple for the moment (here: proper ordering of limits, and checking that the upper limit of memtypes is ≤ 2^16), that changes with function references or GC types, which are already included on the staging wasm-3.0 branch and involve a recursive algorithm.
It's also a question of spec modularity and hygiene not to duplicate definitions that are "owned" by other parts of a spec where possible. That just risks incoherence and possible hiccups. An assertion does not magically cover mistakes in such a case; rather, an assertion that doesn't actually hold is a spec bug in itself.
Alright, let's just stick with the note about the integer limits, then
Thanks and apologies for dropping the ball here.
Adds explicit steps to run memtype and tabletype validation and throw a RangeError on failure. This explicitly satisfies the preconditions from the wasm embedding spec, and removes the need for the `initial <= maximum` check in the JS API spec.
One other small note: allocation failures on tables are now explicitly reported. This captures implementation limits on table length, which also weren't quite handled by the spec. (This removes a TODO referencing #584.)
Fixes #1792.
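The effect of routing through memtype validity is observable for memories: limits that are correctly ordered but exceed the 2**16-page bound make the memtype invalid, so the constructor throws a RangeError. A minimal sketch, assuming a standards-conforming engine:

```javascript
// Out-of-bounds memory limits surface as a RangeError via memtype
// validity: a memory's limits must be valid within 2**16 pages.
let err;
try {
  new WebAssembly.Memory({ initial: 2 ** 16 + 1 }); // 65537 pages: invalid memtype
} catch (e) {
  err = e;
}
console.log(err instanceof RangeError); // true
```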