Simplify Variant shredding and refactor for clarity #461
Conversation
VariantShredding.md
We extract all homogenous data items of a certain path into `typed_value`, and set aside incompatible data items in `variant_value`.
Intuitively, incompatibilities within the same path may occur because we store the shredding schema per Parquet file, and each file can contain several row groups.
Selecting a type for each field that is acceptable for all rows would be impractical because it would require buffering the contents of an entire file before writing.
All fields for a variant, whether shredded or not, must be present in the metadata.
This may be controversial. I'm trying to say that you should not need to modify the metadata when reading. The reconstructed object should be able to use the stored metadata without adding fields.
I'm a little confused. When the field is not shredded, we will not have metadata for it, right? When it's getting shredded, then it will be like a column and we will generate metadata so it can be used for filtering/pruning?
@sfc-gh-aixu, this is saying that when writing, the metadata for a shredded value and the metadata for a non-shredded value should be identical. Writers should not alter the metadata by removing shredded field names, so that readers do not need to rewrite the metadata (and values) to add them back.
For example, consider an event that looks like this:
{
  "id": 102,
  "event_type": "signup",
  "event_timestamp": "2024-10-21T20:06:34.198724",
  "payload": {
    "a": 1,
    "b": 2
  }
}
And a shredding schema:
optional group event (VARIANT) {
  required binary metadata;
  optional binary value;
  optional group typed_value {
    required group event_type {
      optional binary value;
      optional binary typed_value (STRING);
    }
    required group event_timestamp {
      optional binary value;
      optional int64 typed_value (TIMESTAMP(true, MICROS));
    }
  }
}
The top-level `event_type` and `event_timestamp` fields are shredded. But this is saying that the Variant `metadata` must include those field names. That ensures that the existing binary `metadata` can be returned to the engine without adding `event_type` and `event_timestamp` fields when merging those fields into the top-level Variant `value` when the entire Variant is projected.
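To make that concrete, here is a rough Python sketch (illustrative only; the field-name lists are assumptions based on the example event, not spec text) of the point about metadata: the dictionary a shredding writer stores should be the same one a non-shredding writer would store.

```python
# Hypothetical illustration: the Variant metadata is essentially a dictionary of
# every field name used anywhere in the value. Shredding event_type and
# event_timestamp into Parquet columns must not remove their names from it.
unshredded_metadata = ["id", "event_type", "event_timestamp", "payload", "a", "b"]

# A shredding writer stores event_type and event_timestamp in typed_value
# columns, but keeps their names in the metadata so a reader can reassemble the
# full Variant without rewriting (and re-sorting) the dictionary.
shredded_metadata = ["id", "event_type", "event_timestamp", "payload", "a", "b"]

assert shredded_metadata == unshredded_metadata
```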
Thanks for the detailed explanation. Later I realized this is about the Variant `metadata`, and what I was talking about was column metadata (stats).
I get what you are saying: when the entire Variant is projected, we need to reconstruct the original `value` and `metadata` by merging back the shredded fields if the `metadata` after shredding excludes the shredded fields. Keeping the shredded fields in the `metadata` makes sense to me because it reduces metadata reconstruction on the read side.
VariantShredding.md
Similarly the elements of an `array` must be a group containing one or more of `object`, `array`, `typed_value` or `variant_value`.
Each shredded field is represented as a required group that contains a `variant_value` and a `typed_value` field.
Why each shredded field should be a required group is not clear to me. If fields were allowed to be optional, that would be another way of indicating non-existence of fields.
The primary purpose is to reduce the number of cases that implementers have to deal with. If all of the cases can be expressed with 2 optional fields rather than 2 optional fields inside an optional group, then the group should be required to simplify as much as possible.
In addition, every level in Parquet that is optional introduces another repetition/definition level. That adds up quickly with nested structures and ends up taking unnecessary space.
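A rough way to see the simplification (my own illustration, not from the spec) is to count the states an implementation has to handle for the two fields, with and without the extra optional group:

```python
from itertools import product

# Two optional fields inside a REQUIRED group: each of value/typed_value is
# either null or non-null.
required_group_cases = list(product(["null", "non-null"], repeat=2))
print(len(required_group_cases))  # 4

# The same two fields inside an OPTIONAL group: the group itself can also be
# null, adding another state (and another definition level in Parquet).
optional_group_cases = [("group null",)] + required_group_cases
print(len(optional_group_cases))  # 5
```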
VariantShredding.md
The `typed_value` field may be any type that has a corresponding Variant type.
For each value in the data, at most one of the `typed_value` and `variant_value` may be non-null.
A writer may omit either field, which is equivalent to all rows being null.
If both fields are non-null and either is not an object, the value is invalid. Readers must either fail or return the `typed_value`.
@RussellSpitzer and @gene-db, this could use some attention.
Here, if both `value` and `typed_value` are non-null, I initially thought it made more sense to prefer `value` because it doesn't need to be re-encoded and may have been coerced by an engine to the shredded type.
However, this conflicts with object fields, where `typed_value` is preferred so that data skipping is correct. The object's `value` could contain a field that conflicts with a sub-field's `typed_value`, and there is no way of knowing that from field stats. If we preferred the field value stored in the object's `value`, then data skipping could be out of sync with the value returned in the case of a conflict.
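As a sketch of the reader-side rule being discussed (my own pseudocode; `value` is assumed to hold Variant-encoded bytes or `None`, and `typed_value` the shredded value or `None`):

```python
def read_shredded(value, typed_value, strict=False):
    # Normal cases: at most one of the two columns is non-null.
    if typed_value is None:
        return value                 # may itself be None (missing)
    if value is None:
        return typed_value
    # Conflict written by a buggy writer (and not a partially shredded object):
    # readers must either fail or prefer typed_value, so results stay
    # consistent with data skipping based on typed_value column statistics.
    if strict:
        raise ValueError("invalid shredded data: value and typed_value conflict")
    return typed_value
```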
> the value is invalid

- If both fields are non-null and either is not an object, the value is invalid. Readers must either fail or return the `typed_value`.
+ If both fields are non-null and either is not an object, the `value` is invalid. Readers must either fail or return the `typed_value`.
Why aren't we just being proscriptive here? Isn't this essentially saying you can duplicate a sub-field between `typed_value` and `value`? Wouldn't it be safer to just say this cannot be done?
The problem is that readers won't actually implement restrictions like this, and we can't fully prevent it. It is invalid for a writer to produce a value where `value` and `typed_value` conflict. But writer bugs happen, and readers need to know what to do when they encounter that situation. Otherwise we would get different behaviors between readers that are processing the same data file.
It all comes down to end users -- if a writer bug produces data like this, readers will implement the ability to read because the data still exists and can be recovered. When that happens, we want to know how it is interpreted.
My take is that if writers have bugs and produce invalid values, I'm not sure you can really trust most of the data at all (even metadata). It sounds like we are assuming one specific type of bug, where writers accidentally forget to clear a field?
The point is not the bug. It is that we want to make it valid to read a projection without checking the `value` for bugs.
IMHO, trying to make the projection of an object to another one without having to read `value` is too Spark-specific. For example, with keys "a" and "b" shredded, if I am casting `{"a":2, "b":3, "c":4}` to a struct with keys "a" and "b", I can easily imagine cast semantics that would fail that cast, and such semantics would force us to read both `typed_value` and `value` unless `value` is null.
I don't think I understand. If `a` and `b` are shredded, then there should be no fields with those names in `value`, and the spec is stating that you don't need to check for them. That means all of the information needed to continue is in the shredded Parquet columns. That's not specific to Spark.
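For illustration, a minimal sketch of that projection logic (hypothetical helper; assumes the residual `value` object has already been decoded into a Python dict):

```python
def project_object(shredded, residual, requested):
    """shredded: shredded field name -> decoded value (None if missing);
    residual: dict decoded from the object's `value` column, or None."""
    out = {}
    for name in requested:
        if name in shredded:
            # A conforming writer never repeats a shredded field in `value`,
            # so the shredded column alone is authoritative for this name.
            if shredded[name] is not None:
                out[name] = shredded[name]
        elif residual and name in residual:
            out[name] = residual[name]
    return out

# With "a" and "b" shredded and residual value {"c": 4}:
print(project_object({"a": 2, "b": 3}, {"c": 4}, ["a", "b"]))  # {'a': 2, 'b': 3}
```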
What I was trying to say is: if I am casting that field to a struct with "a" and "b" but "c" exists in `value`, then some engines will require that cast to fail, because an object with keys "a", "b", and "c" does not match the struct schema. Apparently, with Spark such a cast succeeds by producing the struct from the subset of keys that match the struct schema.
VariantEncoding.md
|---------------|-----------|----------------------------------------------------------|--------------------------------------|
| Null type | null | `null` | `null` |
| Boolean | boolean | `true` or `false` | `true` |
| Exact Numeric | number | Digits in fraction must match scale, no exponent | `34`, 34.00 |
For exact numerics, we should allow truncating trailing zeros. For example, the `int8` value `1` and the `decimal(5,2)` value `100` can both be represented as the JSON value `1`.
Also, should the example be quoted to stay consistent?

- | Exact Numeric | number | Digits in fraction must match scale, no exponent | `34`, 34.00 |
+ | Exact Numeric | number | Digits in fraction must match scale, no exponent | `34`, `34.00` |
I think the intent of considering `Exact Numeric` to be a single logical type is that we consider the `int8` value `1` to be logically equivalent to `decimal(5,2)` with unscaled value `100`. If that's the case, I think we'd want the produced JSON to be the same for both (probably `1` in both cases), and not recommend having the fraction match the scale.
@gene-db, @cashmand, these are concerns for the engine layer, not for storage. If Spark wants to automatically coerce between types that's fine, but the compromise that we talked about a couple months ago was to leave this out of the shredding spec and delegate the behavior to engines. Storage should always produce the data that was stored, without modification.
Yes, the engine should be the one concerned with changing types.
However, my original question was about this JSON representation wording. Currently, the representation requirement for an `Exact Numeric` says "Digits in fraction must match scale". However, because `Exact Numeric` is considered a logical type, the value 1 could be stored in the Variant as `int8` 1 or as `decimal(5,2)` with unscaled value 100. Both of those would be the same numeric value, so we should allow truncating trailing zeros in the JSON representation, instead of requiring that the digits in the fraction match the scale.
@gene-db, the JSON representation should match the physical type as closely as possible. The reader can interpret the value however it chooses to, but a storage implementation should not discard the information.
If you want to produce 34 from 34.00 stored as `decimal(9, 2)`, then the engine is responsible for casting the value to `int8` and then producing JSON. The JSON representation for the original `decimal(9, 2)` value is `34.00`.
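A small sketch of that distinction using only the standard library (not spec text): the stored physical type determines the JSON string, and producing 34 instead of 34.00 is a separate, engine-level cast.

```python
from decimal import Decimal

stored = Decimal("34.00")   # decimal(9, 2) value with two fractional digits
print(str(stored))          # "34.00" -- JSON faithful to the stored type

cast = int(stored)          # engine explicitly casts to an integer type
print(str(cast))            # "34"    -- JSON of the cast value
```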
@rdblue I am confused with this JSON chart then. If we are talking about "storage implementation", then are you expecting there is a "storage implementation" that is converting variant values to JSON? When will storage convert a variant value to a JSON string?
I originally thought this chart was trying to say, "When an engine wants to convert a variant value to a JSON string, here are the rules". Therefore, we should allow engines to cast integral decimals to integers before converting to JSON, as you already mentioned in your previous comment.
I think I agree with @gene-db on this. I think any JSON representation that has semantically the same meaning in JSON should be allowed. Translation to JSON is inherently lossy, and I think trying to match semantics will be more error-prone than it is worth (i.e. it should be a non-goal to expect to be able to reconstruct the exact same variant from the proposed JSON representation).
I think maybe the wording or presentation of this mapping is a bit confusing.
I think we are all on the same page about allowing engines to "normalize" the Variant value. For example, I think the Spark implementation already normalizes `1.00` to `1`. There are also many optimizations and efficiency aspects of normalization, so we should not disallow that.
Maybe what this chart is trying to show is: "if you want to output a Variant value as a JSON string, this is the output format you should use". So, for numbers, the conversion should be like `1` or `1.23` (no quotes), not `"1"` or `"1.23"`. If this chart were about the JSON output formatting, would that be more clear?
> When an engine wants to convert a variant value to a JSON string, here are the rules
Yes, this is correct. We want a clear way to convert to a JSON string. However, the normalization needs to happen first. We don't want to specify that the JSON must be any more lossy than it already is.
Why would we require an engine to produce a normalized value?
> Why would we require an engine to produce a normalized value?

At least for me, I don't think it is about "requiring" an engine to produce a normalized value first. I think if an engine is reading variant and converting it to JSON, it is possibly doing so through an internal representation so it can still apply operators on top of the JSON value, and possibly even storing it as an internal representation. Conversion to a string is really only an end-user-visible thing. So when I read this, it seems to be requiring an engine NOT to normalize, which could be hard to implement for some engines.
Dictionary IDs in a `variant_value` field refer to entries in the top-level `metadata` field.
If a Variant is missing in a context where a value is required, readers must either fail or return a Variant null: basic type 0 (primitive) and physical type 0 (null).
For example, if a Variant is required (like `measurement` above) and both `value` and `typed_value` are null, the returned `value` must be `00` (Variant null).
As mentioned in my previous comment, I think it would be invalid for `measurement` to have both `value` and `typed_value` be null, and it should be an error. I don't understand why we're recommending returning Variant null as an option.
This rule is to address the fact that arrays cannot contain a missing value. This is saying that if a value is required but both are null, the implementation must fill in a variant null.
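A minimal sketch of that fill-in rule (my own pseudocode; `b"\x00"` is the one-byte Variant encoding of null: basic type 0, physical type 0):

```python
VARIANT_NULL = b"\x00"  # basic type 0 (primitive), physical type 0 (null)

def required_variant(value, typed_value):
    # A required position (e.g. an array element) cannot be "missing": if a
    # writer left both columns null, the reader either fails or substitutes a
    # Variant null instead of dropping the element.
    if typed_value is not None:
        return typed_value
    if value is not None:
        return value
    return VARIANT_NULL
```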
This rule (both fields being null should be interpreted as JSON null) is valid only for top-level variants and array elements? I wonder how a top-level variant can be inserted with both `value` and `typed_value` being null if the top-level field is required. That seems inconsistent. For arrays, it looks like we could also require `value` to be a Variant-encoded null (JSON null) rather than allowing both fields to be null.
@sfc-gh-saya, if the top-level field is required but both fields are null, then the reader must produce a Variant null value, `00`. We must state what happens in cases like this because it is possible for writers to produce them.
If the writers produce nulls for both `value` and `typed_value`, it's like a corrupted file, and I feel it's reasonable for the readers to error out rather than give a default value.
@rdblue Shouldn't this always be an error? This looks like invalid shredded data.
Above in the chart, we say that `(value=null, typed_value=null)` means "The value is missing; only valid for shredded object fields". This means we can only have `(null, null)` for shredded object fields.
In other scenarios, shredding should never produce `(null, null)`. If there was ever a required variant that is a Variant null, then the shredding scheme should produce `(variant-null, null)`, and should never produce `(null, null)`.
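Restating those combinations as a quick lookup (my paraphrase of the chart being discussed, not a quote from the spec):

```python
# (value, typed_value) -> interpretation
cases = {
    ("non-null", "null"):     "Variant-encoded data, including Variant null (00)",
    ("null",     "non-null"): "shredded (typed) data",
    ("null",     "null"):     "missing; only valid for shredded object fields",
    ("non-null", "non-null"): "partially shredded object; otherwise invalid",
}
```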
We could introduce more error cases, but I don't see much value in requiring an error here, especially since it would apply only when `value` and `typed_value` don't represent an object field. More error cases lead to more rejected data, and it is simple to just define how to handle this case. That makes readers easier to implement.
At a high level, we replace the `value` field of the Variant Parquet group with one or more fields called `object`, `array`, `typed_value`, and `variant_value`.
These represent a fixed schema suitable for constructing the full Variant value for each row.
For example, the query `SELECT variant_get(event, '$.event_ts', 'timestamp') FROM tbl` only needs to load field `event_ts`, and shredding can enable columnar projection that ignores the rest of the `event` Variant.
Similarly, for the query `SELECT * FROM tbl WHERE variant_get(event, '$.event_type', 'string') = 'signup'`, the `event_type` shredded column metadata can be used for skipping and to lazily load the rest of the Variant.
- Similarly, for the query `SELECT * FROM tbl WHERE variant_get(event, '$.event_type', 'string') = 'signup'`, the `event_type` shredded column metadata can be used for skipping and to lazily load the rest of the Variant.
+ Similarly, for the query `SELECT * FROM tbl WHERE variant_get(event, '$.event_type', 'string') = 'signup'`, the `event_type` shredded column metadata can be used for skipping while the rest of the Variant is lazily loaded for matching pages.
# Data Skipping
All elements of an array must be non-null because `array` elements in a Variant cannot be missing.
That is, either `typed_value` or `value` (but not both) must be non-null.
Null elements must be encoded in `value` as Variant null: basic type 0 (primitive) and physical type 0 (null).
Just for consistency: it was written as `00` (Variant null) earlier in the doc, but this is fine too.
VariantShredding.md
| `{"error_msg": "malformed: ..."}` | `{"error_msg", "malformed: ..."}` | null | | | | | Object with no shredding | | ||
| `"malformed: not an object"` | `malformed: not an object` | null | | | | | Not an object (stored as Variant string) | | ||
| `{"event_ts": 1729794240241, "click": "_button"}` | `{"click": "_button"}` | non-null | null | null | null | 1729794240241 | Field `event_type` is missing | | ||
| `{"event_type": null, "event_ts": 1729794954163}` | null | non-null | `00` (field exists, is null) | null | null | 1729794954163 | Field `event_type` is present and is null | |
Some more requested examples: could we have one where "event_ts" is a Date or something not transformable into a timestamp? I assume this would make `value` be `{"event_ts": "08-03-2025"}` while `typed_value` would be null.
I also wonder if we could do a single example for a doubly nested field showing where `typed_value.address.value != null`. All the examples here cover a primitive field being typed, so it may be nice to show the behavior with an object being typed.
{
  Name
  Address {
    City
    ZIP (Shredded as INT but some values as String?)
  }
}
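For what it's worth, a row-level sketch of the requested nested case (the flattened column names and the Variant-bytes placeholder are purely illustrative, not from the spec): `Address.ZIP` shredded as an integer, with one row whose ZIP is a string.

```python
# Row {"Name": "A", "Address": {"City": "B", "ZIP": 12345}} -- ZIP fits the type:
row1 = {
    "typed_value.Name.typed_value": "A",
    "typed_value.Address.typed_value.City.typed_value": "B",
    "typed_value.Address.typed_value.ZIP.typed_value": 12345,
    "typed_value.Address.typed_value.ZIP.value": None,
}

# Row {"Name": "C", "Address": {"City": "D", "ZIP": "N1 9GU"}} -- ZIP does not
# fit the shredded INT type, so it falls back to the field-level `value` column:
row2 = {
    "typed_value.Name.typed_value": "C",
    "typed_value.Address.typed_value.City.typed_value": "D",
    "typed_value.Address.typed_value.ZIP.typed_value": None,
    "typed_value.Address.typed_value.ZIP.value": b"<variant-encoded 'N1 9GU'>",
}
```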
I added most other examples, but I don't think that we need the nested example because it would make the table much larger. I also cover nesting in the next section specifically.
VariantShredding.md
The `typed_value` associated with any Variant `value` field can be any shredded type according to the rules above.
I don't think I understand this sentence, but I believe the intent is that objects or elements within arrays can also be shredded?
I think the tables above are easier for me to follow than the Parquet schema below. I understand, though, if that's difficult to depict.
This is just saying that any time you have a `value` field, you can also have a `typed_value` field that might be any shredded type, like an array nested in a field or an object nested in an array.
Consider the following example:
Statistics for `typed_value` columns can be used for file, row group, or page skipping when `value` is always null (missing).
Do we need to specify "null" vs "variant null"? I get a little confused sometimes in the doc.
Isn't "null (missing)" clear that it is not variant null? Missing is only used to mean one form of null in the text.
  "not an object"
]
When the corresponding `value` column is all nulls, all values must be the shredded `typed_value` field's type.
Sometimes we refer to `value` as a column and sometimes as a field. Just wondering if we should take a pass to standardize, unless there is another meaning I'm not following here.
I think I was referring to "value" as a Parquet column. All object variant fields should be referred to as "field".
It might be good to have this in a glossary at the top.
Thanks a lot for driving this for clarity. @rdblue
VariantShredding.md
| timestamp with time zone | TIMESTAMP(true, MICROS|NANOS) |
| timestamp without time zone | TIMESTAMP(false, MICROS|NANOS) |
I fixed this by adding separate columns for physical type and logical type annotation.
I think we should merge this and close on remaining open items in more focused PRs/Discussion:
- Open issues I see: the JSON conversion suggestions, which I think should be less strong (i.e. recommendations), but since this is an engine concern they probably should not be part of the official spec, especially because it might be complicated for engines to implement full-precision requirements when going through an in-memory model of JSON before actually serializing.
- https://github.com/apache/parquet-format/pull/461/files#r1819588055 has a few commenters and it isn't clear there is a resolution.
- I think a glossary would help new readers distinguish between subtle concepts.
Thanks to everyone for reviewing! I'll merge this tomorrow unless there are objections. I think to avoid confusion like #479, we should get this in and then close on any remaining issues.
Thanks for simplifying and improving all of the shredding details!
Thanks for all the reviews, everyone! I just merged this so that we can move forward with the next round of discussion.
…hredded` by default

### What changes were proposed in this pull request?
Disable `spark.sql.variant.allowReadingShredded` by default.

### Why are the changes needed?
apache/parquet-format#461 made incompatible changes to the shredding spec; if Spark delivers the current shredding implementation as-is in Spark 4.0, additional migration/compatibility efforts will be required in the future.

### Does this PR introduce _any_ user-facing change?
No, variant is an unreleased feature.

### How was this patch tested?
Pass GHA.

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes #49874 from pan3793/SPARK-45891-followup.

Authored-by: Cheng Pan <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
Rationale for this change
Updating the Variant and shredding specs from a thorough review.
What changes are included in this PR?
Spec updates, mostly to the shredding spec to minimize it and make it clear. This also attempts to make the variant spec more consistent (for example, by using `value` in both).
- Drops `object` and `array` in favor of always using `typed_value`
- Makes shredded field groups `required` to avoid unnecessary null cases
- `metadata` must be valid for all variant values without modification

Do these changes have PoC implementations?
No.