A few months ago, I tried to pack constant data and failed to come up with something usable.
However, the method I tried accounted for the quantization of the constant data as part of the error metric. This gave pretty bad results: to compensate for the added error, the animated samples had to retain more data and often still failed to meet the error threshold.
Shifting our perspective, perhaps there is another way. If we quantize the constant data at the end, after everything has been compressed and optimized using the full precision samples, we destructively compress the data. The key insight is that constant sub-tracks contribute a constant error. For example, if quantizing a specific sub-track adds a 1mm or 1 degree error, it adds that error to every key frame consistently. In practice, compression error is most visible when it changes from frame to frame. If we are consistently 1mm off on every key frame, it won't be visible to the naked eye unless we line up on a point of contact where that precision level matters.
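A minimal sketch (using ad hoc types, not the library's API) of what quantizing a constant rotation once could look like; because the same decoded value is reused for every key frame, the error it introduces is measured once and stays constant across the clip:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

struct quat { float x, y, z, w; };

// Quantize one float in [-1, 1] to 16 bits and decode it back.
inline float quantize_unquantize_16(float value)
{
	const float clamped = std::max(-1.0f, std::min(1.0f, value));
	const uint16_t packed = static_cast<uint16_t>(std::lround((clamped * 0.5f + 0.5f) * 65535.0f));
	return (static_cast<float>(packed) / 65535.0f) * 2.0f - 1.0f;
}

// Quantize a constant rotation once; every key frame decodes the same value,
// so whatever error this introduces is identical on every frame of the clip.
inline quat quantize_constant_rotation(const quat& raw)
{
	quat result;
	result.x = quantize_unquantize_16(raw.x);
	result.y = quantize_unquantize_16(raw.y);
	result.z = quantize_unquantize_16(raw.z);
	result.w = quantize_unquantize_16(raw.w);

	// Re-normalize after quantization to keep a valid rotation.
	const float len = std::sqrt(result.x * result.x + result.y * result.y +
	                            result.z * result.z + result.w * result.w);
	result.x /= len; result.y /= len; result.z /= len; result.w /= len;
	return result;
}
```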
As such, I propose the following. In the compression settings, add an optional flag to enable destructive constant data quantization. Perhaps also add a separate error threshold for it (e.g. 1mm or higher instead of the usual 0.1mm). Ideally, either every constant sub-track would be quantized or none of them would be, to keep the decompression code path as simple as possible; the operation is destructive to begin with.

For rotations, this is probably fine: quantizing them to 8 or 16 bits per component is likely entirely safe. Things are more complicated for translation and scale. In exotic clips with very small scale values, translation is often used to compensate, and we can end up with very large translation values. In those clips, any amount of quantization on the constant translations could be problematic. If we grab the clip AABB to quantize them, it will be heavily skewed by these extreme values. We would have to detect this edge case and fall back to full precision, as in the sketch below.
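A hedged sketch of what the proposed setting and the translation fallback might look like. None of these names exist in the library today, and the threshold value, units, and skew heuristic are assumptions for illustration only:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Hypothetical settings; the real compression settings struct would grow equivalents.
struct hypothetical_compression_settings
{
	bool quantize_constant_sub_tracks = false;	// proposed opt-in flag
	float constant_error_threshold = 0.001f;	// e.g. 1mm, looser than the usual 0.1mm (assumes meters)
};

struct float3 { float x, y, z; };

// Decide whether a constant translation can safely be quantized against the clip AABB.
// If the AABB extent is so large that one quantization step already exceeds the allowed
// error (the "skewed by extreme values" case), fall back to full precision instead.
inline bool can_quantize_constant_translation(const float3& aabb_extent,
                                              float error_threshold,
                                              uint32_t bits_per_component = 16)
{
	const float num_steps = static_cast<float>((1u << bits_per_component) - 1u);
	const float max_extent = std::max(aabb_extent.x, std::max(aabb_extent.y, aabb_extent.z));
	const float step_size = max_extent / num_steps;	// worst-case quantization step
	return step_size <= error_threshold;
}
```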