Revisit constant data packing #355

Open
nfrechette opened this issue May 1, 2021 · 0 comments

A few months ago, I tried to pack constant data and failed to come up with something usable.

However, the method I tried accounted for the constant data quantization as part of the error metric. This gave pretty bad results: to compensate for the added error, the animated samples had to retain more data and often failed to meet the error threshold.

Shifting our perspective, perhaps there is another way. If we quantize the constant data at the end, after everything has been compressed and optimized against the full precision samples, we compress that data destructively. The key insight is that constant sub-tracks contribute a constant error: if quantizing a specific sub-track adds a 1mm or 1 degree error, it adds it to every key frame consistently. In practice, compression error is most visible when it changes from frame to frame. If we are consistently 1mm off on every key frame, it won't be visible to the naked eye unless we line up on a point of contact where that precision level matters.
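The insight above can be shown with a toy check (not ACL code): adding the same constant offset to every sample leaves the frame-to-frame deltas, which are what the eye picks up, unchanged.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Toy illustration: shift every sample by the same constant error and
// measure how much the frame-to-frame deltas change. For a truly
// constant offset, the deltas are unchanged (up to float rounding).
inline float max_delta_difference(const std::vector<float>& samples, float constant_error)
{
    float worst = 0.0f;
    for (std::size_t i = 1; i < samples.size(); ++i)
    {
        const float original_delta = samples[i] - samples[i - 1];
        const float shifted_delta = (samples[i] + constant_error) - (samples[i - 1] + constant_error);
        worst = std::max(worst, std::abs(shifted_delta - original_delta));
    }
    return worst;
}
```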

As such, I propose the following: add an optional flag to the compression settings to enable destructive constant data quantization, and perhaps a dedicated error threshold for it (e.g. 1mm or higher instead of the usual 0.1mm). Ideally, every constant sub-track would be quantized (or not) together to keep the decompression code path as simple as possible; the operation is destructive to begin with.

For rotations, this is probably fine: quantizing them on 8 or 16 bits per component is likely entirely safe. Things are more complicated for translation and scale. In exotic clips with very small scale values, translation is often used to compensate, and we can end up with very large translation values. In those clips, any amount of quantization on the constant translations could be problematic, and if we grab the clip AABB to quantize them, it will be heavily skewed by these extreme values. We would have to detect this edge case and fall back to full precision.
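The fallback logic could look roughly like this sketch. Everything here is hypothetical (the helper names, the 16 bits per component choice, and the thresholds are illustrative, not the ACL API): quantize the constant value inside the clip AABB, then keep it only if the round-trip error stays under the destructive threshold; a clip whose AABB is skewed by extreme compensating translations naturally fails the check and falls back to full precision.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

struct vec3f { float x, y, z; };

// Quantize one component to 16 bits over [min, max].
inline uint16_t pack16(float value, float min, float max)
{
    const float range = max - min;
    const float normalized = range > 0.0f ? (value - min) / range : 0.0f;
    const float clamped = std::min(std::max(normalized, 0.0f), 1.0f);
    return static_cast<uint16_t>(std::lround(clamped * 65535.0f));
}

inline float unpack16(uint16_t packed, float min, float max)
{
    return min + (static_cast<float>(packed) / 65535.0f) * (max - min);
}

// Hypothetical helper: returns true and writes the round-tripped value if
// the quantization error is under the destructive threshold; returns false
// to signal that this sub-track must fall back to full precision.
bool try_quantize_constant(const vec3f& value, const vec3f& aabb_min, const vec3f& aabb_max,
                           float error_threshold, vec3f& out_value)
{
    out_value.x = unpack16(pack16(value.x, aabb_min.x, aabb_max.x), aabb_min.x, aabb_max.x);
    out_value.y = unpack16(pack16(value.y, aabb_min.y, aabb_max.y), aabb_min.y, aabb_max.y);
    out_value.z = unpack16(pack16(value.z, aabb_min.z, aabb_max.z), aabb_min.z, aabb_max.z);

    const float error = std::max({ std::abs(out_value.x - value.x),
                                   std::abs(out_value.y - value.y),
                                   std::abs(out_value.z - value.z) });
    return error <= error_threshold;
}
```

With a small AABB, 16 bits per component easily beats a 1mm-style threshold; with an AABB stretched to hundreds of thousands of units by compensating translations, the quantization step alone exceeds it, which is exactly the edge case that needs the full precision fallback.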

@nfrechette nfrechette added this to the v2.1 milestone May 1, 2021
@nfrechette nfrechette modified the milestones: v2.1, v2.2 Mar 3, 2022
@nfrechette nfrechette modified the milestones: v2.2, v3.0 Dec 9, 2023