Updated "added support for partial bitstream decoding" #1407
Conversation
@rouault A good file for testing partial codestream decoding is input/nonregression/test_lossless.j2k. I did tests with truncated versions of that file at 32k, 64k, and 128k sizes. The opj_decompress command can produce an image when using the […]
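For reference, a minimal sketch (not taken from this PR) of how a caller might exercise partial decoding through the library API rather than through opj_decompress. It assumes the non-strict decoding entry point introduced by this work is opj_decoder_set_strict_mode(), and the truncated file name is made up; error handling is abbreviated.

```c
/* Sketch: decode a (possibly truncated) .j2k file with strict mode
 * disabled, so the decoder returns whatever image data it could
 * recover instead of failing outright. */
#include <stdio.h>
#include <openjpeg.h>

int main(int argc, char **argv)
{
    /* hypothetical truncated test file */
    const char *path = (argc > 1) ? argv[1] : "test_lossless_32k.j2k";
    opj_dparameters_t params;
    opj_codec_t *codec = opj_create_decompress(OPJ_CODEC_J2K);
    opj_stream_t *stream = opj_stream_create_default_file_stream(path, OPJ_TRUE);
    opj_image_t *image = NULL;

    opj_set_default_decoder_parameters(&params);
    opj_setup_decoder(codec, &params);

    /* Assumption: this is the API added for partial bitstream decoding;
     * OPJ_FALSE relaxes the "complete codestream" requirement. */
    opj_decoder_set_strict_mode(codec, OPJ_FALSE);

    if (!opj_read_header(stream, codec, &image) ||
        !opj_decode(codec, stream, image)) {
        fprintf(stderr, "decoding failed for %s\n", path);
    } else {
        printf("decoded %ux%u, %u component(s)\n",
               image->x1 - image->x0, image->y1 - image->y0, image->numcomps);
    }

    opj_end_decompress(codec, stream);
    opj_stream_destroy(stream);
    opj_destroy_codec(codec);
    if (image) opj_image_destroy(image);
    return 0;
}
```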
@ouyangmingjun-work
Thank you very much for your answer. What I want to ask is: can other open-source software achieve the effect I want, i.e. tolerate packets being lost in the middle of the stream?
No. All JPEG 2000 decoding libraries would have the same issue. By "packets" I mean network packets.

You can try filling the missing packet 5 with zeros in the decode buffer before appending the next packet 6. This requires that your "packet" protocol tracks the byte offset of each packet (or that all packets have the same length). The decoding logic might still have a problem skipping over the zeros if it was expecting some important header there. So the decode buffer would be filled like (packets 1-4, zeros in place of packet 5, packets 6-10). Based on your description, it seems that your decode buffer is filled with (packets 1-4, packets 6-10), which causes problems when trying to decode the data after packet 4, since the data for packets 6-10 sits at the wrong offset and looks like corrupted data. If the missing packet is received later (which can happen with UDP-based protocols), it can then replace the zeros in the decode buffer.

JPEG 2000's progressive transmission feature (and I think regular JPEG's as well?) is not for missing packets. It is for allowing decoding when only X% of the data has been received, where that X% has no gaps or missing packets. HTTP range requests let a client request just the first 1k of all images on a page (where the images could be very large) and then show at least a low-quality version of each image before continuing to download more of it as needed. SecondLife does that for the textures of 3D objects, since the 3D viewer needs to download all the textures when moving to a new location. There can be 100-1000 textures that need to be downloaded and displayed quickly, so the viewer downloads only the first 1k of each image so it can start showing something, then progressively downloads more for objects that are visible to improve the quality.
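To make the zero-fill idea concrete, here is a hypothetical sketch of a reassembly buffer that keeps every network packet at its recorded byte offset, so a lost packet leaves a zeroed gap instead of shifting later data. The names (net_packet_t, reassembly_*) are invented for illustration and are not part of OpenJPEG.

```c
/* Zero-fill strategy: pre-fill the decode buffer with zeros and copy
 * each received packet in at its byte offset.  A lost packet simply
 * leaves a zeroed gap; a late-arriving packet overwrites its gap. */
#include <stdlib.h>
#include <string.h>

typedef struct {
    size_t offset;               /* byte offset within the codestream */
    size_t length;               /* payload length in bytes */
    const unsigned char *payload;
} net_packet_t;

typedef struct {
    unsigned char *data;         /* zero-initialized decode buffer */
    size_t size;                 /* total expected codestream size */
} reassembly_buffer_t;

static int reassembly_init(reassembly_buffer_t *buf, size_t total_size)
{
    buf->data = calloc(total_size, 1);  /* gaps from lost packets stay zero */
    buf->size = total_size;
    return buf->data != NULL;
}

/* Works for in-order, out-of-order, and late-arriving packets alike,
 * as long as the protocol tracks each packet's byte offset. */
static int reassembly_add(reassembly_buffer_t *buf, const net_packet_t *pkt)
{
    if (pkt->offset + pkt->length > buf->size)
        return 0;                       /* out of range */
    memcpy(buf->data + pkt->offset, pkt->payload, pkt->length);
    return 1;
}
```

The filled buffer (packets 1-4, zeros for packet 5, packets 6-10) can then be handed to the decoder; as noted above, whether the zeroed region is tolerated still depends on which markers it happens to replace.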
Thank you very much for your patience and detailed answer.
- Update from version 2.4.0 to 2.5.0
- Update of rootfile
- Changelog 2.5.0 (May 2022)
  No API/ABI break compared to v2.4.0, but additional symbols for subset of components decoding (hence the MINOR version bump).
  * Encoder: add support for generation of TLM markers [#1359](uclouvain/openjpeg#1359)
  * Decoder: add support for high throughput (HTJ2K) decoding [#1381](uclouvain/openjpeg#1381)
  * Decoder: add support for partial bitstream decoding [#1407](uclouvain/openjpeg#1407)
  * Bug fixes (including security fixes)

Signed-off-by: Adolf Belka <[email protected]>
Signed-off-by: Michael Tremer <[email protected]>
This is an updated version of #1251 by @chafey