quick demo of complex serialization #48

Draft · wants to merge 11 commits into master
Conversation

@jerch (Owner) commented Mar 9, 2023

Only playground branch atm...

Can be tested with:

  • call yarn start in xterm.js base folder to run demo
  • insert this into terminal:
    echo -e 'AAA\x1b[5CBBB\x1bPq#0;2;0;0;0#1;2;100;100;0#2;2;0;100;0#1~~@@vv@@~~@@~~~~@@vv@@~~@@~~~~@@vv@@~~@@~~$#2??}}GG}}??}}????}}GG}}??}}????}}GG}}??}}??\x1b\\\x1b[5CCCC'
    
  • open the browser console and input these commands:
    • ia=term._addonManager._addons[5].instance
    • res = ia.serialize(0,6)
    • term.write('\r\n'+res.join('\r\n'))

Result should look like this:
(screenshot)

@jerch mentioned this pull request Mar 9, 2023

@jerch commented Mar 10, 2023

A more involved example with the line-based serializer (still not using the serialize addon, thus no FG/BG attributes):

(screenshot)

Exec in the console after running the command in the terminal (grr4 == output of imgcat palette.png):

ia=term._addonManager._addons[5].instance;
res = ia.serialize(0,8);
term.write('\r\n\r\n\r\nfrom serialize:\r\n'+res.join('\r\n'))

@jerch commented Mar 12, 2023

While the line-based serialization works as expected, it is still pretty wasteful, as it creates a full-line image if there is at least one cell tile. This has a bad impact on the FIFO buffer when reading back. Furthermore, it duplicates most of ImageStorage.render and ImageRenderer.draw.

A better way to deal with this might be a method extractCanvasAtBufferRange that works on multiple lines at once. That way render and draw become just a special buffer-range case, where the range points to the active viewport. Single-line extraction for serialization also becomes just a special case, where we can apply left/right truncation rules on top, saving precious FIFO buffer space later on.
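As a rough sketch of that idea (all names here are hypothetical, not the actual addon API), the mapping from a buffer cell range to the pixel rectangle to extract, including left/right truncation, could look like:

```javascript
// Hypothetical helper for an `extractCanvasAtBufferRange`-style method
// (names are assumptions, not the addon's real API). Maps a buffer cell
// range to the pixel rectangle to copy from the image canvas.
function bufferRangeToPixelRect(range, cellWidth, cellHeight) {
  // range: { startRow, startCol, endRow, endCol } in cells (end exclusive);
  // startCol/endCol carry the left/right truncation already applied.
  return {
    x: range.startCol * cellWidth,
    y: range.startRow * cellHeight,
    width: (range.endCol - range.startCol) * cellWidth,
    height: (range.endRow - range.startRow) * cellHeight
  };
}

// Example: rows 0..6 at 7x14 px cells, truncated to cols 3..10.
const rect = bufferRangeToPixelRect(
  { startRow: 0, startCol: 3, endRow: 6, endCol: 10 }, 7, 14
);
// rect = { x: 21, y: 0, width: 49, height: 84 }
```

The viewport render path would then just pass the active viewport as the range, and the serializer a single truncated line.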

@jerch commented Mar 24, 2023

Added the QOI image format temporarily to the IIP sequence, as it promises huge performance gains for a lossless serialization format while being only slightly worse in compression rate than PNG.

In my early tests decoding is 2-3x faster than PNG. I have yet to test the encoding path, but QOI itself claims to be 30-50x faster at encoding than PNG.

Here is one of the test images with alpha channel:
(screenshot)
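For context on why QOI encoding is so cheap (this follows the public QOI specification, not this addon's encoder): it is a single pass over the pixels with a 64-slot color index addressed by a tiny hash, so a repeated color costs one output byte instead of four:

```javascript
// QOI's running color index, per the public QOI specification:
// a 64-slot array addressed by a cheap per-pixel hash.
const index = new Uint32Array(64); // packed RGBA per slot

function qoiHash(r, g, b, a) {
  return (r * 3 + g * 5 + b * 7 + a * 11) % 64;
}

// On a hit the encoder can emit a 1-byte QOI_OP_INDEX instead of raw bytes.
function seenBefore(r, g, b, a) {
  const h = qoiHash(r, g, b, a);
  const packed = (r << 24 | g << 16 | b << 8 | a) >>> 0;
  const hit = index[h] === packed;
  index[h] = packed;
  return hit;
}

seenBefore(255, 0, 0, 255); // first occurrence: false
seenBefore(255, 0, 0, 255); // now cached in the index: true
```

This branchy, data-dependent structure is also why SIMD is a questionable fit, as discussed further below.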

@jerch commented Mar 26, 2023

The last commit makes it possible to directly compare PNG vs. QOI serialization:
(screenshot)

Meaning of the numbers:

  • storageUsage: ~44 MB of RGBA data loaded by the addon
  • _storage._images.size: 80 images in total
  • sPng() - serialize all images via canvas.toDataURL as base64 encoded PNG
    • ~1.2 seconds to encode 44 MB RGBA
    • resulting in ~17.6 MB of base64 data
  • sQoi() - serialize all images via custom QoiEncoder.encode, custom b64encode and TextDecoder('latin1').decode
    • ~0.35 seconds to encode 44 MB RGBA
    • resulting in ~17.7 MB of base64 data
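A quick sanity check on these numbers: base64 inflates payloads by 4/3, so ~17.6 MB of base64 corresponds to roughly 13.2 MB of actual compressed image bytes (rough arithmetic, ignoring data-URL prefixes and padding):

```javascript
// Rough arithmetic: base64 encodes every 3 input bytes as 4 output chars.
const base64MB = 17.6;                  // measured base64 output size
const compressedMB = base64MB * 3 / 4;  // ~13.2 MB of actual PNG/QOI bytes
const rawMB = 44;                       // RGBA input loaded by the addon
const ratio = compressedMB / rawMB;     // compression ratio vs. raw, ~0.3
console.log(compressedMB.toFixed(1), ratio.toFixed(2));
```

So both formats compress the raw RGBA data to roughly 30% here, matching the "slightly worse than PNG" claim for QOI.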

To repro the numbers yourself:

  • run xterm.js demo
  • change scrollback to 10000
  • run in shell for i in {1..10}; do imgcat addons/xterm-addon-image/fixture/qoi/* ; done
    which loads 8 QOI example images 10 times --> 80 images in the addon storage
  • open console and type:
    ia = term._addonManager._addons[5].instance;
    ...
    ia.sPng()
    ia.sQoi()

In general QOI encoding gives a significant speedup over the builtin PNG method. For different images the speedup is 2.5-5x, which is way lower than advertised by QOI (20-50x). The reason for the lower values is that the string serialization runtime is dominated by other tasks as well (see profiler below):

  • QOI encoding
    QOI encoding takes ~37% of the runtime (wasm-function[1] in the graphics below).
    If we assume that the PNG path has a similar runtime penalty from the other tasks, then QOI encoding is 8-9 times faster than PNG encoding. So why not 20-50x faster?
    Maybe there is still room for improvement in the QOI encoder; I did only a straightforward implementation of the reference code with some wasm adjustments. Also the QOI logic is pretty register-heavy, which probably gets noisy in wasm and may be penalized by wasm's stack machine. Whether SIMD can help is not clear to me yet (the code is very branchy with high data entropy, not the best circumstances for any SIMD trickery).
    On the other hand, it could also be that Chrome has a quite fast PNG encoder, so 20-50x was never in the cards.
    Last but not least, PNG encoding speed differs a lot with the given image data, while QOI is pretty stable. Same goes for the final compression ratio.
  • pixel transfer from (hardware accelerated) canvas
    The second biggest runtime portion for the QOI path accounts for ~31% of total runtime on my machine (getImageData & drawImage in the graphics below). This will differ a lot between machine setups, e.g. depending on GPU bus speed and the like. Since I have a fairly old laptop (i7 Haswell), I'd expect this to be lower on most newer desktop machines. We also cannot do much about it, besides drawing to an interim software canvas (already done that way).
  • base64 encoding + string conversion
    This is a big bummer, but a significant runtime portion that cannot be avoided, as Javascript lacks fast string creation from arraybuffers. On my machine this accounts for ~13% of total runtime with QOI (decode and b64encode in the graphics below). The base64 encoder is also not yet optimized (but accounts for only 5% of total runtime).
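To illustrate what the b64encode step entails, here is a minimal, unoptimized sketch (not the addon's actual encoder): per 3-byte chunk it does four table lookups plus string concatenation, which is exactly the kind of work that shows up in the profile.

```javascript
// Minimal, unoptimized base64 encoder sketch (not the addon's b64encode).
const B64 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';

function b64encodeNaive(bytes) {
  let out = '';
  let i = 0;
  // 3 input bytes --> 24-bit group --> 4 output chars
  for (; i + 2 < bytes.length; i += 3) {
    const n = (bytes[i] << 16) | (bytes[i + 1] << 8) | bytes[i + 2];
    out += B64[n >> 18] + B64[(n >> 12) & 63] + B64[(n >> 6) & 63] + B64[n & 63];
  }
  // trailing 1 or 2 bytes get '=' padding
  if (i + 1 === bytes.length) {
    const n = bytes[i] << 16;
    out += B64[n >> 18] + B64[(n >> 12) & 63] + '==';
  } else if (i + 2 === bytes.length) {
    const n = (bytes[i] << 16) | (bytes[i + 1] << 8);
    out += B64[n >> 18] + B64[(n >> 12) & 63] + B64[(n >> 6) & 63] + '=';
  }
  return out;
}

b64encodeNaive(new Uint8Array([77, 97, 110])); // 'TWFu'
```

A faster variant would write the output into a preallocated byte buffer and convert to a string once at the end, e.g. via TextDecoder as mentioned above.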

(profiler screenshot)

TL;DR
QOI serialization with custom encoders is 2.5 - 5 times faster than builtin PNG serialization.


Update: As with all data-intensive profiling in JS, the runtime numbers above are slightly skewed by the open devtools. With devtools closed, QOI encoding finishes the test above in 250 ms and PNG encoding in 1100 ms, a speedup of ~4.4x. So QOI is in fact a tiny bit faster than described here.

@jerch commented Mar 30, 2023

My earlier wasm serializer compared to serialize addon:

  • serialize addon
    (screenshot)

  • wasm serialize
    (screenshot)

(Tested with ls-lR /usr output on 10k scrollbuffer at 87 cols width.)

The important entry is serialize (highlighted above). It shows a whopping speed difference of ~10x (in fact it is much higher, ~20x, when looking only at the real impl under wasm-function[2], but half of the speedup gets eaten by data transfers).

The wasm text buffer serializer still needs cleanup and a few more config options, before we can look into a proper way to integrate image serialization as well.
