-
We've already got RLE support in OLA. One thing to consider with multiple universes is that you hit the sync issue: if your input universes aren't in step timing-wise, you have to do clever or complicated things. See the SPI plugin for some examples of this and the available options.
-
Oh, very good, I was not aware of the existing RLE implementation. I found it here: https://github.com/OpenLightingProject/ola/blob/master/common/dmx/RunLengthEncoder.cpp
Are you okay with me adding the heatshrink library + OLA glue code + unit tests (so it behaves just like the current RLE implementation) to the same folder? The license of the heatshrink library is ISC.
-
In most cases, a DMX512 universe is not "completely utilized" (i.e. with all 512 available channels actually in use). Of course there are cases where over 90% of the channels are mapped to fixtures, mostly in "fixed" installations. However, in "mobile" installations and smaller venues, or when splitting the fixtures across multiple universes, lots of channels will simply have a value of zero.
Therefore, the data will most probably compress well, meaning that the difference in size between a raw DMX buffer and its compressed representation will usually be huge. However, since we are talking about an embedded device with limited memory and computation speed, not every compression algorithm is a good fit here. Run-length encoding is easy to understand and to implement, but it suffers from poor compression gains. I've used heatshrink for DMX data on embedded devices in the past and I've been pretty happy with it. The algorithm is designed to run on embedded devices: it needs few resources (code size, RAM and CPU time) and can be used without dynamic memory allocation. I haven't used it on the RP2040 yet; tests pending.
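To give an idea of the dongle side, here is a minimal sketch of a frame decompressor using heatshrink's static-allocation mode (compiled with HEATSHRINK_DYNAMIC_ALLOC set to 0, so the window, lookahead and input buffer sizes come from the HEATSHRINK_STATIC_* config macros; the INPUT_BUFFER_SIZE parameter mentioned below presumably maps to HEATSHRINK_STATIC_INPUT_BUFFER_SIZE). The heatshrink_decoder_* calls are the library's real streaming API; decompress_frame() and the framing around it are hypothetical glue, equally untested on the RP2040:

```c
#include <stddef.h>
#include <stdint.h>
#include "heatshrink_decoder.h"

#define DMX_FRAME_SIZE 512

/* One statically allocated decoder: with HEATSHRINK_DYNAMIC_ALLOC == 0,
 * heatshrink needs no heap at all. */
static heatshrink_decoder hsd;

/* Decompress one received payload into a raw DMX frame.
 * Returns the number of bytes written, or 0 on error/overflow. */
static size_t decompress_frame(const uint8_t *in, size_t in_len,
                               uint8_t out[DMX_FRAME_SIZE]) {
  heatshrink_decoder_reset(&hsd);
  size_t in_pos = 0, out_pos = 0, n = 0;

  while (in_pos < in_len) {
    if (heatshrink_decoder_sink(&hsd, (uint8_t *)&in[in_pos],
                                in_len - in_pos, &n) < 0) return 0;
    in_pos += n;
    HSD_poll_res pres;
    do {  /* drain whatever the decoder has produced so far */
      pres = heatshrink_decoder_poll(&hsd, &out[out_pos],
                                     DMX_FRAME_SIZE - out_pos, &n);
      out_pos += n;
      if (pres < 0 || (pres == HSDR_POLL_MORE && out_pos == DMX_FRAME_SIZE))
        return 0;  /* decode error, or payload bigger than one frame */
    } while (pres == HSDR_POLL_MORE);
  }

  /* Flush whatever is still buffered in the decoder. */
  while (heatshrink_decoder_finish(&hsd) == HSDR_FINISH_MORE) {
    if (out_pos == DMX_FRAME_SIZE ||
        heatshrink_decoder_poll(&hsd, &out[out_pos],
                                DMX_FRAME_SIZE - out_pos, &n) < 0) return 0;
    out_pos += n;
  }
  return out_pos;
}
```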
Right now, I cannot find the file where I wrote down my experiment results back then, but as far as I remember, the best set of parameters was:
WINDOW_BITS = 9, LOOKAHEAD_BITS = 6, INPUT_BUFFER_SIZE = 600
(that part I actually found written down). If a DMX frame is all zeroes or all ones, it compresses from 512 bytes down to 16 bytes. With completely random data, it compresses "up" to an observed "maximum" of 580 bytes. So yes, the compressed size can be larger than the input. Therefore, the host would need to compress every DmxBuffer, check whether it shrank or grew, and then send the smaller representation to the dongle (using two different commands, one for raw data and one for compressed data).
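For illustration, here is what that compress-and-compare step could look like on the host. The heatshrink_encoder_* calls are the library's actual streaming API; hs_compress() itself is hypothetical glue (dynamic allocation is fine here, since this runs on the host rather than on the dongle):

```c
#include <stddef.h>
#include <stdint.h>
#include "heatshrink_encoder.h"

enum { WINDOW_BITS = 9, LOOKAHEAD_BITS = 6 };

/* Compress in_len bytes with heatshrink. Returns the compressed size, or 0
 * if the result would not fit into out_cap bytes. Passing the raw size as
 * out_cap therefore answers "did compression pay off?" directly. */
size_t hs_compress(const uint8_t *in, size_t in_len,
                   uint8_t *out, size_t out_cap) {
  heatshrink_encoder *hse = heatshrink_encoder_alloc(WINDOW_BITS, LOOKAHEAD_BITS);
  if (hse == NULL) return 0;

  size_t in_pos = 0, out_pos = 0, n = 0;
  int ok = 1;

  while (ok && in_pos < in_len) {  /* feed input... */
    if (heatshrink_encoder_sink(hse, (uint8_t *)&in[in_pos],
                                in_len - in_pos, &n) < 0) { ok = 0; break; }
    in_pos += n;
    HSE_poll_res pres;
    do {  /* ...and drain the output the encoder has ready */
      pres = heatshrink_encoder_poll(hse, &out[out_pos], out_cap - out_pos, &n);
      out_pos += n;
      if (pres < 0 || (pres == HSER_POLL_MORE && out_pos == out_cap)) ok = 0;
    } while (ok && pres == HSER_POLL_MORE);
  }

  while (ok) {  /* flush the remaining bits */
    HSE_finish_res fres = heatshrink_encoder_finish(hse);
    if (fres == HSER_FINISH_DONE) break;
    if (fres != HSER_FINISH_MORE || out_pos == out_cap) { ok = 0; break; }
    if (heatshrink_encoder_poll(hse, &out[out_pos], out_cap - out_pos, &n) < 0) {
      ok = 0; break;
    }
    out_pos += n;
  }

  heatshrink_encoder_free(hse);
  return ok ? out_pos : 0;
}
```

The caller would pass the raw size as the limit, e.g. hs_compress(dmx, 512, tmp, 512); a return value of 0 then directly means "send the raw-data command instead".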
If all universes currently patched are compressed as a whole, the gains might be even larger. So there might be a command in the USB protocol like "compressed data for multiple universes". Which universes end up in that command's payload would be up to the host, which has to figure out which ports are currently patched and due for an update, and which combination of universes gives the best compression gain. Of course there is a trade-off between finding the "perfect combination" and the CPU time it takes to compress all possible combinations.
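To make that concrete, here is one possible shape for building such a command's payload on the host, reusing hs_compress() from the sketch above. The payload layout (count byte, universe IDs, compressed blob), the MAX_UNIVERSES limit and build_multi_payload() are all invented for illustration:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

enum { MAX_UNIVERSES = 16, DMX_FRAME_SIZE = 512 };  /* limit is illustrative */

size_t hs_compress(const uint8_t *in, size_t in_len,
                   uint8_t *out, size_t out_cap);   /* from the sketch above */

/* Build the payload of a hypothetical "compressed data for multiple
 * universes" command: a count byte, the universe IDs (so the dongle knows
 * which frame belongs to which port), then the heatshrink-compressed
 * concatenation of the selected 512-byte frames. Returns the payload size,
 * or 0 if the compressed block would not beat sending the frames raw. */
size_t build_multi_payload(const uint8_t ids[], const uint8_t *frames[],
                           size_t count, uint8_t *out, size_t out_cap) {
  if (count == 0 || count > MAX_UNIVERSES) return 0;

  /* Concatenate the frames in the order given by ids[]. */
  uint8_t scratch[MAX_UNIVERSES * DMX_FRAME_SIZE];
  for (size_t i = 0; i < count; i++)
    memcpy(&scratch[i * DMX_FRAME_SIZE], frames[i], DMX_FRAME_SIZE);

  size_t header = 1 + count;                /* count byte + universe IDs */
  size_t raw = count * DMX_FRAME_SIZE;
  if (out_cap < header + raw) return 0;

  out[0] = (uint8_t)count;
  memcpy(&out[1], ids, count);

  /* Cap the output at the raw size, so 0 means "send raw instead". */
  size_t csize = hs_compress(scratch, raw, &out[header], raw);
  return csize ? header + csize : 0;
}
```

One caveat: with WINDOW_BITS = 9 the encoder only looks 512 bytes back, so redundancy is only exploited between neighbouring frames in the concatenation; a larger window reaches further, but its buffer costs 2^WINDOW_BITS bytes of RAM in the decoder on the dongle.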