Is there an existing issue for this?
Contact Details
No response
What should this feature add?
PNG encoding can contribute substantially to graph execution time, especially when the graph passes large images around. When a graph operates on a large image, each time it is saved, we have to wait for the image to be encoded before writing to disk.
Currently, we use PIL to encode PNGs. Invoke's default compression level is 1 - the lowest amount of compression. This is substantially faster than PIL's default compression level of 6, but large images can still take many seconds to encode. Setting the compression level to 0 disables compression, resulting in much faster encodes but much larger files.
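For reference, Pillow's PNG plugin exposes the zlib level via the `compress_level` keyword, so the size/speed trade-off described above is easy to observe directly (a solid-color image exaggerates the size gap, but makes the point):

```python
from io import BytesIO

from PIL import Image

img = Image.new("RGB", (512, 512), (120, 40, 200))

def encode_png(image: Image.Image, level: int) -> bytes:
    """Encode to PNG in memory at the given zlib compression level (0-9)."""
    buf = BytesIO()
    image.save(buf, format="PNG", compress_level=level)
    return buf.getvalue()

fast = encode_png(img, 0)      # level 0: no compression, fastest encode, largest output
default = encode_png(img, 6)   # level 6: PIL's default, slowest of the two, smallest output
```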
Here are a few ideas to improve the situation:
Use a faster PNG encoder. For example, there are Python bindings for fpnge and fpng, two very fast PNG encoders. These packages aren't published to PyPI, but maybe we can install them from GitHub.
cv2 is marginally faster than PIL. It's not clear whether the gains would offset the additional time needed to constantly convert images from RGB (PIL) to BGR (cv2).
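The conversion cost in question is just a channel reorder; a minimal sketch (the cv2 call is guarded so the snippet runs without OpenCV installed):

```python
import os
import tempfile

import numpy as np

def rgb_to_bgr(rgb: np.ndarray) -> np.ndarray:
    # Reverse the channel axis; cv2 expects BGR ordering.
    # ascontiguousarray avoids handing cv2 a negative-stride view.
    return np.ascontiguousarray(rgb[:, :, ::-1])

rgb = np.zeros((4, 4, 3), dtype=np.uint8)
rgb[..., 0] = 255  # pure red in RGB
bgr = rgb_to_bgr(rgb)  # in BGR, red lives in the last channel

try:
    import cv2
    # cv2.imwrite takes the PNG compression level (0-9) per write.
    fd, path = tempfile.mkstemp(suffix=".png")
    os.close(fd)
    cv2.imwrite(path, bgr, [cv2.IMWRITE_PNG_COMPRESSION, 1])
    os.remove(path)
except ImportError:
    pass  # cv2 not installed; the conversion above is still valid
```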
Reduce the number of times we encode PNGs by flagging certain node image outputs as needing to stick around only while the graph executes. We could skip saving it to disk and instead cache it in memory. This would require some internal changes and the UX of workflows may be impacted, as we expect node outputs to be visible in the UI. I think we'd also need to think carefully about the invocation cache.
Perhaps we would flag certain nodes as saving their outputs, and only those are written to disk? We are currently kinda dancing around this with the intermediate image pattern.
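Very roughly, the in-memory side of this idea could look like the sketch below. All names here are hypothetical (`EphemeralImageCache` is illustrative, not an existing Invoke API), and it ignores the invocation-cache and UI-visibility questions raised above:

```python
from PIL import Image

class EphemeralImageCache:
    """Hypothetical in-memory store for node outputs that only need
    to live for the duration of one graph execution. Skips PNG
    encoding and disk I/O entirely for intermediate images."""

    def __init__(self) -> None:
        self._images: dict[str, Image.Image] = {}

    def put(self, name: str, image: Image.Image) -> None:
        self._images[name] = image  # no encode, no disk write

    def get(self, name: str) -> Image.Image:
        return self._images[name]

    def clear(self) -> None:
        # Called once the graph finishes executing.
        self._images.clear()

cache = EphemeralImageCache()
cache.put("node_a.output", Image.new("RGB", (64, 64)))
```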
In a similar vein to the in-memory caching idea, we could encode the ephemeral images with a compression level of 0. These images would be erased when no longer needed (after graph execution?). This way we'd still have physical images, but no compression.
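A rough way to sanity-check that trade-off is to time Pillow's encode at level 0 vs level 1 in memory; actual numbers will vary a lot with image content and size:

```python
import time
from io import BytesIO

from PIL import Image

img = Image.new("RGB", (2048, 2048), (30, 60, 90))

def encode(level: int) -> tuple[float, int]:
    """Return (encode seconds, encoded bytes) for one PNG save."""
    buf = BytesIO()
    t0 = time.perf_counter()
    img.save(buf, format="PNG", compress_level=level)
    return time.perf_counter() - t0, len(buf.getvalue())

time0, size0 = encode(0)  # store-only: fastest encode, largest file
time1, size1 = encode(1)  # Invoke's current default level
```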
Alternatives
No response
Additional Content
Ref: #6594