Add DEFLATE Inflator #8
base: master
Conversation
```solidity
@@ -152,6 +153,61 @@ contract BundleBulkerTest is Test {
            )
        );
    }

    function test_DeflateInflator() public {
```
recommend pulling this test out into DeflateInflator.t.sol
(i'll delete DaimoTransferInflator in a future PR; it's already superseded by DaimoOpInflator)
```javascript
const compressed = pako.deflateRaw(new Uint8Array(Buffer.from(process.argv[2], 'hex')), { level: 9 });

console.log(Buffer.from(compressed).toString('hex'));
```
🚀

cc @nalinbhardwaj this is a good example of why we might not want to require IInflator or IOpInflator to come with a compress() function in solidity/evm. in this case, there's an existing library that decompresses a DEFLATE stream, but none that compresses, so it's easier to compress in js.
```solidity
string[] memory inputs = new string[](3);
inputs[0] = "node";
inputs[1] = "test/deflate.js";
inputs[2] = '00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000020000000000000000000000000‌8bffa71a959af0b15c6eaa10d244d80bf23cb6a20000000000000000501c58693b65f1374631a2fca7bb7dc600000000000000000000000000000000000000000000000000000000000000000000000000000160000000000000000000000000000000000000000000000000000000000000018000000000000000000000000000000000000000000000000000000000000493e000000000000000000000000000000000000000000000000000000000000aae6000000000000000000000000000000000000000000000000000000000007b44a300000000000000000000000000000000000000000000000000000000000f427200000000000000000000000000000000000000000000000000000000000f4240000000000000000000000000000000000000000000000000000000000000030000000000000000000000000000000000000000000000000000000000000003400000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000014434fcd5be000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000020000000000000000000000000833589fcd6edb6e08f4c7c32d4f71b54bda02913000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000600000000000000000000000000000000000000000000000000000000000000044a9059cbb000000000000000000000000a1b349c566c44769888948adc061abcdb54497f700000000000000000000000000000000000000000000000000000000000f42400000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001499d720cd5a04c16dc5377638e3f6d609c895714f00000000000000000000000000000000000000000000000000000000000000000000000000000000000001e80100006553c75f00000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000c0000000000000000000000000000000000000000000000000000000000000012000000000000000000000000000000000000000000000000000000000000000170000000000000000000000000000000000000000000000000000000000000001ce1a2a89ec9d3cecd1e9fd65808d85702d7f8681d42ce8f0982363a362b87bd5498c72f497f9d27ae895c6d2c10a73e85b73d258371d2322c80ca5bfad242f5f000000000000000000000000000000000000000000000000000000000000002500000000000000000000000000000000000000000000000000000000000000000500000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000006f7b2274797065223a22776562617574686e2e676574222c226368616c6c656e6765223a22415141415a5650485830567a705463726d35665a6846505f566369545433584d57484832624e7a6a6435346531774e354d32696f222c226f726967696e223a226461696d6f2e636f6d227d0000000000000000000000000000000000000000000000000000000000000000000000000000000000';
```
why not just `inputs[2] = raw;` ?

overall, i don't know about adding FFI. i'd prefer to just say `bytes memory compressed = '<constant hex>';` ...computed by calling test/deflate separately
LGTM after comments
Adds a generic DEFLATE compression inflator that can be used by any bundler and any UserOps without any customization.
It compresses the calldata by 69-73 percent (tested on a few single-userop bundles found in the wild), and accounting for calldata pricing, it saves 32-38 percent of the bundles' calldata cost.
While custom UserOp inflators will always yield superior savings, the value of also having a generic bundle compressor is that everyone can use it today, without much effort, to reduce userop fees on op-stack L2s (e.g. Optimism, Base).
I'd also like to raise a concern about a misalignment of interests this technique creates.
Both Optimism and Arbitrum compress calldata when rolling up L2 transactions into an L1 transaction, but they differ in their L2 tx gas calculations:
On Arbitrum, the data fee formula uses the compressed calldata size, so you get no savings from pre-compressing: compressing already-compressed data yields data of roughly the same size.
On Optimism, by contrast, you simply multiply your raw calldata costs by 0.684, so on op-stack chains you can enjoy a "double compression discount" that the sequencer in effect subsidizes.
I might be totally missing something, or relying on outdated documentation, so please correct me if I'm mistaken.