[Epic] Solver Slippage V2 #177
Some very helpful Proof of Concept Code can be found
I think one open design decision is what data we should store in our database (future data warehouse). I believe @olgafetisova was in favor of ingesting the raw transaction receipts/simulation results and then running the slippage computation in a separate script against our database.

The latter has the advantage, IMO, that it would make the script easily reproducible by third parties (all they need is Tenderly access, no need to access our DB). On the other hand, having one script ingest the simulation results and another script compute the slippage based off that splits responsibilities and may allow us to more easily replace the Tenderly dependency (e.g. with our own node simulation) later on. Also, if there are other use cases that would use those simulation results, we wouldn't have to run the simulation multiple times.
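The split being discussed could be sketched as two narrow components: one that only ingests simulation results, and one that only reads them back. Everything below is hypothetical, for illustration; none of these names are the project's actual API.

```typescript
// Hypothetical sketch of the two-step split: an ingestion step that persists
// raw simulation results, and a computation step that reads only from the
// store. All names and shapes here are assumptions, not the real codebase.

interface SimulationRecord {
  txHash: string;
  simulationId: string;
  logs: { address: string; data: string }[];
}

interface SimulationStore {
  insert(record: SimulationRecord): void;
  fetch(txHash: string): SimulationRecord | undefined;
}

// Step 1: ingestion persists exactly what the simulator returned.
function ingestSimulation(store: SimulationStore, record: SimulationRecord): void {
  store.insert(record);
}

// Step 2: computation reads from the store, never from the simulator, so the
// Tenderly dependency could later be swapped for a node-based simulation
// without touching this step.
function countStoredLogs(store: SimulationStore, txHash: string): number {
  const record = store.fetch(txHash);
  return record ? record.logs.length : 0;
}

// Minimal in-memory store for illustration.
const memory = new Map<string, SimulationRecord>();
const store: SimulationStore = {
  insert: (r) => memory.set(r.txHash, r),
  fetch: (h) => memory.get(h),
};
```

With such a split, other use cases could consume the stored simulation results without re-running the simulations.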
Thanks Felix, these are all very good points to consider, and the avenue I have been going down is definitely the second. I want to point out that you keep mentioning a "script to compute slippage" as if that is what we are doing here; however, this project is aimed at computing "internalized token imbalance". If that is not your understanding, then we should align on this (maybe you want to read the introduction section of the specification). We can easily change gears if necessary, but the current goal is to replace the approximation being made in the existing slippage query. Long term, we would transition this into a fully internal slippage computation.
I am very much in favour of saving simulations and storing their IDs on our side, but each simulation is a VERY LARGE JSON blob that I don't think we should be keeping on our side (at least not without reducing it by pruning unnecessary parts).

The project has been written in a way (with an interface …)
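One way to make storage feasible, as mentioned above, is to prune the blob before persisting it. A sketch follows; the field names are assumptions for illustration, not Tenderly's actual response schema.

```typescript
// Illustrative pruning of a large simulation blob down to the fields needed
// for imbalance accounting. Field names are assumed, not Tenderly's schema.

interface RawSimulation {
  id: string;
  logs: { address: string; data: string }[];
  // The raw blob also carries call traces, state diffs, etc.
  [extra: string]: unknown;
}

interface PrunedSimulation {
  id: string;
  logs: { address: string; data: string }[];
}

// Keep only the simulation ID (for later lookup on Tenderly) and the event
// logs; drop everything else before writing to the database.
function pruneSimulation(raw: RawSimulation): PrunedSimulation {
  return { id: raw.id, logs: raw.logs };
}
```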
I was judging from the title of this epic ("Solver Slippage v2") that the final deliverable of this epic was a script to compute more accurate solver slippage. Along the way, the goal is to pioneer and build the foundation for our data warehouse infra.
Foreseeable Issues & Complications
Workflow Failure Points
Yes, this is correct, but it does not mention computing it from our own database; rather, an adaptation of our existing query.

In fact, this is already happening. It's just that we are not going to start pumping a bunch of bloat into a database right from the start. It will be a very small and compact representation of what we need to "compute accurate slippage".
Part of #177. A preliminary step before #231. In order to compute token imbalances we will need to aggregate collections of transfers by token around a "focalAddress" (the settlement contract). This is just a utility accumulation method, a slight type change, and the addition of a relevant field that was missing from TransferEvent (namely token).
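The accumulation described above might look like the following sketch. Types and names are assumed for illustration, and amounts are plain numbers here for brevity where real code would use bigint.

```typescript
// Net each token's transfers relative to a focal address (the settlement
// contract): incoming transfers count positive, outgoing negative.

interface TransferEvent {
  token: string; // the field noted above as previously missing
  from: string;
  to: string;
  amount: number; // real code would use bigint for token amounts
}

function imbalanceByToken(
  transfers: TransferEvent[],
  focalAddress: string
): Map<string, number> {
  const totals = new Map<string, number>();
  for (const t of transfers) {
    let delta = 0;
    if (t.to === focalAddress) delta += t.amount;
    if (t.from === focalAddress) delta -= t.amount;
    if (delta !== 0) {
      totals.set(t.token, (totals.get(t.token) ?? 0) + delta);
    }
  }
  return totals;
}
```

Transfers not touching the focal address (and self-transfers, whose delta is zero) are skipped entirely, keeping the result map sparse.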
As agreed earlier with Ben, he is going to give us a presentation on the choices that have already been made and the actual state of the project; additionally, he is going to host a discussion session on the open questions so that all the reviewers of the project are aligned on the next steps.
1. Implement the method `internalizedTokenImbalance`, which takes a transaction, database, and transaction simulator to perform the full pipeline process outlined in this [spec doc](https://docs.google.com/document/d/15mByvoI7EQ-wFAM9QkDCMnjA4AVjbG0_G0GdirziGBM/edit#) and illustrated in the screenshot in the PR description.
2. Write tests.
3. Update the index.ts file (i.e. the Tenderly web-hook) to make use of this function.

This nearly concludes Phase 1 of #177 (excluding a few follow-ups).

Co-authored-by: Gent Rexha <[email protected]>
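Structurally, the pipeline step could look like the sketch below. The three-argument shape (transaction, database, transaction simulator) comes from the list above, but every interface here and the Full - Actual arithmetic are assumptions for illustration only.

```typescript
// Skeleton of the pipeline: simulate the un-internalized ("full") settlement,
// compare against the on-chain ("actual") imbalance, store the difference.
// All interfaces are hypothetical; amounts are numbers for brevity.

interface Tx { hash: string; }
interface Database {
  insertImbalances(txHash: string, imbalances: Map<string, number>): void;
}
interface TransactionSimulator {
  // Returns per-token imbalances of the full (un-internalized) settlement.
  fullImbalance(tx: Tx): Map<string, number>;
}

function internalizedTokenImbalance(
  tx: Tx,
  db: Database,
  simulator: TransactionSimulator,
  actual: Map<string, number> // on-chain imbalance, computed elsewhere
): void {
  const full = simulator.fullImbalance(tx);
  const internalized = new Map<string, number>();
  for (const token of new Set([...full.keys(), ...actual.keys()])) {
    const delta = (full.get(token) ?? 0) - (actual.get(token) ?? 0);
    if (delta !== 0) internalized.set(token, delta);
  }
  db.insertImbalances(tx.hash, internalized);
}
```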
## Explanation of Bug

A previous deployment encountered a [primary-key error](https://jsonblob.com/1106009775560474624) on insertion of settlements with primary key `(blockNumber, logIndex)`. This was entirely due to the fact that we had been using the transaction-wise log index as opposed to the block-wise log index.

### Example

1. Passing execution [c125a325-69c4-4acc-8b9d-dc0609eb0143](https://dashboard.tenderly.co/gp-v2/solver-slippage/action/4f657ddd-c054-4704-9e1e-50385ad2fc2c/execution/c125a325-69c4-4acc-8b9d-dc0609eb0143) for tx [0xe08fd9651626cc0827a83721e9b6ef99a8be752d2a88490218f75ba84082a887](https://etherscan.io/tx/0xe08fd9651626cc0827a83721e9b6ef99a8be752d2a88490218f75ba84082a887), and
2. Failing execution [8d30189b-7282-4f63-990a-7d6849c49e62](https://dashboard.tenderly.co/gp-v2/solver-slippage/action/4f657ddd-c054-4704-9e1e-50385ad2fc2c/execution/8d30189b-7282-4f63-990a-7d6849c49e62) for tx [0xec894e53627f4b0a05be069a6c0c0cca08efe67f0b12deb85aeaa146f77eb049](https://etherscan.io/tx/0xec894e53627f4b0a05be069a6c0c0cca08efe67f0b12deb85aeaa146f77eb049).

Both settlements occurred in the same block and had exactly the same (transaction-wise) settlement event index of 12, so they clashed here. Meanwhile, the block-wise log indices of these events were 104 and 227 respectively.

Unfortunately, Tenderly does not provide the correct log index in their [web-action payloads](http://jsonblob.com/1103717033526444032) nor in their simulation results, so we are forced to fetch [transaction receipts](http://jsonblob.com/1103716885064859648) from the EVM (basically ignoring the entire payload data except for txHash).

## Solution

Here we introduce a field `index` on our model `EventLog` and populate it with the values contained in `TransactionReceipt`. Imposing this in our workflow revealed several places where the field was required but not yet available.
In particular, we are forced to "fudge" the values on the Tenderly simulation results because they are not provided, and they are only hypothetical transactions anyway. This does not affect us in any way because we do not write settlement event data for simulated transactions; furthermore, since we always simulate at slot 0, these fudged values are actually the correct log indices.

## Test Plan

All relevant unit tests have been updated, along with a couple more covering the example provided above. This is nearly the final PR for Phase 1 of #177.
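The fix can be sketched as follows; model and field names are assumptions mirroring the description above, not the repository's actual code.

```typescript
// Populate EventLog.index with the block-wise log index from the receipt, and
// "fudge" sequential indices for simulated transactions. The fudge is safe
// because simulations always run at slot 0 of the block, where transaction-wise
// and block-wise indices coincide.

interface ReceiptLog { logIndex: number; data: string; }
interface EventLog { index: number; data: string; }

// Real transactions: the receipt's logIndex is already block-wise.
function eventLogsFromReceipt(logs: ReceiptLog[]): EventLog[] {
  return logs.map((log) => ({ index: log.logIndex, data: log.data }));
}

// Simulated transactions: no indices are provided, so number logs 0, 1, 2, ...
function eventLogsFromSimulation(logData: string[]): EventLog[] {
  return logData.map((data, i) => ({ index: i, data }));
}
```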
Part of #177. Resolving a TODO to mark data as processed when a simulation is inserted. This happens in both cases of the pipeline (whether or not there is a simulation to go with it).

## Test Plan

Every component is tested individually. We could also add a test for `insertSettlementAndMarkProcessed`, although it just runs both tested methods together.
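For reference, the composed helper mentioned above presumably just sequences the two individually tested operations; a sketch with assumed interfaces:

```typescript
// Hypothetical sketch: insertSettlementAndMarkProcessed composes two
// individually tested database operations. Interfaces are assumptions.

interface Settlement { txHash: string; }
interface Db {
  insertSettlement(s: Settlement): void;
  markProcessed(txHash: string): void;
}

function insertSettlementAndMarkProcessed(db: Db, s: Settlement): void {
  db.insertSettlement(s);
  db.markProcessed(s.txHash); // mark the source data as processed in one place
}
```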
Based on Specification Documentation
We lay out a top-down plan for this repo. Development will take place in a "Feature Branch", and this is an open PR containing an aggregation of all changes.
Phase 1: Acquiring Internalized Transfers
Note that a Proof of Concept has already been implemented in Python here and can be used for reference when reviewing (cc @gentrexha).
- Full - Actual (🔑 Get Internalized Imbalance #231)
- InternalizedImbalances into DB (Create Table internalized_imbalances #242)

Phase 2: Dune Sync

- … txHashes …

Phase 3: Refactor Slippage Query
Additional TODOs & Cleanup
Follow Up Tasks