Hi @thaaddeus, I have just run into a strange problem that has occurred only in this one case, when processing data for SOL/USDC on 20220620; the time-ordered sequence of events is given below:
2022-06-20T19:06:45.209Z: open message for order 631339815922709507324380 180.0 @ 34.225, slot 138327374
2022-06-20T19:06:52.228Z: fill message for order 631339815922709507324380, traded for 83.7 against order 340282366920938463463374607431664944666, slot 138327386
2022-06-20T19:06:59.126Z: An orderbook snapshot is written to recover from some issue in the data; however, the slot associated with this update is 138327272. This implies to me that it should be processed before handling the above open and fill, since it relates to an earlier block. In previous experiments I found it necessary to ignore the timestamp on snapshots and to order messages by slot for processing; however...
2022-06-20T19:07:01.134Z: Another open message for order 631339815922709507324380, now for 96.3 @ 34.225 (the remaining size after the 83.7 fill), slot 138327402
2022-06-20T19:07:01.134Z: A duplicate of the fill message for order 631339815922709507324380, traded for 83.7 against order 340282366920938463463374607431664944666, slot 138327402
2022-06-20T19:07:01.134Z: done message for order 631339815922709507324380 (reason = filled), as expected given the second set of messages
I then need to confirm the correct way to order the messages when processing historically. Currently I order by slot and then by the order in which the messages were received, which has worked well on all other dates, but the above suggests that, at least for snapshots, the slot may not always be sufficient for sorting, and the timestamp may also need to be considered... any advice would be appreciated.
This recording was made using v1.6.1. I note that 1.7.0 is now available and will update, but since this has only occurred once since I started recording, I won't be able to say whether the update resolves the issue - if it is expected to, please let me know.
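For what it's worth, the ordering scheme described above (slot first, receipt order as a tie-break) can be sketched as below, together with one possible workaround: instead of letting a late-arriving snapshot sort backwards by slot, skip any snapshot whose slot is older than the newest slot already applied. This is only a hedged illustration, not the library's behaviour - the `Message` class, `drop_stale_snapshots`, and the sample data are hypothetical names built from the events reported above.

```python
from dataclasses import dataclass

@dataclass
class Message:
    seq: int    # order in which the message was received
    slot: int
    kind: str   # e.g. "open", "fill", "done", "l3snapshot"

def order_by_slot(messages):
    # Current scheme: stable sort by slot, ties broken by receipt order.
    # A stale snapshot (older slot, received later) sorts *before* events
    # it should arguably not rewind.
    return sorted(messages, key=lambda m: (m.slot, m.seq))

def drop_stale_snapshots(messages):
    # Hypothetical workaround: walk in receipt order and skip any snapshot
    # whose slot is older than the newest slot already seen, so it cannot
    # rewind state past already-applied open/fill events.
    kept, newest_slot = [], -1
    for m in sorted(messages, key=lambda m: m.seq):
        if m.kind == "l3snapshot" and m.slot < newest_slot:
            continue  # stale snapshot: ignore rather than re-sort
        newest_slot = max(newest_slot, m.slot)
        kept.append(m)
    return kept

# Messages mirroring the sequence reported above.
msgs = [
    Message(0, 138327374, "open"),
    Message(1, 138327386, "fill"),
    Message(2, 138327272, "l3snapshot"),  # older slot, later arrival
    Message(3, 138327402, "open"),
    Message(4, 138327402, "fill"),
    Message(5, 138327402, "done"),
]

print([m.kind for m in order_by_slot(msgs)])
# → ['l3snapshot', 'open', 'fill', 'open', 'fill', 'done']
print([m.kind for m in drop_stale_snapshots(msgs)])
# → ['open', 'fill', 'open', 'fill', 'done']
```

Whether dropping the stale snapshot is actually safe depends on why it was written; if it was emitted to repair a data gap, discarding it might lose the correction, so I'd welcome guidance on which interpretation is intended.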