A community member recently reached out to me for help with a wallet that failed to stake/send/sweep its balance. After some diagnostics we got down to this wallet error:
```
Error: transaction <xxx> was rejected by daemon
Error: Reason: Sanity check failed
```
and on the receiving daemon:
```
E amount of unique indices is too low (amount of rct indices is X, out of total Y indices)
```
where X and Y are fairly large numbers (between 1000 and 2000), with X about 79% of Y. This in turn comes down to the condition in oxen-core/src/cryptonote_core/tx_sanity_check.cpp (line 87 at b309bf8), triggered here because the wallet was just under the 80% threshold.
@tewinget and I had a bit of discussion around this, and it seems likely to be caused by a combination of batching and the way the wallet chooses decoys: it selects some recent block heights, then goes searching for outputs around those blocks. Because we now have some blocks with 0 outputs, the decoy selection often ends up landing on the same block: for example, if I look randomly for an output in block heights 1000-1020 but there are only 3 batch payments in that range, there will be far more duplication among the outputs we select, and thus (in this user's case) the 79% that tripped the check.
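The duplication effect described above is easy to demonstrate. Below is an illustrative sketch (not the actual wallet code, and the "walk forward to the next non-empty block" fallback is an assumption about roughly how such a picker behaves): when only a few blocks in the sampled range actually contain outputs, repeated picks collapse onto the same small output set and the unique fraction craters.

```python
import random

random.seed(1)  # deterministic for illustration

def pick_decoys(outputs_by_block, n_decoys):
    """Sketch of height-first decoy selection: pick a random block height,
    then take an output from that block, skipping forward past empty blocks."""
    heights = list(outputs_by_block)
    picked = []
    for _ in range(n_decoys):
        h = random.choice(heights)
        # empty block (no outputs, e.g. due to batching): walk to the next
        # block that has any (assumes contiguous heights 0..n-1 here)
        while not outputs_by_block[h]:
            h = (h + 1) % len(heights)
        picked.append(random.choice(outputs_by_block[h]))
    return picked

# 21 blocks, only 3 of which contain outputs (like 3 batch payments
# in the heights-1000-1020 example above), 2 outputs each
blocks = {h: [] for h in range(21)}
for h in (3, 10, 17):
    blocks[h] = [f"out-{h}-{i}" for i in range(2)]

decoys = pick_decoys(blocks, 100)
unique_fraction = len(set(decoys)) / len(decoys)
# with only 6 distinct outputs available, uniqueness is far below 80%
```

With real parameters the effect is less extreme than this toy setup, but the mechanism is the same: fewer output-bearing blocks in the sampled range means more repeated indices.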
There are a few ways we could fix this: selecting from a distribution over output indices rather than block heights, selecting from a larger range of potential decoy blocks, or changing the selection distribution to something more appropriate. We might also consider lowering the "sanity check" threshold itself (which appears to be a completely arbitrary value: it started at 90%, then was changed to 80% because 90% sometimes triggered).
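For reference, the check being discussed amounts to a uniqueness-ratio test on the transaction's ring-member indices. This is a hedged Python sketch of that condition as described in this issue, not the actual oxen-core implementation:

```python
THRESHOLD = 0.80  # per the issue: originally 90%, later lowered to 80%

def passes_uniqueness_check(rct_indices, threshold=THRESHOLD):
    """Return True if the fraction of unique decoy indices meets the threshold."""
    if not rct_indices:
        return True
    return len(set(rct_indices)) / len(rct_indices) >= threshold

# 79 unique indices out of 100 total -- just under the 80% threshold,
# as in this user's case, so the transaction is rejected:
indices = list(range(79)) + [0] * 21
assert not passes_uniqueness_check(indices)
```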
Unfortunately, all of these require a hard fork because existing nodes would reject anything that fails the sanity check here. (See comment below.) We ought to be able to add an interim workaround for this with an oxen-node update that reduces the sanity check threshold.
--
Avoiding the issue
If someone else runs into this same issue, the workaround is to clear the wallet by sending it back to yourself in smaller chunks, as follows:
1. Make a note of your wallet balance. Let's say it's 6543.21.
2. Try a `sweep_all` first. (If you're hitting this issue, it will likely fail.)
3. Try sending most, but not all, of the balance to yourself: start at, perhaps, 6500.
4. If that fails, try a lower amount (say 6400), and keep lowering until it succeeds.
5. Once it succeeds, try a `sweep_all` again right away (i.e. no need to wait for the first transfer to unlock).
6. If that fails, check your current unlocked balance and repeat the whole process, transferring most of what you have, and so on. (This step probably won't be necessary, but might be if you have an exceptionally large wallet.)
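The step-down part of the procedure above can be sketched as a simple loop. `try_transfer` here is a hypothetical stand-in for issuing the wallet's transfer-to-self command and observing success or failure; it is not a real wallet API:

```python
def find_sendable_amount(balance, try_transfer, step=100):
    """Step the transfer amount down from just under the full balance
    until a transfer succeeds; returns the amount sent (0 if none worked)."""
    amount = balance - step  # start just under the full balance
    while amount > 0:
        if try_transfer(amount):
            return amount    # then follow up with a sweep_all
        amount -= step       # e.g. 6500 -> 6400 -> ...
    return 0

# toy stand-in: pretend transfers only succeed at or below some limit
sent = find_sendable_amount(6543, lambda amt: amt <= 6400)
```

In practice you would do this by hand at the wallet CLI rather than scripting it, since each attempt is an interactive transfer.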
This is most likely to affect wallets that were staked and haven't been swept since long before the Oxen 10 hard fork: they tend to build up enough outputs to trigger the issue. (It's less likely to reoccur once fixed, or on a wallet swept since Oxen 10 started, since the number of outputs accumulated will be much lower with batching).
> Unfortunately, all of these require a hard fork because existing nodes would reject anything that fails the sanity check here.
This sanity check is not consensus code; it is called by the wallet in `get_outs` and by the daemon in the RPC call `SEND_RAW_TX` if the request has the argument `do_sanity_checks` set to true.
The checks here break wallets sometimes (see issue oxen-io#1639). They aren't really "sanity checks" anyway because there are lots of legitimate ways a wallet could end up producing such a transaction, especially with Oxen's relatively low output creation rate.