  KIP: 3
  Layer: Consensus (hard fork), DAA
  Title: Block sampling for efficient DAA with high BPS
  Author: Shai Wyborski <[email protected]>
          Michael Sutton <[email protected]>
  Status: rejected (see below)

Motivation

The difficulty adjustment algorithm requires maintaining, for each block, a list of the N blocks in its past with the highest accumulated weight. When validating a block, this list needs to be traversed to compute the various quantities required for computing the difficulty target. Since the length of the difficulty window corresponds to real-time considerations, the size of the desired difficulty window is measured in seconds, not blocks. This means that if we increase the block rate by a factor of R, we increase the complexity of the difficulty adjustment by a factor of R^2 (one factor because validating each block requires R times more steps, and another because blocks emerge R times faster).

This is partially mitigated by the incremental nature of the difficulty window. Like the blue set, each block inherits its selected parent's difficulty window and then adjusts it according to the blocks in its anticone. This converts some of the CPU overhead into RAM overhead, which would also increase by a factor of R.

To reduce this overhead, the current Golang implementation uses window sizes that are smaller than optimal (263 blocks for timestamp flexibility, 2641 blocks for the DAA window). As a result, miners whose clocks drift more than two minutes into the future relative to the majority of the network have their blocks delayed, decreasing the probability that they will be blue. A secondary motivation for this proposal is to increase the timestamp flexibility (which requires increasing the DAA window accordingly) efficiently.

Block Sampling

We propose, instead of using all of the blocks in the past, to sample a subset of the blocks to be used for difficulty adjustment. The sampling method should be:

  • Deterministic: it can be calculated from the DAG and comes out the same for all nodes
  • Uniformly distributed: all blocks have a similar chance of being sampled
  • Incremental: it should not depend on the point of view of a particular block, so it can be inherited by future blocks
  • Secure: a miner should not be able to cheaply affect the probability of their block being sampled; the cost should be comparable to discarding a valid block

We propose the following sampling method: to sample one in every C blocks, select blocks for which the blake2b hash of the kHH hash of their header has log2(C) trailing zeros. The reason we don't use the kHH of the header directly is for a cleaner design, and to prevent the (unlikely, but possible) scenario that, as the difficulty increases, we run out of bits. We do not apply blake2b directly to the header to prevent an adversarial miner from filtering blake2b nonces before computing kHH nonces (an attack with marginal cost, given that blake2b is currently much faster than kHH).
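
As a rough illustration, here is a minimal sketch of the trailing-zeros test in Rust, assuming the kHH hash of the header is already available as a 32-byte array and using the `blake2` crate; the function name and the choice of which digest bytes to inspect are illustrative, not taken from any existing implementation.

```rust
use blake2::{Blake2b512, Digest};

/// Sketch of the proposed rule: a block is sampled iff the blake2b hash of
/// its (already computed) kHH header hash ends with log2(C) zero bits.
/// Assumes `c` is a power of two; the low 64 bits of the digest are read
/// as little-endian, an arbitrary but deterministic convention.
fn is_sampled(khh_header_hash: &[u8; 32], c: u64) -> bool {
    debug_assert!(c.is_power_of_two());
    let digest = Blake2b512::digest(khh_header_hash);
    let word = u64::from_le_bytes(digest[..8].try_into().unwrap());
    // Sampled iff the log2(c) lowest-order bits are all zero,
    // which happens with probability 1/c for a uniform hash.
    (word & (c - 1)) == 0
}
```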

Since the block sampling relies on the kHH of the header, it makes the most sense to compute it while verifying the difficulty of the block and to store two booleans, one for the DAA window and the other for timestamp flexibility.
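
For concreteness, the cached flags might look something like the following sketch (reusing `is_sampled` from the sketch above); the struct, field names, and placeholder sample rates are hypothetical and would depend on the chosen block rate.

```rust
/// Hypothetical per-block flags, computed once while the block's difficulty
/// (i.e. its kHH header hash) is verified, so the blake2b test never needs
/// to be re-evaluated when the block is later included in a window.
struct SamplingFlags {
    /// Sampled into the difficulty (DAA) window.
    in_daa_window: bool,
    /// Sampled into the timestamp-flexibility window.
    in_timestamp_window: bool,
}

// Placeholder rates: the timestamp window is sampled more densely than the
// DAA window (see the proposed constants below); actual values depend on BPS.
const C_TIMESTAMP: u64 = 32;
const C_DAA: u64 = 128;

fn compute_sampling_flags(khh_header_hash: &[u8; 32]) -> SamplingFlags {
    SamplingFlags {
        in_daa_window: is_sampled(khh_header_hash, C_DAA),
        in_timestamp_window: is_sampled(khh_header_hash, C_TIMESTAMP),
    }
}
```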

(Note that this method only allows choosing C to be a power of 2; there are methods to refine this. For example, by requiring the first three bits to be 0 and the last two bits to contain at least one 0, we get a probability of (1/8)*(3/4) = 3/32 for each block to be sampled. I see no reason to treat this problem generally; whatever sampling rate we choose, we can tailor an ad-hoc sampling rule that is sufficiently close.)
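
A sketch of such an ad-hoc rule, matching the 3/32 example above; which five bits of the blake2b digest are inspected is an arbitrary illustrative choice.

```rust
/// Ad-hoc non-power-of-two rule from the example above: three bits must be
/// zero (probability 1/8) and at least one of two further bits must be zero
/// (probability 3/4), for an overall sampling rate of 3/32.
fn is_sampled_3_in_32(blake_word: u64) -> bool {
    let three_bits_zero = (blake_word & 0b00111) == 0;
    let two_bits_not_both_one = (blake_word & 0b11000) != 0b11000;
    three_bits_zero && two_bits_not_both_one
}
```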

Update: Georges Künzli proposed the following condition for sampling one in every C blocks: interpret a u32 or u16 (LE or BE) slice of the Blake hash as V, define a threshold T = (((u32::MAX as u64 + 1 + C / 2) / C) - 1) as u32, and sample any block having V <= T. The bigger the part of the hash used for the threshold, the higher the precision. The selection test is then also very cheap.
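
Expressed as code, the condition above might look as follows (a sketch; the function names are not from any existing implementation):

```rust
/// Threshold for sampling one in every `c` blocks, per the condition above:
/// T is chosen so that (T + 1) / 2^32 is as close as possible to 1 / c.
fn sample_threshold(c: u64) -> u32 {
    debug_assert!(c > 0);
    (((u32::MAX as u64 + 1 + c / 2) / c) - 1) as u32
}

/// A block is sampled iff the chosen u32 slice `v` of its Blake hash is
/// at most the precomputed threshold.
fn is_sampled_by_threshold(v: u32, t: u32) -> bool {
    v <= t
}
```

For example, `sample_threshold(300)` yields a sampling rate very close to 1/300, which a power-of-two trailing-zeros rule cannot express.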

That this method is deterministic and incremental is true by design; the uniformity follows from the standard assumption that Blake sufficiently approximates a random oracle. The security follows from the fact that the nonce is included in the header, so (assuming Blake and kHH distribute independently) the expected number of hashes required to find a nonce that makes the block both valid and sampled is C times larger than the number of hashes required for a valid block. It could be argued that a negligible factor of (C-1)/C is required for a miner to adversarially mine a block which is not sampled (thereby harming the sampling process); however, the memorylessness of the Poisson process implies that, conditioned on the fact that the miner found a nonce for a sampled block, they still have the same expected waiting time as if they were working on a new block.
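
To spell out the "C times larger" claim under the independence assumption: if p is the per-nonce probability of meeting the difficulty target, then

```latex
\mathbb{E}[\text{hashes for a valid, sampled block}]
  = \frac{1}{p \cdot \frac{1}{C}}
  = \frac{C}{p}
  = C \cdot \mathbb{E}[\text{hashes for a valid block}].
```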

Regulating Emissions

We currently regulate emissions by not rewarding blocks outside the difficulty window. This approach is unsuitable when the difficulty window is sampled. Instead, we only reward blocks whose accumulated blue score is at least the lowest accumulated blue score witnessed in the difficulty window.
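
A minimal sketch of this rule, with hypothetical types (the real check would operate on whatever representation the node keeps for the sampled window):

```rust
/// Hypothetical view of a block in the (sampled) difficulty window.
struct WindowBlock {
    accumulated_blue_score: u64,
}

/// A merged block is rewarded iff its accumulated blue score is at least the
/// lowest accumulated blue score witnessed in the difficulty window.
fn is_rewarded(merged_blue_score: u64, difficulty_window: &[WindowBlock]) -> bool {
    match difficulty_window.iter().map(|b| b.accumulated_blue_score).min() {
        Some(min_in_window) => merged_blue_score >= min_in_window,
        None => true, // illustrative choice: an empty window rewards everything
    }
}
```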

Proposed Constants

We propose the following:

  • Increase the timestamp flexibility from two minutes to 10 minutes. This requires a window time of 20 minutes. I propose a sample rate of a block per 10 seconds. The overall size of the timestamp flexibility window would then be 121 blocks.
  • Increase the length of the difficulty window to 500 minutes, sampling a block per 30 seconds. The overall size of the difficulty window would be 1000 blocks (see the arithmetic sketch below).
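
The window sizes quoted above follow from the chosen window times and sample rates; a quick check (the extra block in the 121 presumably comes from counting both endpoints of the 20-minute span, which is an assumption on my part):

```rust
/// Number of sampled blocks in a window of `window_secs` seconds when one
/// block is sampled every `sample_every_secs` seconds.
fn sampled_window_size(window_secs: u64, sample_every_secs: u64) -> u64 {
    window_secs / sample_every_secs
}

fn main() {
    // Timestamp-flexibility window: 20 minutes sampled every 10 seconds
    // gives 120 intervals, i.e. 121 blocks counting both endpoints.
    assert_eq!(sampled_window_size(20 * 60, 10) + 1, 121);
    // Difficulty window: 500 minutes sampled every 30 seconds = 1000 blocks.
    assert_eq!(sampled_window_size(500 * 60, 30), 1000);
}
```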

In practice, the length of the windows and the sample rate would probably be slightly adjusted for a simpler implementation. I am not spelling out the adjustments, as they depend on the block rate.

Loose Ends

  • At a glance, it might seem worrisome that the bound on block rewards is probabilistic and has variance, since it might destabilize the emission schedule. However, since this only holds for red blocks that were not previously merged, the effect is marginal. Furthermore, the pruning protocol strongly limits the ability to merge old blocks, and this bound will become steeper as we increase the length (in real-world time) of the difficulty window. While I am positive this effect is negligible, we should measure it on the testnet before deploying to mainnet.
  • The DAA is currently retargeted based on the average difficulty across the difficulty window. This causes the difficulty adjustment to lag during times of rapidly changing global hashrate. It might be better to use a different averaging, e.g. giving more weight to newer blocks (or just taking the average of a small subwindow).
  • The median timestamp selection is much more sensitive to variance than the actual difficulty retargeting, so we should make sure that the chosen constants do not incur problematic variance.

Backwards compatibility

Breaks consensus rules; requires a hard fork.

Rejection

After more consideration, we noticed the following attack: a new large miner joining the network can suppress sampled blocks, thus preventing the network from adjusting to the added hashrate. This could be solved in several ways, the most direct one being to use the sampled blocks only for sampling timestamps, while choosing the difficulty according to the total number of blocks created, not just the sampled blocks. This solution incurs the added complexity of tracking the actual number of produced blocks, and still allows a large miner to somewhat tamper with the timestamp deviation tolerance (thus exacerbating difficulty attacks, though to a limited extent). While discussing this solution we came up with a different approach, detailed in KIP4. Although the KIP4 approach is very similar to the current proposal, we found it sufficiently different to warrant a new KIP rather than updating the current one.