KIP: 9
Layer: Mempool, P2P, Consensus
Title: Extended mass formula for mitigating state bloat
Authors: Michael Sutton <[email protected]>
Ori Newman <[email protected]>
Shai Wyborski <[email protected]>
Yonatan Sompolinsky
Status: proposed; implemented in the Rust codebase and activated on testnet 11
We propose a mechanism for regulating the growth rate of the UTXO set in both organic and adversarial settings. Our proposal is fundamentally different from existing attempts to mitigate state bloat (e.g., statelessness or state rent, both requiring active user involvement), and only entails a simple change to the logic of how the mass of a transaction is computed. In this proposal, we specify the revised formula and its consequences and describe how ecosystem software such as wallets, pools, and exchanges should adapt to this change without affecting the quality of service. We provide an intuitive overview of the properties of the new mass formula and defer the formal treatment to a soon-to-be-published preprint.
A few months ago, the Kaspa network faced a dust attack that exploited the high throughput to create many minuscule UTXOs, forever increasing the storage costs of full nodes. The attack was countered and eventually stopped by deploying a mempool patch [2]. The patch imposed a limitation on the number of transactions with more outputs than inputs that are allowed in each block; see [1] for a detailed account of the attack and its resolution. Since many standard transactions are of this form (most typically, a transaction with a single input, a destination output, and a change output), this resulted in noticeable UX and QoS inconveniences. This KIP aims to address the state bloat challenge more fundamentally, by introducing a transaction cost -- or mass -- function that inherently limits state bloat attacks. While the current state of the system is bearable (the patch increasing the delay of such transactions by a dozen seconds at worst), increased adoption will greatly exacerbate this effect, providing ample motivation to expedite the implementation of a sustainable solution.
The formal specification of the suggested changes is described below, followed by a detailed overview and analysis.
We refer to the quantity currently known as "mass" by the name `compute_mass`, and introduce a new quantity called the `storage_mass`. As the name suggests, the former regulates computational costs while the latter regulates storage costs. We redefine the total mass of a transaction as the maximum over both:

$$mass(tx) = \max\left(compute\_mass(tx),\ storage\_mass(tx)\right)$$
In the context of storage mass, a transaction is modeled as two collections of values: the input values $I$ and the output values $O$. The storage mass is defined as:

$$storage\_mass(tx) = C\cdot \max\left(\sum_{o\in O}\frac{1}{o} - \frac{|I|}{A(I)},\ 0\right),$$

where $A(I)$ denotes the arithmetic mean of the input values and $C$ is a constant controlling the storage cost (its suggested value is discussed below).
A relaxed version of the mass formula treats inputs and outputs symmetrically:

$$storage\_mass^*(tx) = C\cdot \max\left(\sum_{o\in O}\frac{1}{o} - \sum_{i\in I}\frac{1}{i},\ 0\right)$$
We call this the relaxed formula. As shown in [3], `storage_mass*` can be used for transactions satisfying $|O| = 1$ or $|O| \le |I| \le 2$ without weakening the desired properties.
As mandatory in consensus systems, the storage mass calculation must use only integers and cannot rely on floating-point arithmetic. This means that the constant $C$ is folded into each term, i.e., each fraction $1/v$ is computed as the integer division $C/v$, which preserves sufficient precision:
```python
def negative_mass(I, num_outputs):
    """
    Calculates the negative component of the storage mass formula. Note that there is no
    dependency on the output values, only on their count. The code is designed to maximize
    precision and to avoid intermediate overflows. In a fixed-width (u64) implementation,
    all calculations should be saturating, i.e., clamped between u64::MIN and u64::MAX.
    """
    if num_outputs == 1 or num_outputs <= len(I) <= 2:
        # relaxed formula: harmonic sum over the input values
        return sum(C // v for v in I)
    # general formula: |I| divided by the arithmetic mean of the input values
    return len(I) * (C // (sum(I) // len(I)))

def storage_mass(I, O):
    N = negative_mass(I, len(O))
    P = sum(C // o for o in O)
    return max(P - N, 0)

def mass(tx):
    return max(storage_mass(I=tx.inputs, O=tx.outputs), compute_mass(tx))
```
Storage mass (unlike compute mass) cannot be computed from the standalone transaction structure, but requires knowing the exact input values, which are only available with full UTXO context (i.e., it requires a populated transaction).
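To make the formula concrete, the pseudocode above can be exercised on populated values directly. A small sketch (values in dworks, assuming $C = 10^{12}$ and $1$ KAS $= 10^8$ dworks, as used in the examples later in this document):

```python
C = 10**12   # storage mass constant
KAS = 10**8  # dworks per KAS

def negative_mass(I, num_outputs):
    if num_outputs == 1 or num_outputs <= len(I) <= 2:
        return sum(C // v for v in I)          # relaxed (harmonic) credit
    return len(I) * (C // (sum(I) // len(I)))  # arithmetic-mean credit

def storage_mass(I, O):
    return max(sum(C // o for o in O) - negative_mass(I, len(O)), 0)

# An ordinary 1:2 payment with KAS-sized outputs is cheap...
print(storage_mass([10 * KAS], [6 * KAS, 4 * KAS]))          # 3166 grams

# ...a 2:2 transaction with comparable values is cheap as well...
print(storage_mass([5 * KAS, 5 * KAS], [9 * KAS, 1 * KAS]))  # 7111 grams

# ...but shattering the same 10 KAS into 100,000 dust outputs is prohibitive.
print(storage_mass([10 * KAS], [10**4] * 100_000))           # ~10^13 grams
```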
We suggest setting $C = 10^{12}$.
The new mass formula can be implemented as a mempool policy rather than a consensus rule. This has the benefit of not requiring a fork to roll out, but the disadvantage of trusting miners to follow the policy. Due to the urgency of the update, we suggest initially rolling it out as a P2P/mining rule, while also including it in the nearest upcoming hard fork.
**P2P rule** A node receiving a transaction whose total mass, including the storage mass, violates the standard mass limits should reject the transaction and refrain from relaying it to its peers.
**Mining** The new mass should be used by the mempool transaction selection algorithm to calculate the fee-per-mass ratio of each candidate transaction, and to keep the total mass of the selected transactions within the block mass limit.
Applying the proposal as a consensus change requires the following updates:
**Transaction mass field** A new explicit `mass` field should be added to the `Transaction` structure. This field is used as a commitment to the total mass consumed by the transaction (including the storage mass), until it can be verified in full UTXO context. To make the stated mass binding, the mass field must be hashed when computing the transaction hash. However, we suggest that the logic for computing a transaction ID ignore this field, as this avoids unnecessary changes in ecosystem software (clients such as wallets frequently compute the ID locally). Note that wallets composing a transaction can leave the mass field empty and let miners fill it for them during block template building.
**Block validation rule** The mass fields of all transactions in a given block are summed and verified not to exceed the block mass limit. Additionally, for each transaction, `compute_mass` is calculated and verified to be equal to or lower than the mass commitment.
**Transaction validation rule** The full mass is only calculated during contextual transaction validation (that is, validation with the full UTXO context of the containing block, or of a chain block merging it). If the computed mass does not match the committed mass, the transaction is considered invalid and is discarded. Like other transaction validation rules, blocks containing transactions with a wrong mass commitment are disqualified from the selected-chain.
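The hash/ID distinction can be illustrated with a toy serialization (the field layout and the use of plain SHA-256 here are illustrative assumptions, not Kaspa's actual serialization):

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Transaction:
    inputs: list      # serialized input references (toy representation)
    outputs: list     # serialized outputs (toy representation)
    mass: int = 0     # mass commitment; wallets may leave it at 0 for miners to fill

    def _serialize(self, include_mass: bool) -> bytes:
        parts = [b"".join(self.inputs), b"".join(self.outputs)]
        if include_mass:
            parts.append(self.mass.to_bytes(8, "little"))
        return b"|".join(parts)

    def hash(self) -> bytes:
        # The transaction *hash* commits to the mass field, making it binding.
        return hashlib.sha256(self._serialize(include_mass=True)).digest()

    def id(self) -> bytes:
        # The transaction *ID* ignores the mass field, so wallets that compute
        # IDs locally are unaffected when a miner fills the mass in later.
        return hashlib.sha256(self._serialize(include_mass=False)).digest()

tx = Transaction(inputs=[b"outpoint-0"], outputs=[b"pay", b"change"])
id_before, hash_before = tx.id(), tx.hash()
tx.mass = 12_345  # filled during block template building
assert tx.id() == id_before      # ID is stable
assert tx.hash() != hash_before  # hash changes with the commitment
```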
Current wallet developers in the Kaspa ecosystem are familiar with the term "compounding": when the `compute_mass` of a transaction exceeds the maximum allowed, the wallet splits the payment into a chain of transactions, each compounding a portion of the inputs. With `storage_mass` introduced, wallets might encounter the opposite situation, where the desired payment is extremely small and the initial `storage_mass` exceeds the maximum allowed. In such cases the wallet should perform a reversed process, gradually unbalancing its UTXOs until the small payment becomes affordable.
Consider a user with a single UTXO holding 1 KAS who wishes to make a very small payment. The wallet can proceed through a chain of progressively unbalancing transactions, for example:

```
      0.5        0.1        0.05
     /    \     /    \     /
1.0
     \    /     \    /     \
      0.449      0.898      0.947
```
In the Wallet algorithm section we fully specify an unbalancing algorithm that minimizes the number of steps required to make the micropayment, and discuss the relation to existing wallet algorithms. In the Micropayments section we provide alternative, costless methods for supporting micropayments in the long run.
Currently, the vast majority of Kaspa transactions have outputs larger than one KAS, for which `storage_mass` hardly ever surpasses `compute_mass`; such transactions are therefore unaffected by this change.
Before the hard-fork is applied, all mining software must update their RPC client interface to a version that includes the new transaction mass field. Otherwise, blocks submitted via RPC will hash incorrectly and be deemed invalid.
The following provides an overview of the design decisions made and the rationale behind them. Formal statements presented rely on the analysis in [3].
In Kaspa, the capacity of a block is given in a quantity called mass. The difference between mass and size is that different types of data are considered "heavier" or "denser", in the sense that they consume more mass per byte. For instance, signatures have a relatively high mass per byte compared with other data, since verifying them is computationally intensive. For an exact specification of how `compute_mass` is currently calculated, see here.
Increasing the mass cost of storage-wasting transactions is an effective countermeasure for regulating state growth, as it allows us to control how much each block increases the state (to a limited but sufficient extent, as we will soon see). This approach could be considered an adaptable version of imposing high fees on wasteful transactions, with the following benefits:
- It does not require an estimate of what constitutes a "high" fee. Instead, the fee is naturally monetized as the value of the blockspace consumed by the transaction.
- It works in the absence of an active fee market. Even if there are no fees, the mass limits and bounded block rate regulate the storage increase.
Our overarching goal is to make the cost of wasting some amount of storage quadratic in the amount of the storage wasted. That is, taking up ten times more storage takes 100 times more mass, taking up 100 times more storage takes 10000 times more mass, etc.
As mentioned in the previous section, lower bounds on mass consumption can be seen as lower bounds on the time required to execute a series of transactions. Alternatively, one can argue that the budget locked within a segment of storage adds additional cost to the currency owner, in the form of initial acquisition, unearned interest, etc. Thus, we sought a formula where the accumulated mass required to waste some amount of storage decreases with the budget locked within it. If we use $growth$ to denote the inflation in storage caused by the attack, the desired constraint is that the accumulated mass is proportional to $growth^2/budget$.
Designing a mass function providing such quadratic growth is challenging, due to two contrasting desires:
- Local mass pricing: the mass of a transaction is defined locally and may not depend on other transactions.
- Global cost: the cost should be quadratic in the state inflation caused, regardless of how the attacker structured his attack. An attacker can structure his attack in many ways, creating many transactions with intertwined dependencies between them, and the mass formula must ensure that even the cleverest of attacks will ultimately pay a quadratic price.
To illustrate the intricacy, consider attempting the above by setting the mass of a single transaction to be quadratic in the number of outputs it creates. An attacker could then circumvent the quadratic cost by splitting the attack across many transactions, each creating only a few outputs, thereby paying a total cost that is merely linear in the overall growth.
A more mathematical representation of the formula defined above can be given using the harmonic and arithmetic means ($H$ and $A$, respectively):

$$storage\_mass(tx) = C\cdot \max\left(\frac{|O|}{H(O)} - \frac{|I|}{A(I)},\ 0\right),$$

where $H(O) = |O|\big/\sum_{o\in O}\frac{1}{o}$ and $A(I) = \frac{1}{|I|}\sum_{i\in I} i$.
The idea behind this formula is that a transaction should be charged for the storage it requires and credited for the storage it frees. However, there is a principal difference between the two: the charge should reflect how distributed the outputs are, while the credit should only reflect the number of consumed UTXOs and their total value. This way, transactions that do not increase the UTXO set much (or even decrease it) but create minuscule outputs are heavily charged, while transactions whose inputs and outputs are on the same order of magnitude do not pay much mass, even if they do increase the UTXO set. This rules out storage attacks, since such an attack requires breaking large UTXOs into small UTXOs.

This is why using the arithmetic mean to credit and the harmonic mean to charge makes sense: the arithmetic mean is the same for any two sets of values of the same size and sum; it completely "forgets" how the values are distributed. The harmonic mean, on the other hand, is extremely sensitive to how the values are distributed and becomes very small if the smallest value is very small (by definition, the harmonic sum $\sum_v 1/v$ is at least $1/\min(v)$, so a single tiny output already dominates the charge).
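The contrast between the two means can be seen numerically. A small illustrative sketch (arbitrary unit values, not tied to any particular transaction):

```python
def arithmetic_mean(values):
    return sum(values) / len(values)

def harmonic_mean(values):
    return len(values) / sum(1 / v for v in values)

balanced = [100, 100, 100, 100]
skewed = [397, 1, 1, 1]  # same count and same sum as `balanced`

# The arithmetic mean "forgets" the distribution entirely...
assert arithmetic_mean(balanced) == arithmetic_mean(skewed) == 100.0

# ...while the harmonic mean collapses toward the smallest values.
print(harmonic_mean(balanced))  # ≈100.0
print(harmonic_mean(skewed))    # ≈1.33
```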
In [3], we formalize this intuitive interpretation by proving that this definition of `storage_mass` satisfies the property that the total `storage_mass` required for executing a set of transactions is lower bounded by a quadratic function of the global state growth it causes. More formally, assume a set of transactions with an overall budget of $budget$ dworks increases the UTXO set by $growth$ entries; then the accumulated storage mass of the set satisfies

$$\text{mass} \ge C\cdot growth^2/budget.$$
Below we provide concrete illustrations of the consequences of this bound.
In this section, we consider the consequences of applying the storage mass policy with $C = 10^{12}$:

- **Fixed budget** Say the attacker has a budget of $20{,}000$ KAS, that is, $2\cdot 10^4\cdot 10^8$ dworks, and wishes to increase the storage by a whole GB, i.e., create $growth = 2\cdot 10^7$ UTXOs of roughly 50 bytes each. Plugging this into the bound, we get $C\cdot growth^2/budget = (10^{12}\cdot 4 \cdot 10^{14})/(2 \cdot 10^4 \cdot 10^8) = 2 \cdot 10^{14}$. That is, the attack would cost $2 \cdot 10^{14}$ grams, which would take the full capacity of 400 million blocks. Hence, at 10 BPS such an attack would require well over a year, assuming the attacker uses 100% of the network throughput and the fees are negligible.
- **Fixed growth rate** Say the attacker wishes to increase the storage by a whole GB within a single day (again, assuming the attacker is given the entire network throughput for negligible fees). At 10 BPS, the network has a throughput of a bit over $4\cdot 10^{11}$ grams per day. Rearranging the bound, we get $budget \ge C \cdot growth^2/\text{mass}$. Substituting $C = 10^{12}$, $growth = 2 \cdot 10^7$ and $\text{mass} = 4\cdot 10^{11}$, we get that the required budget is at least $10^{15}$ dworks, which is 10 million kaspa.
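Both figures follow mechanically from the bound. A small sketch of the arithmetic (the per-block capacity of 500,000 grams is inferred from the 400-million-block figure above):

```python
C = 10**12             # storage mass constant
KAS = 10**8            # dworks per KAS
BLOCK_MASS = 500_000   # grams per block, inferred from the 400M-block figure
BPS = 10
GB_GROWTH = 2 * 10**7  # UTXO entries per GB (~50 bytes per entry)

# Fixed budget: 20,000 KAS spent on creating 1 GB of dust.
budget = 20_000 * KAS
mass_required = C * GB_GROWTH**2 // budget  # grams
blocks = mass_required // BLOCK_MASS
days = blocks / (BPS * 86_400)
print(blocks, round(days))                  # 400000000 463

# Fixed growth: 1 GB per day forces a minimal budget.
daily_mass = 4 * 10**11                     # grams per day (rounded figure used above)
budget_needed = C * GB_GROWTH**2 // daily_mass
print(budget_needed // KAS)                 # 10000000 KAS
```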
Overall, the attack is either very slow or very expensive. How does this compare with the dust attack? That attack created 1185 UTXOs per block; at 10 BPS, this means 50GB per day. Without the quadratic bound, the only limitation on such an attack is that each output must hold at least one dwork, making its cost practically negligible.
We can also use the bound to provide an absolute worst-case upper bound on the organic growth of the UTXO set. Assume for simplicity that the total circulation is $b$ dworks. Taking the entire circulation as the available budget, rearranging the bound shows that accumulating $\text{mass}$ grams of throughput can increase the UTXO set by at most $\sqrt{b\cdot \text{mass}/C}$ entries, regardless of how the throughput is structured.
We discuss the implications of storage mass on common everyday transactions.
First consider a transaction spending two inputs of 100 KAS each for two outputs of 150 and 50 KAS. Since $|O| \le |I| \le 2$, the relaxed formula applies, giving $storage\_mass^* = C\cdot(\frac{1}{1.5\cdot 10^{10}} + \frac{1}{5\cdot 10^{9}} - \frac{2}{10^{10}}) \approx 67$ grams, negligible compared to the transaction's compute mass.

In contrast, one can check that a transaction splitting a single 1 KAS input into 100 outputs of 0.01 KAS each has a storage mass of roughly $100\cdot C/10^6 = 10^8$ grams, orders of magnitude beyond what a block can hold.
More generally, we see that `storage_mass` becomes dominant only when relatively small output values emerge (where the exact threshold is inversely proportional to the total budget of the transaction). Still, even in everyday use, small outputs can emerge: small UTXOs commonly appear as change or as micropayments, and however we set $C$, sufficiently small payments will incur a noticeable storage mass.
A transaction compounding $n$ UTXOs of value $v$ each into a single output qualifies for the relaxed formula ($|O| = 1$), which gives $storage\_mass^* = C\cdot\max\left(\frac{1}{nv} - \frac{n}{v},\ 0\right) = 0$ for any $n \ge 1$. That is, compounding transactions incur no storage mass at all.
It is worth deliberating a bit on how fees affect this phenomenon. The presence of a fee $f$ modifies the mass equation as so:

$$storage\_mass^*(tx) = C\cdot\max\left(\frac{1}{nv - f} - \frac{n}{v},\ 0\right),$$

since the single output now holds $nv - f$.
In the case $f \le nv - v/n$, the negative term still dominates and the storage mass remains zero; the mass becomes positive only when the fee consumes nearly the entire compounded value.
The overall takeaway of the analysis is that compounding is treated "nicely" by the storage mass function, even in the presence of fees.
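These claims are easy to check directly. A sketch using the storage mass pseudocode (with $C = 10^{12}$; the 50-input compounding scenario and the fee values are illustrative):

```python
C = 10**12
KAS = 10**8

def storage_mass(I, O):
    if len(O) == 1 or len(O) <= len(I) <= 2:
        N = sum(C // v for v in I)               # relaxed (harmonic) credit
    else:
        N = len(I) * (C // (sum(I) // len(I)))   # arithmetic-mean credit
    return max(sum(C // o for o in O) - N, 0)

inputs = [1 * KAS] * 50  # fifty 1-KAS UTXOs to be compounded
fee = 10_000             # dworks

# Compounding into a single output costs no storage mass, fee or not.
assert storage_mass(inputs, [sum(inputs)]) == 0
assert storage_mass(inputs, [sum(inputs) - fee]) == 0

# Only a fee devouring nearly the whole value turns the mass positive.
assert storage_mass(inputs, [KAS // 100]) > 0
```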
Exchanges and pools should immediately benefit from this proposal: neither usually deals with micropayments, and the currently deployed patch highly limits their ability to submit many transactions at times of increased load. In the long term as well, an exchange can be viewed as a highly active wallet whose deposit and withdrawal volumes are roughly the same; that is, the values of deposits are distributed roughly like those of withdrawals. By matching inputs with outputs of the same magnitude, exchanges can keep the storage mass low even in a future where typical transactions have low (say, sub-KAS) values.
Similarly, pools can remove most storage mass by setting the cashout limit to be larger than a typical coinbase output.
A comprehensive solution must also consider scenarios where everyday users wish to spend very small values, much smaller than one KAS.
Consider an eccentric millionaire with a penchant for ice cream. Every morning, our hero grabs a wallet containing a single UTXO of at least a million KAS and uses it to buy an ice cream cone worth a tiny fraction of her fortune.
Consider a transaction paying for the ice cream directly: a single huge input, a tiny payment output of value $p$, and a change output holding nearly the entire original value. Its storage mass is dominated by the $C/p$ term; for, say, $p = 0.0001$ KAS $= 10^4$ dworks, this already amounts to $10^8$ grams, far more than a block can carry.
The key point is realizing that the merchant selling the ice cream does not keep such small amounts indefinitely, but rather will compound them eventually (say, on a daily basis). However, the future actions of the merchant are unknown to the system, and the action is locally indistinguishable from that of a deliberate dust attacker. In an account-based model, such a transaction would merely appear as a transfer of a small value between significantly larger accounts. Essentially, the account model embeds the future compounding of the payment into the local operation.
We argue that this exact behavior can be emulated in the UTXO model as well, by creating a mutually signed transaction. The vendor and the millionaire create a transaction together, with two inputs, one from each party, and two outputs: the vendor's output holds their input plus the payment, while the millionaire's change output holds the remainder. Since all four values are of comparable (large) magnitude, the storage mass of this transaction is negligible.
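Under illustrative values (the wallet and till sizes below are assumptions, not taken from the text), the effect can be quantified with the storage mass formula:

```python
C = 10**12
KAS = 10**8

def storage_mass(I, O):
    if len(O) == 1 or len(O) <= len(I) <= 2:
        N = sum(C // v for v in I)               # relaxed (harmonic) credit
    else:
        N = len(I) * (C // (sum(I) // len(I)))   # arithmetic-mean credit
    return max(sum(C // o for o in O) - N, 0)

price = 10_000            # a 0.0001 KAS scoop, in dworks
wallet = 1_000_000 * KAS  # the millionaire's single UTXO
till = 100 * KAS          # the vendor's UTXO

# Direct payment: the tiny output alone contributes C/price grams.
direct = storage_mass([wallet], [price, wallet - price])
print(direct)  # 100000000 grams, far beyond a block's capacity

# Mutually signed 2:2 transaction: every value stays large, the mass vanishes.
mutual = storage_mass([wallet, till], [wallet - price, till + price])
print(mutual)  # 0
```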
The downside to this solution is that the merchant must constantly have a hot wallet available and cooperate with the customer to create a mutually signed transaction. In a following KIP, we will specify auto-compounding wallet addresses, where UTXOs owned by such addresses will allow anyone to add to their balance without the owner of the UTXO having to sign it. Among other applications, this mechanism will allow the millionaire to purchase ice cream as described above, using her wallet alone.
In this section we provide an optimal algorithm for making a payment of a given value using a given set of UTXOs. The algorithm is "optimal" in the sense that it minimizes the number of transactions required to make this payment and the overall mass they consume.
Disclaimer: the algorithm described does not cover all possible edge-cases (especially fee-related aspects), and is brought here as a guideline to the path which should be taken. With the help of the community we hope to soon publish a comprehensive standalone reference implementation (in the form of a Python Jupyter notebook), which will address the need for a more accurate reference.
Denote by $M$ the maximal storage mass we allow a single transaction to consume, by $F$ the fee paid per transaction, and recall that $C$ is the storage mass constant.
Say the user desires to make a payment with value $payment$, using a set of UTXOs with values $inputs$.
**Single step optimization** We begin by solving a single step of the process. The question we ask is: given a mass bound $M$, a total input value $T$ (after deducting the fee), and negative mass $N$, what is the most unbalanced split of $T$ into two outputs $\{\alpha T, (1-\alpha)T\}$ whose storage mass does not exceed $M$? Requiring $C\left(\frac{1}{\alpha T}+\frac{1}{(1-\alpha)T}\right) - N = M$ and rearranging yields the quadratic $D\alpha^2 - D\alpha + 1 = 0$, where $D = (M+N)T/C$; its smaller root $\alpha = \frac{D - \sqrt{D^2-4D}}{2D}$ gives the smallest admissible output.
**Iterative process** Using this single-step optimization, we now describe an iterative algorithm for composing the chain of transactions required for making a micropayment:
```python
from math import ceil, sqrt

def build_txs(inputs, payment):
    txs = []
    while storage_mass(I=inputs, O=[payment, sum(inputs) - payment - F]) > M:
        T = sum(inputs) - F
        N = negative_mass(inputs, 2)
        D = (M + N) * T / C
        # single step optimization, taking the smaller root of D*alpha^2 - D*alpha + 1 = 0
        alpha = (D - sqrt(D * D - 4 * D)) / (2 * D)
        # round the smaller output up in order to not increase the mass above M
        outputs = [ceil(alpha * T), T - ceil(alpha * T)]
        txs.append((inputs, outputs))
        inputs = outputs
    txs.append((inputs, [payment, sum(inputs) - payment - F]))
    return txs
```
Remarks:

- In all iterations of the while loop (except maybe for the first), $|I| = |O| = 2$, so the relaxed formula applies.
- Without the relaxed formula, which uses the harmonic mean over the inputs (in the 2:2 case), the loop would not converge: the arithmetic averaging over the inputs would yield the same $N$ value over and over, so the computed split would never become more unbalanced. With the harmonic mean, every unbalancing step increases $N$, which allows a smaller $\alpha$ in the next step.
- The initial inputs should contain sufficient slack to cover all fees along the process. The value returned by the initial call to `storage_mass` is a good estimate of the overall mass required, and hence can be used to estimate the overall fee.
- In all intermediate transactions built within the loop, the recipient of both outputs should be the change address of the sender.
- In the final transaction built, the actual fee can possibly be decreased below $F$, depending on the final mass.
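Putting the pieces together, the following self-contained sketch runs the algorithm end to end; the values of $M$, $F$, and the wallet contents are illustrative choices, not normative parameters:

```python
from math import ceil, sqrt

C = 10**12   # storage mass constant
KAS = 10**8  # dworks per KAS
M = 100_000  # illustrative per-transaction mass bound
F = 1_000    # illustrative flat fee per transaction, in dworks

def negative_mass(I, num_outputs):
    if num_outputs == 1 or num_outputs <= len(I) <= 2:
        return sum(C // v for v in I)
    return len(I) * (C // (sum(I) // len(I)))

def storage_mass(I, O):
    return max(sum(C // o for o in O) - negative_mass(I, len(O)), 0)

def build_txs(inputs, payment):
    txs = []
    while storage_mass(inputs, [payment, sum(inputs) - payment - F]) > M:
        T = sum(inputs) - F
        N = negative_mass(inputs, 2)
        D = (M + N) * T / C
        alpha = (D - sqrt(D * D - 4 * D)) / (2 * D)  # smaller root of the quadratic
        small = ceil(alpha * T)                      # round up to stay at or below M
        outputs = [small, T - small]
        txs.append((inputs, outputs))
        inputs = outputs
    txs.append((inputs, [payment, sum(inputs) - payment - F]))
    return txs

# A 0.0001 KAS micropayment out of a single 1 KAS UTXO: the wallet gradually
# unbalances its holdings until the tiny payment becomes affordable.
chain = build_txs([1 * KAS], payment=10_000)
print(len(chain))  # number of transactions in the chain

# Every transaction spends exactly F dworks in fees, and the last one pays out
# while satisfying the mass bound (this is the loop's exit condition).
assert all(sum(ins) - sum(outs) == F for ins, outs in chain)
assert chain[-1][1][0] == 10_000
assert storage_mass(*chain[-1]) <= M
```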
Recall that the consensus implementation required a relatively complex design. Here we elaborate on this subtlety, and how it was solved by slightly changing the structure of a transaction.
**Background** The block validation in Kaspa is divided into three phases:

- Header validation: the block header has the required form, points to known blocks, satisfies the difficulty target, etc.
- Data validation: the block body has the required form, as do all transactions therein; in particular, it satisfies the mass limit.
- Transaction validation: all transactions have valid signatures and spend existing UTXOs.

Perhaps surprisingly, a block only has to pass the first two phases to be considered valid. The reason is that the third phase does not rely on the block data alone, but requires computing the UTXO set from the point of view of that block. For efficiency reasons, we only want to compute the UTXO set for selected-chain blocks. This tension is resolved by having an "intermediate status" whereby a block can be perfectly valid, yet disqualified from being in the selected-chain. In particular, future blocks pointing at such a block will process any valid transactions that appear therein (otherwise they too will be disqualified from the selected-chain).
**The problem** The values of the consumed UTXOs are not explicitly written inside the transaction, but are rather recovered from the UTXO set. So in order to compute the storage mass, we must compute the UTXO state of the block. This contradicts our ambition to validate the block during the data validation phase while avoiding computing its UTXO state.
**Solution** We adapt an idea we already used for fixing an issue with `sig_op_count` (see [4]): we require transactions to have an explicit `mass` field. During the data validation phase, the node only verifies that the sum of the stated masses does not exceed the block mass limit. Verifying that the stated mass values are consistent with the formula is deferred to the transaction validation phase and performed only for chain blocks. Transactions with false mass values are considered invalid, and blocks containing them are disqualified from the selected-chain.
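As a rough sketch of how the two phases interact (the structures and the constant `compute_mass` stand-in are illustrative, not the actual Kaspa implementation):

```python
from dataclasses import dataclass

C = 10**12
BLOCK_MASS_LIMIT = 500_000  # grams, illustrative

@dataclass
class Tx:
    input_values: list  # resolvable only with full UTXO context
    output_values: list
    mass: int           # the explicit commitment carried by the transaction

def storage_mass(I, O):
    if len(O) == 1 or len(O) <= len(I) <= 2:
        N = sum(C // v for v in I)               # relaxed (harmonic) credit
    else:
        N = len(I) * (C // (sum(I) // len(I)))   # arithmetic-mean credit
    return max(sum(C // o for o in O) - N, 0)

def compute_mass(tx):
    return 2_000  # stand-in; the real value depends on size, sigops, etc.

def data_validate(block):
    # Phase 2: no UTXO context needed -- check only the stated commitments.
    return (sum(tx.mass for tx in block) <= BLOCK_MASS_LIMIT
            and all(compute_mass(tx) <= tx.mass for tx in block))

def contextual_validate(tx):
    # Phase 3: with input values resolved, the commitment must match exactly.
    return tx.mass == max(storage_mass(tx.input_values, tx.output_values),
                          compute_mass(tx))

honest = Tx([100 * 10**8], [60 * 10**8, 40 * 10**8], mass=2_000)
lying = Tx([100 * 10**8], [60 * 10**8, 40 * 10**8], mass=500)

assert data_validate([honest]) and contextual_validate(honest)
assert not data_validate([lying])  # commitment below compute_mass is caught early
```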
This proposal is already implemented in the Kaspa Rust codebase (PR). It is currently applied at the mempool level for all networks (mainnet, testnet 10, testnet 11), and at the consensus level it is activated only for testnet 11 (TN11). The current implementation, however, differs slightly from this proposal: it sums the storage mass with the compute mass instead of taking their maximum, and it does not implement the relaxed formula. The implementation should be updated shortly to reflect this final proposal. To avoid code clutter and multiple versions, we suggest hard-forking or restarting TN11 with the fixed rules.
[1] “A Very Dusty Yom-Kippur (dust attack post-mortem)” https://medium.com/@shai.wyborski/a-very-dusty-yom-kippur-dust-attack-post-mortem-faa11804e37
[2] Kaspad go-lang dust patch: kaspanet/kaspad#2244
[3] Unpublished paper on state growth regulation in permissionless systems, this reference will be updated once the paper is published online.
[4] “Kaspa Security Patch and Hard Fork — September 2022” https://medium.com/@michaelsuttonil/kaspa-security-patch-and-hard-fork-september-2022-12da617b0094