Many resources in the Ethereum Virtual Machine (EVM) share a common attribute: their limits on burst capacity (that is, how much capacity we can handle for one or a few blocks) and sustained capacity (that is, how much capacity we can handle over a long period) are very different. Some examples:
- EVM usage: it is fine for a block to occasionally take 2 seconds to process, but if every block took that long it would be very difficult to keep nodes in sync
- Block data: clients have enough bandwidth to process 2 MB blocks, but not enough disk space to store them
- Witness data: the same problem as block data: clients have enough bandwidth to handle medium and large witnesses, but not enough disk space to store them
- State size filling: there is essentially no limit to how much the state can grow in a single block, as long as the witness can handle it (if the state jumps from 45 GB to 46 GB in one block but growth returns to normal afterward, who would notice?), but we cannot have rapid state growth in every block
The scheme we use today, which combines all resources into a single virtual resource ("gas"), does a poor job of handling these differences. For example, on average, transaction data plus calldata consumes about 3% of the gas in a block. Therefore, a worst-case block contains roughly 67 times more data than an average block (including the 2x slack from EIP 1559). Witness size is similar: average witnesses are only a few hundred kB, but in the worst case, even after the Verkle gas reforms, witnesses would be several megabytes, a 10-20x increase.
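The 67x figure above follows directly from the 3% average share and the 2x EIP 1559 slack; a quick sanity check:

```python
# Back-of-the-envelope check of the worst-case/average data ratio.
# Assumption from the text: tx data + calldata consume ~3% of block gas on average.
avg_share = 0.03       # average fraction of block gas spent on data
eip1559_slack = 2      # the block gas limit is 2x the EIP 1559 gas target

# A worst-case block spends 100% of its gas on data, and can also fill
# the full 2x limit rather than just the target:
worst_over_avg = (1.0 / avg_share) * eip1559_slack
print(round(worst_over_avg))  # ≈ 67
```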
Cramming all resources into a single virtual resource (gas) forces the worst-case/average-case ratio to be determined by usage patterns. When those usage-based ratios diverge sharply from the burst and sustained limits that we know clients can handle, the result is highly suboptimal gas costs.
This article proposes an alternative solution to this problem: multidimensional EIP 1559.
Suppose there are n resources, and each resource i has a burst limit b_i and a sustained target s_i (we require b_i >> s_i). We want the amount of resource i consumed in any single block to never exceed b_i, and the long-term average consumption of resource i to equal s_i.
The solution is simple: we maintain a separate EIP 1559 targeting scheme for each resource! We maintain a vector of basefees f_1 ... f_n, where f_i is the basefee for one unit of resource i. We impose a hard rule that no block may consume more than b_i units of resource i. Each f_i is adjusted by a targeting rule (we use exponential adjustment, because we now know that it has better properties): f_i rises when usage of resource i exceeds s_i and falls when usage is below s_i.
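A minimal sketch of this per-resource targeting rule, using an exponential update. The adjustment quotient of 8 mirrors EIP 1559's value but is an illustrative assumption here, not something the text fixes:

```python
import math

# Illustrative per-resource basefee update. The 1/8 update constant
# (mirroring EIP 1559's adjustment quotient) is an assumed parameter.
ADJUSTMENT_QUOTIENT = 8

def update_basefees(basefees, used, targets):
    """basefees, used, targets: lists of length n, one entry per resource.
    Each basefee f_i rises when block usage u_i exceeds the sustained
    target s_i, and falls when usage is below it."""
    return [
        f * math.exp((u - s) / (s * ADJUSTMENT_QUOTIENT))
        for f, u, s in zip(basefees, used, targets)
    ]

# Usage: a resource consumed exactly at target leaves its basefee unchanged,
# while above-target usage pushes it up.
unchanged = update_basefees([100.0], [1_000_000], [1_000_000])
raised = update_basefees([100.0], [2_000_000], [1_000_000])
```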
To make this work in the Ethereum environment, where only one type of resource (gas) is passed from parent call to child call, we still denominate all charges in gas.
Option 1 (simpler but less pure): we keep execution gas costs fixed and keep the current EIP 1559; let f_1 be the execution-gas basefee. The gas price of every "special" resource (calldata, storage usage, ...) becomes f_i / f_1. Blocks have both the current gas limit and the per-resource limits b_1 ... b_n. Priority fees work the same way as today.
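Under Option 1, the gas charged to a transaction is its ordinary execution gas plus each special resource's units priced at f_i / f_1 gas per unit. A sketch with purely illustrative numbers:

```python
# Option 1 pricing sketch: execution gas costs stay fixed, and each
# "special" resource i costs f_i / f_1 gas per unit, where f_1 is the
# execution-gas basefee. All concrete numbers below are illustrative.

def tx_gas_used(execution_gas, resource_units, basefees, f1):
    """resource_units[i] = units of special resource i consumed;
    basefees[i] = f_i, that resource's per-unit basefee (in wei)."""
    special_gas = sum(
        units * fi / f1 for units, fi in zip(resource_units, basefees)
    )
    return execution_gas + special_gas

# e.g. 21000 execution gas plus 100 bytes of calldata, with a calldata
# basefee of 160 wei/byte and an execution basefee f_1 = 10 wei/gas:
# calldata is billed at 160 / 10 = 16 gas per byte here.
gas = tx_gas_used(21_000, [100], [160.0], 10.0)  # 21000 + 100 * 16 = 22600
```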
Option 2 (harder but purer): the gas basefee is fixed at 1 wei (or, if we prefer, 1 gwei). The gas price of each resource (execution being one of them) becomes f_i. There is no block gas limit; there are only the per-resource limits b_1 ... b_n. In this model, "gas" and "ETH" become true synonyms. Priority fees work by specifying a percentage; the priority fee paid to the block producer equals the basefee multiplied by that percentage (a more advanced approach would specify a vector of n priority fees, one per resource).
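In Option 2, a transaction's fee is simply the sum of each resource's usage times its basefee, with the percentage-style priority fee added on top. A sketch, again with illustrative numbers:

```python
# Option 2 pricing sketch: no block gas limit; each resource i (execution
# included) is billed directly at its basefee f_i in wei, and the priority
# fee is a user-chosen percentage applied on top of the basefee total.

def tx_fee_wei(resource_units, basefees, priority_pct):
    """Total fee = sum_i u_i * f_i; the tip to the block producer is
    priority_pct percent of that basefee total."""
    base = sum(u * f for u, f in zip(resource_units, basefees))
    tip = base * priority_pct / 100
    return base + tip, tip

# e.g. two resources (execution, calldata) and a 10% priority fee:
# base = 21000 * 1 + 100 * 16 = 22600 wei; tip = 2260 wei
total, tip = tx_fee_wei([21_000, 100], [1.0, 16.0], 10)
```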
Multidimensional pricing and the knapsack-problem objection
The main historical objection to multidimensional pricing models is that they impose a difficult optimization problem on block builders: builders can no longer simply accept transactions in order from highest to lowest fee per gas; they must balance across the different dimensions and solve a multidimensional knapsack problem. This would create room for proprietary optimized miners that significantly outperform stock algorithms, leading to centralization.
This objection is much weaker than it used to be, for two key reasons:
- Miner extractable value (MEV) already creates opportunities for optimized miners, so the ship has sailed on stock algorithms being meaningfully optimal. Proposer/builder separation (PBS) addresses this problem by isolating the economies of scale of block production from the consensus layer.
- EIP 1559 means that any resource hitting its limit is an edge case rather than the average case, so naive algorithms underperform only on a small number of exceptional blocks.
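To make point (2) concrete, here is a sketch of a naive greedy builder under multidimensional limits: sort by priority fee (a stand-in for fee per gas in this sketch) and include any transaction that fits under every per-resource burst limit. Because targeting keeps average usage near s_i << b_i, the limits rarely bind, so this simple algorithm is near-optimal on all but a few exceptional blocks:

```python
# Naive greedy block building with per-resource burst limits b_i.
# Transactions are tuples (priority_fee, usage_vector).

def build_block(txs, burst_limits):
    """Greedily include transactions in descending fee order, skipping
    any that would push some resource past its burst limit."""
    used = [0] * len(burst_limits)
    block = []
    for fee, usage in sorted(txs, key=lambda t: -t[0]):
        if all(u + du <= b for u, du, b in zip(used, usage, burst_limits)):
            used = [u + du for u, du in zip(used, usage)]
            block.append((fee, usage))
    return block

# Two resources (execution, calldata); the fee-5 transaction is skipped
# because it would push calldata usage past its burst limit of 100.
block = build_block(
    [(9, [50, 10]), (7, [30, 80]), (5, [10, 30])],
    burst_limits=[100, 100],
)
```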
To see why (2) holds, note a very important fact: in multidimensional EIP 1559, the "slack" parameter (maximum/target) of each resource can be much higher than 2x. This is because today's 2x slack parameter creates a burst/sustained gap that stacks on top of the burst/sustained gap arising from unpredictable usage, whereas in multidimensional EIP 1559 the slack parameter represents the entire burst/sustained gap. For example, we could target calldata usage at ~256 kB (8 times more than today) with an 8x slack parameter (b_i / s_i) on top of that, and still have a burst limit comparable to today's. If witness gas costs stay the same, we could bound witness size at roughly another 2 MB, with a slack parameter for witness size of about 6x. A survey of the 240 most recent blocks shows that only 1 of those blocks would have hit the limit even with a 4x calldata slack parameter!
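The calldata arithmetic in that example works out as follows (the ~256 kB target and 8x slack are the figures from the text):

```python
# An 8x-larger sustained calldata target combined with an 8x slack
# parameter b_i / s_i still yields a burst limit in the same ballpark
# as today's worst case.
target_kb = 256            # proposed sustained calldata target, s_i
slack = 8                  # b_i / s_i
burst_kb = target_kb * slack
print(burst_kb)            # 2048 kB, i.e. ~2 MB burst limit
```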
This highlights a very nice effect of multidimensional EIP 1559: it makes the edge cases of priority-fee auctions rarer, and clears bursts of transactions faster.
Which resources can be multi-dimensionally priced?
We can start with the basics:
- EVM execution
- Transaction calldata
- Witness data
- Storage size growth
Once we have sharding, shard data can also be added to this list. This brings us large benefits: it supports more scalability while reducing the risk of sudden spikes in usage.
In the long run, we can even make the pricing more granular:
- Split witness pricing by reads and writes
- Split witness pricing by branches and chunks
- Price each individual precompile separately
- Price each individual opcode separately
The main value of this is the extra layer of DoS protection it adds: if each opcode is only allotted, say, a maximum of 100 milliseconds of expected execution time per block, then if an attacker finds an opcode or precompile that is 10 times slower than expected, they can only add 900 milliseconds of expected execution time to a block. This is in stark contrast to today, where they could fill an entire block with that opcode or precompile, so any single opcode or precompile being 10x slower could let an attacker create blocks that cannot be processed within a single slot.
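The 900 ms bound comes directly from the per-opcode budget: a 10x-slower opcode filled to its 100 ms budget actually costs 1000 ms, only 900 ms more than priced:

```python
# DoS-bound arithmetic from the paragraph above: cap each opcode at an
# expected 100 ms of execution time per block. An opcode that is 10x
# slower than priced then adds at most 10 * 100 - 100 = 900 ms of
# unexpected execution time to a block.
per_opcode_budget_ms = 100
slowdown = 10
extra_ms = per_opcode_budget_ms * slowdown - per_opcode_budget_ms
print(extra_ms)  # 900
```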