What exactly is the much-hyped EIP-4844? Vitalik Buterin explains it in detail

Ethereum co-founder Vitalik Buterin recently answered a set of questions about proto-danksharding (aka EIP-4844) and Danksharding, the new sharding design proposed for Ethereum. What exactly can this technology bring?

Author: Vitalik Buterin

What is Danksharding?

Danksharding is a new sharding design proposed for Ethereum that introduces some significant simplifications compared to previous designs.

The main difference between all recent Ethereum sharding proposals since 2020 (including Danksharding and its predecessors) and most non-Ethereum sharding proposals is Ethereum’s rollup-centric roadmap: Ethereum shards do not provide more space for transactions, but more space for data, which the Ethereum protocol itself does not attempt to interpret. Verifying a blob simply requires checking that the blob is available, that is, that it can be downloaded from the network. The data space in these blobs is expected to be used by Layer 2 rollup protocols that support high-throughput transactions.

The main innovation introduced by Danksharding is the merged fee market: instead of a fixed number of shards, each with its own blocks and its own block proposer, in Danksharding there is only one proposer, who chooses all the transactions and all the data that go into that slot.

To avoid this design placing high system requirements on validators, we introduce proposer/builder separation (PBS): a specialized class of participants called block builders bid for the right to choose the contents of a slot, and the proposer only needs to select the valid header with the highest bid. Only the block builder needs to process the entire block (and even here, third-party decentralized oracle protocols can be used to implement a distributed block builder); all other validators and users can verify blocks very efficiently through data availability sampling (remember: the “big” part of the block is just data).

What is proto-danksharding (aka EIP-4844)?

Proto-danksharding (aka EIP-4844) is an Ethereum Improvement Proposal (EIP) that implements most of the logic and “scaffolding” (e.g. transaction formats, verification rules) that make up the full Danksharding specification, but does not yet actually implement any sharding. In the proto-danksharding implementation, all validators and users still have to directly verify the availability of the full data.

The main feature introduced by proto-danksharding is a new transaction type, which we call a blob-carrying transaction. A blob-carrying transaction is like a regular transaction, except it also carries an extra piece of data called a blob. Blobs are very large (~125 kB) and much cheaper than a similar amount of calldata. However, EVM execution cannot access the blob data; the EVM can only see a commitment to the blob.
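To make the shape of this concrete, here is an illustrative sketch, not the exact encoding from the EIP; the field names are assumptions loosely following the spec’s terminology, including the BlobTransactionNetworkWrapper mentioned later in this post:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BlobTransaction:
    # ordinary transaction fields (abridged)
    nonce: int
    to: bytes
    value: int
    gas: int
    max_fee_per_gas: int
    data: bytes
    # new: 32-byte versioned hashes of the blobs; this is all the EVM can see
    blob_versioned_hashes: List[bytes]

@dataclass
class BlobTransactionNetworkWrapper:
    # on the p2p network, the transaction travels together with the blob bodies
    # (~125 kB each) and their KZG commitments; the blobs themselves never
    # enter the EVM or the execution payload
    tx: BlobTransaction
    blobs: List[bytes]
    kzg_commitments: List[bytes]
```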

Because validators and clients still need to download the full blob content, the data bandwidth target in proto-danksharding is 1 MB per slot instead of the full 16 MB. However, since this data does not compete with the gas usage of existing Ethereum transactions, there is still a large scalability benefit.

Why is it OK to add 1 MB of data to blocks that everyone has to download, instead of making calldata 10x cheaper?

This has to do with the difference between average load and worst-case load. Today, the average block size is around 90 kB, but the theoretical maximum possible block size (if all 30M gas in a block were spent on calldata) is around 1.8 MB. The Ethereum network has processed blocks close to that maximum in the past. However, if we simply reduced the calldata gas cost by 10x, then although the average block size would rise to still-acceptable levels, the worst case would become 18 MB, which is far too much for the Ethereum network to handle.
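The arithmetic behind those two worst-case numbers, assuming (as the worst case does) that all of a block’s gas is spent on calldata at today’s 16 gas per byte:

```python
GAS_LIMIT = 30_000_000

print(GAS_LIMIT / 16)    # 1,875,000 bytes: today's ~1.8 MB worst case
print(GAS_LIMIT / 1.6)   # 18,750,000 bytes: the ~18 MB worst case if calldata were 10x cheaper
```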

Current gas pricing rules cannot separate these two factors: the ratio between average load and worst-case load depends on users’ choices of how much gas to spend on calldata versus other resources, which means gas prices have to be set based on the worst-case possibility, leaving the average load unnecessarily lower than what the system can handle. But if we change gas pricing to more explicitly create a multidimensional fee market, we can avoid the average-case/worst-case mismatch and include close to the maximum amount of data we can safely handle in each block. Proto-danksharding and EIP-4488 are two proposals that do this.


How does proto-danksharding compare to EIP-4488?

EIP-4488 was an earlier and simpler attempt to address the same average case/worst case load mismatch. EIP-4488 does this using two simple rules:

  • Calldata gas cost reduced from 16 gas per byte to 3 gas per byte
  • 1 MB limit per block plus an additional 300 bytes per transaction (theoretical maximum: ~1.4 MB)

A hard limit is the easiest way to ensure that large increases in average-case load do not result in an increase in worst-case load. The reduction in gas cost will greatly increase the use of rollups, potentially increasing the average block size to hundreds of KB, but the hard limit will directly prevent the worst-case possibility of a single block containing 10 MB. In fact, the worst-case block size will be smaller than it is now (1.4 MB vs. 1.8 MB).
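One way to arrive at the ~1.4 MB figure (the packing assumption here is mine, not spelled out above): maximize the 300-bytes-per-transaction term by filling the block with minimal 21,000-gas transactions.

```python
GAS_LIMIT = 30_000_000
max_txs = GAS_LIMIT // 21_000             # ~1428 minimal transactions
print(1_000_000 + 300 * max_txs)          # ~1.43 MB worst case under EIP-4488
```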

Proto-danksharding instead creates a separate transaction type that can hold cheaper data in large fixed-size blobs, with a limit on how many blobs each block can contain. These blobs are not accessible from the EVM (only the commitments to the blobs are), and the blobs are stored by the consensus layer (beacon chain) rather than the execution layer.

The main practical difference between EIP-4488 and proto-danksharding is that EIP-4488 tries to minimize the changes needed today, whereas proto-danksharding makes a larger number of changes today so that very few changes are needed in the future to upgrade to full sharding. Although implementing full sharding (with data availability sampling, etc.) is a complex task, and remains a complex task after proto-danksharding, this complexity is contained within the consensus layer. Once proto-danksharding ships, execution-layer client teams, rollup developers and users need to do no further work to complete the transition to full sharding.

Note that the choice between the two is not either-or: we could implement EIP-4488 as soon as possible and then follow it up with proto-danksharding half a year later.

What parts of full danksharding does proto-danksharding implement, and what else needs to be implemented?

Quoting EIP-4844:

The work already done in this EIP includes:

  • A new transaction type, of the exact same format that will need to exist in “full sharding”
  • All of the execution-layer logic required for full sharding
  • All of the execution/consensus cross-verification logic required for full sharding
  • Layer separation between BeaconBlock verification and data availability sampling of blobs
  • Most of the BeaconBlock logic required for full sharding
  • A self-adjusting independent gasprice for blobs

The work that remains to be done to get to full sharding includes:

  • A low-degree extension of the blob_kzgs in the consensus layer to allow 2D sampling
  • An actual implementation of data availability sampling
  • PBS (proposer/builder separation), to avoid requiring individual validators to process 32 MB of data in a single slot
  • Proof of custody, or a similar in-protocol requirement, for each validator to verify a particular part of the sharded data in each block

Note that all remaining work is consensus layer changes and does not require any additional work by client teams, users, or Rollup developers.

What if all these very large blocks increase disk space requirements?

Both EIP-4488 and proto-danksharding lead to a long-run maximum usage of about 1 MB per slot (12 seconds). This adds up to about 2.5 TB per year, a much higher growth rate than Ethereum requires for storage today.
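A quick back-of-the-envelope check of that figure:

```python
SLOT_SECONDS = 12
MB_PER_SLOT = 1

slots_per_year = 365 * 24 * 3600 / SLOT_SECONDS       # 2,628,000 slots
print(MB_PER_SLOT * slots_per_year / 1_000_000)       # ~2.6 TB/year, in line with the ~2.5 TB above
```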

In the case of EIP-4488, addressing this requires a history expiry scheme (EIP-4444), where clients are no longer required to store history older than some period of time (durations from 1 month to 1 year have been proposed).

In the case of proto-danksharding, the consensus layer can implement separate logic to automatically delete blob data after some period of time (e.g. 30 days), whether or not EIP-4444 is implemented. However, implementing EIP-4444 as soon as possible is strongly recommended regardless of which short-term data-expiry solution is adopted.

Both strategies cap the extra disk load on consensus clients at a few hundred GB. In the long run, adopting some history expiry mechanism is essentially mandatory: full sharding would add about 40 TB of historical blob data per year, so users could only realistically store a small fraction of it for some time. So it is worth setting expectations about this early.

How will users access old blobs if data is deleted after 30 days?

The purpose of the Ethereum consensus protocol is not to guarantee the permanent storage of all historical data. Rather, the goal is to provide a highly secure real-time bulletin board, and to leave room for other decentralized protocols to do longer-term storage. The bulletin board exists to ensure that data published on it is available long enough that any user who wants the data, or any longer-term protocol backing it up, has plenty of time to grab the data and import it into their own application or protocol.

In general, long-term history storage is easy. While 2.5 TB per year is too demanding for a regular node, it is very manageable for dedicated actors: you can buy large hard drives for about $20 per TB, well within reach of a hobbyist. Unlike consensus, which has an N/2-of-N trust model, history storage has a 1-of-N trust model: you only need one of the storers of the data to be honest. Therefore, each piece of historical data only needs to be stored a few hundred times, rather than by the full set of thousands of nodes that do real-time consensus verification.

Some useful ways to store the full history and make it easily accessible include:

  • Application-specific protocols, such as Rollup, may require their nodes to store parts of their application-related history. Lost historical data poses no risk to the protocol, only to a single application, so it makes sense for an application to take on the burden of storing data relevant to itself.
  • Storing historical data on BitTorrent, e.g. automatically generating and distributing a ~7 GB file containing the blob data from each day’s blocks.
  • The Ethereum portal network (currently in development) can easily be extended to store history.
  • Block explorers, API providers and other data services may store complete history.
  • Individual hobbyists and academics engaged in data analysis may store complete historical records. In the latter case, storing the history locally provides them with significant value, as it makes it easier to compute on it directly.
  • Third-party indexing protocols such as TheGraph may store full history.

At higher levels of historical storage (e.g. 500 TB per year), the risk of some data being forgotten becomes higher (in addition, the data availability verification system becomes more strained). This may be the true limit of scalability for sharded blockchains. However, all currently proposed parameters are very far from this point.

What is the format of the blob data and how is it submitted?

A blob is a vector of 4096 field elements, numbers in the range:

0 <= x < 52435875175126190479447740508185965837690552500527637822603658699938581184513

A blob is mathematically treated as representing a degree < 4096 polynomial over the finite field with the above modulus, where the field element at position i in the blob is the evaluation of that polynomial at w^i. Here w is a constant satisfying w^4096 = 1.

The commitment to the blob is the KZG commitment to that polynomial. From an implementation point of view, however, it is not important to focus on the mathematical details of the polynomial. Instead, there is simply a vector of elliptic curve points (the Lagrange-basis trusted setup), and the KZG commitment to a blob is just a linear combination of those points. Quoting code from EIP-4844:

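The snippet is embedded as an image in the original post; below is a sketch of the same idea, the commitment as a linear combination of setup points. It is not the EIP’s exact code: the tiny setup here is derived from a made-up secret in the monomial basis (a real ceremony uses a Lagrange-basis setup and never reveals the secret), and it relies on the py_ecc library for BLS12-381 group operations.

```python
from py_ecc.optimized_bls12_381 import G1, Z1, add, multiply

BLS_MODULUS = 52435875175126190479447740508185965837690552500527637822603658699938581184513
FAKE_SECRET = 1234567    # for illustration only; a real setup keeps this value secret
N = 8                    # real blobs have 4096 field elements

# Stand-in for KZG_SETUP_LAGRANGE: a vector of elliptic curve points from the trusted setup.
KZG_SETUP = [multiply(G1, pow(FAKE_SECRET, i, BLS_MODULUS)) for i in range(N)]

def blob_to_kzg(blob):
    # commitment = sum_i blob[i] * setup[i] (a linear combination of curve points)
    commitment = Z1
    for value, point in zip(blob, KZG_SETUP):
        assert 0 <= value < BLS_MODULUS
        commitment = add(commitment, multiply(point, value))
    return commitment

print(blob_to_kzg([3, 1, 4, 1, 5, 9, 2, 6]))
```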

BLS_MODULUS is the modulus above, and KZG_SETUP_LAGRANGE is the vector of elliptic curve points that forms the Lagrange-basis trusted setup. For implementers, it is reasonable for now to think of this simply as a black-box special-purpose hash function.

Why use the hash of the KZG instead of the KZG directly?

Instead of using KZG to represent the blob directly, EIP-4844 uses a versioned hash: a single 0x01 byte (representing this version) followed by the last 31 bytes of the SHA256 hash of the KZG.

This is done for EVM compatibility and future-compatibility: KZG commitments are 48 bytes, whereas the EVM works more naturally with 32-byte values, and if we ever switch from KZG to something else (for quantum-resistance reasons, for example), the commitments can continue to be 32 bytes.
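A minimal sketch of that construction (the helper name follows the EIP’s kzg_to_versioned_hash convention; treat the details as illustrative):

```python
import hashlib

VERSION_KZG = b"\x01"   # the version byte described above

def kzg_to_versioned_hash(kzg_commitment: bytes) -> bytes:
    # 0x01 followed by the last 31 bytes of sha256(commitment): 32 bytes total
    assert len(kzg_commitment) == 48
    return VERSION_KZG + hashlib.sha256(kzg_commitment).digest()[1:]

print(kzg_to_versioned_hash(b"\x00" * 48).hex())
```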

What are the two precompiles introduced in proto-danksharding?

Proto-danksharding introduces two precompiles: the blob verification precompile and the point evaluation precompile.

The blob verification precompile is self-explanatory: it takes a versioned hash and a blob as input, and verifies that the provided versioned hash actually is a valid versioned hash of the blob. This precompile is intended for use by optimistic rollups. Quoting EIP-4844:

Optimistic rollups only need to actually provide the underlying data when fraud proofs are being submitted. The fraud proof submission function would require the full contents of the fraudulent blob to be submitted as part of calldata. It would use the blob verification function to verify the data against the previously submitted versioned hash, and then perform the fraud proof verification on that data as is done today.

The point evaluation precompile takes as input a versioned hash, an x coordinate, a y coordinate and a proof (the blob’s KZG commitment and a KZG evaluation proof). It verifies the proof to check that P(x) = y, where P is the polynomial represented by the blob with the given versioned hash. This precompile is intended for use by ZK rollups. Quoting EIP-4844:

ZK rollups would provide two commitments to their transaction or state delta data: the KZG in the blob, and some commitment using whatever proof system the ZK rollup uses internally. They would use a proof-of-equivalence protocol, using the point evaluation precompile, to prove that the KZG (which the protocol ensures points to available data) and the ZK rollup’s own commitment refer to the same data.

Note that most major optimistic rollup designs use multi-round fraud proof schemes, where only a small amount of data is needed in the final round. So conceivably optimistic rollups could also use the point evaluation precompile instead of the blob verification precompile, and it would be cheaper to do so.

What does a KZG trusted setup look like?

See:

  • A general description of how powers-of-tau trusted setups work: https://vitalik.ca/general/2022/03/14/trustedsetup.html
  • An example implementation of all the important trusted-setup-related computations: https://github.com/ethereum/research/blob/master/trusted_setup/trusted_setup.py

Specifically, in our case the current plan is to run four ceremonies (with different secrets) of sizes (n1=4096, n2=16), (n1=8192, n2=16), (n1=16384, n2=16) and (n1=32768, n2=16) in parallel. In theory only the first is needed, but running more, larger sizes improves future-proofing by allowing us to increase the blob size. We can’t just have one larger setup, because we want to be able to put a hard limit on the degree of polynomial that can be validly committed to, and that degree equals the blob size.

A possible practical approach is to start with the Filecoin setup and then run a ceremony to extend it. Multiple implementations, including browser implementations, would let many people participate.

Can’t we use some other commitment scheme that doesn’t need a trusted setup?

Unfortunately, using anything other than KZG (such as IPA or SHA256) makes the sharding roadmap more difficult. There are several reasons for this:

  • Non-arithmetic commitments (such as hash functions) are not compatible with data availability sampling, so if we use such a scheme, when we move to full sharding, we’ll have to change to KZG anyway.
  • IPA may be compatible with data availability sampling, but it leads to a more complex scheme with weaker properties (e.g. self-healing and distributed block building become harder)
  • Neither hashes nor IPA are compatible with a cheap implementation of the point evaluation precompile. Hence, a hash- or IPA-based implementation would not be able to efficiently enable ZK rollups or support cheap fraud proofs in multi-round optimistic rollups.

So, unfortunately, the loss of functionality and the added complexity of using anything other than KZG are far greater than the risks of KZG itself. Moreover, any KZG-related risk is contained: a KZG failure would only affect rollups and other applications that rely on blob data, not the rest of the system.

How “complex” and “new” is KZG?

KZG commitments were introduced in a 2010 paper, and have been extensively used in PLONK-style ZK-SNARK protocols since 2019. However, the underlying math of KZG commitments is relatively simple arithmetic on top of the underlying math of elliptic curve operations and pairings.

The specific curve used is BLS12-381, from the curve family invented by Barreto, Lynn and Scott in 2002. Elliptic curve pairings, which are necessary to verify KZG commitments, are very complicated math, but they were invented in the 1940s and have been applied in cryptography since the 1990s; by 2001 there were many proposed cryptographic algorithms using pairings.

From an implementation-complexity standpoint, KZG is no harder to implement than IPA: the function to compute the commitment (see above) is exactly the same as for IPA, just with a different set of elliptic curve point constants. The point evaluation precompile is more complicated, because it involves a pairing evaluation, but the math is identical to what has already been done in implementations of EIP-2537 (the BLS12-381 precompiles), and is very similar to the existing bn128 pairing precompile (see also: the optimized Python implementation). Hence, no complicated “new work” is needed to implement KZG verification.

What are the different software parts implemented by proto-danksharding?

There are four main components:

1. Execution-layer consensus changes (see the EIP for details):

  • The new blob-carrying transaction type
  • An opcode that outputs the versioned hash of the i-th blob in the current transaction
  • The blob verification precompile
  • The point evaluation precompile

2. Consensus layer consensus changes (see this folder in the repo):

  • List of blob KZGs in BeaconBlockBody
  • A “sidecar” mechanism where the full blob contents are passed along in a separate object, alongside the BeaconBlock
  • Cross-checks between the blob versioned hashes in the execution layer and the blob KZGs in the consensus layer

3. Mempool

  • BlobTransactionNetworkWrapper (see network section of EIP)
  • Stronger anti-DoS protection to compensate for large blob sizes

4. Block construction logic

  • Accept transaction wrappers from the mempool, put the transactions into the ExecutionPayload, put the KZGs into the beacon block and the blob bodies into the sidecar
  • Responding to the two-dimensional fee market

Note that a minimal implementation does not need a mempool at all (we can rely on second-layer transaction bundling markets), and only requires one client to implement the block construction logic. Only the consensus changes to the execution and consensus layers need extensive consensus testing, and they are relatively lightweight. Anything between such a minimal implementation and a “full” rollout where all clients support block production and the mempool is possible.

What does a proto-danksharding multidimensional fee market look like?

Proto-danksharding introduces a multidimensional EIP-1559 fee market: there are two resources, gas and blobs, with separate floating gas prices and separate limits.

That is, there are two variables and four constants:

[Table in the original post: the two variables are the current basefees for gas and for blobs; the four constants are the per-block target and limit for gas, and the per-block target and limit for blobs.]

The blob fee is charged in gas, but it is a variable amount of gas that is adjusted so that in the long run, the average number of blobs per block is actually equal to the target number.

The two-dimensional nature means that block builders face a harder problem: instead of simply accepting transactions in order of highest priority fee until they run out of transactions or hit the block gas limit, they have to simultaneously avoid hitting two different limits.

Here is an example. Suppose the gas limit is 70 and the blob limit is 40. The mempool has many transactions, enough to fill the block, of two types (tx gas includes per-blob gas):

  • Priority fee 5 per gas, 4 blobs, 4 total gas
  • Priority fee 3 per gas, 1 blob, 2 total gas

A miner following the naive “highest priority fee first” algorithm will fill the entire block with 10 transactions of the first type (40 gas) and collect revenue of 5 * 40 = 200. Because those 10 transactions completely fill the blob limit, they will not be able to include any more transactions. But the optimal strategy is to take 3 transactions of the first type and 28 of the second type. This gives a block with 40 blobs and 68 gas, and revenue of 5 * 12 + 3 * 56 = 228.
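The arithmetic of this example, as a quick check (a toy script, not real block-building code):

```python
GAS_LIMIT, BLOB_LIMIT = 70, 40

def revenue(n1, n2):
    # type 1: priority fee 5/gas, 4 blobs, 4 gas; type 2: fee 3/gas, 1 blob, 2 gas
    gas, blobs = 4 * n1 + 2 * n2, 4 * n1 + n2
    assert gas <= GAS_LIMIT and blobs <= BLOB_LIMIT
    return 5 * (4 * n1) + 3 * (2 * n2)

print(revenue(10, 0))   # naive greedy: 40 gas, 40 blobs, revenue 200
print(revenue(3, 28))   # better mix:   68 gas, 40 blobs, revenue 228
```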


Do execution clients now have to implement complicated multidimensional knapsack algorithms to optimize their block production? No, for a few reasons:

  • EIP-1559 ensures that most blocks do not hit either limit, so only a few blocks actually face the multidimensional optimization problem. In the usual case where the mempool does not have enough (sufficiently paying) transactions to hit either limit, any miner can get the best revenue simply by including every transaction they see.
  • In practice, fairly simple heuristics can be close to optimal. In a similar situation, see Ansgar’s EIP-4488 analysis‌ for some data on this.
  • Multidimensional pricing is not even the biggest source of revenue from specialization; MEV is. Specialized MEV revenue extracted via custom algorithms from on-chain DEX arbitrage, liquidations, frontrunning NFT sales, etc. makes up a significant portion of the total “extractable revenue” (i.e. priority fees): specialized MEV revenue appears to average around 0.025 ETH per block, while total priority fees are usually around 0.1 ETH per block.
  • Proposer/builder separation is designed around highly specialized block production anyway. PBS turns the block construction process into an auction where specialized actors can bid for the privilege of creating a block, and regular validators only need to accept the highest bid. This was done to prevent MEV-driven economies of scale from creeping into validator centralization, but it takes care of any other issue that might make optimal block construction harder.

For these reasons, the more complicated fee market dynamics do not greatly increase centralization or risk; in fact, applying the principle more broadly can actually reduce the risk of DoS attacks!

How does the exponential EIP-1559 blob fee adjustment mechanism work?

Today’s EIP-1559 adjusts the base fee b to achieve a specific target gas usage level t as follows:

b(n+1) = b(n) * (1 + (u - t) / (8t))

where b(n) is the base fee for the current block, b(n+1) is the base fee for the next block, t is the target, and u is the gas used.

A big problem with this mechanism is that it does not actually target t. Suppose we get two blocks, the first with u = 0 and the next with u = 2t. We get:

b(n+2) = b(n) * (1 - 1/8) * (1 + 1/8) = b(n) * (1 - 1/64) = b(n) * 63/64

Despite the average usage being equal to t, the basefee went down by a factor of 63/64. So the basefee only stabilizes when usage is slightly above t; apparently about 3% above in practice, though the exact number depends on the variance.
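The two-block example in code (using illustrative numbers for b and t):

```python
def update_basefee(b, u, t):
    # the current EIP-1559 rule from the formula above
    return b * (1 + (u - t) / (8 * t))

t, b = 15_000_000, 100.0
b = update_basefee(b, 0, t)        # empty block: basefee falls by 1/8
b = update_basefee(b, 2 * t, t)    # double-target block: basefee rises by 1/8
print(b)                            # 98.4375 = 100 * 63/64, despite average usage == t
```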

A better formula is exponential adjustment:

b(n+1) = b(n) * exp((u - t) / (8t))

exp(x) is the exponential function e^x, where e ≈ 2.71828. For small values of x, exp(x) ≈ 1 + x. However, this rule has a convenient property: it is transaction-order-invariant. The multi-step adjustment

b(n+1) = b(1) * exp((u1 - t) / (8t)) * exp((u2 - t) / (8t)) * … * exp((un - t) / (8t))

depends only on the sum u1 + … + un, and not on the distribution. To see why, we can do the math:

exp((u1 - t) / (8t)) * exp((u2 - t) / (8t)) * … * exp((un - t) / (8t)) = exp((u1 + u2 + … + un - nt) / (8t))

Therefore, the same transactions that are included will result in the same final base fee, no matter how they are distributed across different blocks.
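A quick numerical illustration of this order-invariance (floating point, illustrative numbers):

```python
import math

def update_exp(b, u, t):
    # the exponential rule from above
    return b * math.exp((u - t) / (8 * t))

t, b0 = 1.0, 100.0
a = update_exp(update_exp(b0, 0.0, t), 2.0 * t, t)        # usage split as 0, then 2t
c = update_exp(update_exp(b0, 0.5 * t, t), 1.5 * t, t)    # same total, different split
print(a, c)   # equal (up to floating-point rounding): only the sum of usage matters
```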

The last formula above also has a natural mathematical interpretation: the term (u1 + u2 + … + un - nt) can be viewed as the excess: the difference between the total gas actually used and the total gas intended to be used.

The current base fee is equal to

basefee = exp(excess / (8t))

Expressing it this way makes it clear that the excess cannot go outside a fairly narrow range: if it goes above 8t * 60, then the basefee becomes e^60, which is so high that no one can pay it, and if it goes below 0, the resource is basically free and the chain will be spammed until the excess goes back above zero.

The adjustment mechanism works in exactly these terms: it tracks the actual total (u1 + u2 + … + un), computes the targeted total (nt), and sets the price as an exponential of the difference. To make computation simpler, instead of e^x we use 2^x; in fact, we use an approximation of 2^x: the fake_exponential function in the EIP. The fake exponential is almost always within 0.3% of the actual value.

To prevent long periods of under-use from leading to long runs of 2x-full blocks, we add one extra feature: we do not let the excess go below zero. If actual_total ever drops below targeted_total, we simply set actual_total equal to targeted_total instead. In the extreme case (blob gas going all the way down to zero), this does break transaction-order-invariance, but the added safety makes it an acceptable compromise. Also note an interesting consequence of this multidimensional market: when proto-danksharding is first introduced, there will probably be few users initially, so for some period of time the cost of a blob will almost certainly be extremely cheap, even while “regular” Ethereum blockchain activity remains expensive.
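Putting the last two paragraphs together, here is a minimal floating-point sketch of the excess-tracking mechanism; the spec works in integers via fake_exponential, and the exact constants and units here are assumptions:

```python
def update_excess(prev_excess: int, blobs_used: int, target: int) -> int:
    # cumulative usage minus cumulative target, clamped at zero so long periods
    # of under-use do not bank up a discount
    return max(prev_excess + blobs_used - target, 0)

def blob_basefee(excess: int, target: int) -> float:
    # price is exponential in the excess; the 8*target divisor mirrors the
    # EIP-1559-style update above
    return 2 ** (excess / (8 * target))

excess, target = 0, 8     # e.g. a target of 8 blobs (~1 MB) per block
for used in [16, 16, 0, 0, 8]:
    excess = update_excess(excess, used, target)
    print(excess, blob_basefee(excess, target))
```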

The author argues that this fee adjustment mechanism is better than the current approach, so eventually every part of the EIP-1559 fee market should move to using it.

For a longer and more detailed explanation, see Dankrad’s post‌.

How does fake_exponential work?

For convenience, here is the code for fake_exponential:

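The original embeds the code as an image. Below is a reconstruction consistent with the quadratic described in the rest of this section; the exact signature and rounding in the EIP may differ slightly.

```python
def fake_exponential(numerator: int, denominator: int) -> int:
    # integer approximation of 2 ** (numerator / denominator), built from the
    # quadratic Q(x) = (x^2 + 2x + 3) / 3 applied to the fractional part
    cofactor = 2 ** (numerator // denominator)
    fractional = numerator % denominator
    return cofactor + (
        fractional * cofactor * 2 +
        fractional ** 2 * cofactor // denominator
    ) // (denominator * 3)

print(fake_exponential(5, 2))   # 2 ** 2.5 ≈ 5.66; output is coarse at small magnitudes
```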

Here is the core mechanism mathematically rephrased, with rounding removed:

fake_exp(x) = 2^floor(x) * Q(x - floor(x)), where Q(x) = (x^2 + 2x + 3) / 3

The goal is to stitch together many instances of Q(x), each shifted and scaled appropriately, one for each [2^k, 2^(k+1)] range. Q(x) itself is an approximation of 2^x for 0 ≤ x ≤ 1, chosen for the following properties:

  • Simplicity (it’s a quadratic)
  • Correctness at the left edge (Q(0) = 2^0 = 1)
  • Correctness at the right edge (Q(1) = 2^1 = 2)
  • Smooth slope (we ensure Q'(1) = 2 * Q'(0), so each shifted-and-scaled copy of Q has the same slope at its right edge as the next copy has at its left edge)

The last three properties give three linear equations in the three unknown coefficients, and they pin down the unique Q(x) given above.

The approximation is surprisingly good; fake_exponential gives answers within 0.3% of the actual value of 2^x for all but the smallest inputs:
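(The original shows a chart here; as a substitute, a quick floating-point check of the quadratic spline described above:)

```python
def fake_exp(x: float) -> float:
    k = int(x)                # floor, for x >= 0
    f = x - k
    return 2 ** k * (f * f + 2 * f + 3) / 3

xs = [i / 1000 for i in range(1, 5000)]
print(max(abs(fake_exp(x) - 2 ** x) / 2 ** x for x in xs))   # on the order of 0.3%
```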


What issues in proto-danksharding are still being debated?

NOTE: This section is easily out of date. Don’t trust it to give the latest thoughts on any particular issue.

  • All the major optimistic rollups use multi-round proofs, so they can use the (much cheaper) point evaluation precompile instead of the blob verification precompile. Anyone who really needs blob verification could implement it themselves: take the blob D and the versioned hash h as input, choose x = hash(D, h), compute y = D(x) using barycentric evaluation, and use the point evaluation precompile to verify that h(x) = y. So, do we really need the blob verification precompile, or could we remove it and just use point evaluation?
  • How well does the chain handle durable long-term 1 MB+ blocks? Should the target blob count be reduced in the first place if the risk is too great?
  • Should blobs be denominated in gas or ETH (burned)? Should there be other adjustments to the fee market?
  • Should the new transaction type be treated as a blob or an SSZ object, in the latter case changing the ExecutionPayload to a union type? (It’s a “do more work now” vs “do more work later” trade-off)
  • The exact details of the trusted setup implementation (technically beyond the scope of the EIP itself, as this setup is “just a constant” for implementers, but still needs to be done).
