Delphi Digital: The Industry’s Most Complete Ethereum Advanced Guide

Core points:

  • Ethereum is the only major protocol designed to become a scalable, unified settlement and data availability layer
  • Rollups scale computation while leveraging Ethereum’s security
  • All roads lead to the same endgame: centralized block production with decentralized, trustless block verification and censorship resistance.
  • Innovations such as proposer-builder separation and weak statelessness create a separation of powers (between block construction and verification), enabling scalability without sacrificing security or decentralization
  • MEV is now front and center – many designs aim to mitigate its harms and curb its centralizing tendencies
  • Danksharding combines multiple avenues of cutting-edge research to provide the scalable base layer needed for Ethereum’s rollup-centric roadmap
  • I do expect Danksharding to be implemented in our lifetime

Table of contents

Part 1 The Road to Danksharding

Original Data Sharding Design – Independent Shard Proposers

Data Availability Sampling (DAS)

KZG Commitment

KZG Commitments vs. Fraud Proofs

In-protocol Proposer-Builder Separation (PBS)

Censorship Resistance List (crList)

2D KZG strategy

Danksharding

Danksharding – Honest Majority Verification

Danksharding – Reconstruction

Danksharding – Private Random Sampling for Malicious-Majority Safety

Danksharding – Key Summary

Danksharding – Constraints on blockchain scaling

Proto-danksharding (EIP-4844)

Multidimensional EIP-1559

Part 2 History and State Management

Calldata gas cost reduction and calldata total limit (EIP-4488)

Limit historical data in executing clients (EIP-4444)

Restoring historical data

Weak Statelessness

Verkle Tries

State Expiry

Part 3 It’s All MEV’s Fault

Today’s MEV Supply Chain

MEV-Boost

Committee Driven MEV Smoothing

Single-slot Finality

Single Secret Leader Election

Part 4 Secrets of the Merge

Merged Clients

Wrapping Up

Introduction

I’ve been fairly skeptical about the timing of the Merge ever since Vitalik said that people born today have a 50-75% chance of living to the year 3000 and that he wants to live forever. But whatever, let’s have some fun and use this opportunity to take a closer look at Ethereum’s ambitious roadmap.

(after 1000 years)

This article is not a quick snack. If you want a broad yet nuanced view of Ethereum’s ambitious roadmap, give me an hour and I’ll save you months of work.

There is a lot of Ethereum research to keep track of, but it all ultimately weaves into one overarching goal – scaling computation without sacrificing decentralized verification.

Vitalik has a famous piece called “Endgame” – perhaps you’ve heard of it. In it, he acknowledges that scaling Ethereum requires some centralization. In blockchain, the C-word is scary, but it’s a reality. We just need to keep that power in check with decentralized and trustless verification. There is no compromise here.

Professional actors will build blocks on L1 and beyond. Ethereum maintains incredible security through simple decentralized verification, while rollups inherit their security from L1. Ethereum then provides settlement and data availability, allowing rollups to scale. All the research here ultimately aims to optimize these two roles, while making full verification of the chain easier than ever.

The following terms will come up roughly fifty-nine times each:

  • DA – Data Availability
  • DAS – Data Availability Sampling
  • PBS – Proposer-Builder Separation
  • PDS – Proto-danksharding
  • DS – Danksharding (Ethereum’s sharding design)
  • PoW – Proof of Work
  • PoS – Proof of Stake

Part 1 The Road to Danksharding

Hopefully you’ve heard by now that Ethereum has moved to a rollup-centric roadmap. No more execution shards – Ethereum will instead optimize for data-hungry rollups. This is achieved via data sharding (kinda Ethereum’s plan) or bigger blocks (Celestia’s plan).

The consensus layer does not interpret sharded data. It has only one job – make sure the data is available.

I will assume you are familiar with some basic concepts like rollups, fraud and ZK proofs, and why DA (Data Availability) is important. If you’re not familiar or just need a refresher, check out Can’s recent Celestia report.

Original Data Sharding Design – Independent Shard Proposers

The design described here has been deprecated, but is worth knowing as a background. For simplicity, I’ll call it “shard 1.0”.

Each of the 64 shards had its own proposer and committee, rotating through the validator set. Each committee would individually verify that its shard’s data was available. This wouldn’t be DAS (Data Availability Sampling) initially – it relied on an honest majority of each shard’s committee fully downloading the data.

This design introduces unnecessary complexity, a worse user experience, and new attack vectors. Reshuffling validators between shards gets risky.

It’s also hard to guarantee that voting finishes within a single slot without introducing very strict synchrony assumptions. The Beacon block proposer needs to collect the votes from all of the individual committees, and those can be delayed.

(In the original data sharding design, each shard is confirmed by committee votes; voting doesn’t always complete within a single slot, and shards can take up to two epochs to be confirmed)

DS (Danksharding) is completely different. Validators perform DAS to confirm that all of the data is available (no more separate shard committees). A specialized builder creates one big block containing the Beacon block and all of the shard data together, and it gets confirmed as a whole. PBS (Proposer-Builder Separation) is therefore necessary for DS to remain decentralized (building that big block is resource-intensive).

Data Availability Sampling (DAS)

Rollups will publish a lot of data, but we don’t want to burden nodes with downloading all of it. High resource requirements would hurt decentralization.

In contrast, DAS allows nodes (even light clients) to easily and securely verify that all data is available without downloading all data.

  • Naive solution – just check a random chunk of the block. If it looks fine, sign off. But what if you missed the one transaction that drains all of your ETH to some bad guy? That’s not safe (not safu).
  • Clever solution – erasure code the data first. The data is extended using Reed-Solomon codes, meaning the data is interpolated as a polynomial which we can then evaluate at additional points. That’s a mouthful, so let’s break it down.

Don’t worry if your math is rusty, here’s a crash course. (I promise the math isn’t that scary here – I had to watch some Khan Academy videos to write these parts, and even I get it now.)

A polynomial is an expression summing a finite number of terms of the form c·x^k. The degree is the highest exponent. For example, x^3 + 2x^2 + 5x – 4 is a degree-3 (cubic) polynomial. The key property: you can reconstruct any polynomial of degree d from any d+1 coordinates lying on it.

Now look at a concrete example. Below we have four chunks of data (d0 to d3). These chunks can be mapped to values of a polynomial at given points – for example, f(0) = d0. You then find the minimum-degree polynomial that passes through these values; since there are four chunks, we can find a degree-3 polynomial. We can then extend the data by adding four more values (e0 to e3) that lie on the same polynomial.

Remember that key polynomial property – we can reconstruct it from any four points, not just our original four blocks of data.
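To make this concrete, here is a minimal sketch of the idea in Python. It uses exact rational arithmetic over plain integers rather than the large prime field (and BLS12-381 machinery) Ethereum actually uses, and the data values are made up, but it demonstrates the extend-then-reconstruct property:

```python
from fractions import Fraction
from itertools import combinations

def lagrange_eval(points, x):
    """Evaluate the unique minimal-degree polynomial through `points` at `x`."""
    x = Fraction(x)
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / Fraction(xi - xj)
        total += term
    return total

# Four original data chunks, interpreted as polynomial evaluations at x = 0..3.
original = list(enumerate([7, 13, 42, 5]))

# "Erasure code" them: extend the same degree-3 polynomial to x = 4..7.
extended = [(x, lagrange_eval(original, x)) for x in range(4, 8)]
all_points = original + extended

# Key property: ANY 4 of the 8 points reconstruct the original 4 chunks.
for subset in combinations(all_points, 4):
    recovered = [lagrange_eval(list(subset), x) for x in range(4)]
    assert recovered == [Fraction(v) for _, v in original]

print("every 4-of-8 subset recovers the original chunks:",
      [int(v) for _, v in original])
```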

Back to our DAS. Now we just need to be sure that any 50% (4 of the 8 chunks) of the erasure-coded data is available. From that, we can reconstruct the entire block of data.

Therefore, an attacker would have to hide more than 50% of the data blocks in order to successfully fool the DAS node into thinking the data is available (it is not).

After many successful random samples, the probability that less than 50% of the data is actually available becomes very small. If we successfully sample the erasure-coded data 30 times, the probability that less than 50% is available is 2^-30 (roughly one in a billion).
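The arithmetic behind that sample count, sketched in Python (the target failure probability is a design choice here, not something fixed by the protocol):

```python
import math

def fooled_probability(samples: int) -> float:
    # If less than half of the erasure-coded data is actually available,
    # each uniformly random sample hits an available chunk with probability < 1/2.
    return 0.5 ** samples

def samples_needed(target_prob: float) -> int:
    # Smallest sample count so the chance of being fooled drops below target_prob.
    return math.ceil(math.log(target_prob, 0.5))

print(fooled_probability(30))   # ~9.3e-10, roughly one in a billion
print(samples_needed(1e-9))     # 30
```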

KZG Commitment

OK, so we did a bunch of random samples and they were all available. But we have one more question – was the data erasure coded correctly in the first place? Otherwise, maybe the block producer just appended 50% garbage when extending the block, and our sampling was a waste of time. In that case, we wouldn’t actually be able to reconstruct the data.

Ordinarily, we commit to large amounts of data using a Merkle root. This works well for proving that a set contains some piece of data.

However, we also need to know that all of the original and extended data lies on the same low-degree polynomial. A Merkle root cannot prove that. So if you go with the Merkle scheme, you also need fraud proofs to catch incorrect erasure coding.

(Merkle roots let us commit to the data and its extension, but they can’t tell us whether it all lies on the same low-degree polynomial)

Developers can address this issue in two directions:

  • Celestia is going the fraud-proof route. This requires someone to be watching, and if a block is incorrectly erasure coded, they submit a fraud proof to alert everyone. This needs the standard honest-minority assumption plus a synchrony assumption (i.e., besides someone sending me the fraud proof, I also have to assume I’m connected to the network and will receive it within some bounded time).
  • Ethereum and Polygon Avail are taking a new path – KZG commitments (a.k.a. Kate commitments). These remove the honest-minority and synchrony assumptions needed for fraud-proof safety (though those assumptions still exist for reconstruction, which we’ll get to shortly).

Other schemes exist, but they are less sought after. For example, you could use ZK proofs. Unfortunately, they are computationally impractical (for now). They are expected to improve over the next few years though, so Ethereum will likely move to STARKs down the road, since KZG commitments are not quantum resistant.

(I don’t think you guys are ready for post-quantum STARKs, but your next generation will love it)

Back to KZG commitments – these are a polynomial commitment scheme.

A commitment scheme is just a cryptographic way to verifiably commit to some value. The best analogy is putting a letter into a locked box and handing it to someone else. The letter can’t be changed once it’s inside, but it can be opened with the key and proven. The letter is your commitment, and the key is the proof.

In our case, we map all of the original and extended data onto an X,Y grid and then find the minimum-degree polynomial that fits them (this process is called Lagrange interpolation). This polynomial is what the prover commits to.

(KZG commitments let us commit to the data and its extension and prove that they all lie on the same low-degree polynomial)

A few key points:

  • First there is a polynomial f(x)
  • The prover forms a commitment C(f) to this polynomial
  • This relies on a trusted setup for the elliptic curve cryptography. For how that works, check out a great thread from Bartek.
  • For any evaluation y = f(z) of this polynomial, the prover can compute a proof π(f, z)
  • In plain terms: the prover hands these pieces to any verifier, and the verifier can confirm that the value at a given point (where the value represents the underlying data) correctly lies on the committed polynomial
  • This proves that our extension of the original data is correct, since all of the values lie on the same polynomial
  • Note: the verifier does not need the polynomial itself
  • Important properties – the commitment is O(1) in size, the proof is O(1) in size, and verification takes O(1) time. Even for the prover, generating the commitment and proofs is only O(d), where d is the degree of the polynomial.
  • In plain terms: even as d (the number of values) increases (i.e., the dataset grows with shard size), the commitment and proof sizes stay constant and the verification workload stays constant
  • Both the commitment and the proof are just a single elliptic curve element on a pairing-friendly curve (BLS12-381 here). In this case, each is only 48 bytes (really small).
  • So the prover’s commitment to a huge amount of original and extended data (represented as many values on a polynomial) is still only 48 bytes, and each proof is only 48 bytes too
  • In simpler terms: it scales pretty well

Here, KZG roots (a polynomial commitment) are analogous to Merkle roots (a vector commitment).

The original data comprises the polynomial’s values at positions f(0) through f(3); we then extend it by evaluating the polynomial at f(4) through f(7). All of the points f(0) through f(7) are guaranteed to lie on the same polynomial.

In one sentence: DAS lets us check that the erasure-coded data was made available. KZG commitments prove to us that the original data was extended correctly, and commit to all of it.

Well, all the algebra ends here.

KZG Commitments vs. Fraud Proofs

Now that we know how KZG works, let’s go back and compare the two approaches.

The disadvantages of KZG commitments are that they are not post-quantum secure and that they require a trusted setup. Neither is worrying: STARKs offer a post-quantum alternative, and the trusted setup (which is open to participation) only needs a single honest participant.

The advantage of KZG over fraud proofs is lower latency (though, as mentioned before, Gasper won’t have fast finality anyway), and it guarantees proper erasure coding without introducing the synchrony and honest-minority assumptions inherent to fraud proofs.

However, considering that Ethereum reintroduces these assumptions for block reconstruction anyway, you never actually remove them entirely. The DA layer always has to plan for the case where a block was initially made available but the nodes then need to communicate with each other to piece it back together. Reconstruction requires two assumptions:

  • Enough nodes (light or full) are sampling the data that, collectively, they hold enough of it to piece it back together. This is a fairly weak and unavoidable honest-minority assumption, and nothing to worry about much.
  • The synchrony assumption is reintroduced – nodes need to communicate within some bounded time in order to put the block back together.

Ethereum validators fully download the shard data in PDS (Proto-danksharding), whereas in DS they only perform DAS (downloading their assigned rows and columns). Celestia requires validators to download the entire block.

Note that in either case we need the synchrony assumption for reconstruction – if a block is only partially available, full nodes must communicate with other nodes to piece it together.

If Celestia ever wants to move from requiring validators to download all of the data to merely doing DAS (though such a transition is not currently planned), then the latency benefit of KZG becomes apparent. They would also need to implement KZG commitments – waiting for fraud proofs would mean greatly increasing the block interval, and the risk of validators voting for an incorrectly coded block would be too high.

I recommend the following reading for a deeper understanding of how KZG commitments work:

  • The (relatively easy to understand) basics of elliptic curve cryptography
  • Exploring Elliptic Curve Pairing by Vitalik
  • KZG Polynomial Commitment by Dankrad
  • The principles of trusted setups, by Vitalik

In-protocol Proposer-Builder Separation (PBS)

Today’s consensus nodes (miners) and post-merge consensus nodes (validators) serve two roles: they build the actual block, and they then propose it to other consensus nodes for verification. Miners “vote” by building on top of the previous block; after the Merge, validators will vote directly on whether a block is valid or invalid.

PBS (Proposer-Builder Separation) splits these roles – it explicitly creates a new in-protocol builder role. Specialized builders put blocks together and bid for proposers (validators) to select their block. This counters the centralizing power of MEV.

Recall Vitalik’s “Endgame” – all roads lead to centralized block production with trustless and decentralized verification. PBS embraces this. We need only one honest builder for network liveness and censorship resistance (two would make this more effective), whereas the validator set needs an honest majority. PBS makes the proposer’s role as simple as possible, supporting validator decentralization.

Builders receive priority fee tips, plus any MEV they can extract. In an efficient market, competitive builders will bid up to the full value they can extract from a block (minus amortized costs such as expensive hardware). All of this value trickles down to the decentralized validator set – which is exactly what we want.

The specific PBS implementation is still under discussion, but a two-slot PBS might look like this:

  • Builders submit bids along with commitments to their block headers
  • The beacon block proposer selects the winning block header and bid. The proposer is paid the bid unconditionally, even if the builder fails to produce the block body.
  • Committees of attesters confirm the winning block header
  • The builder reveals the winning block body
  • A separate committee of attesters confirms the winning body (or attests to its absence, if the winning builder fails to reveal it)

The proposer is selected from the validator set using the standard RANDAO mechanism. We then use a commit-reveal scheme in which the full block body is not revealed until the committee has confirmed the block header.

Commit-reveal is more efficient (broadcasting hundreds of full block bodies could overwhelm the bandwidth of the p2p layer), and it also prevents MEV theft. If a builder submitted their full block, another builder could see it, figure out its strategy, incorporate it, and quickly publish a better block. A sophisticated proposer could likewise detect the MEV strategy used and replicate it without compensating the builder. If this MEV stealing became an equilibrium, it would incentivize builders and proposers to merge, so we use commit-reveal to avoid it.

After the proposer selects the winning block header, the committee confirms it and locks it into the fork choice rule. The winning builder then publishes their full “builder block” body. If published in time, the next committee attests to it. If the builder fails to publish in time, they still pay the proposer in full (and lose all the MEV and fees). This unconditional payment removes the need for the proposer to trust the builder.

The downside of this two-slot design is latency. Post-merge blocks will have a fixed 12-second slot, so without introducing new assumptions we would need 24 seconds of total block time (two 12-second slots). 8 seconds per slot (a 16-second block time) seems like a safe compromise, although research is ongoing.
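For intuition, here is a toy sketch of that commit-reveal flow in Python. The names and data structures are illustrative only, not the actual spec, but it captures the key properties: the proposer picks a bid without seeing the body, payment is unconditional, and the revealed body must match the committed header:

```python
import hashlib
from dataclasses import dataclass

def commit(body: bytes) -> bytes:
    # Stand-in for the builder's header commitment to its block body.
    return hashlib.sha256(body).digest()

@dataclass
class Bid:
    builder: str
    header_commitment: bytes
    amount_gwei: int

def select_winner(bids: list[Bid]) -> Bid:
    # The proposer simply takes the highest bid; it never sees the body here.
    return max(bids, key=lambda b: b.amount_gwei)

def attest_reveal(winner: Bid, revealed_body: bytes | None) -> str:
    if revealed_body is None:
        # Builder failed to reveal: it still pays the proposer, loses fees/MEV.
        return "body missing: builder pays anyway, no block"
    if commit(revealed_body) != winner.header_commitment:
        return "body does not match committed header: rejected by fork choice"
    return "body valid: block accepted"

body = b"txs + blobs from builder A"
bids = [Bid("A", commit(body), 120), Bid("B", commit(b"other body"), 90)]
winner = select_winner(bids)
print(winner.builder, "->", attest_reveal(winner, body))
```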

Censorship Resistance List (crList)

Unfortunately, PBS gives builders a lot of power to censor transactions. Maybe the builders just don’t like you, so they ignore your transaction. Maybe they’re so good at their job that other builders have given up, or maybe they’ll charge a very high price for a block simply because they really don’t like you.

crLists prevent this. The specific implementation is again an open design space, but “hybrid PBS” seems to be the most popular: proposers specify a list of all eligible transactions they see in the mempool, and the builder is forced to include them (unless the block is full).

  • The proposer publishes a crList and a crList summary containing all eligible transactions.
  • Builders create a proposed block body, then submit a bid that includes a hash of the crList summary to prove they have seen it.
  • The proposer accepts the winning builder’s bid and block header (they still haven’t seen the block body).
  • The builder publishes their block, including proof that they have included all transactions from the crList or that the block is full. Otherwise, the block will not be accepted by the fork choice rule.
  • Attesters check the validity of the published body

There are still some open questions to iron out here. For example, the dominant economic strategy is for the proposer to submit an empty list – that way, even builders who should be censoring can win the auction as long as they bid highest. There are ways to address this (and other issues); I’m just emphasizing that the design here isn’t set in stone.
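A toy version of the crList check in Python (the names and the “block is full” condition are simplified assumptions, not the actual rule):

```python
def satisfies_crlist(block_txs: set[str], crlist: set[str],
                     block_gas_used: int, block_gas_limit: int) -> bool:
    # A revealed block is only valid if it includes every crList transaction,
    # or if it is full and simply had no room left.
    block_is_full = block_gas_used >= block_gas_limit
    return block_is_full or crlist.issubset(block_txs)

print(satisfies_crlist({"tx1", "tx2"}, {"tx1"}, 10_000_000, 30_000_000))  # True
print(satisfies_crlist({"tx2"}, {"tx1"}, 10_000_000, 30_000_000))         # False (censored)
print(satisfies_crlist({"tx2"}, {"tx1"}, 30_000_000, 30_000_000))         # True (block full)
```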

2D KZG strategy

We saw how KZG commitments let us commit to data and prove that it was extended correctly. However, this was a simplification of what Ethereum actually does. It will not commit to all of the data in one KZG commitment – a single block will use many KZG commitments.

We already have dedicated builders, so why not just have them create one giant KZG commitment? The problem is that this would require a powerful supernode to reconstruct. We can accept supernode requirements for the initial construction, but we need to avoid making that assumption for reconstruction. We need ordinary entities to be able to handle reconstruction, so splitting it into many KZG commitments makes sense. Given the amount of data at hand, reconstruction may even be fairly commonplace, or a baseline assumption in this design.

To make reconstruction easier, each block includes m shard blobs encoded in m KZG commitments. Done naively, this leads to a ton of sampling – you would perform DAS on every shard blob to know it’s available (requiring m×k samples, where k is the number of samples per blob).

So, Ethereum will use a 2D KZG scheme. We again use Reed-Solomon codes to extend the m commitments to 2m commitments.

We make it a 2D scheme by extending additional KZG commitments (here 256–511) that lie on the same polynomial as 0–255. Now we just perform DAS over the table above to ensure the availability of all shard data.

2D sampling requires 75% of the data to be available (versus the 50% discussed earlier), which means we need to draw a larger fixed number of samples. The simple 1D scheme above needs 30 samples; here we need 75 samples to achieve a comparable probability of guaranteeing an available block.

Shard 1.0 (with a 1D KZG commitment scheme per shard) needs only 30 samples per shard, but you have to sample all 64 shards, so a full check requires 1,920 samples. Each sample is 512 B, so that’s:

(512 B x 64 shards x 30 samples) / 16 seconds = 60 KB/s bandwidth

In reality, validators are assigned at random to check shards, rather than any one validator checking all of them.

A unified block with the 2D KZG scheme makes checking full DA incredibly easy – just take 75 samples of the single unified block:

(512 B x 1 block x 75 samples) / 16 seconds = 2.5 KB/s bandwidth
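The same figures, reproduced as a quick calculation (the sample size, sample counts, and 16-second slot time are taken straight from the numbers above):

```python
SAMPLE_BYTES = 512
SLOT_SECONDS = 16

def bandwidth_kbps(blocks: int, samples_per_block: int) -> float:
    # Bytes sampled per slot, converted to KB/s.
    return blocks * samples_per_block * SAMPLE_BYTES / SLOT_SECONDS / 1024

print(f"shard 1.0 (64 shards x 30 samples): {bandwidth_kbps(64, 30):.1f} KB/s")
print(f"danksharding (1 block x 75 samples): {bandwidth_kbps(1, 75):.2f} KB/s")
# ~60 KB/s and ~2.3 KB/s, which the text rounds to 2.5 KB/s.
```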

Danksharding

PBS was initially designed to blunt the centralizing force of MEV on the validator set. However, Dankrad recently realized that this design unlocks a far better sharding construct – DS.

DS leverages the specialized builder to create a tighter integration of the Beacon Chain execution block and shards. We now have one builder creating the entire block together, one proposer, and one committee voting on it at a time. DS would be infeasible without PBS – regular validators couldn’t handle the massive bandwidth of a block full of rollups’ data blobs.

Shard 1.0 had 64 independent committees and proposers, which allowed each shard to fail individually. The tighter integration lets us ensure complete data availability (DA) in one go. The data is still “sharded” under the hood, but from a practical standpoint, shards start to feel more like chunks of data, which is great.

Danksharding – Honest Majority Verification

Let’s look at how validators attest that the data is available:

(Each validator downloads and checks a couple of randomly assigned rows and columns of the extended data, and only attests to the block if they are fully available)

This relies on an honest majority of validators – as an individual validator, the fact that my rows and columns are available is not enough to give me statistical confidence that the whole block is available. We need an honest majority to reach that conclusion. Decentralized verification matters.

Note that this is not the same as the 75 random samples discussed earlier. Private random sampling means that ordinary individuals will be able to easily check availability themselves (e.g., I can run a DAS light client and know blocks are available). Validators, however, will continue to use the row-and-column approach to check availability and to guide block reconstruction.

Danksharding – Reconstruction

As long as 50% of any individual row or column is available, it can easily be fully reconstructed by the validators sampling it. When they reconstruct missing chunks in a row or column, they redistribute those chunks to the orthogonal lines. This helps other validators reconstruct any missing chunks from the rows and columns they intersect, as needed.

The safe assumptions for rebuilding a usable block here are:

  • There are enough nodes performing sampling requests that, together, they hold enough data to reconstruct the block
  • A synchrony assumption among the nodes broadcasting their respective chunks of the block

So, how many nodes is enough? A rough estimate is about 64,000 individual validator instances (there are more than 380,000 today). This is also a very conservative calculation that assumes no overlap among nodes run by the same operator (far from the case, since validators are capped at 32 ETH per instance). If you sample more than 2 rows and 2 columns, the intersections increase your chance of collective retrieval. This starts to scale quadratically – if operators are running, say, 10 or 100 validators each, the 64,000 requirement can drop by orders of magnitude.

If the number of online validators starts to get very low, DS can be set to automatically reduce the shard blob count, lowering the safety assumption to a safe level.

Danksharding – Private Random Sampling for Malicious-Majority Safety

We saw that DS validation relies on an honest majority to attest to blocks. As an individual, I can’t prove a block is available just by downloading a couple of rows and columns. Private random sampling, however, gives this guarantee without trusting anyone. This is the case of a node checking the 75 random samples discussed earlier.

DS won’t include private random sampling initially, as this is a very difficult problem to solve in terms of networking (PSA: maybe you can help them!).

The “private” part is important, because if an attacker can de-anonymize you, they can fool a small number of sampling nodes – they can return just the exact chunks you request and withhold the rest. In that case, your own samples alone can’t tell you that the rest of the data was withheld.

Danksharding – Key Summary

DS is very exciting, and not just because it has a great name. It finally delivers on Ethereum’s vision of a unified settlement and DA layer. This tight coupling of beacon blocks and shard data gets close to the effect of not being sharded at all.

In fact, let’s define why this is even considered “sharded”. The only sharding here is the simple fact that validators are not responsible for downloading all of the data. That’s it.

So if you’re now questioning whether this is real sharding, you’re not crazy. That’s why PDS (which we’ll get to shortly) is not considered “sharding” (despite having “sharding” in its name – yes, I know it’s confusing). PDS requires each validator to fully download all the blobs in order to attest to their availability. DS then introduces sampling, so individual validators only download pieces of the data.

Minimal sharding means a simpler design than shard 1.0 (and so, hopefully, a faster ship date). Simplifications include:

  • Compared to the shard 1.0 spec, the DS spec is likely hundreds of lines shorter (and thousands shorter on the client side)
  • No more shard committee infrastructure – committees only need to vote on the main chain
  • No need to track individual shard blob confirmations – now everything is either confirmed in the main block, or it isn’t

A nice result of this is a unified fee market for data. Shard 1.0 – with different proposers making different blocks – would have fragmented it.

Removing shard committees also makes bribery far harder. Each DS validator votes on the full block once per epoch, so data is immediately confirmed by 1/32 of the entire validator set (32 slots per epoch). Shard 1.0 validators also vote once per epoch, but each shard has its own committee that gets reshuffled, so each shard is only confirmed by 1/2048 of the validator set (1/32 split across 64 shards).

As discussed, unified blocks with the 2D KZG commitment scheme also make DAS far more efficient: shard 1.0 needed 60 KB/s of bandwidth to check DA across all shards, while DS only needs 2.5 KB/s.

DS also opens up an exciting possibility – synchronous calls between ZK-rollups and L1 Ethereum execution. Transactions from shard blobs can be instantly confirmed and written to L1, because everything happens within the same beacon chain block. Shard 1.0 ruled this out due to separate shard confirmations. This opens up an exciting design space that could be very valuable for things like shared liquidity (e.g., dAMM).

Danksharding – Constraints on blockchain scaling

Modular layers scale elegantly – more decentralization brings more scale. This is fundamentally different from what we see today. Adding more nodes to the DA layer safely increases data throughput (i.e., more room for rollups).

There are still limits to the scalability of blockchains, but we can improve by orders of magnitude compared to today. A secure and scalable base layer allows execution to be scaled quickly. Improvements in data storage and bandwidth will also increase data throughput over time.

It is certainly possible to exceed the DA throughput envisioned here, but it’s hard to say exactly where the ceiling lies. There isn’t a bright red line, but we can point to the areas where supporting certain assumptions starts to get difficult.

  • Data storage – this is about DA and data retrievability. The consensus layer’s role is not to guarantee that data can be retrieved forever; its role is to make the data available long enough that anyone willing to download it can do so, satisfying our security assumptions. After that, it gets dumped anywhere – which is comfortable, because history is a 1-of-N trust assumption, and we’re not really talking about that much data in the grand scheme of things. However, as throughput climbs, it can drift into uncomfortable territory.
  • Validators – DAS needs enough nodes to collectively reconstruct blocks. Otherwise, an attacker can sit back and answer only the queries they receive; if the queries they answer aren’t enough to reconstruct the block, they can withhold the rest and we’re stuck. To safely increase throughput, we either need more DAS nodes or higher data bandwidth requirements per node. For the throughput discussed here, this isn’t an issue. Still, it could get uncomfortable if throughput were to increase by further orders of magnitude over this design.

Note that builders are not the bottleneck. They need to quickly generate KZG proofs for 32 MB of data, so they’ll want a GPU or a reasonably powerful CPU, plus at least 2.5 Gbit/s of bandwidth. In any case, it’s a specialized role, and for builders this is a negligible business cost.

Proto-danksharding (EIP-4844)

DS is great, but we’ll have to be patient. PDS is here to tide us over – it implements the necessary forward-compatible steps on a tight schedule (targeting the Shanghai hard fork) to deliver an order-of-magnitude scaling boost in the meantime. However, it does not actually implement data sharding (i.e., validators still individually download all of the data).

Today’s rollups use L1 calldata for data, which lives on-chain forever. However, rollups only need DA for some short window, so that anyone who wants the data has ample time to download it.

EIP-4844 introduces a new blob-carrying transaction format that rollups will use for data going forward. Blobs carry a large amount of data (~125 KB each), and they are much cheaper than an equivalent amount of calldata. Blobs are pruned from nodes after about a month, which keeps storage requirements down while still leaving enough time to satisfy our DA security assumptions.

For context, current Ethereum blocks average around 90 KB (of which calldata is about 10 KB). PDS frees up far more DA bandwidth for blobs (target ~1 MB, max ~2 MB) because they are pruned after a month – they don’t burden nodes forever.

A blob is a vector of 4096 field elements of 32 bytes each. PDS allows up to 16 blobs per block, while DS increases this to 256.

PDS DA bandwidth: 4096 field elements x 32 bytes x 16 blobs = 2 MiB per block (max), with a 1 MiB target

DS DA bandwidth: 4096 field elements x 32 bytes x 256 blobs = 32 MiB per block (max), with a 16 MiB target

Each step is an order of magnitude expansion. PDS still requires consensus nodes to fully download data, so it is conservative. DS distributes the load of storing and disseminating data among validators.

Here are some of the goodies EIP-4844 introduces on the way to DS:

  • A new transaction format that carries blobs
  • KZG commitments to blobs
  • All execution layer logic required by DS
  • All execution/consensus cross-validation logic required by DS
  • Layer separation between beacon block validation and DAS blobs
  • Most of the beacon block logic required by DS
  • Self-adjusting independent gas prices for blobs (multi-dimensional EIP-1559 and exponential pricing rules)

DS will also add:

  • PBS
  • DAS
  • 2D KZG strategy
  • Proof-of-custody or similar in-protocol requirement that enables each validator to verify the availability of shard data for a specific portion of each block (approximately one month)

Note that blobs are introduced as a new transaction type on the execution chain, but they don’t impose an additional burden on execution. The EVM only sees the commitments attached to the blobs. The execution-layer changes introduced by EIP-4844 are also forward compatible with DS, so no further execution-layer changes are needed later. The upgrade from PDS to DS only requires consensus-layer changes.

In PDS, blobs are fully downloaded by consensus clients. Blobs are referenced, but not fully encoded, in the beacon block body. Rather than embedding the full contents in the body, the blob contents are propagated separately as a “sidecar”. Each block has one blob sidecar, which is fully downloaded in PDS and will later be sampled via DAS by DS validators.

We discussed earlier how to commit to blobs using KZG polynomial commitments. However, rather than using the KZG commitment directly, EIP-4844 exposes its versioned hash: a single 0x01 byte (representing the version) followed by the last 31 bytes of the SHA256 hash of the KZG commitment.

We do this for EVM compatibility and forward compatibility:

  • EVM compatibility – KZG commitments are 48 bytes, while the EVM more naturally works with 32-byte values
  • Forward compatibility – if we ever switch from KZG to something else (like STARKs for quantum resistance), the commitments can stay at 32 bytes
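A minimal sketch of that construction in Python. The helper mirrors the versioned-hash rule described above (version byte plus the last 31 bytes of the SHA256 of the commitment); the zeroed 48-byte input is just a placeholder, not a real KZG commitment:

```python
import hashlib

VERSIONED_HASH_VERSION_KZG = b"\x01"

def kzg_to_versioned_hash(kzg_commitment: bytes) -> bytes:
    # 0x01 version byte + last 31 bytes of SHA256(commitment) -> 32 bytes,
    # which fits the EVM's native word size.
    assert len(kzg_commitment) == 48  # a compressed BLS12-381 G1 point
    return VERSIONED_HASH_VERSION_KZG + hashlib.sha256(kzg_commitment).digest()[1:]

fake_commitment = bytes(48)  # placeholder; a real one comes from a KZG library
vh = kzg_to_versioned_hash(fake_commitment)
print(len(vh), vh.hex())     # 32-byte versioned hash starting with 01
```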

Multidimensional EIP-1559

PDS finally creates a purpose-built data layer – blobs get their own distinct fee market, with independently floating gas prices and limits. So even if some NFT project sells a bunch of monkey JPEGs on L1, your rollup’s data costs won’t spike (though proof settlement costs would). This acknowledges that the dominant cost for any rollup today is posting its data to L1 (not proofs).

The gas fee market is unchanged, and blobs are added as a new market:

The blob fee is charged in gas, but it is a variable amount that adjusts according to its own EIP-1559 mechanism. The long-term average number of blobs per block should be equal to the target.

There are actually two parallel auctions here – one for computation and one for DA. This is a huge step forward in efficient resource pricing.

Some interesting design questions pop up here. For example, it may make sense to switch both gas and blob pricing from the current linear EIP-1559 mechanism to a new exponential EIP-1559 mechanism. The current implementation doesn’t average out to our target block size – today the base fee adjusts imperfectly, and observed gas usage per block exceeds the target by roughly 3% on average.
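To illustrate the exponential idea, here is a hedged sketch: the fee depends on accumulated excess blob usage above a target, so sustained over-target demand drives the price up exponentially until usage averages back down to the target. The constants below are made up for the example and are not protocol values:

```python
import math

TARGET_BLOBS_PER_BLOCK = 8
UPDATE_FRACTION = 32      # controls how fast the fee reacts (illustrative)
MIN_BLOB_FEE = 1          # floor price (illustrative units)

def next_excess(excess: int, blobs_in_block: int) -> int:
    # Excess accumulates whenever a block uses more than the target,
    # and drains back toward zero when blocks are under target.
    return max(0, excess + blobs_in_block - TARGET_BLOBS_PER_BLOCK)

def blob_fee(excess: int) -> float:
    return MIN_BLOB_FEE * math.exp(excess / UPDATE_FRACTION)

excess = 0
for blobs in [16, 16, 16, 16, 8, 0, 0]:   # a burst of full blocks, then quiet
    excess = next_excess(excess, blobs)
    print(f"blobs={blobs:2d}  excess={excess:3d}  fee={blob_fee(excess):8.2f}")
```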

Part 2 History and State Management

A quick recap of the basic concepts:

  • History – everything that has ever happened on-chain. You can stick it on a hard drive since it doesn’t need fast access. In the long run it’s a 1-of-N honest-actor assumption.
  • State – a snapshot of all current account balances, smart contracts, etc. Full nodes (currently) need this data to validate transactions. It’s too big for RAM, and hard drives are too slow – it belongs on an SSD. High-throughput blockchains bloat this state far faster than ordinary people can keep up with on a laptop. If everyday users can’t hold the state, they can’t fully verify, and decentralization goes out the window.

In short, these things can get really big, making it hard to run a node if nodes are required to hold all of this data. If running a node is too hard, ordinary people won’t do it. That sucks, so we need to make sure it doesn’t happen.

Calldata gas cost reduction and calldata total limit (EIP-4488)

PDS is a nice stepping stone toward DS, and it satisfies many of DS’s eventual requirements. Implementing PDS within a reasonable timeframe can pull the DS timeline forward.

An easier fix to implement is EIP-4488. It’s less elegant, but it still addresses the immediate cost pressure. Unfortunately, it doesn’t provide any of the steps toward DS, so that catch-up work remains. If PDS starts to feel slower than we’d like, it might make sense to quickly pass EIP-4488 (it’s only a few lines of code changes) and then follow up with PDS, say, six months later. We’re free to move on both timelines.

EIP-4488 has two main components:

  • Reduce calldata cost from 16 gas per byte to 3 gas per byte
  • Add a calldata limit of 1 MB per block, plus an extra 300 bytes per transaction (a theoretical max of roughly 1.4 MB in total)

The cap is necessary to bound the worst case – a block stuffed full of calldata would otherwise reach 18 MB, far more than Ethereum can handle. EIP-4488 increases Ethereum’s average data capacity, but its burst data capacity actually shrinks slightly because of the calldata cap (versus today’s worst case of 30 million gas / 16 gas per calldata byte = 1.875 MB).
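A quick check of those capacity figures (the 21,000-gas minimum per transaction is my assumption for deriving the rough 1.4 MB ceiling):

```python
GAS_LIMIT = 30_000_000
TX_BASE_GAS = 21_000                 # minimum gas per transaction (assumed here)

# Today's burst capacity: a block stuffed with nothing but calldata.
print(GAS_LIMIT / 16 / 1e6, "MB at 16 gas per calldata byte")    # ~1.875 MB

# EIP-4488's cap: ~1 MB per block plus 300 bytes per transaction.
max_txs = GAS_LIMIT // TX_BASE_GAS                               # ~1428 txs
print((1_000_000 + 300 * max_txs) / 1e6, "MB theoretical max")   # ~1.43 MB
```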

EIP-4488 results in a much higher sustained load than PDS, because it’s still calldata versus blobs (which can be pruned after a month). With EIP-4488, history would grow meaningfully faster, becoming a bottleneck for running nodes. Even if EIP-4444 were implemented alongside EIP-4488, it only prunes execution payload history after a year. PDS’s lower sustained load is clearly preferable.

Limit historical data in executing clients (EIP-4444)

EIP-4444 allows clients to locally prune historical data (headers, bodies, and receipts) older than one year, and it mandates that clients stop serving this pruned history over the p2p layer. Pruning history reduces users’ disk storage requirements (currently hundreds of gigabytes and growing).

This was already important, but it becomes essentially mandatory if EIP-4488 is implemented (since it greatly accelerates the growth of history). Hopefully this gets done relatively soon. Some form of history expiry will be needed eventually, so now is a good time to deal with it.

History is required for a full sync of the chain, but not for validating new blocks (that only needs the state). So once a client has synced to the tip of the chain, historical data is only retrieved when explicitly requested over JSON-RPC or when a peer attempts to sync the chain. With EIP-4444 in place, we’ll need alternative solutions for these.

Clients will no longer be able to “full sync” over devp2p as they do today – instead they will “checkpoint sync” from a weak subjectivity checkpoint, which they treat as the genesis block.

Note that weak subjectivity isn’t an additional assumption – it comes with the move to PoS regardless. Long-range attacks already mean syncing must use a valid weak subjectivity checkpoint. The assumption here is simply that clients won’t sync from an invalid or stale checkpoint. This checkpoint must fall within the window before we start pruning history (here, within a year), otherwise the p2p layer won’t be able to serve the required data.

This will also reduce the network’s bandwidth usage as more clients adopt lightweight sync strategies.

Restoring historical data

EIP-4444 prunes history after a year, which sounds nice, and PDS prunes blobs even faster (after about a month). Both are necessary, because we can’t require nodes to store all of this data and stay decentralized:

  • EIP-4488 – likely needs ~1 MB per slot over the long term, adding ~2.5 TB of storage per year
  • PDS – targets ~1 MB per slot, adding ~2.5 TB of storage per year
  • DS – targets ~16 MB per slot, adding ~40 TB of storage per year (the rough math is sketched below)
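The back-of-envelope math behind those yearly figures, assuming 12-second slots:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600
SLOTS_PER_YEAR = SECONDS_PER_YEAR // 12          # ~2.63 million slots

def tb_per_year(mb_per_slot: float) -> float:
    return mb_per_slot * SLOTS_PER_YEAR / 1e6

print(f"EIP-4488 / PDS (~1 MB per slot): {tb_per_year(1):.1f} TB/year")   # ~2.6
print(f"DS target (~16 MB per slot):     {tb_per_year(16):.1f} TB/year")  # ~42
# The text rounds these to ~2.5 TB and ~40 TB.
```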

So where does this data go? Do we still need it? Yes, but note that losing historical data is not a risk to the protocol – only to individual applications. Permanently maintaining all of this consensus data therefore shouldn’t be the Ethereum core protocol’s job.

So, who will store this data? Here are some potential contributors:

  • Individual and Institutional Volunteers
  • Block explorers (like etherscan.io), API providers and other data services
  • Third-party indexing protocols such as TheGraph can create incentivized marketplaces where clients pay servers for historical data with Merkle proofs
  • Clients in the Portal Network (currently in development) can store random parts of the chain’s history, and the Portal Network automatically directs data requests to the nodes that own the data
  • BitTorrent, for example, automatically generates and distributes a 7GB file containing blob data for each day’s blocks
  • Certain application protocols (like rollup) can require their nodes to store parts of history related to their application

Long-term data storage is a relatively easy problem because it’s a 1-of-N trust assumption, as we discussed earlier. We are many years away from this being the ultimate limit on blockchain scalability.

Weak Statelessness

Well, we’ve got a good handle on managing history, but what about state? This is actually the main bottleneck for raising Ethereum’s TPS today.

Full nodes take the pre-state root, execute every transaction in a block, and check that the post-state root matches what the block claims. To know whether those transactions are valid, they currently need the state on hand to check against.

Enter statelessness – not needing the state on hand to perform your role. Ethereum is working towards “weak statelessness”, meaning state is not required to validate blocks, but it is required to build them. Validation becomes a pure function – give me a block in complete isolation and I can tell you whether it’s valid. Basically like this:

It’s acceptable that builders still need state, thanks to PBS – they’ll be more centralized, high-resource entities anyway. Our focus is on decentralizing validators. Weak statelessness gives builders slightly more work and validators far less work. A great trade.

We achieve this magical stateless execution with witnesses. Witnesses are proofs of correct state access, and builders will start including them in every block. Validating a block doesn’t actually require the whole state – you only need the state being read or affected by the transactions in that block. Builders will include the pieces of state touched by a given block’s transactions, and they will use witnesses to prove they accessed that state correctly.

Let’s take an example. Alice wants to send 1 ETH to Bob. In order to verify the block of this transaction, I need to know:

  • Before the transaction – Alice has 1 ETH
  • Alice’s public key – so I can know the signature is correct
  • Alice’s nonce – so I can know the transactions were sent in the correct order
  • After executing the transaction, Bob gains 1 ETH and Alice loses 1 ETH

In a weakly stateless world, builders add the witness data described above to the block and prove its accuracy. Validators receive the block, execute it, and decide whether it’s valid. That’s it.
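Here is a toy sketch of “validation as a pure function” in Python. Real witnesses are Verkle proofs over fragments of the state trie; here the “root” is just a hash of the touched accounts so the example stays self-contained and runnable:

```python
import hashlib, json
from dataclasses import dataclass

def toy_root(state: dict) -> str:
    # Stand-in for a state root: a hash of the touched accounts.
    return hashlib.sha256(json.dumps(sorted(state.items())).encode()).hexdigest()

@dataclass
class StatelessBlock:
    pre_state_root: str      # claimed pre-state root for the touched accounts
    post_state_root: str     # claimed post-state root
    txs: list                # [(sender, receiver, amount), ...]
    touched_state: dict      # the only state fragment the validator needs

def validate(block: StatelessBlock) -> bool:
    # 1. Witness check: the provided fragment matches the committed pre-state.
    if toy_root(block.touched_state) != block.pre_state_root:
        return False
    # 2. Re-execute against the fragment only; no local state database required.
    state = dict(block.touched_state)
    for sender, receiver, amount in block.txs:
        if state.get(sender, 0) < amount:
            return False
        state[sender] -= amount
        state[receiver] = state.get(receiver, 0) + amount
    # 3. The result must match the builder's claimed post-state.
    return toy_root(state) == block.post_state_root

pre = {"alice": 1, "bob": 0}
post = {"alice": 0, "bob": 1}
block = StatelessBlock(toy_root(pre), toy_root(post), [("alice", "bob", 1)], pre)
print(validate(block))   # True
```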

From a validator’s perspective, here are some implications:

  • Gone is the huge SSD requirement for holding state – the key bottleneck to scaling today.
  • Bandwidth requirements tick up a bit, since you now also download witness data and proofs. This would be a bottleneck with Merkle-Patricia trees, but not with the Verkle tries we’ll get to shortly.
  • You still execute every transaction to fully validate. Statelessness acknowledges that this is not currently the bottleneck for scaling Ethereum.

Weak statelessness also lets Ethereum relax its self-imposed limits on execution throughput, since state bloat is no longer a pressing concern. Raising the gas limit by roughly 3x could be reasonable.

At that point, most user execution will be on L2, but higher L1 throughput benefits them too. Rollups rely on Ethereum for DA (posting to shards) and settlement (which requires L1 execution). As Ethereum scales its DA layer, the amortized cost of publishing proofs may become a larger share of rollup costs (especially for ZK-rollups).

Verkle Tries

We glossed over how these witnesses actually work. Ethereum currently uses Merkle-Patricia trees for state, but the required Merkle proofs would be far too large for these witnesses to be feasible.

Ethereum will turn to Verkle tries to store state. Verkle proofs are much more efficient, so they serve as viable witnesses for weak statelessness.

First, let’s review what a Merkle tree looks like. Every transaction is hashed – these hashes at the bottom are the “leaves”. Each hash above them is a “node”, formed by hashing its two children. The final hash at the top is the “Merkle root”.

This data structure makes it possible to prove that a transaction is included without downloading the whole tree. For example, to verify that transaction H4 is included, you only need H3, H12, and H5678 as the Merkle proof. We already have H12345678 from the block header. A light client can therefore ask a full node for those hashes and hash them together along the route up the tree. If the result is H12345678, we have proven that H4 is in the tree.
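Here is a runnable version of that H4 example in Python – eight hashed leaves, and an inclusion proof built from the siblings along the path (H3, H12, H5678):

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

leaves = [h(f"tx{i}".encode()) for i in range(1, 9)]   # H1..H8

def parent_level(level):
    return [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]

level1 = parent_level(leaves)      # H12, H34, H56, H78
level2 = parent_level(level1)      # H1234, H5678
root = parent_level(level2)[0]     # H12345678

# Proof for H4 (index 3): sibling H3, then H12, then H5678,
# each flagged by whether the sibling sits on the left.
proof = [(leaves[2], True), (level1[0], True), (level2[1], False)]

def verify(leaf: bytes, proof, root: bytes) -> bool:
    acc = leaf
    for sibling, sibling_is_left in proof:
        acc = h(sibling + acc) if sibling_is_left else h(acc + sibling)
    return acc == root

print(verify(leaves[3], proof, root))     # True: H4 is in the tree
print(verify(h(b"bogus"), proof, root))   # False
```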

But the deeper the tree, the longer the route to the bottom, and the more items you need for the proof. Shallow, wide trees therefore lend themselves to efficient proofs.

The problem is that simply making a Merkle tree wider by giving each node more children is very inefficient. Since siblings are hashed together all the way up the tree, you need to receive more sibling hashes for the Merkle proof. The proof size blows up.

That’s where efficient vector commitments come in. The hashes used in Merkle trees are in fact vector commitments – just bad ones that only efficiently commit to two elements. What we want are vector commitments for which we don’t need all of the siblings to verify a proof. With those, we can make the tree wider and shallower. That’s how we get efficient proof sizes – by reducing the amount of information that has to be provided.

A Verkle trie is similar to a Merkle tree, but it commits to its children using an efficient vector commitment (hence the name “Verkle”) instead of a simple hash. The basic idea: each node can have many children, but I don’t need all of them to verify a proof – the proof is constant size regardless of width.

We’ve actually already covered a great example – KZG commitments can also be used as vector commitments. That’s what Ethereum developers originally planned to use here; they later switched to Pedersen commitments for the same role. These are based on an elliptic curve (Bandersnatch, in this case) and commit to 256 values (far better than two!).

So why not build a tree of depth one, as wide as possible? That would be great for the verifier, who gets a super compact proof. But there’s a practical trade-off: the prover needs to be able to compute that proof, and the wider the tree, the harder that becomes. So Verkle tries sit between the extremes, with a width of 256 values.

State Expiry

Weak statelessness removes state bloat as a constraint for validators, but state doesn’t magically disappear. Transaction costs are capped, yet transactions impose a permanent tax on the network by growing the state. State growth remains a permanent drag on the network, and something needs to be done about the underlying problem.

That’s where state expiry comes in. State that has been inactive for a long period (say, a year or two) gets chopped, even from what block builders would otherwise have to carry. Active users won’t notice a thing, and we can discard the dead weight that’s no longer needed.

If you need to revive expired state, you simply present a proof and reactivate it. This falls back to the 1-of-N storage assumption: as long as someone still has the full history (block explorers, etc.), you can get what you need from them.

Weak statelessness blunts the base layer’s immediate need for state expiry, but expiry is still good to have in the long run, particularly as L1 throughput increases. It will be an even more useful tool for high-throughput rollups, whose L2 state will grow at rates that would drag down even high-resource builders.

Part 3 It’s All MEV’s Fault

PBS is necessary for a secure implementation of DS (Danksharding), but keep in mind that it was originally designed to counter the centralized power of MEV. You’ll notice a recurring trend in Ethereum research today – MEV is now front and center in cryptoeconomics.

Accounting for MEV when designing a blockchain is key to preserving security and decentralization. The basic protocol-level approaches are:

  • Mitigate harmful MEV as much as possible (e.g., single-slot finality, single secret leader election)
  • Democratize the rest (eg MEV-Boost, PBS, MEV smoothing)

The remaining MEV must be easy for validators to capture and share; otherwise, validators who can’t compete with sophisticated searchers get pushed out, centralizing the validator set. This is exacerbated by the fact that MEV will make up a much higher share of validator rewards after the Merge (staking issuance is far lower than the inflation paid to miners today). It can’t be ignored.

Today’s MEV Supply Chain

Today’s event sequence looks like this:

Mining pools play the builder role here. MEV searchers relay bundles of transactions (along with their respective bids) to mining pools via Flashbots. The pool operator aggregates a full block and passes the block header to individual miners. Miners attest to the block with PoW, giving it weight in the fork choice rule.

Flashbots emerged to prevent vertical integration across this whole stack – which would open the door to censorship and other nasty externalities. When Flashbots started, mining pools had already begun striking exclusive deals with trading firms to extract MEV. Instead, Flashbots gave them an easy way to aggregate MEV bids and avoid vertical integration (by implementing MEV-geth).

After the Merge, mining pools are gone. We want to keep the door open for ordinary home validators, which means someone else has to take on the specialized builder role. Your home validator won’t be as good at capturing MEV as a hedge fund full of quants. Left unchecked, ordinary people can’t compete, and that pushes the validator set toward centralization. Structured properly, the protocol can instead redirect MEV revenue toward staking yield for everyday validators.

MEV-Boost

Unfortunately, in-protocol PBS won’t be anywhere near ready at the Merge. Flashbots once again provides a stopgap solution – MEV-Boost.

By default, post-merge validators receive transactions from the public mempool straight into their execution clients. They can bundle these up, hand the block to their consensus client, and broadcast it to the network. (If you need a refresher on how Ethereum’s consensus and execution clients work together, I cover that in Part 4.)

But mom-and-pop validators don’t know how to extract MEV, as discussed, so Flashbots offers an alternative. MEV-Boost plugs into your consensus client and lets you outsource specialized block building. Importantly, you still retain the ability to fall back to your own execution client.

MEV searchers will keep doing what they do today – running specific strategies (statistical arbitrage, atomic arbitrage, sandwiches, etc.) and bidding for their bundles to be included. Builders then aggregate all the bundles they see, along with any private order flow (e.g., from Flashbots Protect), into the best possible full block. The builder passes only the block header to the validator via a relay connected to MEV-Boost. Flashbots intends to run the relay and builder initially, with plans to decentralize over time, though whitelisting additional builders may be slow.

MEV-Boost requires validators to trust the relay – the consensus client receives the header, signs it, and only then is the block body revealed. The relay’s purpose is to attest to the proposer that the body is valid and exists, so that validators don’t have to trust builders directly.

When in-protocol PBS is ready, it will codify what MEV-Boost provides in the meantime. PBS gives the same separation of powers, but makes it easier for builders to decentralize and removes the need for initiators to trust anyone.


Committee Driven MEV Smoothing

PBS also enables a cool idea – committee-driven MEV Smoothing.

We've seen that the ability to extract MEV is a centralizing force on the validator set, but so is its distribution. The high variability of MEV rewards from block to block encourages validators to pool together to smooth out rewards over time (much as mining pools do today, albeit to a different degree).

The default is that the actual block initiator receives the builder's full payment. MEV smoothing instead distributes a portion of this payment across many validators. A committee of validators checks the proposed block and attests that it really is the highest-bidding block. If everything checks out, the block proceeds and the reward is split between the committee and the initiator.
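The exact split is an open research question; the toy arithmetic below simply assumes the builder payment is shared evenly between the initiator and a hypothetical 128-member attesting committee.

```python
# Toy illustration of MEV smoothing vs. the status quo.
builder_payment_eth = 2.0
committee_size = 128            # made-up committee size for illustration

# status quo: winner takes all
initiator_only = builder_payment_eth

# smoothing: split between the initiator and every attesting committee member
share = builder_payment_eth / (committee_size + 1)

print(f"without smoothing: initiator earns {initiator_only:.4f} ETH")
print(f"with smoothing:    each of {committee_size + 1} validators earns {share:.4f} ETH")
```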

This also solves another problem: out-of-band bribes. An initiator could be incentivized to submit a suboptimal block and simply take an out-of-band bribe, hiding the payment from everyone else. The committee's attestation keeps the initiator in check.

In-protocol PBS is a prerequisite for MEV smoothing: the protocol needs visibility into the builder market and the explicit bids being submitted. There are several open research questions here, but it's an exciting proposal that is critical for keeping the validator set decentralized.

Single-slot Finality

Getting finality quickly is good. Waiting roughly 15 minutes is not ideal for user experience or cross-chain communication. More importantly, it creates an MEV reorg problem.

Post-merge Ethereum will already offer much stronger confirmations than today: thousands of validators attest to each block, rather than miners competing with one another and potentially mining at the same block height without ever voting. That makes reorgs quite difficult. However, it still isn't true finality. If the last block contains some juicy MEV, it may tempt validators to try to reorg the chain and steal it for themselves.

Single-slot finality eliminates this threat. Reverting a finalized block would require at least one third of validators, and their stake would be immediately slashed (millions of ETH).
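Rough back-of-the-envelope numbers, using the ~380,000-validator figure cited elsewhere in this piece:

```python
# Cost of reverting a finalized block under single-slot finality.
validators = 380_000
stake_per_validator_eth = 32

total_stake_eth = validators * stake_per_validator_eth          # ~12.2M ETH staked
slashed_eth = total_stake_eth / 3                                # at least 1/3 gets slashed

print(f"total stake:    ~{total_stake_eth / 1e6:.1f}M ETH")
print(f"cost to revert: ~{slashed_eth / 1e6:.1f}M ETH slashed")
```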

I won't get too deep into the possible mechanisms here. Single-slot finality sits far out on Ethereum's roadmap, and it remains an open design space.

In today's consensus protocol (without single-slot finality), Ethereum only needs 1/32 of validators to attest to each slot (about 12,000 of the current 380,000+ validators). Extending that vote to the full validator set within a single slot requires more BLS signature aggregation work: it means squeezing hundreds of thousands of votes into a single verification.
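For a feel of why aggregation makes this plausible, here is a small example using py_ecc's BLS implementation (the library behind the consensus specs). The keys and message are toy values; real attestations aggregate signatures over structured attestation data rather than a raw byte string.

```python
# Many signatures over the same message collapse into one aggregate signature
# that is verified once.
from py_ecc.bls import G2ProofOfPossession as bls

attestation = b"vote for block root 0xabc... at slot 1234"
secret_keys = [11, 22, 33, 44]                     # toy secret keys
pubkeys = [bls.SkToPk(sk) for sk in secret_keys]
signatures = [bls.Sign(sk, attestation) for sk in secret_keys]

aggregate_sig = bls.Aggregate(signatures)          # one 96-byte signature
assert bls.FastAggregateVerify(pubkeys, attestation, aggregate_sig)
print("one aggregate signature stands in for", len(signatures), "votes")
```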


Vitalik lists some interesting solutions, check it out here.

Single Secret Leader Election

SSLE is designed to patch another MEV attack vector we will face after the merger.

The Beacon Chain validator list and the upcoming leader election schedule are public, and it is fairly easy to de-anonymize validators and map them to IP addresses. You can probably spot the problem here.

More sophisticated validators can use tricks to hide themselves better, but regular validators will be especially vulnerable to being located and then DDoSed. MEV makes this easy to exploit.

Suppose you are the initiator of slot n and I am the initiator of slot n+1. If I know your IP address, I can cheaply DDoS you so that you time out and fail to produce your block. Now I can capture the MEV of both slots and earn double the rewards. This is exacerbated by EIP-1559's elastic block size (the per-block gas limit is twice the target), so I can cram what should have been two blocks' worth of transactions into my single block, which is now twice as large.
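The arithmetic behind that "two blocks in one" attack, using current mainnet EIP-1559 parameters (15M gas target, 2x elasticity):

```python
# Elastic block size: the gas limit is twice the target.
target_gas_per_block = 15_000_000
max_gas_per_block = 2 * target_gas_per_block       # 30M

# If the slot-n initiator is knocked offline, roughly two target-sized blocks of
# pending demand accumulate, and they fit inside my single slot-(n+1) block:
pending_demand = 2 * target_gas_per_block
print(pending_demand <= max_gas_per_block)         # True: I capture both blocks' transactions and MEV
```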

In short, home validators might simply give up validating because they keep getting attacked. SSLE prevents this by making it impossible for anyone other than the initiator to know when their turn is coming up. This won't be live at the merge, but hopefully it arrives sooner rather than later.

Part 4 The Secret of Merging

Well, actually I’ve been joking above. I really think (hopefully) that the merger will come relatively soon.


It's exciting enough that I have to stand up and say a few words. Consider this your crash course on the merge itself.

merged client

Today, you run a monolithic client (like Go Ethereum, Nethermind, etc.) to handle everything. Specifically, a full node does the following two things:

  • Execution – executes every transaction in the block to check validity: start from the pre-state root, apply the transactions, and check that the resulting post-state root is correct
  • Consensus – verifies that you are following the chain with the most accumulated work (the heaviest PoW chain, aka Nakamoto consensus).

The two are inseparable, because a full node follows not just the heaviest chain, but the heaviest valid chain. That's what makes it a full node rather than a light client: even under a 51% attack, a full node will not accept invalid transactions.
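A conceptual sketch (not any real client's code) of the execution half of that check: replay toy "transactions" against the pre-state and compare the result with the claimed post-state root. Only blocks that pass this check can add weight to the chain a full node follows.

```python
from hashlib import sha256

def state_root(state: dict) -> str:
    # stand-in for the real Merkle-Patricia state root
    return sha256(repr(sorted(state.items())).encode()).hexdigest()

def execute_block(pre_state: dict, txs: list, claimed_post_root: str) -> bool:
    state = dict(pre_state)
    for sender, receiver, amount in txs:
        if state.get(sender, 0) < amount:
            return False                           # one invalid tx invalidates the block
        state[sender] -= amount
        state[receiver] = state.get(receiver, 0) + amount
    return state_root(state) == claimed_post_root  # does execution match the claim?

pre = {"alice": 10, "bob": 0}
post = {"alice": 7, "bob": 3}
print(execute_block(pre, [("alice", "bob", 3)], state_root(post)))   # True
```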


The Beacon Chain currently runs consensus only, giving PoS a trial run without execution. Eventually a terminal total difficulty will be set, at which point the current Ethereum execution blocks merge into Beacon Chain blocks, forming a single chain.


Under the hood, however, a full node will run two separate clients that interoperate:

  • Execution client (fka the Eth1 client) – today's Eth 1.0 clients keep handling execution. They process blocks, maintain the mempool, and manage and sync state. PoW is stripped out.
  • Consensus client (fka the Eth2 client) – today's Beacon Chain clients keep handling PoS consensus. They track the head of the chain, gossip and attest to blocks, and collect validator rewards.

The consensus client receives a Beacon Chain block, the execution client runs its transactions, and the consensus client follows the chain if everything checks out. You will be able to mix and match any execution client with any consensus client; they all interoperate. A new Engine API is being introduced for the two clients to talk to each other.
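A simplified sketch of that interaction. The method names engine_newPayloadV1 and engine_forkchoiceUpdatedV1 come from the Engine API spec; the classes and the stateRootChecksOut field below are stand-ins, not real client code.

```python
# Toy model of the consensus-client <-> execution-client loop over the Engine API.
class ExecutionClient:
    def new_payload(self, payload: dict) -> str:                   # ~ engine_newPayloadV1
        # re-execute the payload's transactions and validate the state root
        return "VALID" if payload.get("stateRootChecksOut") else "INVALID"

    def forkchoice_updated(self, head_block_hash: str) -> None:    # ~ engine_forkchoiceUpdatedV1
        # update the execution side's view of the head and start building on it
        print("execution client now following head", head_block_hash)

class ConsensusClient:
    def __init__(self, execution_client: ExecutionClient):
        self.ec = execution_client

    def on_beacon_block(self, beacon_block: dict) -> None:
        # hand the embedded execution payload over for validation
        status = self.ec.new_payload(beacon_block["executionPayload"])
        if status == "VALID":
            self.ec.forkchoice_updated(beacon_block["blockHash"])

ConsensusClient(ExecutionClient()).on_beacon_block(
    {"blockHash": "0xabc", "executionPayload": {"stateRootChecksOut": True}}
)
```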


Post-merge consensus

Today’s Nakamoto Consensus is simple. Miners create new blocks and add them to the longest observed valid chain.

Merged Ethereum moves to Gasper, a combination of Casper FFG (the finality gadget) and LMD GHOST (the fork-choice rule), to reach consensus. In a nutshell: it favors liveness over safety.

The difference is that a safety-favoring consensus algorithm (such as Tendermint) halts when it cannot gather the necessary votes (here, ⅔ of the validator set). A liveness-favoring chain (like PoW with Nakamoto consensus) keeps building an optimistic ledger no matter what, but without enough votes it can never finalize. Bitcoin and today's Ethereum never reach finality; you simply assume that after enough blocks a reorg won't happen.
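A toy contrast (not real consensus code) with 55% participation, showing why one design halts while the other keeps building:

```python
participation = 0.55

def safety_favoring_step(chain: list[str]) -> list[str]:
    # Tendermint-style: no 2/3 supermajority => no new block at all
    if participation < 2 / 3:
        return chain
    return chain + ["finalized block"]

def liveness_favoring_step(chain: list[str]) -> list[str]:
    # Nakamoto/Gasper-style: keep building optimistically, just without finality
    return chain + ["unfinalized block"]

print(safety_favoring_step(["genesis"]))    # ['genesis']  -> halted
print(liveness_favoring_step(["genesis"]))  # ['genesis', 'unfinalized block']
```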

With enough votes, however, merged Ethereum does reach finality at regular checkpoints. Each 32 ETH stake is an independent validator, and there are already over 380,000 Beacon Chain validators. An epoch consists of 32 slots, and the full validator set is split so that each slot within an epoch is attested by a subset (roughly 12,000 validators per slot). The fork-choice rule, LMD GHOST, then determines the current head of the chain from these attestations. A new block is added every slot (12 seconds), so an epoch lasts 6.4 minutes. Finality is normally reached after two epochs (64 slots) once the necessary votes are in, though it can take as many as 95 slots.
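The slot and epoch arithmetic behind those numbers:

```python
slot_seconds = 12
slots_per_epoch = 32
validators = 380_000

epoch_minutes = slot_seconds * slots_per_epoch / 60
print(f"epoch length: {epoch_minutes} minutes")                   # 6.4

print(f"attesters per slot: ~{validators // slots_per_epoch}")    # ~11,875, i.e. "about 12,000"

# finality normally arrives two epochs (64 slots) after a checkpoint
print(f"typical time to finality: {2 * epoch_minutes} minutes")   # 12.8 minutes
```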

Summary moment

All roads lead to the endgame of centralized block production, decentralized trustless block verification and censorship resistance. Ethereum’s roadmap highlights this vision.

Ethereum aims to be the ultimate unified DA and settlement layer: massive decentralization and security underpinning scalable computation. It condenses the cryptographic assumptions into one robust layer. A unified modular (or should we say disaggregated, now?) base layer that also includes execution captures the highest value in the whole L1 design, driving monetary premium and economic security, as I covered in a recent report (now publicly available).

I hope you now have a clearer idea of how Ethereum research is intertwined. It's very cutting edge, all the pieces are moving, and it's not easy to assemble the big picture. You have to keep following along.

Ultimately, it all comes back to that single vision. Ethereum gives us a compelling path to massive scalability, while staying true to those values that we care so much about in this space.

Special thanks to Dankrad Feist for his review and insights.

References

  1. Endgame: https://vitalik.ca/general/2021/12/06/endgame.html
  2. Can: https://twitter.com/CannnGurel
  3. Celestia: https://members.delphidigital.io/reports/pay-attention-to-celestia
  4. Great tweet: https://twitter.com/bkiepuszewski/status/1518163771788824576
  5. The (relatively easy to understand) basics of elliptic curve cryptography: https://blog.cloudflare.com/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/
  6. Exploring Elliptic Curve Pairings: https://vitalik.ca/general/2017/01/14/exploring_ecp.html
  7. KZG Polynomial Commitments: https://dankradfeist.de/ethereum/2020/06/16/kate-polynomial-commitments.html
  8. The principle of trusted setups: https://vitalik.ca/general/2022/03/14/trustedsetup.html
  9. The scalability of blockchain still has limitations: https://vitalik.ca/general/2021/05/23/scaling.html
  10. Multidimensional EIP-1559: https://ethresear.ch/t/multidimensional-eip-1559/11651
  11. Index pricing rules: https://ethresear.ch/t/make-eip-1559-more-like-an-amm-curve/9082
  12. Proof-of-custody: https://dankradfeist.de/ethereum/2021/09/30/proofs-of-custody.html
  13. Resource pricing: https://www.youtube.com/watch?v=YoWMLoeQGeI
  14. Exponential EIP-1559 mechanism: https://dankradfeist.de/ethereum/2022/03/16/exponential-eip1559.html
  15. devp2p: https://github.com/ethereum/devp2p
  16. Weak subjectivity: https://blog.ethereum.org/2014/11/25/proof-stake-learned-love-weak-subjectivity/
  17. Nonce: https://www.investopedia.com/terms/n/nonce.asp
  18. MEV-Boost: https://ethresear.ch/t/mev-boost-merge-ready-flashbots-architecture/11177
  19. Gasper: https://arxiv.org/abs/2003.03052
  20. Dankrad Feist: https://twitter.com/dankrad
  21. http://disaggregated
  22. https://members.delphidigital.io/reports/valuing-layer-1s-memes-money-or-more
