The Hitchhiker's Guide to Ethereum: Understanding the Ethereum Roadmap

Key Takeaways

  • Ethereum is the only major protocol to build a scalable unified settlement and data availability layer;
  • Rollups scale computation while leveraging the security of Ethereum;
  • All roads lead to an endgame of centralized block production with decentralized, trustless block verification and censorship resistance;
  • Innovations such as proposer-builder separation (PBS) and weak statelessness enable this separation of powers (building and verifying) to achieve scalability without sacrificing security or decentralization;
  • MEV is now front and center, and Ethereum has many designs planned to mitigate MEV's harms and counter its centralizing tendencies;
  • Danksharding combines multiple avenues of cutting-edge research to provide the scalable base layer required for Ethereum's Rollup-centric roadmap;
  • I expect danksharding to be realized in our lifetime.

Introduction

I've been skeptical about the timing of the Merge ever since Ethereum co-founder Vitalik Buterin said that people born today have a 50%-75% chance of living to the year 3000, and that he hopes to live forever. But in any case, let's take a look at Ethereum's ambitious roadmap.


This is not a crash course. If you want a broad, nuanced understanding of Ethereum's ambitious roadmap – give me an hour and I'll save you months of work.

There's a lot going on in Ethereum research, but it all ultimately weaves into one overarching goal – to scale computation without sacrificing decentralized verification.

With decentralized verification kept simple, Ethereum retains very strong security, and Rollups inherit that security from L1. Ethereum also provides settlement and data availability, which is what allows Rollups to scale. All of the research here is ultimately about optimizing these two roles while making it easier than ever to verify the chain.

Here are a few acronyms that will come up repeatedly in this article:

  • DA – Data Availability
  • DAS – Data Availability Sampling
  • PBS – Proposer-builder Separation
  • PDS – Proto-danksharding
  • DS – Danksharding
  • PoW – Proof of Work
  • PoS – Proof of Stake

Part I: The Road to Danksharding

Hopefully, by now you know that Ethereum has moved to a Rollup-centric roadmap . Ethereum will no longer have execution shards but will be optimized for Rollups that require large amounts of data. This is achieved through data sharding .

The Ethereum consensus layer does not interpret shard data, its job is to ensure that the data is available .

In the text, I will assume that you are familiar with basic concepts like Rollups , Fraud Proofs, and ZK Proofs, and the importance of DA (Data Availability).

1) Initial data sharding design: individual shard block proposers

The design described in this subsection is obsolete , but it is valuable for us to understand the background information. For simplicity, I call this design “Sharding 1.0”.

In Sharding 1.0, each of Ethereum's 64 shards has its own block proposer and committee, randomly selected from the validator set and assigned to each shard chain. They individually verify that the data of their own shard is available. Initially this would not be DAS (Data Availability Sampling), but would rely on an honest majority of each shard's committee to fully download the data.

This design brings unnecessary complexity, a worse user experience, and new attack vectors. Shuffling validators between shards is tricky.

Without introducing a very tight synchronization assumption, it is difficult to guarantee that voting will be done within a single slot. Proposers of beacon blocks need to collect votes from all independent committees, which may be delayed.


Danksharding (DS for short) is completely different. Validators perform DAS (Data Availability Sampling) to confirm that all data is available (no more separate shard committees). A dedicated block builder creates one large block containing the beacon block together with all the shard data, and that block is confirmed as a whole. PBS is therefore necessary for DS to remain decentralized (building such large blocks is resource-intensive).


2) Data Availability Sampling (DAS)

Rollups publish a lot of data, but we don’t want to burden nodes with downloading all of this data, as that would mean a high resource requirement, compromising the decentralization of the network’s nodes.

Instead, DAS allows nodes (even light clients) to easily and securely verify the availability of all this data without having to download all of it .

  • Simple solution: just check some random chunks of each block, and if they check out, sign off on it. But what if you happened to miss the one transaction that sent all of your ETH to Sifu? The funds would no longer be safe.
  • Clever solution: erasure code the data first. Extend the data using Reed-Solomon codes. This means the data is interpolated as a polynomial, and then we evaluate that polynomial at many additional points. If that sounds confusing, let's break it down.

This is a quick lesson for anyone who forgot their math classes. (I promise it won't be terribly scary math – I had to watch some Khan Academy videos to write these parts, but even I get it now.)

A polynomial is an expression summing a finite number of monomials (its "terms"), and the "degree" of a polynomial is the highest degree among those terms. For example, 2ab + b - 1 is a polynomial made up of the three terms 2ab, b, and -1, so it is a "trinomial"; and since the term 2ab has the highest degree (degree 2), it is a quadratic trinomial. The key property: you can reconstruct any polynomial of degree d from any d+1 coordinates lying on that polynomial.

Now take a concrete example. Below we have four chunks of data (d0 to d3). These chunks can be mapped to evaluations of a polynomial f(X) at given points; for example, f(0) = d0. We then find the minimum-degree polynomial running through these evaluations – with four chunks, that is the unique cubic (degree-3) polynomial. We can then extend the data by adding four more evaluations (e0 to e3) along the same polynomial.


Remember a key property of a polynomial: we can reconstruct it from any four points, not just the original four blocks of data.

Back to our DAS. Now we just need to make sure that any 50% (4/8) erasure coded data is available. From this, we can reconstruct the entire block.

Therefore, an attacker would have to hide >50% of the block’s data in order to successfully fool a DAS node into thinking that the data is available (but is not).

After many successful random samplings, the probability that <50% of the data is available becomes vanishingly small. If we successfully sample the erasure-coded data 30 times, the probability that <50% is actually available is 2^-30.
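
To make the erasure-coding idea concrete, here is a toy sketch in Python (my own illustration, not the real protocol, which works over the BLS12-381 scalar field and pairs the data with KZG commitments): four data chunks become evaluations of a polynomial over a small prime field, we extend to eight evaluations, and any four of the eight recover the original data.

```python
# Toy Reed-Solomon extension via Lagrange interpolation over a small prime field.
# Illustration only; Danksharding uses the BLS12-381 scalar field plus KZG commitments.

P = 65537  # toy field modulus

def lagrange_eval(points, x):
    """Evaluate the unique minimum-degree polynomial through `points` at x (mod P)."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

data = [21, 9, 114, 53]                       # d0..d3 = f(0)..f(3)
original = list(enumerate(data))              # [(0, d0), (1, d1), (2, d2), (3, d3)]
extended = [(x, lagrange_eval(original, x)) for x in range(4, 8)]   # e0..e3

# Reconstruct everything from ANY 4 of the 8 chunks (here: 2 original + 2 extended).
subset = [original[1], original[3], extended[0], extended[2]]
recovered = [lagrange_eval(subset, x) for x in range(4)]
assert recovered == data

# Sampling intuition: if fewer than 50% of the 8 chunks were available, each random
# sample would fail at least half the time, so 30 successful samples leave at most
# a 2**-30 chance of being fooled.
print(recovered)    # [21, 9, 114, 53]
print(0.5 ** 30)    # ~9.3e-10
```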

3) KZG Commitments

Ok, so we did a bunch of random samples and they were all available. But we have another question – was the data erasure coded correctly? Otherwise, maybe the block producer just added 50% garbage when extending the block, and our sampling was meaningless. In that case, we wouldn't actually be able to reconstruct the data.

Usually we just commit a lot of data by using the Merkle root. This is valid for proving that the collection contains some data.

However, we also need to know that all the original and extended data lie on the same low-degree polynomial. A Merkle root doesn't prove that. So if you use this scheme, you also need fraud proofs, in case the encoding was done wrong.


Back to KZG commitment – this is a polynomial commitment scheme.

A commitment scheme is just a cryptographic way of provably committing to some value. The best analogy is putting a letter in a locked box and handing it to someone else. The letter can't be changed once it's inside, but it can be opened with the key and proven. The box you hand over is the commitment, and the key is the proof.

In our case, we map all the original and extended data onto an X,Y grid and then find the minimum-degree polynomial passing through them (a process called Lagrange interpolation). This polynomial is what the prover commits to.


Here are the key points:

  • We have a "polynomial" f(X)
  • The prover makes a "commitment" C(f) to that polynomial
    • This relies on elliptic curve cryptography with a trusted setup. For more details on how it works, see the great post by Bartek
  • For any "evaluation" y = f(z) of that polynomial, the prover can compute a "proof" π(f,z)
  • Given the commitment C(f), the proof π(f,z), any position z, and the evaluation y of the polynomial at z, a verifier can confirm that indeed f(z) = y
    • Explanation: the prover hands these pieces to any verifier, who can then confirm that the evaluation at a point (where the evaluation represents the underlying data) lies correctly on the committed polynomial
    • This proves the original data was extended correctly, since all evaluations lie on the same polynomial
    • Note that the verifier does not need the polynomial f(X)
  • Important properties – O(1) commitment size, O(1) proof size, and O(1) verification time. Even for the prover, commitment and proof generation scale only as O(d), where d is the degree of the polynomial
    • Explanation: even as n (the number of evaluation points, i.e. the dataset size as shard blobs grow) increases, the commitments and proofs stay the same size, and verification takes constant effort
    • Both the commitment C(f) and the proof π(f,z) are just single elliptic curve elements on a pairing-friendly curve (BLS12-381 here). In this case they are only 48 bytes each (really small)
    • So the prover's commitment over a ton of original and extended data (represented as many evaluations of the polynomial) is still only 48 bytes, and a proof is also only 48 bytes
    • TLDR – this scales extremely well
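
For reference, the verification step in a typical KZG construction boils down to a single pairing check (a sketch of the standard scheme; notation varies across the write-ups linked further below):

```latex
% [x]_1 and [x]_2 denote x "hidden in the exponent" of the G1/G2 generators,
% \tau is the trusted-setup secret, and e is the pairing on BLS12-381.
e\bigl(C - [y]_1,\ [1]_2\bigr) \;=\; e\bigl(\pi,\ [\tau]_2 - [z]_2\bigr)
% If f(z) = y, then f(X) - y is divisible by (X - z); \pi commits to that
% quotient, which is exactly the relation the pairing equation checks.
```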

The KZG root (a polynomial commitment) is analogous to the Merkle root (which is a vector commitment):


The original data is the polynomial f(X) evaluated at positions f(0) through f(3), and we then extend it by evaluating the polynomial at f(4) through f(7). All points f(0) through f(7) are guaranteed to lie on the same polynomial.

Bottom line: DAS lets us check that the erasure-coded data is available. KZG commitments prove to us that the original data was extended correctly and commit to all of it.


Well done – that's it for today's algebra.

4) KZG Commitments vs. Fraud Proofs

Now that we understand how KZG works, take a step back and compare the two methods.

Disadvantages of KZG – it is not post-quantum secure, and it requires a trusted setup. Neither is worrying: STARKs provide a post-quantum alternative, and the trusted setup (which is open to participation) only needs a single honest participant.

Advantages of KZG – lower latency than a fraud-proof setup (although GASPER will not have fast finality anyway), and it ensures the erasure coding is correct without introducing the synchrony and honest-minority assumptions inherent to fraud proofs.

However, considering that Ethereum still reintroduces these assumptions for block reconstruction, you never actually remove them. The DA layer always needs to plan for the scenario where a block is initially made available, but then the nodes need to communicate with each other to piece it back together. This reconstruction requires two assumptions:

  1. You have enough nodes (light or full) sampling the data that, collectively, they hold enough of it to piece the block back together. This is a fairly weak, unavoidable honest-minority assumption, so it's not a big deal.
  2. The synchrony assumption is reintroduced – nodes need to be able to communicate for some period of time in order to put the block back together.

Ethereum validators fully download shard blobs in PDS , and with DS they will only do DAS (download allocated rows and columns). Celestia will require validators to download the entire block.

Note that in either case, we need synchronization assumptions to rebuild. If the block is only partially available, full nodes must communicate with other nodes to put it back together.

If Celestia wants to move from requiring validators to download full data to just performing DAS (although such a transition is not currently planned), then the latency advantage of KZG will become apparent. Then, they also need to implement the KZG commitment – waiting for fraud proofs would mean significantly increasing the block interval, and the risk of validators voting for incorrectly coded blocks would be very high.

I recommend the following for a more in-depth exploration of how KZG commitments work:

  • Introduction to Elliptic Curve Cryptography (relatively easy to understand)
  • Exploring Elliptic Curve Pairings – Vitalik
  • KZG Polynomial Commitments – Dankrad
  • How do trusted setups work? – Vitalik

5) In-protocol proposer-builder separation (PBS)

Today's consensus nodes (miners), and post-merge consensus nodes (validators), play two roles: they build the actual block and then propose it to other consensus nodes, who validate it. Miners "vote" by building on top of the previous block; after the Merge, validators will directly vote on whether blocks are valid or invalid.

PBS splits these apart – it explicitly creates a new in-protocol builder role. Specialized builders will put blocks together and bid for proposers (validators) to select their block. This combats the centralizing power of MEV.

Recall Vitalik's "Endgame" article – all roads lead to centralized block production with trustless and decentralized verification. PBS codifies this. We need one honest builder to serve the network for liveness and censorship resistance (two for an efficient market), but the validator set needs an honest majority. PBS makes the proposer role as easy as possible, which supports validator decentralization.

Builders receive a priority fee tip plus any MEV they can extract. In an efficient market, competitive builders bid up to the full value they can extract from a block (minus their amortized costs, e.g. powerful hardware). All of that value trickles down to the decentralized validator set – exactly what we want.

The exact PBS implementation is still under discussion, but a two-slot PBS might look like this:


  1. Builders commit to block headers along with their bids
  2. The beacon block proposer selects the winning header and bid. The proposer receives the winning bid unconditionally, even if the builder fails to produce the block body
  3. A committee of attesters confirms the winning header
  4. The builder publishes the winning body
  5. A separate committee of attesters confirms the winning body (or attests to its absence if the winning builder failed to publish it)

Proposers are selected from the validator set using the standard RANDAO mechanism. We then use a commit-reveal scheme in which the full block body is not revealed until the committee has confirmed the block header.

Commit-reveal is more efficient (broadcasting hundreds of full block bodies around would overwhelm the p2p layer's bandwidth), and it also prevents MEV stealing. If a builder submitted their full block, another builder could see it, figure out the strategy, incorporate it, and quickly publish a better block. Sophisticated proposers could likewise detect the MEV strategy used and replicate it without compensating the builder. If this MEV stealing became the equilibrium, it would incentivize builders and proposers to merge, so we use commit-reveal to avoid it.

After the proposer selects the winning block header, the committee confirms it and fixes it in the fork choice rule. The winning builder then publishes their full "builder block" body. If published in time, the next committee attests to it. If the builder fails to publish in time, they still pay the proposer the full bid (and lose out on all the MEV and fees). This unconditional payment removes the need for the proposer to trust the builder.
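
A minimal sketch of the hash-based commit-reveal idea (my own illustration with made-up names, not the actual spec):

```python
import hashlib
from dataclasses import dataclass

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

@dataclass
class HeaderCommitment:
    body_hash: bytes   # commitment to the (still hidden) block body
    bid: int           # unconditional payment to the proposer, in wei

def commit(body: bytes, bid: int) -> HeaderCommitment:
    # The builder publishes only a hash of the body with its bid, so neither
    # competitors nor the proposer can see (and steal) the MEV strategy inside.
    return HeaderCommitment(body_hash=h(body), bid=bid)

def reveal_is_valid(c: HeaderCommitment, revealed_body: bytes) -> bool:
    # Once the header wins and is attested, the builder reveals the body;
    # anyone can check that it matches what was committed to.
    return h(revealed_body) == c.body_hash

body = b"ordered transactions + MEV bundles ..."
c = commit(body, bid=10**18)
assert reveal_is_valid(c, body)
assert not reveal_is_valid(c, b"a different body")
```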

The disadvantage of this "two-slot" design is latency. Post-merge blocks will have a fixed 12-second slot time, so without introducing any new assumptions we would need 24 seconds (two 12-second slots) for a full block time. 8 seconds per slot (a 16-second block time) looks like a safe compromise, though research is ongoing.

6) Censorship resistance lists (crLists)

Unfortunately, PBS hands builders greater power to censor transactions. Maybe the builder just doesn't like you, so they ignore your transaction. Maybe they're so good at building blocks that every other builder has given up, or maybe they'll simply overpay for blocks because they really don't like you.


crLists keep this power in check. The exact implementation is again an open design space, but "hybrid PBS" seems to be the most popular: proposers specify a list of all eligible transactions they see in the mempool, and builders are forced to include them (unless the block is full):


  1. The proposer publishes a crList and a crList summary that includes all eligible transactions
  2. The builder creates a proposed block body, then submits a bid that includes a hash of the crList summary, proving they have seen it
  3. The proposer accepts the winning builder's bid and block header (they haven't seen the body yet)
  4. The builder publishes their block along with proof that they have included all transactions from the crList or that the block is full. Otherwise the fork choice rule will not accept the block
  5. Attesters check the validity of the published body

There are still important issues to resolve here. For example, the dominant economic strategy is for proposers to submit an empty list – that way even a censoring builder can win the auction as long as they bid the highest. There are a few ideas to address this and other issues, but the point is just to stress that the design here isn't set in stone.
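
To illustrate the inclusion rule from step 4 – include every eligible crList transaction or show the block is full – here is a rough sketch (my own simplification; the structures and the gas check are illustrative, not the spec):

```python
from hashlib import sha256

GAS_LIMIT = 30_000_000  # illustrative block gas limit

def crlist_digest(crlist: list[bytes]) -> bytes:
    # The proposer publishes the crList plus a digest; the builder's bid
    # references this digest to prove it has seen the list.
    return sha256(b"".join(sorted(crlist))).digest()

def block_satisfies_crlist(block_txs: list[bytes], gas_used: int,
                           crlist: list[bytes]) -> bool:
    # Fork-choice-style check: either every crList transaction is included,
    # or the block is full and could not have fit them.
    missing = [tx for tx in crlist if tx not in set(block_txs)]
    return not missing or gas_used >= GAS_LIMIT

crl = [b"tx_a", b"tx_b"]
print(block_satisfies_crlist([b"tx_a"], 10_000_000, crl))            # False: censored with room to spare
print(block_satisfies_crlist([b"tx_a", b"tx_b"], 10_000_000, crl))   # True
```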

7) Two-dimensional KZG scheme

We saw how KZG commitments let us commit to data and prove that it was extended correctly. However, I simplified what Ethereum will actually do. It will not commit to all the data in a single KZG commitment – a single block will use many KZG commitments.

We already have dedicated builders, so why not just have them create one giant KZG commitment? The problem is that reconstruction would then require a powerful supernode. We can accept the supernode requirement for the initial build, but we need to avoid it for reconstruction. We need lower-resourced entities to be able to handle reconstruction, and splitting the data into many KZG commitments makes that feasible. Reconstruction may even be fairly common given the amount of data at play, so it's a base-case assumption in this design.

To make rebuilding easier, each block will contain m shard blobs encoded in m KZG commitments. Doing this naively would result in a lot of sampling – you would do a DAS on each shard blob to know it’s all available (m*k samples, where k is the number of samples per blob).

Instead, Ethereum will use a 2D KZG scheme. We again use Reed-Solomon encoding to extend the m commitments to 2m commitments.


We make it a 2D scheme by extending additional KZG commitments (256 to 511 here) lying on the same polynomials as commitments 0 to 255. Now we just perform DAS over this extended table of data to ensure the availability of all shard data.


The 2D sampling requirement that ≥75% of the data be available (compared to the earlier 50%) means we need a somewhat larger fixed number of samples. Earlier I mentioned 30 samples for DAS in the simple 1D scheme; this scheme requires 75 samples to give the same probabilistic odds of reconstructing an available block.

Sharding 1.0 (with a 1D KZG commitment scheme) needed only 30 samples per shard, but you would have to sample all 64 shards to check full DA – 1,920 samples in total. Each sample is 512 B, so this requires:

(512 B x 64 shards x 30 samples) / 16 seconds = 60 KB/s of bandwidth

In practice, validators are shuffled across committees and do not individually check all shards.

Now, a unified block with the 2D KZG commitment scheme makes checking full DA trivial. It only requires 75 samples of a single unified block:

(512 B x 1 block x 75 samples) / 16 seconds = 2.5 KB/s of bandwidth
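
A quick back-of-the-envelope check of the two figures above (the article rounds the results):

```python
SAMPLE_BYTES = 512
SLOT_SECONDS = 16   # the slot time used in the article's math

sharding_1_0 = SAMPLE_BYTES * 64 * 30 / SLOT_SECONDS   # 64 shards, 30 samples each
danksharding = SAMPLE_BYTES * 1 * 75 / SLOT_SECONDS    # one unified block, 75 samples

print(sharding_1_0 / 1000)   # ~61 KB/s, the ~60 KB/s above
print(danksharding / 1000)   # ~2.4 KB/s, the ~2.5 KB/s above
```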

8) Danksharding

PBS was originally designed to blunt MEV's centralizing force on the validator set. However, Dankrad recently took advantage of that design, realizing it unlocks a much better sharding construction – DS.

DS leverages the specialized builder to achieve a tighter integration of the beacon block and the shard data. We now have one builder creating the entire block together, one proposer, and one committee voting on it. DS would not be feasible without PBS – regular validators could not handle the massive bandwidth of a block stuffed with Rollup data blobs:


Sharding 1.0 includes 64 independent committees and proposers, so each shard could be individually unavailable. The tighter integration here lets us ensure DA in one shot. The data is still "sharded" under the hood, but from a practical standpoint danksharding starts to feel more like simply having big blocks, which is great.


9) Danksharding – Honest Majority Verification

Validators attest that the data is available as follows: each validator downloads and checks a couple of randomly assigned rows and columns of the extended data, and only attests if its rows and columns are available.


This relies on an honest majority of validators – as a single validator, the availability of my own rows and columns is not enough to give me statistical confidence that the entire block is available. I rely on an honest majority of validators saying that it is. Decentralized verification matters.

Note that this is not the same as the 75 random samples we discussed earlier. Private random sampling is a way for resource-starved individuals to be able to easily check availability (e.g. I can run a DAS light node and know the block is available). However, validators will continue to use row and column methods for checking availability and bootstrapping block rebuilds.

10) Danksharding – Reconstruction

As long as 50% of any individual row or column is available, the sampling validator can easily fully reconstruct it. As they reconstruct missing chunks in a row or column, they redistribute those chunks into the orthogonal lines. This helps other validators reconstruct any missing chunks from the intersecting rows and columns as needed.

The safety assumptions for reconstructing an available block here are:

  1. There are enough nodes executing sample requests that, collectively, they have enough data to reconstruct the block
  2. A synchrony assumption among the nodes broadcasting their respective chunks

So, how many nodes is enough? A rough estimate is about 64,000 individual instances (and there are more than 380,000 validators so far). This is also a very conservative calculation that assumes no overlap among the nodes run by the same staker (which is far from reality, since validators are capped at 32 ETH per instance, so one node often runs many of them).

If nodes sample more than two rows and columns each, the odds of collective retrieval increase because of the overlap. This starts to scale quadratically – if stakers are running, say, 10 or 100 validators each, the 64,000 requirement could drop by orders of magnitude.

If the number of online validators starts to get unusually low, DS can be set up to automatically reduce the shard data blob count, lowering the safety assumption back to a comfortable level.

11) Danksharding – Private Random Sampling and Malicious-Majority Safety

We saw that DS verification relies on an honest majority of validators attesting to blocks. As an individual, I cannot prove to myself that a block is available by downloading only a few rows and columns. Private random sampling, however, can give me that assurance without trusting anyone. This is where a node checks those 75 random samples discussed earlier.


DS won’t include private random sampling initially, because that’s a very hard problem to solve in terms of networking ( PSA : maybe they could actually use your help here!).

Note that "private" is important, because if an attacker can de-anonymize you, they can trick your small number of sampling queries. They can return exactly the chunks you request and withhold the rest, so your own sampling alone would not tell you that all the data is available.

12) Danksharding – key takeaways

Besides being a sweet name, DS is also very exciting. It finally realizes Ethereum's vision of a unified settlement and DA layer. Beacon blocks and shard data become so tightly coupled that it's basically pretending not to be sharded at all.

In fact, let’s define why it’s even considered a “shard”. The only remnant of “sharding” is that validators are not responsible for downloading all data. That’s it.

So if you're now questioning whether this should really still be called sharding, you're not crazy. That distinction is why PDS (which we'll cover shortly) is not considered "sharding" (even though it has "sharding" in its name – yes, I know it's confusing). PDS requires each validator to fully download all shard blobs to attest to their availability. DS then introduces sampling, so individual validators only download pieces of it.


Fortunately, sharding in name only means a much simpler design than Sharding 1.0 (so it can ship faster, right?). In short:

  • The DS spec could be hundreds of lines of code shorter than the Sharding 1.0 spec (thousands shorter on the client side)
  • No more shard committee infrastructure – committees only need to vote on the main chain
  • No need to track individual shard blob confirmations – now they are all confirmed in the main chain, or none of them are

A nice consequence of this is a consolidated fee market for data. Sharding 1.0, with different blocks made by different proposers, would have fragmented it.

Eliminating shard committees also strengthens resistance to bribery. DS validators vote for the entire block once per epoch, so data is immediately confirmed by 1/32 of the entire validator set (32 slots per epoch). Sharding 1.0 validators also voted once per epoch, but each shard had its own committee being shuffled, so each shard was only confirmed by 1/2048 of the validator set (1/32 split across 64 shards).

Combining blocks with the 2D KZG commitment scheme also makes DAS far more efficient, as discussed earlier. Sharding 1.0 required 60 KB/s of bandwidth to check the full DA of all shards; DS needs only 2.5 KB/s.

Another exciting possibility with DS – synchronous calls between ZK-rollups and L1 Ethereum execution. Transactions from shard blobs can be immediately confirmed and written to L1, because everything is produced in the same beacon chain block. Sharding 1.0 would have ruled this out due to separate shard confirmations. This opens up an exciting design space that could be very valuable for things like shared liquidity (e.g. dAMM).

A modular base layer scales gracefully – more decentralization leads to more expansion. This is fundamentally different from what we see today. Adding more nodes to the DA tier allows you to safely increase data throughput (i.e. more room for rollup to exist on top).

There are still limits to the scalability of blockchains, but we can increase orders of magnitude higher than anything we see today. A secure and scalable base layer allows execution to proliferate on top of them. Improvements in data storage and bandwidth will also allow for higher data throughput over time.

Exceeding the DA throughput envisioned here is certainly possible, but it’s hard to say where this maximum will end up. There are no clear red lines, but some areas where assumptions will start to feel uncomfortable:

  • Data storage – this concerns DA and data retrievability. The consensus layer's role is not to guarantee data retrievability indefinitely. Its job is to make data available long enough that anyone who cares can download it, satisfying our security assumptions. After that it can get dumped anywhere – which is comfortable, because history is a 1-of-N trust assumption, and in the grand scheme of things we're not actually talking about that much data. Still, as throughput increases by orders of magnitude, this could enter uncomfortable territory within a few years.
  • Validators – DAS needs enough nodes to collectively reconstruct blocks. Otherwise, an attacker could wait around and only answer the queries they receive; if the chunks they serve are not enough to reconstruct the block, they can withhold the rest and we are out of luck. To safely increase throughput, we either need more DAS nodes or higher data bandwidth requirements per node. That isn't an issue at the throughput discussed here, but it could get uncomfortable if throughput were pushed orders of magnitude beyond this design.

Note that the builder is not the bottleneck. You need to generate KZG proofs quickly for 32 MB of data , so a GPU or reasonably powerful CPU and at least 2.5 GBit/s of bandwidth are required. It’s a dedicated role anyway, and it’s a negligible business cost to them.


13) Proto-danksharding (EIP-4844)

DS is great, but we have to be patient. PDS is designed to tide us over – it implements the necessary forward-compatible steps for DS on an accelerated timeline (targeting the Shanghai hard fork), providing orders-of-magnitude scaling in the meantime. However, it does not actually implement data sharding (i.e. validators still need to individually download all the data).

Rollups today use L1 "calldata" for storage, which lives on-chain forever. However, Rollups only need DA for a reasonable window of time, so that anyone interested has enough time to download it.

EIP-4844 introduces a new blob-carrying transaction format, which Rollups will use for data storage going forward. Blobs carry a lot of data (~125 KB) and are much cheaper than an equivalent amount of calldata. Blobs are then pruned from nodes after about a month, which keeps storage requirements down. That is plenty of time to satisfy our DA security assumptions.


Current Ethereum blocks typically average around 90 KB (call data is around 10 KB of that). PDS unlocks more DA bandwidth for blobs (target ~1 MB and max ~2 MB) as they are pruned after a month. They don’t become a permanent drag on the node.

A blob is a vector of 4096 field elements of 32 bytes each. PDS allows up to 16 blobs per block, and DS will increase this to 256.

PDS DA bandwidth: 4096 x 32 x 16 = 2 MiB per block, with a target of 1 MiB

DS DA bandwidth: 4096 x 32 x 256 = 32 MiB per block, with a target of 16 MiB
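
Those per-block figures follow directly from the blob layout:

```python
FIELD_ELEMENTS_PER_BLOB = 4096
BYTES_PER_FIELD_ELEMENT = 32
BLOB_BYTES = FIELD_ELEMENTS_PER_BLOB * BYTES_PER_FIELD_ELEMENT   # 131072 B (~128 KiB)

pds_max = BLOB_BYTES * 16 / 2**20    # 2.0 MiB max per block (target: half of that)
ds_max = BLOB_BYTES * 256 / 2**20    # 32.0 MiB max per block (target: half of that)
print(BLOB_BYTES, pds_max, ds_max)
```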

Each step scales by orders of magnitude. PDS still requires consensus nodes to fully download data, so it is conservative. DS distributes the load of storing and propagating data among validators.

Here are some goodies that EIP-4844 introduced on the way to DS:

  • Transaction format for carrying data blobs
  • KZG commitments for blobs
  • All execution layer logic required by DS
  • All execution/consensus cross-validation logic required by DS
  • Layer separation between BeaconBlock validation and DAS blobs
  • Most of the BeaconBlock logic required by DS
  • Self-adjusting independent gas prices for blobs (multidimensional EIP-1559 with exponential pricing rules)

Then DS further adds:

  • PBS
  • DAS (data availability sampling)
  • 2D KZG scheme
  • Proof-of-custody, or a similar in-protocol requirement, for each validator to verify the availability of a specific part of the shard data in each block (for roughly a month)
  • Note that blobs are introduced as a new transaction type on the execution chain, but they impose no additional requirements on the execution side. The EVM only sees the commitments attached to blobs. The execution-layer changes made with EIP-4844 are also forward compatible with DS, and no further changes are needed there – the upgrade from PDS to DS then only requires changing the consensus layer.

In PDS, data blobs are fully downloaded by consensus clients. Blobs are referenced in the beacon block body but not fully encoded there; instead of embedding the entire content in the body, the blob contents are broadcast separately as a "sidecar". Each block has one blob sidecar, which is fully downloaded in PDS; with DS, validators will instead perform DAS over it.

We previously discussed committing to blobs with KZG polynomial commitments. However, rather than using the KZG commitment directly, EIP-4844 exposes its versioned hash: a 0x01 byte (representing the version) followed by the last 31 bytes of the SHA256 hash of the KZG commitment.

We do this for easier EVM compatibility and forward compatibility:

  • EVM compatibility – KZG commitments are 48 bytes, whereas the EVM works more naturally with 32-byte values
  • Forward compatibility – if we ever switch from KZG to something else (e.g. STARKs for quantum resistance), these commitments can remain 32 bytes
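
The versioned hash described above is simple to express in code (a sketch matching the description here; the helper name is mine):

```python
import hashlib

VERSION_KZG = b"\x01"

def kzg_to_versioned_hash(kzg_commitment: bytes) -> bytes:
    # 0x01 version byte + the last 31 bytes of SHA256(commitment): the result
    # fits in one 32-byte EVM word, and the scheme can be swapped out later
    # without changing the size.
    assert len(kzg_commitment) == 48   # a compressed BLS12-381 G1 point
    return VERSION_KZG + hashlib.sha256(kzg_commitment).digest()[1:]

print(len(kzg_to_versioned_hash(b"\x00" * 48)))   # 32
```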

PDS ultimately creates a bespoke data layer – blobs get their own fee market, with a separate floating gas price and separate limits. So even if some NFT project sells a bunch of monkey land on L1, your Rollup data costs won't go up (though proof settlement costs will). This acknowledges that the dominant cost for any Rollup today is publishing its data to L1 (not proofs).

The gas fee market remains the same and data blobs are added as a new market:


The blob fee is charged in gas, but it is a variable amount of gas that adjusts according to its own EIP-1559-style mechanism, so the long-run average number of blobs per block should equal the target.

You effectively have two auctions running in parallel – one for computation and one for DA. This is a huge leap forward in efficient resource pricing.

Here are some interesting designs. For example, it might make sense to change the current gas and blob pricing mechanism from linear EIP-1559 to a new exponential EIP-1559 mechanism. The current implementation does not average out to our target block size in practice. Today, the basefee is not completely stable, causing the observed average gas usage per block to exceed the target by an average of about 3% .
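
The "exponential EIP-1559" idea can be sketched roughly as follows (illustrative constants and names, not the actual mechanism): the fee grows exponentially in the accumulated excess above the target, so sustained over-target usage becomes expensive quickly and average usage is pushed back toward the target.

```python
import math

TARGET_BLOBS_PER_BLOCK = 8   # illustrative target
UPDATE_FRACTION = 32         # illustrative smoothing constant
MIN_BLOB_FEE = 1             # illustrative price floor

def blob_base_fee(excess_blobs: int) -> float:
    # Exponential in the running excess over target, rather than the current
    # linear EIP-1559 adjustment.
    return MIN_BLOB_FEE * math.exp(excess_blobs / UPDATE_FRACTION)

excess = 0
for used in [12, 12, 12, 4, 4, 8]:   # blobs used in successive blocks
    excess = max(0, excess + used - TARGET_BLOBS_PER_BLOCK)
    print(used, round(blob_base_fee(excess), 3))
```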

Part II: History & State Management

A quick review of some basics here:

  • History – everything that has ever happened on-chain. You can stick it on a hard drive, since it doesn't need fast access. In the long run it's a 1-of-N honesty assumption.
  • State – a snapshot of all current account balances, smart contracts, etc. Full nodes (currently) all need it to validate transactions. It's too big for RAM, and hard drives are too slow – it lives on your SSD. High-throughput blockchains bloat their state far beyond what everyday people can hold on a laptop. If everyday users can't hold the state, they can't fully verify, and you can say goodbye to decentralization.

TLDR – These things get really big, so it’s hard to run a node if you let nodes store them. If it’s too hard to run a node, we ordinary people won’t do it. This sucks, so we need to make sure that doesn’t happen.

1) Calldata Gas cost reduction and total Calldata limit (EIP-4488)

PDS is an important stepping stone to DS, checking off many of its eventual requirements. Implementing PDS within a reasonable time frame brings forward the DS timeline.

An easier-to-implement Band-Aid is EIP-4488. It's less elegant, but it still addresses today's fee emergency. Unfortunately, it doesn't tick off any steps on the path to DS, so all of the unavoidable changes would still be needed later. If PDS starts to feel a little slower than we'd like, it could make sense to quickly pass EIP-4488 (it's only a few lines of code change) and then implement PDS some months later. A specific timeline has not yet been determined.

EIP-4488 has two main components:

  • Reduces the calldata cost from 16 gas per byte to 3 gas per byte
  • Adds a limit of 1 MB of calldata per block, plus an extra 300 bytes per transaction (a theoretical maximum of about 1.4 MB)

The limit is needed to prevent the worst case – a block full of calldata would reach 18 MB, far beyond what Ethereum can handle. EIP-4488 increases Ethereum's average data capacity, but its burst data capacity actually shrinks slightly because of the calldata limit (today: 30 million gas / 16 gas per calldata byte = 1.875 MB).


EIP-4488's sustained load is much higher than PDS's, because this is still calldata as opposed to blobs, which can be pruned after about a month. EIP-4488 would meaningfully accelerate history growth, making it a bottleneck for running a node. Even if EIP-4444 is implemented alongside EIP-4488, it only prunes execution-payload history after a year. The lower sustained load of PDS is clearly preferable.


2) Bounding historical data in execution clients (EIP-4444)

EIP-4444 lets clients choose to locally prune historical data (headers, bodies, and receipts) older than one year, and requires clients to stop serving that pruned historical data on the p2p layer. Pruning history lets clients cut users' disk storage requirements (currently hundreds of gigabytes and growing).

This is already important, but if EIP-4488 is implemented (as it significantly grows history) it will be essentially mandatory. Anyway, hopefully this can be done in the relatively near future. Eventually some form of historical expiry will be required, so this is a good time to deal with it.

The history is required for full synchronization of the chain, but it is not required for validating new blocks (this only requires state). Therefore, once a client has synced to the top of the chain, historical data will only be retrieved when explicitly requested via JSON-RPC or when a peer attempts to sync the chain. With the implementation of EIP-4444 , we need to find alternative solutions for these.

Clients will not be able to “full sync” using devp2p as they do now – they will instead “checkpoint sync” from a weakly subjective checkpoint, which they see as the genesis block.

Note that weak subjectivity is not an additional assumption – it is inherent in the move to PoS anyway. Because of the possibility of long-range attacks, syncing already has to start from a valid weak subjectivity checkpoint. The assumption here is simply that clients will not sync from an invalid or stale weak subjectivity checkpoint. This checkpoint must fall within the period before historical data starts getting pruned (i.e. within the past year here), or the p2p layer will be unable to serve the required data.

This will also reduce bandwidth usage on the network as more clients adopt a lightweight synchronization strategy.

3) Retrieving historical data

EIP-4444 pruning historical data after a year sounds good, and PDS prunes blobs even faster (after about a month). We absolutely need these, because we can't ask nodes to store it all and stay decentralized:

  • EIP-4488 – would likely include ~1 MB per slot long-term, adding ~2.5 TB of storage per year
  • PDS – targets ~1 MB per slot, adding ~2.5 TB of storage per year
  • DS – targets ~16 MB per slot, adding ~40 TB of storage per year
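
Those per-year figures follow directly from the 12-second slot time:

```python
SLOT_SECONDS = 12
SLOTS_PER_YEAR = 365 * 24 * 3600 / SLOT_SECONDS   # ~2.63 million slots

for name, mb_per_slot in [("EIP-4488 / PDS", 1), ("DS target", 16)]:
    tb_per_year = mb_per_slot * SLOTS_PER_YEAR / 1e6
    print(name, round(tb_per_year, 1), "TB/year")   # ~2.6 and ~42 TB/year
```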

But where does the data go? Don't we still need it? Yes – but note that losing historical data is not a risk to the protocol, only to individual applications. The Ethereum core protocol's job should not be to permanently maintain all of this consensus data.

So, who will store it? Here are some potential contributors:

  • Individual and Institutional Volunteers
  • Block explorers (e.g. etherscan.io ), API providers and other data services
  • Third-party indexing protocols like TheGraph can create incentivized marketplaces where clients can pay servers for historical data as well as Merkle proofs
  • Clients in Portal Network (currently in development) can store random parts of chain history, Portal Network automatically directs data requests to the node that owns it
  • BitTorrent , eg. A 7 GB file containing blob data from blocks is automatically generated and distributed daily
  • Application-specific protocols (e.g. Rollup ) can require their nodes to store parts of the history related to their application

The long-term data storage problem is a relatively easy one, because it is the 1-of-N trust assumption we discussed earlier. We are many years away from it being the ultimate limit on blockchain scalability.

4) Weak statelessness

OK, so we’ve handled history management pretty well, but what about state? The state problem is actually the main bottleneck for improving Ethereum’s TPS at the moment.

Full nodes take the pre-state root, execute all transactions in the block, and check that the post-state root matches what they provided in the block. To know if these transactions are valid, they currently need state at hand – verification is stateful.

Enter statelessness – we no longer need the state at hand in order to verify. Ethereum is striving for "weak statelessness," meaning that state is not needed to validate blocks, but it is needed to build them. Validation becomes a pure function – hand me a block in complete isolation and I can tell you whether it's valid.


It's acceptable that builders still need state, thanks to PBS – they'll be more centralized, high-resource entities anyway. The focus is on keeping validators decentralized. Weak statelessness gives builders a bit more work and validators far less. A great trade.

You achieve this magical stateless execution with witnesses. These are proofs of correct state access, which builders will start including in every block.

Validating a block doesn’t actually require the entire state – you only need the state read or affected by the transactions in that block. Builders will begin to include fragments of state affected by transactions in a given block, and they will prove that they correctly accessed that state through witnesses .

Let’s take an example. Alice wants to send ETH to Bob . To validate a block with this transaction, I need to know:

  • Before the transaction – Alice has 1 ETH
  • Alice's public key – so I can verify the signature
  • Alice's nonce – so I know the transactions were sent in the correct order
  • After executing the transaction – Bob has gained 1 ETH and Alice has lost 1 ETH

In a weakly stateless world, the builder adds the aforementioned witness data to the block and proves its accuracy. Validators receive the block, execute it, and decide if it is valid. That’s all!
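
To make the pure-function framing concrete, here is a deliberately tiny toy (my own illustration – four fixed accounts in a depth-2 Merkle tree stand in for state, whereas the real design will use Verkle tries): the validator checks the witness against the pre-state root, executes the transfer, and checks the post-state root, all without holding any state of its own.

```python
import hashlib
from dataclasses import dataclass

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def leaf(balance: int, nonce: int) -> bytes:
    return h(balance.to_bytes(16, "big"), nonce.to_bytes(8, "big"))

@dataclass
class Witness:
    alice: tuple[int, int]   # (balance, nonce) of the sender, leaf 0
    bob: tuple[int, int]     # (balance, nonce) of the recipient, leaf 1
    sibling: bytes           # hash of the untouched right subtree (leaves 2 and 3)

def state_root(alice, bob, sibling: bytes) -> bytes:
    return h(h(leaf(*alice), leaf(*bob)), sibling)

def validate_transfer(pre_root: bytes, post_root: bytes, amount: int, w: Witness) -> bool:
    # 1. The witness must be consistent with the committed pre-state.
    if state_root(w.alice, w.bob, w.sibling) != pre_root:
        return False
    # 2. Execute: Alice pays `amount` to Bob, and her nonce increments.
    (a_bal, a_nonce), (b_bal, b_nonce) = w.alice, w.bob
    if a_bal < amount:
        return False
    new_alice, new_bob = (a_bal - amount, a_nonce + 1), (b_bal + amount, b_nonce)
    # 3. The recomputed post-state root must match the one claimed in the block.
    return state_root(new_alice, new_bob, w.sibling) == post_root

# The builder (who holds full state) prepares the witness and both roots:
untouched = h(leaf(5, 0), leaf(7, 3))                 # leaves 2 and 3
w = Witness(alice=(10, 0), bob=(2, 1), sibling=untouched)
pre = state_root(w.alice, w.bob, untouched)
post = state_root((9, 1), (3, 1), untouched)
print(validate_transfer(pre, post, amount=1, w=w))    # True – verified statelessly
```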

Here’s what it means from a validator’s perspective:

  • Gone is the huge SSD requirement for holding state – a critical bottleneck for scaling today.
  • Since you are now also downloading witness data and proofs, bandwidth requirements tick up a bit. This would be a real bottleneck with Merkle-Patricia witnesses, but a modest one with Verkle tries.
  • You still execute the transaction to fully validate. Statelessness admits that this is not the current bottleneck for scaling Ethereum.

Weak statelessness also allows Ethereum to relax its self-imposed constraints on execution throughput, since state bloat is no longer a pressing concern. Roughly tripling the gas limit might be reasonable.

Most user execution will happen on L2 anyway, but higher L1 throughput benefits even them. Rollups rely on Ethereum for DA (publishing to shards) and settlement (which requires L1 execution). As Ethereum scales its DA layer, the amortized cost of publishing proofs may become a larger share of Rollup costs (especially for ZK-rollups).

5) Verkle Tries

We glossed over how these witnesses actually work. Ethereum currently uses Merkle-Patricia trees to represent state, but the Merkle proofs required would be far too large for such witnesses to be viable.

Ethereum will move to Verkle tries for state storage. Verkle proofs are far more efficient, which makes them viable witnesses for weak statelessness.

First, let's review what a Merkle tree looks like. Every transaction is hashed – the hashes at the bottom are the "leaves". Each hash above them is a "node", the hash of the two "child" hashes below it. The final hash at the top is the "Merkle root".


This is a useful data structure for proving that a transaction is included without downloading the whole tree. For example, to verify that transaction H4 is included, you only need H3, H12, and H5678 in the Merkle proof; H12345678 comes from the block header. A light client can therefore ask a full node for these hashes and hash them together along the route up the tree. If the result is H12345678, we have successfully proven that H4 is in the tree.
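
Here is that exact example as a small script (toy transaction data; the point is that only three hashes are needed to climb from H4 to the root):

```python
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

# Build the tree over eight toy transactions.
tx = [f"tx{i}".encode() for i in range(1, 9)]
H = [h(t) for t in tx]                                   # H1..H8
H12, H34, H56, H78 = h(H[0], H[1]), h(H[2], H[3]), h(H[4], H[5]), h(H[6], H[7])
H1234, H5678 = h(H12, H34), h(H56, H78)
root = h(H1234, H5678)                                   # H12345678, from the block header

def verify_h4(leaf_hash: bytes, h3: bytes, h12: bytes, h5678: bytes, root: bytes) -> bool:
    h34 = h(h3, leaf_hash)           # recompute H34
    h1234 = h(h12, h34)              # recompute H1234
    return h(h1234, h5678) == root   # compare against the header's root

print(verify_h4(H[3], H[2], H12, H5678, root))           # True
print(verify_h4(h(b"fake tx"), H[2], H12, H5678, root))  # False
```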

However, the deeper the tree, the longer the path to the bottom, so you’ll need more items to prove it. Therefore, shallow and wide trees seem to favor efficient proofs.

The problem is that if you want to make the Merkle tree wider by adding more children under each node , that will be very inefficient. You need to hash all siblings together to climb the tree, so you need to receive more sibling hashes for Merkle proofs. This will make the proof size huge.

This is where efficient vector commitments come in. The hashes used in Merkle trees are in fact vector commitments – they just only commit to two elements. So what we want is a vector commitment for which we don't need all the siblings to verify. With that, we can make the tree wider and reduce its depth. That's how we get efficient proof sizes – by reducing the amount of information that has to be provided.

A Verkle trie is similar to a Merkle tree, but it commits to its children with an efficient vector commitment (hence the name "Verkle") instead of a simple hash. So the basic idea is that each node can have many children, yet I don't need all of the children to verify a proof. The proof is constant size regardless of the width.
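
Some rough numbers show the trade-off (my own illustration; real Verkle multiproofs compress much further than the per-level element counted here):

```python
import math

N = 10**9        # rough number of state items
HASH = 32        # bytes per sibling hash in a plain Merkle proof
VC_ELEM = 48     # bytes per vector-commitment proof element (illustrative)

for width in [2, 16, 256]:
    depth = math.ceil(math.log(N, width))
    naive_merkle = depth * (width - 1) * HASH   # every sibling at every level
    vector_commit = depth * VC_ELEM             # one small element per level
    print(width, depth, naive_merkle, vector_commit)
# At width 256, a naive hash-based proof needs ~32 KB of siblings,
# while the vector-commitment proof stays a few hundred bytes.
```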

We've actually already covered a great example of one such possibility – KZG commitments can also be used as vector commitments. In fact, that's what Ethereum developers originally planned to use here. They have since moved to Pedersen commitments to fill the same role. These are based on an elliptic curve (Bandersnatch in this case), and each commits to 256 values (much better than two!).

So why not make the tree as wide and shallow as possible? That would be great for the verifier, who now has a super compact proof. But there's a practical trade-off: the prover needs to be able to compute that proof, and the wider the tree, the harder that becomes. So these Verkle tries will land between the extremes, at a width of 256 values.

6) State expiry

Weak statelessness removes state bloat as a constraint on validators, but the state doesn't magically disappear. Transactions pay a bounded one-time cost, yet they impose a permanent tax on the network by adding to the state. State growth remains a permanent drag on the network, and something needs to be done about the underlying problem.

This is where state expiry comes in. State that has been inactive for a long time (say, a year or two) gets chopped off even from what block builders need to carry. Active users won't notice a thing, and useless state that is no longer needed gets discarded.

If you ever need to touch expired state, you just show a proof and reactivate it. This falls back on the 1-of-N storage assumption – as long as someone somewhere still has the full history (block explorers, etc.), you can get what you need from them.

Weak statelessness blunts the base layer's immediate need for state expiry, but it's a good thing to have in the long run, especially as L1 throughput increases. It will be an even more useful tool for high-throughput Rollups – L2 state will grow at rates orders of magnitude higher, enough to drag down even high-powered builders.

Part III – MEV

PBS is necessary to implement DS safely, but recall that it was originally designed to counter MEV's centralizing power. You'll notice a recurring trend in Ethereum research today – MEV is now front and center in cryptoeconomics.


Designing around MEV is critical to maintaining security and decentralization. The basic protocol-level approaches are:

  1. Minimize harmful MEV (e.g. single-slot finality, single secret leader election)
  2. Democratize the rest (e.g. MEV-Boost, PBS, MEV smoothing)

The remainder must be easy to capture and must be spread evenly among validators. Otherwise, it centralizes the validator set, since ordinary validators can't compete with sophisticated searchers. This is exacerbated by the fact that post-merge MEV will be a much higher share of validator rewards (staking issuance is far lower than the inflation paid to miners). It cannot be ignored.

1) Today’s MEV Supply Chain

The sequence of events for today is as follows:


Mining pools play the builder role here. MEV searchers relay bundles of transactions (with their respective bids) to mining pools via Flashbots. The pool operator aggregates a full block and passes the header to individual miners. Miners attest to it with their PoW, giving it weight in the fork choice rule.

Flashbots came about to prevent vertical integration across the whole stack – which would open the door to censorship and other nasty externalities. When Flashbots arrived, mining pools had already begun striking exclusive deals with trading firms to extract MEV. Instead, Flashbots gave them an easy way to aggregate MEV bids and head off vertical integration (by implementing MEV-geth).

After the Merge, mining pools go away. We want to keep the door open for validators that can reasonably run at home, which requires finding someone else to take on the specialized building role. Your home validator node won't be as good at capturing MEV as a hedge fund paying quant salaries. Left unchecked, that would centralize the validator set, since ordinary people can't compete. Structured properly, the protocol can instead redirect MEV revenue toward staking yield for everyday validators.


2) MEV-Boost

Unfortunately, in-protocol PBS simply won't be ready at the time of the Merge. Flashbots once again offers a stepping-stone solution: MEV-Boost.

Post-merge validators will, by default, receive public mempool transactions straight into their execution clients. They can package these up, hand them to their consensus client, and broadcast them to the network. (If you need a refresher on how Ethereum's consensus and execution clients work together, I cover that in Part IV.)

But as discussed, your validator doesn't know how to extract MEV like that, so Flashbots offers an alternative. MEV-Boost plugs into your consensus client, letting you outsource specialized block building. Importantly, you retain the option of falling back to your own execution client.

MEV searchers will keep doing what they do today. They run specific strategies (statistical arbitrage, atomic arbitrage, sandwiches, etc.) and bid with their bundles. Builders then aggregate all the bundles they see, plus any private order flow (e.g. from Flashbots Protect), into the optimal full block. Builders pass only the block header to validators, via relays connected to MEV-Boost. Flashbots intends to run the relay and builder to start, with plans to decentralize over time, though whitelisting additional builders may be slow.


MEV-Boost requires validators to trust the relay – the consensus client receives the block header and signs it, and only then is the block body revealed. The relay's purpose is to prove to the proposer that the body is valid and exists, so the validator doesn't have to trust the builder directly.

When the in-protocol PBS is ready, it will incorporate what MEV-Boost provides in the meantime. PBS provides the same separation of powers, allowing easier decentralization for builders and removing the need for proposers to trust anyone.


3) Committee-driven MEV smoothing

PBS also opens the door for another cool idea – committee-driven MEV smoothing.

We've seen that the ability to extract MEV is a centralizing force on the validator set, but so is how it's distributed. The high block-to-block variability of MEV rewards encourages pooling many validators to smooth out rewards over time (as we see with mining pools today, albeit to a lesser degree here).

The default is that the builder's full payment goes to the actual block proposer. MEV smoothing would instead split that payout across many validators. A committee of validators checks the proposed block and attests that it is indeed the highest-bidding block. If all goes well, the block proceeds and the reward is split between the committee and the proposer.

This also solves another problem – out-of-band bribes. Proposers could, for example, be incentivized to submit a suboptimal block and take a direct out-of-band bribe, hiding the payment from delegators. This attestation keeps proposers in check.

In-protocol PBS is a prerequisite for MEV smoothing: it requires an understanding of the builder market and explicit bids submitted in-protocol. There are several open research questions here, but it's an exciting proposal that is, once again, critical to keeping validators decentralized.

4) Single-slot finality

Fast finality is great. Waiting roughly 15 minutes is not optimal for UX or cross-chain communication. More importantly, the delay creates an MEV reorg problem.

Post-merge Ethereum will already offer much stronger confirmations than today – thousands of validators attesting to each block, versus miners competing and potentially mining on the same block height without any voting. This makes reorgs exceedingly unlikely. However, it is still not true finality: if the last block contained some juicy MEV, validators might be tempted to try to reorg the chain and steal it for themselves.

Single-slot finality eliminates this threat. Reverting a finalized block requires at least one-third of validators, and their stake is immediately slashed (millions of ETH).

I won't dwell too much on the underlying mechanics here. Single-slot finality is a long way out on Ethereum's roadmap, and it's a very open design space.

In today's consensus protocol (without single-slot finality), Ethereum only needs 1/32 of validators to attest to each slot (about 12,000 out of roughly 380,000 today). Extending that voting to the full validator set with BLS signature aggregation in a single slot takes more work – it means compressing hundreds of thousands of votes into a single verification.


Vitalik breaks down some interesting solutions here .

5) Single Secret Leader Election (SSLE)

SSLE aims to patch another MEV attack vector we will face after the Merge.

The list of beacon chain validators and the upcoming leader selection list are public, and it is easy to de-anonymize them and map their IP addresses. You can probably see the problem here.

More sophisticated validators can use tricks to hide themselves, but small home validators are particularly vulnerable to being doxxed and subsequently DDoSed. This can easily be exploited for MEV.

Suppose you are the proposer for block n and I am the proposer for block n+1. If I know your IP address, I can cheaply DDoS you so that you time out and fail to produce your block. I can then capture the MEV from both of our slots, doubling my rewards. EIP-1559's elastic block sizes (the per-block gas maximum is twice the target) exacerbate this, since I can cram the transactions that should have spanned two blocks into my single, now twice-as-large block.

Home validators might give up validating if they're under constant attack. SSLE ensures that no one except the proposer knows when their turn is coming, neutralizing this attack. It won't be live at the Merge, but hopefully it can be implemented soon after.

Part IV – The Merge

Okay, to be clear, I was joking earlier. I actually do think (and hope) the Ethereum Merge will come relatively soon.


We’re all talking about this topic, so I feel obligated to give it at least a brief introduction.

1) Post-merge clients

Today, you run a monolithic client (eg, Go Ethereum , Nethermind , etc.) that handles everything. Specifically, a full node performs the following operations simultaneously:

  • Execution – Executes every transaction in the block to ensure validity. Get the pre-state root, do everything, and check that the generated post-state root is correct
  • Consensus – verify that you are on the heaviest chain, i.e. the one with the most accumulated work (highest PoW), a.k.a. Nakamoto consensus

They are indivisible because full nodes follow not only the heaviest chain, but also the heaviest valid chain. That’s why they are full nodes and not light nodes. Even if a 51% attack occurs, full nodes will not accept invalid transactions.


The Beacon Chain currently runs only consensus, giving PoS a test run – execution is not included. A terminal total difficulty will eventually be set, at which point the current Ethereum execution blocks will merge into Beacon Chain blocks, forming a single chain.


Post-merge, a full node will run two separate, interoperating clients:

  • Execution Client (aka “ Eth1 Client”) – The current Eth 1.0 client continues to handle execution. They process blocks, maintain memory pools, manage and synchronize state. The PoW part was ripped out.
  • Consensus Client (aka “ Eth2 Client”) – Current Beacon Chain clients continue to handle PoS consensus. They keep track of the head of the chain, gossip and prove blocks, and earn validator rewards.

Consensus clients receive Beacon Chain blocks, execution clients run the transactions within them, and if everything checks out, the consensus client follows the chain. You will be able to mix and match execution and consensus clients of your choice – all are designed to interoperate. A new Engine API is being introduced for the two clients to communicate with each other.


2) Post-merge consensus

Today’s Nakamoto Consensus is simple. Miners create new blocks and add them to the heaviest valid chain observed.

Post-merge Ethereum moves to GASPER – the combination of Casper FFG (the finality gadget) and LMD GHOST (the fork choice rule). The TLDR is that it is a consensus mechanism that favors liveness over safety.

The difference is that safety-favoring consensus algorithms (e.g. Tendermint) halt when they fail to gather the necessary votes (here, 2/3 of validators). Liveness-favoring chains (e.g. PoW with Nakamoto consensus) keep building an optimistic chain regardless, but they cannot reach finality without enough votes. Bitcoin and Ethereum today never reach finality – you simply assume that a reorg won't happen after enough blocks.

However, post-merge Ethereum will also achieve finality through periodic checkpoints with enough votes. Each 32 ETH instance is an individual validator, and there are already over 380,000 Beacon Chain validators. An epoch consists of 32 slots, and all validators are split up so that each attests to one slot per epoch (meaning roughly 12,000 attestations per slot). The fork choice rule, LMD GHOST, then determines the current head of the chain based on these attestations. A new block is added every slot (12 seconds), so an epoch lasts 6.4 minutes. Finality is generally reached with the necessary votes after two epochs (i.e. 64 slots, though it can take up to 95).
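
For reference, the timing math:

```python
SLOT_SECONDS = 12
SLOTS_PER_EPOCH = 32

epoch_minutes = SLOT_SECONDS * SLOTS_PER_EPOCH / 60
print(epoch_minutes)        # 6.4 minutes per epoch
print(2 * epoch_minutes)    # 12.8 minutes to typical finality – the "~15 minutes" from Part III
```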

Conclusion

All roads lead to the endgame of centralized block production, decentralized and trustless block verification, and censorship resistance. Ethereum's roadmap has this vision squarely in its sights.

Ethereum aims to be the ultimate unified DA and settlement layer – massively decentralized and secure at the base, with scalable computation built on top. This condenses cryptographic assumptions down to one robust layer. A unified modular base layer that retains execution also captures the highest value in L1 design – enabling monetary premium and economic security, as I recently covered (now open source here).

I hope this gives you a clearer picture of how the threads of Ethereum research weave together. There are a lot of moving pieces, all of them cutting-edge, and a very big picture to wrap your head around.

Fundamentally, it all comes back to that single vision. Ethereum offers a compelling path to massive scalability while cherishing those values ​​that we care so much about in this space.

Posted by: CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/the-hitchhiker-s-guide-to-ethereum-understanding-the-ethereum-roadmap/