The Hitchhiker's Guide to Ethereum (Part 2)

Part I – The Road to Danksharding

Part II – History and State Management

Here’s a quick review of some basics:

  • History – everything that has ever happened on the chain. History doesn't need quick access, so you can put it on a hard drive. In the long run, history rests on an honest 1-of-N assumption: as long as one party somewhere stores it, anyone can retrieve it.
  • State – a snapshot of all current account balances, smart contracts, etc. Full nodes (currently) all need the state in order to validate transactions. State is too big for RAM, and hard drives are too slow, so it lives on SSDs. A high-throughput blockchain's state keeps ballooning, growing far faster than what we can keep on our everyday laptops. If everyday users can't hold the state, they can't fully verify the chain, which undermines decentralization.

In short: state is very large, so requiring nodes to hold it makes running a node hard, and if running a node is too hard, we ordinary people won't do it. We need to make sure that doesn't happen.

Calldata Gas Cost Reduction and Total Calldata Limit (EIP-4488)

Proto-danksharding is a good stepping stone to Danksharding, and it satisfies many of the end requirements. Implementing proto-danksharding within a reasonable timeframe can pull forward the timeline for Danksharding's arrival.

An easier-to-implement stopgap is EIP-4488. It's less elegant, but it addresses the urgent problem of gas fees. Unfortunately, EIP-4488 doesn't implement the intermediate steps on the road to Danksharding, so all of the unavoidable changes would still need to be made later. If proto-danksharding starts to feel slower than we'd like, congestion could be patched quickly with EIP-4488 (just a few lines of code), with proto-danksharding following perhaps six months later (timelines may vary).

EIP-4488 has two main components:

  • Reduces the cost of calldata from 16 gas per byte to 3 gas per byte
  • Adds a calldata limit of 1 MB per block, plus an extra 300 bytes per transaction (a theoretical max of roughly 1.4 MB)

The limit is needed to prevent the worst-case scenario – a block stuffed full of calldata would otherwise reach 18 MB, far more than Ethereum can handle. So while EIP-4488 increases Ethereum's average data capacity, its calldata limit means Ethereum's burst data capacity actually decreases slightly (today's burst is 30 million gas / 16 gas per calldata byte = 1.875 MB).
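
To make these capacity numbers concrete, here is the back-of-the-envelope arithmetic in Python (a rough sketch: the 21,000-gas minimum transaction is an assumption used to bound the per-transaction allowance, and the 1 MB base limit is treated as an even 1,000,000 bytes):

```python
GAS_LIMIT = 30_000_000   # current block gas limit
TX_BASE_GAS = 21_000     # minimum gas for a simple transaction (assumption)

# Burst capacity today: a block stuffed with 16-gas calldata bytes.
burst_today_mb = GAS_LIMIT / 16 / 1e6                # ~1.875 MB

# EIP-4488 cap: 1 MB base, plus 300 bytes per transaction.
max_txs = GAS_LIMIT // TX_BASE_GAS                   # ~1,428 transactions
capped_max_mb = (1_000_000 + 300 * max_txs) / 1e6    # ~1.43 MB

print(f"burst capacity today:      {burst_today_mb:.3f} MB")
print(f"EIP-4488 theoretical max: ~{capped_max_mb:.2f} MB")
```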


Because EIP-4488's calldata lives forever while proto-danksharding's data blobs can be deleted after about a month, EIP-4488 carries a much higher sustained load. Under EIP-4488, historical data would grow significantly faster and become a bottleneck for running nodes. Even if EIP-4444 is implemented in sync with EIP-4488, execution-payload history is only pruned after a year, so proto-danksharding's lower persistent load is clearly preferable.


Constraining historical data in execution clients (EIP-4444)

EIP-4444 lets clients locally prune historical data (block headers, block bodies, and receipts) older than one year, and it requires clients to stop serving that pruned history at the p2p layer. Pruning history reduces users' disk requirements (currently hundreds of gigabytes and growing).

Pruning historical data is important in its own right, but it becomes effectively mandatory if EIP-4488 is implemented, since EIP-4488 would significantly accelerate historical data growth. Either way, hopefully this happens on a relatively short timeline. Some form of history expiry will eventually be required, so now is a good time to deal with it.

History is needed to fully sync the chain from genesis, but not to validate new blocks (that only requires state). So once a client has synced to the tip of the chain, historical data is only retrieved when explicitly requested over JSON-RPC, or when a peer attempts to sync the chain. With EIP-4444 in place, we'll need alternative solutions for these.

Clients will no longer be able to "full sync" over devp2p as they do today – instead they will "checkpoint sync" from a weak subjectivity checkpoint, which they will treat as the genesis block.

Note that weak subjectivity is inherent to the move to PoS – it's not an added assumption. We have to sync from a valid weak subjectivity checkpoint anyway to prevent long-range attacks; the assumption here is simply that clients won't sync from an invalid or stale checkpoint. This checkpoint must fall within the window before historical data starts being pruned (i.e., within a year), otherwise the p2p layer won't be able to serve the required data.
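
As a toy illustration of that rule (not any client's actual logic; the one-year window is an assumption tied to EIP-4444's pruning horizon):

```python
import time

# Assumed window: a checkpoint older than this can't be served by peers
# once EIP-4444-style pruning kicks in.
WEAK_SUBJECTIVITY_WINDOW = 365 * 24 * 3600  # seconds

def can_checkpoint_sync(checkpoint_timestamp: float, is_trusted: bool) -> bool:
    """Only sync from a checkpoint we trust AND that is recent enough
    that peers still serve the history that follows it."""
    age = time.time() - checkpoint_timestamp
    return is_trusted and age < WEAK_SUBJECTIVITY_WINDOW

# Example: a checkpoint obtained out-of-band three months ago is fine.
print(can_checkpoint_sync(time.time() - 90 * 24 * 3600, is_trusted=True))  # True
```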

As more clients adopt this lightweight sync strategy, the network's bandwidth usage will drop as well.

Retrieving historical data

EIP-4444 prunes historical data after one year; proto-danksharding prunes blobs even faster, after roughly a month. Both are necessary, because we can't require nodes to store all of this data and stay decentralized (the arithmetic behind these figures is sketched after the list):

  • EIP-4488 – would likely mean ~1 MB per slot in the long run, adding ~2.5 TB of storage per year
  • Proto-danksharding – targets ~1 MB per slot, adding ~2.5 TB of storage per year
  • Danksharding – targets ~16 MB per slot, adding ~40 TB of storage per year
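
The arithmetic behind those figures, assuming one slot every 12 seconds:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600
SLOT_SECONDS = 12  # one slot every 12 seconds

def tb_per_year(mb_per_slot: float) -> float:
    """Annual storage growth if every slot carries `mb_per_slot` MB."""
    slots_per_year = SECONDS_PER_YEAR / SLOT_SECONDS  # ~2.63 million slots
    return mb_per_slot * slots_per_year / 1e6         # MB -> TB

print(f"~1 MB/slot  -> {tb_per_year(1):.1f} TB/year")   # ~2.6 TB
print(f"~16 MB/slot -> {tb_per_year(16):.1f} TB/year")  # ~42 TB
```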

But where does this pruned historical data go? Don't we still need it? Yes, we do. Note, however, that losing historical data is a risk to individual applications, not to the protocol itself, so it shouldn't be the Ethereum core protocol's job to maintain all of this consensus data in perpetuity.

So, who will store this data? Here are some potential contributors:

  • Individual and institutional volunteers
  • Block explorers (like etherscan.io), API providers, and other data services
  • Third-party indexing protocols like TheGraph, which could create incentivized marketplaces where clients pay servers for historical data with Merkle proofs
  • Portal Network (currently under development) clients, which could each store random pieces of chain history, with the network automatically routing data requests to the nodes that hold them
  • BitTorrent – for example, automatically generating and distributing a ~7 GB file of the blocks' blob data each day
  • Application-specific protocols such as rollups, which can require their nodes to store the slice of history relevant to their application

The long-term data storage problem is a relatively easy one, because, as discussed earlier, it rests on a 1-of-N trust assumption – and we are many years away from this becoming the binding constraint on blockchain scalability.

Weak statelessness

Now that we have a good grasp of managing history, what about state? This is actually the main bottleneck to improving Ethereum's TPS today.

Full nodes take the pre-state root, execute all of a block's transactions, and check that the post-state root matches the one provided in the block. To know whether those transactions are valid, they currently need the state in hand – validation is stateful.

Going stateless means performing your role without holding the full state. Ethereum is striving for "weak statelessness": state is not required to validate blocks, but it is required to build them. Validation becomes a pure function – give me a block in complete isolation, and I can tell you whether it's valid.


This is acceptable under PBS: block builders need state anyway, and they're more centralized, high-resource entities. The focus is on decentralizing validators. Weak statelessness gives builders slightly more work and validators much less – a good trade-off.

This magical stateless execution is achieved through witnesses – builders will include proofs of correct state access in every block. Validating a block doesn't actually require the whole state, only the state being read or affected by that block's transactions. Builders will include the fragments of state touched by a given block's transactions, along with witnesses proving that they accessed that state correctly.

For example: Alice wants to send 1 ETH to Bob. To verify a block containing this transaction, I need to know:

  • Before the transaction – Alice has 1 ETH
  • Alice's public key – so I know the signature is valid
  • Alice's nonce – so I know the transactions are executed in the correct order
  • After executing the transaction – Bob has 1 ETH more, Alice has 1 ETH less

In a weakly stateless world, the builder adds this witness data and a proof of its correctness to the block. The validator receives the block, executes it, and decides whether it's valid. That's it!
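
A minimal sketch of validation as a pure function, with `verify_witness` and `execute` as hypothetical stand-ins for real witness verification and the EVM:

```python
from dataclasses import dataclass

@dataclass
class Block:
    pre_state_root: bytes   # state root this block builds on
    post_state_root: bytes  # claimed state root after execution
    transactions: list      # the block's transactions
    witness: dict           # fragments of state these transactions touch
    proof: bytes            # proof the witness matches pre_state_root

# Hypothetical helpers, named for illustration only.
def verify_witness(witness: dict, proof: bytes, state_root: bytes) -> bool: ...
def execute(transactions: list, witness: dict) -> bytes: ...

def validate(block: Block) -> bool:
    """Weak statelessness as a pure function: everything needed to judge
    the block travels with the block itself - no local state required."""
    # 1. The witness must really belong to the claimed pre-state.
    if not verify_witness(block.witness, block.proof, block.pre_state_root):
        return False
    # 2. Re-execute the transactions against only the witnessed state.
    result_root = execute(block.transactions, block.witness)
    # 3. The claimed post-state root must match what we computed.
    return result_root == block.post_state_root
```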

Here are some conclusions from a validator's perspective:

  • The huge SSD requirement for holding state is gone – a critical bottleneck to scaling today
  • Bandwidth requirements go up a bit, since witnesses and proofs must now be downloaded. This would be a real bottleneck with Merkle-Patricia trees, but not with Verkle tries.
  • You still execute every transaction to fully validate a block. Statelessness acknowledges that execution is not currently the bottleneck to scaling Ethereum.

With state bloat no longer a pressing issue, weak statelessness also lets Ethereum relax the self-imposed limits on its execution throughput – roughly a 3x increase in the gas limit becomes reasonable.

At that point, most user execution will happen on L2, but higher L1 throughput still works in their favor. Rollups rely on Ethereum for data availability (publishing to shards) and settlement (which requires L1 execution). As Ethereum scales its data availability layer, the amortized cost of publishing proofs may become a larger share of rollup costs (especially for ZK-rollups).

Verkle Tries

We glossed over how these witnesses actually work. Ethereum currently uses Merkle-Patricia trees to represent state, but the Merkle proofs required would be far too large for such witnesses to be practical.

Ethereum will instead move to Verkle tries to store state. Verkle proofs are far more efficient, making them viable witnesses for weak statelessness.

First, a recap of what a Merkle tree is. Every transaction is hashed, and the hashes at the bottom of the tree are called "leaves." All of the hashes are "nodes," and each node is the hash of the two "child" nodes beneath it. The final resulting hash is the "Merkle root."


This data structure lets you prove that a transaction is included without downloading the whole tree. Take a tree of eight leaves, H1 through H8, whose root is H12345678: to verify that transaction H4 is included, the Merkle proof only needs H3, H12, and H5678. We have H12345678 from the block header, so a light client can ask a full node for those hashes and hash the values together along the route up the tree. If the result is H12345678, we've proven that H4 is in the tree.
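
Here's a minimal sketch of that verification logic (SHA-256 is used purely for illustration):

```python
import hashlib

def h(left: bytes, right: bytes) -> bytes:
    """Hash of two child nodes."""
    return hashlib.sha256(left + right).digest()

def verify_merkle_proof(leaf: bytes, proof: list, root: bytes) -> bool:
    """`proof` lists (sibling_hash, sibling_is_left) pairs from leaf to root.
    For H4 above, that's [(H3, True), (H12, True), (H5678, False)]."""
    node = leaf
    for sibling, sibling_is_left in proof:
        node = h(sibling, node) if sibling_is_left else h(node, sibling)
    return node == root  # equals H12345678 -> inclusion proven

# Tiny demo with four leaves: prove leaf L3 sits under the root.
L1, L2, L3, L4 = (hashlib.sha256(bytes([i])).digest() for i in range(1, 5))
root = h(h(L1, L2), h(L3, L4))
assert verify_merkle_proof(L3, [(L4, False), (h(L1, L2), True)], root)
```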

But the deeper the tree, the longer the route to the bottom, and the more items you need for the proof. Shallow, wide trees therefore lend themselves to efficient proofs.

The problem is that naively widening a Merkle tree by giving each node more children is very inefficient: to hash your way up the tree you need all of the sibling hashes at every level, so the Merkle proof would have to include far more sibling hashes. Proofs would become enormous.

This is where efficient vector commitments come in. The hashes used in Merkle trees are effectively vector commitments that can only commit to two elements efficiently. What we want is a vector commitment that lets us widen the tree and reduce its depth without needing all of the sibling hashes for verification. That's how we get efficient proof sizes – by shrinking the amount of information that has to be provided.

A Verkle trie is similar to a Merkle tree, but it commits to its children with an efficient vector commitment (hence the name "Verkle") instead of a simple hash. So each node can have many children, yet you don't need all of them to verify a proof – proofs are constant-size regardless of the width.

In fact, the KZG commitments discussed earlier can also serve as vector commitments, and Ethereum developers originally planned to use them here, but they later turned to Pedersen commitments for the same role. These commitments are based on an elliptic curve (Bandersnatch, in this case) and commit to 256 values (much better than just two!).

So why not build the tree as shallow and wide as possible? Compact proofs are great for verifiers, but there's a practical trade-off: the prover must be able to compute the proof, and the wider the tree, the harder that gets. These Verkle tries will therefore sit between the two extremes, with a width of 256 values.
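
A quick calculation shows the tension this resolves: hash-based proofs explode as the tree widens, while vector commitments do not (illustrative numbers only):

```python
HASH_BYTES = 32  # size of one hash

def tree_depth(n_leaves: int, arity: int) -> int:
    """Levels needed for a tree of the given width to cover n_leaves."""
    depth, capacity = 0, 1
    while capacity < n_leaves:
        capacity *= arity
        depth += 1
    return depth

def merkle_proof_bytes(n_leaves: int, arity: int) -> int:
    """A hash-based Merkle proof carries (arity - 1) siblings per level."""
    return tree_depth(n_leaves, arity) * (arity - 1) * HASH_BYTES

N = 256 ** 3  # ~16.8 million state items, purely illustrative

print(merkle_proof_bytes(N, 2))    # binary tree: 24 levels -> 768 bytes
print(merkle_proof_bytes(N, 256))  # wide hash tree: 3 * 255 * 32 -> 24,480 bytes
# A width-256 Verkle trie keeps the shallow 3-level depth, but its vector
# commitments replace the 255 siblings per level with a constant-size proof.
```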

State expiry

Weak statelessness removes the state-bloat constraint on validators, but state doesn't magically disappear. Transactions pay a finite, one-time cost, yet they impose a permanent tax on the network by adding to the state. State growth remains a permanent drag, and something must be done about the underlying problem.

State that has been inactive for a long time (say, a year or two) could even be pruned from what block builders need to carry. Active users wouldn't notice a thing, and useless state that no one needs could simply be dropped.

If you ever need to revive expired state, you just present a proof to reactivate it. This falls back on the 1-of-N storage assumption: as long as someone still has the full history (block explorers, etc.), you can get what you need from them.

Weak statelessness blunts the base layer's urgent need for state expiry, but expiry is still a good thing in the long run, especially as L1 throughput increases. It will be an even more useful tool for high-throughput rollups, whose state would grow at a vastly higher rate and weigh down even high-performance builders.

Part III – MEV

PBS is necessary to safely implement Danksharding, but it was originally designed to counter the centralizing force of MEV. And a recurring theme in Ethereum research today is that MEV now sits front and center in cryptoeconomics.


Designing blockchains with MEV in mind is critical to preserving both security and decentralization. The basic protocol-level approaches are:

  1. Mitigate harmful MEV as much as possible (e.g., single-slot finality, single secret leader election)
  2. Democratize the rest (e.g., MEV-Boost, PBS, MEV smoothing)

The remaining MEV must be easy to capture and widely spread among validators; otherwise, validator sets will centralize, because ordinary validators can't compete with sophisticated searchers. This matters all the more because, after the merge, MEV will make up a much larger share of validator rewards (staking issuance is far lower than the inflation miners receive today), so validator centralization can't be ignored.

Current MEV Supply Chain

The current sequence of events looks like this:


Mining pools play the builder role here. MEV searchers forward bundles of transactions (with their respective bids) to mining pools via Flashbots. The pool operator aggregates a full block and passes the block header along to individual miners. Miners attest to it with PoW, giving the block its weight in the fork-choice rule.

Flashbots emerged to prevent vertical integration across the whole stack, which would open the door to censorship and other nasty externalities. When Flashbots started, mining pools had already begun striking exclusive deals with trading firms to extract MEV. Instead, Flashbots gave them an easy way to aggregate MEV bids and head off vertical integration (via MEV-geth).

After the merge, mining pools disappear. Home validators are generally nowhere near as good at capturing MEV as a hedge fund full of quants; left unchecked, this would centralize the validator set, since the average person couldn't compete. If structured properly, though, the protocol can redirect MEV revenue toward the staking rewards of everyday validators. So we want a way for home validators to keep operating sensibly, which requires someone to take on the specialized builder role.


MEV-Boost

Unfortunately, in-protocol PBS simply won't be ready at the merge. Flashbots once again offers a stopgap, out-of-protocol solution: MEV-Boost.

By default, merged validators will receive transactions from the public mempool directly into their execution clients. They can package these transactions, hand the block to their consensus client, and broadcast it to the network. (Part IV will cover how Ethereum's consensus and execution clients work together.)

But mom-and-pop validators don't know how to extract MEV this way, so Flashbots offers an alternative: MEV-Boost plugs into your consensus client and lets you outsource the specialized block building. Importantly, you retain the ability to fall back on your own execution client.

MEV searchers will keep doing what they do today – running specific strategies (statistical arbitrage, atomic arbitrage, sandwiching, etc.) and bidding for their bundles to be included. Builders then aggregate all of the bundles they see, plus any private order flow (e.g., from Flashbots Protect), into the most profitable full block, and pass only the block header to validators via relays running MEV-Boost. Flashbots will run the relays and builders at first, with plans to decentralize gradually over time, though whitelisting additional builders will likely be slow.


MEV-Boost requires validators to trust relays: the consensus client receives only the block header, which the validator signs before the block body is revealed. The relay's job is to attest to the proposer that the body exists and is valid, so validators don't have to trust builders directly.
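
A toy sketch of that flow, with every name a hypothetical stand-in rather than the real MEV-Boost or client API:

```python
# Hypothetical helpers, for illustration only.
def sign(key, header): ...                 # validator signing
def broadcast(header, body, signature): ...  # gossip the full block

def propose_via_mev_boost(relays, validator_key):
    # 1. Each relay forwards only the best builder bid's HEADER (no body).
    bids = [relay.get_best_bid() for relay in relays]
    best = max(bids, key=lambda bid: bid.value)

    # 2. The proposer blindly signs the winning header. This is the trust
    #    step: we commit before ever seeing the block body.
    signed_header = sign(validator_key, best.header)

    # 3. Only now does the relay reveal the body, having attested that it
    #    exists and is valid, and the full block is broadcast.
    body = best.relay.reveal_body(signed_header)
    broadcast(best.header, body, signed_header)
```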

When in-protocol PBS is ready, it will enshrine what MEV-Boost provides in the meantime: the same separation of powers, easier decentralization of builders, and no need for proposers to trust anyone.


Committee-driven MEV smoothing

PBS also enables another cool idea – committee-driven MEV smoothing.

We've seen that the ability to extract MEV is a centralizing force on the validator set, but so is its distribution. The high block-to-block variability of MEV rewards incentivizes validators to pool together to smooth out their rewards over time (just as we see with mining pools today, though to a lesser degree here).

The default is that the builder's payment goes in full to the block proposer; MEV smoothing would instead split that payment across many validators. A committee of attesters examines the proposed block and certifies that it really is the highest-bidding block. If everything checks out, the block proceeds, and the reward is split between the committee and the proposer.
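
A toy sketch of the scheme (hypothetical names, not a real spec):

```python
def mev_smoothing(builder_bids, committee, proposer):
    """Committee-checked reward splitting, as described above."""
    best = max(builder_bids, key=lambda bid: bid.value)

    # Attesters sign off only if the proposed block carries the top bid;
    # a proposer hiding an out-of-band bribe behind a lesser bid fails here.
    if not all(member.attests_to(best) for member in committee):
        return None

    # The builder's payment is split across the committee and the proposer
    # instead of going to the proposer alone.
    share = best.value / (len(committee) + 1)
    return {participant: share for participant in [*committee, proposer]}
```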

This also solves the problem of out-of-band bribery: a proposer might otherwise be incentivized to submit a suboptimal block and accept an out-of-band bribe directly, hiding the payment from everyone else. The committee's attestation (verifying that the block is indeed the highest bid) keeps proposers in check.

In-protocol PBS is a prerequisite for MEV smoothing – you need the protocol to have visibility into the builder market and the bids being submitted. Several open research questions remain, but this is an exciting proposal, and it is critical to ensuring validator decentralization.

Single-slot finality

Fast finality is great: waiting ~15 minutes is suboptimal for user experience and for cross-chain communication. More importantly, the delay creates an MEV reorg problem.

Post-merge Ethereum will already offer much stronger confirmations than today – thousands of validators attest to each block, whereas miners compete on the same block without casting any votes. This makes reorgs exceedingly unlikely, but still not truly impossible. If the latest block contains some juicy MEV, validators might be tempted to attempt a chain reorg and steal it for themselves.

Single-slot finality removes this threat: reverting a finalized block would require at least one-third of all validators, whose stake (millions of ETH) would be immediately slashed.

We won't dig into the underlying mechanisms here. Just know that single-slot finality sits far out on Ethereum's roadmap, and it remains a very open design space.

In today's consensus protocol (without single-slot finality), Ethereum only needs 1/32 of validators to attest to each slot (i.e., roughly 12,000 of the current 380,000+ validators). Extending that voting to the full validator set within a single slot requires more work on BLS signature aggregation – condensing hundreds of thousands of votes into a single verification.


Vitalik details some interesting solutions in [5].
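
To see why aggregation is the crux, here's a minimal demonstration using the py_ecc library from the Ethereum consensus-spec tests (treat the exact API as illustrative):

```python
from py_ecc.bls import G2ProofOfPossession as bls

privkeys = list(range(1, 101))                 # stand-ins for validator keys
pubkeys = [bls.SkToPk(sk) for sk in privkeys]
message = b"block root being attested to"

# Every validator signs the same message...
signatures = [bls.Sign(sk, message) for sk in privkeys]

# ...and all of those signatures condense into ONE 96-byte aggregate,
aggregate = bls.Aggregate(signatures)

# which verifies in a single check against the full set of public keys.
assert bls.FastAggregateVerify(pubkeys, message, aggregate)
```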

Single secret leader election

Single secret leader election (SSLE) attempts to patch another MEV attack vector that we'll face after the merge.

The list of Beacon Chain validators and the upcoming leader election schedule are both public, making it fairly easy to de-anonymize validators and map out their IP addresses.

More sophisticated validators can use tricks to hide themselves, but small validators are particularly exposed to this information leakage and to DDoS attacks – which can easily be exploited for MEV.

Say you're the proposer of block n and I'm the proposer of block n+1. If I know your IP address, I can cheaply DDoS you so that you time out and fail to produce your block – and now I capture the MEV of both our slots, doubling my rewards. EIP-1559's elastic block sizes make this worse: because the max gas per block is twice the target, I can cram the transactions that should have filled two blocks into my single, double-size block.

In short, home validators might simply give up on validating because they're so vulnerable to this. SSLE stops the attack by ensuring that no one except the proposer knows when their turn is coming. This won't be ready at the merge, but hopefully it can arrive not long after.

Part IV – The Merge: How It Works

I think, and hope, that the merge really is coming soon.


The merge is impossible to ignore, and no one is tuning it out, but I think I can add a few simple points of my own:

The merged client

Today, you run one monolithic client (Go Ethereum, Nethermind, etc.) that handles everything. Specifically, full nodes do two things:

  • Execution: execute every transaction in the block to ensure validity – take the pre-state root, execute everything, and check that the resulting post-state root is correct.
  • Consensus: verify that you're on the heaviest (highest-PoW) chain, i.e., the chain with the most work done – Nakamoto consensus.

The two are inseparable, because full nodes follow not merely the heaviest chain but the heaviest valid chain. That's what makes them full nodes rather than light nodes: even under a 51% attack, full nodes will not accept invalid transactions.


The Beacon Chain currently runs no execution – only consensus – serving as a live test environment for PoS. Ultimately, at the terminal total difficulty, Ethereum's execution blocks will be merged into Beacon Chain blocks, forming a single chain.


Post-merge, a full node will essentially run two separate, interoperable clients:

  • Execution client (aka "Eth 1.0 client"): the current Eth 1.0 clients continue to handle execution. They process blocks, maintain mempools, and manage and sync state, with the PoW parts stripped out.
  • Consensus client (aka "Eth 2.0 client"): the current Beacon Chain clients continue to handle PoS consensus. They track the head of the chain, gossip and attest to blocks, and receive validator rewards.

Clients receive Beacon Chain blocks, the execution client runs the transactions, and if everything checks out, the consensus client follows the chain. All clients will be interoperable, and you'll be able to mix and match the execution and consensus clients of your choice. A new Engine API is being introduced for communication between the two clients.
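
A hedged sketch of what that handoff looks like on the wire – `engine_newPayloadV1` is a real Engine API method, but the payload here is abbreviated and the required JWT authentication is omitted:

```python
import json
import urllib.request

def new_payload(execution_payload: dict, url: str = "http://localhost:8551"):
    """Submit an execution payload from the consensus client to the
    execution client over JSON-RPC (port 8551 is the usual default)."""
    body = json.dumps({
        "jsonrpc": "2.0",
        "method": "engine_newPayloadV1",  # defined in the Engine API spec
        "params": [execution_payload],
        "id": 1,
    }).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # e.g. {"result": {"status": "VALID", ...}}
```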


Post-merge consensus

Today's Nakamoto consensus is simple: miners create new blocks and add them to the heaviest valid chain they've observed.

Post-merge Ethereum moves to Gasper – the combination of Casper FFG (the finality gadget) and LMD GHOST (the fork-choice rule). In short, this is a consensus that favors liveness over safety.

The difference: safety-favoring consensus algorithms (like Tendermint) halt when they can't gather the necessary votes (⅔ of the validator set). Liveness-favoring chains (like PoW with Nakamoto consensus) keep building an optimistic chain regardless, but without enough votes they can never reach finality. Bitcoin and Ethereum today never achieve finality at all – you simply assume that after enough blocks, a reorg won't happen.

Post-merge Ethereum, however, will also achieve finality through periodic checkpoints, provided there are enough votes. Each 32 ETH stake is an independent validator, and there are already over 380,000 Beacon Chain validators. An epoch consists of 32 slots, and the full validator set is split up so that each slot in a given epoch is attested by a subset (that's ~12,000 attestations per slot). The LMD GHOST fork-choice rule then determines the current head of the chain from these attestations. With a new block added every slot (12 seconds), an epoch lasts 6.4 minutes. Finality is usually reached after two epochs (i.e., every 64 slots, though it can occasionally take up to 95).
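
The arithmetic, for reference (note that ~12.8 minutes to finality is the ~15-minute wait mentioned earlier):

```python
SLOT_SECONDS = 12
SLOTS_PER_EPOCH = 32
VALIDATORS = 380_000  # rough current count

attestations_per_slot = VALIDATORS / SLOTS_PER_EPOCH  # ~11,875 (~12,000)
epoch_minutes = SLOTS_PER_EPOCH * SLOT_SECONDS / 60   # 6.4 minutes
finality_minutes = 2 * epoch_minutes                  # ~12.8 minutes

print(f"{attestations_per_slot:,.0f} attestations per slot")
print(f"epoch: {epoch_minutes:.1f} min; typical finality: ~{finality_minutes:.1f} min")
```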

Conclusion

All roads lead to centralized block production, decentralized trustless block validation, and censorship resistance. Ethereum's roadmap takes direct aim at this vision.

Ethereum's goal is to be a unified data availability and settlement layer – enabling scalable computation with maximum decentralization and security.

I hope you now have a clearer picture of how the threads of Ethereum research weave together. There are many fast-moving parts under active development, but they all feed into one overarching picture.

Fundamentally, it all comes back to that singular vision: Ethereum offers a compelling path to massive scalability while holding onto the values we care so much about in this space.

Extended reading

[1] A Primer on Elliptic Curve Cryptography

https://blog.cloudflare.com/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/

[2] Exploring Elliptic Curve Pairings – Vitalik

https://vitalik.ca/general/2017/01/14/exploring_ecp.html

[3] KZG Polynomial Commitments – Dankrad

https://dankradfeist.de/ethereum/2020/06/16/kate-polynomial-commitments.html

[4] How Do Trusted Setups Work? – Vitalik

https://vitalik.ca/general/2022/03/14/trustedsetup.html

[5] Paths Toward Single-Slot Finality – Vitalik

https://notes.ethereum.org/@vbuterin/single_slot_finality
