How to decentralize the role of block builder?

Introduction

A quick recap – a key theme of this report is Vitalik Buterin’s idea in his Endgame article that all paths forward seem to lead to:

  • Centralized block production
  • Decentralized and trustless block verification
  • Censorship resistance is still preserved

Proposer-Builder Separation (PBS) attempts to isolate centralization to the builder role (and away from validators), and Ethereum then adds armor (e.g., crLists) to blunt builders’ censorship powers. Builders are naturally sophisticated actors, so the open question is mainly how centralized they become. Are we talking about 1 builder? Or 10?

A centralized builder is still not ideal, so can we do better? There are two ways to solve this problem:

  • Decentralized marketplace with many builders – ensure the builder marketplace stays competitive without entrenched parties. Many builders compete and earn small profits, and the role becomes heavily commoditized. This requires addressing issues such as exclusive order flow, which could otherwise consolidate power in a single builder.


  • Decentralized builder role itself – making the winning builder itself a decentralized protocol. A group of decentralized participants all contribute to the construction of a given block.

This report is primarily built around Vitalik’s recent SBC MEV workshop presentation. I’ll break it down and provide further analysis.

Can Decentralized Builders Win?

There are actually two potential problems here:

  • Technical feasibility – I will present some possible paths (other possibilities exist and are being actively explored)
  • Competitiveness – do users really want to use it? Or will a centralized builder always outperform a decentralized builder in terms of functionality and efficiency?

Decentralized what?

Centralized builders are straightforward. Below are the aspects of the builder role that a distributed builder would need to decentralize, given the need to aggregate bundles and transactions from many searchers and users:

  • Algorithms – Builders run algorithms to aggregate searchers’ bundles and other transactions, then fill in the rest of the block themselves. The algorithm and its inputs can be decentralized. (Note that the simple case of the distributed builder running only one algorithm is assumed here. In reality, different participants in the distributed builder may contribute different parts of the block while running different algorithms.)


Source: Image based on Vitalik Buterin

  • Resources – Resource requirements will increase significantly, especially with Danksharding. Blocks will carry more data and be more complex to build → more bandwidth and hardware requirements to build them. Instead of one node building and distributing the entire block, work can be split among multiple nodes.
  • Additional builder services – Builders can get creative and offer new services such as transaction pre-confirmation. For distributed builders to be successful, they need to provide services that are competitive with centralized builders.
  • Access to order flow – Sending order flow to a single builder is simple and offers benefits to users. Perhaps they promise not to front-run your transactions, and they can kick some value back to you. Distributing access to the order flow among potentially many participants is tricky.
  • Privacy – Again, it’s easiest to trust builders that your order will be executed privately, so you can send it to them. Distributed builders need a way to provide transaction privacy while also including many decentralized parties in the process.
  • Cross-chain execution – Distributed builders need a way to coordinate with external actors to capture cross-chain MEV (e.g., only execute a swap on chain X if the corresponding swap on chain Y executes atomically).

Challenges

There are several hurdles to overcome if we want to avoid trusted third parties throughout the block production supply chain. Some of the challenges I will address here include:

How to protect searchers from MEV theft?

If the builder sees the bundle submitted to them by the searcher, they can copy the transaction and then replace the searcher’s address with their own. Builders capture MEV without rewarding searchers.

The commit-reveal scheme used in enshrined PBS and MEV-Boost (with a trusted relay in the middle) removes this same MEV-stealing threat from the proposer ↔ builder relationship, but it remains an open problem for the searcher ↔ builder relationship. Searchers today simply trust builders, but trust is not a scalable solution.
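
As an illustration of the commit-reveal pattern in the proposer ↔ builder case (a simplified sketch; the real MEV-Boost flow involves relays, bids, and SSZ-encoded payloads):

```python
# Simplified commit-reveal between a builder and a proposer (illustrative only).
import hashlib

def commitment(block_body: bytes) -> bytes:
    # The builder commits to the block contents without revealing them.
    return hashlib.sha256(block_body).digest()

# 1. Builder sends only a header containing the commitment, not the body.
body = b"ordered transactions and bundles"
header = commitment(body)

# 2. Proposer blindly signs the header; signing a second block at the same
#    height would now be a slashable offense.
proposer_signature = b"sig:" + header  # placeholder for a real BLS signature

# 3. Only after seeing the signed header does the builder reveal the body.
#    Anyone can verify the revealed body matches the header the proposer signed.
assert commitment(body) == header
```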

How to allow the aggregator mechanism to combine searcher input?

Protecting searchers from MEV stealing means that their bundles cannot be sent in clear text. But if the bundles aren’t in plaintext, how do builders aggregate them?

How to ensure that the aggregator mechanism can actually publish this block?

Bundle contents must ultimately be revealed in plaintext. What is the process from ciphertext to plaintext, and how can we achieve it without trust assumptions?

How to protect searchers from aggregator + proposer collusion?

Note that this is not an exhaustive list of the challenges of building distributed builders. There are other unanswered questions (e.g., how do you protect distributed builders from DoS attacks via the multitude of bad bundles they’re forced to simulate?) and unknown unknowns.

Idea 1 – Trusted Hardware


One approach utilizes trusted hardware – a TPM (Trusted Platform Module). The flow looks like this:

Before decrypting a block, the TPM must be sure of two things:

  • Proposer signature – The proposer’s commitment to the block header (without seeing the block body) prevents them from stealing MEV. If the proposer tries to steal the MEV for themselves after the builder’s block is made public (by proposing a replacement block), anyone can surface their original commitment. This proves the proposer signed two blocks at the same block height → they get slashed.
  • Proof of availability of the proposer’s signature – Prevents aggregator + proposer collusion. It is not enough for the proposer’s commitment to exist – it must be made available. If only the aggregator receives the commitment, it could simply hide it forever, letting the proposer propose an alternative block that steals the MEV. The TPM must be confident that the original proposer signature was in fact made public.

There are several ways to prove the availability of the proposer’s signature:

  • Attesters – Attesters can attest to having seen the proposer’s signature, and the TPM can then check the proposer’s signature along with these attestations. This requires changes to the Ethereum protocol.
  • Low-security real-time data availability oracles – something like Chainlink can attest to the fact that the signature exists and rebroadcast it.
  • M-of-N assumption within the aggregator – The aggregator itself can be a distributed M-of-N protocol. There may be threshold voting within the aggregator protocol, with an honest-majority assumption about its participants.
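
As a minimal sketch, here is roughly what the TPM’s gating logic could look like before it releases the block contents (function names, message shapes, and checks are illustrative assumptions, not a real TPM API):

```python
# Hypothetical sketch of the TPM gate described above. The TPM holds the
# decryption key and only releases the block body if both checks pass.
from dataclasses import dataclass

@dataclass
class EncryptedBlock:
    header_commitment: bytes   # what the proposer blind-signed
    ciphertext: bytes          # encrypted block body

def verify_proposer_signature(commitment: bytes, signature: bytes) -> bool:
    # Placeholder: in reality, verify a BLS signature over the header commitment.
    return signature.startswith(b"sig:") and commitment in signature

def verify_availability(signature: bytes, proof: bytes) -> bool:
    # Placeholder: attester signatures, a DA-oracle attestation, or an M-of-N
    # aggregator vote showing the signature was actually published.
    return proof == b"published:" + signature

def tpm_release(block: EncryptedBlock, proposer_sig: bytes, availability_proof: bytes) -> bytes:
    # 1. Proposer is committed to this exact block -> proposing a replacement
    #    block at the same height becomes a slashable offense.
    if not verify_proposer_signature(block.header_commitment, proposer_sig):
        raise ValueError("proposer has not committed to this block")
    # 2. That commitment is provably public -> the aggregator cannot hide it
    #    and let the proposer steal the MEV with an alternative block.
    if not verify_availability(proposer_sig, availability_proof):
        raise ValueError("commitment not provably public")
    return block.ciphertext  # placeholder for TPM-internal decryption

blk = EncryptedBlock(header_commitment=b"hdr", ciphertext=b"<encrypted body>")
sig = b"sig:" + blk.header_commitment
print(tpm_release(blk, sig, b"published:" + sig))  # both checks pass -> body released
```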

Idea 2 – Merge Disjoint Bundles and Sequential Auctions

Merge disjoint bundles

This approach requires an M of N aggregator, but we can get rid of the TPM. The process looks like this:

  • Searchers send bundles encrypted to a threshold key. The bundle contains an access list (a list of accounts and storage slots they access) and a correctness SNARK (note the technical complexity of generating this quickly).
  • Aggregators merge disjoint bundles to maximize the total bid (see the sketch after this list). (We’re only talking about aggregating disjoint bundles here, but it’s possible to improve on this further.)
  • Aggregators must compute state roots
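
To make the merging step concrete, here is a greedy sketch of disjoint-bundle selection by access list (a simplification under assumed data shapes; real aggregation algorithms can be much smarter):

```python
# Greedy illustration of merging disjoint bundles: take bundles in descending bid
# order and skip any bundle whose access list overlaps state already touched.
from dataclasses import dataclass

@dataclass
class Bundle:
    bid: int                 # payment offered (e.g. in wei); values here are made up
    access_list: frozenset   # accounts / storage slots the bundle touches
    # in the construction above, a correctness SNARK would accompany each bundle

def merge_disjoint(bundles: list[Bundle]) -> list[Bundle]:
    selected, touched = [], set()
    for b in sorted(bundles, key=lambda b: b.bid, reverse=True):
        if touched.isdisjoint(b.access_list):  # disjoint -> safe to include blindly
            selected.append(b)
            touched |= b.access_list
    return selected

bundles = [
    Bundle(bid=5, access_list=frozenset({"poolA"})),
    Bundle(bid=4, access_list=frozenset({"poolA", "poolB"})),  # conflicts with the first
    Bundle(bid=3, access_list=frozenset({"poolC"})),
]
print([b.bid for b in merge_disjoint(bundles)])  # [5, 3] -> total bid 8
```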

The last step (computing the state root) is tricky. Computing the state root requires seeing the transactions in the clear, or at least seeing their state updates. However, even seeing state updates may be enough to steal MEV. We have several options for when to compute the state:

  • Have an aggregator node decrypt and compute the state. However, they can collude with the proposer.
  • Compute the state root only after the proposer has committed to back whatever block and state root they receive. This setup would utilize EigenLayer – proposers subject themselves to additional slashing conditions in order to participate. The proposer sends an off-chain message promising that the only block they will produce in their turn is one containing this set of bundles (whatever they turn out to be). Only after the proposer commits are the bundles decrypted and the state root computed. If the proposer breaks this promise, they get slashed (see the sketch below).
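
A rough sketch of what the proposer’s off-chain pre-commitment and its slashing condition might look like (message fields, hashes, and names are hypothetical; EigenLayer’s actual interfaces differ):

```python
# Hypothetical shape of the proposer's off-chain pre-commitment and the
# corresponding slashing evidence (illustrative only).
import hashlib
from dataclasses import dataclass

def bundle_set_hash(bundle_ids: list[str]) -> bytes:
    return hashlib.sha256(",".join(sorted(bundle_ids)).encode()).digest()

@dataclass
class PreCommitment:
    slot: int
    committed_bundles: bytes   # hash of the bundle set the proposer agrees to include
    signature: str             # proposer's signature over (slot, committed_bundles)

@dataclass
class ProposedBlock:
    slot: int
    included_bundles: list[str]
    proposer_signature: str

def is_slashable(commitment: PreCommitment, block: ProposedBlock) -> bool:
    # The proposer promised that the only block they would produce in this slot
    # contains exactly the committed bundle set. Producing anything else in that
    # slot is provable, attributable misbehavior.
    return (
        commitment.slot == block.slot
        and bundle_set_hash(block.included_bundles) != commitment.committed_bundles
    )

commit = PreCommitment(slot=42, committed_bundles=bundle_set_hash(["b1", "b3"]), signature="sig")
honest = ProposedBlock(slot=42, included_bundles=["b3", "b1"], proposer_signature="sig")
cheat = ProposedBlock(slot=42, included_bundles=["b2"], proposer_signature="sig")
print(is_slashable(commit, honest), is_slashable(commit, cheat))  # False True
```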

Note that the aforementioned SNARK requirement can also be avoided in this EigenLayer construct. Proposers can pre-commit to an alternative block (or alternative bundle combination) in case the combination submitted to them turns out to be invalid. A fraud proof can then be used to demonstrate the invalidity of the first combination.

Sequential auctions

The EigenLayer technique can be used directly for disjoint bundle merging, or it can rely on multiple rounds of sequential auctions within each slot. (Note that the SNARK requirement can also be avoided in this sequential construct if desired.)

For example, the following might happen in a block:

Round 1

1. The proposer signs an EigenLayer message pre-agreeing to the transactions (including bundle 1) that maximize their bid in this round, to start the block

2. The builder publishes this part of the block

3. The proposer publishes the state diff

Round 2

4. The proposer signs an EigenLayer message pre-agreeing to additional transactions (including bundle 2) that maximize their bid in this round, to continue the block

5. The builder publishes this part of the block

6. The proposer publishes the state diff

Round 3…

One downside is that this merging may not be optimal. For example, the proposer may have pre-agreed to bundle 1, and then received a more lucrative bundle 2 that conflicts with bundle 1. They would have to reject bundle 2.

Centralized builders with the same order flow can see all transactions and can include bundle 2 when they build a block at the end of the slot (since they never pre-agreed to bundle 1).
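
A tiny numeric sketch of this downside (the bid values are made up for illustration):

```python
# Two conflicting bundles; bundle 2 arrives later in the slot with a higher bid.
bundle_1_bid, bundle_2_bid = 1, 5

# Sequential distributed builder: already pre-committed to bundle 1 in an early
# round, so the conflicting bundle 2 must be rejected.
sequential_value = bundle_1_bid                        # 1

# Centralized builder building once at the end of the slot: sees both and keeps
# only the better of the two conflicting bundles.
end_of_slot_value = max(bundle_1_bid, bundle_2_bid)    # 5
print(sequential_value, end_of_slot_value)
```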

Another potential downside – sequential auctions can make non-atomic MEV very difficult, since searchers won’t be able to cancel or update their bids (once committed) if world conditions change. If you need to submit a transaction more than 10 seconds before it is included, you can’t take on as much risk as you could if you retained the ability to update your bid.

However, the example assumes identical order flow. In fact, because of the guarantees it provides, such a distributed builder may outperform a centralized builder at attracting order flow. Better guarantees → more order flow → the most profitable blocks (even with the other drawbacks). Then, since the distributed builder always provides the highest-value blocks, it would make economic sense for proposers to opt into this structure (accepting slashing if they take blocks from other builders).

To be successful, the value provided by the distributed builder may need to outweigh the disadvantages it brings (including the challenges of less efficient merging and non-atomic MEV).

Block building post-Danksharding

Danksharding enables validators to have lower node requirements. A single node is only responsible for downloading a portion of the block.

However, the initially proposed design would meaningfully increase the hardware and bandwidth requirements for building Ethereum blocks (though validators can always reconstruct in a distributed fashion). The question then is whether we can also do the initial build in a distributed fashion. This would remove the need for a single high-resource entity to build the full block, compute all KZG commitments, connect to many subnets to publish it, etc.

(Note: whether this architecture will use subnets or something like DHT is an open research question, but I’ll assume subnets here).

It’s actually quite possible to build in a distributed fashion. Distributed erasure coding isn’t even that hard.

First, the party that includes each data transaction is responsible for encoding it and propagating the blob to the subnets and the data availability network.

When aggregators choose which data transactions to include, they can use real-time DA oracles. Aggregators cannot just do Data Availability Sampling (DAS) themselves, as this is not secure when only one party is doing DAS. So some distributed oracles need to download the whole thing.

The network can then fill in the columns from here. Remember that the data is extended in this 2D scheme. For example, each blob is 512 chunks, erasure coded to 1024 chunks. The extension also runs vertically: say you have 32 blobs (as in the image), extended vertically to 64. Polynomial commitments run horizontally along each row and vertically along each column.


Source: Vitalik Buterin
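
To make the layout concrete, here is a toy sketch of the 2D extension over a tiny prime field (illustrative only: the real parameters are 512-chunk blobs over the BLS12-381 scalar field, and commitments/proofs are omitted):

```python
# Toy 2D erasure-coding layout: each row of k chunks is read as evaluations of a
# degree-(k-1) polynomial at points 0..k-1 and extended by evaluating at k..2k-1;
# the same is then done down each column. (Illustrative field and sizes only.)
P = 97  # small prime modulus; the real scheme uses the BLS12-381 scalar field

def lagrange_eval(xs, ys, x):
    """Evaluate the unique degree-(len(xs)-1) polynomial through (xs, ys) at x, mod P."""
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if j != i:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def extend(vec):
    """Double a vector: keep its k chunks and append k erasure-coded chunks."""
    k = len(vec)
    xs = list(range(k))
    return list(vec) + [lagrange_eval(xs, vec, x) for x in range(k, 2 * k)]

# 2 "blobs" of 4 chunks each stand in for 32 blobs of 512 chunks.
data = [[3, 1, 4, 1], [5, 9, 2, 6]]
rows = [extend(r) for r in data]                         # horizontal: 4 -> 8 chunks per row
cols = [extend([r[c] for r in rows]) for c in range(8)]  # vertical: 2 -> 4 rows per column
extended = [[cols[c][r] for c in range(8)] for r in range(4)]
for row in extended:
    print(row)  # any half of a row (or column) suffices to reconstruct it
```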

KZG commitments

Due to the linearity of KZG commitments, the network can fill in these columns, and this is what Ethereum’s sharding design leverages.

KZG commitments (com) are linear. For example, com(A) + com(B) = com(A+B).

You also have linearity in the proof. For example, if:

  • Qᴀ proves that A = some value at some coordinate z, and
  • Qʙ proves that B = some value at the same coordinate z, then
  • You can do a linear combination of Qᴀ and Qʙ, which in itself proves that the same linear combination of A and B has the correct value at the same coordinate z

More formally:

  • Let Qᴀ prove A(z) and Qʙ prove B(z)
  • Then cQᴀ + dQʙ proves (cA + dB)(z)
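
To see why proof linearity holds, recall that a KZG opening proof is a commitment to a quotient polynomial. A short derivation (standard KZG algebra, included here for clarity):

```latex
% Opening proofs are commitments to quotient polynomials:
%   Q_A(X) = (A(X) - A(z)) / (X - z),    Q_B(X) = (B(X) - B(z)) / (X - z).
% Taking the same linear combination of the quotients:
c\,Q_A(X) + d\,Q_B(X)
  = \frac{c\bigl(A(X) - A(z)\bigr) + d\bigl(B(X) - B(z)\bigr)}{X - z}
  = \frac{(cA + dB)(X) - (cA + dB)(z)}{X - z}
% This is exactly the quotient -- and hence the opening proof -- for cA + dB at
% the same point z. Together with commitment linearity,
%   com(cA + dB) = c * com(A) + d * com(B),
% the network can derive commitments and proofs for the extension rows/columns
% as linear combinations of the originals.
```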

This linear property allows the network to fill in everything. For example, if you have proofs for rows 0–31 in column 0, you can use them to generate proofs for rows 32–63 in column 0.

Only KZG has both this commitment linearity and proof linearity (IPAs and Merkle trees, including SNARKed Merkle trees, cannot satisfy both).

For a more in-depth look at Ethereum’s 2D KZG scheme, you can check out my Ethereum report or Dankrad’s recent KZG talk. This research article by Vitalik also addresses considerations for using KZG versus IPA for DAS.

The TL;DR here is that KZG has some really nice properties that allow for distributed block building and reconstruction. No single party needs to process all the data, extend all the data, compute all the KZG commitments, and propagate them. This can be done individually for each row and each column. If this is done, we have no remaining supernode requirement:


Source: Dankrad Feist and Vitalik Buterin

Non-KZG Alternatives

If we can’t achieve all the KZG magic, this is the next best thing.

The first half of each row commitment is just the blob itself, so that’s no problem. The builder then has to provide the rest, along with the list of commitments.

These commitments must then be consistent: the iᵗʰ row commitment evaluated at the jᵗʰ coordinate must equal the jᵗʰ column commitment evaluated at the iᵗʰ coordinate.

More formally:

  • The builder must provide row commitments R₁…Rₕ and column commitments C₁…Cₗ, where Rᵢ(xⱼ) = Cⱼ(xᵢ)
  • along with proofs of commitment equivalence
  • This can be done in a distributed fashion as discussed, but note that it is harder:
  • The KZG method described earlier – can be done in one round. The builder just checks all the blobs and then publishes. The network fills in the rows in a completely separate process that doesn’t involve the builder.
  • The distributed approach here – requires at least two rounds of agreement, and the builder needs to be involved.

Additional builder services – pre-confirmations

Ethereum block times are slow and users like fast block times. Ethereum makes this sacrifice primarily in the hope of supporting a large decentralized validator set — a trade-off that Vitalik has written about here. But can we have the best of both worlds?

Ethereum rollup users already know and love these pre-confirmations. Builder innovation may be able to provide similar services at the base layer.

For example, a builder can agree to:

  • If a user sends a transaction with a priority fee ≥ 5, the builder immediately returns an enforceable signed message agreeing to include it.
  • If the user sends a transaction with a priority fee ≥ 8, the builder even provides a post-state root. The higher priority fee thus locks the transaction into a particular position, letting the user know immediately what the outcome of that transaction will be.

If builders don’t deliver on their promises, they could be slashed.
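
As a rough illustration, such a pre-confirmation could be a signed, slashable message along these lines (field names, tiers, and helpers are hypothetical, not an actual specification):

```python
# Hypothetical pre-confirmation message a builder might sign and return instantly.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PreConfirmation:
    tx_hash: str                    # transaction being promised inclusion
    target_slot: int                # slot the builder promises to include it in
    priority_fee: int               # fee tier the user paid (e.g. >= 5 or >= 8)
    post_state_root: Optional[str]  # only provided at the higher fee tier
    builder_signature: str          # the signature that makes the promise slashable

def compute_post_state_root(tx_hash: str) -> str:
    return "state_root_placeholder"  # stand-in for actually executing the transaction

def sign(tx_hash: str, slot: int) -> str:
    return f"builder_sig({tx_hash},{slot})"  # stand-in for a real signature

def make_preconf(tx_hash: str, priority_fee: int, slot: int) -> Optional[PreConfirmation]:
    if priority_fee < 5:
        return None  # below the cheapest pre-confirmation tier in this example
    state_root = compute_post_state_root(tx_hash) if priority_fee >= 8 else None
    return PreConfirmation(tx_hash, slot, priority_fee, state_root,
                           builder_signature=sign(tx_hash, slot))

print(make_preconf("0xabc", priority_fee=9, slot=123))
```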

In a future with a parallelized EVM, pre-confirmations can get even more advanced. For example, the builder can reorder some transactions in a block, even after giving a pre-confirmation, as long as the state the user cares about is unaffected.

Can distributed builders provide pre-confirmation?

Yes. Distributed builders can run an internal consensus algorithm, such as Tendermint with fast block times. Builders could then be slashed for:

  • Double-signing (equivocation) within the Tendermint mechanism
  • Signing blocks that are incompatible with what the Tendermint mechanism agreed to

Note that for the best security here, some kind of account abstraction is required for the final builder signature. Threshold BLS is not attributable – meaning that if the builder simply threshold-BLS-signs the block, we won’t know whom to slash if something goes wrong. An abstracted signature scheme solves this problem.

For any builder pre-confirmation service (distributed or centralized), be aware that pre-confirmations are only as good as the builder’s ability to actually build winning blocks. More dominant builders with higher inclusion rates provide better pre-confirmations.

However, you can actually get stronger pre-confirmations with distributed builders, such as in the EigenLayer construct. If the current proposer has opted into the EigenLayer construct and you get a pre-confirmation, your transaction must be included. You are no longer betting on the odds that a centralized builder who gave you a pre-confirmation actually wins the block in the end.

Pros and cons of distributed builders

Assuming these techniques succeed, a distributed builder could have thousands of participants. Most Ethereum validators could even opt into the EigenLayer construct, enabling sub-second pre-confirmations. Such a distributed builder has some nice competitive advantages over centralized builders:

  • Economic security – huge security deposits to support pre-confirmation services
  • Trust – Searchers can trust this distributed builder, not a single centralized entity
  • Censorship resistance – it is harder to compromise or control a well-secured distributed system than a single centralized operator who could decide to act maliciously

Centralized builders may have other advantages, some inherent, some based on the construction of distributed builders:

  • Adapt to new features faster – There is value in the flexibility to adapt to market needs, which may be lacking in the distributed builder construct described above. Ideally, you can aggregate unique features from multiple parties into a single block.
  • Lower latency — this is always relevant, but especially for cross-chain MEV, where searchers are more likely to want to update their bids as world state changes across domains. (As mentioned, searchers also want the flexibility to modify bids throughout the block-building process.)

Concluding thoughts

Ethereum is largely designed with worst-case assumptions – even if only one builder exists, how can we best mitigate their power (e.g., censorship capabilities)?

However, we can (and should) simultaneously strive to avoid this worst-case assumption. This means designing a system that doesn’t always lead to entrenched centralized builders. The two ideas described here offer some more interesting possibilities. However, they are far from an exhaustive list – other ideas are actively being explored and should continue to be explored.

Furthermore, this should not be read as “the problem of proprietary order flow magically disappeared, so we no longer need to design around it.” dApps must continue to innovate in mechanism design around MEV, including reducing the need for proprietary order flow. MEV is not going anywhere.

Special thanks to Vitalik Buterin, Sreeram Kannan, Robert Miller, and Stephane Gosselin for their review and comments. Without their work, this report would not be possible.
