An in-depth look at how the modular blockchain came to stand for scalability

The idea of a “modular blockchain” is becoming a category-defining narrative around scalability and blockchain infrastructure.

The argument is simple: by breaking a layer 1 blockchain down into its core components, we can improve individual layers by a factor of 100, resulting in a more scalable, composable, and decentralized system. Before we discuss modular blockchains in detail, we need to understand existing blockchain architectures and the limitations they face in current implementations.


Source: Ethereum Foundation

What is a blockchain?

Let’s briefly review the basics of a blockchain. A block in a blockchain consists of two parts: the block header and the transaction data associated with it. Blocks are verified by “full nodes”, which parse and execute the entire block’s data to ensure transactions are valid, for example that users are not sending more ETH than their account balances hold.
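As a rough mental model only (these are illustrative data structures, not any real client's), a block can be pictured as a header plus a list of transactions, and a full node as a process that replays every transaction against its local state:

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    header: dict                                        # parent hash, state root, etc. (simplified)
    transactions: list = field(default_factory=list)    # the full transaction data

def full_node_validate(block: Block, balances: dict) -> bool:
    """Replay every transaction; reject the block if any sender overspends."""
    for tx in block.transactions:                 # tx: {"from": ..., "to": ..., "amount": ...}
        if balances.get(tx["from"], 0) < tx["amount"]:
            return False                          # invalid: sending more than the balance
        balances[tx["from"]] -= tx["amount"]
        balances[tx["to"]] = balances.get(tx["to"], 0) + tx["amount"]
    return True

# Example: the second transaction overspends, so a full node rejects the block.
block = Block(header={"parent": "0xabc...", "state_root": "0xdef..."},
              transactions=[{"from": "alice", "to": "bob", "amount": 5},
                            {"from": "bob", "to": "carol", "amount": 100}])
print(full_node_validate(block, {"alice": 10, "bob": 3}))  # False
```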

Let’s also briefly outline the functional “layers” that make up a blockchain.

  • Execution

Transactions and state changes are initially handled here. Users also typically interact with the blockchain through this layer by signing transactions, deploying smart contracts, and transferring assets.

  • Settlement

The settlement layer is where Rollup execution is verified and disputes are resolved. This layer does not exist in a monolithic blockchain and is an optional part of the modular stack. By analogy with the US court system, think of the settlement layer as the US Supreme Court, providing final arbitration for disputes.

  • Consensus

The consensus layer of a blockchain provides ordering and finality through a network of full nodes that download and execute the contents of blocks and agree on the validity of state transitions.

  • Data Availability

The data needed to verify that state transitions are valid must be published and stored on this layer. It must also be easy to verify that this data was actually published, in case a malicious block producer withholds transaction data. The data availability layer is the main bottleneck in the blockchain scalability trilemma, and we’ll explore why in a moment.

Ethereum, for example, is monolithic, meaning its base layer handles all of the components above.


Source: ResearchGate

Blockchains currently face a problem known as the “blockchain scalability trilemma”. Similar to Brewer’s theorem for distributed systems, blockchain architectures often compromise on one of decentralization, security, or scalability in order to provide strong guarantees for the other two.

Security refers to the ability of a network to keep running while under attack. Security is a core principle of blockchains and should never be compromised, so the real trade-off is usually between scalability and decentralization.

Let’s define decentralization in the context of blockchain systems: for a blockchain to be decentralized, hardware requirements must not be a barrier to participation, and the resource cost of verifying the network should stay low.

Scalability refers to a blockchain’s throughput divided by its cost of verification: the ability to handle more and more transactions while keeping the resources required to verify them low. There are two main ways to increase throughput. First, you can increase the block size, and with it the number of transactions that fit into a block. Unfortunately, larger blocks lead to network centralization, because the hardware required to run a full node grows with the demand for more computation. Monolithic blockchains suffer from this problem in particular: an increase in throughput comes with an increase in the cost of validating the chain, and therefore less decentralization. Second, you can move execution off-chain, offloading the computational burden from nodes on the main network while relying on proofs that let the off-chain computation be verified on-chain.
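To make the trade-off concrete, here is a toy calculation with made-up numbers, showing how raising the block size raises throughput while, on a monolithic chain, raising every full node’s verification cost by the same factor:

```python
def throughput_tps(block_size_bytes: int, avg_tx_bytes: int, block_time_s: int) -> float:
    """Transactions per second implied by a given block size and block time."""
    return (block_size_bytes / avg_tx_bytes) / block_time_s

# Illustrative numbers only: ~250-byte transactions and a 12-second block time.
for block_size_mb in (1, 10, 100):
    tps = throughput_tps(block_size_mb * 1_000_000, avg_tx_bytes=250, block_time_s=12)
    # On a monolithic chain every full node must download and re-execute the whole
    # block, so per-node verification cost grows by the same factor as throughput.
    print(f"{block_size_mb:>3} MB blocks -> ~{tps:,.0f} TPS, ~{block_size_mb}x per-node verification cost")
```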

With a modular architecture, blockchains can begin to solve the scalability trilemma through the principle of separation of concerns. By modularizing the execution and data availability layers, blockchains can scale throughput while preserving the trustless, decentralized nature of the network, because the link between computation cost and verification cost is broken. Let’s explore how this is accomplished by introducing fault proofs and rollups, and how they relate to the data availability problem.

Fault Proofs and Optimistic Rollups

One possible compromise, which Vitalik points out in his article “Endgame”, is that for scalability purposes the future of block production becomes centralized among pools and professional producers, while block verification (keeping those producers honest) must crucially remain decentralized. This can be achieved by splitting blockchain nodes into full nodes and light clients. Two related problems arise in this model: block validation (verifying that computations are correct) and block availability (verifying that all data has been published). Let’s first explore how it applies to block validation.

Full nodes download, execute, and verify every transaction in a block, while light clients only download block headers and assume the transactions are valid. Light clients then rely on fault proofs generated by full nodes for transaction verification. This allows light clients to identify invalid transactions on their own, giving them nearly the same security guarantees as full nodes. By default, light clients assume state transitions are valid; receiving a fault proof lets them challenge that assumption. When a state is challenged by a fault proof, full nodes reach consensus by re-executing the relevant transaction, and the dishonest node’s stake is slashed.
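A minimal sketch of this model, using the same simplified account-balance view of transactions (the names and fields here are hypothetical, not any real client’s API): the light client accepts headers optimistically and only re-executes the single disputed transaction when a full node hands it a fault proof.

```python
from dataclasses import dataclass

@dataclass
class FaultProof:
    tx: dict              # the disputed transaction, e.g. {"from": "A", "to": "B", "amount": 50}
    pre_balances: dict    # relevant balances before the transaction (with Merkle
                          # witnesses in a real system, omitted here)
    claimed_valid: bool   # what the block producer asserted about this transaction

def verify_fault_proof(proof: FaultProof) -> bool:
    """Light client re-executes only the disputed transaction.

    Returns True when the producer's claim does not match the re-executed
    result, i.e. the fault proof is correct and the block should be rejected.
    """
    actually_valid = proof.pre_balances.get(proof.tx["from"], 0) >= proof.tx["amount"]
    return actually_valid != proof.claimed_valid

# Example: the producer claimed a transaction spending 50 from an account holding
# only 10 was valid; an honest full node submits a fault proof and the light
# client rejects the block without ever downloading it in full.
proof = FaultProof(tx={"from": "A", "to": "B", "amount": 50},
                   pre_balances={"A": 10},
                   claimed_valid=True)
print(verify_fault_proof(proof))  # True
```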


‍Source: https://ethereum.org/en/developers/docs/scaling/optimistic-rollups/

The light client and fault proof model is safe under an honest-minority assumption: there must exist at least one honest full node with the complete state of the chain that will submit fault proofs. This model is particularly relevant to sharded blockchains (such as the post-merge Ethereum architecture), since validators can choose to run a full node on one shard and light clients on the remaining shards, while maintaining a 1-of-N security guarantee across all shards.

Optimistic rollups leverage this model to safely abstract the blockchain’s execution layer into sequencers: powerful machines that bundle and execute many transactions and periodically publish compressed data back to the parent chain. Moving this computation off-chain (relative to the parent chain) can increase transaction throughput by a factor of 10-100. How can we trust that these off-chain sequencers remain honest? We introduce bonds: tokens that operators must stake in order to run a sequencer. Since the sequencer publishes transaction data back to the parent chain, validators (nodes that watch for a state mismatch between the parent chain and the rollup) can publish a fault proof and have the malicious sequencer’s stake slashed. Because optimistic rollups use fault proofs, they are safe as long as there is a single honest validator in the network. This use of fault proofs is where the “optimistic” in optimistic rollups comes from: state transitions are assumed valid until proven otherwise during a dispute, which is handled at the settlement layer.
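The bond-and-slash mechanics can be sketched roughly as follows (hypothetical contract and parameter names, not any particular rollup’s implementation): the sequencer locks a bond with every batch it publishes, and during a dispute window any watcher can submit a fault proof that slashes the bond and discards the invalid batch.

```python
DISPUTE_WINDOW_BLOCKS = 50_400  # roughly 7 days of Ethereum blocks, a common choice

class RollupContract:
    """Toy model of the parent-chain side of an optimistic rollup."""

    def __init__(self, bond_amount: int):
        self.bond_amount = bond_amount
        self.batches = []

    def submit_batch(self, state_root: str, sequencer: str, block_number: int):
        # The sequencer locks a bond alongside every batch it publishes.
        self.batches.append({"root": state_root, "sequencer": sequencer,
                             "bond": self.bond_amount, "submitted_at": block_number})

    def challenge(self, index: int, fault_proof_valid: bool, block_number: int) -> bool:
        batch = self.batches[index]
        in_window = block_number - batch["submitted_at"] <= DISPUTE_WINDOW_BLOCKS
        if in_window and fault_proof_valid:
            batch["bond"] = 0            # slash the dishonest sequencer's stake
            self.batches.pop(index)      # discard the invalid batch
            return True
        return False

# Example: a watcher challenges an invalid batch within the window and wins.
rollup = RollupContract(bond_amount=1_000)
rollup.submit_batch("0xbad...", "sequencer-1", block_number=100)
print(rollup.challenge(0, fault_proof_valid=True, block_number=150))  # True
```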

‍This is how we scale throughput while minimizing trust: allowing computation to become centralized while keeping the verification of computation decentralized.

The Data Availability Problem

While fault proofs are a useful tool for decentralizing block validation, full nodes depend on blocks being available in order to generate them. A malicious block producer can choose to publish only block headers and withhold some or all of the corresponding data, preventing full nodes from validating transactions, identifying invalid ones, and producing fault proofs. This kind of attack is trivial for full nodes to handle, because they can simply download the entire block and fork away from the invalid chain when they find inconsistencies or withheld data. Light clients, however, will keep following the block headers of the potentially invalid chain, forking away from the full nodes. (Remember that light clients do not download entire blocks and assume state transitions are valid by default.)

This is the essence of the data availability problem as it pertains to fault proofs: light clients must ensure that all of a block’s transaction data has been published before following it, so that full nodes and light clients automatically agree on the same block headers of the canonical chain. (If you’re wondering why we can’t use a similar fault-proof-style scheme for data availability itself, you can read more about the data withholding dilemma here. In short, the game theory of a fault-proof-based scheme for data availability makes it exploitable and leads to a lose-lose situation for honest actors.)

Solutions

It looks like we’re back to square one. How does a light client ensure that all of a block’s transaction data has been published without downloading the entire block, which would centralize hardware requirements and defeat the purpose of a light client in the first place?

One way to achieve this is through a mathematical primitive called erasure coding. By adding redundant data to a block, erasure codes make it possible to reconstruct the entire block even if a certain percentage of its data is missing. This technique underpins data availability sampling, which lets light clients probabilistically determine that an entire block has been published by randomly sampling small pieces of it. A light client can then ensure that all of a block’s transaction data was published before accepting it as valid and following the corresponding header. There are caveats, however: data availability sampling has high latency, and, similar to the honest-minority assumption, its safety guarantee relies on there being enough light clients performing sampling to probabilistically determine the availability of a block.
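As a simplified back-of-the-envelope model of why sampling works (ignoring the details of 2D erasure coding, and assuming an attacker must withhold at least half of the extended data to prevent reconstruction): each random sample lands on a withheld share with probability of at least one half, so the chance that a light client misses the attack shrinks exponentially with the number of samples.

```python
def prob_attack_undetected(num_samples: int, withheld_fraction: float = 0.5) -> float:
    """Probability that every one of `num_samples` uniformly random samples lands on
    an available share even though `withheld_fraction` of the erasure-coded block
    is being withheld (independent samples, simplified model)."""
    return (1.0 - withheld_fraction) ** num_samples

for k in (10, 20, 30):
    print(f"{k:>2} samples -> attacker escapes detection with p ~= {prob_attack_undetected(k):.1e}")
# 10 samples -> ~9.8e-04, 20 samples -> ~9.5e-07, 30 samples -> ~9.3e-10
```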


Simplification of data availability sampling.

Validity Proofs and Zero-Knowledge Rollups

Another approach to decentralized block verification is to remove the dispute process from state verification altogether. Validity proofs take a more pessimistic view than fault proofs: by eliminating the dispute process, they guarantee the atomicity of every state transition, at the cost of requiring a proof for each one. This is achieved by leveraging novel zero-knowledge techniques such as SNARKs and STARKs. Compared with fault proofs, validity proofs are more computationally intensive in exchange for stronger state guarantees, which affects scalability.

A zero-knowledge rollup is a rollup that uses validity proofs instead of fault proofs for state verification. ZK rollups follow a computation and verification model similar to optimistic rollups (albeit built on validity proofs rather than fault proofs) through a sequencer/prover architecture, where the sequencer handles computation and the prover generates the corresponding proofs. For example, Starknet launched with a centralized sequencer for bootstrapping purposes, and gradually opening up and decentralizing the sequencer and prover is on its roadmap. Computation itself is effectively unbounded on a ZK rollup thanks to off-chain execution on the sequencer; however, since proofs of those computations must be verified on-chain, proof generation remains a bottleneck for finality.
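The contrast with the optimistic model can be boiled down to a small sketch (the proof machinery itself is abstracted behind a boolean, since a real SNARK/STARK verifier is far beyond a few lines): an optimistic batch finalizes only after an undisputed waiting period, while a ZK batch finalizes as soon as its validity proof is checked on-chain, with no dispute window at all.

```python
def finalize_optimistic(dispute_window_elapsed: bool, fault_proof_submitted: bool) -> bool:
    """Optimistic rollup: accept the batch immediately, finalize it only once the
    dispute window has passed without a successful fault proof."""
    return dispute_window_elapsed and not fault_proof_submitted

def finalize_zk(validity_proof_verified: bool) -> bool:
    """ZK rollup: finalize as soon as the on-chain verifier accepts the validity
    proof; `validity_proof_verified` stands in for a real SNARK/STARK check,
    which never re-executes the batch."""
    return validity_proof_verified

# The trade-off: optimistic finality waits on the dispute window (often ~7 days),
# while ZK finality waits on proof generation and on-chain verification instead.
print(finalize_optimistic(dispute_window_elapsed=False, fault_proof_submitted=False))  # False: still waiting
print(finalize_zk(validity_proof_verified=True))                                       # True: final now
```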

Note that the technique of using light clients for state verification applies only to fault proof architectures. Since state transitions are guaranteed valid by validity proofs, nodes no longer need transaction data to validate blocks. However, a data availability problem remains for validity proofs, and it is slightly more subtle: even though the state is guaranteed to be correct, transaction data is still necessary so that nodes can compute the updated state and make it available to end users. Rollups that use validity proofs are therefore still subject to data availability issues.

Where are we now

Recall Vitalik’s article: all roads lead to centralized block production and decentralized block verification. While we can exponentially increase rollup throughput through advances in block producer hardware, the real scalability bottleneck is block availability rather than block validation. This leads to an important insight: no matter how powerful we make the execution layer or which proof implementation we use, our throughput is ultimately limited by data availability.

One way we currently ensure data availability is by publishing rollup data on-chain. Rollup implementations use the Ethereum mainnet as a data availability layer, regularly publishing all rollup blocks to Ethereum. The main problem with this stopgap solution is that Ethereum’s current architecture relies on full nodes that guarantee data availability by downloading entire blocks, rather than on light clients that perform data availability sampling. As we increase block sizes to increase throughput, this inevitably raises the hardware requirements for the full nodes verifying data availability, centralizing the network.

In the future, Ethereum plans to use data availability sampling to move toward a sharded architecture in which both full nodes and light clients secure the network. (Note: Ethereum’s sharding design technically uses KZG commitments rather than fault proofs, but the data availability problem is relevant either way.) However, this only solves part of the problem. Another fundamental issue with the rollup architecture is that rollup blocks are dumped onto the Ethereum mainnet as calldata. This is a problem because calldata is expensive at scale: at 16 gas per byte, it becomes a bottleneck for L2 users regardless of the size of the rollup’s transaction batches.
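To get a rough sense of that fixed cost (illustrative numbers only): at 16 gas per calldata byte, a rollup transaction that compresses to about 12 bytes still pays roughly 192 gas of L1 calldata, no matter how many transactions share its batch.

```python
CALLDATA_GAS_PER_BYTE = 16  # per non-zero calldata byte on Ethereum (EIP-2028)

def l1_data_cost_usd(bytes_per_tx: int, gas_price_gwei: float, eth_price_usd: float) -> float:
    """Dollar cost of the calldata that one rollup transaction occupies on L1."""
    gas = bytes_per_tx * CALLDATA_GAS_PER_BYTE
    return gas * gas_price_gwei * 1e-9 * eth_price_usd  # gwei -> ETH -> USD

# Hypothetical market conditions, purely for illustration.
cost = l1_data_cost_usd(bytes_per_tx=12, gas_price_gwei=50, eth_price_usd=2_000)
print(f"~${cost:.4f} of calldata per rollup transaction, regardless of batch size")
```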


‍“This means that even with end users taking advantage of Rollup, publishing calldata to Ethereum would expose them to the staggering gas costs they face today due to this fixed ratio (see graph below).”


“As usage grows, so does the amount of calldata posted to Ethereum. This brings us back to square one – Ethereum is really expensive, and even if end users use Rollup, they will feel that cost.”

Validiums are another way to improve scalability and throughput while maintaining data availability guarantees: transaction data is sent off-chain (relative to the parent chain) to a data availability committee, proof-of-stake guardians, or a dedicated data availability layer. By moving data availability from Ethereum calldata to an off-chain solution, validiums bypass the fixed per-byte gas cost that grows with rollup usage.

The rollup architecture also yields a unique insight: the underlying blockchain does not need to provide execution or computation at all, only the ability to order blocks and guarantee the data availability of those blocks. This is the main design philosophy behind Celestia, the first modular blockchain network. Celestia, formerly known as LazyLedger, started out as a “lazy blockchain” that leaves execution and validation to other modular layers and focuses on providing transaction ordering and data availability guarantees through data availability sampling. Centralized block production with decentralized block verification is the core premise behind Celestia’s design: even mobile phones can participate as light clients and help secure the network. Because of how data availability sampling works, rollups that plug into Celestia as a data availability layer can support larger block sizes (and therefore more throughput) as the number of Celestia light nodes grows, while maintaining the same probabilistic guarantees.

Other solutions available today include StarkEx, zkPorter, and Polygon Avail, with StarkEx being the only validium currently used in production. Regardless of the flavor, most validiums carry an implicit trust assumption about the source of data availability, whether it is governed by a trusted committee, guardians, or a general-purpose data availability layer. That trust assumption also means a malicious operator could prevent users from withdrawing their funds.

Work in Progress


‍Celestium Architecture.

Modular blockchain architecture is a hotly debated topic in today’s crypto space. Celestium’s vision of a modular blockchain architecture has faced significant pushback over security concerns and the additional trust assumptions associated with a decoupled settlement and data availability layer.

At the same time, significant progress is being made across the entire blockchain stack: Fuel Labs is developing a parallelized virtual machine for the execution layer, and the Optimism team is working on sharding, incentivized validation, and decentralized sequencers. Hybrid optimistic/zero-knowledge solutions are also under development.

Ethereum’s post-merge roadmap includes plans for a unified settlement and data availability layer. In particular, Danksharding is a promising development on the Ethereum roadmap that aims to transform Ethereum L1 data sharding and block space into a “data availability engine”, allowing L2 rollups to achieve low-cost, high-throughput transactions.

Celestia’s self-contained architecture also allows a wide range of execution layer implementations to use it as a data availability layer, laying the groundwork for alternative, non-EVM virtual machines such as WASM, Starknet, and FuelVM. This shared data availability across execution solutions lets developers create trust-minimized bridges between Celestia clusters, unlocking cross-chain and cross-ecosystem composability and interoperability, similar to what exists between Ethereum and its rollups.

Volitions, pioneered by StarkWare, introduce an innovative answer to the dilemma of on-chain versus off-chain data availability: users and developers can choose to send transaction data off-chain, validium-style, or to keep it on-chain, each option with its own strengths and weaknesses.


Split single application.

Additionally, the growing use and adoption of layer 2 solutions unlocks layer 3: fractal scaling. Fractal scaling allows application-specific rollups to be deployed on top of layer 2, so developers can deploy their applications with complete control over their infrastructure, from data availability to privacy. Deploying at layer 3 also unlocks interoperability among all the layer 3 applications sharing a layer 2, rather than settling on an expensive base chain or spinning up application-specific sovereign chains as in Cosmos. Rollups on top of rollups.

Similar to how network infrastructure evolved from local servers to cloud servers, decentralized networks are evolving from monolithic blockchains and isolated consensus layers to modular, application-specific chains with shared consensus layers. Whichever solutions and implementations end up taking hold, one thing is clear: in a modular future, the user is the ultimate winner.

Posted by: CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/an-in-depth-look-at-how-the-modular-blockchain-came-to-stand-for-scalability/