Note: This article is originally from Delphi Digital; the author is Can Gurel.
Highlights of the report:
- A monolithic chain is limited by the capacity a single node can handle; modular ecosystems overcome this limitation and offer a more sustainable form of scaling;
- A key motivation behind modularity is effective resource pricing. By splitting applications across different resource pools (i.e., separate fee markets), a modular chain can offer more predictable fees;
- However, modularity introduces a new problem called data availability (DA), which can be addressed in several ways. Rollups, for example, batch transactions off-chain and submit the data on-chain. By making the data "available" on-chain, they overcome this problem, inherit the security of the base layer, and establish trustless L1 <> L2 communication;
- The latest form of modular chain, the dedicated data availability (DA) layer, is intended to serve as a shared security layer for rollups. Given the scalability advantages of DA chains, they may become the endgame of blockchain scaling, and Celestia is the pioneering project in this area;
- ZK-rollups can offer more scalability than Optimistic rollups, and we have already observed this in practice. For example, dYdX's throughput is roughly 10x that of Optimism while consuming only about 1/5 of the L1 space.
On-chain activity is growing at breakneck speed, and with it comes users' demand for block space.
In essence, this is a scalability war, fought with technical terms such as parachains, sidechains, cross-chain bridges, zones, sharding, rollups, and data availability (DA). In this article, we try to cut through the noise and lay out this scalability war in detail. So before you fasten your seat belt, pour yourself a cup of coffee or tea, because it will be a long journey.
Ever since Vitalik proposed the famous scalability trilemma, the crypto community has held the misconception that the trilemma is static and everything must be traded off. Although this is correct in most cases, we see from time to time that the boundaries of the trilemma are pushed outward (either through genuine innovation or through the introduction of additional but reasonable trust assumptions).
As we walk through the different designs, we will highlight some of these cases. But before we start, it is important to define what scalability means. In short, scalability is the ability to process more transactions without increasing verification costs. With this in mind, let's look at the current TPS figures of the major blockchains. In this article, we will explain the design properties that produce these different throughput levels. Importantly, the numbers below are not the maximum levels these protocols can achieve, but values observed from their historical usage.
Monolithic chains vs. modular chains
First, let's look at monolithic chains. In this camp, Polygon PoS and BSC do not meet our definition of scalability, because they increase throughput only through larger blocks (a trade-off that, as is well known, raises node resource requirements and sacrifices decentralization for performance). Although these trade-offs have their market fit, they are not a long-term solution and are therefore less noteworthy. Polygon recognizes this and is pivoting to a more sustainable, rollup-centric approach.
On the other hand, Solana is a serious attempt at pushing the boundary of a fully composable monolithic blockchain. Solana's secret weapon is a ledger construct called Proof of History (PoH). The idea of PoH is to create a global notion of time (a global clock): all transactions, including consensus votes, carry a reliable timestamp attached by their issuer. These timestamps allow nodes to make progress without waiting to synchronize with each other on every block. Solana also optimizes its execution environment to process transactions in parallel, instead of one at a time like the EVM, achieving better scaling.
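The core idea of PoH can be sketched as a sequential hash chain. This is a deliberately simplified illustration, not Solana's actual implementation (real PoH runs many SHA-256 iterations per tick and has far more machinery around it); the point is only that events mixed into the chain acquire a verifiable order, because the chain can only be produced sequentially:

```python
import hashlib

def poh_tick(state: bytes, event: bytes = b"") -> bytes:
    """Advance the hash chain by one tick, optionally mixing in an event."""
    return hashlib.sha256(state + event).digest()

# Build a short PoH sequence: each entry provably came after the previous one,
# because SHA-256 must be computed sequentially.
state = hashlib.sha256(b"genesis").digest()
ledger = []
for i, event in enumerate([b"", b"tx: alice->bob 5", b"", b"vote: slot 1"]):
    state = poh_tick(state, event)
    ledger.append((i, event, state.hex()[:16]))

# Any verifier can replay the chain and confirm the recorded order of events.
replay = hashlib.sha256(b"genesis").digest()
for i, event, digest_prefix in ledger:
    replay = poh_tick(replay, event)
    assert replay.hex()[:16] == digest_prefix
print("PoH order verified for", len(ledger), "ticks")
```

Note that verification (unlike production) can be split across segments and checked in parallel, which is part of what lets Solana validators make progress without per-block synchronization.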
Although Solana achieves real throughput gains, they still stem largely from more intensive hardware and network bandwidth usage. While this lowers user costs, it confines node operation to a limited set of data centers. Contrast this with Ethereum: although high fees price many people out of using Ethereum, the chain is ultimately governed by its active users, who can run nodes at home.
Where do monolithic blockchains fall short?
The scalability of a monolithic blockchain is ultimately bounded by the processing capacity of a single powerful node. Regardless of one's subjective view of decentralization, that capacity can only be pushed so far before validation is restricted to relatively few actors. In contrast, a modular chain splits the total workload across different nodes, so it can produce more throughput than any single node could handle.
Crucially, decentralization is only half of the modular picture. Just as important a motivation behind modularity is effective resource pricing (i.e., fees). In a monolithic chain, all transactions compete for the same block space and consume the same resources. Therefore, when the chain is congested, excess market demand for a single application adversely affects every application on the chain, because everyone's fees rise. This problem has been with us ever since CryptoKitties congested the Ethereum network in 2017. Importantly, extra throughput never truly solves the problem; it only delays it. The history of the Internet tells us that every increase in capacity makes room for previously infeasible applications, and those applications tend to quickly consume the capacity that was just added.
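The fee-isolation argument can be made concrete with a toy model. The numbers and the fee formula below are entirely illustrative (no real chain prices blockspace this way); the model only shows how a demand spike in one application spills into everyone's fees when resources are pooled, but stays contained when each application prices its own pool:

```python
def clearing_fee(demand: int, capacity: int, base_fee: float = 1.0) -> float:
    """Toy fee model: fees scale with how far demand exceeds capacity."""
    return base_fee * max(1.0, demand / capacity)

# Monolithic chain: one shared resource pool for all apps.
nft_mint_demand, dex_demand, payments_demand = 9000, 500, 500
shared_fee = clearing_fee(nft_mint_demand + dex_demand + payments_demand, capacity=1000)

# Modular chains: each app prices its own resource pool of the same capacity.
nft_fee = clearing_fee(nft_mint_demand, capacity=1000)
dex_fee = clearing_fee(dex_demand, capacity=1000)
pay_fee = clearing_fee(payments_demand, capacity=1000)

print(f"shared pool fee for everyone: {shared_fee:.1f}x base")  # the NFT surge hurts all apps
print(f"isolated fees: nft={nft_fee:.1f}x dex={dex_fee:.1f}x payments={pay_fee:.1f}x")
```

In the shared pool everyone pays 10x base fees; with isolated pools only the NFT mint pays a congestion premium while the DEX and payments stay at base fees.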
Finally, a monolithic chain cannot optimize itself for very different applications with different priorities. Take Solana's Kin and Serum DEX as examples. Solana's low latency suits applications like Serum DEX; however, maintaining that latency also requires limiting state growth, which is enforced by charging state rent on each account. This in turn hurts account-intensive applications like Kin, which cannot deliver Solana's throughput to the general public because of the resulting fees.
Looking to the future, it is naive to expect a single resource pool to reliably support various crypto applications (from Metaverse and games to DeFi and payment). Although it is useful to increase the throughput of a fully composable chain, we need a wider design space and better resource pricing for mainstream adoption. This is where the modular approach comes into play.
The evolution of blockchain
In the sacred quest of scaling, we have witnessed the trend shift from "composability" to "modularity". First, let's define these terms: composability refers to the ability of applications to interact with each other seamlessly, in a way that minimizes friction; modularity is the practice of decomposing a system into individual components (modules) that can be detached and reassembled at will.
Ethereum rollups, ETH 2.0 shards, Cosmos zones, Polkadot parachains, Avalanche subnets, Near chunks, and Algorand's secondary chains can all be regarded as modules. Each module handles a subset of the total workload of its ecosystem while maintaining the ability to communicate with the others. As we dig into these ecosystems, we will notice that modular designs differ greatly in how they implement security across modules.
Multi-chain hubs such as Avalanche, Cosmos, and Algorand favor modules with independent security, while Ethereum, Polkadot, Near, and Celestia (a relatively new L1 design) envision modules that ultimately share or inherit each other's security.
The simplest modular design is the interoperability hub: multiple chains/networks that communicate with each other through standard protocols. Hubs offer a broader design space, allowing application-specific blockchains to be customized at many levels, including the virtual machine (VM), node requirements, fee model, and governance. The flexibility of app-chains is unmatched by smart contracts on general-purpose chains. Let's briefly review some examples:
- Terra supports more than $8 billion worth of decentralized stablecoins. With a bespoke fee and inflation model, Terra is optimized for the adoption and stability of its stablecoins.
- Osmosis, currently the cross-chain DEX with the largest IBC volume, encrypts transactions until they are finalized in order to prevent front-running.
- Algorand and Avalanche are designed to host enterprise use cases on custom networks, from CBDCs operated by government agencies to gaming networks operated by committees of game companies. Importantly, the throughput of such a network can be improved with more powerful machines without affecting the decentralization of other networks/chains.
Hubs also offer scalability advantages because they can use resources more efficiently. Take Avalanche as an example: the C-Chain is used for EVM-compatible smart contracts, and the X-Chain for P2P payments. Because payments are usually independent of each other (Bob paying Charlie does not depend on Alice paying Dana), the X-Chain can process some transactions concurrently. By separating VMs by core utility, Avalanche can handle more transactions.
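Why independence enables concurrency can be sketched in a few lines. This is a generic illustration of the principle, not Avalanche's actual X-Chain code: payments that touch disjoint sets of accounts cannot conflict, so they can be applied in parallel without coordination.

```python
from concurrent.futures import ThreadPoolExecutor

balances = {"alice": 10, "bob": 5, "charlie": 7, "dana": 3}
payments = [("bob", "charlie", 2), ("alice", "dana", 4)]  # disjoint account sets

def conflicts(p, q):
    """Two payments conflict only if they touch a common account."""
    return bool({p[0], p[1]} & {q[0], q[1]})

# The two payments above are independent, so (as on the X-Chain) they can
# be applied concurrently without any ordering between them.
assert not conflicts(payments[0], payments[1])

def apply_payment(p):
    sender, receiver, amount = p
    balances[sender] -= amount
    balances[receiver] += amount

with ThreadPoolExecutor() as pool:
    list(pool.map(apply_payment, payments))

print(balances)  # {'alice': 6, 'bob': 3, 'charlie': 9, 'dana': 7}
```

A general-purpose smart-contract VM like the EVM cannot assume this independence, which is one reason a payments-specialized chain can extract more parallelism.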
These ecosystems can also scale vertically through fundamental innovation. Avalanche and Algorand stand out here, because both scale better by reducing the communication overhead of consensus. Avalanche achieves this through a "sub-sampled voting" process, while Algorand uses cheap VRFs to randomly select a unique committee to reach consensus on each block.
Above, we listed the advantages of the hub approach. However, this approach also runs into some key limitations. The most obvious is that each blockchain needs to bootstrap its own security, because the chains cannot share or inherit each other's security. As is well known, any secure cross-chain communication requires either a trusted third party or synchrony assumptions. In the hub approach, the trusted third party is the validator majority of the counterparty chain.
For example, tokens bridged from one chain to another through IBC can always be redeemed (stolen) by a malicious majority of the source chain's validators. With only a handful of chains coexisting today, this majority-trust assumption may work well enough. However, in a future with a long tail of chains/networks, expecting all of those chains to trust each other's validators in order to communicate or share liquidity is far from ideal. This brings us to rollups and shards, which provide cross-chain communication with guarantees stronger than majority-trust assumptions.
(Although Cosmos will introduce cross-zone shared staking, and Avalanche allows multiple chains to be validated by the same network, these solutions are less scalable because they place higher demands on validators. In practice, they are likely to be adopted by the most active chains rather than the long tail.)
Data Availability (DA)
After years of research, it is generally agreed that all secure sharing of security comes down to a very subtle problem called data availability (DA). To understand why, we need a quick look at how nodes operate in a typical blockchain.
In a typical blockchain (e.g., Ethereum), full nodes download and verify all transactions, while light nodes check only the block headers (block digests signed off by a majority of validators). Therefore, although full nodes can independently detect and reject invalid transactions (such as ones minting tokens out of thin air), light nodes treat whatever the majority commits to as valid.
To improve on this, ideally any single full node could protect all light nodes by publishing small proofs. Under such a design, light nodes could operate with security guarantees similar to full nodes while spending far fewer resources. However, this introduces a new problem called data availability (DA).
If a malicious validator publishes a block header but withholds some or all of the transactions in the block, full nodes cannot determine whether the block is valid, because the missing transactions may be invalid or cause double-spends. Without that knowledge, full nodes cannot generate fraud proofs of invalidity to protect light nodes. In short, for the protection mechanism to work at all, light nodes must first be sure that the validators have published the complete list of transactions.
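The interplay between fraud proofs and withheld data can be shown with a toy model. Everything here is illustrative (real fraud proofs are succinct state-transition proofs, not full scans): a full node can point at an invalid transaction only if it can actually see all of them.

```python
# Toy validity rule: a tx is invalid if it mints tokens out of thin air.
def is_valid_tx(tx):
    return sum(tx["inputs"]) == sum(tx["outputs"])

def fraud_proof(block_txs):
    """A full node scans published txs and points at the first invalid one."""
    for i, tx in enumerate(block_txs):
        if tx is None:  # data withheld by the block producer
            return {"error": "data unavailable: cannot prove validity or fraud"}
        if not is_valid_tx(tx):
            return {"invalid_tx_index": i, "tx": tx}
    return None  # block looks valid

honest_block = [{"inputs": [5], "outputs": [5]}]
minting_block = [{"inputs": [5], "outputs": [5]}, {"inputs": [0], "outputs": [100]}]
withheld_block = [{"inputs": [5], "outputs": [5]}, None]

print(fraud_proof(honest_block))    # None
print(fraud_proof(minting_block))   # flags tx at index 1
print(fraud_proof(withheld_block))  # stuck: light nodes are unprotected
```

The third case is exactly the DA problem: withholding data is indistinguishable from hiding an invalid transaction, so availability must be guaranteed before fraud proofs mean anything.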
The DA problem is an integral part of any modular design whose cross-chain communication aims to go beyond majority-trust assumptions. Among L2s, rollups are special precisely because they do not try to dodge this problem.
In the rollup setting, we can regard the main chain (Ethereum) as a light node of the rollup (e.g., Arbitrum). The rollup publishes all of its transaction data on L1, so that any L1 node willing to commit the resources can execute those transactions and rebuild the rollup state from scratch. With the full state, anyone can advance the rollup to a new state and attest to the validity of the transition by publishing a validity or fraud proof. Having the data available on the main chain lets the rollup operate under a single-honest-node assumption, which is negligible, rather than an honest-majority assumption.
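"Rebuilding the state from published data" can be sketched directly. This is a hypothetical toy rollup, not Arbitrum's format: batches of transfers land on L1 as calldata, and anyone who replays them from genesis derives the same state and the same commitment, with no trust in the rollup operator.

```python
import hashlib
import json

# Toy rollup: batches of transfers published as calldata on L1.
l1_calldata = [
    json.dumps([["alice", "bob", 3], ["bob", "charlie", 1]]),
    json.dumps([["charlie", "alice", 1]]),
]

def replay_rollup(batches, genesis):
    """Rebuild the rollup state from scratch by replaying published batches."""
    state = dict(genesis)
    for batch in batches:
        for sender, receiver, amount in json.loads(batch):
            assert state[sender] >= amount, "invalid transfer"
            state[sender] -= amount
            state[receiver] = state.get(receiver, 0) + amount
    return state

state = replay_rollup(l1_calldata, {"alice": 10, "bob": 0, "charlie": 0})
# A commitment over the state lets anyone check they derived the same result.
root = hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()
print(state)  # {'alice': 8, 'bob': 2, 'charlie': 0}
```

Because the replay is deterministic, a single honest node that spots a bogus state commitment can prove it wrong to everyone else, which is where the single-honest-node assumption comes from.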
Consider the following to understand how rollup achieves better scalability through this design:
- Since any single node holding the current rollup state can protect all other nodes that lack it, the centralization risk of rollup nodes is small, so rollup blocks can reasonably be made larger.
- Even though all L1 nodes download the rollup's transaction data, only a small fraction of them execute those transactions and build the rollup state, reducing overall resource consumption.
- The rollup data is compressed with clever techniques before being published to L1.
- Similar to app-chains, rollups can customize their VMs for specific use cases, which means more efficient resource use.
By now we all know there are two major types of rollup: Optimistic rollups and ZK-rollups. From a scalability perspective, ZK-rollups have the edge because they compress data more efficiently, achieving a lower L1 footprint in some use cases. This subtle difference can already be observed in practice: Optimism publishes data to L1 for every transaction, while dYdX publishes data reflecting each account balance. As a result, dYdX occupies about 1/5 of Optimism's L1 space while processing an estimated ~10x the throughput. This advantage naturally translates into lower costs on ZK-rollup L2 networks.
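The per-transaction vs. per-balance difference is easy to quantify with synthetic data. The numbers and JSON encoding below are illustrative, not either system's real calldata format; the point is that when many trades churn a small set of accounts, publishing only the resulting balances (made trustworthy by a validity proof) is dramatically smaller than publishing every trade:

```python
import json
import random

# Synthetic workload: 1,000 trades among 50 accounts.
random.seed(0)
accounts = [f"acct{i}" for i in range(50)]
trades = [(random.choice(accounts), random.choice(accounts), random.randint(1, 99))
          for _ in range(1000)]

# Optimism-style: publish data for every transaction.
per_tx_bytes = len(json.dumps(trades).encode())

# dYdX-style: publish only the resulting balances; a validity proof attests
# they are the correct outcome of all 1,000 trades.
balances = {a: 0 for a in accounts}
for sender, receiver, amount in trades:
    balances[sender] -= amount
    balances[receiver] += amount
per_balance_bytes = len(json.dumps(balances).encode())

print(f"per-tx data: {per_tx_bytes} bytes; per-balance data: {per_balance_bytes} bytes")
print(f"savings: ~{per_tx_bytes / per_balance_bytes:.0f}x")
```

This also shows why the advantage is use-case dependent: if every transaction touched a fresh account, the balance diff would grow with the trade count and the savings would shrink.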
Unlike the fraud proofs of Optimistic rollups, the validity proofs of ZK-rollups also enable a new scalability solution called volition. Although the full impact of volitions remains to be seen, they look very promising because they let users freely decide whether their data is published on-chain or kept off-chain, choosing a security level per transaction type. Both zkSync and Starkware will launch volition solutions in the coming weeks/months.
Although rollups apply clever techniques to compress data, all of that data must still be published to all L1 nodes. Rollups therefore provide only linear scalability gains and are limited in how far they can cut costs; they also remain highly exposed to fluctuations in Ethereum's gas price. To scale sustainably, Ethereum needs to expand its data capacity, which explains the need for Ethereum sharding.
Sharding and Data Availability (DA) Proof
Sharding further relaxes the requirement that every main-chain node download all data, using a new primitive called DA proofs to achieve higher scalability. With DA proofs, each node downloads only a small portion of each shard chain's data, yet these small portions are jointly sufficient to reconstruct all shard blocks. This achieves shared security across shards, because it ensures that any single shard node can raise a dispute and all nodes can resolve it on demand. Polkadot and Near have implemented DA proofs in their sharding designs, and ETH 2.0 will adopt them as well.
At this point, it is worth noting how ETH 2.0's sharding roadmap differs from the others. Although Ethereum's initial roadmap looked much like Polkadot's, it has recently moved to data-only shards. In other words, Ethereum's shards will serve as the DA layer for rollups. This means Ethereum will keep maintaining a single state, as it does today. In contrast, Polkadot performs all execution at the base layer, with a separate state per shard.
One major advantage of using shards as a pure data layer is that rollups can flexibly spread their data across multiple shards while remaining fully composable. As a result, rollup throughput and cost are not bounded by the data capacity of a single shard. With 64 shards, the expected maximum total rollup throughput rises from ~5K TPS to ~100K TPS. In contrast, no matter how much throughput Polkadot generates in aggregate, costs will be constrained by the limited throughput (1,000-1,500 TPS) of a single parachain.
Dedicated DA layer
The dedicated DA layer is the latest form of modular blockchain design. It takes the basic idea of the ETH 2.0 DA layer and pushes it in a different direction. The pioneering project here is Celestia, but newer solutions such as Polygon Avail are moving in the same direction.
Similar to ETH 2.0's DA sharding, Celestia acts as a base layer into which other chains (rollups) can plug to inherit security. Celestia's solution differs from Ethereum's in two fundamental respects:
- It performs no meaningful state execution at the base layer (whereas ETH 2.0 will). This shields rollups from highly unpredictable base-layer fees, which in a stateful environment can spike due to token sales, NFT airdrops, or the appearance of high-yield farming opportunities. Rollups consume the same resource (i.e., bytes at the base layer) for security, and only for security. This efficiency ties rollup costs primarily to the usage of that particular rollup rather than of the base layer.
- Thanks to DA proofs, Celestia can increase its DA throughput without sharding. A key property of DA proofs is that the more nodes participate in sampling, the more data can be safely stored. In Celestia's case, this means that as more light nodes participate in DA sampling, blocks can grow larger (higher throughput) without centralization.
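A back-of-the-envelope sketch shows why sampling works. The model below is simplified (real DA sampling uses 2D Reed-Solomon erasure coding with Merkle proofs, and the exact withholding thresholds differ); the assumed 50% threshold and chunk sizes are illustrative only. Because erasure coding forces an attacker to withhold a large fraction f of chunks before the block becomes unrecoverable, each random sample has probability f of hitting the withheld portion:

```python
# If an attacker must withhold at least a fraction f of the erasure-coded
# chunks to make a block unrecoverable, the chance that ONE light node's
# k random samples all miss the withheld portion is (1 - f) ** k.
def miss_probability(f: float, k: int) -> float:
    return (1 - f) ** k

f = 0.5  # illustrative threshold: half the chunks must be withheld
for k in (5, 10, 20, 30):
    print(f"{k} samples -> attacker fools one node with p = {miss_probability(f, k):.2e}")

# More participating light nodes also means more total chunks held across the
# network, so honest nodes can jointly reconstruct a larger block -- which is
# why throughput can grow with the number of sampling light nodes.
nodes, k, chunk_bytes = 1000, 20, 256
print(f"{nodes} nodes x {k} samples cover up to ~{nodes * k * chunk_bytes / 1e6:.1f} MB of chunks")
```

With 20 samples per node, a withholding attack slips past an individual light node less than one time in a million, and the guarantee compounds across nodes.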
As with all designs, dedicated DA layers have drawbacks. An immediate one is the lack of a default settlement layer: for rollups to share assets with one another, they must implement ways to interpret each other's fraud proofs.
We have evaluated a range of blockchain designs: monolithic chains, multi-chain hubs, rollups, shard chains, and dedicated DA layers. Given their relatively simple infrastructure, broader design space, and horizontal scaling capability, we believe multi-chain hubs are best suited to the industry's immediate needs. In the long run, given their resource efficiency and unique scalability, dedicated DA layers may well become the endgame of scaling.
Posted by: CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/delphi-digital-in-depth-report-the-end-of-blockchain-expansion/