Why rollups + data sharding mode will be the only sustainable high-scalability solution
Discussions about rollups + data sharding (hereafter "rads") usually start from the claim that it is "more secure and decentralized", but that is only part of the reason. The real reason rads will be the only way to achieve global-scale throughput is sheer scalability: it is the only path to millions of TPS in the long term. To be specific, I personally prefer to consider zkRollups, because optimistic rollups have their own inherent limitations. My argument rests on two grounds:
a) technical sustainability, and
b) economic sustainability.
Breaking this down further, a technically sustainable blockchain node must be able to do three things:
1. Stay in sync with the chain.
2. Sync from the genesis block within a reasonable time.
3. Keep state growth from spiraling out of control.
For a decentralized network, none of these is negotiable, and each can easily become a serious bottleneck. [Addendum: someone pointed out that 2) is not strictly necessary, and I agree, because snapshots verified by social consensus have matured to a very good level.] While Ethereum strives to satisfy all three points, it is also pushing other possibilities forward, which on its own is clearly not enough. At the same time, a sharded chain that retains all three properties can raise scalability to a few thousand TPS at most, and that level is not enough.
Centralized solutions and their hard limits
More centralized networks, however, can start to compromise:
1) Not everyone needs to stay in sync with the chain; it is enough that the number of synced validators meets a minimum requirement.
2) Nodes need not sync from the genesis block; snapshots and other shortcuts suffice.
3) State expiry is a good solution and will eventually be implemented on most chains.
Until then, brute-force expiry schemes like regenesis can help. At this point you may object that such a network is not decentralized enough, but that is beside the point here: this article is only concerned with scalability.
1) is a hard limit: RAM, CPU, disk I/O, and bandwidth are potential bottlenecks for every node, and, more importantly, keeping the minimum required number of nodes in sync places a hard cap on how far the network can scale. Indeed, PoS networks like Solana and Polygon are already pushing against this limit, even though they currently process only a few hundred TPS (not counting votes). When I visited the Solana Beach website, it displayed a message saying Solana Beach was having trouble staying in sync with its Solana blockchain node. The block time shown was 0.55 seconds, 37.5% slower than the 0.4-second target. You need at least 128 GB of RAM just to keep up with the chain, and even 256 GB is not enough to sync from the genesis block, so you need a snapshot to make it work. This is exactly the compromise described above.
2) is where the compromise bites. Jameson Lopp ran a sync test on a 32 GB machine, predicting before he started that it was bound to fail; it crashed within an hour, never able to catch up with recent blocks. Solana simply makes for a convenient example here; the situation on comparable chains is much the same.
zkRollups easily beat centralized L1s
zkRs can push centralization even further than the most centralized L1, because validity proofs keep them as secure as the most decentralized L1! You can maintain a high degree of security with only a single active node at any given time. Of course, for censorship resistance and recoverability we want multiple sequencers, but these sequencers do not need to reach consensus among themselves and can simply take turns. For example, Hermez and Optimism both plan to have a single active sequencer at any given time, with multiple sequencers rotating across time periods.
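To make the rotation idea concrete, here is a minimal sketch of a round-robin schedule in which exactly one sequencer is active per time slot, so no consensus among sequencers is needed. The slot length and sequencer names are hypothetical illustrations, not Hermez's or Optimism's actual designs.

```python
# Hypothetical round-robin sequencer rotation: one active sequencer
# per slot, chosen deterministically, so no inter-sequencer consensus.
SLOT_SECONDS = 600  # assumed rotation period (10 minutes)

def active_sequencer(sequencers: list, unix_time: int) -> str:
    """Deterministically pick the single active sequencer for this slot."""
    slot = unix_time // SLOT_SECONDS
    return sequencers[slot % len(sequencers)]

seqs = ["seq-a", "seq-b", "seq-c"]
print(active_sequencer(seqs, 0))    # seq-a
print(active_sequencer(seqs, 599))  # seq-a (same slot)
print(active_sequencer(seqs, 600))  # seq-b (next slot)
```

Because the schedule is a pure function of time, every participant can verify who should be sequencing without any coordination protocol.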
In addition, zkRs can adopt all the innovations that make full-node clients as efficient as possible, whether those innovations were built for zkRs or for L1s. zkRollups can get very creative with state expiry schemes, because transaction history can be reconstructed directly from L1. Indeed, future innovations such as sharding and history-access precompiles will help zkRs run directly on data shards. We also need lightweight, stand-alone withdrawal clients to cover all the remaining safety requirements.
Even here, though, we run into hard limits. Whether a node has 1 TB or 2 TB of RAM, the configuration still has a ceiling, and you must also account for the infrastructure providers that need to sync node data.
So yes, a zkR scales far better than the most scalable L1, but a single zkR cannot reach global scale by itself.
Just use multiple zkRs
Simply put: run multiple zkRs on top of Ethereum's data shards, effectively sharding the zkRs themselves. Once data shards ship, they will provide massive data availability that keeps expanding as needed, projected to reach 15 million TPS within 10 years. No single zkR can reach this throughput, but multiple zkRs working together can.
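As a rough back-of-envelope for why data shards translate into rollup throughput: aggregate TPS is just data-availability bandwidth divided by bytes per rollup transaction. The shard count, shard block size, slot time, and bytes per transaction below are my own illustrative assumptions, not figures from the sharding spec.

```python
# Back-of-envelope: aggregate zkRollup TPS enabled by data shards.
# Every parameter here is an illustrative assumption.
SHARDS = 64                      # assumed number of data shards
SHARD_BLOCK_BYTES = 256 * 1024   # assumed data per shard per slot
SLOT_SECONDS = 12                # assumed slot time
TX_BYTES = 16                    # assumed bytes per compressed rollup tx

data_bytes_per_sec = SHARDS * SHARD_BLOCK_BYTES / SLOT_SECONDS
aggregate_tps = data_bytes_per_sec / TX_BYTES
print(f"{aggregate_tps:,.0f} TPS")  # on the order of ~87,000 TPS
```

Getting from this launch-parameter estimate to 15 million TPS assumes the shard count and shard sizes grow substantially over the decade, which is the article's premise.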
Will having different zkR shards break composability? As things stand today, yes. But solutions keep emerging in this space: fast-bridging projects such as Hop, Connext, cBridge, and Biconomy, and innovations like dAMM that let multiple zkRs share liquidity. Many of these innovations are difficult or outright impossible on L1. I expect continued innovation here, so that a network of multiple zkRs can achieve seamless interoperability.
tl;dr: Whatever the most centralized L1 can do, a zkR can do better, with obviously higher TPS. Moreover, we can run multiple zkRs to effectively reach global-scale throughput.
Economic sustainability
The question itself is straightforward: a network needs to collect more in transaction fees than it pays out in issuance to validators and stakers. In reality this is a very complicated topic, and I will try to keep it as simple as possible. It is true that speculative enthusiasm for tokens and an expected monetary premium can keep a network going even while it operates at a loss. But a truly resilient, decentralized network should strive for economic sustainability.
Centralized L1s cost far more to maintain than the revenue they collect
Let's look at two of our favorite examples: Polygon PoS and Solana. Polygon PoS collects roughly US$50,000 per day in transaction fees, about US$18 million per year. Meanwhile it pays out well over US$400 million per year in issuance rewards. Run the numbers and that works out to roughly a 95% net loss, which is an incredible figure! As for Solana, it collected only about US$10,000/day in revenue for a long stretch; as speculation grew more frenzied, this rose to about US$100,000/day, or roughly US$36.5 million per year. Solana's issuance is even more extreme, around US$4 billion, for a net loss of over 99%. I took these numbers from Token Terminal and Staking Rewards, and my estimates are conservative; in reality things look worse. For perspective, Ethereum collects more in fees in a single day than these two networks do in an entire year combined!
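The net-loss arithmetic above is simple enough to check directly. The inputs are the rough annual estimates quoted in the text, not audited figures:

```python
def net_loss_pct(annual_fees_usd: float, annual_issuance_usd: float) -> float:
    """Net loss as a fraction: the share of issuance not covered by fees."""
    return 1 - annual_fees_usd / annual_issuance_usd

# Rough annual estimates from the text (Token Terminal / Staking Rewards)
polygon = net_loss_pct(18e6, 400e6)   # ~0.955, roughly a 95% net loss
solana = net_loss_pct(36.5e6, 4e9)    # ~0.991, roughly a 99% net loss
print(f"Polygon PoS: {polygon:.1%}, Solana: {solana:.1%}")
```

Both results land close to the figures quoted above; since the US$400 million for Polygon is a lower bound, the true loss is, if anything, higher.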
They can't just increase throughput beyond what is technically possible
The standard counterargument goes: they will process more transactions and collect more fees in the future, issuance will decrease, and eventually the network will break even. Reality is messier. First, even at the lowest issuance rate Solana will reach this decade, it would still run a 96% loss. The numbers are so lopsided that the trend hardly matters: throughput would have to rise far beyond what is possible for the network to break even. As a thought experiment, at current transaction fees Solana would need roughly 154,000 TPS to break even, which is simply impossible with current hardware and bandwidth.
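The break-even thought experiment can be sketched the same way: sustained TPS such that fees cover issuance. The per-transaction fee below is a hypothetical placeholder roughly in line with Solana-level fees, chosen by me for illustration; the point is the order of magnitude, not reproducing the exact 154,000 figure.

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

def breakeven_tps(annual_issuance_usd: float, fee_per_tx_usd: float) -> float:
    """Sustained TPS needed for fee revenue to cover issuance."""
    return annual_issuance_usd / (fee_per_tx_usd * SECONDS_PER_YEAR)

# ~$4B/yr issuance at an assumed ~$0.0008 fee per transaction
print(f"{breakeven_tps(4e9, 0.0008):,.0f} TPS")  # on the order of 150,000+
```

Any plausible fee assumption in this range puts the break-even point two to three orders of magnitude above the few hundred fee-paying TPS the chain actually processes.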
The bigger problem is that those extra transactions are not free: they demand more bandwidth, cause more state growth, and place higher requirements on the system. Some argue there is still plenty of headroom. But as I noted in the technical-sustainability section, that is at best a question mark, given that you already need 128 GB of RAM to stay synced with a chain doing only a few hundred TPS. Another argument is that hardware will get cheaper. No doubt it will, but that is not a magic fix: you can spend the gains on greater scale, on lower costs, or on some balance of the two. And note that zkRs benefit from Moore's law and Nielsen's law too.
Eventually, all centralized L1s must raise their fees
There are only two ways out of this situation: a) the network becomes even more centralized, or b) fees rise once the network hits its limits. Option a) has the hard limits discussed above, so option b) is inevitable. We can already see this playing out on Polygon PoS, where fees have started to rise. Binance Smart Chain has in fact been through this process and is now a sustainable network, though at a much higher fee level. But remember, we are only talking about economic sustainability here.
Before continuing, let me note again that many more variables exist, such as price appreciation and volatility. This is certainly a simplified view, but I believe the general logic makes the picture clearer.
How rads deliver far greater efficiency at far lower cost
Now to rads. On the rollup side, maintenance costs are very, very low: only a handful of nodes need to run at any given time, and no expensive consensus mechanism is needed for security. Yet rollups offer greater throughput than any L1. A rollup can charge nominal L2 transaction fees and still keep the network profitable. On the data-availability side, Ethereum is currently highly deflationary; combined with the efficient beacon-chain consensus mechanism, only a minimal level of issuance is needed to sustain it, close to zero net issuance.
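To see why rollup fees can stay nominal, here is a hedged sketch of per-transaction cost on a zkRollup: L1 calldata cost plus proof-verification gas amortized across a batch. The bytes-per-transaction, proof gas, batch size, gas price, and ETH price are all illustrative assumptions of mine, not any specific rollup's real numbers.

```python
GAS_PER_CALLDATA_BYTE = 16  # Ethereum gas per non-zero calldata byte

def zkr_fee_usd(tx_bytes: int, proof_gas: int, batch_size: int,
                gas_price_gwei: float, eth_price_usd: float) -> float:
    """Amortized L1 cost per rollup transaction, in USD."""
    gas_per_tx = tx_bytes * GAS_PER_CALLDATA_BYTE + proof_gas / batch_size
    eth_per_tx = gas_per_tx * gas_price_gwei * 1e-9
    return eth_per_tx * eth_price_usd

# Assumed: 12 bytes/tx, a 500k-gas proof shared by 2,000 txs,
# 100 gwei gas, ETH at $3,000
print(f"${zkr_fee_usd(12, 500_000, 2_000, 100, 3_000):.2f}")
```

The key design point is amortization: the proof-verification cost per user shrinks as the batch grows, which is why larger batches drive fees toward the raw cost of the data itself.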
So the rads ecosystem as a whole can stay sustainable while offering greater scalability at lower underlying cost. In fact, zkRs are in the best interest of L1s, and I am glad Solana is at least considering the question.
tl;dr: Rads cost only a small fraction of what a centralized L1 costs, so for the same cost rads deliver far greater throughput; or, for similar throughput, rads cost only a fraction as much.
An important caveat: rads is a long-term solution and will take several years to mature. In the short term, however, there are two options:
1) A sustainable centralized L1, such as Binance Smart Chain, or rollups.
2) An unsustainable centralized L1.
Option 1) is too expensive for most people. The better rollups, like Hermez, dYdX, or Loopring, already offer BSC-like fees, while Arbitrum One and Optimistic Ethereum still have a way to go (though OVM 2.0, releasing next month, promises to cut OE fees by 10x). As for option 2), Polygon PoS and Solana currently offer lower fees, but as I argued at length above, this is not sustainable in the long run. In the short term, though, they are a fine option for users seeking cheap transactions. There is actually a third option: 3) validiums.
Validiums offer fees comparable to Polygon PoS and Solana; indeed, Immutable X is live now and lets you mint NFTs for free (you can try it on SwiftMint). For now, a validium's data availability is about as unsustainable as a centralized L1's, though alternatives such as data availability committees are in fact much cheaper. The beauty of validiums is that once data shards ship, they are directly forward-compatible: they can upgrade straight into rollups or volitions. L1s have this option too, as noted above, but for them it would be a far more disruptive change. And a validium is significantly more secure than a centralized L1.
Conclusion
1. The blockchain industry does not yet have the technology to achieve global-scale throughput.
2. Some projects offer particularly low fees, effectively subsidized by token speculation. That is a fine choice for users seeking cheap transactions, but recognize that it is not a sustainable model, quite apart from the compromises on decentralization and security.
3. Even these low-fee projects will be forced to raise fees in the long run if they ever attract serious traffic. There will always be a newer, more centralized L1 undercutting them; it is an unsustainable race to the bottom.
4. Sustainable options do exist today, such as Binance Smart Chain (economically, at least) and optimized rollups, with fees roughly in the $0.10 to $1 range.
5. In the long run, rads is the only solution that can scale to millions of TPS and global-scale throughput while remaining technically and economically sustainable. Remarkably, it does so while staying highly secure, decentralized, permissionless, trustless, and credibly neutral. As a wise man once said, "Any sufficiently advanced technology is indistinguishable from magic." That is what rollups + data sharding offers.
Finally, it is not just Ethereum: Tezos and Polygon have also made rollup-centric pivots, and every L1 will inevitably have to a) become a zkRollup itself; b) become a secure data-availability chain with rollups on top; or c) accept that its technology is obsolete and rely entirely on marketing, memes, and network effects.
Posted by: CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/why-rollups-data-sharding-mode-will-be-the-only-sustainable-high-scalability-solution/