Whether or not you are an expert in blockchain technology, if you have spent enough time in the world of crypto, the terms Ethereum scaling, Layer 2, and Rollup will be familiar to you. Many people know one or more of these concepts, but how exactly are they related? Why do we need these technologies? What problems are they trying to solve?
If you want to know the answers to these questions, I hope this article will help.
The content of this article is organized logically and does not require any cryptography or computer science background, so as long as you are familiar with Ethereum itself, you should be able to follow it.
Ever since the day CryptoKitties congested the Ethereum chain, Ethereum developers have been exploring ways to increase Ethereum’s throughput.
In principle, the approaches fall into two categories:
- Transform the Ethereum blockchain itself; let’s call this the Layer 1 approach. The main solution here is sharding.
- Change the way we use Ethereum: move the execution and processing of transactions off-chain, with Ethereum itself used only to verify the validity of those transactions and provide security. This is what we usually hear called Layer 2.
The core idea of Layer 2 is to execute and calculate a large number of actual transactions off-chain, and then verify the final validity of the transaction through a very small number of transactions on Ethereum. Whether it’s State Channel, Plasma or Rollup, it actually follows this principle.
When Layer 2 comes up today, many people’s first reaction is to link it to Optimistic/ZK Rollup. But a brief introduction to state channels and Plasma first will help us understand why Rollup ultimately won out and became the most talked-about Layer 2 solution.
State channels are a long-standing blockchain scaling solution, most famously used in Bitcoin’s Lightning Network. The following is the most primitive structure of a state channel:
Rather than describing the principle abstractly, an example better explains what a state channel is:
You really like the barbershop downstairs, but every time you go, Tony keeps pestering you to sign up for a prepaid membership card.
One day you finally give in: you’ll be coming back anyway, so you might as well get the card. You transfer 1,000 yuan to the barbershop and receive a card.
From then on, every visit to the barbershop no longer requires a transfer. Instead, the balance on your card is deducted, so each haircut completes an actual transaction with the barbershop.
After a month, you are about to move, but the money on your card has not been used up. So you apply to the barbershop for a refund, and Tony refunds you 200 yuan.
Over the course of the month, you paid the barbershop a dozen times, but you and the barbershop actually transferred money to each other only twice.
Putting this process on the blockchain: buying the card corresponds to depositing money into a smart contract, which opens a state channel between you and the barbershop; returning the card closes that state channel. The deposit and the refund are the only two transactions that touch the Ethereum main chain.
Suppose you don’t have a card, then you need to transfer money to the barber shop every time you get your hair done.
Through this example, we find that with a state channel, only the first and last steps require an on-chain transaction. In between, you and the barber can send an unlimited number of signed messages to each other (each completing one payment). The Ethereum blockchain is used only as a settlement layer for the final one-time payout, which greatly reduces the burden on the underlying chain.
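The whole flow can be sketched in a few lines of Python. `Channel` and the `sign` helper below are illustrative stand-ins (a real channel uses a smart contract and ECDSA signatures), not any actual library:

```python
import hashlib

def sign(secret: str, message: str) -> str:
    # Stand-in for a real digital signature (e.g., ECDSA).
    return hashlib.sha256((secret + message).encode()).hexdigest()

class Channel:
    def __init__(self, deposit: int):
        self.deposit = deposit          # on-chain tx 1: lock funds
        self.balance = deposit
        self.receipts = []              # off-chain signed state updates

    def pay_offchain(self, amount: int, user_secret: str):
        # Each haircut is just a signed message, never touching L1.
        self.balance -= amount
        self.receipts.append(sign(user_secret, f"balance={self.balance}"))

    def close(self) -> int:
        # on-chain tx 2: settle at the latest signed balance
        return self.balance

ch = Channel(1000)
for _ in range(10):                     # ten haircuts at 80 each
    ch.pay_offchain(80, "alice-key")
refund = ch.close()
print(refund)                           # 200: only 2 on-chain txs total
```

Ten logical payments happen off-chain, yet only the deposit and the refund would ever be main-chain transactions.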
State channels shine in simple scenarios such as streaming payments: they record transaction data via signed off-chain messages and collapse a large number of logical transactions into just two transactions on the main chain.
Because of their low cost, state channels are well suited to micro-payment scenarios, such as buying your morning coffee with ETH or BTC.
But state channels require both the sender and the receiver of funds to enter the channel, and keeping a channel open and supporting operations more complex than streaming payments requires locking up large sums of money. Developers quickly realized that state channels were not a general-purpose scaling solution.
Now that we know the limitations of state channels: to solve them, Plasma came into being. It allows assets to be sent to any recipient while still improving TPS. In fact, when developers first started working on Layer 2 solutions, Plasma was considered “the one” for a long time.
To understand Plasma, we must first understand that it is not an actual technology, it is more like a design idea or technical architecture.
Plasma is usually a chain. It can have a consensus mechanism different from the main chain’s, or even its own miners. Most importantly, a role called the “Operator” on the Plasma chain periodically builds a Merkle tree from the state transitions on the child chain and submits the tree’s root hash to the main chain for verification and recording. We will explain why a Merkle tree and its root hash can be used to verify state transitions when we discuss Rollup, which uses the same construction.
In this way, no matter how many transactions occur on the child chain between two submissions, the child chain only needs to submit the resulting state information to the main chain.
The following is a simple schematic diagram of using the Plasma mechanism:
Users who want to enter the Plasma chain need to map their assets from the Ethereum main chain, and when they want to move assets back to the main chain, they must wait out a challenge period during which others can use the “fraud proof” mechanism to confirm the validity of the asset transfer.
“Fraud proof” means that during this challenge period (usually 7 days or more), anyone can submit a proof, verified against the Merkle tree, that a user’s withdrawal is illegitimate.
But this brings two problems:
(1) To verify the correctness of a withdrawal, a node needs to store the transaction and state information on Layer 2, because Plasma only submits the result of its state transitions. To submit a fraud proof, you must hold the Layer 2 data yourself, which greatly increases the cost of playing the validator role.
(2) This is the so-called “data unavailability” problem: Plasma does not send the transaction data that occurs on its chain to the main chain for storage, so main-chain nodes cannot obtain that data and cannot verify it with the main chain’s own security. Some solutions submit the data to centralized storage or IPFS, but that is meaningless from the main chain’s perspective, because the whole premise of using Layer 2 is trusting the security of the main chain itself.
We can see that a crucial problem with Plasma is data unavailability: the main chain only receives the state-transition result submitted by the operator. It can only hope that someone stores the transaction and state information off-chain, and that the fraud-proof mechanism keeps the child chain’s submissions honest. The main chain merely plays the role of confirmer in this process, and the resulting security is weak.
Understanding “data unavailability”
Ethereum makes all on-chain data public and queryable by anyone. Plasma, however, submits only execution results to the main chain rather than the transaction data itself. This makes it very efficient, but the price is that Plasma cannot establish the same level of trust as the Ethereum main chain.
Rollup can be seen as a compromise between the original main-chain processing model and the Plasma model. It does submit data to the main chain, but it compresses that data as far as possible through clever encoding, and omits or shrinks certain fields based on Rollup’s own characteristics, as long as the final submission can still be verified by anyone.
After the transaction data is uploaded to the chain, anyone can verify whether the results submitted by Rollup are correct based on this data. Therefore, Rollup is more secure than Plasma.
To sum up, the core advantage of Rollup is so-called “data availability”: committing the data to the main chain greatly strengthens security.
So how exactly does Rollup work?
First, Rollup has a (or a series of interrelated) contracts on the main chain:
This contract maintains the state record of the Rollup layer. The state record is actually a hash value stored at the root node of a Merkle tree, called the state root.
The leaf node of this Merkle tree is the account status information in Rollup. If you don’t know what a Merkle tree is, here is a simple example:
As you can see, this is a binary tree, with the current rollup-layer account state recorded at its leaf nodes.
For every two pieces of state information (such as State 1 / State 2), we can compute a unique hash value (e.g., Hash(1,2)) with some hash function to serve as the parent of those two leaf nodes, and so on layer by layer, until we finally obtain a single hash value stored at the root node:
You don’t need to know how to calculate hashes, you just need to remember a few things.
- The change of any leaf node will cause the value of the root node to change (the change of any state will cause the Root hash to change)
- If the root hash values of the two trees are the same, it means that the information stored in their leaf nodes is completely consistent (so it is only necessary to compare the hash values of the two root nodes to confirm the consistency of the underlying state information)
- According to the hash value of the root node and the path to a certain state information, we can confirm that a certain state information exists in this hash tree.
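These three properties are easy to check with a toy Merkle tree. The sketch below uses SHA-256 and four leaves purely for illustration:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf.encode()) for leaf in leaves]
    while len(level) > 1:   # pair up nodes until one root remains
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

states = ["state1", "state2", "state3", "state4"]
root = merkle_root(states)

# 1. Changing any leaf changes the root.
assert merkle_root(["state1", "CHANGED", "state3", "state4"]) != root
# 2. Identical leaves give identical roots.
assert merkle_root(list(states)) == root
# 3. A Merkle path (here, two sibling hashes) proves "state1" is in the
#    tree without revealing the other leaves.
sibling0 = h(b"state2")
sibling1 = h(h(b"state3") + h(b"state4"))
assert h(h(h(b"state1") + sibling0) + sibling1) == root
```

Property 3 is what makes fraud proofs possible: a verifier holding only the root can check a claim about one account’s state.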
Through the state root, we can get a key-value map of the account state:
The key is the account address, and the value contains state information such as the balance, nonce, contract code, and storage (for contract accounts).
When a transaction occurs on the rollup, it is obvious that the state of these accounts will change, resulting in a new state root.
Although this can provide very accurate and timely feedback on the latest state changes on the Rollup, if the state root is updated on the main chain every time a transaction occurs, the cost will be higher than that of executing these transactions on Layer 1.
To solve this problem, transactions generated on the rollup are packaged and aggregated in batches, and a new state root is produced from the state after all transactions in the batch have been executed. Whoever packages the transactions and submits them to the smart contract on the main chain must compute the new state root and submit it together with the previous state root and the transaction data.
This package is called a “batch.” After the submitter submits a batch to the Rollup contract, the main chain verifies whether the new state root is correct. If verification passes, the contract updates the state root to the newly submitted one, completing the confirmation of one state transition inside the rollup.
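The batch flow can be sketched as follows. `RollupContract`, `apply_txs`, and the toy `state_root` are invented names for illustration; a real contract relies on fraud or validity proofs rather than naively re-executing the batch:

```python
def apply_txs(state: dict, txs) -> dict:
    """Execute a batch of (sender, recipient, amount) transfers."""
    state = dict(state)
    for sender, recipient, amount in txs:
        state[sender] -= amount
        state[recipient] = state.get(recipient, 0) + amount
    return state

def state_root(state: dict) -> int:
    # Toy stand-in for a Merkle root over account states.
    return hash(tuple(sorted(state.items())))

class RollupContract:
    def __init__(self, genesis_state):
        self.root = state_root(genesis_state)

    def submit_batch(self, prev_root, txs, new_root, prev_state):
        assert prev_root == self.root, "batch built on stale state"
        # Naive check by re-execution; a real contract uses a fraud
        # proof (Optimistic) or validity proof (ZK) here instead.
        assert state_root(apply_txs(prev_state, txs)) == new_root
        self.root = new_root            # state transition confirmed

genesis = {"alice": 100, "bob": 50}
contract = RollupContract(genesis)
batch = [("alice", "bob", 30), ("bob", "alice", 5)]
new_state = apply_txs(genesis, batch)
contract.submit_batch(state_root(genesis), batch, state_root(new_state), genesis)
print(new_state)   # {'alice': 75, 'bob': 75}
```

Note how many transfers collapse into one root update: the contract only ever stores a single hash, however large the batch.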
The essence of Rollup
The following is a simplified version of the Optimistic Rollup process. You can see that the biggest difference from Plasma is the addition of transaction data to the submission.
Therefore, the essence of Rollup is to aggregate a large number of actual transactions into a single transaction on the main chain. These transactions are executed and computed by the Rollup chain, but their data is submitted to the main chain, and the main chain plays the role of “judge of the highest court,” giving these transactions their final confirmation. As a result, we inherit the consensus and security of the main chain while improving transaction throughput and reducing transaction costs.
After reading the above description, you may have some questions, don’t worry, we will deduce and explain step by step.
If full transaction data is submitted, can we still scale? Does the data compression mentioned above solve this problem? How?
Both Rollup variants can scale, and the core is the compression and batching of transactions. Ethereum’s block gas limit is fixed, so the smaller each compressed transaction, the more transactions can be submitted to the main chain at once. So how is this done?
The following compression scheme, described by Vitalik in his article, serves as an example to help us understand:
A simple transaction on the Ethereum main chain (such as sending ETH) typically consumes about 110 bytes. However, sending ETH on Rollup can be reduced to about 12 bytes.
To achieve this compression, on one hand a simpler high-level encoding is used (Ethereum’s current RLP wastes one byte on the length of each value); on the other hand, there are some clever compression tricks:
Nonce: The nonce can be completely omitted in the rollup
Gasprice: We can allow users to pay with a fixed range of gasprices, such as 2 to the 16th power
Gas: We can also set gas to a power of 2. Additionally, we can also set gas limits at the batch level.
To: address can be identified by index on Merkle tree
Value: We can store value in scientific notation. In most cases, transfers require only 1 to 3 significant digits.
Signature: We can use BLS to aggregate signatures to combine multiple signatures into one
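As a rough, back-of-the-envelope tally (the exact byte counts vary by implementation; these figures loosely follow the field sizes above and are assumptions, not any scheme’s exact format):

```python
# Approximate byte budget for one compressed rollup transfer.
compressed_tx = {
    "to_index": 4,     # index into an address tree, not a 20-byte address
    "value": 3,        # scientific notation: 1-3 significant digits
    "gas_fields": 1,   # gas / gasprice restricted to powers of two
    "sig_share": 4,    # amortized share of one BLS-aggregated signature
    # nonce: 0 bytes — omitted entirely, tracked in rollup state
}
total = sum(compressed_tx.values())
print(total, "bytes, vs ~110 bytes for a plain L1 transfer")   # 12 bytes
```

The point is not the exact numbers but the order of magnitude: roughly one tenth the bytes means roughly ten times as many transfers per block of calldata.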
These compression tricks are the key to rollup scaling. Without compressing transaction data, a rollup might improve efficiency only about 10x over the main chain; with them, we can achieve 50x, 100x, or even higher compression efficiency.
At the same time, to save gas, the compressed transaction data is stored in the calldata parameter. The well-known EIP-4488 proposed reducing the gas cost per byte of calldata precisely to further increase the amount of rollup-layer transaction data that one main-chain transaction can carry. We will show some simple numbers on compression effects below when comparing two different ZK-Rollups.
How do we verify that the submitted information is correct?
The final state-transition confirmation (which also represents the confirmation of the transactions) is determined by the update of the state root, yet the submitter seems free to submit whatever transaction data and state root he likes. So how do we verify that what he submitted is correct?
For this problem, there are generally two solutions, and according to the different solutions, rollup is also divided into two categories:
True to its name, Optimistic Rollup chooses to optimistically believe that the submitted batch is correct, unless someone proves through a fraud proof that the submitter is actually a bad actor who submitted a wrong batch.
(1) Here is a simple example of fraud proof construction (thanks again Vitalik):
Submitting a fraud proof to prove that a submitted batch is wrong requires the information included in the green part of the figure below:
The batch submitted by the submitter
A portion of the Merkle tree represented by the previous state root (i.e., the real account state information), from which a complete Merkle tree can be reconstructed
Based on the Merkle tree constructed in the second part, we simulate and execute the transactions submitted in the batch, thereby obtaining a new account state, a new Merkle tree, and a new state root.
Compare the state root obtained in the previous step with the state root in the batch to verify whether the batch is correct
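The four steps above amount to “re-execute the batch and compare roots.” A toy sketch, with invented helpers (`apply_txs` and the hash-based `state_root`) standing in for the real Merkle-tree reconstruction:

```python
def apply_txs(state: dict, txs) -> dict:
    """Simulate executing (sender, recipient, amount) transfers."""
    state = dict(state)
    for sender, recipient, amount in txs:
        state[sender] -= amount
        state[recipient] = state.get(recipient, 0) + amount
    return state

def state_root(state: dict) -> int:
    # Toy stand-in for the Merkle root over account states.
    return hash(tuple(sorted(state.items())))

def is_batch_fraudulent(pre_state, txs, claimed_root) -> bool:
    # Steps 2-4: rebuild the pre-state, re-execute the batch,
    # and compare the resulting root with the submitter's claim.
    return state_root(apply_txs(pre_state, txs)) != claimed_root

pre = {"alice": 100, "bob": 50}
txs = [("alice", "bob", 30)]
honest = state_root(apply_txs(pre, txs))
print(is_batch_fraudulent(pre, txs, honest))        # False: honest batch
print(is_batch_fraudulent(pre, txs, honest + 1))    # True: slashing follows
```

A real fraud proof also includes the Merkle branches needed so the main-chain contract can run this check itself.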
We have now walked through how Optimistic Rollup guarantees the authenticity of the state root. In practice, to deter the submitter from cheating, the submitter usually has to stake funds; when a submission is proven wrong, part of the stake is slashed as punishment. In some schemes, the validator who submits the corresponding fraud proof receives the slashed funds, as an incentive to detect and report fraud.
If we compare OR with Plasma, we find some similarities: both use a fraud-proof mechanism, which requires a validator role to monitor submissions to the main chain. However, since OR also submits the transaction data to the main chain, an OR validator does not need to store and record the OR transactions itself. For comparison, the simple architecture diagram above is reproduced here for readers:
The core of ZK-Rollup
Another type of solution is ZK Rollup. Unlike OR (Optimistic Rollup), ZK Rollup makes a fundamentally different assumption: it does not trust the submitter to submit a correct batch of his own accord, somewhat like the “presumption of guilt” in law. In addition to the transaction data and the pre- and post-state roots, the submitter must attach a ZK-SNARK proof to every batch.
A ZK-SNARK is essentially a “validity proof” that can directly verify that the submitted batch is correct. Once this proof is submitted to the Rollup contract, anyone can use it to verify the corresponding batch of Rollup-layer transactions, which means the rollup no longer needs to wait 7-14 days after submission for verification.
The difference between a validity proof and a fraud proof
So how should we understand the difference between the “validity proofs” used by ZK-Rollup and the “fraud proofs” used by Plasma/Optimistic Rollup?
First of all, these three schemes require someone to sort, execute and package transactions on Layer 2. Let’s call this role “executor”.
Plasma’s executor submits only the execution result, on a take-it-or-leave-it basis: if you don’t trust me, you must initiate a challenge, and initiating a challenge requires you to have stored the underlying transaction data yourself.
The same is true for OR, except the executor also posts the transaction data when submitting. It is still take-it-or-leave-it, but if you don’t believe it, you can verify it yourself from the transaction data.
But ZK is different. ZK says: I don’t want to wait days for you to challenge me. What a waste of time; I’m in a hurry to confirm my transactions. So ZK generates a proof at submission time, attaches it, and completes verification at the same moment as submission.
Likewise, both Plasma and OR need staking to make cheating unprofitable for the executor, but ZK does not, because it never asks anyone to take it on faith: it proves its innocence with every submission.
Beyond this difference, another interesting aspect is that ZK-SNARKs allow us to prove the validity of a batch of transactions without submitting the full transaction data, which matters for Rollup. We will explain this below.
(3) Implementation logic of ZK-Rollup
First of all, ZK-Rollup is essentially a Rollup solution, so it still needs to do the following two things:
Pack and compress a batch of transaction data
Generate new state root
The only difference is the verification method: ZK-Rollup does not wait for a validator to initiate a fraud-proof process, but directly generates a ZK-SNARK proof, adds it to the batch, and submits it to the rollup contract on the main chain.
As shown in the figure, compared with OR the submission now includes a ZK proof, and the validator role disappears.
After submission to the rollup contract, anyone can verify the proof. Once verification succeeds, the main-chain rollup contract updates the state root to the newly submitted one.
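Schematically, the contract flow looks like this. The “proof” below is a plain hash standing in for a real ZK-SNARK, and all names (`ZkRollupContract`, `fake_prove`, `fake_verify`) are invented for illustration:

```python
import hashlib

def fake_prove(prev_root: str, new_root: str, txs: str) -> str:
    # Stand-in prover: a real system runs a SNARK prover here,
    # which is the expensive step.
    return hashlib.sha256(f"{prev_root}|{new_root}|{txs}".encode()).hexdigest()

def fake_verify(proof: str, prev_root: str, new_root: str, txs: str) -> bool:
    # Stand-in verifier: cheap, and anyone can run it.
    return proof == fake_prove(prev_root, new_root, txs)

class ZkRollupContract:
    def __init__(self, genesis_root: str):
        self.root = genesis_root

    def submit_batch(self, new_root: str, txs: str, proof: str):
        assert fake_verify(proof, self.root, new_root, txs), "invalid proof"
        self.root = new_root    # finalized immediately, no 7-day window

c = ZkRollupContract("root0")
p = fake_prove("root0", "root1", "txdata")
c.submit_batch("root1", "txdata", p)
print(c.root)                   # root1
```

The key structural point: acceptance is decided at submission time by the proof check, so there is no challenge period at all.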
How to generate a ZK-SNARK validity proof?
- What is ZK-SNARK?
The full name of ZK-SNARK is “Zero-Knowledge Succinct Non-Interactive Argument of Knowledge.” I’ll try to explain what each part means:
Succinct: the proof produced is much smaller than the data it proves.
For example, if we want to prove that a series of transactions do exist and take place, the amount of proof data generated must be smaller than the data amount of these transactions themselves.
Non-interactive: once the proof is constructed, the prover only needs to send a single message to the verifier, and usually anyone can verify it without permission.
This matters for ZK-Rollup and other on-chain ZK applications, because some ZK proofs require multiple rounds of interaction between prover and verifier (the color-guessing problem is a classic example), and on-chain that would mean initiating multiple transactions, which is intolerable in terms of cost.
Argument: resistant to computationally bounded provers.
This means that, under existing computing power, the cryptography used to generate the proof cannot be brute-forced at any acceptable cost in time or money.
of Knowledge: it is impossible to construct a valid proof without actually knowing the thing being proven.
This also matters for ZK-Rollup, because we cannot allow someone to create a ZK proof from non-transaction data and submit it to the main-chain contract.
Finally, and the most important “Zero-Knowledge”:
Zero-knowledge means that when the prover proves a statement to the verifier, it does not reveal any useful information or any information about the proved entity itself.
- One of the simplest zero-knowledge proof examples is this
Alice wants to prove to Bob that she knows the password to a certain safe. The password is the only way to open it, but she doesn’t want to tell Bob the password. What should she do?
It just so happens that Bob knows the safe contains a love letter from his ex-girlfriend to him, bearing both his and his ex-girlfriend’s fingerprints.
So Alice opens the safe behind Bob’s back, takes out the love letter, and hands it to Bob.
This proves that Alice knows the safe’s password, without Alice ever telling Bob what the password is. Success!
- How is a ZK-SNARK proof generated for a ZK-Rollup?
Briefly, generating a ZK-SNARK proof is divided into the following steps:
Determine the logical validation rules for the problem (for example, check whether the balance, nonce meets the requirements, etc.)
Transform the logical verification rules into an arithmetic circuit problem
Transform the circuit problem into R1CS (rank-1 constraint system) form
Convert R1CS to QAP (Quadratic Arithmetic Program)
After these transformations, we obtain a ZK-SNARK proof that can be checked by a fixed verification procedure according to the logical verification rules. The specific conversion process can be found in this article.
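To give a tiny taste of the R1CS step: a single multiplication gate `x * y = out` becomes one constraint of the form ⟨a,s⟩ · ⟨b,s⟩ = ⟨c,s⟩ over a witness vector s = [1, x, y, out]. (Real systems work over a prime field and need thousands of such constraints; this is only the shape of one.)

```python
def dot(u, v):
    # Inner product of a constraint vector with the witness.
    return sum(a * b for a, b in zip(u, v))

# Constraint vectors for the gate x * y = out, witness s = [1, x, y, out]
a = [0, 1, 0, 0]        # selects x
b = [0, 0, 1, 0]        # selects y
c = [0, 0, 0, 1]        # selects out

s = [1, 3, 4, 12]       # honest claim: 3 * 4 = 12
print(dot(a, s) * dot(b, s) == dot(c, s))   # True: constraint satisfied

s_bad = [1, 3, 4, 13]   # dishonest claim: 3 * 4 = 13
print(dot(a, s_bad) * dot(b, s_bad) == dot(c, s_bad))   # False
```

The subsequent QAP step then bundles many such constraints into polynomials, which is what makes the final proof succinct.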
If you feel that this part is more complicated than everything before it, you are right. It is complicated for current ZK-Rollup solution providers too, which is one reason ZK-Rollup development and practical adoption lag behind Optimistic Rollup. If you are not a math/cryptography expert or a Matter Labs developer, here are the few things you need to know:
Generating a ZK-SNARK proof costs far more computation and time than verifying a Merkle tree.
Not every language, compilation environment, virtual machine, and instruction set can seamlessly support the process above; additional adaptation is required.
On the first point: this is the direction the major ZK solution providers are currently working on. First is the time cost: if generating a usable ZK proof takes an hour, users’ withdrawal times are indirectly lengthened. The computational cost has two parts: the size of the generated ZK proof, and the computing power needed to verify it. The larger these are, the more gas must be consumed on Ethereum, which in turn erodes ZK-Rollup’s efficiency gains.
On the second point: this is a big reason ZK-Rollup development is currently limited. When the EVM was designed, its developers did not anticipate ZK technology, so generating usable zero-knowledge proofs for EVM operations is nearly impossible, giving rise to the need for a ZK-EVM.
- Why is EVM compatibility so difficult for ZK?
Open DefiLlama and you will find that the top Layer 2 solutions by TVL are all OR solutions. This is because these OR solutions already have their own EVM-compatible networks: developers can seamlessly port smart contracts from Ethereum to them, and users can swap, stake, and provide liquidity on them.
However, it is still difficult for ZK-Rollup to do this at present, and many existing solutions can only support simple payment and swap scenarios.
Why is this? First, let’s be clear about what happens on Layer 1: the bytecode of deployed smart contracts is stored in Ethereum storage. Transactions propagate through the peer-to-peer network, and for each transaction, every full node loads the corresponding bytecode and executes it on the EVM (with the transaction as input data) to arrive at the same state.
On Layer 2, the smart contract bytecode is likewise stored in storage, and the user interacts in the same way. However, transactions are sent off-chain to a centralized zkEVM node. The zkEVM must not only execute the bytecode but also generate a proof that the state was updated correctly once the transactions are applied. Finally, the Layer 1 contract can verify the proof and update the state without re-executing the Layer 2 transactions.
In other words, executing transactions on a ZK-Rollup follows a completely different logic and path: the zkEVM must generate ZK circuit proofs while executing transactions, and generating ZK-SNARK proofs for the existing EVM runs into the following problems:
Some elliptic curve operations required by ZK-SNARK are not supported
Compared to traditional virtual machines, EVM has many unique opcodes that are difficult for circuit design
The EVM operates on 256-bit integers (whereas most common virtual machines operate on 32- or 64-bit integers), while zero-knowledge proofs “naturally” operate over prime fields.
These are only some of the problems with generating ZK proofs for the EVM. Although OR also needs to build a virtual machine to perform EVM operations, it only needs transaction packaging and related functions on top of executing transactions, so it is much simpler to build. For ZK-Rollup, besides the difficulty of generating a ZK proof while staying EVM-compatible, verifying that proof on Layer 1 is not easy either.
If you want to know more about ZK-EVM difficulty, you can read this article: https://hackmd.io/@yezhang/S1_KMMbGt
Having read all of the above, it is undeniable that implementing ZK-Rollup is technically very hard. So why don’t we simply use the “simpler” Optimistic Rollup technology?
Now let’s do a simple comparison of the two Rollup techniques.
Optimistic vs ZK
- Efficiency optimization (TPS/transaction fee)
The following is a comparison of the fees and TPS of several different schemes currently on the market in a specific Ethereum environment:
Thanks to the @W3.Hitchhiker team for their contributions!
From the figure, we can see that the ZK schemes are more efficient than the OR schemes. Why?
For a Rollup scheme, what matters most is how much Layer 2 transaction data one Ethereum transaction can carry, which depends on two parameters:
Gas consumption of a transaction compressed by Rollup
The maximum gas limit of an Ethereum block
Rollup addresses the first point. Although storing and verifying a ZK-Rollup proof requires some storage space and gas (on the order of 500K), better transaction compression wins out, because storing transaction data accounts for the vast majority of gas consumption. So ZK-Rollup beats OR on efficiency optimization.
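To see why calldata size dominates, here is a rough, assumption-laden calculation (a 30M block gas limit, 12-second blocks, 16 gas per calldata byte, the ~12-byte compressed transfer from earlier, and an invented 300-gas amortized batch overhead):

```python
# Back-of-the-envelope TPS comparison; all numbers are assumptions
# for intuition only, not measurements of any real network.
block_gas_limit = 30_000_000
block_time_s = 12

gas_per_l1_transfer = 21_000            # base cost of a plain ETH transfer
# Rollup transfer: 16 gas per calldata byte * ~12 compressed bytes,
# plus an assumed amortized share of fixed batch overhead.
gas_per_rollup_transfer = 16 * 12 + 300

l1_tps = block_gas_limit / gas_per_l1_transfer / block_time_s
rollup_tps = block_gas_limit / gas_per_rollup_transfer / block_time_s
print(round(l1_tps), round(rollup_tps))   # roughly 119 vs 5081
```

Under these assumptions the rollup fits about 40x more transfers into the same block space, which is why shaving calldata bytes matters more than the fixed proof-verification cost.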
As a side note, you may notice that ZKPort has the best TPS and transaction-cost numbers in the table. That is mainly because the Validium it uses is essentially a Plasma scheme with fraud proofs replaced by ZK proofs: it does not submit transaction data, so its efficiency is entirely determined by the processing efficiency of the Plasma-style chain, but it also faces the data-unavailability problem on the security side.
The calculation above assumes a gas price of 30 Gwei, and we all know how high gas prices can get when Ethereum activity spikes. In those conditions, the cost savings of Rollup, and of the ZK schemes in particular, become even more pronounced.
- Time costs
We mentioned earlier that, because of the fraud-proof mechanism, withdrawals from an Optimistic Rollup require a 7-14 day window for others to challenge potentially malicious behavior.
Of course, the withdrawal period can be shortened through mechanisms independent of the Rollup itself, such as the liquidity-pool approach used by Optimistic Rollup solutions like Boba Network.
Let’s assume such a scenario:
Alice is an OR user and holds 5 ETH of assets on L2.
There is a liquidity pool B on L1 dedicated to providing liquidity for OR users like Alice.
Now Alice wants to withdraw all her assets from the OR, so she makes a deal with B:
Alice takes 5 ETH from B immediately, paying a certain fee.
After 7 days, Alice’s withdrawal from the OR is unlocked, and the 5 ETH she took is returned to the pool.
This carries some risk for the liquidity pool, so the pool can hedge by monitoring the OR contract and claiming the penalties for dishonest submissions; the fee it charges also serves as a reserve against risk.
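A sketch of this fast-exit flow, with invented names (`LiquidityPool`, `fast_exit`) and an assumed 1% fee:

```python
class LiquidityPool:
    def __init__(self, funds: float):
        self.funds = funds
        self.pending = []       # claims waiting out the 7-day window

    def fast_exit(self, user: str, amount: float, fee_rate: float = 0.01):
        fee = amount * fee_rate
        self.funds -= amount - fee      # pay out now, keep the fee
        self.pending.append((user, amount))
        return amount - fee             # user receives this immediately

    def settle(self):
        # After the challenge period, the pool redeems the L2 withdrawals.
        for _, amount in self.pending:
            self.funds += amount
        self.pending.clear()

pool = LiquidityPool(100.0)
received = pool.fast_exit("alice", 5.0)
print(received)         # ~4.95 ETH now, instead of 5 ETH in 7 days
pool.settle()
print(pool.funds)       # ~100.05: the pool earned the fee
```

Alice trades a small fee for instant liquidity, while the pool bears the 7-day settlement risk in exchange for the fee income.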
But this method does not work for NFTs: an NFT is indivisible, and the liquidity pool cannot simply hand users a copy of one.
ZK-Rollup does not have this problem. The submitter must prove his innocence at submission time by providing a verifiable ZK-SNARK proof; proof generation currently takes as little as a few minutes, so the user only needs to wait for the next batch to be submitted and verified.
Time cost is the flaw of OR and one of the significant advantages of ZK-Rollup.
- EVM compatibility
Both Optimistic and ZK face the problem of being compatible with complex EVM contract-call operations, but Optimistic is clearly easier to implement.
OR solutions including Arbitrum and Optimism have EVM-compatible virtual machines, allowing them to process any transaction that happens on the Ethereum main chain. Some OG-level DeFi protocols such as Uniswap/Synthetix/Curve have already been deployed on OR networks.
Constructing an EVM-compatible ZK-SNARK prover is so difficult that, until recently, no publicly available solution existed. But there is good news: the zkSync 2.0 public testnet officially launched at the end of February, the first EVM-compatible ZK Rollup on an Ethereum testnet. Perhaps large-scale practical use of ZK Rollup will arrive sooner than we think.
The answer to this question is obvious: the security of OR comes from economics. To operate well, an OR must design a reasonable incentive mechanism that drives a group of validators on the main chain to monitor submitters at all times and stand ready to submit fraud proofs. For submitters, it must also ensure, through mechanisms such as staking, that a node pays a corresponding price for misbehavior.
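The incentive game just described can be sketched in a few lines. The bond size and reward split below are made-up illustrative numbers, not any protocol's actual parameters:

```python
# Toy sketch of the optimistic-rollup incentive mechanism described above.
# BOND_ETH and CHALLENGER_SHARE are hypothetical illustrative values.

BOND_ETH = 10.0          # stake a submitter must post before submitting batches
CHALLENGER_SHARE = 0.5   # fraction of a slashed bond paid out to the challenger


def resolve_challenge(submitter_bond, batch_is_valid):
    """Return (submitter_refund, challenger_reward) once a fraud proof is judged."""
    if batch_is_valid:
        # The challenge fails: an honest submitter keeps the full bond.
        return submitter_bond, 0.0
    # The challenge succeeds: the dishonest submitter is slashed. Part of the
    # bond rewards the challenger; the rest can be burned or kept by the protocol.
    return 0.0, submitter_bond * CHALLENGER_SHARE


# An invalid batch costs the submitter the entire bond, while the reward
# is what makes constantly watching the chain worthwhile for validators.
refund, reward = resolve_challenge(BOND_ETH, batch_is_valid=False)
```

The key design constraint is that the expected reward for a successful challenge must exceed the validator's monitoring cost, while the bond must exceed whatever a submitter could gain from a fraudulent batch.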
The security of ZK comes from mathematics and cryptography, much like a founding principle of trust in blockchains: code cannot do evil. The guarantees of mathematics and cryptography are far more robust than the optimistic belief that human nature will not do evil.
Of course, the current rollup mechanism itself also has certain security problems. Although rollups submit data to the main chain to solve the data availability problem, we have not yet discussed exactly who is responsible for processing, sorting, compressing, packing, and committing transactions. Current mainstream solutions such as Arbitrum, Optimism, and StarkNet use a role called the sequencer, a single node run by the team itself, which makes this step highly centralized.
We know that decentralization is the premise of all security. The advantage of the single-sequencer model is efficiency: it allows fast iteration while rollups are still in the exploratory stage. These projects also state that sequencer decentralization will be carried out gradually, for example by electing sequencer nodes via PoS or dPoS; newer solutions such as Metis are already exploring this.
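To make the sequencer's job concrete, here is a minimal single-node sequencer sketch. The transaction format and the ordering rule are illustrative assumptions; real sequencers use far more sophisticated compression and commitment schemes:

```python
import hashlib
import json
import zlib

# Toy single-node sequencer illustrating the role described above: it orders
# pending transactions, compresses them into a batch, and commits the
# compressed data plus a hash "on-chain". Formats here are illustrative.


class Sequencer:
    def __init__(self):
        self.mempool = []

    def submit(self, tx):
        self.mempool.append(tx)

    def produce_batch(self):
        # A centralized sequencer unilaterally decides the ordering; here we
        # simply sort by (sender, nonce) as an illustrative rule.
        ordered = sorted(self.mempool, key=lambda t: (t["from"], t["nonce"]))
        raw = json.dumps(ordered, sort_keys=True).encode()
        compressed = zlib.compress(raw)              # cheaper calldata on L1
        commitment = hashlib.sha256(raw).hexdigest()
        self.mempool.clear()
        return compressed, commitment


seq = Sequencer()
seq.submit({"from": "0xA", "nonce": 1, "to": "0xB", "value": 5})
seq.submit({"from": "0xA", "nonce": 0, "to": "0xC", "value": 1})
batch, commitment = seq.produce_batch()

# Anyone can decompress the published batch and re-derive the commitment,
# which is what gives rollups their data availability guarantee.
assert hashlib.sha256(zlib.decompress(batch)).hexdigest() == commitment
```

Because one node performs every step, whoever runs it can censor or reorder transactions at will, which is exactly the centralization risk discussed above.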
Let’s visualize the above discussion in terms of a table:
Overall, OR is the more mature solution at this stage: products from Optimism and Arbitrum are already available to Ethereum developers. However, because it relies on the fraud-proof mechanism, its withdrawal time and security remain debatable, and its cost optimization is slightly inferior to ZK's.
The weaknesses of ZK Rollup are basically technical problems. With a large number of excellent developers investing in related research, most people, including Vitalik, agree that ZK Rollup will be the better scaling solution in the future.
Is Rollup perfect?
After the above description of the three types of Layer 2 solutions, I believe you already have a certain understanding of them. In fact, the order of this article is also the order in which developers studied Layer 2 scaling schemes: after finding a problem with one solution, a better solution was proposed to fix it. This process extends well beyond cryptography research to all engineering problems:
Come up with ideas, test, iterate, optimize until you find the most viable solution.
Now it seems that Rollup is the answer we have been looking for. It solves the problem of universality, solves the problem of data availability, and looks good in terms of security and efficiency. So, is it perfect?
The answer is no. No solution is perfect, and Rollup has plenty of problems of its own; even ZK-Rollup, which looks better, cannot avoid them.
- There is a ceiling for efficiency optimization:
When discussing the main difference between Rollup and Plasma, we emphasized data availability: Rollup must submit transaction data to the main chain, which is the main reason it beat Plasma.
On the other hand, putting that data on-chain means Rollup is still limited by the capacity of the Ethereum main chain:
A simple calculation:
Current Ethereum block max gas limit: 12.5M gas
Cost of data stored on-chain per byte: 16 gas
Maximum bytes per block: ~781,000 bytes (12,500,000 / 16)
Data required for a Rollup ETH transfer: 12 bytes (see the gas cost in the previous section)
Transfers per block: ~65,000 (781,000 bytes / 12 bytes)
Ethereum's average block time: 13 seconds
TPS: ~5,000 (65,000 tx / 13 s)
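The same back-of-the-envelope arithmetic in code, using the figures quoted above:

```python
# Reproducing the rollup TPS estimate above (figures as used in the article).
BLOCK_GAS_LIMIT = 12_500_000  # max gas per Ethereum block
GAS_PER_BYTE = 16             # calldata cost per (non-zero) byte
BYTES_PER_TRANSFER = 12       # compressed rollup ETH transfer
BLOCK_TIME_S = 13             # average Ethereum block time, seconds

max_bytes = BLOCK_GAS_LIMIT // GAS_PER_BYTE      # ~781,000 bytes per block
tx_per_block = max_bytes // BYTES_PER_TRANSFER   # ~65,000 transfers per block
tps = tx_per_block / BLOCK_TIME_S                # ~5,000 TPS
```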
We make many assumptions here; for example, we assume all transactions are simple ETH transfers. Real transactions include many complex contract calls with higher gas consumption, and for ZK-Rollup we must also count the cost of verifying the ZK proof (around 500K gas per batch).
Even so, the TPS Rollup can achieve is only about 5,000. As we saw above, the raw efficiency gain of the Plasma mechanism is much higher than Rollup's.
The Ethereum Foundation is well aware of this problem. Its main direction at present is sharding + rollup, which would increase rollup TPS by another order of magnitude.
- Liquidity Fragmentation:
Under the current multi-chain structure, the fragmentation of Ethereum's liquidity has become increasingly serious.
Moreover, given the variety of competing technical routes, the number of rollup networks will only keep growing, bringing even more serious liquidity fragmentation.
List of current Ethereum and its layer2 network TVL
The good news is that cross-chain communication can mitigate this problem. A notable example: Synthetix is already working on merging its debt pools on the Ethereum main chain and Optimism. If this process completes smoothly, it should promote the trend of consolidating liquidity across the main chain and sub-chains.
After all, the debt-pool model of synthetic-asset projects is far more complex than the common liquidity-pool model; it is foreseeable that mainstream DeFi projects such as Uniswap will follow.
- Reduced composability due to communication challenges and technical barriers:
In the previous section, we saw how communication problems fragment liquidity. The same applies to interaction between main-chain dapps and sub-chain dapps. Every new protocol built on Ethereum is like a Lego block that other protocols can easily build on top of, which is one of the reasons DeFi has grown so fast.
If the communication problem cannot be solved, dapps on a sub-chain must rebuild their own ecosystems from scratch, a great waste of resources. A communication mechanism is needed not only between sub-chain and main chain, but also between sub-chains.
Again, some excellent developers are working on this; let's hope they can simplify these operations and processes. Operating on Layer 1 is cumbersome enough already, and adding Layer 2 complexity on top would raise the barrier to entry of the Web3 world even higher.
- Centralization risk
In the current rollup solutions mentioned above, the sequencer responsible for executing, sorting, compressing, and packaging transactions is a relatively centralized role. If Rollup wants to further improve its security, it must, of course, solve this centralization problem.
This article has already exceeded 10,000 words, far beyond my expectations. Scaling Ethereum is a huge and complex subject, and this article only covers part of the Layer 2 landscape. The Layer 1 scaling solution (sharding) and other Layer 2 solutions such as side chains and Validium are not covered. In fact, scaling Ethereum is not something a single solution can settle once and for all; many teams are exploring multiple paths, and companies like Polygon have invested in a large number of different types of Layer 2 solutions.
At the same time, many topics had to be left out for reasons of space: the communication support required for submissions between Layer 2 and Layer 1, how fraud proofs and validity proofs are implemented on Layer 1, the specific differences between ZK and OR implementations, and so on. Understanding these is essential for researchers who want to dig deeper into Layer 2, especially Rollup scaling. To keep the concepts in this article easy to follow, we generalized in places; for example, OR and ZK differ considerably in how they compress transaction data, and the Vitalik example used in this article leans toward the ZK approach. In the course of writing we also referred to some excellent Layer 2 content, marked in the text and at the end of the article; we hope more good content appears to help you build up related understanding.
Finally, of the two Rollup solutions we introduced, Optimistic Rollup has already taken the lead in the market, launching usable products while gradually attracting mainstream dapps into its ecosystem. The great contributions of the relevant developers are undeniable. But in the long run, ZK Rollup + sharding is the future we should look forward to.
1. An Incomplete Guide to Rollups (Vitalik)
2. Cost comparison of the four Layer 2 solutions by the W3hitchhiker team
3. Scroll Tech's interpretation of zkEVM implementation
Posted by: CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/one-article-to-understand-the-expansion-road-of-ethereum/