A long Layer 2 introduction / explainer / discussion / notes

This article is relatively long, with a total of seven parts:

  1. Understanding zk rollups and optimistic rollups in plain language
  2. A brief history of Plasma
  3. Data availability
  4. Faction battles: the op camp
  5. Faction battles: the zk camp
  6. Immutable X
  7. Conclusion

Understanding zk rollups and optimistic rollups in plain language

A sidechain is a chain; a Layer 2 protocol is not. Because a sidechain is a chain, it has nodes, a consensus mechanism, storage, and blocks. A Layer 2 protocol is not a chain, so it has no nodes, no consensus mechanism, and no concept of blocks.

Since Layer 2 has no concept of blocks (and in fact no nodes and no consensus mechanism), you cannot actually see blocks in a Layer 2 block explorer. Open the Arbitrum explorer right now and you will see a "Bk" (Block) column, but look closely and each "block" is really a single transaction (see the screenshot below). The Layer 2 operator is only responsible for putting transactions in order and then processing them. And the key to a blockchain lies in the ordering of transactions.

[Screenshot: the Arbitrum block explorer]

For example, the well-known sandwich attack is really an ordering attack. Say you submit a transaction and pay $50 in gas. Someone spots your pending transaction in the mempool and spends $100 in gas to place two transactions around it: the first buys the same token you are about to buy, the second sells that token back to you after your purchase goes through. Buying earlier is cheap, buying later is expensive, so the attacker sandwiches your trade: they buy the token ahead of you to push the price up, then sell it to you at the higher price.

What you experience is a worse price: the extra slippage on your trade is the attacker's profit. The same ordering game is at work in Uniswap V3's just-in-time (JIT) single-tick liquidity provision.
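
To make the ordering attack concrete, here is a minimal sketch of a sandwich on a constant-product (x * y = k) pool, the model used by Uniswap V2. The reserves, trade sizes, and gas costs are made-up numbers for illustration, and swap fees are ignored.

```python
# Minimal sandwich-attack sketch on a constant-product AMM (x * y = k).
# Reserves and trade sizes are illustrative; swap fees and gas are ignored.

def swap_eth_for_token(eth_in, pool_eth, pool_token):
    """Sell ETH into the pool; return (tokens_out, new_pool_eth, new_pool_token)."""
    k = pool_eth * pool_token
    new_pool_eth = pool_eth + eth_in
    new_pool_token = k / new_pool_eth
    return pool_token - new_pool_token, new_pool_eth, new_pool_token

def swap_token_for_eth(token_in, pool_eth, pool_token):
    """Sell tokens into the pool; return (eth_out, new_pool_eth, new_pool_token)."""
    k = pool_eth * pool_token
    new_pool_token = pool_token + token_in
    new_pool_eth = k / new_pool_token
    return pool_eth - new_pool_eth, new_pool_eth, new_pool_token

pool_eth, pool_token = 1_000.0, 1_000_000.0      # hypothetical reserves

# Baseline: the victim's 10 ETH buy executes alone.
fair_tokens, _, _ = swap_eth_for_token(10, pool_eth, pool_token)

# Sandwich: attacker front-runs with 50 ETH, victim buys, attacker sells back.
atk_tokens, pool_eth, pool_token = swap_eth_for_token(50, pool_eth, pool_token)
victim_tokens, pool_eth, pool_token = swap_eth_for_token(10, pool_eth, pool_token)
atk_eth_back, pool_eth, pool_token = swap_token_for_eth(atk_tokens, pool_eth, pool_token)

print(f"victim receives {victim_tokens:,.0f} tokens instead of {fair_tokens:,.0f}")
print(f"attacker turned 50 ETH into {atk_eth_back:.2f} ETH purely through ordering")
```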

Once you understand how important transaction ordering is, the difference between the two Layer 2 schemes (zk and op) becomes easy to grasp.

In case zk and op are still unfamiliar, let me restate the difference between the two:

  1. zk stands for zero-knowledge proof: the result is proven correct first, and then the data is sent to the main chain. Verify first, act later.
  2. op stands for optimistic acceptance: the data is processed and sent to the main chain first, and then a challenge window opens. If nobody can prove it wrong within a week, it stands. Act first, verify later.

Here is a popular, not entirely rigorous, way to think about it.

Since ordering matters so much, the one in charge of ordering says: I will give you two options. The first is that you trust me. I order a large batch of transactions, process them, and send only the key data of the results to the main chain. If you have doubts about that data, I give you a week to find evidence. The original transaction data is all there (this is called data availability), so check it yourself and report me if you think I made a mistake. If your challenge succeeds, the main chain will resolve it in your favor. So as long as I behave, I can order and process transactions quickly. Easy.

The second option is that I do the hard work. For every batch I order and process, I produce a proof, and that proof goes to the main chain along with the result. That way, anyone can be certain at any moment that the work I am doing is correct. The downside is that producing a proof for everything I process is exhausting, so I will be slower, and the more complex the processing logic, the slower I get. You will have to bear with me on that.
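
A toy sketch of these two flows, just to fix the intuition. Nothing here resembles a real proof system; the function names and the hard-coded one-week window are only illustrative.

```python
# Toy contrast of the two Layer 2 flows. Nothing here resembles a real proof
# system; the names and the hard-coded one-week window are only illustrative.
import time

CHALLENGE_WINDOW = 7 * 24 * 3600   # optimistic rollup: wait a week for fraud proofs

def finalize_zk_batch(batch, proof, verify):
    """Validity-proof flow: check the proof first, then accept the result."""
    if not verify(batch, proof):          # proofs are costly to produce, cheap to check
        raise ValueError("invalid proof, batch rejected")
    return batch["new_state_root"]        # accepted as soon as the proof verifies

def finalize_op_batch(batch, posted_at, fraud_proofs):
    """Optimistic flow: accept the result, then wait out the challenge window."""
    if fraud_proofs:                      # someone proved the operator cheated
        raise ValueError("fraud proven, batch rolled back")
    if time.time() - posted_at < CHALLENGE_WINDOW:
        return None                       # not final yet, keep waiting
    return batch["new_state_root"]        # final after a week of silence
```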

After reading the above, you should have a rough feel for the two Layer 2 approaches. If it is still not clear, don't worry, we will go into more detail later. Before that, let's talk about Plasma, the scaling solution that made so much noise in the past.

A brief history of Plasma

Let's look at the history of Plasma. Back in 2017, at the peak of the boom, Ethereum was badly congested, and there was already plenty of discussion about scaling. Plasma was the first proposal to attract serious attention, because its proponents were Vitalik and Joseph Poon of the Lightning Network.


The first version of Plasma was called Plasma MVP (Minimal Viable Plasma). This version, needless to say, failed. The rough idea was to arrange a large number of transactions according to specific rules and then process them together (in fact, many of the later improvements were changes to the data structure used to store these transaction batches).

So what was the problem with MVP? Remember the person responsible for ordering transactions mentioned above? That person is the operator in a Layer 2 protocol, and a Layer 2 protocol has to account for the possibility that the operator misbehaves. In zk, the operator leaves a proof for everything it processes; in op, the operator's output goes through a challenge period. The problems with MVP were:

  1. The challenge period still exists, one week;
  2. If you assume the operator may misbehave, a challenger has to verify all of the transactions when initiating a challenge (note: all of them, not just a path along the Merkle tree; see the sketch below for why that distinction matters);
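
To see why "along the Merkle tree" matters: proving that one transaction belongs to a batch takes only a logarithmic number of hashes, whereas a Plasma MVP challenge needed the full transaction set. A minimal sketch, with the hashing and tree construction simplified (real implementations differ):

```python
# Minimal Merkle inclusion proof: verifying one leaf needs only log2(n) hashes,
# instead of re-checking every transaction in the batch.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return all levels of the tree, leaves first (len(leaves) must be a power of 2)."""
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        levels.append([h(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def make_proof(levels, index):
    """Collect the sibling hash at each level for the leaf at `index`."""
    proof = []
    for lvl in levels[:-1]:
        sibling = index ^ 1                     # the neighbour in this pair
        proof.append((lvl[sibling], sibling < index))
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

txs = [f"tx-{i}".encode() for i in range(8)]    # a batch of 8 transactions
levels = build_tree(txs)
root = levels[-1][0]
proof = make_proof(levels, 5)                   # prove tx-5 is in the batch
assert verify(b"tx-5", proof, root)             # only 3 hashes needed for 8 leaves
```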

Later, Vitalik proposed a new version, Plasma Cash. It changed the data structure of the transaction set and represented assets as non-fungible tokens, so verification no longer requires checking every transaction from everyone. But new problems emerged: users need to prove ownership when withdrawing, and they need to come online regularly during the challenge period. On top of that, Plasma is demanding in terms of storage, since users must keep all of their own data. The end result was failure.

Data availability

Have you noticed? zk posts a small amount of result data, plus a proof of that result, to the mainnet to show the result is valid; op also has to send the result and the original records to the mainnet, so that if a challenge comes there is evidence to check.

In the final analysis, Layer 2 protocols, and really every scaling solution, come down to a data availability problem. The easiest way to understand data availability is from the mainnet's point of view: can the mainnet's nodes obtain the data? The transaction data processed by the Layer 2 protocol must not sit only on some centralized server somewhere that we are asked to trust unconditionally.

The Ethereum mainnet puts all data on chain, so anyone can see and verify it, which makes it secure, but also slow and inefficient. Sidechains take the more drastic route and simply start a new chain. Sidechain data lives on the sidechain and has nothing to do with the main chain, so the main chain's security does not carry over to the sidechain. This is why many people consider Layer 2 protocols the best scaling approach: a Layer 2 protocol can inherit the security of the main chain.

In terms of data availability, Layer 2 protocols cut down the workload in two ways. One is to post less data (compared with the main chain) together with a proof that the transactions are valid. The other is to skip the proof and say "trust me, and if you don't, challenge me": the data is there when a challenge happens, and the verification process is simpler than in the older solutions like Plasma.

Now look back at this tweet from Vitalik:

[Image: Vitalik's tweet with a table of scaling approaches: fraud proofs vs. validity proofs (SNARKs/STARKs) on one axis, data on-chain vs. off-chain on the other]

In it, SNARKs and STARKs are the zero-knowledge-proof direction; look up the details if you are interested. Fraud proofs correspond to the optimistic rollups discussed earlier: they are the "challenges" mentioned above, presenting evidence that a transaction was fraudulent.

The other axis is about data availability. On-chain means the Layer 2 transaction data is available to the mainnet; off-chain means the Layer 2 transaction data is unknown and unavailable to the mainnet.

The validium in the chart has not come up yet, so a quick explanation: validium starts from zk rollup and then drops part of the data that would otherwise be posted to the main chain. With less data to post, transactions can be processed faster, even to the point where execution is effectively free (more on this below, because Immutable X uses it).

Following this chart, we arrive at the two families we are most familiar with today, which are also the two scaling approaches mentioned at the start of the article: zk and op.
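
The chart boils down to a two-by-two table: how correctness is proven on one axis, where the data lives on the other. Written out as a lookup (the labels follow the tweet; Plasma fills the fraud-proof, off-chain corner):

```python
# Vitalik's 2x2: how is correctness proven, and where does the data live?
scaling_taxonomy = {
    ("fraud proofs",    "data on-chain"):  "optimistic rollup",
    ("fraud proofs",    "data off-chain"): "plasma",
    ("validity proofs", "data on-chain"):  "zk rollup",        # SNARKs / STARKs
    ("validity proofs", "data off-chain"): "validium",
}

print(scaling_taxonomy[("validity proofs", "data off-chain")])  # -> validium
```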

More people are probably familiar with op, mainly because Arbitrum and Optimism are both live and host many projects (especially the degen, or "tugou", projects). The zk side actually has plenty of projects too, such as dYdX, Immutable X, and the payment option for Gitcoin donations.

Faction battles: the op camp

There are factions in everything, and blockchain is no exception. To put it more broadly, trading coins is not really about ups and downs; it is about human nature and worldliness.

Just as Layer 2 splits into the zk and op factions, there are factional disputes within zk and within op as well.

On the op side, as mentioned above, the two are Arbitrum and Optimism, both of which are already live. Arbitrum is clearly doing better: so far it has processed more than three million transactions, while Optimism has processed just over 200,000. But! Optimism's block explorer is done a bit better, and that has to be said.

[Screenshot: the Optimism block explorer]

It tells you very plainly that these are transaction batches, not blocks. On the right, the Layer 1 transactions corresponding to these Layer 2 batches are recorded on the main chain entirely by the main chain's rules, which is why the "Bk" block identifier appears there. Compared with Arbitrum, that is indeed noticeably more careful.

A comparison of some figures for Arbitrum and Optimism:

[Image: comparison table of Arbitrum and Optimism]

Judging from the current situation, Arbitrum is indeed well ahead in this comparison.

Recently, Optimism ran a retroactive funding round, distributing rewards to projects in its ecosystem: 60 projects in total, sharing 1 million US dollars. Look closely at these projects and they are almost all deeply technical ones, rather than application projects that will issue tokens. Vitalik also wrote a review a couple of days ago, praising Optimism's retroactive rewards and noting that the overall quality of the funded projects was higher than in Gitcoin round 11. This kind of technical infrastructure is particularly critical to the development of the Ethereum ecosystem.

Looking at Optimism's recent activity, its network upgrade was completed just a week ago, and the surge in unique addresses has only happened in the last few days; before that, Optimism could almost be described as having no applications. Arbitrum, by contrast, has not been shut down for maintenance since launching in September, and its main focus now is working with more projects to deploy more applications on Layer 2.

It has to be said that the degen projects that appeared on Arbitrum early on played a big role in attracting users and funds.

One more thing: Binance has just added support for deposits and withdrawals of ETH on Arbitrum, so in the future you should not have to wait seven days to withdraw ETH from Arbitrum. Before that, there was already a way to trade price for liquidity, and a protocol provided exactly this service: if you want to withdraw 1 ETH but do not want to wait seven days, you can receive 0.98 ETH immediately. In effect, you borrow 0.98 ETH on mainnet, and after the 7 days, when your ETH arrives, it is used to repay the loan.

Because a Layer 2 failure is itself a probabilistic event, as long as the discount users give up covers the failure rate multiplied by the total withdrawal amount, the model works for the liquidity provider.
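
A back-of-the-envelope version of that argument. The 2% fee matches the 1 ETH to 0.98 ETH example above; the failure probability is a made-up number for illustration.

```python
# Back-of-the-envelope economics of a fast-withdrawal liquidity provider.
withdrawal = 1.0           # ETH the user is exiting from the rollup
paid_out_now = 0.98        # ETH the provider advances on mainnet immediately
fee_income = withdrawal - paid_out_now        # 0.02 ETH earned per withdrawal

p_failure = 0.001          # assumed chance the Layer 2 exit never arrives
expected_loss = p_failure * withdrawal        # what the provider risks per withdrawal

print(f"fee per withdrawal: {fee_income:.3f} ETH")
print(f"expected loss:      {expected_loss:.3f} ETH")
print("profitable" if fee_income > expected_loss else "not profitable")
```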

Faction battles: the zk camp

The two main factions within zk rollups are the zkSync stack developed by Matter Labs and the StarkEx stack developed by StarkWare. zkSync currently has two generations, zkSync 1 and zkSync 2; on the Stark side, there are likewise two generations, StarkEx and StarkNet.

The CTO of Immutable X explains the differences very clearly:

For networks that you can play with right now, StarkEx is live with four applications, DyDx and DiversiFi for trading, and ImmutableX and Sorare for NFTs. zkSync is live for payments with primitive NFT functionality featured with ZKNFT.

In other words: StarkEx is live, with dYdX and DeversiFi for trading, and Immutable X and Sorare for NFTs. On the zkSync side, payments are mature, and zkNFT covers the NFT side.

If it is still not clear, take a look at the table below.

[Image: comparison table of zkSync and StarkEx products]

 

Information about the development teams behind the two zk rollup stacks:

[Image: development team information for Matter Labs and StarkWare]

Comparing the two, StarkWare is clearly doing better. The projects built on StarkEx have already been mentioned: the well-known hits dYdX, Immutable X, DeversiFi, and Sorare. According to StarkWare's official website, the total value locked across projects using its solutions has exceeded one billion US dollars, with more than 51 million transactions processed and a cumulative trading volume of over 215 billion US dollars.

For further comparison of the applications and their future roadmaps, Bankless published an article a couple of days ago, "The best comparison on zkRollups today", which covers this part in detail. There are already plenty of Chinese translations; look it up if you are interested.

Immutable X

A special mention for IMX, an impressively hyped Layer 2 protocol that issued its token before doing anything else.

IMX uses a solution similar to dYdX's, namely zk rollups, which by itself is nothing special. The point is that it is a chain for games, where props, equipment, and characters are all NFTs. Although rollups are much cheaper than mainnet, a large-scale game would still be extremely expensive. To solve this, IMX goes further and directly uses the validium approach mentioned above.

The difference between validium and a normal zk rollup is that validium gives up data availability entirely. A simple analogy helps.

Building on the example at the start of the article, think of the operator's job as grading students' exam papers.

The zk rollup way: every time the operator finishes grading a paper, he photographs the graded result together with the paper itself and uploads both to the mainnet. Everyone can see the scores directly, and if you dispute a score, you just look at the photo of the paper to see where it went wrong. The grading can always be checked.

Validium uploads only the resulting score, not the photo of the paper; that is, part of the data is discarded. From the mainnet's point of view there is no way to find the paper, and if you disagree there is no way to appeal, because without the paper you cannot tell where the mistake is. Again, this is giving up data availability.

But it makes the whole grading process (the transaction processing) much faster.

To reassure users, IMX has set up a committee that regularly stores the "exam paper photos" somewhere else, namely on IPFS.
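
In terms of data rather than exam papers, the difference is roughly what each mode publishes to Layer 1 versus hands to the committee. A simplified sketch; the field names are illustrative and not any project's real format.

```python
# Simplified view of what a batch publishes in each mode.
# Field names are illustrative, not any project's real format.

def compress(txs):                    # stand-in for real calldata compression
    return b"|".join(txs)

def publish_zk_rollup_batch(txs, new_state_root, validity_proof):
    return {
        "state_root": new_state_root,
        "proof": validity_proof,
        "calldata": compress(txs),    # transaction data goes on-chain: DA preserved
    }

def publish_validium_batch(txs, new_state_root, validity_proof, committee):
    committee.store(txs)              # data kept off-chain by a committee (e.g. on IPFS)
    return {
        "state_root": new_state_root,
        "proof": validity_proof,      # correctness is still proven,
                                      # but L1 never sees the transactions themselves
    }
```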

But it does not really matter where else the data is sent: only data on the mainnet blockchain is truly on the blockchain. However safely it is stored elsewhere, it is not what people call crypto-native.

Of course, IMX knows this too, so in the future users will be able to choose between a normal zk rollup and validium when they transact. In the official words: "We leave the choice to the user."

We’re allowing users to choose between two Validium and ZK-rollups via a system known as “Volition”, and we’re starting by offering maximum scalability via Validium to allow applications to scale NFTs to the billions, all while remaining on Ethereum. Let’s go!
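
A hypothetical sketch of what such a per-transaction choice could look like. The enum and function names below are mine, purely for illustration, and are not Immutable X's actual API.

```python
# Hypothetical sketch of a per-transaction "volition" choice.
# The enum and function are illustrative, not Immutable X's real API.
from enum import Enum

class DataAvailability(Enum):
    ROLLUP = "zk-rollup"       # transaction data published to L1: safer, costlier
    VALIDIUM = "validium"      # transaction data held off-chain: cheaper, trust the committee

def submit_nft_transfer(nft_id, to_address, mode=DataAvailability.VALIDIUM):
    """The user (or the application) picks the data-availability mode per transaction."""
    return {"nft": nft_id, "to": to_address, "da_mode": mode.value}

print(submit_nft_transfer(42, "0xabc...", mode=DataAvailability.ROLLUP))
```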

To be honest, the compromise is understandable. But what do users really know? Leaving the choice to the user is not necessarily a good choice.


Conclusion

This article has no real conclusion; it is more of an explainer, an introduction, a discussion, some notes. Layer 2 is a fairly large sector and the technical designs and architectures are complicated. I do not come from a technical background, so there are surely many places I have not understood properly. The goal is simply that, if you buy these tokens or get involved with these protocols, you roughly know what they do.

Also, Chinese-language content has been a bit thin lately, and non-promotional content in Chinese may be getting scarcer and scarcer, so I put this piece together.

