How to build a better Layer 2

Layer 2 inevitably involves some security sacrifices, and even where there are none, there are still issues of capital utilization and capital availability.

Background

Ethereum scaling is in full swing in the community, with several solutions ramping up and all expected to go live on mainnet this year. On the eve of this explosion of Ethereum Layer2 solutions, imToken has joined hands with ETHPlanet, EthFans, ECN, Shanghai Frontier Technology Workshop, HiBlock and many other outstanding Ethereum ecosystem communities and companies to plan a series of events on the theme of Ethereum scaling.

The first event was held on April 23: “Rollup – A New Paradigm for Ethereum L2 Scaling”, an offline meetup in Hangzhou.

The following is the text version of the roundtable discussion on “How to build a better Layer2” at this Meetup, compiled by Shanghai Frontier Technology Workshop – Hourglass Time.

AMA Transcript

Moderator: Yao Xiang

Guests

Celer Developer – Michael

imToken Product Manager – Shu

Head of Ambi Labs – Guo Yu

Loopring CTO – Steve

Yao Xiang: It’s a great honor to have you all here. Normally we would ask each of you to introduce yourselves, but today I’ve prepared a question for each of you instead.
Let me start with mine. Congratulations on this morning’s mainnet launch of Layer2.finance. Can you tell us a bit about the mainnet, including how the earlier testnet competition went, and whether you ran into any technical or product problems or findings?

Michael: Hello, let me introduce myself first. I’m Michael, a developer who joined Celer in the early days, and I’ve been working on scaling Ethereum and all EVM-compatible chains. Back in 2018 we launched a generalized state channel on the Ethereum mainnet, supporting not only fund transfers but also on-chain arbitration of off-chain contracts.

After that, we found that the use of state channels seemed rather limited, and the experience of users entering and leaving a channel was not that good, so we started looking at other directions.
In 2019, we first proposed a hybrid Rollup, combining the security of a sidechain and a Rollup, so that users can choose between Rollup-level security and sidechain-level security, a bit like the zkPorter and zkSync concept.

In 2020, when DeFi was very hot, we were thinking about how to solve the liquidity problem of Layer2. While everyone was worrying about whether Layer2 would become an island, we wondered whether there was a reverse approach: keep the DeFi protocols on Layer1 for the time being and only aggregate user operations on Layer2, so that all the liquidity and all the funds remain in the original Layer1 protocols, at least in the short or even medium term. And from another perspective, Layer2 will inevitably involve some security sacrifices, and even where there are none, there will be issues of capital utilization and capital availability. So the largest pools of money may still prefer to keep their funds in Layer1 protocols, and we decided to go the other way and be compatible with them. This is the origin of Layer2.finance.

In fact, at that time StarkWare had proposed similar concepts, and we were almost on the same path, but we were the first to put a product online.

We launched the beta site two weeks ago and held some events so people could try it. We deployed about 20 simulated DeFi protocols, and people kept moving assets in and out of them; in total more than 400,000 Layer2 transfers occurred.

We found that Ethereum’s Ropsten testnet does not work well. It is the only PoW testnet, but block production is very unstable, and it often hands you a reorg of more than 80 blocks. So we hit a reorg problem, but we did handle it correctly in the protocol, i.e., we paused the Layer2 Rollup chain for a while. Our CTO spent a day replaying all the calldata posted to Layer1 and regenerating the Layer2 state to keep it consistent, and we were the first project to do that.

This is one of the reasons why we feel it is important to preserve the availability of Layer2 data on Layer1: as long as Layer1 has a deterministic history, you can always replay it and restore the Layer2 state. We actually did that, and it was a very interesting experience.
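To make the replay idea concrete, here is a minimal, self-contained sketch (not Celer’s actual code): a toy rollup whose Layer2 state is a balance map and whose blocks are posted to Layer1 as ordered batches of transfers. Given the full Layer1 calldata history, anyone can fold it through the deterministic state-transition function and recover the exact Layer2 state.

```python
from typing import Dict, List, Tuple

Transfer = Tuple[str, str, int]   # (sender, recipient, amount)
State = Dict[str, int]            # account -> balance

def apply_batch(state: State, batch: List[Transfer]) -> State:
    """Deterministic state transition for one rollup block."""
    new_state = dict(state)
    for sender, recipient, amount in batch:
        assert new_state.get(sender, 0) >= amount, "invalid transfer"
        new_state[sender] -= amount
        new_state[recipient] = new_state.get(recipient, 0) + amount
    return new_state

def replay(genesis: State, layer1_batches: List[List[Transfer]]) -> State:
    """Fold every batch found in Layer1 calldata over the transition function."""
    state = genesis
    for batch in layer1_batches:  # Layer1 provides a canonical, deterministic ordering
        state = apply_batch(state, batch)
    return state

# Even if a Layer2 node loses its database, the state is recoverable,
# because the batches live permanently in Layer1 calldata.
genesis = {"alice": 100, "bob": 0}
batches = [[("alice", "bob", 30)], [("bob", "alice", 10)]]
print(replay(genesis, batches))   # {'alice': 80, 'bob': 20}
```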

The first version of Layer2.finance is called v0.1, because a lot of the design has not been finalized yet, so for now we are simply letting everyone try it out, with no fees.

This first version is compatible with three DeFi protocols: Compound, Aave and Curve, all very conservative stablecoin strategies, so returns will not be too high, roughly the typical stablecoin annualized rate of around 10%; you can try it if you are interested. The first 500 users will receive some rewards, and we will cover your fee for depositing into Layer2. The deposit is not too expensive, roughly the cost of a Uniswap swap. Once you are in Layer2, you can move in and out of the protocols with zero fees.

The product only supports desktop MetaMask at the moment, but we will add mobile adaptations and integrate with all major wallets as soon as possible, so everyone can use Layer2.finance on their phones. Apart from the waiting time imposed by Optimistic Rollup, fees are zero at this stage and will remain very low in the future. After the next major update we will combine it with the economic model of the Celer token, including some liquidity mining activities, and we welcome everyone to try it then.

Yao Xiang: Thank you, Michael, and everyone is welcome to try Celer’s products. I also heard a very interesting point: the instability of the Ropsten testnet actually helped Celer run a test under boundary conditions.

Next, I’d like to ask Shu: I’ve recently seen imToken integrate with zkSync, and I’d like to know if you’ve encountered any challenges in the integration process. If other wallets want to integrate zkSync, what advice would you give?

Shu: Thank you, Mr. Yao. Let me introduce myself: I’m currently responsible for the imToken wallet product. As Mr. Yao mentioned earlier, we started paying attention to Ethereum scaling back in 2017 or 2018, so why did we only recently start on integration and implementation work? Because over the past few years Ethereum scaling has been a process of continuous evolution, from the earliest state channels like Celer, through OmiseGO and Plasma, to today’s Rollups. That is a big step forward, and Rollups are now what the whole community considers the most feasible and implementable solution.

In addition, last year we could really feel the growth of DeFi. As Mr. Changwu just mentioned, the Ethereum network has become congested, which makes transfers and DeFi operations very expensive for ordinary users. This conflicts with imToken’s vision: we hope the wallet can serve more users, in line with Ethereum’s vision of financial inclusion.

At this stage these networks can only serve high-net-worth users holding tens of thousands of dollars or more, so we started to focus on Ethereum scaling solutions. Technically there are many options; in the mainstream we now see zero-knowledge and optimistic approaches. Our selection criteria were roughly these: first, the solution has community consensus, i.e., it is a Rollup; second, it has stood the test of time and been verified in practice by the community.

We can see that the zkSync solution, for example, has been running on mainnet for a long time. Secondly, it has been working with Gitcoin and has been proven in practice: recently about 85% of Gitcoin donation transactions have gone through zkSync. Third, the zkSync team has strong product capability in addition to engineering capability, and they provide a very comprehensive SDK for developers to integrate with zkSync. Half a month after we integrated zkSync, the data showed that more than 51% of transfers on the entire zkSync network came from imToken, so it solved our problem in the short term.

The second thing we’re looking forward to is what Alex at zkSync mentioned earlier: they will provide features like NFTs and swaps in May, which will continue to expand imToken’s use in Layer2 scenarios. Of course, we look forward to adapting more solutions in a more open way. Currently zkSync is integrated at the native level, while the other mainstream Layer2 solutions, such as Loopring, Celer and Hermez, are all accessible as DApps through imToken’s browser.

Another question Mr. Yao asked was about the technical difficulties. In fact, it is not really an engineering problem for wallets; it is more a challenge of peripheral dependencies. For example, the mainstream practice in Layer2 today is to run a single node, so if that Layer2 service goes down, the entire network can no longer transfer funds. That is fairly unacceptable for a wallet used by millions of users. Of course, we see the community moving node operation in the direction of PoS.

The second thing is that the most important point for a Layer2 network is the flow of funds, and the problem we currently face is that the cost of getting users’ funds in is very high. Two kinds of support are missing: support from exchanges and support from OTC providers. So far we have not seen new progress from exchanges, but on the OTC side several overseas service providers should offer Layer2 deposit services in May or by the end of this year. These are the peripheral-dependency challenges we currently encounter as a wallet.

Finally, Mr. Yao asked what advice I would give to other wallets that want to support Layer2. The most critical issue for a wallet is the private key. All current Layer2 solutions share an obvious pain point: the Layer2 signing key is generally derived via personal_sign. This is a very important point for any wallet provider integrating Layer2 to pay attention to.

On this point, our imToken Labs team in Taipei will provide the community with a storage solution for the Layer2 signer to reduce the possibility of phishing attacks. The second thing is that with so many Layer2 solutions, a wallet cannot be sure who will win in the future, so it is important for wallets to be as unopinionated as possible. We try to find a common paradigm across Layer2 solutions and support these early solutions as much as we can, so that the market can make the choice.

Yao Xiang: Thank you, Shu. It sounds like Layer2 integration requires not only the efforts of wallets but also the cooperation of many parts of the ecosystem. Next, a question for Mr. Guo, whose research in zero-knowledge proofs has been very deep and cutting-edge. Recently we have heard the term recursive zero-knowledge proofs over and over again, and we have seen the Ethereum Foundation and the Mina Foundation release a grant on whether the EVM can efficiently verify a zero-knowledge proof system called Pickles. Mr. Guo, if recursive zero-knowledge proofs can be verified efficiently in the EVM, what does that suggest for the design of ZK Rollup protocols?

Guo Yu: Our team is based in Suzhou, with about six or seven people, and we have been doing research on smart contract security and the foundations of zero-knowledge proofs since we were founded in early 2018. We rarely face end users directly, but we work with many project teams to help them solve problems they encounter in the underlying infrastructure. Our work is currently more on the research side, and we will release some of our latest results one after another; some of them can hopefully greatly improve ZK Rollup proving time and proof size, possibly by one to two orders of magnitude.

Next let me answer Mr. Yao’s question, starting with a brief history of recursive zero-knowledge proofs. The early research was done by Chiesa and Tromer, who around 2010 proposed a concept called proof-carrying data, in which the verification of a proof can itself be done inside another proof. In this way a proof can travel along with data and spread everywhere, which was a very cutting-edge idea.
Of course, it requires a very tricky technique: you have to find two very rare elliptic curves that embed into each other, and the curve cycles found back then were very slow and hard to make practical.

A small breakthrough came around 2018, when Sean Bowe from the Zcash team and Mary Maller, now a researcher at the Ethereum Foundation, collaborated on a paper called Sonic. In that line of work they found something remarkable: in the traditional Bulletproofs-style backend, part of the expensive verification work can be deferred and accumulated across proofs. Sean Bowe later used this discovery to implement a very novel recursive zero-knowledge proof scheme called Halo, which eliminates the need to find two very rare, poorly-performing curves. This recursion technique can be used in many places, as long as two ordinary curves can be found that do not require pairing support. It directly opened the door to recursive zero-knowledge proofs, and the results are very exciting; they can fill the whole blockchain space with new kinds of applications, and the Mina protocol is also pushing in this direction.

Due to some legacy design limitations, Ethereum can only support zero-knowledge proofs in a limited way through precompiled contracts. But we are glad to see that Ethereum is working hard on this; in particular, the EVM384 improvement might be supported this year. That would support more curves and enable recursive zero-knowledge proofs in a very flexible way. Doing it on the current Ethereum 1.0 base is more complicated. The Mina Foundation and the Ethereum Foundation are now jointly collecting proposals for efficiently verifying the Pickles algorithm in the EVM, but after careful analysis we found that this is not easy and is still quite difficult, because the core problem is that elliptic curve operations are very time-consuming, and without precompiled contract support on Ethereum it is hard to make much progress; there may be some gains, but the degree of optimization is very limited.

We are still looking forward to Ethereum supporting EVM384 soon, which will unleash the power of recursive zero-knowledge proofs. My last point is: what does this mean for ZK Rollup?

In ZK Rollup there is a problem, whether in zkPorter or zkSync: if you use a single state root to achieve composability on layer 2, you need to make the state space very large so you can support many DeFi applications and many users. This includes the migration solution the Loopring team has built, which lets a layer-1 contract directly proxy layer-2 assets on layer 1; that also requires a very large state. Such a state implies a huge Merkle tree, and the problem with a huge Merkle tree is that generating proofs over it is very performance- and computation-intensive. For example, Loopring uses a verification algorithm with O(1) complexity, while PLONK’s verification is O(log n), and when n is very large, gas consumption becomes very large. If we expect Ethereum to capture larger value, then the PLONK algorithm will run into a big bottleneck, and StarkWare’s algorithm is somewhat more affected. Recursive zero-knowledge proofs open another door altogether, and along this path ZK Rollup can hopefully be optimized more thoroughly, bringing the gas consumption of zero-knowledge proof verification on Ethereum down to a very low level.
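To make the scale concrete, here is a back-of-the-envelope illustration (my own numbers, not figures from any of the teams mentioned): the deeper the account Merkle tree, the more hash computations every single state update drags into the proving circuit.

```python
import math

def merkle_update_cost(num_accounts: int, updates_per_batch: int) -> None:
    """Rough illustration of why a huge state tree makes proving expensive:
    every leaf update must recompute one hash per tree level, inside the circuit."""
    depth = math.ceil(math.log2(num_accounts))
    hashes_per_update = 2 * depth  # read path + write path, a rough estimate
    print(f"{num_accounts:>13,} accounts -> depth {depth:>2}, "
          f"~{hashes_per_update * updates_per_batch:,} in-circuit hashes per batch")

for n in (10**6, 10**8, 10**9):   # 1 million, 100 million, 1 billion accounts
    merkle_update_cost(n, updates_per_batch=1000)
```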

This is a new direction that I think is very exciting, and I hope that interested partners can discuss this with us and explore the imaginative world of the future. Thank you all.

Yao Xiang: Thank you, Mr. Guo Yu. It sounds like if recursive zero-knowledge proofs become more widely usable, ZK Rollup can play a bigger role and its efficiency will improve further.

Steve just gave a talk. I’ve also noticed that Loopring made a lot of progress in the first quarter, such as adding a fast withdrawal feature and improving gas efficiency on the AMM. I’d like to ask how much efficiency can still be gained in the engineering of zero-knowledge proofs, and what the main difficulties are.

Steve: Actually, building the whole ZK Rollup system is quite challenging from an engineering point of view. As I said in my talk, we started experimenting in this direction in the second half of 2018; the first prototype was really built in March 2019, and the mainnet launched at the end of 2019. That was the first version. The second version launched around June and July of 2020, so each major version took more than half a year to iterate, and every step forward is very challenging from an engineering perspective.

Let me mention a few specific points. The version we had at the end of 2019 was still not quite commercially ready and had many limitations. For example, as Guo Yu mentioned, the Merkle tree cannot be too large; at that time we actually capped it at a maximum of 1 million Layer2 accounts so the tree would not be too deep, otherwise the time to generate proofs would be particularly long.

The first version had many such engineering trade-offs, but we later realized those trade-offs should be removed, and that can only be done through engineering. For example, we have a paper on optimizing zero-knowledge proof generation, which brings an improvement of at least one order of magnitude, more than a tenfold improvement in generation efficiency. After that we found we could make the Merkle tree much larger; now essentially hundreds of millions of accounts can be handled.

After doing that, we found that the cost of generating zero-knowledge proofs in ZK Rollup, which was originally thought to be very high, is actually now very low; only the cost of putting data on chain and verifying it on chain remains relatively high. How can we further reduce that part? We have made many attempts. First, we compress the calldata before uploading it, we even built a compression algorithm and decompress it inside the EVM so that gas consumption is lower. Second, all of our second-layer transactions are designed to save calldata; we squeeze it byte by byte, and I think it has basically been optimized to the extreme.
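As an illustration of the byte-squeezing idea (my own sketch, not Loopring’s actual encoding), compare a naive encoding of a Layer2 transfer with a packed one that replaces 20-byte addresses with short account indices and a 32-byte amount with a small mantissa/exponent pair:

```python
import struct

def naive_encode(sender_addr: bytes, recipient_addr: bytes, amount: int) -> bytes:
    """Layer1-style encoding: two 20-byte addresses plus a 32-byte amount."""
    return sender_addr + recipient_addr + amount.to_bytes(32, "big")  # 72 bytes

def packed_encode(sender_idx: int, recipient_idx: int,
                  mantissa: int, exponent: int) -> bytes:
    """Rollup-style encoding: 4-byte account indices and a float-like amount.
    Field widths are illustrative; real rollups pick their own layouts."""
    return struct.pack(">IIHB", sender_idx, recipient_idx, mantissa, exponent)  # 11 bytes

naive = naive_encode(b"\x11" * 20, b"\x22" * 20, 1_500_000)
packed = packed_encode(42, 1337, mantissa=15, exponent=5)  # represents 15 * 10^5
print(len(naive), "bytes naive vs", len(packed), "bytes packed")
```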

At the same time, as Guo Yu said, we chose a ZK-SNARK verification algorithm with O(1) complexity, so verification time does not grow with the number of transactions, which is also a great benefit. We are actually looking forward to recursive zero-knowledge proofs landing on Ethereum, which can effectively reduce verification time. But remember, it cannot improve the TPS of the whole ZK Rollup system, because the ultimate TPS limit of ZK Rollup is determined by the data posted on chain; that is where the biggest TPS bottleneck sits. Unless you abandon on-chain data availability, which is another path. But we think the data still needs to be kept on chain so that the security of your assets can be guaranteed.
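Steve’s point that calldata sets the ceiling can be shown with a small, purely illustrative calculation (circa-2021 Ethereum parameters, ignoring proof-verification gas and assuming a whole block is one rollup batch):

```python
# Back-of-the-envelope TPS bound from on-chain data availability alone.
BLOCK_GAS_LIMIT = 12_500_000      # gas per Layer1 block (illustrative)
BLOCK_TIME_SECONDS = 13           # average Layer1 block time
CALLDATA_GAS_PER_BYTE = 16        # gas per non-zero calldata byte
BYTES_PER_L2_TRANSFER = 12        # a tightly packed rollup transfer (illustrative)

gas_per_transfer = BYTES_PER_L2_TRANSFER * CALLDATA_GAS_PER_BYTE
transfers_per_block = BLOCK_GAS_LIMIT // gas_per_transfer
print(f"~{transfers_per_block / BLOCK_TIME_SECONDS:,.0f} TPS ceiling from calldata alone")
```

However cheap proving becomes, the batch data still has to fit into Layer1 blocks, which is why dropping on-chain data availability (the Validium route) is the only way past this bound.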

Yao Xiang: Thanks, Steve. We’ve heard that Loopring has put in a lot of engineering effort, including the last part where you mentioned that Loopring believes on-chain data availability is very important. At the same time, I also see a trend: we just heard today from Matter Labs that they released zkSync 2.0, which actually has two parts, a ZK Rollup part and a part that is more like Validium; StarkWare likewise has both ZK Rollup and Validium. I also see teams like Arbitrum trying to introduce zero-knowledge proofs into the Optimistic Rollup system.

My question for all four of you is whether such a hybrid design is likely to be a future trend in protocol design, because the hybrid itself may give users different options.

Steve: Here’s how I see this hybrid model. When the first version of the Loopring protocol supported ZK Rollup, the protocol itself supported two modes, corresponding to data availability on and off. In essence it is a trade-off between TPS and security, a process of negotiation and compromise. As for Arbitrum, which was just mentioned, we even discussed at the time whether Loopring could deploy a Layer2 on top of it, turning us into a Layer3.

In fact, you can imagine it like a concept I’ve often mentioned: Inception, dreams layered on dreams. Stacking layers like that could expand TPS almost without limit, which is also possible. That is one of my views on this kind of combination.

Guo Yu: First of all, I would like to make the point that zero-knowledge proofs are actually very basic gadgets that can be used everywhere, and a core difference between ZK Rollup and Optimistic Rollup is how the state space is handled. Proofs can be generated for actions like signature checking or for steps in a challenge game; such proofs can be used everywhere. But what we usually call ZK Rollup may refer specifically to zero-knowledge proofs over the huge Merkle tree of state.

So regarding the so-called hybrid schemes, what I currently see is Optimistic Rollup making heavy use of zero-knowledge proofs, and I believe many components will gradually be replaced by zero-knowledge proofs in the future. I do not yet see the reverse mix, i.e., whether it is necessary for ZK Rollup to introduce components of Optimistic Rollup, for example adopting the economics-based challenge window; I don’t see that as necessary yet. I think many solutions will gradually use a lot of zero-knowledge proofs, because Ethereum will soon support more advanced and complex application scenarios for such proofs.

I think the so-called hybrid scheme may simply be a better Rollup scheme, and I prefer the word Rollup. As for emphasizing zero-knowledge specifically, I think it may not be so important; in the end we will all move in a common direction. One core issue here is that the proportion of crypto-economic proofs versus zero-knowledge proofs will differ somewhat, probably depending on the scenario: DeFi and games may adopt different strategies.

Shu: The combination of ZK Rollup and zkPorter is not really a hybrid; it is more of a short-term thing. zkSync’s Alex already mentioned the zkPorter design idea in August last year, and it is actually more like the sharding idea we now see in Ethereum 2.0. So what is my view? I support it, but at the same time this is only a short-term phase; in the future each solution will definitely find its own paradigm, like the Rollup paradigm Mr. Guo Yu mentioned, and build different applications according to its needs.

Michael: My view differs from Shu’s. I think the hybrid approach will be a longer-term solution, providing different levels of security, just as Celer has suggested that sidechains and Rollups can actually be mixed. For example, for some applications on Matic, sidechain-level security is exactly what they need; their users do not particularly care whether a game has full data availability or keeps the whole history. So I personally think we will see more different hybrid solutions in the future.

From the Layer2 perspective, there is still a problem that is not well solved: in Optimistic Rollup, who should submit the block? There is no good answer yet; most projects currently submit in a centralized way.

I personally think a PoW-style mechanism for choosing the block submitter is feasible for this problem, and it may even be better than solving it with staking. This was a sudden thought of mine one day: people would mine by submitting data availability, and the matter would be settled in a PoW way, which I think is also a feasible idea. The future design space is still very large, and we can’t yet say which direction we will definitely go; we still need time to observe and practice. Once projects are on mainnet, there may really need to be one or two security incidents before there is a better conclusion.

Ed Felten of Arbitrum and Alex of Matter Labs debated from Clubhouse to Twitter two days ago, and I joined in. From a security standpoint, you cannot say that zkPorter or Validium has mainnet-level security; that is a big question mark. For example, suppose I was trading on BitMEX running on zkPorter and got liquidated, and I told Arthur Hayes to halt everything and unplug the data availability: was I liquidated or not? Did I lose that money or not?

From zkPorter’s point of view, once the chain is suspended, the person who won my money cannot withdraw it either. I personally think it amounts to the same thing, though Alex may see it differently. I think different applications may choose different ways of scaling: applications that need very high security, like BitMEX, are of course best served by ZK Rollup, but then TPS may be limited to the level of a few thousand, which I think is simply unavoidable.

So the future design space is still very large, and I think each application will gradually find the scaling method best suited to its own needs. That is my view.

Yao Xiang: Okay, thanks Michael. I also heard in Michael’s remarks that the current Rollup validator or sequencer is still a rather centralized role. We also know that on Layer1 we are discussing a problem called MEV (Miner Extractable Value). This problem is very serious, and if we could solve MEV it would also be very helpful for the scalability of the system. So on Layer2, is solving this problem an even bigger challenge, especially since the question of how to pack and order transactions is itself unresolved? I’d like to hear everyone’s views. In particular, as Layer2 designers, what do you think about the MEV problem? Do you have good solutions?

Michael: I think it’s a very good question, and MEV is an area where Layer2 is moving ahead of Layer1, trying to solve the problem on Layer2 first. I think this is also inconclusive; there are different solutions. The most extreme one I’ve seen is Proof of Burn: for example, if you want to submit a Rollup block, you have to burn part of your tokens, so you are basically less likely to misbehave.

The same goes for the ordering of packed transactions: for example, a rule might stipulate that transactions must be packed from highest fee to lowest, and if you do not follow that order, your assets are forfeited. Such a punitive design is one extreme.
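A tiny sketch of how such an ordering rule could be checked, for instance inside a fraud proof (purely illustrative; no project mentioned here is known to use this exact rule):

```python
def follows_fee_rule(batch_fees):
    """True iff transactions are packed in non-increasing fee order.
    A violation could then trigger forfeiture of the sequencer's stake."""
    return all(batch_fees[i] >= batch_fees[i + 1] for i in range(len(batch_fees) - 1))

print(follows_fee_rule([50, 30, 30, 10]))  # True  -> block accepted
print(follows_fee_rule([50, 60, 10]))      # False -> sequencer penalized
```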

The other extreme is relatively optimistic: producing blocks via a PoS mechanism is the simplest, or even running a PoA rotation is fine, as long as it can be verified and there is a clear set of rules. In fact, the biggest problem with Ethereum is that Layer1 has no rule for transaction ordering; it is not part of consensus, which I think is a design flaw, or at least something not well thought out.

If Layer2 can write transaction ordering into its rules, then whether the rule is strong or weak, punitive or rewarding, I think it can work.

Shu: Many people are debating whether MEV is a good or bad thing, whether it is just or not. From our point of view, or the users’ point of view, MEV, good or bad, is actually a set of incentives that improves the capital efficiency of the network through competition. But it also has a negative impact: users can be harmed when transacting on the blockchain.

Because transactions on the blockchain go through pooled liquidity, whether for exchange or lending, there is slippage, which easily gives rise to the now-common sandwich attack.
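A worked example of the sandwich problem on a constant-product pool (all numbers invented for illustration): the attacker’s front-run pushes the price up, the victim’s swap executes at a worse price, and the attacker’s back-run pockets the difference.

```python
def swap(x_reserve: float, y_reserve: float, dx: float):
    """Constant-product AMM (x * y = k), no fees: trade dx of X for dy of Y."""
    k = x_reserve * y_reserve
    new_x = x_reserve + dx
    new_y = k / new_x
    return new_x, new_y, y_reserve - new_y  # updated reserves and output dy

x, y = 1_000.0, 1_000.0                     # toy pool: 1000 X vs 1000 Y

_, _, fair_out = swap(x, y, 100)            # victim alone: ~90.9 Y for 100 X

x1, y1, atk_out = swap(x, y, 200)           # attacker front-runs, buying Y
x2, y2, victim_out = swap(x1, y1, 100)      # victim now gets ~64.1 Y instead
_, _, atk_back = swap(y2, x2, atk_out)      # attacker sells the Y back for X

print(f"victim alone:      {fair_out:.2f} Y")
print(f"victim sandwiched: {victim_out:.2f} Y")
print(f"attacker profit:   {atk_back - 200:.2f} X")
```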

In turn, we are more concerned with how to address the harm users suffer. There are two points. One is the transaction-ordering problem Michael mentioned earlier: when transaction ordering is not subjective but equal and fair, the problem can potentially be solved. The second is from the perspective of DeFi protocol design: how do we design DeFi schemes so that they reduce slippage and avoid harming users?

Looking at this problem from the wallet side, we see several points. For example, some DEXs use a PMM scheme, where the user’s transaction does not go directly on chain; instead the user submits a signature to a relay, which then uploads the order. This is the most ideal approach we can see at the moment to ensure users are not harmed by slippage.
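A rough sketch of that flow (illustrative only; real relayer protocols and signature schemes, e.g. EIP-712 typed-data signing, differ in detail): the user signs an order off-chain with a locked-in price, hands it to a relayer, and only the relayer pays gas to settle it on chain, so the user cannot be hit by slippage or pay for a failed transaction.

```python
import hashlib, json
from dataclasses import dataclass, asdict

@dataclass
class Order:
    maker: str
    sell_token: str
    buy_token: str
    sell_amount: int
    min_buy_amount: int        # the quoted price is locked in, no slippage

def sign(order: Order, private_key: str) -> str:
    """Stand-in for a real wallet signature; a hash keyed on a demo secret."""
    payload = json.dumps(asdict(order), sort_keys=True) + private_key
    return hashlib.sha256(payload.encode()).hexdigest()

# 1. The user signs the order off-chain: no gas spent, nothing broadcast yet.
order = Order("alice", "USDC", "ETH", 3_000_000000, 1_000000000000000000)
signature = sign(order, private_key="alice-demo-key")

# 2. The relayer checks the signature and submits the settlement transaction
#    itself; if it cannot fill the order at min_buy_amount, it simply does not
#    submit, so the user never pays for a failed or front-run swap.
relayer_inbox = [(order, signature)]
print(len(relayer_inbox), "order awaiting settlement:", signature[:16], "...")
```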

Compared to Uniswap, where at least three or four out of 10 transactions may fail, PMM is a mechanism that basically guarantees a success rate of over 90%.

Beyond DeFi design, we can also see the whole community working on this: one thread is fairness of ordering, and another, more interesting idea is KeeperDAO.

The idea is that we can redistribute the value of MEV rather than leaving it as a benefit split between mining pools and arbitrageurs. When we talked to one of the MEV profiteers, we even asked whether some of the MEV profit could be offered back to all users.

For example, there might be a button in your wallet you can click to participate in MEV arbitrage. One of the mainstream MEV solutions is Flashbots, which lets arbitrageurs bid at essentially zero cost; it is a viable approach that lets the community compete, rather than just splitting the benefits between arbitrageurs and mining pools. So my opinion is to prioritize solving this at the DeFi protocol level, which is much faster than solving it through transaction ordering. If we want to address the fairness of MEV, we can consider it from the perspective of distribution.

Yao Xiang: So we need every DeFi protocol designer to learn what MEV is first and how to reduce its impact at the protocol design level.

Guo Yu: I strongly agree with Mr. Shu that it may be difficult to solve this problem at the Layer1 consensus level, and that we may really need an additional DAO-like or governance-based way to distribute the profits. The real problem is front-running, i.e., pre-emptive trading. MEV falls into several categories, part of which comes from the many trading bots; we now call Ethereum a dark forest, full of bots that are very busy every day. But some of them are very well-intentioned bots, and it is because of these bots that Uniswap’s prices stay well synchronized with centralized exchanges.

Whenever a user makes a swap, bots follow the user to rebalance the pool and bring it back in line with centralized exchanges, so these bots are actually contributing their energy to the ecosystem, while also earning a large benefit from it, which I think they deserve. The same goes for the liquidators of lending protocols: lending protocols need liquidations, and when asset prices plummeted this year, a large number of bots came out, and it is precisely because of them that lending protocols did not end up with systemic bad debt.

For the stability of the financial ecosystem, these bots are needed, and MEV is needed to give them the incentive to maintain the whole system. But some of it is malicious and very problematic, namely front-running, which can cause users to buy the assets they want at maximum slippage. There are many things that can be front-run, including the sandwich attacks that many users hate, and imToken is presumably a big victim of this.

I think sandwiches are a big problem, and there seems to be a bad trend in the community right now: when discussing MEV, people are overly concerned with how to allocate MEV, and not thinking about how to prohibit front-running. At the moment front-running has no upside, only downside, and because of it many users have a bad experience: a user sends a transaction to accomplish something, and sorry, the opportunity you found has been taken by someone else’s bot; you fail, you pay extra fees, and you don’t succeed. I think front-running is purely harmful, and a lot of MEV revenue comes from front-running. The community should pay attention to this, distinguish what is good from what is bad, and expose the bad behavior; for the MEV that is good, you can use something like KeeperDAO or Flashbots to distribute it.

Steve: The points made by the previous guests are very good and clear. MEV is indeed quite serious on layer 1, and I guess everyone who interacts with Uniswap has experienced it to some degree. On layer 2 itself, at this stage I think the problem does not really exist for the layer-2 projects now online, because everyone’s relay nodes are basically centralized at this stage. A centralized implementation will definitely order transactions strictly by time, otherwise users will vote with their feet. If you frequently front-run users in your Rollup, they will figure it out and naturally vote with their feet and leave your Rollup: “we won’t play with you.” This essentially addresses the problem from the perspective of economic games.

If Layer2 later turns its relayer into something decentralized, it will certainly face this problem too. But I don’t think it can be solved through the consensus mechanism; it can probably only be fully solved through the game theory of economic models at the protocol layer. That’s my opinion.

Yao Xiang: Okay, Steve mentioned that the reputation of each Rollup is still a very important aspect to attract users.
