A Hundred Flowers Bloom: A Quick Look at Layer 2 Progress in the Post-Merge Era

On September 22, the third day of the 2022 Shanghai International Blockchain Week and the 8th Blockchain Global Summit hosted by Wanxiang Blockchain Labs, the day's themed forum opened online.

TL;DR

“zkEVM: Compatibility and Equivalence” – Alex Gluchowski, CEO of zkSync

In a ZK environment, constraints must be enforced for every supported instruction at every step of execution, so the per-step cost of a ZK proof is the sum of the constraint costs of all instructions.

“Arbitrum Technology: An Optimistic Rollup in the Age of Rollups” – Steven Goldfeder, Co-founder and CEO of Offchain Labs

From both a technical and a non-technical perspective, there are many ways to contribute to the thriving Arbitrum and Ethereum ecosystems. As part of the Arbitrum technology suite, Offchain Labs is also building another solution called AnyTrust. AnyTrust sends data to a so-called data availability committee and reports the results back to Ethereum. Two chains already implement these technologies: 1. Arbitrum One, the longer-running chain, has been in operation for over a year and was officially launched in August 2021; it is an optimistic rollup that puts all data on Ethereum. 2. Arbitrum Nova, the Arbitrum chain launched in August this year, does not publish all data on Ethereum but uses a data availability committee.

“Explaining StarkNet” – Eli Ben-Sasson, co-founder and president of StarkWare

What is StarkNet? It is very much like Ethereum, but it is an L2. You can write smart contracts on StarkNet, submit transactions to those contracts, and it supports general-purpose computation and composability. Thanks to the magic of STARK proofs, you can think of StarkNet as something very similar to Ethereum, but with much lower gas fees.

“The Design and Architecture of Scroll” – Zhang Ye, co-founder of Scroll

Scroll is building an EVM-equivalent ZK rollup whose design decisions follow security, efficiency, EVM equivalence, and decentralization. Its architecture consists of three parts: Scroll nodes, on-chain smart contracts, and a decentralized prover network. Scroll has completed its pre-alpha testnet; in the second phase, developers will be invited to deploy smart contracts on the network and build additional applications; the third phase will open the outsourcing of Layer 2 proofs and invite the community to run prover nodes; the fourth phase reaches the zkEVM mainnet stage, deployed and launched after strict code auditing and performance improvements; the fifth phase will deploy a decentralized sequencer to make the zkEVM more efficient.

Roundtable Discussion: “Ethereum 2.0: After The Merge” – Dong Mo, co-founder of Celer Network, Li Chen, CEO of HashQuark, Steve Guo, CEO of Loopring, Adam, co-founder of ssv.network

The roundtable discusses the Ethereum Merge and the way forward.

zkSync CEO Alex Gluchowski: “zkEVM: Compatibility and Equivalence”

zkSync is a deeply mission-driven protocol: everything we do, every technical design decision, serves the mission of accelerating mass adoption of crypto. As you will see, that mission also shaped the choices we made around the zkEVM.

In fact, the zkEVM itself follows from that mission, because the EVM has become the JavaScript of the blockchain world, the universal language of this new internet of value, with so many tools, services, libraries, and pieces of infrastructure that it is hard to avoid. In other words, the EVM will be with us for a long time.

ZK is a very interesting technology: it is the only way to escape the blockchain trilemma and achieve unbounded scalability while fully guaranteeing the security of every transaction, so it has to be combined with the EVM in the form of a zkEVM.

You’ve probably heard a very clichéd question about whether we can just be EVM compatible, or can achieve full EVM equivalence, and whether the latter is necessary.

Some protocols claim to be EVM equivalent, but we think equivalence is a matter of degree. Vitalik previously published an excellent article that used a chart to illustrate the different degrees of EVM compatibility among zkEVMs and argued that the higher the EVM compatibility, the greater the performance sacrifice.

So here I will explain in more depth what each degree of EVM compatibility means, where the performance sacrifice comes from in a ZK implementation, which option is preferable, which option we chose, and how that choice affects users.

We will start at the bottom: type 4 is EVM compatibility at the source-code level, meaning any source code written for the EVM can be brought into the zkEVM environment and run.

Bytecode compatibility begins with type 3 and continues through types 2.5 and 2, and the closer you get to the top, the more functionality is compatible, such as the same APIs or exactly the same gas metering as layer 1.

The topmost type proves the complete root hash, covering consensus, storage, updates, and so on. Let's explore further.

Beginning with type 4, which is the most performant: this degree of EVM compatibility means compiling existing source code into a specialized instruction set, similar to RISC.

Actually it is not literally RISC, but it is very similar, because every instruction is optimized to run in the context of the zkEVM.

But the problem in a ZK environment is that constraints must be enforced for every supported instruction at every step of execution, so the per-step cost of a ZK proof is the sum of the constraint costs of all instructions.

Therefore, to maximize performance, we need to keep the number of instructions as low as possible while retaining enough flexibility to express arbitrary code and compile it to that instruction set.
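To make this cost argument concrete, here is a minimal sketch with made-up constraint costs (illustrative numbers, not zkSync's real figures) showing why a universal circuit pays the summed cost of every supported instruction at every step, and why a small instruction set wins:

```python
# Minimal sketch: per-step proving cost of a universal ZK circuit.
# The constraint-cost numbers are invented for illustration only.

risc_like = {"add": 1, "mul": 2, "load": 3, "store": 3, "jump": 1}
evm_like = {**risc_like, "keccak": 50, "sload": 40, "call": 60, "exp": 30}

def per_step_cost(instruction_set: dict) -> int:
    # In a universal circuit, the constraints of *every* supported
    # instruction are enforced at every execution step, so the per-step
    # cost is their sum, not the cost of the single opcode being executed.
    return sum(instruction_set.values())

print("RISC-like per-step cost:", per_step_cost(risc_like))  # 10
print("EVM-like per-step cost: ", per_step_cost(evm_like))   # 190
```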

The instructions need to be few, atomic, and simple, which means the performance advantage over any other type is orders of magnitude. There is also room for innovation here: you can do some really interesting things that improve the user and developer experience.

For example, accounts can be abstracted and used through MetaMask or any other wallet, or through Argent or another smart-contract wallet, with social key recovery, different signature schemes, native multi-signature, and so on. That is the advantage of this approach.

In terms of developer experience, you can link against libraries written in any modern language and compile them to the zkEVM using the very mature LLVM compiler framework.

LLVM front-ends exist for languages such as Python. The main disadvantage of this approach is that tools that are EVM compatible at the opcode level do not work out of the box, so dedicated support is needed for those tools, mainly debuggers and tracers, which must be adapted to zkSync's zkEVM or to any other protocol taking this approach. There are not many such tools, however, so type 4 remains usable.

This is why we chose type 4 from the very beginning for our mainnet: in our view, user experience and performance are critical. If the user experience does not live up to expectations, or does not match today's internet experience, we will not really attract millions of new users.

In terms of performance, as mentioned before, the instruction set needs to be kept to a minimum and extremely fast in order to unlock more scenarios.

Examples include social networks, games, or any scenario with frequent transactions such as small transfers. In these scenarios an order-of-magnitude difference in performance is critical, and it matters to users whether they pay 10 cents or 0.01 cents.

Now let's move on to the other zkEVM types mentioned by Vitalik. These can be called CISC-style architectures: they use a very complex instruction set, the EVM itself, which means we have to support instructions whose costs vary enormously.

We have two ways to solve this problem. One is to implement a native ZK circuit that supports all instructions: every cycle of the virtual machine proves a step of the smart contract's execution trace, and constraints must be added for the implementation of every opcode.

But the EVM was not designed for a ZK environment, so many of its instructions are inelegant and inconvenient, their operation is very complicated, and the result is several orders of magnitude slower than a RISC-style instruction set designed specifically for ZK. So this plan is not feasible.

Since that approach is not feasible, the other option is to emulate the EVM's opcodes, implementing Ethereum's complex instruction set on top of smaller micro-opcodes. The problem is that every individual opcode, whether complex or as simple as basic arithmetic, carries a lot of overhead: you need to read the bytecode from memory byte by byte, parse it, determine which instruction it is, jump to the correct handler, and then process the operands. The cost is orders of magnitude higher, but it is feasible and may lead to interesting applications in certain scenarios.
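As a rough illustration of where the interpreter overhead comes from, here is a tiny, hypothetical bytecode interpreter loop (not any project's real implementation); even a single ADD pays for the fetch, decode, and dispatch work around it:

```python
# Hypothetical micro-op interpreter loop, for illustration only.
STOP, ADD, PUSH1 = 0x00, 0x01, 0x60

def run(bytecode: bytes) -> list:
    stack, pc = [], 0
    while pc < len(bytecode):
        op = bytecode[pc]              # fetch: read the next opcode byte
        pc += 1
        if op == PUSH1:                # decode + dispatch: branch per opcode
            stack.append(bytecode[pc])
            pc += 1
        elif op == ADD:                # the "useful" work is a single addition,
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)        # but fetch/decode/dispatch ran anyway
        elif op == STOP:
            break
        else:
            raise ValueError(f"unknown opcode {op:#x}")
    return stack

# 2 + 3 expressed as PUSH1 2, PUSH1 3, ADD, STOP
print(run(bytes([PUSH1, 2, PUSH1, 3, ADD, STOP])))  # [5]
```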

If you look at the performance difference, the gap between type 3, type 2.5, and type 2 is not significant. If you interpret or run the complex instructions in your own way, the overall gas cost does not increase much, because basic arithmetic in the ZK environment is very cheap, and supporting 100% of the API does not really affect program performance unless you use a lot of heavy operations that could otherwise be optimized for the ZK world.

But the question is: do we need to stop there? The answer is no. We can start with type 4 and keep improving EVM compatibility by adding functionality within the framework of the chosen base paradigm.

For example, simply by implementing a smart contract that interprets EVM opcodes, we can make zkSync a system that supports both native, high-performance compiled smart contracts and existing EVM bytecode.

Although it may be a lot slower, such a contract still runs. A contract can therefore either be written in Solidity or a language like Rust and compiled to run natively in the zkEVM environment, or shipped as EVM opcodes and run through the interpreter. This is a fairly simple project that is fully achievable within a few months.

If we also want to implement type 2.5, all we need to do is support the full Ethereum API on the smart-contract side, which means supporting all hashes and all precompiles. zkSync has supported Ethereum's native hashes such as Keccak and SHA-256 from day one, and they will be supported when our mainnet launches a month from now, so all smart contracts using Keccak or SHA-256 will produce exactly the same results as on layer 1.

Support for all precompiles is planned to be finished by the end of the year. The most complex part is elliptic-curve pairings, and work on this has already started; the rest is simple and already supported. Adding the interpreter and 100% API support will make zkSync's implementation type 2.5 EVM compatible.

The interesting thing is that if you start at the bottom, you can keep improving over time: the higher-performing, more advanced system can always emulate a lower-performing one.

Just as we can emulate old Mac OS on a modern MacBook, or even an old mainframe or Unix machine, the compatibility does not run the other way: you cannot emulate a modern system on those older machines. It is a one-way street, and you have to start with the highest-performing option.

So zkSync sits at EVM-compatibility type 2.5. To improve compatibility further, we would need to support exactly the same gas metering as Ethereum, plus storage and consensus compatibility.

That makes sense for Ethereum itself, but it does not make sense for a layer 2. To understand why, we need to understand the cost differences between L1 and L2.

Resource pricing is different on layer 2, which is why scaling is possible at all. Comparing a rollup with Ethereum, bandwidth costs are about the same, but computation on L2, especially in a zk-rollup, is very cheap, since it is not replicated across Ethereum's tens of thousands of nodes. Storage on Ethereum is also very expensive because the full node state must be synchronized; that is not needed on layer 2, because the zero-knowledge proof verifies the storage updates and users only need to download the state root and the state delta, so it is much cheaper.
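To make the pricing difference concrete, here is a toy cost model with invented numbers (not the real gas schedule of any network): posting data costs roughly the same either way, but computation that every L1 node would have to replicate becomes almost free once it is executed off-chain and covered by a proof:

```python
# Toy cost model; all numbers are illustrative, not real gas prices.
CALLDATA_GAS_PER_BYTE = 16   # posting data costs about the same for L1 and a rollup
L1_COMPUTE_GAS = 1.0         # execution replicated by every L1 full node
L2_COMPUTE_GAS = 0.001       # executed once off-chain, attested by a validity proof

def l1_cost(calldata_bytes: int, compute_units: int) -> float:
    return calldata_bytes * CALLDATA_GAS_PER_BYTE + compute_units * L1_COMPUTE_GAS

def l2_cost(calldata_bytes: int, compute_units: int) -> float:
    # Same bandwidth bill, but computation and storage updates are nearly free.
    return calldata_bytes * CALLDATA_GAS_PER_BYTE + compute_units * L2_COMPUTE_GAS

tx = {"calldata_bytes": 120, "compute_units": 50_000}
print("L1 cost:", l1_cost(**tx))  # dominated by computation
print("L2 cost:", l2_cost(**tx))  # dominated by data
```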

If you want to support exactly the same gas metering as L1, you are asking for trouble: you either give up L2 performance optimizations or become vulnerable to DDoS attacks.

That is why we do not think it is necessary to go a step further in compatibility when doing so would misprice resources; zkSync will stick to type 2.5.

Higher compatibility also does not make sense in my opinion. If the cost of Ethereum full-node verification comes down, the community may decide to shrink block sizes, drastically raise layer-1 storage fees, and use layer 1 only for verifying ZK validity proofs and optimistic-rollup fraud proofs, with all applications moving to L2, which is also the direction we expect the future to take. Type 2 and type 1 EVM compatibility therefore buy full EVM equivalence for only a slight marginal advantage.

Steven Goldfeder, Co-founder and CEO of Offchain Labs: “Arbitrum Technology: An Optimistic Rollup in the Age of Rollups”

What is a rollup, how is it defined technically, and how exactly are rollups leading Ethereum scaling? Today I want to explore in depth how Arbitrum is built from a technical point of view and how the community can work together to build the Arbitrum ecosystem. I hope that after this talk you will better understand how Arbitrum works, how large the Arbitrum ecosystem is, and how to get involved.

From a technical and non-technical perspective, there are many ways to contribute to the thriving ecosystem of Arbitrum and Ethereum.

The history of rollups goes back a long way, but I want to go back two years, because I think that moment cemented rollups as the critical path for Ethereum scaling.

In October 2020, Vitalik published a blog post laying out a rollup-centric roadmap for Ethereum. In other words, rollups are at the core of the Ethereum project and the development of Ethereum technology. Rollups are a scaling solution for Ethereum, bringing Ethereum's security and decentralization to the masses.

But what exactly is a rollup, and how does it differ from other scaling solutions?

The core idea of a rollup is to publish all user transaction data to Ethereum and store it there, while the execution of those transactions does not happen on Ethereum. When you submit a transaction to Ethereum, you can think about it in two ways:

On the one hand, transactions are just blocks of data, made up of 0s and 1s.

On the other hand, transactions represent instructions: those 0s and 1s tell the chain to store or compute certain values.

What a rollup does is put all the data on Ethereum, letting Ethereum store the blocks of data, while the instructions are executed off-chain and the results of execution are reported back to Ethereum. The key value proposition of a rollup is that security comes from Ethereum: on the one hand Ethereum guarantees the availability and correctness of the data stored on chain, and on the other hand it guarantees the correctness of the execution that happens off-chain.
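As a minimal sketch of this separation (hypothetical data structures, not Arbitrum's actual formats), the rollup posts the raw transaction data to L1 for storage, executes it off-chain, and reports only the resulting state root back:

```python
import hashlib
import json

def state_root(state: dict) -> str:
    # Stand-in for a real Merkle state root.
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

l1_storage = []  # what L1 keeps: raw batch data plus the claimed result

def execute_off_chain(state: dict, txs: list) -> dict:
    # Execution happens here, off-chain; L1 never re-runs it.
    for tx in txs:
        state[tx["from"]] = state.get(tx["from"], 0) - tx["value"]
        state[tx["to"]] = state.get(tx["to"], 0) + tx["value"]
    return state

def post_batch(state: dict, txs: list) -> dict:
    new_state = execute_off_chain(dict(state), txs)
    # L1 stores the data (the 0s and 1s) and the claimed new state root.
    l1_storage.append({"batch_data": txs, "claimed_state_root": state_root(new_state)})
    return new_state

state = {"alice": 100, "bob": 0}
state = post_batch(state, [{"from": "alice", "to": "bob", "value": 30}])
print(l1_storage[-1]["claimed_state_root"])
```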

Now the question becomes: how do we get Ethereum to verify what is being done off-chain? There needs to be a mechanism that proves to Ethereum not only that the content stored on the Ethereum chain is correct, but also that the off-chain execution is correct.

In an optimistic rollup, interactive fraud proofs are used to convince Ethereum that the results of off-chain execution we report back are correct. This lets a large amount of execution move off-chain, which is what gives Ethereum its scalability: our use of Ethereum is very streamlined, since we do not use Ethereum to execute transactions, only to store data. Because execution does not happen on the Ethereum chain, more Ethereum block space is freed up.
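Here is a hedged sketch of the interactive idea behind fraud proofs, written as a generic bisection game rather than Offchain Labs' actual protocol: the asserter and challenger narrow their disagreement down to a single execution step, and only that one step is re-executed on L1:

```python
# Generic bisection-game sketch; not the real Arbitrum fraud-proof protocol.
def bisect_dispute(asserter_trace, challenger_trace, execute_step):
    """Find the first step where the two parties disagree, then let L1
    re-execute only that single step to decide the winner."""
    lo, hi = 0, len(asserter_trace) - 1   # agree at lo (start), disagree at hi (end)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if asserter_trace[mid] == challenger_trace[mid]:
            lo = mid                       # still agree up to mid
        else:
            hi = mid                       # disagreement is before mid
    # L1 re-executes the one disputed step from the last agreed state.
    correct_next = execute_step(asserter_trace[lo])
    return "asserter wins" if correct_next == asserter_trace[hi] else "challenger wins"

# Toy VM: each step increments a counter; the asserter cheats from step 3 on.
step = lambda s: s + 1
honest = [0, 1, 2, 3, 4, 5]
cheating = [0, 1, 2, 99, 100, 101]
print(bisect_dispute(cheating, honest, step))  # challenger wins
```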

As part of the Arbitrum technology suite, Offchain Labs is also building another solution called AnyTrust. AnyTrust and rollup technology are actually very similar, but there are key differences between the two.

AnyTrust does not put all the data on Ethereum; instead it sends the data to a so-called data availability committee and reports the results back to Ethereum, while still keeping a fraud-proof mechanism. In a rollup the biggest cost is data storage, so having a committee responsible for data availability reduces the cost significantly.

In the process, you no longer have the full security of Ethereum, because we rely on the data availability committee to store the data. However, AnyTrust is still highly secure, and much more secure than sidechains.

As the name suggests, AnyTrust means you only need any one or two members of the committee to be honest, whereas sidechains generally require you to trust a majority of the participants, or even two-thirds of the validators. Of Arbitrum Rollup and Arbitrum AnyTrust, one relies on Ethereum to store data and the other uses a data availability committee, but both are much more secure than other scaling solutions.

Now that the technology has been introduced, what are the concrete chains that implement it? There are already two.

(1) Arbitrum One, the longer-running chain, has been in operation for more than a year and was officially launched in August 2021. It is an optimistic rollup that puts all data on Ethereum.

(2) Arbitrum Nova, the Arbitrum AnyTrust chain launched in August this year, does not publish all data on Ethereum but uses a data availability committee. I will reveal later who the committee members are.

What do Arbitrum One and Arbitrum Nova have in common? First, they are both general-purpose blockchains: contracts can be deployed on them, and users can interact with them permissionlessly. Compared with Ethereum, scalability and fees are much better. On Arbitrum One, fees are on average 10-50 times cheaper than Ethereum, and on Arbitrum Nova, because the data is not stored on Ethereum, fees are 30-150 times cheaper.

In addition, both chains achieve extremely fast confirmation. If you use Arbitrum One or Arbitrum Nova, you may have noticed that as soon as you press the confirm-transaction button, you get the result of the execution immediately. This is a feature of all Arbitrum products: fast confirmation leads to a better user experience, and it is an experience many users are already very familiar with.

Are the two chains competing with each other? No. Although both are general-purpose blockchains, they have different focuses: Arbitrum One hosts a strong DeFi ecosystem and high-value NFT projects, while Arbitrum Nova is aimed mainly at gaming and social projects.

There are three principles we believe are crucial when it comes to scaling Ethereum, and these principles also apply to Arbitrum One, Arbitrum Nova, and any other chain currently being built.

(1) Low transaction costs. It sounds simple, but it is what users really need: what they want from a scaling solution is to pay lower fees while still getting their transactions done. Technically, Arbitrum uses layer-1 Ethereum sparingly while still inheriting the security Ethereum guarantees.

(2) High security. You want cheap transactions, but you do not want to sacrifice security to get them, so security must also be very high. Neither Arbitrum One nor Arbitrum Nova trades security for low transaction costs. Arbitrum One is a complete rollup that puts its data on Ethereum, while Arbitrum Nova relies on a data availability committee for data storage, but both chains focus on providing high-security, low-cost transactions.

(3) Compatibility. If you already know how to develop on Ethereum and have written code for it, that code and knowledge should apply directly to Arbitrum. You may also hear terms like EVM compatibility and EVM equivalence.

Both Arbitrum One and Arbitrum Nova are as compatible with Ethereum as possible, which means all the code you have previously written for Ethereum and the knowledge you have gained there can be applied directly to Arbitrum. This is also why we have seen so many applications deployed successfully on Arbitrum One and Arbitrum Nova: having already been deployed on Ethereum, they are easy to migrate.

Over time, the development experience on Arbitrum will become easier and better. The core principle is that everything that works in the Ethereum environment should be directly applicable on Arbitrum.

These are the three core principles we uphold: transaction costs must come down, but without sacrificing security, while maintaining high compatibility with Ethereum, so that both developer tools and user tools work out of the box.

Next, I will share the development timeline of Arbitrum. In October 2020, at the same time that Vitalik published the rollup-centric roadmap article, we launched the Arbitrum One testnet. It was a very important moment for us and for the Ethereum community, because it was the first fully general-purpose rollup, and a great opportunity for us and for everyone who tested Arbitrum. Over the following months of testnet operation we learned a lot and made many improvements.

In May of the following year, Arbitrum One launched on mainnet, but it was opened to developers first. When launching the mainnet, we wanted a fair launch: we did not want to give priority access to particular developers, applications, or infrastructure; we wanted everyone to be able to participate fairly. So from May to August 2021, for three months, only developers were allowed onto the network, and hundreds of developer teams came in to build and test applications.

We hoped that when we officially opened the mainnet to the public, everyone would find a flourishing ecosystem; that is the so-called fair-launch strategy. Rollups are sometimes compared to an amusement park: Arbitrum One is a brand-new amusement park, and before the park opens its doors to the public, the ride operators are let in first to build the attractions.

When we officially launched Arbitrum One to the public in August 2021, there were already many applications and pieces of infrastructure ready for immediate use, and the ecosystem was thriving from day one. That was the point of the fair-launch strategy: all developers started from the same position, and by the time the gates of the park opened to users and the public, everyone was ready and a rich application ecosystem was already visible.

In the period that followed, a large number of applications were built on Arbitrum, but for us things were not over; we were not satisfied to stop there and wanted to make Arbitrum better. That is why in April we launched the Arbitrum Nitro development network, also known as the Nitro testnet, for developers, and in August we migrated Arbitrum to Nitro. I will say more about the advantages of Nitro later; for us, Nitro is a huge upgrade, meaning lower fees, higher throughput, and a better experience for developers and users. August was a very busy month for the whole team: we not only migrated Arbitrum to Nitro but also launched the Arbitrum Nova blockchain, the AnyTrust chain I introduced earlier, which officially went live that month.

First, Arbitrum One.

Next, I will introduce the Arbitrum One ecosystem: what projects are being built on it, what benefits it brings to users and developers, and what you can do on Arbitrum One.

So far, more than 50,000 contracts have been deployed on Arbitrum One, making it Ethereum's leading Layer 2 solution with roughly a 50% share of the rollup market. To date, more than 1 million unique addresses have used Arbitrum.

What projects make up the ecosystem? Mainly blue-chip DeFi projects from Ethereum, including Uniswap, SushiSwap, Curve, and Aave. In addition, there are new projects native to Arbitrum, meaning they were not migrated from Ethereum, such as Dopex, Vesta, GMX, Tracer, and Treasure, which are DeFi and NFT projects launched directly on Arbitrum. Together with the familiar DeFi blue chips from the Ethereum ecosystem, the Arbitrum ecosystem is now flourishing.

I have just introduced compatibility. It applies not only to applications but also to infrastructure, which means the infrastructure on Arbitrum is familiar to developers and can be used out of the box. Compatibility holds from both the infrastructure and the application point of view, so teams building on Arbitrum can use the same infrastructure they know from Ethereum. For example, Chainlink provides price feeds and other services for projects in the Arbitrum ecosystem.

In addition, there are tools like Gnosis Safe, as well as user-facing tools such as Etherscan. The idea is that whether you are a developer or an end user, Arbitrum should feel familiar the moment you arrive: as long as you know the Ethereum ecosystem, you will recognize the tools and applications on Arbitrum. And indeed, that is what we achieved.

I have just described what you can do on Arbitrum, but you may be asking yourself: how exactly do you get into the Arbitrum ecosystem, how do you participate in its applications, and how do you move funds in and out?

The good news is that there are many ways in. Nearly a dozen centralized exchanges now let users withdraw funds directly onto Arbitrum, including Binance, FTX, Crypto.com, and many others that have launched services allowing you to move straight into the Arbitrum ecosystem.

Beyond that, if you hold funds on Ethereum or other blockchains, there are decentralized cross-chain bridges that let you bridge between Ethereum and Arbitrum. You can use Arbitrum's native bridge, or third-party bridges such as Hop and Synapse, which make it easy to move between Arbitrum and other blockchains.

Arbitrum One also has fiat on- and off-ramps, and credit cards can be used to purchase assets in the Arbitrum ecosystem. For users who want to bring funds onto Arbitrum and get a better experience, with a faster layer 2 and cheaper transactions, there are many ways to reach the applications in the Arbitrum ecosystem.

Second, Arbitrum Nitro.

We went through the Nitro upgrade a few weeks ago, and I would like to go into more detail on it. From a technical point of view, what is the Nitro upgrade? Until a few weeks ago, before the official launch of Nitro, the chain ran on the so-called Arbitrum Virtual Machine, the virtual machine used for interactive fraud proofs, which we developed ourselves.

There were also custom nodes, developed in-house, to support that virtual machine. Although both worked very well, we realized we could replace our home-grown components with more standard ones. By doing so, we can take advantage of the work Ethereum developers have done over the years and bring the development experience closer to Ethereum's. Nitro also introduces advanced calldata compression: the data destined for Ethereum is compressed first and then stored there. Compressed data is smaller, so less Ethereum space is needed per transaction, which means more data fits on Ethereum.
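To illustrate the calldata-compression idea, here is a generic example using Python's zlib (not Nitro's actual compression scheme): compressing a batch of similar transactions before posting shrinks the bytes that have to be stored on L1:

```python
import json
import zlib

# A hypothetical batch of similar L2 transactions; repetitive data compresses well.
batch = [{"from": "0xabc", "to": "0xdef", "value": i, "data": "0x" + "00" * 32}
         for i in range(200)]

raw = json.dumps(batch).encode()
compressed = zlib.compress(raw, level=9)  # compress before posting to L1

print(f"raw:        {len(raw)} bytes")
print(f"compressed: {len(compressed)} bytes "
      f"({100 * len(compressed) / len(raw):.1f}% of the original)")
```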

On August 31st of this year, the one-year anniversary of our opening to the public, we successfully migrated to Nitro. For us it was a very important and symbolic event, but we are not satisfied to stop there: a year after launching Arbitrum One we went back to first principles, asking ourselves how to make the ecosystem even better and more prosperous. We also made sure the migration was seamless; because the ecosystem is live and active, users did not need to do anything or prepare in any way.

The analogy I like to use is that Arbitrum One is an airplane that is already in the air, and we swapped the engines mid-flight: we removed our own Arbitrum Virtual Machine engine and switched to an Ethereum-based engine while the plane was still flying. Technically, that is what was achieved.

The question you want to ask now is: what benefits do we get from the Nitro upgrade? The headline is a huge increase in throughput; after the upgrade, throughput increased by a factor of 7. And because of the advanced data compression I just mentioned, the savings in storage are passed on to users, which means lower transaction fees for publishing data.

For developers, Nitro also adds many features that improve the development experience. Because we now use Ethereum node software, gas accounting and tracing support work exactly as they do in the Ethereum ecosystem. In other words, after the Nitro upgrade, compatibility with Ethereum has improved further and keeps getting better, which I am very excited about.

Third, Arbitrum Nova.

What is Arbitrum Nova? It is the AnyTrust chain. The question now is how Arbitrum Nova's AnyTrust design works.

First, the data is not stored directly on Ethereum; it is sent to the data availability committee. Technically and practically, transaction data is sent to the sequencer. On a regular rollup, the sequencer then publishes that data to Ethereum, but an AnyTrust chain such as Arbitrum Nova does not: instead, the sequencer sends the data to the data availability committee, which signs a data availability certificate and posts that certificate to Ethereum. That is the concrete technical mechanism behind Arbitrum Nova and the AnyTrust chain.

Arbitrum Nova also has a fallback mechanism: if the data availability committee goes down or cannot accept the data, the chain falls back to publishing the data on Ethereum. In other words, if for any reason the committee cannot function and fulfill its role, the chain does not stop; it simply posts its data to Ethereum, and although fees increase, the chain keeps running like any normal rollup.
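Here is a hedged sketch of that flow with hypothetical types and a simple signature count (not the real Arbitrum Nova implementation): the sequencer asks the committee to sign a data availability certificate, posts only the certificate to L1 if enough members sign, and falls back to posting the full data otherwise:

```python
from dataclasses import dataclass

# Hypothetical sketch of the AnyTrust flow; not the real Nova implementation.
@dataclass
class CommitteeMember:
    name: str
    online: bool

    def sign(self, data_hash: str):
        return f"sig({self.name},{data_hash})" if self.online else None

def post_batch(data: bytes, committee: list, threshold: int) -> dict:
    data_hash = hex(abs(hash(data)))                     # stand-in for a real hash
    sigs = [s for m in committee if (s := m.sign(data_hash))]
    if len(sigs) >= threshold:
        # Happy path: only a small certificate goes to L1.
        return {"to_l1": {"cert": {"hash": data_hash, "sigs": sigs}}}
    # Fallback: committee unavailable, post the full data on L1 like a normal rollup.
    return {"to_l1": {"full_data": data}}

committee = [CommitteeMember("member-a", True),
             CommitteeMember("member-b", True),
             CommitteeMember("member-c", False)]
print(post_batch(b"batch-bytes", committee, threshold=2)["to_l1"].keys())
```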

Who are the members of the data availability committee? There are some very strong Web2 and Web3 brands, including Reddit, Consensys, QuickNode, FTX, Google Cloud, and others; it is a committee spanning Web2 and Web3. The core idea is that you only need to trust one or two of the committee members to do their job well, and with major Web2 and Web3 brands participating, users can be confident their data is stored securely.

The next question is: who are the users of Arbitrum Nova? In July 2020, Reddit held a scaling contest. Reddit wanted to issue blockchain tokens to community users so the community could participate more fully, and to bring the Reddit community-points experience to a mainnet, but it did not know how to make that scale, so it asked the community to propose scaling solutions. More than 20 teams submitted proposals, including us.

In 2021, Reddit announced that Arbitrum had won the contest, and a few weeks ago Reddit went live on the Arbitrum Nova mainnet. What is interesting is that Reddit brought in over 200,000 users on the first day, which is very exciting. People often talk about reaching mass adoption and the next billion users. You do not get there by handing a billion people raw access to crypto; you use existing ecosystems. Reddit has very large communities, reaching 400 million monthly active users, so closer cooperation between Arbitrum and Reddit gives many more users the opportunity to participate in the Arbitrum ecosystem, for example by entering Arbitrum Nova. There are already two active Reddit communities on Arbitrum Nova.

How much gas does Nova save? Because Nova does not publish data on Ethereum but sends it to the data availability committee instead, gas fees drop further. Arbitrum One is already about 97% cheaper than Ethereum, and Nova is cheaper than Arbitrum One.

For example, a transfer costs less than 1 cent and an ERC-20 swap about 2-3 cents, so costs on Arbitrum Nova are very low.

Since not all data is published on Ethereum, fees are also less affected by fluctuations in Ethereum gas prices, which means transaction fees on Nova are generally more stable.

The next question is: who is Arbitrum Nova designed for? You might ask whether Arbitrum One and Arbitrum Nova compete with each other. They do not, because Arbitrum One focuses mainly on DeFi and NFT projects, while Arbitrum Nova is aimed at gaming and social projects: high-throughput projects whose users interact with the blockchain frequently and therefore need very low fees.

To sum up, Arbitrum Nova is an ultra-low-cost scaling solution aimed mainly at gaming, social, and other high-frequency-transaction use cases.

Eli Ben-Sasson, co-founder and president of StarkWare: “Explaining StarkNet”

The story of STARK technology and cryptography goes back thirty years. In the mid-1980s and early 1990s, mathematical innovations unleashed enormous power and put it back into the hands of humanity.

We often see a picture like this in movies: a small, weak child encounters a very powerful dragon and controls it through some kind of magic. In the blockchain world, the weak child is Ethereum, a computing device with real limitations.

Through some kind of magic, a huge amount of computing power can be controlled. The magic here is mathematics: STARK is built on 30 years of mathematical research, much of which was invented, optimized, and then applied at StarkWare. The core idea is that mathematics lets you use cryptography to assert integrity.

What does integrity really mean? The novelist C.S. Lewis has a very nice description: doing the right thing even when no one is watching.

In the same way, this mathematical technique lets people trust that a contract was executed correctly, even without supervision, without watching every step of the computation. With this amazing mathematical technique, we can work on scaling Ethereum to meet the needs of the world.

A 30-year-old paper describes this remarkable mathematical innovation beautifully: a single reliable personal computer can monitor the operation of a herd of supercomputers, even if those supercomputers run extremely powerful but unreliable software and untested hardware. Ethereum is like that very weak and limited computer.

In the words of the paper, it is a personal computer, yet it can be used to assert integrity, knowing that the herd of supercomputers is doing the right thing while making no assumptions about their reliability. It sounds like magic, but the magic of mathematics makes it possible.

To share my personal story: about twenty years ago I became a young mathematics researcher interested in the theory of computation. One of the research goals at the time was making these beautiful, almost magical proofs efficient. My collaborators, colleagues, and I worked hard for years to take a theory that had been inefficient for decades and make it efficient enough that proofs can be generated and verified on ordinary, general-purpose computers.

At that stage StarkWare was founded, with the mission of using mathematics to maintain the integrity of blockchains and enable their scalability. Along the way I also worked on different flavors of zero-knowledge proof technology that bring privacy to blockchains; I am one of the founders of Zcash. But instead of talking about privacy today, I am going to focus on the math and scalability behind StarkNet.

I am often asked what it is like to go from mathematician to practitioner, from computer scientist and professor to entrepreneur. For me, going from very theoretical research to very practical applications was like crossing a desert. As a theoretician I wanted to talk about practical applications, while practitioners sometimes did not understand the theory or how to use it. For a long time my colleagues and I walked through that desert until we reached the other side and built something that is genuinely effective and usable today, but that is another story. This technology lets a user operate a weaker computer to verify the integrity of a much more powerful one. What does that have to do with blockchain?

Going back to the earlier point, a computer with limited performance can supervise and attest to the integrity of a large computing cluster without re-executing the computation, and that is exactly why this technology matters for blockchains.

Imagine Ethereum as one computer made up of many nodes running in a decentralized way. As a computer, its performance is very limited: demand for its computing power is far higher than what it can actually deliver, which is why gas is expensive and the network is congested. With STARK technology, this computer can be used to monitor much larger computations done off-chain, and the integrity of their results, with the same security level and trust assumptions as Ethereum itself.

At the same time, the reliability and security of all computations are enhanced through the magic of mathematics.

In today's financial world we face two different ways of transacting. The first is the traditional one, using banks, credit cards, and payment processors. Expressed abstractly, someone is running a large computer and everyone has to trust it.

If you want to know whether the whole system is honest, you simply have to trust that the government, the auditors, or other parties are doing the right thing. It is not a very inclusive system; it is exclusive: you and I cannot be the bank and process those transactions. That is the traditional way, and it is very efficient at computing and processing transactions.

A blockchain like Ethereum, by contrast, is extremely inclusive, which is great: everyone can, and is very welcome to, connect a personal laptop to the Ethereum network and become part of its foundation of trust, and I hope everyone does so.

But for everyone to be able to connect to the Ethereum network with a laptop and be part of its trust base, we have to limit the amount of computation those laptops are asked to carry.

In other words, a highly inclusive system like Ethereum is great, but very slow as a computing device. What we want is an inclusive system that allows everyone to add their own computer to the network and monitor network activity.

At the same time, we want the system to gain the scale of a mainframe running off-chain. The bridge connecting these different worlds is STARK and the mathematics behind it.

What are the options for scalability? First, you can make everyone buy a bigger computer, though of course some people will be turned away. Some very popular sidechains do this; in the past we have had chains such as EOS and BSC. In practice this solution means nodes run on larger machines, and you can choose to buy big hardware to participate. Even so, the scale is limited, and inclusivity is lost.

For example, one of the most popular blockchains currently requires a minimum of 12 cores and 128 GB of memory, which my computer does not have. Another approach is to use something called fraud proofs, as in Arbitrum and other optimistic rollups. The hardware requirements there are also somewhat larger, which means my own laptop cannot join, but proponents argue that various game-theoretic incentives make these larger machines keep each other honest. So far the technology has not played out everywhere as envisioned; if it does get used, it should be as safe as expected, but at least to me that is unclear and unproven. So this approach is a little faster, but a little less inclusive, and some people are turned away.

The approach we take is based on mathematical proofs, the validity-proof approach. What we are trying to achieve is to let anyone run a very large computer and do the heavy work, but everything that computer does must be proven to L1, because L1 is the only network we trust.

StarkNet embodies this principle very well. When you operate on StarkNet, the security is the security of Ethereum. You do not need to make any trust assumptions within the StarkNet ecosystem, because the security guarantee comes from the base layer, Ethereum. That is what StarkNet provides, and that is the power of mathematics.

Think of a restaurant receipt: receipts are in fact a very old form of proof. A restaurant receipt is a string of characters used to assert correctness; it proves to the customer the total amount to be paid, and that total comes from a simple computation. When you receive the receipt, you can check the result simply by redoing the computation yourself, so a restaurant receipt is a proof of correctness.

But mathematically such proofs are not very sophisticated, and they do not support scalability, because you have to redo the computation. A STARK proof is similar in spirit to a restaurant receipt, except that the length of the receipt, and the amount of computation needed to check it, is far smaller than the computation it attests to. Using this technology, you can process hundreds of thousands of transactions without making any trust assumptions about the prover or the operator, who could be the Dark Lord or Darth Vader himself.

No matter who processes these transactions, you can guarantee that nodes cannot cheat, because any update to the system state must be submitted to L1 together with a proof of integrity. That is what we do: every state update is accompanied by a proof that cannot be forged, and if the proof verifies, the execution was correct, even without L1 supervising every step. That is the power of STARK proofs. It is no longer just a theory; it was a theory before, but now it is a working system.

Imagine minting NFTs. If you mint directly on Ethereum, you can fit a few hundred mints into a block. With our technology, a single STARK proof can cover 600,000 NFT mints, and that proof easily fits into an Ethereum block. Capacity grows from hundreds of transactions per block to hundreds of thousands or millions.

StarkNet gives every developer and user access to this almost magical technology that enhances Ethereum and scales it exponentially. What is StarkNet? It is very much like Ethereum, but it is an L2. You can write smart contracts on StarkNet, submit transactions to those contracts, and it supports general-purpose computation and composability. Thanks to the magic of STARK proofs, you can think of StarkNet as something very similar to Ethereum, but with much lower gas fees. From the user's point of view on Ethereum, when you submit transactions they enter the mempool, and miners package them into blocks.

What StarkNet does is very similar, except that it runs off-chain, with Ethereum as the underlying L1. Ethereum does not know what happened on StarkNet and does not need any trust assumptions about it. Users can feed hundreds of thousands of transactions to the sequencer, which orders them; the transactions are then sent to the prover, which generates a rigorous proof that all of them were executed and the state updated correctly. Proofs are exponentially smaller than the computations they cover. The proof is submitted to Ethereum, where a verifier smart contract acts as gatekeeper and checks its validity.
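A minimal sketch of that pipeline follows, with hypothetical function names and a placeholder proof (a real STARK prover is far beyond a few lines): the sequencer orders transactions, the prover attests to the state transition, and an L1 verifier contract checks only the short claim:

```python
# Hypothetical end-to-end flow; the "proof" here is a placeholder object.
def sequencer(transactions: list) -> list:
    # Orders incoming transactions into a block (off-chain).
    return sorted(transactions, key=lambda tx: tx["nonce"])

def prover(old_root: int, block: list):
    # Placeholder for a STARK prover: attests that applying `block` to
    # `old_root` yields `new_root`. Real proofs are succinct and sound.
    new_root = hash((old_root, tuple(tx["id"] for tx in block)))
    return new_root, {"claim": (old_root, new_root), "proof_bytes": b"..."}

def l1_verifier_contract(known_root: int, new_root: int, proof: dict) -> bool:
    # L1 checks only the short proof; it never re-executes the transactions.
    return proof["claim"] == (known_root, new_root)

txs = [{"id": "b", "nonce": 2}, {"id": "a", "nonce": 1}]
block = sequencer(txs)
known_root = 0
new_root, proof = prover(known_root, block)
print(l1_verifier_contract(known_root, new_root, proof))  # True
```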

Recall the picture from earlier: a weak child with great magic can control a huge magical creature such as a dragon. In our setting the magic is mathematics and STARK, and the child is the validator on L1, that is, Ethereum, yet it can harness the very powerful computation happening on StarkNet, which lives at L2.

StarkNet comes with a new language called Cairo, so let me briefly explain why there is a new programming language. You might ask the same question about Ethereum: when Ethereum appeared around 2015, there were already very good programming languages such as Python and C, yet Vitalik, Gavin, and other experts created a new virtual machine, the EVM, and new programming languages to go with it.

There are several such languages, of which Solidity is the best known. We now likewise recommend that users and developers adopt a new programming language, Cairo, for reasons similar to why Vitalik and others invented Solidity: running a blockchain creates new systems of constraints, and you need a programming language that can satisfy them.

If you want maximum scalability, you need a programming language that unlocks it. For StarkNet, that language is Cairo, which you can use to write all kinds of applications. So far, developers have written hundreds of applications in it, including voting, digital identity, and games, and we hope everyone will join the large and growing network of StarkNet developers and users and embrace this magical technology.

Zhang Ye, co-founder of Scroll: “The Design and Architecture of Scroll”

Before getting into the technical details, let me briefly introduce what the Scroll project is. In short, Scroll is a general-purpose layer-2 scaling solution for Ethereum. As on Ethereum itself, developers can deploy smart contracts on Scroll and users can interact with the applications on it, but transaction fees are lower and throughput is higher.

Like other layer-2 solutions, although we are a scaling solution, everything that happens on Scroll has its integrity verified on Ethereum, whether through ZK proofs or fraud proofs, so Scroll's security is guaranteed, and even strengthened, because it is backed by Ethereum.

More specifically, we are building a ZK rollup that is equivalent to the EVM. What does that mean? Technically, Scroll is based on a ZK rollup, which relies on validity proofs to show that everything happening on Scroll is correct. ZK rollups are considered the purest scaling solution because they rest purely on cryptographic assumptions.

EVM equivalence here means supporting the EVM at the bytecode level. For developers, it means that everything supported on the EVM is supported: not only specific programming languages such as Solidity, but the Ethereum Virtual Machine itself at the bytecode level, along with all the related development tools.

So as a developer you do not need to know anything about ZK rollups to deploy on Scroll. The development experience is exactly the same as on the Ethereum base layer: you can use all the familiar development tools and deploy in a similar environment.

Before going deeper into the technical details, I would like to explain why we made these design decisions and what principles lie behind them.

First, security. The most important task is security, and the most important form of security in scaling is protecting users' funds and data. Because Scroll settles on the most secure and decentralized base layer, Ethereum, users do not need to rely on the honesty of Scroll nodes to keep their funds safe; even though they transact on Scroll, security depends entirely on the underlying Ethereum layer.

Second, efficiency. To give users a better experience on layer 2, we believe transaction fees should be extremely low, at least several orders of magnitude lower than fees on Ethereum.

We also believe users should get prompt confirmation on layer 2: if you send a transaction to a layer-2 node, you should get confirmation very quickly, and finality should also be fast, meaning the proof is verified on layer 1 quickly.

Third, EVM equivalence. The EVM has a very active ecosystem, and we believe an effective Ethereum scaling solution should give users and developers a seamless migration experience: whatever dApps and tools they use today, the migration should be completely seamless.

EVM equivalence is the best way to achieve this, because users get exactly the same environment on Scroll. That is why EVM equivalence is maintained throughout; it is our goal and our original intention.

Fourth, decentralization. Decentralization is the core feature of blockchains, but it is often overlooked or inappropriately sacrificed for efficiency, especially by some layer-1 blockchains that trade decentralization for performance. We believe decentralization is one of the most valuable properties of a blockchain: it keeps protocols and communities censorship-resistant and guards against coordinated attacks. We have considered decentralization in every aspect of Scroll, including the nodes, the provers, the developers, and the users, which is why we talk about decentralization across all levels.

These are the design principles that ultimately led us to our current technical design.

Security, efficiency, and EVM equivalence ultimately led us to a ZK solution, the zkEVM. As mentioned, ZK provides purely mathematical guarantees and does not depend on any economic game when under attack, and it is also very efficient: the proving cost is amortized over a large number of transactions, so the per-transaction cost is very low. Compared with fraud proofs, validity proofs also give shorter finality. Fraud proofs, used by optimistic rollups, take about a week to finalize on layer 1, whereas with validity proofs, if you can generate proofs quickly, you get finality on layer 1 very quickly.

Once we had decided on ZK, we also realized that the zkEVM is the holy grail for EVM equivalence. The idea behind the zkEVM is to use a succinct ZK proof to prove the correct execution of EVM bytecode. All previous ZK rollups were application-specific, designed either for a particular dApp or for specialized transactions; if you can prove EVM execution itself, you have a truly general-purpose provable virtual machine.

Previously, people thought a zkEVM could not be built because its overhead was too large, around two orders of magnitude higher than application-specific circuits. But by leveraging the collaborative innovation of the whole community, the design incorporates recent breakthroughs, including new proof systems, proof aggregation, and even hardware acceleration for ZK.

The open development approach allows us to work with a very wide range of community members, especially the Ethereum Foundation’s privacy and scaling teams, as well as other players in the community, where we collaborate very closely, using The latest research makes zkEVM finally possible.

Based on these research results, we are building a ZK Rollup on top of the zkEVM that satisfies the design principles I just mentioned. The next one is decentralization. The requirement for decentralization led us to build a decentralized prover network. While designing the whole Scroll system, and especially the zkEVM, we realized that putting the EVM into a ZK proof carries very large overhead, mainly because of incompatibilities between the native fields involved. To shorten proving time, which affects finality time on L1, we decided to build a permissionless, decentralized network of provers that helps us generate proofs of layer-2 blocks.

In this way, we gain two main technical advantages:

(1) Provers run in parallel and are scalable, which means the prover pool can be massively expanded by adding more prover nodes.

(2) The community is incentivized to run these prover nodes and to build better, significantly optimized hardware solutions for us. Because the community is incentivized, we do not need to rely on ourselves as a central party to build that hardware.

If you let the community participate in the development process and give them sufficient incentives, the community is even willing to build dedicated mining-style machines.

Next, I will introduce the overall architecture and design. For background, let us first review ZK Rollups. Everyone is familiar with the fact that Ethereum's transaction processing is slow: block production is slow because the chain is highly decentralized and relies on a particular consensus mechanism.

With Scroll, users send transactions directly to Scroll instead of to Ethereum. Scroll can generate layer-2 blocks quickly, and we then run proving algorithms to generate validity proofs showing that the batch of transactions sent to Scroll was executed correctly. We also submit the necessary block data to layer-1 Ethereum as availability data.

The ZK proof is submitted along with public inputs that bind it to the state transition, proving that the state changed correctly after applying and executing these transactions. Layer 1 then only needs to verify the submitted proofs rather than re-execute all transactions, so verification time on layer 1 is greatly reduced.
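A minimal sketch of that idea, with the contract modeled as a Python class and `verify` left as a placeholder for the real proof verifier (all names here are illustrative, not Scroll's actual contract interface):

```python
from dataclasses import dataclass

@dataclass
class RollupContractSketch:
    """Toy model of the layer-1 Rollup contract's role: it never re-executes
    L2 transactions, it only checks a validity proof against public inputs."""
    state_root: bytes = b"\x00" * 32

    def submit_batch(self, new_root: bytes, batch_data: bytes, proof: bytes, verify) -> None:
        # Public inputs bind the proof to this exact state transition and data.
        public_inputs = (self.state_root, new_root, batch_data)
        if not verify(proof, public_inputs):   # roughly constant cost on L1
            raise ValueError("invalid validity proof")
        self.state_root = new_root             # accept the new L2 state
```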

So we need a prover, but also some other nodes, including the block-producing node. This slide shows the architecture of Scroll, which consists of three parts:

(1) The Scroll node.

(2) The on-chain smart contracts, mainly the bridge contract for deposits and the Rollup contract on layer 1.

(3) The decentralized prover network, which consists of many prover nodes; in our system they are called Rollers.

The first component is the sequencer. Inside the Scroll node there is a party called the sequencer, which is a fork of Go Ethereum (Geth), arguably the most popular layer-1 Ethereum client. It accepts layer-2 transactions and produces layer-2 blocks. By simply reusing an existing Ethereum client implementation, we ensure consistent behavior between layer 2 and layer 1, and developers get an environment they are already familiar with, not just an RPC interface, making it easier to deploy contracts.

The second component is the relayer. The relayer passes messages between the cross-chain bridge protocol and the Rollup protocol, along with other user data, including user transactions. In short, the relayer is responsible for relaying messages between layer 1 and layer 2.

The third component is the coordinator. The coordinator sends execution traces to the Rollers. In other words, when the sequencer produces blocks, the coordinator collects not only the transaction data but all the information obtained from execution (I will come back to this step shortly). Because we have a decentralized prover network, the coordinator must decide who is responsible for proving which block and then send the relevant block to the decentralized prover network. The decentralized provers (Rollers) generate proofs, which are sent back to the coordinator, closing the loop.
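To make the coordinator's role concrete, here is a toy dispatch loop under the assumption of a simple round-robin policy; `roller.prove` is a hypothetical method standing in for whatever interface the real Rollers expose:

```python
from itertools import cycle

def dispatch_traces(execution_traces, rollers):
    """Toy coordinator loop: assign each block's execution trace to a Roller
    (round-robin here; the real scheduler could use any policy) and collect
    the returned block proofs keyed by block number."""
    proofs = {}
    roller_cycle = cycle(rollers)
    for block_number, trace in enumerate(execution_traces):
        roller = next(roller_cycle)
        proofs[block_number] = roller.prove(trace)  # proof sent back to the coordinator
    return proofs
```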

The zkEVM is at the heart of the entire design, so let's dig a little deeper into what happens to a transaction inside Scroll.

The Roller first receives execution traces from the coordinator: execution steps, block headers, transaction data, bytecode, Merkle proofs, and other necessary data. It then uses an input builder to turn the trace into circuit witnesses, converting it into a form the ZK circuits can consume, and generates the proof from those witnesses.

The zkEVM consists of multiple circuits, each with a different purpose. For example, the EVM circuit checks transaction execution, the RAM circuit handles memory operations, and the storage circuit handles storage reads and writes. There are other circuits as well, such as the signature circuit. Multiple proofs are generated this way, and finally an aggregation circuit aggregates them into one proof, which is put on chain.
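A compact sketch of that two-level structure, with each sub-circuit and the aggregation step represented as plain Python callables (purely conceptual; the real circuits are arithmetic constraint systems, not Python functions):

```python
def prove_block(trace: dict, circuits: dict, aggregate) -> bytes:
    """Toy view of zkEVM proving: each sub-circuit (EVM, memory/RAM, storage,
    signatures, ...) proves its own slice of the execution trace, and an
    aggregation step folds those proofs into the single proof posted on chain."""
    sub_proofs = [prove(trace) for prove in circuits.values()]
    return aggregate(sub_proofs)
```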

So for each block, or each execution trace, there is one proof. This is the technical process that happens in the prover, that is, in the Roller.

Next, I will share Scroll's specific workflow. In a zkRollup workflow, you need a validity proof for the block data, but the validity proof can be decoupled from the data, because the block data can be submitted in advance. So we split the process into two steps, with the validity proof coming later. The reason for this separation is stronger confirmation: a layer-2 block alone tells you nothing, since all data comes from the sequencer and you have to trust it, but once the block data is on layer 1 you can re-execute the transactions yourself and get a stronger confirmation. Because generating the proof takes longer, the validity proof can be submitted at a later stage; this way you get a fast pre-confirmation first, and the degree of confirmation strengthens over time.

We also have different block states. A block that has been proposed by the sequencer and included in the layer-2 chain is Pre-committed. A block is Committed once its transaction data has been sent to the Rollup contract on Ethereum. Finally, a block is Finalized when the correct execution of its transactions has been proved and the validity proof has been verified; at that point your transaction is final.
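Those three states can be summarized as a small enum; this is just a restatement of the lifecycle described above, not Scroll's actual code:

```python
from enum import Enum, auto

class BlockStatus(Enum):
    """Lifecycle of a Scroll L2 block as described above."""
    PRE_COMMITTED = auto()  # proposed by the sequencer, included in the L2 chain
    COMMITTED = auto()      # transaction data posted to the Rollup contract on L1
    FINALIZED = auto()      # validity proof verified on L1; execution is final
```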

This slide shows the workflow of a transaction in Scroll. First the transaction is sent to the sequencer and the block becomes Pre-committed. Next, the sequencer uploads the block data to layer 1, that is, to the Rollup contract; at this stage the block becomes Committed. The block also yields execution traces, which are needed to generate proofs, and the coordinator chooses a Roller to generate the corresponding proof.

For the first block the coordinator picks one prover; for the second block it dispatches the execution trace to another prover. Since these provers work in parallel, proof generation is also parallel: for example, three provers can generate proofs for three different blocks at the same time. The proofs are sent back to the coordinator, which verifies them and may dispatch them to another Roller to be executed and proved again as a check. Finally, the coordinator aggregates all the proofs and sends the aggregated proof to layer 1 for verification by the contract. The contract already holds the block data submitted earlier; combined with the proof, this completes verification and confirmation of the layer-2 transactions.
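A small sketch of the parallelism described here, using a thread pool to stand in for independent Roller machines (illustrative only; `prove_block` and `aggregate` are assumed callables, not real Scroll APIs):

```python
from concurrent.futures import ThreadPoolExecutor

def prove_blocks_in_parallel(traces, prove_block, aggregate):
    """Toy sketch: independent Rollers can prove different blocks at the same
    time, then the coordinator aggregates the block proofs into one proof for
    layer-1 verification."""
    with ThreadPoolExecutor(max_workers=max(1, len(traces))) as pool:
        block_proofs = list(pool.map(prove_block, traces))
    return aggregate(block_proofs)
```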

This slide shows the block statuses, including the three types just described, Pre-committed, Committed, and finalized, with different colors representing different statuses. We already have a pre-alpha testnet; if you want to take part in the test or contribute, you can scan the code on the slide shown on screen.

Finally, I would like to share our roadmap and current progress. We have completed the pre-alpha testnet, which is permissioned and only supports user interaction; in this version you can try out some on-chain apps.

In the second stage, we will invite developers to deploy smart contracts on top of us and build additional applications.

In the third stage, we plan to outsource layer-2 proving, that is, the proof generation process. We want to invite the whole community to participate; this is permissionless, and anyone can join the prover network and become a proving node.

In the fourth stage, the zkEVM mainnet stage, we will deploy and launch the mainnet after strict code audits and performance improvements.

In the fifth stage, we will deploy a decentralized sequencer, making the zkEVM more efficient from both a design and a technical point of view.

We have a very ambitious goal: to bring the next one billion users to Ethereum, because we believe most interactions will happen on layer 2. We also believe in open, open-source communities; everything we do is open source, especially the zkEVM, together with contributors from the Ethereum community. Community collaboration keeps our whole development process transparent. External code audits are also required, and we are constantly pursuing decentralization at all levels, including decentralization of the prover network, which is the first step on the road to decentralization.

Roundtable Discussion: “Ethereum 2.0: After The Merge”

Dong Mo, co-founder of Celer Network (host)

Li Chen, CEO of HashQuark

Steve Guo, CEO of Loopring

Adam, co-founder of ssv.network


Dong Mo (host): As entrepreneurs with related projects, let's discuss the whole Ethereum Merge process and the way forward. First, each panelist will introduce themselves, starting with me. I'm Dong Mo, co-founder of Celer. Celer focuses on multi-chain interaction and cross-chain infrastructure. Our purpose is to let blockchain applications make good use of the additional liquidity and cooperation that multi-chain interaction provides, and to implement all kinds of multi-chain applications well.

Steve Guo: Hello everyone! I'm Steve Guo, CEO of Loopring. The Loopring protocol is an application-specific ZK Rollup on Ethereum, highly customized and optimized for certain high-frequency scenarios.

The Loopring protocol is the earliest and longest-running ZK Rollup system on Ethereum. The first version launched at the end of 2019, and it has made some modest achievements and built up a base of layer-2 users since then.

This year we launched a layer-2 NFT system, and several partners have launched products based on it.

Besides the ZK Rollup business, we are also building smart contract wallets that support social recovery. We mainly want to solve the biggest problem ordinary users face when entering blockchain, namely safekeeping private keys. This business has reached more than 40,000 smart wallet users, which is not bad.

Li Chen: Hello everyone, I am Leo, CEO of HashQuark. You may know us as a staking company, but we made a fairly big strategic transformation last year, formally moving from a staking company to a Web3 infrastructure company.

So we now have three businesses, two of which relate to Ethereum.

First, staking, which we have been doing since 2018. We have supported more than 40 public chains, and in assets under management we should be at least top three in Asia and top ten in the world. That is the staking side, and we can talk more about our impressions of 2.0 later.

Second, we feel that crypto and Web3 as a whole are now seeing greater development on the application side, so this year we launched a new product called HashKey DID, which brings some of our infrastructure up toward the application layer. It has done reasonably well; as of today there are almost 300,000 registered addresses. We are also looking at other directions, including Bitcoin's Lightning Network, which we have been working on.

Adam: We enable stakers and staking providers to build a decentralized network rather than relying on single points such as individual pools, thereby avoiding single points of failure; that is what we do. We have actually worked with the HashQuark team in the past, and HashQuark is by far one of the best teams I have seen in this space, especially from our perspective.

Dong Mo (host): We know Ethereum's PoS Merge completed successfully last week, and every project and community connected to Ethereum is very happy and excited.

We will mainly cover two parts. One is a look back at the interesting things that happened in Ethereum before the Merge and what went on in the community. The other looks forward to the more exciting changes coming after the Merge, and what impact and roadmap changes the post-Merge Ethereum ecosystem will have on your projects and on the ecosystem as a whole.

Let's start with the time before the Merge. First, the Ethereum Merge used to feel out of reach, always seemingly close yet far away. When Ethereum launched in 2015, the plan was to complete the PoS transition in 2017, but in reality it took the Ethereum community an additional five years to move from PoW to PoS.

All of you here are industry veterans who lived through the development of Ethereum PoS and its twists and turns. From your own point of view, what do you find most interesting, or what left the deepest impression? After all, it took seven years to reach this milestone on the road from PoW to PoS.

Steve Guo: The "simple switch" Dong just mentioned was anything but simple; it took the community seven years to complete it.

There is a good analogy: moving the consensus mechanism from PoW to PoS is no easier than replacing the engine of a plane while it is flying. You cannot afford mistakes, because once something goes wrong a huge amount of money is lost. Over these seven years, the initial estimate was 2017; back then Loopring had just started, and the expectation was that Ethereum would complete the switch in 2018 or 2019. Why did it take so long?

First, the selection and iteration of the PoS algorithm itself took a long time.

Second, the Beacon Chain went live in December 2020, and it took almost two years just to verify whether a standalone PoS Beacon Chain could work; that also had to be validated over time.

In addition, Ethereum kept iterating on each testnet until everything was completely fine. Only after the recent testnet merges did the final Merge happen last week, with the mainnet switch going smoothly at the last moment. Complex things like this can basically only be advanced step by step; there is no shortcut.

No project can rely entirely on the underlying layer. Take Loopring: at the time we were optimistic that Ethereum would reach 2.0 in 2018 or 2019 and that TPS would rise significantly. When the earliest version of the Loopring protocol launched, it could handle at most 2 trades per second, and an order-based DEX is almost unusable at that rate. At first there were voices inside the team saying we could just keep improving our protocol and wait for Ethereum 2.0's TPS. Later we realized we could not depend entirely on the base layer and had to blaze our own trail, which is how the Loopring protocol became a ZK Rollup solution. That was our biggest lesson.

Dong Mo (host): That's true. What Steve said is particularly good. Especially in 2019 and 2022, the mood of the whole community was very low, and everyone felt the same way, including the "old-timers" in the community. Ethereum has grown from a very small network into a very large one, and the whole community and its capital have paid a great deal to develop this. Why was a seemingly simple PoW-to-PoS switch delayed so long? The evolution of many technologies is not driven simply by inflows of capital; the most fundamental thing for the community is growth in people. Expanding gradually from a small developer community into one where many people contribute together is actually very difficult.

In that process of slow growth, a gradual and clear PoW-to-PoS roadmap also took shape. The PoS we see today is completely different from the PoS envisioned in 2015; I wonder whether the other guests will touch on this later.

Leo has also been in this industry for a long time, and HashQuark has supported Ethereum all along, so he must have plenty of thoughts to share.

Li Chen: I very much agree with Steve's earlier comments; let me add a few points.

My biggest personal impression is that this has been the most challenging upgrade of any public chain in crypto, on several levels.

First, the technical level. Ethereum is not a public chain starting from scratch: it ran PoW for seven years, from 2015 until now, and the key point is that it ran very successfully. DeFi, NFTs, and DAOs are all carried on this ecosystem, so touching one part affects everything. As Steve said, it is like working on a flying plane; it is technically very hard because failure is not allowed and a restart is impossible. I won't say more about this.

Second, there has been a lot of debate, on several levels: one is technical and the other is governance.

First, at the governance level there were many criticisms and doubts in the community. People point out that under PoW you do not need to hold any tokens; you can participate in the network by buying a mining machine. But after the move to PoS you must hold ETH, so the governance audience is different.

Second, the hottest recent discussion is the censorship-resistance mechanism, which is closely tied to governance. Whatever the reason, when someone tries to censor the public chain, you need to respond, and who has the final say in Ethereum? At present, no matter how much ETH you hold, you have no formal voting rights; decisions still rest with the community. Is that a better mechanism? Hard to say, but it is at least the status quo, and there is a lot of discussion about it. The change in governance from PoW to PoS has very far-reaching effects. It may even shape the future organizational form of crypto, which may be dominated by DAOs, but what should DAO governance and DAO voting rights look like in the future? This PoS mechanism looks like a technical question, but it has a profound impact on public-chain governance.

Third, the economic model. Under PoW, the three groups of miners, developers, and public-chain users are completely separate. Miners have it simplest: buy a mining machine, mine tokens, and sell them to cover electricity costs. Miners have little relationship with developers. But after PoS, you need to put tokens on the network to support it, so that separation is clearly affected. The old PoW layering was very distinct, with applications, developers, and miners in three layers with obvious boundaries. PoS breaks those boundaries, and it is hard to say what will happen next.

On running validators, it turns out that many mechanisms from the PoW mining world, including Flashbots in DeFi, will also be greatly affected, and the impact runs deep. What I have said is certainly not complete; economic incentives are affected as well.

A public chain has roughly three parts: technology, governance, and economic model. A change of consensus touches all three, and this is happening on the world's largest application-oriented public chain, and a very successful one, so the depth and scope of the impact are enormous. It is an extremely challenging and much-discussed experiment.

There is much we have not yet seen, but once you really dig in you find the impact is huge, which is why it took so long. For a project starting from scratch, of course, PoS is very simple. That is my view; I'll stop here for now.

Adam: I think the key issue is decentralization: don't sacrifice decentralization. In 2017, I took part in the launch of the EOS project in New York. EOS was originally supposed to be an Ethereum killer, but it never became one. The consensus model it chose was not real PoS but DPoS, which sacrifices decentralization; it is not fully decentralized like Ethereum.

If you want to do something as complex as the Ethereum Merge, it has to be carried out step by step, as Steve just mentioned.

Also, things move very fast in crypto, so the changes made to Ethereum's PoS design along the way were huge. To give one example, at first the idea was that to stake and become a validator you had to hold a few thousand ETH, which is a big change from where things ended up.

Ethereum is not like the less decentralized PoS protocols launched by EOS or Solana. Its changes are community-driven: Ethereum's Merge is the result of community effort, which means solving very complex problems that no one can solve alone. You need many researchers and teams from all over the world building excellent technology and participating in the network. Frankly, companies like HashQuark, or networks like our SSV, would find it very hard to pull off a migration like Ethereum's on our own; it takes the effort of many people.

In general, many people feel it took Ethereum a long time to complete the Merge, but at least it has now been done.

Dong Mo (host): Let me add two points. PoS is just the phrase Proof of Stake, but back in 2015 what the Ethereum community had in mind for PoS was a stake-based public-chain architecture whose scaling would come from sharding. What would each shard do? Each shard would be an EVM, each executing all kinds of smart contract computation. That was Ethereum's original vision and roadmap at the time.

But over seven years the roadmap changed dramatically, including the Rollup-centric, layer-2-centric future of scaling that Vitalik described last year. It is completely different from the execution-sharding design of that era: the data-sharding architecture that now supports Rollups and layer-2 scaling turns Ethereum from a single large chain into a base consensus and data availability layer, with most computation moved to Layer 2, to chains grown through ZK Rollup or Optimistic Rollup that scale the execution layer.

Ethereum itself has thus become a multi-chain architecture, and the change is very large. For us, as a project built on Ethereum, we initially assumed that sharding would solve the execution-layer problem and that we would build a broader interaction layer on top, the so-called state-channel interaction layer.

As the Ethereum roadmap evolved, it had a great impact on us. Once the multi-chain ecosystem and multi-chain blueprint took shape, demand arose for multi-chain interaction: multi-chain asset transfer, multi-chain messaging, and cross-chain messages. So we feel this very deeply.

After that rather serious topic, let's turn to a lighter one. Before the Merge, the community's voice was quite unanimous in opposing forks, especially proof-of-work forks; everyone was against them. But the on-chain data told a different story. For example, ETH deposited in Aave was basically all withdrawn, and ETH liquidity was pulled out of all kinds of DeFi protocols. What was everyone waiting for? They were waiting for the fork, hoping the Ethereum PoW forks would airdrop them some tokens, the so-called fork coins.

That is how the community behaves. What do you think of these forked Ethereum proof-of-work chains? Are they just short-lived and insignificant, or might something interesting be built on them? And how would you compare this PoW fork with the earliest fork in 2016, Ethereum Classic, which itself still runs a proof-of-work consensus algorithm?

Adam: In the final analysis, the competition is not on the technical side. Technology matters, but everyone is inside the Ethereum ecosystem today, and we have no desire to migrate to ETC, Ethereum Classic, or any other Ethereum fork. From an ecosystem perspective, we effectively voted with our feet: we decided to move Ethereum from PoW to PoS, to keep participating in the Ethereum ecosystem, keep contributing to it, and help it prosper.

So I don't think the forks created by PoW miners have any value, and I don't see any potential for them to develop. My words may be harsh, but I think these blockchains will eventually cease to exist. Just as with ETC, I will not build on them or invest any effort or resources in those ecosystems. I think I and others in the Ethereum ecosystem share this consensus, though I can't speak for the other guests today; personally, I won't consider moving to other forks.

Li Chen: I think this is well put. Wanting an airdrop is human nature; everyone wants a free lunch, and that is perfectly normal. But the value of Ethereum does not lie in Ethereum itself; it lies in the ecosystem on top of it. DeFi, NFTs, DAOs all grew out of the Ethereum community, and the vast majority of developers are in Ethereum or Ethereum-related Layer 2 communities. That is its value. Is TCP/IP technically the best protocol? I don't think so, but it is the most widely accepted one. Is the iPhone the best smartphone? Not necessarily, but it has a first-mover advantage, and all the community and developers are on it. For Ethereum, the most valuable things are its developers and community; as long as those stay on Ethereum, the remaining PoW public chains mean little.

Second, the mainstream values in the world today are environmental protection and energy saving, so I'll be a bit blunt here: the only valuable PoW chain is Bitcoin. Nothing else of value runs on PoW, and Bitcoin is simple enough. Developing on a PoW fork has no such value. To pour a little cold water, though, there is a hidden risk. PoW ran for seven years without any huge problem at the network level, but PoS has only just started running, and there may be black swan events no one has thought of, which could give attackers a small opening. Still, I think it will be difficult, given how much time was spent on PoS and how exhaustively it was tested.

Dong Mo (host): Thank you. Steve, do you have anything to add?

Steve Guo: I think Leo and Adam summed it up very well. It is natural for users to want to earn more; everyone could see that in all kinds of DeFi protocols, basically all the ETH was withdrawn before the Merge. We observed the same on Loopring's layer 2. But once the Merge was over it all rushed back, so the pattern was obvious: people went out to grab possible airdrop "candy" and then came back.

Basically, the well-known DeFi protocols announced they would support only the new PoS chain and would not support the PoW fork chains. Why? It's simple: supporting a new chain carries extra overhead for every developer, and nobody expects that chain to take off. Why spend so much money supporting something that can't succeed? Besides, every DeFi protocol holds assets, and you cannot conjure identical assets out of thin air on a forked chain.

As for the latest PoW fork chain versus the earlier Ethereum Classic fork, there is an essential difference. The 2016 fork was triggered by the event in which ETH was locked in The DAO. At the time, I personally supported that fork. Why? Because Ethereum was still a very small, nascent thing facing enormous losses. In 2016 there was no DeFi and none of the complete NFT system we see now. That fork was relatively simple and it could save the whole Ethereum community, so I think doing it was not a bad thing.

Any new thing has to go through a trial-and-error stage, and everyone has to have a certain tolerance for it. Of course, the decision looked hard at the time. But if the main chain were forked today because one project lost funds, I would no longer agree with such a move, because the ecosystem is flourishing now in a way completely different from back then.

Dong Mo (host): Thank you Steve for sharing the historical perspective.

Looking forward now, the Merge has completed very successfully. The Merge itself does not give Ethereum more TPS or processing power, but with the Merge as a foundation, Ethereum can quickly carry out two further EIP upgrades: the so-called EIP-4488, which greatly reduces the gas cost of calldata, and the slightly larger EIP-4844, which implements so-called proto-danksharding and can cut gas costs further without the centralization risks that EIP-4488 alone might bring.

Steve does a lot of work on Rollups. Can you tell us what this Rollup-centric future looks like, and what changes it will bring to Ethereum and to the applications and users on it?

Steve Guo: Let me explain these EIPs. EIP-4488 is relatively simple: it just reduces the gas cost of calldata from 16 gas per byte to 3. Don't underestimate this simple change; its effect on the economic model is very important.

As mentioned, this change is only a temporary transition plan before danksharding is deployed, meant to reduce the gas consumption of layer-2 Rollups, because most layer-2 data is posted on chain through calldata. If the overall cost of calldata drops, then once EIP-4488 alone is implemented, costs can fall by more than 80%.
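A quick back-of-the-envelope check of that figure, assuming a hypothetical batch of 100,000 calldata bytes and ignoring zero-byte pricing and fixed transaction overhead:

```python
# Hypothetical illustration of the calldata repricing described above
# (16 gas per byte today vs. 3 gas per byte under EIP-4488).

batch_calldata_bytes = 100_000            # assumed size of one rollup batch's calldata

cost_today = batch_calldata_bytes * 16    # gas at the current calldata price
cost_4488 = batch_calldata_bytes * 3      # gas under EIP-4488

print(cost_today, cost_4488, 1 - cost_4488 / cost_today)  # ~0.81, i.e. >80% cheaper
```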

This change applies to, and benefits, all Rollup schemes, and it requires no changes at the Rollup protocol level, because it is only an upgrade at the consensus layer; application-layer DApp developers do not need to change anything, which is its greatest benefit. But because it reduces fees across the board, calldata has its own peculiarities: sometimes a block contains very little calldata, and sometimes calldata takes up a large share.

If this flat, across-the-board repricing were adopted on its own, then once Ethereum's transactions were dominated by large amounts of calldata, it would impose a heavier economic burden on node operators (validators), because they would need to store large amounts of data without corresponding income. That is why the EIP-4844 danksharding scheme was introduced.

The biggest improvement of 4844 is the introduction of a special data-carrying transaction type, the blob. Developers can pack on the order of 1-2 MB of data into this special transaction type for storage, and blob data is stored separately, in a more decentralized way. The data also has a validity period, currently about 30 days; after 30 days those storing it can discard it, which in theory reduces the cost for operators (the parties storing the data).

Along with blob transactions comes a piece of cryptographic evidence known as the KZG polynomial commitment. With this evidence, any DApp developer can check in a smart contract whether given data exists in a historical blob transaction, so the contract can judge that question.

This is naturally well suited to the Optimistic Rollup scheme. The essence of Optimistic Rollup is to record all the original layer-2 transactions on chain via calldata, and afterwards to dispute whether the EVM instructions executed against that calldata were correct.

So when 4844 goes live, I guess Optimistic Rollups will not need many changes. But for zkRollup schemes, everything has to change. Take Loopring as an example: Loopring's existing zkRollup treats the user calldata as part of the circuit's public input, and that calldata is also verified in the contract and takes part in some small computations.

To adapt to danksharding, this mechanism must change, and the calldata that used to take part in layer-1 computation has to move out of the contract. It amounts to changing the off-chain circuit: the circuit must prove that the KZG polynomial commitment is valid, and the calldata that used to be a public input becomes a private input, with the circuit proving that this private input is correct and corresponds to valid data. That is not a small change.
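A very loose conceptual sketch of that shift, with a plain hash standing in for the KZG polynomial commitment and `apply_batch` as an assumed state-transition function; this only illustrates which values become public versus private, not the real circuit:

```python
import hashlib

def commit(data: bytes) -> bytes:
    """Stand-in for a KZG polynomial commitment (a plain hash is used here
    purely for illustration; real blobs use polynomial commitments)."""
    return hashlib.sha256(data).digest()

def circuit_statement(public_commitment: bytes, private_calldata: bytes,
                      expected_new_root: bytes, apply_batch) -> bool:
    """What the adapted circuit must establish: the private witness (the rollup
    calldata, no longer a public input) matches the public commitment, and
    applying that batch really yields the claimed new state root."""
    return (commit(private_calldata) == public_commitment
            and apply_batch(private_calldata) == expected_new_root)
```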

Honestly, the workload is still quite large. What is the benefit? I estimate that on top of 4488, costs can drop by another order of magnitude.

But don't get carried away and assume that once these changes land, gas costs fall a thousandfold. Looking at Loopring's overall data as the relayer operator, calldata accounts for only about 30% of our gas consumption on Ethereum. Why? Because many transactions also involve layer-1 operations.

Once a layer-2 transaction involves processing on layer 1, there is no way to reduce that fee. So at Loopring's current scale, even pushed to the extreme, we would save roughly another 30% of costs. It is a different story if none of the transactions on the network need to interact with layer 1 at all and everything is handled on the layer-2 network.
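The bound Steve describes can be sanity-checked with simple arithmetic, assuming calldata is about 30% of total L1 gas (his figure) and that blobs make that portion roughly 20x cheaper (an assumption for illustration):

```python
# Rough illustration of the point above (assumed shares, not measured data):
# if calldata is ~30% of a rollup's total L1 gas, then even making calldata
# nearly free only cuts total cost by ~30%; the rest comes from layer-1
# interactions such as deposits and withdrawals.

calldata_share = 0.30
calldata_cost_reduction = 0.95   # assume blobs make calldata ~20x cheaper

total_savings = calldata_share * calldata_cost_reduction
print(f"overall gas saved: {total_savings:.0%}")  # ~29%, bounded above by 30%
```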

Dong Mo (host): Thank you Steve for giving us a very detailed technical analysis.

For audience members without a deep technical background, let me add some context. What does Ethereum's Rollup-centric roadmap mainly do? We stop treating the underlying Ethereum chain as the execution layer for programs: fewer and fewer smart contracts are executed on layer 1, and layer 1 increasingly acts as the data layer.

What data does it store? All the Rollup data needed to guarantee that the Rollup remains a secure layer-2 execution environment. In other words, a large number of application scenarios will run on various Rollups.

Of course, this brings its own problems. In the history of computing there was a shift from single-core to multi-core. Intuitively, you can think of Ethereum as scaling from a single-core CPU to a multi-core CPU, and the cores must communicate with each other to realize the advantages of multi-core.

What Celer does is build that inter-core communication mechanism, so assets can move quickly between cores and the state across cores can be presented to users and developers as one unified state. This is a challenge that has to be solved, and a lot of work and applications will be delivered this way.

Put more figuratively, Ethereum becomes a management layer for data, or a kind of court of final arbitration, while more of the computation and application logic lives on the various layer-2 chains.

Speaking of Ethereum 2.0, we have to talk about staking. The consensus mechanism is built on staking, and the threshold for Ethereum staking is relatively low: 32 ETH lets you run a node and participate in consensus and security.

Blockchain technology is interesting in that it always combines technology with economic game theory. As the system evolved, so-called staking derivatives appeared: large pools where everyone puts their stake together and receives staking derivative tokens, which can then be used for all sorts of other things.

There is a lot of discussion in the community about whether such staking derivatives are a good thing. Leo and Adam are both experts in this area, so I'd like them to comment on staking derivatives and how to better build a truly decentralized staking network in the future.

Li Chen: This issue has been discussed a lot, so let me start with the first point.

Centralization exists everywhere. With staking, large amounts of tokens do tend to concentrate around certain validators or pools, but PoW was exactly the same.

I read an article this morning: almost 60% of PoW hash power was concentrated in a few large mining farms. I think the shape is much the same now, with the comparable share on Ethereum above 40%.

Compared with PoW, where a mining machine used to cost tens of thousands of yuan, staking products mean you do not necessarily even need 32 ETH: through some protocols you can take part in staking the network with 1 ETH or even 0.5 ETH. At least from the user's point of view the threshold is lower, so it is more decentralized than PoW.

Second, if you discover that a mining pool has done something you disagree with, it is especially hard to switch pools, because mining is physical: you have to move machines from one place to another in the physical world. With staking, it is easy to move from one validator to another, which works better on Ethereum.

In essence, running a validator yourself has a certain cost, but it is not high. If you really do not trust anyone, you can run your own validator with 32 ETH. One world is physical and the other is online, so this kind of switch is faster and more convenient.

Third, economic interests. Thanks to a series of products, more and more ETH is being staked, and the more is staked, the safer the network. But users are profit-seeking by nature; they still want returns.

People want DeFi yield, NFT yield, and staking yield all at once. Under PoW there is no mechanism for that, but under PoS staked assets can also participate in all kinds of DeFi.

Aligning interests in this way brings several benefits for users:

(1) You do not need 32 ETH to participate, so flexibility is higher.

(2) You can earn multiple kinds of income.

(3) Under PoW, miners' interests were largely separate from users'; under staking they are aligned.

From Ethereum's perspective, after PoW and seven years of development, it has become quite decentralized in nature, unlike newly launched chains where a large share of tokens sits in the hands of investment institutions.

First, the original distribution was very decentralized. Second, after years of circulation its token holders are very dispersed, the threshold is not high, and these protocols do not limit how much you can delegate to a validator, so it seems quite decentralized to me. That worry is not really necessary.

Dong Mo (host): Adam, I know you have a lot of experience in this field, so I would like to ask your opinion on liquid staking. Do you think it is good or bad for the community?

Adam: I think liquid staking is a good thing for the community, and I think it’s definitely a good thing.

If you look at the various staked-ETH derivative tokens out in the wild, they actually do some very important things, including helping mass adoption by letting more people participate in staking. Without these derivative tokens it will be hard to achieve mass adoption.

We have to let people actually use these derivative tokens, whether by holding them in their wallets or by deploying them in other DeFi protocols.

Also, from a risk perspective, staking is a low-risk, low-reward activity. But on the blockchain, as Leo said, once you hold derivative staking tokens you can participate in DeFi, and that is what we see many people doing now.

From the project side’s point of view, they must be audited. But overall, it’s a good thing for the community.

Another thing that does need tracking is centralization. We don't want to see Ethereum controlled one day by three to five large staking pools. Lido is a very good project and we cooperate with them, but we also don't want to see 100% of ETH staked through Lido.

As Leo said, briefly, but it is the most important point:

Decentralization matters. Between centralization and decentralization, we should push toward decentralization as much as possible. And as long as the community cares about this issue, keeps discussing it, and stays aware of it, we are in fact moving toward a more decentralized Ethereum ecosystem.

I don't think the centralization problem is severe, and the most important thing to remember is that there are many staking pools. A year ago, staked ETH could not be withdrawn; it had no liquidity at all. But hopefully soon, after Ethereum's Shanghai upgrade, people will be able to take their staked ETH out of the Beacon Chain, and they may then choose to stake it in other pools.

For example, after withdrawing staked ETH from Lido, it can be staked with other, newer protocols. We expect that in the coming months there will be hundreds of staking protocols in the Ethereum ecosystem. In other words, we are gradually moving toward decentralization, and compared with other chains, Ethereum has the highest degree of decentralization.

Dong Mo (host): Do you think we will see more liquid staking pools? The staked liquidity is still locked today, and we also see a lot of value locked across the DeFi ecosystem, so how do we break out of this situation?

Adam: It's actually hard to break out of.

I don't know if you are familiar with the Bancor project. When it first appeared, everyone was very excited, but later decentralized projects like Loopring and Uniswap entered the game, and both the value they carried and their user numbers surpassed Bancor's.

So from an industry perspective, we will see all kinds of staking pools. If most ETH were staked in a single pool, it would be hard to catalyze staking and catalyze innovation. In the future we will see more protocols challenge the position of the existing pools; we are only at a very early stage, and more pools will appear.

Teams like SSV, which are gradually increasing their market share, are also advancing their own liquid staking solutions. So I think the whole industry will be shaken up by new entrants, and we will see new solutions take more market share.

Projects like Lido and Rocket Pool do some very interesting things, and they started a few years ago, so I see a lot of room for innovation in this field, especially after the Merge and the Shanghai upgrade. We are still in the early, ascendant stage.

Dong Mo (host): As Adam just said very well, after the Merge there is still a long roadmap to go for Ethereum 2.0, including the Shanghai upgrade later.

But the community does not have a completely fixed roadmap. Just as with the change from PoW to PoS, there are still many different possibilities after the Merge, and it will depend on the paths the community takes in the future.

In the last five minutes, two simple questions.

The first question: after Ethereum's switch to PoS, its scalability will be greatly enhanced, gradually catching up with other Layer 1s.

In 2020 and 2021, the main growth route for other EVM Layer 1s was to absorb the user and transaction demand that overflowed from Ethereum.

When Ethereum itself iterates rapidly and can carry more applications at lower cost, what impact will that have on the other Layer 1 ecosystems? What will become of the other Layer 1s? Do you have any interesting views?

Li Chen: That is a particularly good question. Those needs overflowed from the Ethereum ecosystem because Ethereum itself could not accommodate them, so once Ethereum's scaling is complete, Ethereum's possibilities become very large.

For example, Ethereum does not have privacy features, so one could attach a privacy-focused chain to it as a Layer 2, in whatever form that takes.

Ethereum becomes capable of carrying many different possibilities, and this looks a lot like the Internet and Web2: one side is user traffic and the other is developer traffic. What makes Ethereum hard to displace is that both kinds of mainstream traffic are already on Ethereum.

Once it can absorb those needs, it is exactly like Web2. In Web2, when a startup's product is ready, the most important thing is to find traffic gateways: in China you plug into WeChat and Tencent, overseas into Facebook and Twitter. There is no need for me to judge; the market will push these public chains, including the existing Layer 1s, to actively connect with Ethereum, because that is where the traffic is.

This is my opinion, not necessarily accurate.

Steve Guo: I think Leo said it well. In the past, all kinds of Layer 1 public chains billed themselves as "Ethereum killers", from EOS to the most recent ones. Lately, people have stopped using that phrase, and the framing has shifted to becoming an Ethereum sidechain, connected to Ethereum.

The rise of Layer 2 is essentially about offloading the computation, and the storage requirements, that Ethereum's layer 1 cannot carry, moving those demands to layer 2 or to other side chains so resources can be matched to needs.

Both approaches have opportunities in my view, and different application scenarios will choose different solutions. For requirements closely tied to security, Layer 2 and Rollup solutions are probably a better fit. For scenarios less sensitive to security, such as putting game logic on chain, a game's state may be better suited to other chains.

Other Layer 1 public chains must first improve their infrastructure to attract developers and see whether a whole ecosystem can develop. I think it will take at least one market cycle to see whether a given public chain works.

Dong Mo (host): I very much agree with Steve. How will the public-chain landscape evolve? There is an old Chinese saying that what has long been divided must unite, and what has long been united must divide.

In the beginning everyone was an "Ethereum killer", positioning themselves as different from Ethereum. In the last cycle they all converged on implementing the EVM standard, hoping to pull the Ethereum ecosystem over and become its replacement.

But once Ethereum itself can carry these needs, I think we may see the reverse: division giving way to union. Layer 1 chains will ask whether they can serve special, niche blockchain application scenarios; that may be one possible path, and it is my current macro-level feeling. It is as if there were a "stem cell" at the beginning: the stem cells divide and at first look similar, but some become skin, some become nerves, and some become bone.

Adam, do you have any ideas to share with us?

Adam: I think the other guests have said it very well, and I don't have much to add. From an industry perspective, we always pay close attention to and embrace innovation. Ethereum actually faced a lot of pressure around the Merge, and EOS and Solana also posed challenges along the way, but those challenges and pressures have made the Ethereum roadmap more realistic. We do see a lot of innovation ahead, but as all the guests have said, the key is reducing transaction costs.

Dong Mo (host): Last question: how will Ethereum 2.0 and the Merge change the project you are working on now?

Steve Guo: Everyone is looking forward to a better, faster Ethereum that can truly carry Web3, because many Web2 companies are trying to transform toward Web3.

Since the beginning of this year, I have talked with many companies that want to move from Web2 to Web3. Most of them want to copy all of their current business logic onto the chain using smart contracts. When they came to me, I told them I strongly do not recommend choosing Ethereum for this today. Even the current general-purpose Layer 2s cannot meet such needs, because today's cost and TPS cannot support a serious Web2 transformation that puts all business logic on chain. That will only become possible with further Ethereum development, including higher TPS and lower fees.

I am also very happy to see the Ethereum community building the next generation around Rollups, which reaffirms Loopring's original choice. We were the first ZK Rollup system, and we will keep building the most useful application-specific ZK Rollup systems around this approach, while also expecting that a better Ethereum will enable another wave of large-scale adoption of smart contract wallets.

At present, the biggest problem with smart contract wallets is cost: layer-1 deployment and usage are still too expensive. I very much look forward to the later upgrades bringing that cost down. Thanks!

Li Chen: The first direct impact on us is the staking business. As Adam said, especially after the Shanghai upgrade, and with PoW gone after the Merge, the most direct thing for us is the change in the market. After the Shanghai upgrade, staking will open up all kinds of possibilities, and we will try to build new products with two aims: to improve the security of the network and to make staking more efficient. It is that simple.

Second, Ethereum's Rollup-centric approach opens up many other tracks, and Web3 will also be rooted in Ethereum. As I introduced earlier, we are now working on DID products, extending from staking, which is lower-level infrastructure, up toward the application layer. There are so many different Layer 2s, sidechains, and sub-chains, each with its own ecosystem, and users have separate identities in each. They especially need a unified identity that records everything they have done and can represent them. That gives us a bigger opportunity, and I have great expectations for Ethereum after the Merge. Thank you!

Dong Mo (host): Looking forward to seeing the new products. Adam?

Adam: I'll answer quickly. I think the staking industry is already huge, but it has even more potential ahead. The current figure of 50 million will at least double in the next one to two years.

Since the Beacon Chain launched about a year and a half ago, many people have struggled to understand how to stake: how do we run a validator node, how do we participate in staking? Now the questions have become: what is the best way to stake? Should you get involved in DeFi? Which protocol is more secure? The questions are more precise and specialized, which also means more innovation in the industry. Most of DeFi will eventually be built on staking, just as traditional finance is built on the concepts of interest and risk-free returns. So I do think Ethereum 2.0 and the Merge will open up a lot of new possibilities and room for innovation for us.

Dong Mo (host): Like everyone, I am very excited about Ethereum 2.0 and the Merge. For us, as Steve said, there will not be just one Rollup on Ethereum but many different Rollups, even application-specific Rollups built for particular applications. In a multi-chain ecosystem, no chain can be an island. How to build fast, stable communication bridges between Rollups, and how to let assets, state, and users flow freely among them, so that from the user's point of view there is no need to even know they are interacting with several different Rollup chains: this is our main line of business. We hope to grow together with Ethereum and serve more types of demand and more volume in the future.

Posted by: CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/a-hundred-flowers-bloom-a-quick-look-at-layer-2-progress-in-the-post-merge-era/