Cold “Wood” Chunhua: Layer 2’s Hundred Flowers Bloom

On September 22, the third day of the 2022 Shanghai International Blockchain Week and the 8th Blockchain Global Summit hosted by Wanxiang Blockchain Lab, the themed forum “Cold ‘Wood’ Chunhua: Layer 2’s Hundred Flowers Bloom” opened online.

Cold "Wood" Chunhua: Layer 2 Hundreds of Flowers Bloom

TL;DR

“zkEVM: Compatibility and Equivalence” – Alex Gluchowski, CEO of zkSync

In a ZK environment, the constraints for every supported instruction must be enforced at every step of execution, so the per-step cost of a ZK proof is the sum of the costs of all the instructions’ components.

“Arbitrum: Optimistic Rollup Technology in the Age of Rollups” – Steven Goldfeder, co-founder and CEO of Offchain Labs

There are many ways, technical and non-technical, to contribute to the thriving Arbitrum and Ethereum ecosystems. Offchain Labs is also building another solution in the Arbitrum technology suite called AnyTrust, which sends data to a so-called Data Availability Committee and reports the results back to Ethereum. Two chains already implement these technologies: 1. Arbitrum One, the longer-running chain, live for over a year and officially launched in August 2021; it is an optimistic rollup that puts all data on Ethereum. 2. Arbitrum Nova, the Arbitrum chain launched in August this year, which does not publish all data on Ethereum but uses a Data Availability Committee.

“Explaining StarkNet” – Eli Ben-Sasson, co-founder and president of StarkWare

What is StarkNet? It’s very much like Ethereum, but it’s an L2. You can write smart contracts on StarkNet, send transactions to those contracts, and it supports general-purpose computation and composability. Thanks to the magic of STARK proofs, you can think of StarkNet as something very similar to Ethereum, only with much lower gas fees.

“The Design and Architecture of Scroll” – Zhang Ye, co-founder of Scroll

Scroll is building an EVM-equivalent ZK rollup whose design decisions follow security, efficiency, EVM equivalence, and decentralization. Its architecture consists of three parts: Scroll nodes, smart contracts on chain, and a decentralized prover network. Scroll has now completed its pre-alpha testnet. In the second phase, developers will be invited to deploy smart contracts on the network and build additional applications; the third phase will open the outsourcing of layer-2 proofs, inviting the community to run prover nodes; the fourth phase reaches the zkEVM mainnet stage, deployed and launched after strict code auditing and performance improvements; the fifth phase will deploy a decentralized sequencer to make the zkEVM more efficient.

zkSync CEO Alex Gluchowski: “zkEVM: Compatibility and Equivalence”

Cold "Wood" Chunhua: Layer 2 Hundreds of Flowers Bloom

zkSync is a deeply mission-driven protocol: everything we do, every technical design decision, serves the mission of accelerating mass adoption in the crypto space. As you will see, that also shaped the choices we made around zkEVM.

In fact, zkEVM itself is mission-driven, because the EVM has become the JavaScript of the blockchain world: the universal language of the internet of value, with so many tools, services, libraries, and pieces of infrastructure that it is difficult to avoid. In other words, the EVM will be with us for a long time.

ZK is a very interesting technology. In fact, it is the only way to escape the blockchain’s impossible triangle and achieve unlimited scalability while fully guaranteeing the security of every transaction, which is why we had to combine it with the EVM in a zkEVM.

You’ve probably heard the rather clichéd question of whether it is enough to be EVM compatible or whether full EVM equivalence can be achieved, and whether the latter is even necessary.

Some protocols claim to be equivalent to the EVM, but we think this is a matter of degree. Vitalik previously published an excellent article that visually laid out the different degrees of EVM compatibility among zkEVMs in a chart, and argued that the higher the EVM compatibility, the greater the performance sacrifice.

So here I’ll explain in more depth what each degree of EVM compatibility actually means, where the performance sacrifice comes from in a ZK implementation, which option is preferable and which one we chose, and how that choice affects users.

We’ll start at the bottom. Type 4 is EVM compatibility at the source-code level: any source code that runs on the EVM can be brought into the zkEVM environment and run.

Bytecode compatibility covers types 3, 2.5, and 2 and above, and the higher the type, the more functionality is compatible, such as the APIs or exactly the same gas metering as layer 1.

The top type also proves the complete root hash, including consensus, storage, updates, and so on. Let’s explore further.

Beginning with type 4, which is the most performant: this kind of EVM compatibility compiles existing source code into a specialized instruction set, something like a RISC.

Actually it’s not literally a RISC, but it is very similar, because every instruction is optimized to run in the zkEVM context.

But the problem in a ZK environment is that the constraints for all instructions must be enforced at each step of execution, so the per-step cost of the ZK proof is the sum of the costs of every instruction’s components.

Therefore, to maximize performance, we need to keep the number of instructions as low as possible while retaining enough flexibility to express arbitrary code and compile it to that instruction set.

And the instructions need to be very few, very atomic, and very simple, which also means the performance advantage is orders of magnitude over any other type. There is room for innovation here; you can do some really interesting things that improve the user and developer experience.
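
To make the cost intuition concrete, here is a toy model of per-step proving cost. The instruction sets and per-instruction constraint costs below are invented for illustration and have nothing to do with zkSync’s real circuits; the point is only that the per-step cost sums over the whole instruction set, so a small atomic set wins by a wide margin.

```python
# Toy model of per-step proving cost in a ZK VM (illustrative numbers only,
# not zkSync's real circuit costs). The constraints for EVERY supported
# instruction are laid down at EVERY execution step, so the per-step cost
# is the sum over the whole instruction set.

def per_step_cost(instruction_set: dict[str, int]) -> int:
    """Constraint cost paid at each VM step = sum over all instructions."""
    return sum(instruction_set.values())

# Hypothetical small RISC-like set: few, simple, atomic instructions.
risc_like = {"add": 1, "mul": 2, "load": 3, "store": 3, "jump": 1}

# Hypothetical CISC-like set modeled on the EVM's many, uneven opcodes.
evm_like = {f"op_{i}": cost for i, cost in
            enumerate([1, 2, 3, 5, 8, 20, 50, 100] * 16)}

steps = 1_000_000  # length of the execution trace being proven
print("RISC-like total constraint cost:", per_step_cost(risc_like) * steps)
print("EVM-like  total constraint cost:", per_step_cost(evm_like) * steps)
```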

For example, accounts can be abstracted: transactions can be executed through MetaMask or any other wallet, or through a smart-contract wallet such as Argent, with social key recovery, different signature schemes, native multisig, and so on. That is the advantage of this approach.

In terms of developer experience, you can bind libraries written in any modern language and compile them to the zkEVM using the very mature LLVM compiler framework.

LLVM front ends exist for languages such as Python. The main disadvantage of this approach is that tools that are EVM-compatible at the opcode level don’t work out of the box, so these tools, mainly debuggers and tracers, need dedicated support and must be adapted for zkEVM and zkSync, or any other protocol taking this approach. But there aren’t many such tools, so type 4 remains viable.

This is why we chose type 4 from the very beginning for our mainnet: in our view, user experience and performance are critical. If the user experience doesn’t live up to expectations, or doesn’t match the modern internet experience, the end result won’t really attract millions of new users.

In terms of performance, as mentioned before, the instruction set needs to be kept minimal and extremely fast in order to unlock more scenarios.

Think of social networks, games, or any scenario that requires frequent transactions, such as small transfers. In these scenarios an order-of-magnitude difference in performance is critical, and it matters to users whether they pay 10 cents or 0.01 cents.

Now let’s move on to the other zkEVM types Vitalik mentioned, which could be called CISC architectures: they adopt a very complex instruction set, namely the EVM’s, which means supporting instructions whose costs vary enormously.

We have two ways to approach the problem. One is to implement a native ZK circuit that supports all instructions: every cycle of the virtual machine proves the execution trace of the smart contract, and constraints must be added for the implementation of every opcode.

But since the EVM was not designed for a ZK environment, many of its instructions are inelegant and inconvenient to constrain, their operation is very complicated, and they run several orders of magnitude slower than a RISC instruction set purpose-built for ZK, so this plan is not feasible.

Another option is to try to emulate the EVM’s opcodes, implementing Ethereum’s complex instruction set on top of smaller micro-opcodes. The problem is that every opcode, whether complex or simple, even basic arithmetic, carries a lot of overhead: you need to read the bytecode from memory byte by byte, analyze and parse it, determine which instruction it is, jump to the correct handler, and then process the operands, and so on. That is orders of magnitude more expensive, but it is feasible and may enable interesting applications in certain scenarios.
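
A minimal sketch of that fetch-decode-dispatch overhead, using a tiny three-opcode subset (the opcode numbers match the real EVM’s ADD, PUSH1, and STOP, but everything else is simplified for illustration):

```python
# Minimal sketch of the "emulate EVM opcodes with micro-opcodes" approach.
# The point is the fixed overhead every opcode pays, even cheap arithmetic
# like ADD: fetch a byte from memory, decode it, dispatch to a handler.

ADD, PUSH1, STOP = 0x01, 0x60, 0x00   # real EVM opcode numbers

def interpret(code: bytes) -> list[int]:
    stack: list[int] = []
    pc = 0
    while pc < len(code):
        op = code[pc]              # micro-step 1: fetch the byte
        pc += 1
        if op == PUSH1:            # micro-step 2: decode + dispatch
            stack.append(code[pc])  # micro-step 3: read the operand
            pc += 1
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append((a + b) % 2**256)   # EVM words wrap at 256 bits
        elif op == STOP:
            break
        else:
            raise ValueError(f"unknown opcode {op:#x}")
    return stack

# PUSH1 2, PUSH1 3, ADD -> [5]; every step paid the fetch/decode overhead.
print(interpret(bytes([PUSH1, 2, PUSH1, 3, ADD, STOP])))
```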

If you look at the performance difference, the gap between type 3, type 2.5, and type 2 is not significant. If you interpret or run the complex instructions in your own way, the overall gas cost does not increase much, because basic arithmetic in a ZK environment is very cheap, and supporting 100% of the API does not really hurt program performance unless you use many heavy operations, which can then be optimized for the ZK world.

But the question is: do we need to stop there? The answer is no. We can start with type 4 and keep improving EVM compatibility by adding functionality within the framework of the chosen base paradigm.

For example, if we simply implement a smart contract that can interpret EVM opcodes, we can make zkSync a system that supports both native, high-performance compiled smart contracts and existing EVM bytecode.

Although it may run a lot slower, such a smart contract can still execute. A contract can be written in Solidity, or in a language like Rust, compiled, and run natively in the zkEVM environment, while legacy EVM bytecode runs through the interpreter. This is a very simple project, fully achievable in a few months.

If we also want to implement type 2.5, all we need to do is support the full Ethereum API on the smart-contract side, which means supporting all the hashes and all the precompiles. In fact, zkSync has supported Ethereum’s native hashes, such as Keccak and SHA-256, from day one, and will support them when our mainnet launches a month from now, so all smart contracts using Keccak or SHA-256 will produce exactly the same results as on layer 1.

Support for all the precompiles is something we plan to finish by the end of the year. The most complex part is support for elliptic-curve pairings; work on this has already started, and the rest is simple and already supported. Adding the interpreter and 100% API support will make zkSync a type 2.5 EVM-compatible system.

The interesting thing is that if you start at the bottom, you can keep improving all the time. But if you start with a higher-performing, more advanced system, you can only simulate a lower-performing system.

Just as we can emulate the old Mac OS on a MacBook, and even go back and emulate an old mainframe or Unix machine, the reverse is not possible: you can’t emulate modern macOS on those older systems. It’s a one-way street, so you have to start with the highest-performing option.

Therefore, we are currently at EVM-compatibility type 2.5. To improve compatibility further, we would need to support exactly the same gas accounting as Ethereum, plus storage and consensus compatibility.

And that doesn’t make sense for an L2. It makes sense for Ethereum itself, but not for a layer 2. To understand why, we need to understand the cost differences between L1 and L2.

Resource pricing is different on layer 2, which is why scaling is possible at all. Comparing a rollup with Ethereum, bandwidth costs are about the same, but computation on L2, especially in a zk-rollup environment, is very cheap; it is necessarily much cheaper than replicating it across Ethereum’s tens of thousands of nodes. Storage on Ethereum is also very expensive, because the full node state must be synchronized. That isn’t needed on layer 2, because the zero-knowledge proof verifies the storage updates, and users only need to download the hash and the state delta, so it is much cheaper.
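
A toy fee model makes the pricing gap visible. All the unit costs below are invented for illustration; real gas schedules and prices vary constantly:

```python
# Toy fee model contrasting L1 and rollup resource pricing (all numbers
# invented for illustration). On L1 you pay for execution replicated across
# every node; on a zk-rollup you pay mostly for calldata, because one proof
# replaces re-execution and users track state deltas, not full node state.

def l1_fee(compute_units: int, storage_slots: int) -> float:
    # Execution and storage replicated on every node: both priced high.
    return compute_units * 1.00 + storage_slots * 50.0

def rollup_fee(calldata_bytes: int, compute_units: int, batch_size: int) -> float:
    calldata = calldata_bytes * 0.10   # still posted to L1
    compute = compute_units * 0.001    # proven once, off-chain: cheap
    proof_share = 500.0 / batch_size   # one proof amortized over the batch
    return calldata + compute + proof_share

print("L1 fee:    ", l1_fee(compute_units=200, storage_slots=2))
print("Rollup fee:", rollup_fee(calldata_bytes=100, compute_units=200,
                                batch_size=1000))
```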

If you insist on supporting exactly the same gas schedule as L1, you are asking for trouble: either you forgo L2 performance optimizations, or you become vulnerable to DDoS attacks.

That’s why we don’t think it is necessary to push compatibility further when the resources being priced cost so much less on L2; zkSync will stick with type 2.5.

Higher compatibility also doesn’t make sense in my opinion. If the cost of Ethereum full-node verification falls, the community could decide to reduce the block size, drastically increase layer-1 storage fees, and use layer 1 only to verify the validity proofs of ZK rollups and the fraud proofs of optimistic rollups, pushing all applications onto L2, which is the direction we expect for the future. Therefore, type 2 and type 1 EVM compatibility achieve full EVM equivalence for only a slight marginal benefit.

Steven Goldfeder, co-founder and CEO of Offchain Labs: “Arbitrum: Optimistic Rollup Technology in the Age of Rollups”

Cold "Wood" Chunhua: Layer 2 Hundreds of Flowers Bloom

What is a rollup, how is it defined technically, and how exactly are rollups leading Ethereum scaling? Today I want to explore in depth how Arbitrum is built from a technical point of view, and how the community can work together to build the Arbitrum ecosystem. I hope that after this talk you will better understand how Arbitrum works and how vast the Arbitrum ecosystem is, and also how to get involved.

From a technical and non-technical perspective, there are many ways to contribute to the thriving ecosystem of Arbitrum and Ethereum.

The history of rollups goes back a long way, but I want to go back two years, because I think that was the point at which rollups were cemented as critical to Ethereum scaling.

In October 2020, Vitalik published a blog post laying out a rollup-centric roadmap for Ethereum. In other words, rollups are at the core of the Ethereum project and the development of Ethereum technology. Rollups are Ethereum’s scaling solution, bringing Ethereum’s security and decentralization to the masses.

But what is the problem? What is a rollup? And how does a rollup differ from other scaling solutions?

The core idea of a rollup is to publish all user transaction data to Ethereum and store it there, while the execution of those transactions does not happen on Ethereum. When you submit a transaction to Ethereum, you can think about it in two ways:

On the one hand, transactions are just blocks of data, made up of 0s and 1s.

On the other hand, transactions represent instructions; that is, those 0s and 1s encode operations that store and compute values.

What a rollup does is put all the data on Ethereum, letting Ethereum store the blocks of data, while the instructions are executed off-chain and the results reported back to Ethereum. The key value proposition of a rollup is that its security comes from Ethereum: Ethereum guarantees, on the one hand, the reliability and correctness of the data stored on chain, and on the other hand, the correctness of the execution that happens off-chain.

Now the question becomes: how do we get Ethereum to verify what was done off-chain? The key is a mechanism that proves to Ethereum not only that the content stored on the Ethereum chain is correct, but also that the off-chain execution was correct.

In an optimistic rollup, interactive fraud proofs are used to prove to Ethereum that the off-chain execution and the transaction results we reported back are correct. This lets a large amount of execution work move off Ethereum, which is what achieves the scaling: our use of Ethereum is very lean, since we use it only to store data rather than to execute transactions, and that frees up much more Ethereum capacity.
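
The idea behind interactive fraud proofs can be sketched in a few lines. This is a heavily simplified illustration, not Arbitrum’s actual dispute protocol: two parties who disagree about an execution bisect their traces of intermediate state hashes down to the first step where they diverge, and layer 1 only has to re-execute that single step.

```python
# Sketch of the interactive fraud-proof idea behind optimistic rollups
# (heavily simplified; not Arbitrum's actual dispute protocol). Asserter
# and challenger each hold a trace of intermediate state hashes; bisection
# finds the first step where they diverge, and L1 re-executes ONLY that
# step instead of the whole computation.

def first_divergence(honest: list[str], claimed: list[str]) -> int:
    """Binary-search the earliest index where the two traces disagree."""
    lo, hi = 0, len(honest) - 1    # traces agree at step 0, disagree at end
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if honest[mid] == claimed[mid]:
            lo = mid               # still agree: the dispute is later
        else:
            hi = mid               # already disagree: the dispute is earlier
    return hi                      # first disputed step, re-executed on L1

honest  = ["h0", "h1", "h2", "h3", "h4", "h5"]
claimed = ["h0", "h1", "h2", "X3", "X4", "X5"]  # asserter cheated at step 3
print("L1 must re-execute only step", first_divergence(honest, claimed))
```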

Offchain Labs is also building another solution in the Arbitrum technology suite, called AnyTrust. AnyTrust and rollup technology are actually very similar, but there are key differences between the two.

AnyTrust does not put all the data on Ethereum; instead it sends the data to a so-called Data Availability Committee, and the results are then reported back to Ethereum, still backed by a fraud-proof mechanism. In a rollup the biggest cost is data storage, so with a committee responsible for storing the data, costs can be reduced significantly.

In the process, you no longer have the full security of Ethereum, since you rely on the Data Availability Committee to store the data. However, AnyTrust is still highly secure, much more secure than a sidechain.

As the name suggests, “any trust” means you only need to rely on one or two members of the committee, whereas sidechains generally require you to trust the majority of participants, or even two-thirds of the validators, before you can trust the chain. Arbitrum Rollup relies on Ethereum to store data, while Arbitrum AnyTrust uses a Data Availability Committee, but both technologies are much more secure than other scaling solutions.

Now that I’ve introduced the technologies under development, what are the concrete blockchain implementations? Two chains already implement them.

(1) Arbitrum One, the longer-running chain, live for more than a year and officially launched in August 2021. It is an optimistic rollup that puts all data on Ethereum.

(2) Arbitrum Nova, the Arbitrum chain launched in August this year, which does not publish all data on Ethereum but uses a Data Availability Committee. I’ll reveal later who the committee members are.

What do Arbitrum One and Arbitrum Nova have in common? First, they are both general-purpose blockchains: contracts can be deployed on them, and users can interact with them freely and permissionlessly. Both are far better than Ethereum on scalability and fees. Arbitrum One, for example, is on average 10-50x cheaper than Ethereum, and Arbitrum Nova, because its data is not stored on Ethereum, is 30-150x cheaper.

Both chains also achieve extremely fast confirmation. If you use Arbitrum One or Arbitrum Nova, you may have noticed that once you press the “confirm transaction” button, you get the result of the transaction’s execution almost immediately. Fast confirmation is a feature of all Arbitrum products, and it delivers the kind of responsive user experience people are already used to.

Are the two chains competing with each other? The answer is no. Although both are general-purpose blockchains, they have different focuses: Arbitrum One hosts a strong DeFi ecosystem along with very strong, high-value NFT projects, while Arbitrum Nova is aimed mainly at gaming and social projects.

There are three principles we believe are crucial when it comes to scaling Ethereum, and these principles also apply to Arbitrum One, Arbitrum Nova, and any other chain currently being built.

(1) Low transaction costs. It sounds simple, but it’s what users really need: when we offer a scaling solution, what users want is to pay lower fees while still getting their transactions done. Technically, Arbitrum uses layer-1 Ethereum sparingly while still inheriting the security Ethereum guarantees.

(2) High security. You want cheap transactions, but you don’t want to sacrifice security to get them, so security must also remain very high. Neither Arbitrum One nor Arbitrum Nova trades security away for low transaction costs; both chains prioritize security. Arbitrum One is a full rollup with its data on Ethereum, while Arbitrum Nova relies on a Data Availability Committee for data storage, but both focus on providing high-security, low-cost transactions.

(3) Compatibility. If you already know how to develop on Ethereum and have written code for it, that code and knowledge should apply directly to Arbitrum. You may also hear terms like EVM compatibility and EVM equivalence.

Both Arbitrum One and Arbitrum Nova are as compatible with Ethereum as possible, which means all the code you have previously written for Ethereum and the knowledge you have gained there can be applied directly to Arbitrum. That is also why we have seen so many applications deploy successfully on Arbitrum One and Arbitrum Nova: having already been deployed on Ethereum, they are easy to migrate.

Over time, the development experience on Arbitrum will only get easier and better. The core principle is that all code, everything that works in the Ethereum environment, should be directly applicable on Arbitrum.

Those are the three core principles we uphold: transaction costs must come down, but without sacrificing security, while maintaining high compatibility with Ethereum, meaning that both developer tools and user tools work “out of the box.”

Next, let me share Arbitrum’s development timeline. In October 2020, at the same time Vitalik published the rollup-centric roadmap article, we launched the Arbitrum One testnet. It was a very important moment for us and for the Ethereum community, because it was the first general-purpose rollup running entirely on a testnet, and a great opportunity for us and for everyone who tested Arbitrum. During the months of testnet operation we learned a lot and made many improvements.

In May of the following year, Arbitrum One actually launched on mainnet, but initially it was open only to developers. When launching the mainnet, we wanted a fair launch: we didn’t want to give priority access to particular developers, applications, or infrastructure; we wanted everyone to be able to participate fairly. So from May to August 2021, for three months, only developers were allowed onto Arbitrum, and hundreds of developer teams came in to build and test applications on our network.

We hoped that when we officially opened the mainnet to the public, everyone would see an ecosystem with a hundred flowers blooming; that is the so-called “fair launch” strategy. People have compared a rollup to an amusement park: Arbitrum One is the new amusement park, and before it opens its doors to the public, the ride operators are let in first to build the attractions.

When we officially launched Arbitrum One in August 2021, many applications and pieces of infrastructure were already available for the public to use immediately, and the ecosystem was prosperous from day one. That is what the “fair launch” strategy achieved: all developers were on the same starting line, and when the gates of the “amusement park” opened to users and the public, everyone was ready, and you could see a hundred flowers blooming across the application ecosystem.

In the period that followed, a large number of applications were built on Arbitrum, but for us things were not over; we were not content to stop there and wanted to make Arbitrum even better. That’s why we launched Arbitrum Nitro: the Nitro devnet (testnet) opened to developers in April, and we migrated Arbitrum to Nitro in August. I’ll say more later about Nitro’s advantages; for us it was a huge upgrade, meaning lower fees, higher throughput, and a better experience for developers and users. August was a very busy month for the whole team: we not only migrated Arbitrum to Nitro, but also launched the Arbitrum Nova blockchain, the AnyTrust chain I introduced earlier, which officially went live that month.

First, Arbitrum One.

Next, let me introduce the Arbitrum One ecosystem: what projects are being built on it, what benefits it brings to users and developers, and what you can do on Arbitrum One.

So far, more than 50,000 contracts have been deployed on Arbitrum One, making it Ethereum’s leading layer-2 solution with about a 50% share of the rollup market. To date, more than 1 million unique addresses have used Arbitrum.

But what projects are in the ecosystem? Mainly blue-chip DeFi projects from Ethereum, including Uniswap, SushiSwap, Curve, and Aave. In addition, there are projects native to Arbitrum, meaning they did not migrate from Ethereum, such as Dopex, Vesta, GMX, Tracer, and Treasure: DeFi and NFT projects launched natively on Arbitrum. Together with the familiar DeFi blue chips from the Ethereum ecosystem, the Arbitrum ecosystem is now flourishing.

I just introduced compatibility, and it applies not only to applications but also to infrastructure: all of Arbitrum’s infrastructure is familiar to developers and works out of the box. Because compatibility is achieved from both the infrastructure and the application perspective, projects building on Arbitrum can use the same infrastructure they know from Ethereum. For example, Chainlink provides price feeds and other services for projects in the Arbitrum ecosystem.

In addition, there are tools like Gnosis Safe, as well as user-facing tools such as Etherscan. The idea is that whether you are a developer or an end user, Arbitrum should feel familiar the moment you arrive: if you know the Ethereum ecosystem, you will know the tools and applications on Arbitrum. And indeed, that is what we achieved.

I’ve just described what you can do on Arbitrum, but you might be asking yourself: how exactly do you get into the Arbitrum ecosystem, participate in the applications on it, and move funds in and out?

The good news is that there are many ways in. Nearly a dozen centralized exchanges, including Binance, FTX, and Crypto.com, now let users withdraw funds directly into the Arbitrum ecosystem, so you can enter Arbitrum straight from an exchange.

Beyond that, if your funds are on Ethereum or other blockchains, there are decentralized bridges that let you move between Ethereum and Arbitrum. You can use Arbitrum’s native bridge, or third-party bridges such as Hop and Synapse, which make it easy to migrate between Arbitrum and other blockchains.

In addition, Arbitrum One has fiat on- and off-ramps, and you can buy assets in the Arbitrum ecosystem with a credit card. For users who want to bring funds onto Arbitrum and enjoy a better user experience, a faster layer-2 system, and cheaper transactions, there are many ways into the ecosystem’s many applications.

Second, Arbitrum Nitro.

We went through the Nitro upgrade a few weeks ago, and now I’d like to cover it in more detail. From a technical point of view, what is the Nitro upgrade? Until a few weeks before Nitro’s launch, what ran on chain was the so-called Arbitrum Virtual Machine, the virtual machine used for interactive fraud proofs, which we had developed ourselves.

We had also built our own custom node software to support that virtual machine. Although both worked very well, we realized we could replace our home-grown components with more standard ones. By doing so, we can leverage the years of effort Ethereum ecosystem developers have invested, bringing the development experience much closer to Ethereum’s. In addition, Nitro introduces advanced calldata compression: data is compressed before being posted to Ethereum, so the posted data becomes smaller and consumes less Ethereum space, which means more transactions fit in the same amount of data.
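
Here is a rough sketch of why compressing a batch before posting it saves money. Nitro’s actual codec and batch format are different; zlib is used here purely for illustration, and the calldata prices (16 gas per nonzero byte, 4 per zero byte, per EIP-2028) are Ethereum’s layer-1 schedule:

```python
import zlib

# Sketch of why batch compression cuts L1 costs (Nitro's real codec and
# batch format differ; zlib is just for illustration). Ethereum charges
# calldata per byte: 16 gas per nonzero byte, 4 per zero byte (EIP-2028),
# so fewer posted bytes means proportionally less gas.

def calldata_gas(data: bytes) -> int:
    return sum(4 if b == 0 else 16 for b in data)

# A fake "batch": repetitive transfer-like records compress very well.
batch = b"".join(
    b"\x00" * 12                          # zero padding
    + bytes([i % 256]) * 20               # a 20-byte address-like field
    + (1_000_000 + i).to_bytes(32, "big") # a 32-byte amount-like field
    for i in range(100)
)
compressed = zlib.compress(batch, 9)

print("raw bytes:       ", len(batch),      "gas:", calldata_gas(batch))
print("compressed bytes:", len(compressed), "gas:", calldata_gas(compressed))
```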

On August 31st of this year, we successfully migrated to Nitro, on the one-year anniversary of opening to the public. For us it was a very important and symbolic event, but we are not content to stop there. A year after launching Arbitrum One we went back to first principles, asking ourselves how to make the Arbitrum ecosystem even more prosperous. We also made sure the migration was seamless: because the ecosystem was live and active, users did not need to do anything or make any preparations for the migration to complete.

The analogy I like to use is that Arbitrum One is an airplane already in the air, and we swapped the engines mid-flight: we removed our own Arbitrum Virtual Machine engine and switched over to the Ethereum virtual machine while the plane was still flying. That is what we achieved technically.

The question you’ll want to ask now is: what benefits did the Nitro upgrade bring? The headline is a huge increase in throughput, roughly 7x after the upgrade. And thanks to the advanced data compression I just mentioned, the storage savings are passed on to users, which means lower fees for the data that has to be published on your behalf.

For developers, Nitro also adds many features that improve the development experience. Because we now build on Ethereum’s node software, gas accounting and tracing support work exactly as they do in the Ethereum ecosystem. In other words, after the Nitro upgrade, compatibility with Ethereum improved further and keeps getting better and better, which I am very excited about.

Third, Arbitrum Nova.

What is Arbitrum Nova? It is the AnyTrust chain. The question now is: how does Arbitrum Nova’s AnyTrust chain actually work?

First, the data is not stored directly on Ethereum; it is sent to the Data Availability Committee. Technically and practically, transaction data is sent to the sequencer, which on an ordinary rollup would publish it to Ethereum. On an AnyTrust chain like Arbitrum Nova, however, the sequencer does not send the data to Ethereum; instead it sends the data to the Data Availability Committee, which signs a data availability certificate and posts that certificate to Ethereum. That is the technical principle behind Arbitrum Nova and the AnyTrust chain.
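
A simplified sketch of that certificate flow follows. The real DACert format, signature scheme, and committee parameters differ, and the “signatures” below are stubs. The key invariant is that with N members, of whom at least h are assumed honest, a certificate carrying N - h + 1 signatures guarantees at least one honest signer actually holds the data:

```python
import hashlib

# Sketch of an AnyTrust data-availability certificate (simplified; the
# real DACert format and signature scheme differ). With N members, of whom
# at least h are assumed honest, N - h + 1 signatures guarantee that at
# least one honest signer stored the data, so only the certificate, not
# the data itself, needs to go to Ethereum.

N, h = 6, 2                # hypothetical: 6 members, any 2 assumed honest
THRESHOLD = N - h + 1      # 5 signatures needed

def data_hash(batch: bytes) -> str:
    return hashlib.sha256(batch).hexdigest()

def make_certificate(batch: bytes, signers: list[str]) -> dict | None:
    """Committee members 'sign' (stubbed) the batch hash; None means fall
    back to posting the full batch on Ethereum like an ordinary rollup."""
    if len(signers) < THRESHOLD:
        return None
    return {"hash": data_hash(batch), "signers": signers}

batch = b"tx1|tx2|tx3"
cert = make_certificate(batch, signers=["m1", "m2", "m3", "m4", "m5"])
print("post to L1:", cert if cert else "full batch (rollup fallback)")
```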

Arbitrum Nova also has a fallback mechanism: if the Data Availability Committee goes down, or for any reason cannot accept the data, the chain rolls back to publishing its data on Ethereum. In other words, if the committee cannot perform its mission, the chain does not stop; it keeps sending data to Ethereum, and although fees rise, the chain keeps running like any normal rollup.

Who are the members of the Data Availability Committee? Some very strong Web2 and Web3 brands, including Reddit, ConsenSys, QuickNode, FTX, Google Cloud, and others: a committee spanning Web2 and Web3. The core idea is that you only need to trust one or two committee members to do their job well, and with major Web2 and Web3 brands participating, users can rest assured their data is stored securely.

The next question is: who are the users of Arbitrum Nova? In July 2020, Reddit held a scaling contest. The key point is that Reddit wanted to issue blockchain tokens, its community points, to community users so the community could engage more deeply. They wanted to bring the Reddit community-points experience to mainnet but didn’t know how to make it scale, so they asked the community for scaling proposals, and twenty or more teams, including us, submitted solutions.

In 2021, Reddit announced that Arbitrum had won the contest, and just a few weeks ago Reddit launched on the Arbitrum Nova mainnet. What’s interesting is that Reddit brought in over 200,000 users on the first day, which is very exciting. People often talk about reaching mass adoption, the next billion users. You don’t give a billion users direct access to crypto all at once; instead, you leverage existing ecosystems. Reddit has some very large communities, reaching about 400 million monthly active users, so closer cooperation between Arbitrum and Reddit creates opportunities for far more users to participate in the Arbitrum ecosystem, for example by entering Arbitrum Nova, where two active Reddit communities are already live.

How much gas can Nova save? Because Nova does not publish its data on Ethereum but sends it to the Data Availability Committee instead, gas fees drop even further. By comparison, Arbitrum One is already about 97% cheaper than Ethereum, and Nova is cheaper than Arbitrum One.

For example, a transfer costs less than about 1 cent, and an ERC-20 swap about 2-3 cents, so costs on Arbitrum Nova are very low.

Since not all data is published on Ethereum, Nova is also less exposed to Ethereum fee fluctuations, which means transaction fees on Nova are generally more stable.

The next question is: who is Arbitrum Nova designed for? You might ask whether Arbitrum One and Arbitrum Nova compete with each other. The answer is no: Arbitrum One focuses mainly on DeFi and NFT projects, while Arbitrum Nova targets game development and social projects; that is, high-throughput projects whose many users interact with the blockchain frequently and therefore need very low fees.

To sum up, Arbitrum Nova is an ultra-low-cost scaling solution focused on gaming, social networking, and other use cases that require high-frequency transactions.

Eli Ben-Sasson, co-founder and president of StarkWare: “Explaining StarkNet”

Cold "Wood" Chunhua: Layer 2 Hundreds of Flowers Bloom

The story of STARK technology and the cryptography behind it goes back thirty years. In the mid-1980s and early 1990s, mathematical innovations unlocked enormous power and placed it back in humanity’s hands.

We often see a picture like this in the movies: a young, weak child encounters a very powerful dragon and controls it in some magical way. In the blockchain world, the weak child is Ethereum, a computing device with real limitations.

Through some kind of magic, a huge amount of computing power can be brought under control. The “magic” here is mathematics: STARK is built on 30 years of mathematical research, much of which was invented, optimized, and then applied at StarkWare. The core idea is that mathematics lets you use cryptography to assert integrity.

What does integrity really mean? The novelist C.S. Lewis described it nicely: doing the right thing even when no one is watching.

In the same way, this mathematical technique lets people trust that a contract executed correctly, even without supervision, without watching every step of the computation. With this truly amazing mathematical technique, we can work on scaling Ethereum to meet the needs of the world.

A 30-year-old paper describes this remarkable mathematical innovation beautifully: a single reliable personal computer can supervise the operation of a herd of supercomputers, even if those supercomputers run extremely powerful but unreliable software on untested hardware. Ethereum is like that very weak and limited computer.

In the words of the paper, it is a personal computer, yet it can assert integrity, knowing the herd of supercomputers is doing the right thing while making no assumptions about their reliability. It sounds like magic, but the magic of mathematics makes it possible.

To share a personal story: about twenty years ago I was a young mathematics researcher interested in the theory of computation. One of the problems being studied at the time was making these beautiful, powerful magic proofs efficient. My collaborators, colleagues, and I worked very hard to take a theory that had been inefficient for decades and finally make it highly efficient, so that proofs could be generated and verified on ordinary, general-purpose computers.

At that stage StarkWare was founded, with the mission of using mathematics to uphold the integrity of blockchains and enable blockchain scalability. Along the way I also worked with other versions of zero-knowledge proof technology to bring privacy protection to blockchains; I am one of the founders of Zcash. But today I won’t talk about privacy; I’ll focus on the mathematics and the scalability of StarkNet.

I am often asked what it is like to go from mathematician to practitioner, from computer-science professor to entrepreneur. For me, going from very theoretical research to very practical applications was like crossing a “desert”: as a theoretician and scientist I wanted to talk about practical applications, while practitioners sometimes didn’t understand how to use the theory. For a long time my colleagues and I walked through that “desert,” until we finally reached the other side and built something that is genuinely efficient and usable today, but that is another story. This technology lets a user operate a weaker computer to verify the integrity of another, far more powerful computer. What does that have to do with blockchain?

Going back to the earlier formulation: a computer with limited performance can supervise and assert the integrity of a large computing cluster without re-executing the computation, and that is exactly why this technology matters for blockchains.

Imagine Ethereum as a single computer with many nodes running in a decentralized way. As a computer, its performance is very limited: demand for its computation is far higher than what it can actually deliver, which is why gas is expensive and the chain is congested. With STARK technology, this computer can be used to supervise much larger computations done off-chain, and the integrity of their results, with the same security level and trust assumptions as Ethereum itself.

At the same time, the integrity and security of all these computations are upheld through the magic of mathematics.

In today’s financial world we face two different ways of transacting. The first is very traditional: the bank’s credit cards and payment-processing flows. Put abstractly, someone is running a big computer, and everyone has to trust it.

If you want to know whether the whole system is honest, you simply have to trust that the government, the auditors, or others are doing the right thing. It is not a very inclusive system; in fact it is quite exclusive: you and I cannot become a bank and process these transactions. That is the traditional way, and it is very efficient at computing and processing transactions.

A blockchain like Ethereum, by contrast, is extremely inclusive, which is great: everyone can, and is very welcome to, connect a personal laptop to the Ethereum network and become part of its foundation of trust, and I hope everyone does so.

But for everyone to be able to connect to the Ethereum network with a laptop and be part of its trust base, we have to limit the amount of computation a laptop must carry.

In other words, a highly inclusive system like Ethereum is great, but very slow as a computing device. What we want is an inclusive system that allows everyone to add their own computer to the network and monitor network activity.

At the same time, the system should achieve, off-chain, the scale of a mainframe. The bridge connecting these different worlds is STARK and the mathematics behind it.

What are the options for scalability? First, you can make everyone buy a bigger computer; of course, some people will be turned away. Some very popular sidechains do exactly this, for example EOS and BSC: in effect, the solution requires nodes to run larger machines, and you can choose to buy big hardware and participate. But even then the scale is limited, and inclusivity is lost.

For example, one of the most popular blockchains today requires a minimum of 12 cores and 128GB of memory, which my computer doesn’t have. The other approach is to use something called fraud proofs, as in optimistic rollups such as Arbitrum. The hardware requirements are somewhat higher there too, which means my own laptop can’t join. Proponents argue that various game-theoretic incentives will make these larger machines keep each other honest, but so far the technology has not been exercised anywhere the way it is envisioned. If it does get used, it should be as safe as expected, but to me at least that remains unclear and unproven. So this scheme is somewhat faster, but somewhat less inclusive, and some people are turned away.

The method we take is based on mathematical proofs: the way of validity proofs. What we aim for is to let anyone run a very large computer and do whatever they need, but everything done there must be proven to L1, since that is the only network we trust.

StarkNet embodies this principle very well. When you operate on StarkNet, the security is Ethereum’s security: you don’t need to make any trust assumptions within the StarkNet ecosystem, because the security guarantee comes from the base layer, Ethereum. That is what StarkNet provides, and that is the power of mathematics.

Receipts are in fact a very old form of integrity proof. Think of a restaurant receipt as a string of characters asserting integrity: it proves to the customer the total to be paid, and that total comes from a series of computations. When you receive the receipt, you can check the result simply by redoing those computations, so a restaurant receipt is a proof of integrity.

But from a mathematical point of view receipts are not very sophisticated, and they don’t scale, because you have to redo the computation. A STARK proof is similar: think of it as something like a restaurant receipt, except that the length of the receipt, and the computation required to check it, is far smaller than the computation it attests to. Using this technology, you can process hundreds of thousands of transactions without making any trust assumptions about the prover and the processor, which could be the Dark Lord or Darth Vader himself.

No matter who processes these transactions, no node can cheat, because any update to the system must come with an integrity proof submitted to L1. Every update to the system state is accompanied by a proof, and no one can prove a false statement; if a proof exists, the execution was correct, even without L1 supervising every step. That is the power of STARK proofs. And it is not just a theory: it was a theory once, but now it is a working system.

Imagine minting NFTs. If you mint on Ethereum itself, you can fit a few hundred mints in a block. With our technology, a single STARK proof can attest to the execution of 600,000 NFT mints, all of which easily fit into one Ethereum verification. With this technology, blocks that could hold hundreds of transactions can scale to hundreds of thousands, even millions, of transactions per block.

StarkNet gives every developer and user this almost magical technology that amplifies Ethereum’s capabilities and scales it exponentially. What is StarkNet? It’s very much like Ethereum, but it’s an L2: you can write smart contracts on StarkNet, send transactions to those contracts, and it supports general-purpose computation and composability, with much lower gas fees thanks to the magic of STARK proofs. From the user’s point of view, on Ethereum you submit transactions into the mempool, and miners package them into blocks.

What StarkNet does is very similar, but it runs off-chain, with Ethereum as the L1 base chain. Ethereum doesn’t know what happened on StarkNet and doesn’t need any extra trust assumptions. Users can send hundreds of thousands of transactions to the sequencer, which orders them one by one. The transactions are then sent to the prover, which generates a rigorous proof that all of them were executed and the state updated correctly; in fact, the proof is exponentially smaller than the computation and the transactions. The proof is submitted to Ethereum, where a gatekeeper verifier smart contract is responsible for checking its integrity.
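
The roles in that pipeline can be sketched as a toy program. Everything here is illustrative: the “proof” is a stub hash, whereas a real STARK is exponentially cheaper to check than re-running the computation, which no toy hash can demonstrate.

```python
import hashlib

# Toy pipeline for the roles described above: the sequencer orders
# transactions, the prover attests to the state transition, and an L1
# contract checks a small proof instead of re-executing. The "proof" is a
# stub standing in for a real STARK.

def sequence(mempool: list[str]) -> list[str]:
    return sorted(mempool)                    # sequencer: pick an order

def execute(state: int, txs: list[str]) -> int:
    for tx in txs:                            # off-chain execution (cheap)
        state = (state * 31 + len(tx)) % 2**64
    return state

def prove(old: int, new: int, txs: list[str]) -> str:
    blob = f"{old}->{new}:{'|'.join(txs)}".encode()
    return hashlib.sha256(blob).hexdigest()   # stub in place of a STARK

def l1_verify(old: int, new: int, txs: list[str], proof: str) -> bool:
    # Stub check; a real STARK verifier never re-touches the transactions.
    return proof == prove(old, new, txs)

txs = sequence(["transfer A->B", "mint NFT", "swap X/Y"])
old_state = 42
new_state = execute(old_state, txs)
proof = prove(old_state, new_state, txs)
print("L1 accepts state update:", l1_verify(old_state, new_state, txs, proof))
```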

Recall the picture from before: a weak child with enormous magic power controls a huge magical creature, a giant dragon. Here, the magic is mathematics and STARKs: the validator (the child) is the L1 validator, Ethereum, and yet it can harness the very powerful computation on StarkNet, which lives at the L2 layer.

StarkNet comes with a new language called Cairo. Let me briefly explain why there is a new programming language. You may wonder why Ethereum itself got a new programming language. When Ethereum appeared, around 2015, there were already very good languages like Python and C, but Vitalik, Gavin, and other experts designed a new virtual machine, the EVM, and new programming languages to go with it.

There are several such languages, Solidity being the best known. We likewise now recommend that users and developers adopt a new programming language, Cairo, for reasons similar to those that led Vitalik and the others to invent Solidity: running a blockchain creates new systems of constraints, and you need a programming language designed to satisfy those constraints.

If you want maximum scalability, you need a programming language that unlocks that potential. For StarkNet, that language is Cairo, and you can use it to write all kinds of applications. So far, developers have written hundreds of applications in it, including voting, virtual identity, and games. I hope everyone joins StarkNet’s large and growing network of developers and users and embraces this magical technology.

Zhang Ye, co-founder of Scroll: “The Design and Architecture of Scroll”

Cold "Wood" Chunhua: Layer 2 Hundreds of Flowers Bloom

Before formally introducing the technical details, let me briefly explain what the Scroll project is. In short, Scroll is a general-purpose layer-2 scaling solution for Ethereum. As on Ethereum itself, developers can deploy smart contracts on Scroll and users can interact with the applications on it, but transaction fees are lower and throughput is higher.

Unlike some other scaling solutions, everything that happens on Scroll has its integrity verified on Ethereum, whether through ZK proofs or fraud proofs; in Scroll’s case, ZK proofs. Scroll’s security is therefore guaranteed, and even strengthened, because it is backed by Ethereum.

More specifically, we are building an EVM-equivalent ZK rollup. What does that mean? Technically, Scroll is based on a ZK rollup, which relies on validity proofs to show that everything that happens on Scroll is correct. ZK rollups are considered the purest scaling solution, resting entirely on cryptographic assumptions.

EVM equivalence here means supporting the EVM at the bytecode level. For developers, it means everything supported by the EVM can be supported: not just specific programming languages such as Solidity, but the Ethereum Virtual Machine at the bytecode level, together with all the related development tools.

So developers don’t need to know anything about ZK rollups to deploy on Scroll; the development experience on Scroll is exactly the same as on Ethereum’s layer 1. You can use all the familiar development tools and deploy in a near-identical environment.

Before going deeper into the technical details, let me first explain why we made these design decisions and the principles behind them.

First, security. Security is the most important task, and in a scaling solution the most important form of security is protecting users’ funds and data. Scroll settles on the most secure and decentralized base layer, Ethereum, so users do not need to rely on the honesty of Scroll nodes to keep their funds safe; they can rely entirely on the security of the underlying Ethereum layer, even though they actually transact on Scroll, because from a security standpoint everything depends on Ethereum underneath.

Second, efficiency. For users to enjoy a good experience on layer 2, we believe transaction fees should be extremely low, at least several orders of magnitude lower than fees on Ethereum.

In addition, we believe users should get prompt confirmation on layer 2: when you send a transaction to a layer-2 node, you get confirmation very quickly, along with fast finality, meaning your proof is verified on layer 1 very quickly.

Third, EVM equivalence. The EVM has a very active ecosystem. We believe an effective Ethereum scaling solution should give users and developers a seamless migration experience: whatever DApps and tools they use today, migration should be completely seamless.

EVM equivalence is the best way to achieve this, because users get exactly the same environment on Scroll. That is why EVM equivalence is always maintained; it is our goal and our original intention.

Fourth, decentralization. Decentralization is the core feature of blockchains, but it is often overlooked, or inappropriately traded away for efficiency, especially by some layer-1 chains. We believe decentralization is one of the most valuable properties of a blockchain: it keeps protocols and communities censorship-resistant and protects against coordinated attacks. We have considered decentralization in every aspect of Scroll, including decentralization of nodes, of provers, of developers, and of users, which is why we speak of decentralization across all levels.

These are the design principles that ultimately led us to our current technical design.

Security, efficiency, and EVM equivalence ultimately led us to a zkEVM-based ZK solution. As mentioned, ZK provides purely mathematical guarantees and does not depend on any economic game to resist attack, and it is also very efficient: the proving cost is amortized over a large number of transactions, so the per-transaction cost is very low. Compared with fraud proofs, validity proofs also give shorter time to finality: optimistic rollups based on fraud proofs take about a week to finalize on layer 1, whereas with validity proofs, if you can generate proofs quickly, you get finality confirmation on layer 1 very quickly.

Having settled on ZK, we also realized that the zkEVM is the holy grail of EVM equivalence. The idea behind the zkEVM is to use a succinct ZK proof to prove the correct execution of EVM bytecode. All previous ZK rollups were application-specific, designed for particular DApps or specialized transactions. If you can prove that EVM execution itself is correct, then the zkEVM is a truly general-purpose virtual machine.

Previously, people thought a zkEVM could not be implemented because its overhead was enormous, about two orders of magnitude higher than that of a normal application-specific circuit. But by leveraging the collaborative innovation of the entire community, our design incorporates recent breakthroughs, including new prover systems, proof aggregation, and even hardware acceleration for ZK.

Our open development approach lets us work with a very wide range of community members, especially the Ethereum Foundation’s Privacy and Scaling Explorations team, as well as other players in the community. We collaborate very closely, and applying the latest research has finally made the zkEVM possible.

Building on these research results, we are constructing a zkEVM-based ZK rollup that satisfies the design principles I just described. Next comes decentralization, and that requirement led us to build a decentralized prover network. When designing the Scroll system, and especially the zkEVM, we realized that putting the EVM into a ZK proof carries a very large overhead, mainly due to incompatibilities between the native fields. To reduce and shorten proving time, which drives the finality time on L1, we decided to build a permissionless, decentralized network of provers to help generate the proofs for layer-2 blocks.

In this way, we achieve two main technical advantages:

(1) Provers run in parallel and scale horizontally: the prover pool can be massively expanded simply by adding more prover nodes (a toy scaling model follows this list).

(2) The community is incentivized to run these prover nodes and to build better, significantly optimized hardware solutions for us. Because the community is incentivized, we do not need to rely on ourselves as a central party to build that hardware.
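As promised above, here is a back-of-the-envelope model of point (1), with assumed numbers only: if each proof takes a fixed time and blocks can be proved independently, the number of Rollers needed to keep up grows linearly with the ratio of proving time to block time.

```python
# Toy model: how many Rollers are needed to keep pace with block
# production. All numbers are assumptions for illustration only.
import math

PROOF_TIME_MIN = 10.0   # assumed minutes to prove one block on one Roller
BLOCK_TIME_MIN = 0.25   # assumed L2 block interval, in minutes

def rollers_needed(proof_time_min: float, block_time_min: float) -> int:
    """Rollers required to keep pace with block production, assuming
    each block is proved independently on its own Roller."""
    return math.ceil(proof_time_min / block_time_min)

print(f"Rollers needed: {rollers_needed(PROOF_TIME_MIN, BLOCK_TIME_MIN)}")
```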

If you let the community participate in the development process and provide enough incentives, the community is even willing to build dedicated proving hardware, much as miners build mining machines.

Next, I will introduce the overall architecture and design. For background, we must first review the ZK Rollup model. As everyone knows, Ethereum's transaction processing is very slow: block production is slow, because the chain is highly decentralized and relies on its consensus mechanism.

But for users, with Scroll it is possible to send transactions directly to Scroll instead of to Ethereum. Scroll quickly produces layer-2 blocks, then runs proving algorithms to generate a validity proof showing that the batch of transactions sent to Scroll was executed correctly. We also submit the necessary block data to layer-1 Ethereum as availability data.

The state roots before and after execution serve as public inputs to the ZK proof, which shows that applying these transactions produces exactly this state change. Layer 1 then only needs to verify the submitted proofs instead of re-executing all transactions, so verification time on layer 1 is greatly reduced.
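The shape of that layer-1 check can be sketched as follows. This is a minimal Python mock, not Scroll's actual contract; names such as `verify_proof` and the exact public-input layout are assumptions:

```python
# Minimal sketch of what an L1 rollup contract checks (not Scroll's
# real contract; verify_proof and the input layout are assumptions).
from dataclasses import dataclass

@dataclass
class Batch:
    prev_state_root: bytes
    new_state_root: bytes
    data_commitment: bytes  # commitment to the posted block data
    proof: bytes

def verify_proof(public_inputs: tuple, proof: bytes) -> bool:
    """Stand-in for the cryptographic check a real verifier performs."""
    return True  # placeholder

class RollupContract:
    def __init__(self, genesis_root: bytes):
        self.state_root = genesis_root

    def finalize(self, batch: Batch) -> None:
        # L1 never re-executes transactions; it only checks the proof
        # against the old root, the new root, and the data commitment.
        assert batch.prev_state_root == self.state_root, "stale batch"
        public_inputs = (batch.prev_state_root,
                         batch.new_state_root,
                         batch.data_commitment)
        assert verify_proof(public_inputs, batch.proof), "invalid proof"
        self.state_root = batch.new_state_root
```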

So we need a prover, but also some other nodes, including the block-producing node. This slide shows the architecture of Scroll, which consists of three parts:

(1) Scroll node.

(2) The smart contracts on chain, mainly the contracts for depositing funds and the Rollup contract on layer 1.

(3) The decentralized prover network, consisting of many prover nodes, which we call Rollers.

The first component is the sequencer. Inside the Scroll node, the sequencer is a fork of Geth (Go Ethereum), the most popular layer-1 Ethereum implementation; it accepts layer-2 transactions and produces layer-2 blocks. By simply reusing an existing Ethereum client implementation, we ensure consistent behavior between layer 2 and layer 1. Developers also get an environment they already know, with more than just an RPC interface, which makes deploying contracts more convenient.
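Because the sequencer is a Geth fork, standard Ethereum tooling should work unchanged. For example, with web3.py (v6 API) you would only swap the RPC endpoint; the URL below is a placeholder assumption, not an official endpoint:

```python
# Connecting to a Scroll node with standard Ethereum tooling (web3.py v6).
# The RPC URL below is a placeholder assumption, not an official endpoint.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://example-scroll-rpc.invalid"))

# The familiar Ethereum JSON-RPC surface works as usual:
print(w3.is_connected())
print(w3.eth.block_number)
```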

The second component is the relayer. The relayer relays information between the cross-chain bridge protocol and the Rollup protocol, along with other user data, including user transactions. In short, the relayer is responsible for passing messages between layer 1 and layer 2.

The third component is the coordinator. The coordinator sends execution traces to the Rollers: when the sequencer produces blocks, the coordinator collects not only the transaction data but all the information generated during execution (I will come back to this step shortly). Because we have a decentralized prover network, the coordinator must decide which Roller is responsible for proving which block, and then send the relevant block to that Roller. The Rollers generate proofs and send them back to the coordinator, which closes the loop.
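The coordinator's scheduling loop might look like the following sketch; the structure and names here are hypothetical, not Scroll's actual implementation:

```python
# Sketch of the coordinator's dispatch loop (hypothetical structure).
import queue

class Roller:
    """Stand-in for a prover node; prove() mocks proof generation."""
    def __init__(self, name: str):
        self.name = name
    def prove(self, trace: dict) -> str:
        return f"proof<block {trace['block']} @ {self.name}>"

def coordinator(traces: dict, rollers: list) -> dict:
    """Assign each block's execution trace to an idle Roller, collect proofs."""
    idle = queue.Queue()
    for r in rollers:
        idle.put(r)
    proofs = {}
    for block_number, trace in traces.items():
        roller = idle.get()                        # pick an idle Roller
        proofs[block_number] = roller.prove(trace)
        idle.put(roller)                           # Roller becomes idle again
    return proofs                                  # handed on for aggregation

print(coordinator({1: {"block": 1}, 2: {"block": 2}},
                  [Roller("r1"), Roller("r2")]))
```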

The zkEVM is at the heart of the entire design, so let us now dig a little deeper into the steps a transaction goes through in Scroll.

The Roller first receives execution traces from the coordinator: the execution steps, block headers, transaction data, bytecode, Merkle proofs, and other necessary data. These traces are then converted, using each circuit's input builder, into a form the ZK circuits can consume, and serve as the witness from which the proof is generated.

zkEVM consists of multiple circuits, each with a different purpose. The EVM circuit checks instruction execution, the RAM circuit handles memory operations, and the storage circuit is responsible for storage reads and writes; there are other circuits as well, such as the signature circuit. Each produces a proof, and finally an aggregation circuit aggregates these proofs into a single proof that is posted on chain.

In the end, each block, or each execution trace, yields one proof. This is the process that takes place inside the prover, that is, inside the Roller.
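Schematically, the per-circuit proofs feeding the aggregation circuit could be sketched like this; the circuit names follow the talk, while everything else is an illustrative stand-in (real circuits are arithmetic constraint systems, not Python functions):

```python
# Sketch of zkEVM's sub-circuits plus the final aggregation circuit
# (illustrative only; circuit names follow the talk).

def prove_circuit(name: str, witness: dict) -> str:
    """Stand-in for generating one sub-proof from a circuit's witness."""
    return f"proof<{name}>"

def prove_block(trace: dict) -> str:
    # Each sub-circuit checks one aspect of the same execution trace.
    sub_proofs = [
        prove_circuit("evm", trace),        # instruction execution
        prove_circuit("ram", trace),        # memory operations
        prove_circuit("storage", trace),    # storage reads/writes
        prove_circuit("signature", trace),  # signature checks
    ]
    # The aggregation circuit folds all sub-proofs into a single proof,
    # so only one proof per block goes on chain.
    return prove_circuit("aggregation", {"proofs": sub_proofs})

print(prove_block({"block": 1}))
```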

Next, I will share Scroll's specific workflow. Consider the ZK Rollup workflow first: you need a validity proof over the block data, but the two can be decoupled, because block data can be submitted in advance. So we split this into two steps, submitting the block data and then submitting the validity proof. The reason for this separation is stronger, earlier confirmation. A layer-2 block by itself gives you no guarantee; you have to trust the sequencer, since all data comes from the sequencer. But once the block data is on layer 1, you can re-execute the transactions yourself and obtain a stronger confirmation. Since generating the proof takes longer, the validity proof can be submitted at a later stage; this way you get a fast pre-confirmation whose strength is then upgraded.

We also distinguish different block states. A block that has been proposed by the sequencer and included in the layer-2 chain is Pre-Committed. A block is Committed once its transaction data has been sent to the Rollup contract on Ethereum. Finally, a block is Finalized when correct execution of its transactions has been proved and the validity proof verified; at that point, your transaction is final.
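The three block states form a simple one-way state machine. A minimal sketch, with state names from the talk and the transition code assumed:

```python
# Block lifecycle in Scroll as described above (sketch; state names
# from the talk, transition code assumed).
from enum import Enum, auto

class BlockState(Enum):
    PRE_COMMITTED = auto()  # proposed by the sequencer, in the L2 chain
    COMMITTED = auto()      # transaction data posted to the L1 Rollup contract
    FINALIZED = auto()      # validity proof verified on L1

# Transitions only ever move forward:
NEXT = {BlockState.PRE_COMMITTED: BlockState.COMMITTED,
        BlockState.COMMITTED: BlockState.FINALIZED}

def advance(state: BlockState) -> BlockState:
    return NEXT[state]  # raises KeyError for FINALIZED: terminal state
```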

This slide shows the workflow of a transaction in Scroll. First, the transaction is sent to the sequencer and the block becomes Pre-Committed. Next, the sequencer uploads the block data to layer 1, to the Rollup contract; at this stage the block becomes Committed. In the next stage, execution traces are produced for the block. These traces are needed to generate proofs, and the coordinator chooses a Roller to generate the corresponding proof.

Say for the first block I choose one prover; the coordinator then schedules the second block's execution trace to another prover. Since these provers run in parallel, proof generation is also parallel: for example, three provers can generate proofs for three different blocks at the same time. The proofs are sent back to the coordinator, which verifies them, and then either signs off or sends them to another Roller, which executes and proves again. Finally, the coordinator aggregates all the proofs and sends the aggregated proof to layer 1 for contract verification. The contract already holds the block data from before; combining it with the proof finally verifies and confirms the layer-2 transactions.
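The parallel proving described here maps naturally onto a worker pool. A hedged sketch using Python's standard library, with proof generation mocked (real Rollers are separate machines; this only illustrates the shape):

```python
# Parallel proof generation across Rollers, mocked with a thread pool.
from concurrent.futures import ThreadPoolExecutor

def prove(block_number: int) -> str:
    """Stand-in for a Roller proving one block."""
    return f"proof_{block_number}"

def aggregate(proofs: list[str]) -> str:
    """Stand-in for the aggregation step before the single L1 submission."""
    return "aggregated(" + ",".join(proofs) + ")"

blocks = [1, 2, 3]
with ThreadPoolExecutor(max_workers=3) as pool:  # 3 Rollers in parallel
    proofs = list(pool.map(prove, blocks))

print(aggregate(proofs))  # one aggregated proof goes to the L1 contract
```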

This slide shows the block states: three types of blocks, Pre-Committed, Committed, and Finalized, with different colors representing the different states. We already have a pre-alpha testnet; if you want to participate in testing or contribute, you can scan the QR code on the slide.

Finally, I would like to share our roadmap and current progress. We have completed the pre-alpha testnet, which is permissioned and supports only user interaction; in this version you can try out some on-chain apps.

In the second stage, we will invite developers to deploy smart contracts on our network and develop additional applications.

In the third stage, we hope to start outsourcing layer-2 proving, that is, the proof generation process. We hope to invite the whole community to participate: it is permissionless, and anyone can join the proving network and become a proving node.

In the fourth stage, the zkEVM mainnet stage, we will deploy and launch the mainnet after strict code auditing and performance improvements.

In the fifth stage, we will deploy a decentralized sequencer, making zkEVM more efficient from both a design and a technical point of view.

We have a very ambitious goal: to bring the next one billion users to Ethereum, because we believe all interactions will happen on layer 2. We also believe in open, open-source communities: everything we do is open source, built on the EVM and with contributors from the Ethereum community. Community-wide collaboration keeps our entire development process transparent, and external code audits are also required. We are constantly pursuing decentralization at all levels, including the decentralization of the prover network, which is the first step on the road to decentralization.

Posted by: CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/cold-wood-chunhua-layer-2-hundreds-of-flowers-bloom/