The 7th AMA of the Ethereum Foundation Research Team (Part 1)

Editor’s Note: On January 7, 2022, the Ethereum Foundation research team held their seventh AMA on Reddit, covering L2, sharding design, the broader roadmap, MEV, EIP-1559, and more. ECN organized and compiled most of the questions from this AMA. Note that members of the Foundation’s R&D team express personal opinions and speculation on certain topics; to avoid misinterpretation, please refer to the original post.

Due to its length, this article will be published in two parts. This part covers PBS, sharding design, and Layer 2.

PBS and sharding

Question by Maswasnos

What does the Ethereum Foundation think of Dankrad’s new sharding proposal that requires independent high-performance block builders? That is, does the research team anticipate that sharding will require some kind of datacenter-level resources in the network, or is a more distributed implementation possible?

Vitalik Reply

The only nodes that must reach data-center level are the builder nodes (see “Vitalik: How to improve transaction censorship resistance for PBS solutions”). Validators and ordinary user nodes will continue to need only normal computers to run (in fact, one benefit of PBS is that once Verkle trees are deployed, validators can be completely stateless!).

Another point worth noting is that it should be possible to build distributed block builders. There would be a coordinating node that collects transactions together with data commitments, but the coordinating node only needs the performance of an ordinary computer, because each data commitment is constructed independently by some other node and transmitted to the coordinator. The coordinating node would need to rely on some sort of reputation system to ensure that the data behind these commitments is actually available, but this is something that an L2 DAO protocol could easily do.
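To make this concrete, here is a minimal sketch (in Python, with entirely hypothetical names, scores, and thresholds, not any actual specification) of a coordinating node that only aggregates data commitments produced by worker nodes, gating submissions on a reputation score so that it never has to handle the full data itself:

```python
# Illustrative sketch (not a spec): a coordinating node assembling a block
# from data commitments produced by independent worker nodes, gated by a
# simple reputation score. Names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class DataCommitment:
    worker_id: str
    commitment: bytes  # e.g. a commitment to one segment of shard data

class Coordinator:
    """Aggregates commitments only; it never touches the full data,
    so it can run on an ordinary machine."""

    def __init__(self, min_reputation: float = 0.9):
        self.min_reputation = min_reputation
        self.reputation: dict[str, float] = {}  # maintained by some external reputation system / DAO
        self.pending: list[DataCommitment] = []

    def register_worker(self, worker_id: str, score: float) -> None:
        self.reputation[worker_id] = score

    def submit(self, c: DataCommitment) -> bool:
        # Only accept commitments from workers trusted to actually publish the data behind them.
        if self.reputation.get(c.worker_id, 0.0) < self.min_reputation:
            return False
        self.pending.append(c)
        return True

    def assemble_block(self) -> list[bytes]:
        block = [c.commitment for c in self.pending]
        self.pending.clear()
        return block

# Usage: one trusted worker, one that has not yet earned enough reputation.
coord = Coordinator()
coord.register_worker("worker-a", 0.95)
coord.register_worker("worker-b", 0.40)
coord.submit(DataCommitment("worker-a", b"\x01" * 48))
coord.submit(DataCommitment("worker-b", b"\x02" * 48))  # rejected
print(len(coord.assemble_block()))  # -> 1
```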

Joseph Schweitzer (Ethereum Foundation) Reply

I’m sure some researchers will add their own thoughts, but it’s worth noting that the Ethereum Foundation as an organization doesn’t really have a unified position on ongoing research and development.

As stated in blog posts over the past few years, the Ethereum Foundation is more of a bazaar than a cathedral. Relatively independent R&D work sometimes sits under the umbrella of the Ethereum Foundation, which saves management overhead. But just as the Ethereum Foundation does not endorse any particular client or EIP, opinions on Dankrad’s and other proposals are the personal opinions of individual researchers. There is often plenty of debate and disagreement within the team as well.

Diligent-Mouse Reply

It’s worth clarifying that the proposer/builder separation (PBS) scheme that Dankrad’s proposal relies on heavily is itself still under discussion. Under PBS, you can continue to run a normal node on ordinary consumer hardware at home. You can still run a 32 ETH validator on that hardware that proposes blocks to the chain and validates blocks proposed by other validators. What changes under PBS is that validators rely on external “builders” to select and order packaged transactions in the most profitable way. That is essentially what Flashbots does today with whitelisted miners. Those “builders” would need high-performance hardware and connectivity to figure out the most profitable way to package these blocks and offer them to validators.

When this idea is combined with the new sharding proposal, it greatly simplifies sharding for everyone.
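As a toy illustration of the split described above (purely illustrative; the names and values are made up, and this is not the actual protocol), the proposer’s job reduces to comparing bids, while all the heavy MEV-searching work lives with the builders:

```python
# Toy model of proposer/builder separation (illustrative only):
# builders compete on how much they will pay the proposer; the proposer
# only needs to compare bids, not to construct or optimize blocks itself.
from dataclasses import dataclass
from typing import Optional

@dataclass
class BuilderBid:
    builder: str
    value_wei: int        # payment offered to the proposer
    block_header: bytes   # commitment to the block; body revealed after selection

def choose_bid(bids: list[BuilderBid]) -> Optional[BuilderBid]:
    """Runs fine on consumer hardware: just pick the highest-paying bid."""
    return max(bids, key=lambda b: b.value_wei, default=None)

bids = [
    BuilderBid("builder-1", 120_000_000_000_000_000, b"hdr1"),  # 0.12 ETH
    BuilderBid("builder-2", 135_000_000_000_000_000, b"hdr2"),  # 0.135 ETH
]
print(choose_bid(bids).builder)  # -> builder-2
```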

Liberosis question

How will this new design affect the roadmap/timeline?

Danny Ryan (Ethereum Foundation) Reply

Dankrad’s design acknowledges that multi-shard MEV will exist in any sharding design, so the market will tend toward multi-shard builders (specialized non-validators) supplying shard data to proposers (validators). In any such design these multi-shard builders would consume significant resources, running complex computations to capture MEV. And, in any sharding design, this affects neither end users’ resource requirements nor validators’ ability to participate in consensus.

The above means that, in general, there is value in creating a “firewall” and a market between proposers and builders, because it does not require validators to have sophisticated computing power to participate in that market, so Ethereum can still be maintained and operated on commodity-grade computers. We tend to refer to this scheme as “proposer-builder separation” or PBS.

If we accept that PBS is necessary to avoid placing high requirements on validators, then Dankrad’s design is essentially saying: “if PBS is required anyway, and we accept that market forces will drive MEV across multiple shards, why not embrace it and simplify the sharding design, so that builders are strongly incentivized to build blocks and disseminate the associated shard data?”

The above brings significant simplifications in the core sharding consensus logic (where this PBS model is probably needed anyway).

Under this paradigm, shard building could be done in a distributed manner, but it is unclear whether builders will tend to operate that way or whether high-performance machines will provide enough of an edge over time. It might be interesting to explore this design space if you all can build a builder DAO :)

Also, if no such heavy-duty builders exist for a while (they are all offline, or the market does not really support them for some reason), proposers can still propose blocks with executed transactions and a limited amount of shard data on consumer hardware, but data throughput may drop until economically incentivized actors step in to do the work.

consideritwon asks a question

A question about Dankrad’s sharding proposal and block builders possibly trending toward centralization. If this approach is adopted, what prevents nation-states from colluding to ban or censor block builders? Could the blockchain grind to a complete stop if block builders are forced offline? Are you confident you can always assume there will be jurisdictions in the world that allow block builders to communicate freely?

Vitalik Reply

A blockchain only needs one honest block builder somewhere to package transactions. There are PBS protocol extensions that add censorship resistance by making blocks that leave out many transactions validators have seen invalid, so block builders that censor cannot even participate; they would simply be slashed or ignored.

If it is not possible to run large block builders anywhere, the block builders themselves can be distributed, relying on different users running different nodes to create different parts of the block, using some DAO reputation system to ensure data availability.

Dankrad Feist Reply

I can say with certainty that block builders themselves are not the primary censorship target. The reason is that while a builder definitely has higher requirements than validators/full nodes (for those, our goal is a Raspberry Pi or a phone!), it is definitely not a datacenter-scale operation; it is a fairly ordinary machine that you could easily hide if you wanted to. For example, the extra work required to compute the shard encoding and proofs could probably be done on a high-end GPU.

Bandwidth requirements are a bigger constraint. As I mentioned in the proposal, in practice I don’t expect anyone to be able to run a block builder with less than 2.5 Gbit/s of upstream bandwidth. You probably don’t have that at home, so it is likely that most builders will run in data centers. However, if Ethereum suffers a censorship attack, there are alternatives that can work from home: for example, block propagation can be split across several nodes, and even computing the encoding can be done in a distributed way. We will definitely think about what the distributed alternative looks like and design the specification accordingly, so it is definitely possible.

It is likely that someone will run such a distributed block builder as a public service; it would not be the most competitive, but its existence would make any serious censorship attack extremely unlikely to succeed.

By the way, there are other general censorship-resistance concerns that come with the PBS scheme; most of those dangers have nothing to do with the new sharding proposal. I think research on crLists is the most likely to yield good censorship resistance in a PBS world.

thomas_m_k question

Are you concerned that PBS will lead to centralization, and how will it remain censorship resistant? If there are, say, two dominant providers offering builder services and they are of roughly equal strength, what incentive do I have as a proposer to include the transactions they are censoring, given that I would make much less money doing so?

Vitalik Reply

There are protocol extensions to PBS that force builders to include transactions that other validators or builders have already seen. See this document:

https://notes.ethereum.org/@vbuterin/pbs_censorship_resistance

Justin Drake Reply

Are you concerned that PBS will lead to centralization, and how will it remain censorship resistant?

PBS is a mechanism that firewalls centralization away from consensus participants. It moves centralized block building to “builders” who do not participate in consensus. PBS does not increase centralization – the whole point of PBS is to reduce validator centralization.

As for censorship resistance, we have mechanisms whereby proposers can force transactions onto the chain even if all builders intentionally choose not to include such transactions in their blocks.

If there are, say, two dominant providers offering builder services and they are of roughly equal strength, what incentive do I have as a proposer to include the transactions they are censoring, given that I would make much less money doing so?

There is no opportunity cost for the proposer in forcibly including censored transactions. Forcibly including them should not reduce the proposer’s revenue.
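A rough sketch of the crList idea behind these mechanisms (simplified from the design directions in the write-up Vitalik linked above; the function and parameters here are hypothetical): the proposer publishes a list of transactions it has seen, and a builder’s block only counts as valid if it includes them or is demonstrably full.

```python
# Rough sketch of the crList idea (simplified; see the linked PBS write-up for
# the actual designs): a builder's block is only valid if every transaction on
# the proposer's censorship-resistance list is included, or the block is full.
def block_satisfies_crlist(block_txs: set[str],
                           block_gas_used: int,
                           block_gas_limit: int,
                           cr_list: set[str]) -> bool:
    if cr_list <= block_txs:          # every listed transaction was included
        return True
    # Otherwise the builder must show it had no room left for them.
    return block_gas_used >= block_gas_limit

# A builder that omits a listed transaction while the block still has space
# produces an invalid block and earns nothing:
print(block_satisfies_crlist({"tx_a"}, 10_000_000, 30_000_000, {"tx_a", "tx_b"}))            # False
print(block_satisfies_crlist({"tx_a", "tx_b"}, 10_000_000, 30_000_000, {"tx_a", "tx_b"}))    # True
```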

Cin asks a question

If I understand the current roadmap correctly, some form of sharding will be implemented before DAS (Data Availability Sampling). Since DAS is required to verify that sharded data is 100% available, I would like to know what the risks are and why you feel they are sufficiently mitigated to execute on the roadmap.

Vitalik Reply

Earlier versions of sharding may not actually implement sharding; they will probably just implement sharding “in name only”, while in reality clients still need to download all of the roughly 2 MB of each shard block’s data (the maximum number of shards will be adjusted downward at this stage). As this “fake sharding” phase unfolds, client teams can each start experimenting with DAS verification, and once we are confident enough that DAS verification works, we can crank up the parameters and make the entire network depend on it.
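For intuition on what DAS verification buys once the parameters are cranked up, here is a toy illustration (not the actual networking protocol; the chunk count and sample size are arbitrary): with 2x erasure coding, data that cannot be reconstructed must be missing at least half of its chunks, so each successful random sample roughly halves the chance that a client is fooled.

```python
# Toy illustration of data availability sampling (not the real protocol):
# with 2x erasure coding, an unreconstructable block must be missing >= 50%
# of its chunks, so k successful random samples bound the failure odds by ~2^-k.
import random

def sample_availability(available: set[int], num_chunks: int, k: int = 30) -> bool:
    """Return True if k random chunk queries all succeed."""
    for _ in range(k):
        if random.randrange(num_chunks) not in available:
            return False
    return True

NUM_CHUNKS = 512
# An attacker withholds just over half the extended chunks so the data
# cannot be reconstructed:
withheld = set(range(NUM_CHUNKS // 2 + 1))
available = set(range(NUM_CHUNKS)) - withheld

print(sample_availability(set(range(NUM_CHUNKS)), NUM_CHUNKS))  # fully available -> True
print(sample_availability(available, NUM_CHUNKS))               # almost surely False
```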

Danny Ryan Reply

This doesn’t have to be the case, but if “shards” are released without DAS, I personally think only a few shards (say 2) should exist, so that all validators and most users can fully verify that all shard data is available (e.g. by downloading all of it).

This ends up looking similar to EIP-4488, but with the benefit that it uses the same mechanics as sharding (the same commitments, the same EVM accessors, etc.), so that once there is more data to handle, DAS can follow.

kalansciv19 asked a question

I would like to ask why only 64 shards are being added, and whether Ethereum will add more in the future.

Hsiao-Wei Wang Reply

64 is a placeholder for the initial shard count. It may end up being fewer or more once we have more benchmarking results.

We do plan to add more shards in the future as the technology improves iteratively (Moore’s Law).

Layer2

josojo asked a question

Hi!

I’m very interested in bridge security related issues:

  1. Do you think the security of bridging between different L1s (e.g. using zero-knowledge techniques) is the same as the security of bridging between two L2s that share the same L1 chain?
  2. Presumably any bridge between L1s needs to be upgradable in order to handle a fork in one of the L1 chains. Does this make an L1->L1 cross-chain bridge less secure than an L2->L1->L2 cross-layer bridge?
  3. What is the best mechanism for a ZK-rollup to stay upgradable with new features while avoiding security risks for users? In particular, for users who want to lock their assets on L2 for a long time, or who will not leave L2 any time soon, how can the security of their assets be guaranteed?

Vitalik Reply

A key reason why I am optimistic about a multi-chain blockchain ecosystem (there really are separate communities with different values, and it is better for them to grow independently than for all of them to fight over influence on the same thing) but pessimistic about cross-chain applications is that bridging has fundamental security limitations.

To understand why bridging has these limitations, we need to look at how various combinations of blockchains and bridges hold up under 51% attacks. Many people have the mentality: “If the blockchain is 51% attacked, everything breaks, so we need to put all our effort into making sure a 51% attack never happens, not even once.” I strongly disagree with this mindset; in fact, blockchains maintain many of their guarantees even after a 51% attack, and maintaining those guarantees is very important.

As an example, suppose you hold 100 ETH on Ethereum and Ethereum is 51% attacked, so some transactions are censored and/or rolled back. No matter what happens, you still own your 100 ETH. Even the attacker who launched the 51% attack cannot propose a block that takes your ETH, because such a block would violate the protocol rules and be rejected by the network. Even if 99% of the hashrate or stake wanted to steal your ETH, everyone running a node would simply follow the remaining 1%, because only their blocks follow the protocol rules. More generally, if you have an application on Ethereum, a 51% attack may censor or roll back its transactions for a period of time, but what you end up with is a consistent state. If you hold 100 ETH and exchange it for 320,000 DAI on Uniswap, then even if the blockchain is attacked in some crazy way, you still end up with a sensible outcome: you either keep your 100 ETH or you get your 320,000 DAI. An outcome where you get neither (or both) violates the protocol rules and would not be accepted by the network.

Now imagine what happens if you move 100 ETH onto a bridge on Solana to get 100 Solana-WETH, and then Ethereum is 51% attacked. The attacker deposits a pile of their own ETH into the Solana-WETH wrapper contract and then, as soon as the deposit is confirmed on the Solana side, rolls the deposit transaction back on the Ethereum side. The Solana-WETH contract is now no longer fully backed; perhaps your 100 Solana-WETH is now only worth 60 ETH. Even a perfect ZK-SNARK-based bridge that fully verifies consensus would still be vulnerable to this kind of 51% attack.
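A toy model of the accounting behind that attack (purely illustrative; the class and numbers are made up): the wrapper contract on the destination chain mints against deposits seen on the source chain, so rolling back a deposit after minting leaves the wrapped supply under-backed.

```python
# Toy model of how a 51% rollback breaks a cross-chain bridge (illustrative only).
class WrappedEthBridge:
    def __init__(self):
        self.locked_eth = 0.0       # ETH actually held on the source chain
        self.wrapped_supply = 0.0   # wrapped ETH minted on the destination chain

    def deposit(self, amount: float) -> None:
        self.locked_eth += amount
        self.wrapped_supply += amount

    def rollback_deposit(self, amount: float) -> None:
        # A 51% attacker reverts the source-chain deposit *after* the
        # destination chain has already minted against it.
        self.locked_eth -= amount

    def backing_ratio(self) -> float:
        return self.locked_eth / self.wrapped_supply

bridge = WrappedEthBridge()
bridge.deposit(100)            # honest user bridges 100 ETH
bridge.deposit(60)             # attacker bridges 60 ETH ...
bridge.rollback_deposit(60)    # ... then rolls the deposit back on the source chain
print(bridge.backing_ratio())  # 0.625: 160 wrapped ETH now backed by only 100 ETH
```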

Therefore, it is always safer to hold Ethereum-native assets on Ethereum, or Solana-native assets on Solana, than to hold Ethereum-native assets on Solana or Solana-native assets on Ethereum. “Ethereum” in this context refers not only to the Ethereum L1 base chain but also to any L2 built on top of it. That is, if Ethereum is 51% attacked and transactions are rolled back, transactions on Arbitrum and Optimism will also be rolled back, so “cross-rollup” applications holding state on Optimism and Arbitrum are guaranteed to remain consistent even if Ethereum is 51% attacked. And if Ethereum is not 51% attacked, there is no way to 51% attack Arbitrum and Optimism separately. Therefore, it remains very safe to hold assets issued on Optimism and then wrapped on Arbitrum.

However, the problem gets worse once there are more than two chains. If there are 100 chains, there will be many dapps with interdependencies across those chains, and at that point even a 51% attack on a single chain creates a systemic risk that threatens the economy of the entire ecosystem. This is why I think zones of interdependency are likely to align closely with zones of sovereignty (so, many applications within the Ethereum ecosystem closely connected to each other, many applications within the Avax ecosystem closely connected to each other, and so on; but not Ethereum-ecosystem applications and Avax-ecosystem applications closely connected to each other).

This is also why a rollup cannot just “use another data layer”. If a rollup stores its data on Celestia or BCH or whatever but deals with assets on Ethereum, then if that data layer gets 51% attacked, users are screwed. Even if Celestia’s data availability sampling (DAS) can defend against 51% attacks, it doesn’t actually help you, because the Ethereum network does not read that DAS; it reads a bridge, and bridges are precisely what is vulnerable to 51% attacks. For a rollup to provide security to applications that use Ethereum-native assets, it must use the Ethereum data layer (and likewise for any other ecosystem).

Of course, I wouldn’t say these problems will crop up immediately; a 51% attack on even one chain is difficult and expensive. However, the more users rely on cross-chain bridges and the applications built on top of them, the worse the problem becomes. No one is going to attack Ethereum just to steal 100 Solana-WETH (or attack Solana to steal 100 Ethereum-WSOL). But if there are 10 million ETH or SOL in the bridge, the incentive to attack becomes much stronger, and large asset pools make such attacks more likely. Cross-chain activity therefore has an anti-network effect: while there is little of it, it is quite safe; the more of it there is, the greater the risk.

egodestroyer2 follow-up

Can you give an example of how such an attack on a PoS chain would happen and how the attacker could steal assets on the bridge?

Vitalik Reply

This is no different from a PoW chain. The attacker gets the chain to finalize a transaction T1 on one fork, and an incompatible transaction T2 on a conflicting fork. They show the first transaction to the bridge, and then publish the second to the network.

loiluu follow-up

If Ethereum is 51% attacked and transactions are rolled back, transactions on Arbitrum and Optimism will also be rolled back.

I’m not sure whether any L2 has handled such a situation, and it’s really hard to think of a solution. Suppose Ethereum is 51% attacked and transactions are rolled back, but before the rollback the L2 operator had already submitted a commitment to L1. Now, if the L2 operator generates a new commitment based on the rolled-back L1 state, anyone can launch a replay attack by resubmitting the operator’s earlier commitment and ultimately make the operator look malicious. The L2 operator would then be penalized for publishing conflicting commitments.

Vitalik Reply

The solution is simple, isn’t it? Make each commitment conditional on a recent L1 block hash, so that if the L1 transactions are rolled back, the previously finalized commitment can no longer be resubmitted or used for slashing.
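A minimal sketch of what such a conditional commitment could look like (the names are hypothetical, not any rollup’s actual scheme): each L2 commitment names an L1 block hash, and it only counts for replay or slashing purposes while that hash remains on the canonical L1 chain.

```python
# Minimal sketch of commitments conditional on an L1 block hash (illustrative):
# if the referenced L1 block is reorged away, the commitment is simply void,
# so it can neither be replayed nor used as slashing evidence.
from dataclasses import dataclass

@dataclass
class L2Commitment:
    state_root: bytes
    conditional_on_l1_hash: bytes  # "valid only on top of this L1 block"

def commitment_is_valid(c: L2Commitment, canonical_l1_hashes: set[bytes]) -> bool:
    return c.conditional_on_l1_hash in canonical_l1_hashes

canonical = {b"l1-block-100", b"l1-block-101"}
c = L2Commitment(b"root-xyz", b"l1-block-101")
print(commitment_is_valid(c, canonical))                        # True today
print(commitment_is_valid(c, {b"l1-block-100", b"l1-101b"}))    # False after a reorg
```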

Liberosis question

Are you surprised by the research progress of zkEVM? As far as progress is concerned, do you think Polygon Hermez and Scroll’s stated goal of achieving zkEVM by the end of 2022 is realistic? Obviously, zkEVM for rollup will take longer to be ready for Ethereum.

Justin Drake Reply

Are you surprised by the research progress of zkEVM?

Yes, I’ve been pleasantly surprised by the progress, funding, and optimism about zkEVM research compared to a year ago. 

There are already several excellent teams competing (or collaborating!) in the zkEVM space and investing hundreds of millions of dollars to push it to production level by 2022-2023.

Note that the term “zkEVM” means different things in different contexts. I divide zkEVMs into three types:

  • Consensus-level: A consensus-level zkEVM is fully equivalent to the EVM as currently used by the Ethereum L1 consensus. That is, such a zkEVM generates SNARK proofs that verify the validity of Ethereum L1 state roots. Deploying a consensus-level zkEVM is part of our roadmap to “generate zk-SNARK proofs for everything”. (here is the Chinese version of the roadmap)
  • Bytecode-level: This class of zkEVM is designed to interpret EVM bytecode. The zkEVM projects led by the Scroll, Hermez, and ConsenSys teams take this approach. Such zkEVMs may produce different state roots than the EVM; for example, the EVM’s SNARK-unfriendly Merkle-Patricia trie would be replaced by a SNARK-friendly alternative.
  • Language-level: This class of zkEVM translates an EVM-friendly language (such as Solidity or Yul) into a SNARK-friendly VM that can be quite different from the EVM. MatterLabs and StarkWare take this approach.

I would expect language-level zkEVMs to be deployed first, as they are technically the easiest to build. Bytecode-level zkEVMs then unlock additional EVM compatibility and further leverage the EVM’s network effects. Finally, deploying a consensus-level zkEVM on L1 would turn the EVM into an “enshrined rollup” and improve the security, decentralization, and usability of Ethereum L1.

As far as progress is concerned, do you think Polygon Hermez and Scroll’s stated goal of achieving zkEVM by the end of 2022 is realistic?

It seems reasonable to me that teams like Hermez or Scroll plan to ship a production implementation of a bytecode-level zkEVM in 2022. At launch, I expect the following major limitations:

  • Smaller gas limit: At the beginning, the gas limit of a bytecode-level zkEVM will likely be smaller (much smaller, say 10x smaller) than the gas limit of the L1 EVM, and will gradually increase over the following years.
  • Large centralized provers: Proving will likely not be decentralized at first; it will likely be carried out by a centralized entity running a large proving setup. I hope we can get decentralized proving (e.g. trustless GPU-based provers around the world) in 2023 and SNARK-proving ASICs in 2024.
  • Circuit bugs: Because the circuits of bytecode-level zkEVMs are so complex, such zkEVMs are likely to suffer from circuit bugs, and EVM bytecode equivalence will not be perfect at first. These bugs (some of them security-critical) will take a while to resolve. Ultimately, bytecode equivalence should be proven with formal verification tools.
