The 7th AMA of the Ethereum Foundation Research Team (Part 2)

Editor’s Note: On January 7, 2022, the Ethereum Foundation (EF) research team held their seventh AMA on Reddit, covering L2, sharding design, the broader roadmap, MEV, EIP-1559, and more. ECN has organized and compiled most of the questions and answers from this AMA. Note that members of the Foundation’s R&D team express personal opinions and speculation on certain topics; to avoid misinterpretation, please refer to the original post.

Due to its length, this compilation is published in two parts. This part covers Layer 2, the Ethereum roadmap, distributed validators, DAOs, public goods funding, and more.

Click to read: The 7th AMA of the Ethereum Foundation Research Team (Part 1)

Layer 2

Liberosis asks a question

Since calldata on Ethereum is particularly expensive, and EIP-4488 (which would reduce calldata costs) and data sharding will take a long time to deploy, rollup teams are turning to off-chain data availability as an alternative. I have also learned that one optimistic rollup project is planning to store its transaction data on IPFS (which, of course, means it is no longer really an “optimistic rollup”). Are you worried about this trend? And if rollup/volition teams insist on keeping data availability (DA) off-chain, what would you suggest?

Carl Beekhuizen (Ethereum Foundation) Reply

I agree that, in the short term, it really doesn’t make sense for certain types of transactions to publish their data on-chain, because calldata is so expensive. Unfortunately, on-chain data is one of the pain points of the current migration to L2, and we may need some stopgaps on the way to our proposed L2 utopia.

I think that, in the meantime, how DA is handled becomes one of the product differentiators among the various L2 solutions, something like (in decreasing order of trustlessness):

  • Higher-cost L2s that store calldata on L1
  • L2s that use other chains for data availability
  • L2s that use IPFS
  • L2s that simply ignore DA and declare “trust us”

Users can then choose their L2 scheme based on the specific needs of a given transaction.

itsanew asks a question

How will L1’s security budget grow with L2 adoption? If/when L2 reaches escape velocity, it is likely that most liquidity will sit on Layer 2, likely denominated in non-ETH tokens, and the ETH fees paid to L1 will be significantly reduced. In that case, is there any mechanism to ensure that the value staked on L1 remains adequate relative to the value of the assets being protected?

I’ve heard the statement “L1 will always be expensive”, but it’s not clear why, if L2 can provide almost all of L1’s functionality at lower transaction cost.

Is it possible in the future to have something that has the opposite effect of EIP-4488, leading to higher gas prices for L2 transactions?

Danny Ryan (Ethereum Foundation) Reply

I believe there is a level of L1 security budget that is functionally infinite in the value it can protect. I also believe there will be transaction activity on L1, because there is value in transacting there. So I don’t think it will be a case of “because L2 exists, all transaction activity immediately moves to L2 and nothing happens on L1”. Instead, I think the network will have a market-driven spectrum of transaction activity, perhaps until L1 is basically only responsible for finalizing and registering data/state transitions for L2s. Of course, that can only happen once there are many highly competitive L2s on the network.

Even when L1 is dominated by L2 transaction activity, users still have the need to transact on L1. For example, market makers migrate/balance liquidity between different L2s. In the design of Ethereum, L2 is rooted in a very rich execution layer, with first-class bridging schemes between L2s. So if there are many thriving L2 ecosystems by then, bridging activities may have high economic demand (for some level of market participants).

Barnabé Monnot (Ethereum Foundation) adds

In addition, L2s (rollups or commit-chains) must pay fees to L1 when they publish users’ transaction data, state roots, or proofs to L1. They publish transactions containing this data, those transactions pay L1 inclusion fees in ETH, and that fee market runs on the EIP-1559 mechanism. So another way to put the question is: “Will L2s reduce the fees paid to L1, by moving fee-paying activity from L1 to L2?”

The cheaper gas fees offered by rollups mean that many users who previously did not transact because of the high cost can now participate, which means the value of the network (i.e. the overall utility the network provides to users) increases by that amount. Say I was previously willing to pay $1 to make a transfer but couldn’t at the prevailing price; now that I can, that is an extra $1 of value provided by the network. The total fees the network can charge are capped by the total value it provides (if you pay more in fees than you gain, you generally prefer not to use the network). Ideally the network’s goal would be to maximize value while minimizing transaction fees, but in practice fees are raised to compensate network operators and to keep congestion under control.

My point is that while rollups/L2 improve scalability, network congestion will never fully go away, more users will be extracting value from the network, network effects will compound, and so on. All of this activity flows back to L1 through the publication of transaction data. But with the extra scalability, at least the per-transaction fee can come down.
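To make the per-transaction point concrete, here is a rough back-of-the-envelope sketch of how a rollup amortizes its L1 data costs. The batch sizes, bytes per compressed transaction, fixed overhead, and gas price are illustrative assumptions, not protocol figures; only the 16 gas per non-zero calldata byte is the actual L1 cost:

```python
# Illustrative arithmetic: how a rollup amortizes L1 costs across a batch.
# Assumed figures, except the 16 gas per non-zero calldata byte (EIP-2028).

CALLDATA_GAS_PER_BYTE = 16        # L1 cost per non-zero calldata byte
BYTES_PER_COMPRESSED_TX = 12      # assumed compressed size of a simple transfer
BATCH_OVERHEAD_GAS = 100_000      # assumed fixed cost of posting a batch
GAS_PRICE_GWEI = 50               # example L1 gas price

for batch_size in (10, 1_000, 100_000):
    batch_gas = BATCH_OVERHEAD_GAS + batch_size * BYTES_PER_COMPRESSED_TX * CALLDATA_GAS_PER_BYTE
    per_tx_gas = batch_gas / batch_size
    per_tx_fee_eth = per_tx_gas * GAS_PRICE_GWEI / 1e9
    print(f"batch of {batch_size:>6}: ~{per_tx_gas:>8.0f} gas/tx "
          f"(about {per_tx_fee_eth:.6f} ETH at {GAS_PRICE_GWEI} gwei)")

# Per-transaction cost falls as batches grow, while total L1 fees
# (batch_gas summed over all batches) still grow with overall usage.
```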

Barnabé Monnot continues

Some thoughts:

  • Before EIP-1559 was deployed, higher fees (none of which were burnt) meant more active hashrate, since, all else equal, more miners would join. That incentive mostly disappeared with EIP-1559, which burns the variable portion of transaction fees, so under the current security model fees are not fully counted in the security budget (see the sketch after this list).

     

  • Before the deployment of EIP-1559, the value of transaction fees was not really captured by the protocol, but ETH still had value. EIP-1559 all but guarantees that ETH has a price floor commensurate with transaction demand, but that floor may still not account for much of ETH’s value. So the value of ETH ≠ the fees ETH earns.

     

  • Another idea is to reframe the problem: it is not that the total value of the stake needs to be commensurate with the total value protected on L2, but that the cost of an attack must be commensurate with the profit the attacker can make. PoS responds to attacks better, which raises the cost of an attack and reduces its profit. Whether this is enough to deter an attack, however, depends on the specifics of the attack.
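To make the first bullet concrete, here is a minimal sketch of how EIP-1559 splits a transaction’s fee between the burn and the block producer. The fee levels are illustrative; the burn/tip split and the 21,000 gas for a plain transfer are how the mechanism actually works:

```python
# Minimal sketch of the EIP-1559 fee split for a single transaction.
GWEI = 10**9

base_fee_per_gas = 40 * GWEI      # protocol-set, varies with congestion, burnt
priority_fee_per_gas = 2 * GWEI   # user-chosen tip, paid to the block producer
gas_used = 21_000                 # a plain ETH transfer

burnt = base_fee_per_gas * gas_used
tip = priority_fee_per_gas * gas_used

print(f"burnt:           {burnt / 1e18:.6f} ETH")
print(f"tip to producer: {tip / 1e18:.6f} ETH")
# Only the tip rewards the block producer; the burnt (variable) portion no
# longer funds hashrate or validators, which is the point made in the first bullet.
```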

edmundedgar follows up

Before the deployment of EIP-1559, the value of transaction fees was not really captured by the protocol, but ETH still had value.

Everyone knows that PoS is coming to mainnet, and under PoS, with or without EIP-1559, all fee income is captured by ETH holders (stakers are a subset of ETH holders, since you need ETH to stake). So, assuming the token is valued rationally, its value may come entirely from expected fee income.

Barnabé Monnot Reply

If by “rational” you mean “value = some discounted sum of returns/buybacks”, then you’re right. But we can imagine “rational” valuation models that rest on different assumptions (or else declare all BTC holders “irrational”!) while agreeing that stakers will factor implicit returns into their forecasts, so to some extent fees will still be priced in. I think the point stands that PoS security is not designed on the premise of extracting as many fees as possible.

Justin Drake Reply

Some thoughts:

  • Ethereum has a guaranteed security budget (unlike Bitcoin), implemented in the form of token issuance (roughly 1 million ETH per year, assuming the network has about 1 million validators; see the sketch below)

     

  • Historically, total L1 fees have only gone up, even as scalability has improved (see my answer to “Assuming a 1000x increase in scalability over the next few years, transaction demand will also increase by a corresponding 1000x”). I think this trend will continue with rollups.
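For reference, a quick sketch of where the “roughly 1 million ETH per year for roughly 1 million validators” figure in the first bullet above comes from, using the beacon chain’s reward constants; this is the ideal maximum issuance and ignores imperfect participation and slashing:

```python
import math

# Approximate ideal annual beacon-chain issuance as a function of validator
# count, from the spec constants: BASE_REWARD_FACTOR = 64, 32 ETH effective
# balance per validator, 12-second slots and 32-slot epochs.

BASE_REWARD_FACTOR = 64
GWEI_PER_ETH = 10**9
EFFECTIVE_BALANCE_GWEI = 32 * GWEI_PER_ETH
EPOCHS_PER_YEAR = 365.25 * 24 * 3600 / (12 * 32)

def annual_issuance_eth(num_validators: int) -> float:
    total_balance_gwei = num_validators * EFFECTIVE_BALANCE_GWEI
    # Ideal per-validator reward per epoch (perfect participation assumed).
    base_reward_gwei = EFFECTIVE_BALANCE_GWEI * BASE_REWARD_FACTOR / math.isqrt(total_balance_gwei)
    return num_validators * base_reward_gwei * EPOCHS_PER_YEAR / GWEI_PER_ETH

print(f"{annual_issuance_eth(1_000_000):,.0f} ETH/year")  # roughly 940,000 ETH/year
```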

     

I’ve heard the statement “L1 will always be expensive”, but it’s not clear why, if L2 can provide almost all of L1’s functionality at lower transaction cost.

The reason is that L2s have to pay L1 for data availability. The more successful L2 scaling is, the greater the total L1 fees. I think the picture looks something like this:

  • Total L1 fees will only increase
  • L2 gas prices will only decrease

Is it possible in the future to have something that has the opposite effect of EIP-4488, leading to higher gas prices for L2 transactions?

In the future, “data” and “execution” will likely be priced separately (see the article on multidimensional EIP-1559). As for artificially limiting supply to drive up transaction fees (as Bitcoin does), Ethereum doesn’t need to do this (because we have a guaranteed security budget), and I don’t think it would work in the long term anyway (because users would go elsewhere).

AllwaysBuyCheap asks a question

It seems that all quantum-resistant public-key algorithms use keys larger than 1 kB in size; how do you think implementing them will affect Ethereum?

Justin Drake Reply

Post-quantum cryptography does tend to have larger cryptographic material (in bytes). I’m not worried about this for these reasons:

1. With SNARKs, we can aggregate and compress cryptographic material as needed. We are also working on post-quantum cryptography such as lattices – which inherently have opportunities for aggregation (for example in the context of aggregated signatures or aggregated state witnesses for stateless clients).

2. Bandwidth is a computing resource that is fundamentally massively parallelizable, and it will likely continue to grow exponentially (about 50% per year according to Nielsen’s law) for another 10 to 20 years. Note that 50% per year is about 50x per decade, so 1 kB in 10 years is roughly equivalent to 20 bytes today.
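A quick check of that arithmetic (the 50%-per-year growth rate is the Nielsen’s-law figure quoted above):

```python
growth_per_year = 1.5                      # ~50% per year (Nielsen's law)
factor_per_decade = growth_per_year ** 10
print(round(factor_per_decade, 1))         # ~57.7, i.e. "about 50x per decade"
print(round(1024 / factor_per_decade))     # 1 kB in 10 years ~ 18 bytes today
```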

AllwaysBuyCheap follows up

Yes, bandwidth will be much higher, but isn’t this question mainly about storage and computing power? How does increasing bandwidth scale Ethereum?

Justin Drake Reply

Bandwidth is the ultimate fundamental barrier to scaling blockchains. We know how to solve every other consensus-layer bottleneck (e.g. disk I/O and storage can be addressed with statelessness, while computation can be addressed with recursive SNARKs).

Hanzburger follows up

Following on from Justin Drake’s answer: since this requires new addresses, would any funds in existing addresses be at risk in the event of a quantum attack?

Vitalik Reply

Funds in addresses that have already been used (i.e. from which at least one transaction has been sent) are at risk, because sending a transaction exposes the public key, and public keys are vulnerable to quantum computers. If an address has never been used, it is safe. And if quantum attacks do appear, we will be able to deploy a hard fork that allows users to move their funds to quantum-resistant accounts.
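A small illustration of the point about used addresses, written with the eth-keys library; the hard-coded key is a placeholder for the example, but a real transaction signature exposes the sender’s public key in exactly the same way:

```python
from eth_keys import keys  # pip install eth-keys

# Example only: never hard-code a real private key.
priv = keys.PrivateKey(b"\x01" * 32)
message = b"example transaction payload"
signature = priv.sign_msg(message)

# Anyone with the signed message can recover the public key behind the address.
recovered = signature.recover_public_key_from_msg(message)
print(recovered == priv.public_key)         # True
print(recovered.to_checksum_address())      # the now-exposed address

# An address that has never signed anything only reveals a hash of its public
# key, which is why unused addresses are considered safe from a quantum
# attacker for now.
```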

TheStonkist asks a question

As Ethereum transitions to a mature L1/L2 ecosystem (i.e. most transaction activity takes place on L2), and bridging costs become higher than they are now, has the EF thought about how assets such as NFTs, LP positions, and ERC-20 tokens will be bridged? Is it possible that some users will have assets stuck on L1 because they can’t afford to bridge them to L2?

Justin Drake Reply

As Ethereum transitions to a mature L1/L2 ecosystem (i.e. most transaction activity takes place on L2), and bridging costs become higher than they are now, has the EF thought about how assets such as NFTs, LP positions, and ERC-20 tokens will be bridged?

Soon, most users’ assets will be usable directly on L2 without involving L1 at all, and users will even be able to bridge assets directly from one L2 to another. As rollup technology improves and data sharding is deployed, these operations will become cheaper.

Is it possible that some users will have assets stuck on L1 because they can’t afford to bridge them to L2?

It’s possible. But even setting bridging aside, those users’ assets would already be “stuck” on L1 simply because L1 gas fees are too high for them.

consideritwon asks a question

Is there a possibility to run zero-knowledge proofs on consumer-grade hardware/low-cost ASICs, enabling complete decentralization based on a zero-knowledge base layer?

Justin Drake Reply

Of course! We now have efficient SNARK recursion techniques (like Halo 2 and Nova) that allow mutually distrusting, computationally constrained provers to collaborate on building proofs for large statements in parallel. (This is done by splitting a large statement into smaller chunks and distributing those chunks among the provers.)

Projects like Scroll aim to achieve this kind of decentralized proving. I expect the hardware used will evolve roughly as it did for Bitcoin proof-of-work: from CPUs to GPUs to FPGAs to ASICs. I know of several independent projects building SNARK-prover ASICs; it will take 2-4 years to get there, but it’s definitely achievable. (If you are interested in SNARK ASIC work, please PM me.)

Greg Foley asked a question

Is a rollup-centric roadmap good enough? How do you resist centralization and censorship when there are a handful of highly centralized provers and sequencers running in data centers? The court system could easily shut down these provers and sequencers, or force them to censor transactions, no?

Justin Drake Reply

Is a rollup-centric roadmap good enough? How do you resist centralization and censorship when there are a handful of highly centralized provers and sequencers running in data centers?

We recently devised mechanisms (kudos to Francesco) whereby proposers can force transactions to be included on-chain even if all block builders choose not to include them in their blocks.

The court system could easily shut down these provers and sequencers, or force them to censor transactions, no?

As mentioned above, the censorship problem can be solved with a cryptoeconomic gadget on L1. As for liveness, if all the sophisticated block builders suddenly went offline, proposers could always fall back to building their own blocks with the “naive” strategy of prioritizing the mempool transactions that pay the highest tips.

Staking and Distributed Validators (DV)

MrQot asked a question

Are there any long-term proposals/thoughts for changing the minimum 32 ETH deposit?

I know this aims to find a sweet spot for the number of active validators, but considering all the other relevant self-adjustment mechanisms, it’s a bit odd to have such a fixed number. Something like targeting 2^20 or 2^19 validators and then automatically adjusting MAX_EFFECTIVE_BALANCE every few epochs when there are too many or too few validators would also be nice.

Vitalik Reply

There are a number of ideas being actively researched to mitigate the harmful effects of this high minimum deposit. There are two main directions:

  • Reduce the per-validator load on the chain, allowing the chain to handle more validators. If the chain can handle 8x more validators with the same load, then we can support a minimum deposit of 4 ETH instead of 32.
  • Make it easier to decentralize staking pools.

Distributed Validator (DV) technology is a major exploration in the latter direction. Another important part is making quick partial deposits and withdrawals simpler, so that individual users can join and leave staking pools quickly without the need for complex liquidity infrastructure.

In the first direction, there is research into more efficient proof aggregation techniques, as this seems to be the biggest bottleneck at the moment. There is also a technique whereby only a subset of validators participates in validation at any one time. Both techniques reduce the load on the chain, and those savings could be used either to lower the per-validator requirements or to shorten time to finality (shortening time to finality is generally considered the higher priority at the moment).

Danny Ryan Reply

I would say that the long-term goal is to reduce this number as much as possible as the compute, bandwidth, and storage requirements decrease.

I think it’s more likely that the network/community is looking to hard fork to reduce this number to what they think is reasonable, eg 16 after 3 years, maybe 8 after 6 years, etc, rather than dynamically adjusting.

Dynamic adjustment would bring extra complexity: what do you do with existing stakes if the minimum goes down? Or, worse, if it goes up? In my opinion, there are no easy answers to these questions.

TinyDancingSnail asked a question

Distributed validators seem to be developed with the support of EF…you guys funded it, I saw it at the top of Vitalik’s recent roadmap. But I rarely hear about it from the wider community. Even many opinion leaders in the Ethereum staking community seem to ignore or misunderstand the technology.

So, can you guys talk about why DV is important and what value do you think these projects have for the Ethereum ecosystem?

Carl Beekhuizen Reply

I am very supportive of DVs; they are the reason I originally joined consensus R&D.

I think the wider community needs to get more excited about this and pay closer attention to this issue as they are an important part of the long-term health of the chain.

The basic idea is to share a validator’s responsibilities among several “co-validators”, so that there is no single point of failure for security or uptime (for a solo staker, that single point of failure is the beacon node / validator client combination).

DVs enable the following:

  • Decentralized staking pools that do not require over-collateralization
  • More robust/secure home staking setups
  • Centralized staking providers can spread their (i.e. their users’) risk

The reason I think DVs are important to the long-term health of the chain is that they enable efficient decentralized staking and reduce the risk (and power) of centralized services. If something like stETH becomes the underlying asset for the majority of DeFi, it is critical that the underlying staking be handled with minimal trust.
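A toy sketch of the core idea behind DVs: split a validator’s signing key into shares so that any threshold of co-validators can act while no single one can. This uses plain Shamir secret sharing over a small prime field purely for illustration; real DV implementations use threshold BLS signatures and a distributed key generation ceremony, and never reconstruct the key in one place:

```python
import random

P = 2**61 - 1  # a prime modulus, purely illustrative

def make_shares(secret, threshold, num_shares):
    """Split `secret` into shares; any `threshold` of them can reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, num_shares + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

validator_key = random.randrange(P)  # stand-in for a validator signing key
shares = make_shares(validator_key, threshold=3, num_shares=4)  # 3-of-4 co-validators

print(reconstruct(shares[:3]) == validator_key)  # True: any 3 shares suffice
print(reconstruct(shares[:2]) == validator_key)  # False: 2 shares learn essentially nothing
```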

As for where things stand with the important players right now:

  • Two other researchers and I are writing a specification for DV, which will be published in the next few weeks. It is similar to the consensus specification and can have multiple implementations
  • Formal verification work on this specification has already started (we want to prove that a malicious co-validator cannot compromise the entire validator, etc.)
  • Obol is developing an implementation of this specification
  • SSV.network/Blox are also developing their own DV implementation, not sure if it will follow this spec

Dankrad Feist Reply

DVs are undoubtedly important. They add functionality that is not otherwise possible:

  • A group of people can stake together, even if each of them owns less than 32 ETH, without having to delegate validator operation to a single person
  • Increased validator security and resilience in a low-cost way that anyone can use
  • For those who don’t want to run their own validators but also don’t want to trust a single provider, this is a way to spread trust among several different providers
  • Staking pools can run validators resiliently even if the availability and security of individual operators are less than 100%, which means the pool can be opened up to more operators

Some developments in this area may not be known to many people, but some big projects are underway, such as Blox and Obol.

MuXu96 asks a question

What do you think of the Secret Shared Validator (SSV) technology of projects like SSV and Obol? I feel like distributed validators (DVs) are important on the roadmap, and that’s what they’re building. So will these projects be adopted, or will you help with their development?

Carl Beekhuizen Reply

From a technical standpoint, I’m very bullish. I think DV is an important component of the future of Ethereum.

Some of us have been working with Obol and SSV.network on the DV specification and providing general assistance with their tech stacks.

See my comment for more: https://reddit.com/r/ethereum/comments/rwojtk/ama_we_are_the_efs_research_team_pt_7_07_january/hrmrqo1/

egodestroyer2 asks a question

Are you happy to see liquid staking protocols emerge? Or would you rather this didn’t happen? What are your thoughts?

Carl Beekhuizen Reply

I think liquid staking is definitely going to exist: the demand is there, so someone will certainly build it, and you can’t fight the market.

The question is how it is implemented. If the underlying staking is done through centralized service providers, it will be detrimental to the whole network. However, if a liquid staking protocol decentralizes staking through distributed validators (such as Obol or SSV.network) or through economic incentives (such as Rocket Pool), then liquid-staked ETH would be a good primitive.

pwnh4 asks a question

For stakers, one of the important things after the merge is withdrawing their staked ETH. Is this already on the roadmap? Specifically:

  • Can stakers claim their rewards without needing to unstake/exit their validators (which would make the whole process inefficient)?
  • How will withdrawals be initiated? Is the withdrawal transaction signed with the validator (signing) key or the withdrawal key?
  • In the withdrawal process, will there be a difference between stakers who generated an Eth2 (BLS) withdrawal key and stakers who use an Eth1 address as their withdrawal credential?

Justin Drake Reply

For stakers, one of the important things after the merge is withdrawing their staked ETH.

Withdrawals will not be enabled at the merge (so as not to add complexity to the merge itself). A future “post-merge cleanup” fork will enable withdrawals.

Can stakers claim their rewards without needing to unstake/exit their validators (which would make the whole process inefficient)?

Partially withdrawing from a validator’s balance to another destination without exiting is a “transfer”, which may be part of the post-merge cleanup fork.

How will withdrawals be initiated? Is the withdrawal transaction signed with the validator (signing) key or the withdrawal key? In the withdrawal process, will there be a difference between stakers who generated an Eth2 (BLS) withdrawal key and stakers who use an Eth1 address as their withdrawal credential?

If you have an Eth2 withdrawal credential, then you have a BLS withdrawal key that can sign a withdrawal message (the message specifies a withdrawal address). If you have an Eth1 withdrawal credential, the withdrawal destination is the specified Eth1 address, and the withdrawal is triggered by signing with the validator key.
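For reference, a minimal sketch of how the two kinds of withdrawal credentials are encoded in the consensus spec (prefix 0x00 for BLS credentials, prefix 0x01 for Eth1-address credentials); the pubkey and address below are placeholders:

```python
import hashlib

# Withdrawal credential formats from the consensus spec:
#   0x00 (BLS):  0x00 ++ sha256(bls_withdrawal_pubkey)[1:]
#   0x01 (Eth1): 0x01 ++ 11 zero bytes ++ 20-byte Eth1 address
BLS_WITHDRAWAL_PREFIX = b"\x00"
ETH1_ADDRESS_WITHDRAWAL_PREFIX = b"\x01"

def bls_withdrawal_credentials(bls_withdrawal_pubkey: bytes) -> bytes:
    assert len(bls_withdrawal_pubkey) == 48
    return BLS_WITHDRAWAL_PREFIX + hashlib.sha256(bls_withdrawal_pubkey).digest()[1:]

def eth1_withdrawal_credentials(eth1_address: bytes) -> bytes:
    assert len(eth1_address) == 20
    return ETH1_ADDRESS_WITHDRAWAL_PREFIX + b"\x00" * 11 + eth1_address

# Placeholder inputs, purely for illustration.
print(bls_withdrawal_credentials(b"\x11" * 48).hex())
print(eth1_withdrawal_credentials(b"\x22" * 20).hex())
```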

Ethereum Roadmap

Liberosis asks a question

Is there any concept or implementation from outside the Ethereum ecosystem that you think has greatly advanced the state of blockchain technology, is not on the current Ethereum roadmap (the “Urges” roadmap Vitalik proposed in December), and that you would like to see implemented on Ethereum?

Related: What rollup concepts would you like to see on Ethereum’s execution layer?

Answered by Justin Drake

What rollup concepts would you like to see on Ethereum’s execution layer?

Here are a few ideas:

  • Various rollups (e.g. Optimism, Arbitrum, zkSync) can offer instant pre-confirmations, which is a nice user-experience feature. However, it’s not clear to me how to fully reconcile strong pre-confirmations (currently given by centralized sequencers) with decentralized ordering (which many rollups say they intend to achieve). If rollups can provide strong pre-confirmations under decentralized ordering, then maybe Ethereum L1 can have pre-confirmations too.

     

  • Arbitrum hopes to solve the MEV problem with a fair ordering protocol. I don’t know the details of their research, but will keep an eye out. Likewise, if the fair ordering protocol can work well at L2, then maybe Ethereum L1 can benefit from it.

Liberosis asks a question

As Ethereum matures, I feel as though research is ahead of engineering/client development. Do you think Ethereum will ossify once the “Urges” roadmap is implemented? Or do you expect more breakthroughs to keep research teams busy for decades to come?

Answered by Danny Ryan

Personally, I would like to see Ethereum ossify. I expect that, over time, in a world where Ethereum carries tremendous value, any governance process that remains open to change will invite attempts at capture.

At the base layer, I support reaching functional escape velocity so that you can scale and build anything people want on L2.

However, I can empathize with you. Research really is moving fast, and we are constantly amazed by the endless stream of new ideas. I remember in the early days of my Ethereum research, every time I saw a good new idea online I would think, “Ah, no! We’re going to have to start all over again because this idea is better.” It turns out the actual engineering world doesn’t work that way. There are many base-layer designs that can get us to functional escape velocity, so ultimately we need to wade through this bazaar of ideas, balance engineering complexity (and the difficulty of changing a live system), and converge on a design that is sufficiently safe and functional.

I would say that PoS taking longer than expected to implement allowed many simplified and improved sharding designs to emerge. If sharding (or PoS!) had shipped 3 years ago, neither design would have been as good or as secure, given the research advances since then.

Answered by Vitalik Buterin

Personally, once the currently required package of changes has been implemented, I’m definitely in favor of pinning things down. From then on, any further improvements can be done on L2.

Answered by Justin Drake

Do you think Ethereum will ossify once the “Urges” roadmap is implemented?

Ossification is a spectrum, and Ethereum is already clearly skewed toward the ossified end, in large part because of how decentralized it is. (Things like PoS, sharding, and EIP-1559 have taken years of slow effort.)

Once everything on the “Urges” roadmap is done (Vitalik’s roadmap is quite extensive and could take 10+ years to fully execute), I expect Ethereum to become very ossified. Having said that, I do expect currently unknown or neglected research projects to be added to the roadmap (and some to be dropped).

Or do you expect more breakthroughs to keep research teams busy for decades to come?

I do expect the research team to be busy for the next 10-20 years, and there will be further breakthroughs. In the early days of a successful technology, innovation is exponential, and we can still say we are in the early stages. But again, research and ossification (even extreme ossification) are not mutually exclusive: it’s just that breakthroughs should be expected to take longer and longer to reach L1.

MrQot asked a question

Is the Verkle tree design on the roadmap settled? Or are you still looking/hoping for a more ideal key-value commitment scheme?

Vitalik Reply

I think in the short to medium term, the adoption of Verkle trees is quite certain. In the long term, they are likely to be replaced by some sort of SNARK-proven hash-based construction; we don’t know yet.

Justin Drake Reply

“It won’t change” is probably an exaggeration, because if a significantly better alternative emerges tomorrow, we may choose to use it instead.

I would point out that even if we implement Verkle trees according to the current specification, they will eventually have to be replaced by a post-quantum commitment scheme. Research into post-quantum state commitment schemes with good properties (e.g. small and/or aggregatable witnesses) is ongoing.

Dankrad Feist Reply

My current take is that Verkle trees seem to be by far the most promising solution to the vector commitment problem for stateless Ethereum. We are making good progress on the implementation (Guillaume Ballet is leading development on this).

Having said that, I think if there is a very important new development, we can always change course. I think it’s silly to stick with a solution if there is a clearly better way of doing things. However, I’m currently not aware of any promising research that outperforms Verkle trees as a vector commitment and could plausibly be ready within the next 5 years.

MillennialBets asks a question

Beyond sharding and zero-knowledge proofs, what new technologies are the EF team excited about working on?

Justin Drake Reply

An exciting research topic that we are already deep into is post-quantum cryptography, which will become increasingly important. In a decade or so, Ethereum’s L1 cryptographic stack will have to be overhauled: things like BLS signatures, Verkle trees (for EVM state and data availability sampling), the zero-knowledge proofs used for SSLE (Single Secret Leader Election), and SNARK-based VDF (Verifiable Delay Function) proofs are not quantum-secure as currently specified.

MEV

itsanew asks a question

Is there any interest in eliminating MEV at the protocol level in the medium term, through encrypted transactions or otherwise? Or is that already considered a lost cause, with MEV democratization seen as the only viable short- and medium-term goal?

Vitalik Reply

Over time, there will certainly be interest in eliminating MEV by adding ways to further restrict block builders and reduce their power, especially with regard to censorship and, eventually, the reordering of transactions. That said, such technology is likely to be implemented only after the core of PBS is up and running.

Dankrad Feist Reply

I think we should clarify a few things here. There are different types of MEV; the main distinction I want to make is:

  • Some MEV is parasitic/extractive. For example, on a decentralized exchange, front-running and sandwiching a user doesn’t add any value; if we can get rid of that, we should

     

  • Some MEV is inherent to a protocol. For a decentralized exchange, this is arbitrage (if prices move, someone has to bring the DEX back into equilibrium with the wider market). Other examples are liquidations and fraud proofs for optimistic rollups.

The second kind of MEV will always be there, and it isn’t a bad thing. So there is little to do other than democratize it, and that is the simplest and most effective approach.

The first type of MEV is very different. In fact, there is already a way to avoid it: you can send your transaction to a Flashbots MEV transaction bundle instead of adding it to the public transaction pool. Over time there will be more such “private transaction channels”. Of course, these rely on trusting a centralized channel, but if that trust is broken, you can find another channel.

In the long term, threshold encryption and delay encryption schemes can solve this problem without needing a centralized channel straight to builders. However, they each have drawbacks, either in liveness (threshold encryption) or latency (delay encryption; we’ll need more cryptographic research to make it practical), so I don’t think they will be written into the base-layer protocol; they will be application-specific instead.

Other

greatgoogelymoogely asked a question

Many of us have come together because we believe Ethereum best offers us a decentralized, trustless, fairer future.

How do you ensure that Ethereum can realize these visions in the long-term future?

Do you have plans to DAOize the Ethereum Foundation (EF)?

Vitalik Reply

The short-term path of “EF decentralization” is focused more on moving more of its funds into other Ethereum community organizations, which have a variety of fund-distribution mechanisms. One of the most recent examples is the validator grants set up for client teams, but there are others, and there will be more in the future.

One possible way to achieve decentralization is for the EF to reduce its own importance by doing more of this. It would of course still be a traditional-style foundation; it’s just that as more and more alternatives to the EF emerge, decentralization happens naturally. Another approach is the one you mention: over time the EF gets DAO-ified in some way. It’s also possible that the first happens sooner and the second at some later point in the future.

mikeifyz asked a question

Questions about public goods! So far, we’ve seen great experiments in funding public goods through Gitcoin Grants. There is also the recently launched Retroactive Public Goods Funding by the Optimism team.

However, whether these mechanisms actually incentivize long-term protocol contributors in a “credibly neutral” way remains a matter of debate.

Will such a mechanism really emerge? Or do you think the funding of public goods will continue to be driven by individual initiatives (eg Gitcoin and RPGF)?

There is even another option that could be just as effective — taking advantage of composability on Ethereum and L2 to fund public goods (e.g., a portion of the proceeds from NFTs for human longevity research).

Vitalik Reply

The main challenge for Gitcoin Grants is that they have to keep finding new ways to attract new funding. The advantage of retroactive public goods funding is that, if Optimism runs successfully and keeps using it, there will be a steady stream of funding. One option is to introduce something rooted in the Ethereum protocol layer, but for people to accept that they would need real confidence in its credible neutrality, which seems difficult. So the next-best place for such mechanisms is the application layer (Optimism, the Uniswap DAO, ENS…). Hopefully, projects that don’t want to run their own funding programs can pledge to donate to Gitcoin, so that the Gitcoin team doesn’t need to worry about funding.

As for “what is the best allocation mechanism”, this question can be answered experimentally, and the best solution may just be to have a bunch of mechanisms run in parallel.

Barnabé Monnot Reply

I’m not sure if there will be such a mechanism that has the best properties and also gives us all the results we want. 

There’s enough economic research on impossibility theorems to tell us that when we think we can get everything we want, we’re actually giving something else up :p That’s when we have to start defining what we actually mean by “credibly neutral” and what the “long-term incentives” are really after!

The best approach seems to be to keep experimenting with different models, tweaking their parameters and evaluating whether they give us the results we want. It would be good to have more experiments like this in a space that is naturally inclined towards public goods and collaboration mechanisms.

oldmate89 asks a question

What are the risks (in terms of timing and execution) to successfully completing the merge in 2022? Is there any R&D still unfinished, or any unforeseen issues to be resolved?

Danny Ryan Reply

At this point, security and testing are the long tail of the work. We need to find and fix all remaining problems through the client implementations and by testing and attacking the software from many angles. It’s all a work in progress, and I personally expect things to settle down soon, but until it does settle, it remains an unknown.

Fredrik Svantes Reply

There is a “Mainnet Merge Readiness Checklist” that tracks our current work, which you can find here: https://github.com/ethereum/pm/blob/master/Merge/mainnet-readiness.md

mikeifyz asked a question

Can someone explain “multidimensional EIP-1559” to me in plain language?

Vitalik Reply

Instead of stuffing all resources into one unit (gas), we split them into different resources, like separate bills for water, electricity, and gas (i.e. EVM execution, on-chain data, state reads and writes)… each resource has its own independently floating market price. This lets us target the long-term average usage of each resource, making the load on the blockchain more stable.

Justin Drake Reply

Different things should be priced independently. Would it make sense for a bottle of milk to always cost exactly two diapers? Of course not! The market should price milk and diapers separately.
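A minimal sketch of the idea behind multidimensional EIP-1559. The resource names, targets, and usage figures below are made up; the 1/8 adjustment factor mirrors the existing EIP-1559 base fee update rule, and the actual proposal may differ:

```python
# Illustrative sketch: one EIP-1559-style base fee per resource instead of a
# single gas price.
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8  # same adjustment rate as EIP-1559

def update_base_fee(base_fee: float, used: float, target: float) -> float:
    # Move the fee toward equilibrium in proportion to how far the previous
    # block's usage was from the target, exactly as EIP-1559 does for gas.
    delta = base_fee * (used - target) / target / BASE_FEE_MAX_CHANGE_DENOMINATOR
    return max(base_fee + delta, 0.0)

# Each resource gets its own floating price (names and numbers are made up).
base_fees = {"evm_execution": 10.0, "calldata": 2.0, "state_access": 5.0}
targets = {"evm_execution": 15e6, "calldata": 1e6, "state_access": 4e6}
block_usage = {"evm_execution": 20e6, "calldata": 0.2e6, "state_access": 4e6}

for resource in base_fees:
    base_fees[resource] = update_base_fee(
        base_fees[resource], block_usage[resource], targets[resource]
    )

print(base_fees)
# Execution was over its target, so its price rises; calldata was under
# target, so its price falls; state access was on target, so it is unchanged.
```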

TShougo asks a question

I would like to ask about block production and broadcasting after the merge.

As far as I know, Gasper is the rule by which blocks are finalized and forks are chosen, while block production is still the same as under PoW.

After the merge, is it full PoS or a hybrid PoW/PoS model (block production under PoW, consensus under PoS)?

How is the Kintsugi testnet going, are there issues or no issues at all? (There was a paper published 1-2 months ago that defined three possible reorg attacks that PoS Ethereum could suffer from. Will these attacks be mitigated?)

Carl Beekhuizen Reply

Ah! Our terrible naming is getting us into trouble again! I think a few concepts are being conflated here.

  • Casper FFG (short for Casper the Friendly Finality Gadget) is the name of the finality component, which could be used on top of PoW or PoS. Ethereum will not use a PoW/PoS hybrid chain; it will use Gasper, a pure PoS design.

     

  • Gasper is a contraction of GHOST (Greedy Heaviest Observed SubTree, the fork-choice rule that follows the heaviest observed subtree) + Casper FFG. It is the pure PoS consensus protocol used by today’s beacon chain and by post-merge Ethereum.

     

  • The paper describing those three possible reorg attacks was co-authored by some of my colleagues, and we have mitigations in place for all of them; see this tweet for more: https://twitter.com/casparschwa/status/1454511836267692039

jdthrowy asked a question

1. What impact will the new proposer/builder paradigm have on the power consumption of the merged Ethereum?

2. Is the power consumption of rollup hardware included in the post-merge energy estimates?

3. Are carbon offsets included in the Ethereum roadmap?

Justin Drake Reply

How will the new proposer/builder paradigm affect the power consumption of the merged Ethereum?

PBS (whether implemented before or after the merge) does not affect Ethereum’s electricity consumption.

Is the power consumption of rollup hardware included in the post-merge energy estimates?

It depends on the author of the specific energy forecast. Most of what I’ve seen is focused on L1 (excluding the rapidly growing L2).

Are carbon offsets included in the Ethereum roadmap?

Not on the roadmap. As far as I know, carbon offset schemes today have counterparty risk and lack credible neutrality, which makes them largely incompatible with Ethereum L1.

(As a side note, the energy consumption of the merged Ethereum L1 will be small. The approximate consumption is the equivalent of 10,000 always-on computers.)

Carl Beekhuizen Reply

1. PBS doesn’t have a big impact on overall energy consumption, and it’s hard to predict ahead of time because it depends a lot on how the network of builders and searchers ends up looking.

2. Not in the numbers I calculated (which are the figures usually published online). Optimistic rollups’ power consumption is low because there is only one sequencer running at any time. ZK-rollups may require a moderate amount of energy, but there is so much innovation here that it’s hard to know where things will end up.

3. No, and I personally don’t support it. While I’m in favor of reducing carbon emissions, implementing offsets at the protocol layer would require some kind of on-chain governance to choose providers, which I can see quickly becoming problematic. I think this should be implemented at the DAO and application level; for example, a “GreenUniswap” could charge a few gwei extra per transaction to buy offsets, and users could then factor that into their choice of which exchange to use.

arredr2 asks a question

Earlier this year, Carl Beekhuizen and others wrote a blog post describing how Ethereum’s energy usage would decrease by ~99.95% after switching to PoS. Has there been any research on L2 transaction energy consumption?

  • How many computers are running L2 software?
  • How much energy does the L2 software consume?
  • How many transactions are there per day on L2?
  • How much energy does an L1 proof consume?

Carl Beekhuizen Reply

These are all good questions, but I don’t have specific kWh figures at the moment. L2 is developing rapidly and implementations differ a lot, so it doesn’t make much sense to quote numbers here. I could write another article on L2 energy consumption; I’ll add it to my to-do list. :)

Here are some rough ideas:

  • The energy use of optimistic rollups and ZK-rollups can be very different (especially in the short to medium term)
  • L2s are usually very efficient because only one sequencer is deciding everything at a given time
  • The energy cost of putting data on-chain is almost impossible to estimate at this point, as the sharding specification is not yet settled
  • The energy cost of optimistic-rollup disputes doesn’t really matter, since disputes shouldn’t happen
