Conversation with Vitalik Buterin: The combination of sharding and Rollups will bring a 10,000x scaling increase

With Rollups, you can put 90% of the data and 99% of the computation off-chain, keeping only 10% of the data and 1% of the computation on-chain, which increases scalability by roughly 100x.


Source: Lex Fridman Podcast

Editor: Southwind

Recently, Ethereum co-founder Vitalik Buterin appeared in an interview with podcast host Lex Fridman, where he talked about cryptocurrencies, regulation, MEV (Miner Extractable Value), Ethereum 2.0, PoS security, Layer 2 (Rollups), the Merge, Polygon, and more. The interview runs about three hours; the excerpts below are taken from it.

Lex Fridman: Shiba Inu was created in 2020 to mimic Dogecoin. You were given 50% of Shiba’s total supply; you then “burned” 90% of the gift, worth about $6.7 billion, and gave the remaining 10% (then worth about $1.2 billion) to India’s COVID-19 relief fund, saying you didn’t want that much power.

Vitalik: I’ll start with the background of these coins and the story of how they were given to me. I invested $25,000 in Doge in 2016, wondering how I was going to explain to my mom that I had put my money into a coin whose only interesting feature was the dog logo, and it turned out to be one of my best investments. Then in late 2020, Elon Musk started talking about Dogecoin, and its market cap skyrocketed to around $50 billion at the time. It jumped several times; the first time it went from 0.8 cents to about 7 cents, and that happened in one day.

I remember being in Singapore, seeing the price spike over 100%, and thinking that my Doge holdings were suddenly worth a lot of money. I sold half of them for $4.3 million and just donated it. A few hours later, the price dropped from about 7 cents to 4 cents, so having sold at the high point I felt like a great trader. Then Doge went from 4 cents back to 7 cents, and then to 50 cents. Doge became so influential that a lot of people who had never heard of Ethereum had heard of Doge. That was something I didn’t anticipate.

And then some people thought: if Doge’s market cap can reach $50 billion, then other coins that imitate it should be able to reach billions of dollars, and I think that’s what the people who created Shiba thought. They gave me 50% of Shiba’s supply directly, but they weren’t the first project to give me coins. Around the end of 2020, there was an oracle project, Tellor, which I think was a competitor to Chainlink. I remember they gave me $50,000 worth of coins directly, and then went around saying, “Look, Vitalik holds our token, he’s one of our backers.”

Once I realized this, I publicly sold their token through Uniswap and put an end to the rumor. The Shiba team was a bit smarter: they credited their coins not to that address of mine but to my cold wallet. Then I noticed a lot of people talking about this coin, and that I had been given coins worth billions of dollars. After I retrieved my cold wallet key, I started selling some of the coins and donating others directly to a couple of charities. In the end I sold off 80% of the Shiba, donated the ETH I got to a few organizations, and donated the remaining 20% of Shiba directly, including to the COVID relief fund in India and others.

Lex Fridman: How do you see the regulation of blockchain? What are the best case and worst case scenarios?

Vitalik: The best case scenario is that blockchain continues to boom, and we find ways to scale it so that people can do all kinds of things on-chain: all the incredible things people have been talking about. There would be lots of great applications running on the blockchain, like DAOs that let people interact in better ways and let artists benefit more fairly, and enough popular support for people to realize that cryptocurrencies can do a lot of good, with other innovative potential yet to be understood.

The worst case scenario is that people suddenly decide the technology is being used by bad actors. I don’t think governments can stop blockchains from existing, but they do have the ability to marginalize them: for example, by banning all exchanges and banning all mainstream employers from accepting and using cryptocurrency payments, so that blockchains have less impact. Obviously I’m hoping for the best.

Lex Fridman: Let’s talk about Ethereum 2.0. How will Eth2 make Ethereum more scalable, more secure, and more sustainable?

Vitalik: The reason we have actually been de-emphasizing the Eth2 moniker recently is that initially we envisioned one big, grand vision in which all the good things would arrive at once: a whole new blockchain and a whole new protocol. Then we gradually adjusted the roadmap into a more incremental form, with PoS and sharding arriving over time along with all the other features, so that the average Ethereum user experiences something seamless. Each step may be a bit more involved than a previous hard-fork upgrade, but from the user’s perspective it is not that complex. The two features once billed as the flagships of Ethereum 2.0, and now simply considered the flagships of Ethereum’s next evolution, are PoS and sharding.

PoS is a consensus algorithm, or consensus mechanism: a way for network nodes to agree on which blocks and transactions happen in what order, and to make sure that once a block is on the chain it cannot be reversed. Currently, blockchains such as Bitcoin and Ethereum use PoW, where many computers (nodes) agree on which blocks to accept. Sometimes two blocks are released at the same time, so consensus must be reached on the order of the blocks, hence the need for a “voting game”.

But voting weight cannot be assigned on a “one node, one vote” basis, because a bad actor could spin up 10 billion virtual nodes on a single computer, thereby own 99% of the network’s nodes, and control everything. To stop this, both PoW and PoS weight your vote by how many economic resources you contribute to the network. In PoW, you prove how many economic resources you have by how many computers you own and run 24/7, and this works because attacking the network requires investing in more computers, money, and power, which is very costly.

In PoS, rather than contributing computing power around the clock as in PoW, you only need to pledge (stake) a certain amount of coins into the system as your economic resource. I have liked PoS for many years because it requires fewer resources: it doesn’t require buying mining equipment from manufacturers and consuming a lot of energy the way PoW does. You can run a PoS validator node on an ordinary computer, the kind you’re using right now.
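
To make the stake-weighting idea concrete, here is a minimal Python sketch of choosing a block proposer with probability proportional to stake. It is an illustration only: real Ethereum validators stake a fixed 32 ETH each and the protocol derives its randomness differently, and all names here are hypothetical.

```python
import random

validators = {"alice": 96, "bob": 32, "carol": 64}  # name -> staked ETH

def pick_proposer(validators, seed):
    """Select a proposer with probability proportional to stake."""
    rng = random.Random(seed)
    point = rng.uniform(0, sum(validators.values()))
    cumulative = 0
    for name, stake in validators.items():
        cumulative += stake
        if point <= cumulative:
            return name

# Over many slots, alice (holding half the total stake) proposes
# roughly half the blocks.
counts = {name: 0 for name in validators}
for slot in range(10_000):
    counts[pick_proposer(validators, seed=slot)] += 1
print(counts)
```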

So this approach is much less resource-intensive and does not burden the environment. Another reason is that with PoS, the blockchain doesn’t need to pay the people who maintain the network as much as PoW pays miners. Currently, Bitcoin and Ethereum both hand roughly 4% of the total supply to miners every year; Ethereum issues about 4.7 million new ETH per year out of a current total supply of about 115 million ETH. With PoS, we expect to add only about 500,000 to 1 million ETH per year, which means the total supply will not grow too fast.
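
Those figures can be sanity-checked with quick arithmetic, using the approximate supply numbers quoted above:

```python
total_supply = 115_000_000           # approximate ETH supply at the time
pow_issuance = 4_700_000             # new ETH per year under PoW
pos_issuance = (500_000, 1_000_000)  # expected yearly range under PoS

print(f"PoW: {pow_issuance / total_supply:.1%} per year")   # ~4.1%
print(f"PoS: {pos_issuance[0] / total_supply:.1%} to "
      f"{pos_issuance[1] / total_supply:.1%} per year")     # ~0.4% to ~0.9%
```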

Lex Fridman: What do you think about the security of PoS?

Vitalik: I think PoS is very secure, because to successfully attack the Ethereum network you basically need to match the amount of ETH staked across the entire network. For example, right now about 5 million ETH is staked (on the beacon chain), so an attacker would need on the order of 5 million ETH to join the network and attack it. Secondly, PoS recovers from an attack much more easily than PoW. In PoS we have many countermeasures: for example, an automatic slashing mechanism that destroys the coins staked by an offender, and the community can also coordinate a soft fork to counter a (successful) attack, in which case the attacker loses a lot of coins on the new chain.
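
As a toy illustration of the slashing idea, the sketch below penalizes a validator that signs two different blocks for the same slot. The data structures and the all-or-nothing penalty are simplified assumptions, not the actual Ethereum spec:

```python
stakes = {"validator_1": 32.0}  # hypothetical validator -> staked ETH
seen = {}                       # (validator, slot) -> block hash signed

def on_signature(validator, slot, block_hash, penalty_fraction=1.0):
    """Record a signature; slash the signer if it contradicts an earlier one."""
    key = (validator, slot)
    if key in seen and seen[key] != block_hash:
        stakes[validator] *= (1 - penalty_fraction)  # destroy the pledged coins
        return "slashed"
    seen[key] = block_hash
    return "ok"

print(on_signature("validator_1", slot=100, block_hash="0xaaa"))  # ok
print(on_signature("validator_1", slot=100, block_hash="0xbbb"))  # slashed
print(stakes["validator_1"])                                      # 0.0
```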

Lex Fridman: Some see MEV (Miner Extractable Value) as a threat to Ethereum. What is MEV and how do you deal with it?

Vitalik: The MEV (Miner Extractable Value) problem exists in both PoW and PoS; it could also be called block proposer extractable value. Basically, if you have influence over which transactions are packed into a block and in what order, then you can exploit this for financial gain beyond transaction fees, for example by front-running or back-running other people’s transactions, letting the block proposer capture a share of the profit.
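
As a toy example of why ordering power is worth money, consider a proposer who sees a large pending buy on a constant-product (x·y = k) exchange pool and “sandwiches” it: buying first, letting the victim’s trade push the price up, then selling. The pool model and numbers are purely illustrative:

```python
def swap_eth_for_token(pool, eth_in):
    """Constant-product swap; mutates the pool and returns tokens out."""
    k = pool["eth"] * pool["token"]
    pool["eth"] += eth_in
    tokens_out = pool["token"] - k / pool["eth"]
    pool["token"] -= tokens_out
    return tokens_out

def swap_token_for_eth(pool, token_in):
    k = pool["eth"] * pool["token"]
    pool["token"] += token_in
    eth_out = pool["eth"] - k / pool["token"]
    pool["eth"] -= eth_out
    return eth_out

pool = {"eth": 1000.0, "token": 1_000_000.0}

bought = swap_eth_for_token(pool, 50.0)      # proposer front-runs...
swap_eth_for_token(pool, 200.0)              # ...victim's big buy lands...
eth_back = swap_token_for_eth(pool, bought)  # ...proposer back-runs.

print(f"proposer profit: {eth_back - 50.0:.2f} ETH")  # ~20 ETH extracted
```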

This phenomenon is a challenge because, first, it sometimes degrades the user experience and puts users in a worse trading position; the bigger risk is that the economies of scale MEV gives to miners or validators can make PoW mining or PoS validation more centralized. The ecosystem has taken MEV seriously, with projects such as Flashbots already underway. It’s a real risk, but we’re already doing things to address it.

Lex Fridman: Let’s talk about the concept of scaling, specifically Layer 1, Layer 2 and the interaction between the two, and the idea of sharding.

Vitalik: There are two paradigms for scaling blockchains: Layer 1 scaling and Layer 2 scaling. L1 scaling means making the blockchain itself handle more transactions through some mechanism, despite the performance limits inherent in the blockchain; L2 scaling means leaving L1 unchanged and instead building protocols on top of it that inherit L1’s security while doing much of the work off-chain, thereby gaining more scalability. In Ethereum, the most popular L2 paradigm is Rollups, and the most popular L1 scaling paradigm is sharding.

Lex Fridman: One way to scale a blockchain is to increase the block size. Before we talk about sharding, can you talk about the block size debate?

Vitalik: It’s a trade-off between making the blockchain better to write to (i.e., to transact on) and better to read (i.e., for nodes to verify that the transactions on the chain are correct). Both matter equally for decentralization. If a blockchain is expensive to read, people end up having to trust a small number of nodes, which could then change the rules of the blockchain without anyone else’s consent; if a blockchain is very expensive to write to (transact on), then everyone moves to very centralized secondary systems.

So I think it needs to be a balance between the two; favoring either one leads the blockchain in an unhealthy direction. I think there are two main reasons Bitcoin’s block size is currently 1 MB: one is that they think being able to read the blockchain is really important, and the other is that a lot of people defend the principle of never hard-forking it. A larger block size means the blockchain becomes more centralized, because fewer people can run nodes, and changing the size could also require a hard fork.

Lex Fridman: So what is a shard? What are the characteristics of sharding?

Vitalik: Rather than turning up a parameter like the block size, sharding changes the architecture of the blockchain so that a single node only has to store a portion of the data and process a portion of the transactions for the entire network. The challenge in applying this model to a blockchain is that a blockchain is not just about spreading data across the network; it is about reaching consensus on that data and ensuring the data agreed upon is correct. So there is a puzzle: suppose you need a blockchain that can process 10,000 transactions per second, but each computer node can only process 100 transactions per second. How can a single computer trust the other computers without verifying all the transactions itself?

For example, suppose there are 10,000 validators (stakers) in a PoS chain, and for simplicity assume each validator stakes the same number of coins. We then randomly shuffle the validators and assign 100 of them (forming a committee) to validate one block, another 100 to validate another block, and so on. Validity information is broadcast like this: each of the 100 validators signs the block to indicate they agree it is valid, all the signatures for that block are aggregated into one signature, and that is broadcast to the other validators in the network, who then only need to verify the signature rather than the block’s transactions directly. When the other validators see this signature, they do not directly conclude that the block is valid; rather, they conclude that a majority of the committee agreed the block is valid.
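
A minimal sketch of that committee scheme (the committee size, the shuffle, and the approval threshold are illustrative, not Ethereum’s exact spec):

```python
import random

def assign_committees(validators, committee_size=100, seed=0):
    """Randomly shuffle validators and split them into fixed-size committees."""
    shuffled = validators[:]
    random.Random(seed).shuffle(shuffled)  # attacker cannot choose placement
    return [shuffled[i:i + committee_size]
            for i in range(0, len(shuffled), committee_size)]

def committee_approves(signers, committee, threshold=2 / 3):
    """Accept the block if a supermajority of its committee signed it."""
    return len(signers & set(committee)) >= threshold * len(committee)

validators = [f"v{i}" for i in range(10_000)]
committees = assign_committees(validators)
print(len(committees))  # 100 committees of 100 validators each

committee = committees[0]
signers = set(committee[:70])                  # 70 of 100 members attested
print(committee_approves(signers, committee))  # True: supermajority reached
```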

So as long as I believe that most of the validators on that committee are honest (and because validators are randomly assigned, an attacker cannot stuff all the nodes under his control into the same committee; the attacker’s nodes are shuffled like everyone else’s), an invalid block will not make it into the blockchain. That is a simple form of sharding. There are other, more sophisticated tools, such as zk-SNARKs, a kind of zero-knowledge proof: the idea is to generate a cryptographic proof showing that some complex computation was run correctly on a certain piece of data.

If such a proof is generated, for example, and you see a zk-SNARK proof indicating that a block is valid, then you can trust that the block is valid. There is also something called data availability sampling, which lets you be confident that the data in a block has actually been published. If you stack these methods together, you can create a blockchain system in which individual participants can trust that everything happening on-chain is correct without having to verify it all themselves. That’s sharding.
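
Here is a toy version of data availability sampling: a light node checks a few random chunks rather than downloading the whole block. Production designs add erasure coding so that hiding even a small part of the data forces the publisher to withhold around half the chunks; this sketch skips that step:

```python
import random

def sample_availability(chunks, num_samples=20, seed=0):
    """Probe random chunks; report False if any probe hits a missing one."""
    rng = random.Random(seed)
    for _ in range(num_samples):
        if chunks[rng.randrange(len(chunks))] is None:
            return False
    return True

published = ["data"] * 512                # fully available block
withheld = ["data"] * 256 + [None] * 256  # half the chunks never published

print(sample_availability(published))  # True
print(sample_availability(withheld))   # almost certainly False: each probe
                                       # hits a hole with probability 1/2
```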

Lex Fridman: As far as I know, what is being proposed for Ethereum is to implement 64 shards. How is this scaling achieved? Is that number fixed? Is this how you achieve the scalability to compete with credit cards or Visa?

Vitalik: Over time, the number of 64 shards can be increased via a hard fork; theoretically 1,024 shard chains could be reached. More chains bring challenges, such as needing logic to check and manage all of them, and higher costs if there are too many, but you can still improve things somewhat. The other thing we’re doing is combining sharding with Rollups.

Lex Fridman: Oh, Rollups, so let’s talk about the idea of L2 now.

Vitalik: The basic idea of a Rollup is that users send transactions to some central aggregator, and in theory anyone can be an aggregator; it’s a permissionless model. What the aggregator does is strip out all transaction data that is not needed to update the state, then keep the data that is needed and compress it, so that only a small amount of compressed data is published on-chain rather than all of the transaction data. The amount of data published on-chain may then shrink by a factor of ten.
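
A rough illustration of where that factor of ten can come from: the rollup drops signatures (covered by a single proof or dispute game), infers gas and nonce fields from its own rules, and replaces 20-byte addresses with short indices into an address table. The byte counts below are ballpark assumptions, not any real rollup’s transaction format:

```python
full_tx = {
    "signature": 65,   # replaced by one aggregate proof / dispute game
    "gas_fields": 16,  # implied by the rollup's own fee rules
    "nonce": 8,        # recoverable from the rollup's state
    "from": 20,        # full 20-byte address
    "to": 20,
    "value": 12,
}

compressed_tx = {
    "from_index": 4,   # index into the rollup's address table
    "to_index": 4,
    "value": 3,        # compact encoding of the amount
}

full, small = sum(full_tx.values()), sum(compressed_tx.values())
print(f"{full} bytes -> {small} bytes (~{full / small:.0f}x less on-chain data)")
```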

Then there is the computation, which is done off-chain rather than on-chain. There are two ways to do this. One is the zk-Rollup, which provides a zk-SNARK proof saying, in effect, “I did the computation and here is the proof,” submits that proof to the chain, and then everyone verifies the proof without having to verify all the transactions. The other is the Optimistic Rollup, where first one party claims the result of the transactions is correct, and another party can then disagree and claim the result is different; if there is such a dispute, the whole block of data is published on-chain and verified, and the party in the wrong loses a lot of money.
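
A minimal sketch of that Optimistic Rollup dispute game: the aggregator posts a claimed post-state, anyone may challenge it, the disputed transactions are replayed, and whoever was wrong forfeits a bond. All names and the flat bond are hypothetical:

```python
def apply_txs(state, txs):
    """Replay simple balance transfers on a copy of the state."""
    state = dict(state)
    for sender, receiver, amount in txs:
        state[sender] -= amount
        state[receiver] = state.get(receiver, 0) + amount
    return state

def resolve_dispute(pre_state, txs, claimed_state, bond=10):
    actual = apply_txs(pre_state, txs)
    if actual == claimed_state:
        return f"challenger loses {bond} (claim was correct)"
    return f"aggregator loses {bond} (claim was fraudulent)"

pre = {"alice": 100, "bob": 0}
txs = [("alice", "bob", 40)]
bad_claim = {"alice": 100, "bob": 40}  # aggregator "forgot" to debit alice

print(resolve_dispute(pre, txs, bad_claim))  # aggregator loses 10 ...
```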

So with Rollups it is possible to put 90% of the data and 99% of the computation off-chain, keeping 10% of the data and 1% of the computation on-chain, which increases scalability by about 100x. These systems are live today for some applications, such as Loopring, a zk-Rollup-based payment platform, where you can deposit funds and pay a very low transaction fee of, say, 5 cents instead of $5. Combining Rollups with sharding then gives a 10,000x increase in scalability, resulting in thousands of transactions per second and beyond.
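
Worked out, the 10,000x is simply the two gains multiplied; the baseline throughput below is an assumed round number for illustration, not a figure from the interview:

```python
base_tps = 15        # assumed baseline for an unsharded, rollup-free L1
rollup_gain = 100    # from the 90%/99% off-chain estimate above
sharding_gain = 100  # rough added base-layer capacity from sharding

print(f"rollups alone:      ~{base_tps * rollup_gain:,} tps")   # ~1,500
print(f"rollups + sharding: {rollup_gain * sharding_gain:,}x "
      f"-> ~{base_tps * rollup_gain * sharding_gain:,} tps")    # toy estimate
```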

Lex Fridman: So this scalability allows for faster processing of large numbers of transactions at a much lower cost…

