Jump Crypto: Analyzing the Core Dimensions of Layer 1 and Layer 2 Scaling Solutions

On March 30, 2022, Rahul Maganti, Vice President at Jump Crypto, published a brief but powerful framework for analyzing L1 public chains. On April 1, he followed up on that post by outlining various Layer 1 and Layer 2 scaling solutions and analyzing and comparing them along some core dimensions. Golden Finance's compilation follows:

Introduction

In the previous article, we developed a framework for analyzing L1 public chains, especially given the large number of new public chains launched recently. We also briefly noted that the motivation behind these novel L1 public chains is primarily the search for solutions to blockchain scalability. Let’s take a closer look at some of these solutions. The goals of this article are to:

  • Give an overview of various Layer 1 and Layer 2 scaling solutions.
  • Analyze and compare these different solutions along some core dimensions.
  • Offer our opinion on which scaling architectures are the most promising.

The Scalability Trilemma

In an early 2017 blog post, Vitalik Buterin proposed the scalability trilemma, referring to the three main properties that determine the viability of a blockchain system: (1) decentralization; (2) security; (3) scalability.

Of these three, we believe scalability remains the hardest problem to solve without unduly compromising the other two pillars. Security and decentralization remain critical to the performance of these systems, but as we will see later, solving the challenges of scaling distributed systems can also unlock key advances in decentralization and security. We therefore argue that the ability to scale blockchains effectively will be a key factor in determining the future success of the crypto industry more generally.

Broadly speaking, there are two main categories of scaling: Layer 1 and Layer 2. Both are relevant and critical for increasing blockchain throughput, but they focus on different layers of the Web3 stack. Scaling has undoubtedly received a lot of attention over the past few years and is often touted as a key path to mass adoption of blockchain technology, especially as retail usage continues to climb and transaction volumes increase.

Layer 1 (L1s)

There are a few major scaling architectures that have come to the fore:

  • State sharding
  • Parallel execution
  • Improvements to the consensus model
  • Validity proofs

State Sharding

Sharding comes in many varieties, but the core principles remain the same:

  • Sharding spreads the cost of verification and computation, so nodes do not need to verify every transaction.
  • Nodes in a shard, just like in a larger chain, must: (1) relay transactions; (2) validate transactions; (3) store the state of the shard.
  • Sharded chains should preserve the security primitives of non-sharded chains through: (1) an efficient consensus mechanism; (2) security proofs or signature aggregation.  

Sharding splits a single chain into K independent subnets or shards. If the network S has N nodes in total, each shard runs N/K of them. When the set of nodes in a given shard (say K_1) validates a block, it emits a proof or a set of signatures attesting that the block is valid. All other nodes (S ∖ K_1) then only need to verify that signature or proof. (Verification time is usually much smaller than rerunning the computation itself.)
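To make the verification asymmetry concrete, here is a minimal Python sketch. All names are hypothetical, and the "aggregate signature" is a hash-chain stand-in for a real scheme (e.g. BLS aggregation); the point is only that nodes outside the shard accept a block with one cheap check instead of re-executing its transactions.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class ShardBlock:
    shard_id: int
    transactions: list        # transactions executed inside the shard
    state_root: str           # claimed post-state of the shard
    aggregate_signature: str  # toy stand-in for a BLS aggregate or validity proof

def block_digest(block: ShardBlock) -> str:
    payload = f"{block.shard_id}|{block.transactions}|{block.state_root}"
    return hashlib.sha256(payload.encode()).hexdigest()

def toy_aggregate_sign(digest: str, validator_keys: list) -> str:
    # Stand-in for real signature aggregation: fold every key into the digest.
    acc = digest
    for key in validator_keys:
        acc = hashlib.sha256((acc + key).encode()).hexdigest()
    return acc

def verify_shard_block(block: ShardBlock, validator_keys: list) -> bool:
    # Nodes outside the shard only re-check the attestation: far cheaper
    # than re-executing every transaction in the block.
    return block.aggregate_signature == toy_aggregate_sign(block_digest(block), validator_keys)

# The validators of shard K_1 execute the block and attest to it...
keys = ["validator_a", "validator_b", "validator_c"]
block = ShardBlock(1, ["tx1", "tx2"], "0xabc", "")
block.aggregate_signature = toy_aggregate_sign(block_digest(block), keys)

# ...and every other node accepts it with a single cheap check.
assert verify_shard_block(block, keys)
```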

To understand the scaling benefits of sharding, it is critical to understand the value this architecture provides in increasing the total computing power of the chain. Suppose each node has an average capacity of O(C). A non-sharded chain can only ever process O(C), no matter how many nodes join; a sharded chain that processes B blocks in parallel across its shards has capacity O(C·B). The runtime savings are multiplicative! A more in-depth technical explanation from Vitalik can be found here (https://vitalik.ca/general/2021/04/07/sharding.html). Sharding has been the most notable foundational component of the Ethereum 2.0 and NEAR roadmaps.

Parallel Execution

Sharding and parallel execution are similar in many ways. But while sharding validates blocks in parallel on different subchains, parallel execution focuses on splitting the work of processing transactions across nodes. The effect of this architecture is that nodes can now process thousands of contracts in parallel!

We won’t go into the details of how it works, but here’s a great article (https://medium.com/solana-labs/sealevel-parallel-processing-thousands-of-smart-contracts-d814b378192) that goes deeper into how parallel execution works in Solana’s Sealevel runtime.
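As a rough illustration of the idea (not Sealevel’s actual implementation), the sketch below assumes each transaction declares the accounts it writes up front, so a runtime can group non-conflicting transactions into batches and execute each batch in parallel:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical transactions that declare their write sets up front;
# overlapping write sets force transactions into separate batches.
txs = [
    {"id": "tx1", "writes": {"alice"}},
    {"id": "tx2", "writes": {"bob"}},    # disjoint from tx1: same batch
    {"id": "tx3", "writes": {"alice"}},  # conflicts with tx1: next batch
]

def schedule(txs):
    """Greedily group transactions whose write sets do not overlap."""
    batches = []  # list of (transactions, locked accounts) pairs
    for tx in txs:
        for batch, locked in batches:
            if not (tx["writes"] & locked):
                batch.append(tx)
                locked |= tx["writes"]
                break
        else:
            batches.append(([tx], set(tx["writes"])))
    return [batch for batch, _ in batches]

def execute(tx):
    print(f"executing {tx['id']}")

for batch in schedule(txs):
    # Every transaction in a batch touches disjoint state, so the
    # runtime can safely execute the whole batch in parallel.
    with ThreadPoolExecutor() as pool:
        list(pool.map(execute, batch))
```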

Consensus Model

Consensus is at the heart of a layer 1 blockchain protocol: for transactions/data to be finalized on-chain, participants in the network need a way to mutually agree on the state of the chain. Consensus is therefore a means of ensuring the consistency of shared state as new transactions are added and the chain progresses. Different consensus mechanisms, however, lead to fundamental differences in the key metrics by which we measure blockchain performance: security, fault tolerance, decentralization, scalability, and more. That said, the consensus model alone does not determine the performance of a blockchain system. Different consensus models suit different scaling mechanisms, and it is the combination that ultimately determines the performance of a particular network.

Layer 2 (L2s)

Fundamentally, layer 2 scaling is based on the premise that resources on layer 1 (computational or otherwise) become prohibitively expensive. To reduce costs for users, services, and other network participants, the heavy computational load should be moved off-chain (to layer 2), while still preserving the underlying security guarantees provided by the cryptographic and game-theoretic primitives on layer 1 (public-private key pairs, elliptic curves, consensus models, etc.).

Early attempts at this primarily involved establishing a “trusted channel” between two parties off-chain and then settling state updates on layer 1. State channels do this by “locking some part of the blockchain state into a multi-signature contract controlled by a defined set of participants”. Plasma chains, first proposed by Vitalik, instead allow the creation of an unlimited number of child chains, with fraud proofs then used to settle transactions on layer 1.

Rollups + Flavors

Rollups are also a way to move computation off-chain (to layer 2) while still recording messages or transactions on-chain (layer 1). Transactions that would otherwise be recorded, aggregated, and verified on layer 1 are instead processed on layer 2 and then published back to the original layer 1. This model achieves two goals: (1) it frees up computing resources on the base layer; (2) it still retains the underlying cryptographic security guarantees of layer 1.

At a high level, the flow looks like this:

  • Transactions are “aggregated”, and the transactions ordered by the Sequencer are passed to the Inbox contract.
  • Contracts deployed on L2 execute the contract calls off-chain.
  • The contract then publishes the Merkle root of the new state back to the L1 chain as calldata.
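Here is a minimal sketch of that pipeline. The class names (Sequencer, L2Executor, L1Contract) and the toy Merkle construction are illustrative, not any specific rollup’s API; they only show how heavy execution stays off-chain while a single state root lands on L1.

```python
import hashlib

def h(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()

def merkle_root(leaves):
    """Toy Merkle root over the rollup's post-state entries."""
    layer = [h(leaf) for leaf in leaves] or [h("")]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])  # duplicate the last node on odd layers
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

class Sequencer:
    def __init__(self):
        self.inbox = []
    def submit(self, tx):
        self.inbox.append(tx)  # the sequencer fixes the transaction ordering

class L2Executor:
    def __init__(self):
        self.state = {}
    def process(self, inbox):
        for tx in inbox:  # the heavy execution happens off-chain
            self.state[tx["to"]] = self.state.get(tx["to"], 0) + tx["amount"]
        return merkle_root(sorted(f"{k}:{v}" for k, v in self.state.items()))

class L1Contract:
    def __init__(self):
        self.state_roots = []
    def post_root(self, root):
        self.state_roots.append(root)  # only the root is recorded on L1 as calldata

seq, l2, l1 = Sequencer(), L2Executor(), L1Contract()
seq.submit({"to": "alice", "amount": 5})
seq.submit({"to": "bob", "amount": 3})
l1.post_root(l2.process(seq.inbox))
print(l1.state_roots)
```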

Optimistic Rollups

Optimistic Rollups are, as the name suggests, optimistic: validators publish transactions on-chain under the prior assumption that they are valid. Other validators can challenge a transaction if they so choose, but are not required to (think of it as an innocent-until-proven-guilty model). Once challenged, however, the two parties (say, Alice and Bob) are forced to engage in a dispute resolution protocol.

The dispute resolution algorithm works as follows:

1. Alice claims that her assertion is correct. Bob disagrees.

2. Alice then divides the assertion into equal parts (for simplicity, assume a bisection into two halves).

3. Bob then has to choose which part of the assertion (say, the first half) he believes is false.

4. Steps 1-3 run recursively.

5. Alice and Bob play this game until the disputed sub-assertion is a single instruction. Now the protocol just needs to execute this one instruction. If Alice is correct, Bob loses his stake, and vice versa.

A more in-depth explanation of the Arbitrum Dispute Resolution Protocol can be found here.

In the optimistic case, the cost is O(1): small and constant. In the disputed case, the algorithm runs in O(log n) rounds, where n is the size of the original assertion.
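Below is a toy Python model of the bisection game, under the simplifying assumptions that the disputed assertion is a trace of intermediate machine states (a running sum) and that Alice mis-executed exactly one instruction:

```python
N = 32  # number of instructions covered by the disputed assertion

def true_state(i):
    # Ground-truth machine state after the first i instructions.
    return sum(2 * k for k in range(i))

def alice_state(i):
    # Alice mis-executes instruction 13, so every state from 14 onward diverges.
    return true_state(i) + (999 if i > 13 else 0)

def dispute():
    lo, hi = 0, N  # invariant: both parties agree on state lo, disagree on state hi
    while hi - lo > 1:  # O(log n) rounds, as noted above
        mid = (lo + hi) // 2
        if alice_state(mid) == true_state(mid):
            lo = mid  # still in agreement at mid, so the fault lies in (mid, hi]
        else:
            hi = mid  # already diverged by mid, so the fault lies in (lo, mid]
    # Exactly one instruction (number lo) remains; the protocol executes
    # just this step on-chain to decide who loses their stake.
    step_ok = alice_state(hi) == true_state(lo) + 2 * lo
    return "Alice wins; Bob is slashed" if step_ok else "Bob wins; Alice is slashed"

print(dispute())  # -> Bob wins; Alice is slashed
```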

A key result of this optimistic verification and dispute resolution architecture is that optimistic Rollups have an honest party guarantee, which means that in order to keep the chain secure, the protocol only needs one honest party to detect and report fraud.

ZK Rollups

In many blockchain systems and layer 1 public chains today, consensus is achieved by effectively “re-running” transactions to verify state updates to the chain. In other words, to finalize a transaction on the network, every node has to perform the same computation. This seems like a naive way to verify chain history, and it is! The question then becomes: is there a way to quickly verify the correctness of a transaction without recomputing it on a large number of nodes? (For those with some background in complexity theory, this idea is at the heart of P vs. NP.) Well, yes! This is where ZK rollups come in handy: in effect, they ensure that the cost of verification is significantly lower than the cost of performing the computation.

Now, let’s dive into how ZK rollups achieve this while maintaining a high level of security. A high-level ZK rollup protocol includes the following components:

  • ZK Verifier – verifies proofs on-chain.
  • ZK Prover – takes data from an application or service and outputs a proof.
  • On-chain contracts – track on-chain data and verify the state of the system.
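The sketch below mirrors those three interfaces in Python. To be clear, the “proof” here is just a hash commitment: it is neither sound nor zero-knowledge, and is not how a real SNARK/STARK works. It only illustrates the shape of the flow, with the prover doing the heavy execution off-chain and the contract performing a cheap constant-time check.

```python
import hashlib

def prover(transactions, old_state):
    # The heavy work happens here, off-chain: execute every transaction.
    state = dict(old_state)
    for to, amount in transactions:
        state[to] = state.get(to, 0) + amount
    new_root = hashlib.sha256(str(sorted(state.items())).encode()).hexdigest()
    proof = hashlib.sha256(f"proof|{new_root}".encode()).hexdigest()  # toy "proof"
    return state, new_root, proof

class RollupContract:
    """On-chain contract: tracks the state root and verifies each update."""
    def __init__(self, root):
        self.root = root
    def verify_and_update(self, new_root, proof):
        # On-chain verification is a cheap constant-time check, never a
        # re-execution of the whole batch.
        expected = hashlib.sha256(f"proof|{new_root}".encode()).hexdigest()
        if proof != expected:
            raise ValueError("invalid proof")
        self.root = new_root

state, root, proof = prover([("alice", 5), ("bob", 3)], {})
contract = RollupContract(root="genesis")
contract.verify_and_update(root, proof)
print(contract.root)
```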

Numerous zero-knowledge proof systems have emerged, especially in 2021. Two main families of proofs are already well known: (1) SNARKs and (2) STARKs, although the lines between them blur more every day.

We won’t go into the technical details of how ZK proof systems work, but the diagram below shows how a validity proof can be generated from a smart contract.

(Diagram: generating a validity proof from a smart contract.)

Key Dimensions for Comparing Rollups

Speed

As we mentioned before, the goal of scaling is to provide a way to increase the speed at which the network can process transactions while reducing computational costs. Because Optimistic Rollups do not generate proofs for each transaction (no additional overhead in the honest case), they are generally much faster than ZK Rollups.

Privacy

ZK proofs are inherently privacy-preserving because they do not require access to the underlying inputs of a computation in order to verify it. Consider a concrete example: suppose I want to prove to you that I know the combination that opens a box. A naive approach is to share the combination with you and let you try to open the box. If the box opens, then obviously I knew the combination. But suppose I have to prove that I know the combination without revealing anything about it. Let’s design a simple ZK-proof protocol to demonstrate how this works:

  • I ask you to write a sentence on a piece of paper.
  • I hand you the box and have you slip the paper through a small slit in the box.
  • I turn my back to you and open the box.
  • I take out the note and hand it back to you.
  • You confirm that the note is yours!

That’s it! A simple zero-knowledge proof. Once you’ve confirmed that the note is the same one you put in the box, I’ve proven to you that I can open the box, and therefore that I knew the combination all along.

In this way, zero-knowledge proofs are particularly good at allowing one party to prove the validity of a claim to another without revealing any of the underlying information.


EVM Compatibility

The Ethereum Virtual Machine (EVM) defines a set of instructions or opcodes for implementing basic computer operations and specific blockchain operations. Smart contracts on Ethereum are compiled into bytecode. The bytecodes are then executed as EVM opcodes. EVM compatibility means that there is a 1:1 mapping between the running virtual machine instruction set and the EVM instruction set.

The largest layer 2 solutions on the market today are built on Ethereum. EVM compatibility provides a seamless, minimal-code migration path for Ethereum-native projects that want to move to layer 2: they simply redeploy their contracts on L2 and bridge their tokens over from L1.

The largest optimistic rollup projects, Arbitrum and Optimism/Boba, are both EVM compatible. zkSync is one of the few ZK rollups built with EVM compatibility in mind, but it still lacks support for some EVM opcodes, including ADDMOD, SMOD, MULMOD, EXP, and CREATE2. While missing CREATE2 does present issues for contract interaction, upgradeability, and user onboarding, we believe support for these opcodes will arrive soon and will not be a significant obstacle to ZK rollups in the long run.
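As a hypothetical pre-migration check, a project could scan its compiled bytecode for opcodes the target L2 does not yet support. The opcode byte values below come from the EVM instruction set; the scanner itself is illustrative:

```python
# EVM opcode byte values for the instructions listed above.
UNSUPPORTED = {0x07: "SMOD", 0x08: "ADDMOD", 0x09: "MULMOD",
               0x0A: "EXP", 0xF5: "CREATE2"}

def scan_bytecode(code: bytes):
    """Walk compiled EVM bytecode and report unsupported opcodes,
    skipping the immediate data bytes that follow PUSH1..PUSH32."""
    found, i = [], 0
    while i < len(code):
        op = code[i]
        if op in UNSUPPORTED:
            found.append((i, UNSUPPORTED[op]))
        if 0x60 <= op <= 0x7F:  # PUSH1..PUSH32 carry 1..32 bytes of inline data
            i += op - 0x5F      # skip the immediate bytes
        i += 1
    return found

# PUSH1 0x08 (here 0x08 is immediate data, not an opcode), then a real ADDMOD.
print(scan_bytecode(bytes([0x60, 0x08, 0x08])))  # -> [(2, 'ADDMOD')]
```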

Bridging

Because L2s are separate chains, they do not automatically inherit native L1 tokens. Native L1 tokens on Ethereum must be bridged to the corresponding L2 before they can interact with the dApps and services deployed there. The ability to bridge tokens seamlessly remains a key challenge, and different projects are exploring various architectures. Typically, once a user calls a deposit function on L1, an equivalent token is minted on the L2 side. Designing a highly general architecture for this process can be particularly difficult given the wide range of tokens and token standards in use.
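The sketch below shows the lock-and-mint pattern just described. The contract names and the relayer loop are hypothetical; real bridges differ mainly in how they authenticate the deposit message that travels from L1 to L2.

```python
class L1BridgeContract:
    def __init__(self):
        self.locked = {}
        self.deposit_events = []
    def deposit(self, user, token, amount):
        # Tokens are locked (escrowed) in the L1 contract, not destroyed.
        self.locked[(user, token)] = self.locked.get((user, token), 0) + amount
        self.deposit_events.append({"user": user, "token": token, "amount": amount})

class L2BridgeContract:
    def __init__(self):
        self.balances = {}
    def mint(self, user, token, amount):
        # An equivalent ("wrapped") token is minted on the L2 side.
        self.balances[(user, token)] = self.balances.get((user, token), 0) + amount

l1, l2 = L1BridgeContract(), L2BridgeContract()
l1.deposit("alice", "TOKEN", 100)

# A relayer / messaging layer forwards each deposit event to L2.
for event in l1.deposit_events:
    l2.mint(event["user"], event["token"], event["amount"])

print(l2.balances)  # {('alice', 'TOKEN'): 100}
```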

Finality

Finality refers to the ability to confirm the validity of an on-chain transaction. On layer 1, a transaction is final almost as soon as a user submits it (although it takes time for nodes to process transactions from the mempool). On layer 2, this is not necessarily the case. A state update submitted to a layer 2 chain running an optimistic rollup protocol is first assumed to be valid. But in case the validator submitting the update is malicious, there must be enough time for an honest party to challenge the claim. Typically, this challenge period is set to about 7 days, and in practice users who want to withdraw funds from L2 may have to wait about 2 weeks!

ZK rollups, on the other hand, do not require such a long challenge period, because every state update is verified by a proof system. Transactions on a ZK rollup are therefore as final as transactions on layer 1. Not surprisingly, this near-instant finality has become a key advantage of ZK rollups for L2 scaling.

Instant Liquidity as a Means of Fast Finality

Some argue that while optimistic rollups do not necessarily guarantee quick finality on L1, “quick withdrawals” provide a clear, easy-to-use workaround by allowing users to access funds before the end of the challenge period. While this does give users a way to access their liquidity, the approach has several problems:

  • Additional overhead for maintaining liquidity pools for L2 to L1 withdrawals.
  • Quick withdrawals are not universal – only token withdrawals are supported. Arbitrary L2 to L1 calls cannot be supported.
  • Liquidity providers cannot guarantee the validity of transactions until the end of the challenge period.
  • Liquidity providers must either: (1) trust those to whom their liquidity is provided, limiting the benefits of decentralization; or (2) construct their own fraud/validity proofs, effectively defeating the purpose of the fraud proofs and consensus protocol built into the L2 chain.

Sequencing

The sequencer is like any other full node, but it has unilateral control over the ordering of transactions in the inbox queue. Without this ordering, other nodes/participants in the network could not determine the outcome of a particular batch of transactions. In this sense, the sequencer provides users with a degree of certainty when executing transactions.

The main argument against using sequencers for this purpose is that they create a central point of failure: if the sequencer goes down, activity on the L2 may be affected. Wait a minute… what does this mean? Isn’t that destroying the vision of decentralization? Hmm… kind of. Sequencers are typically run by the project developing the L2 and are generally viewed as semi-trusted entities acting at the will of project stakeholders. For the decentralization hardliners gnashing their teeth at the thought of this, you may take comfort in knowing that a great deal of work and research is going into decentralized fair ordering.

Recent sequencer outages on large L2 ecosystems (including Arbitrum and Optimism) continue to demonstrate the need for fault-tolerant, decentralized sequencing.

Capital Efficiency

Another key point of comparison between Optimistic Rollups and ZK Rollups is their capital efficiency. As mentioned earlier, Optimistic L2 relies on Fraud Proofs to secure the chain, while ZK Rollup utilizes Validity Proofs.

The security provided by fraud proofs rests on a simple game-theoretic principle: the cost to an attacker of trying to fork the chain should exceed the value they can extract from the network. In an optimistic rollup, validators stake a certain amount of tokens (such as ETH) on rollup blocks they believe are valid as the chain progresses. Malicious actors (those caught and reported by honest nodes) are slashed.

There is therefore a fundamental trade-off between capital efficiency and security: improving capital efficiency requires a shorter challenge period, which in turn increases the likelihood that a fraudulent assertion goes undetected or unchallenged by other validators in the network.
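A toy expected-value model makes the trade-off visible. All numbers are illustrative, and the assumption of an independent daily detection probability is a deliberate simplification:

```python
def attacker_ev(days, extractable_value, stake, p_detect_per_day=0.6):
    """Expected value of posting a fraudulent assertion, assuming each day
    of the challenge period gives honest validators an independent chance
    to catch and report it."""
    p_undetected = (1 - p_detect_per_day) ** days
    return p_undetected * extractable_value - (1 - p_undetected) * stake

for days in (1, 3, 7, 14):
    ev = attacker_ev(days, extractable_value=1_000_000, stake=100_000)
    print(f"challenge period {days:>2}d: attacker EV = {ev:>12,.0f}")

# Lengthening the period drives the attacker's EV sharply negative,
# at the cost of locking honest users' withdrawals for longer.
```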

(Figure: capital efficiency vs. challenge period curve.)

Changing the challenge period is equivalent to moving along the capital efficiency vs. challenge period curve. As the challenge period changes, however, users need to weigh its impact on the trade-off between security and finality; otherwise they would be indifferent to the change.

The current 7-day challenge periods for projects like Arbitrum and Optimism were chosen by their communities with these considerations in mind. Ed Felten of Offchain Labs has given an in-depth explanation of how they determined the optimal challenge period length.

By construction (relying on cryptographic assumptions rather than game-theoretic ones), validity proofs are far less susceptible to this capital efficiency/security trade-off.


Application-Specific Chains and Scaling

When we talk about a multi-chain future, what exactly do we mean? Will there be a plethora of high-performance layer 1s and even more layer 2 scaling solutions with different architectures, or just a handful of layer 3 chains with custom optimizations for specific use cases?

Our belief is that demand for blockchain-based services will fundamentally be driven by user demand for specific types of applications, be it NFT minting or DeFi protocols for lending, staking, and so on. In the long run, as with any technology, we expect users to want the underlying primitives abstracted away (in this case, the L1s and L2s that provide the core infrastructure for settlement, scalability, and security).

Application-specific chains provide a mechanism for deploying high-performance services by leveraging narrow optimizations. As such, we expect these types of chains to be key components of the Web3 infrastructure aimed at driving mass adoption.

There are two main forms these chains take:

  • Independent ecosystems with their own primitives, focused on very specific applications.
  • Additional layers built on top of existing L1 and L2 chains, fine-tuned to optimize performance for specific use cases.

In the short to medium term, these independent chains are likely to see significant growth, but we believe this will stem from their novelty rather than signal sustainable interest and usage. Even now, more mature application-specific chains like Celo remain relatively rare. While these independent application-specific ecosystems provide superior performance for particular use cases, they often lack the features that make general-purpose ecosystems so powerful:

  • Flexibility and ease of use
  • High composability
  • Liquidity aggregation and access to native assets

Next-generation scaling infrastructure must strike a balance between these two approaches.

Fractal Scaling Method

The fractal scaling method is closely related to the “layered model” of blockchain scaling. It provides a unique way to unify otherwise siloed and disparate application-specific chain ecosystems with the wider community; in doing so it helps maintain composability, enables access to common logic, and inherits the security guarantees of the underlying L1s and L2s.

How does it work?

  • Split transactions across local instances based on the use cases they are intended to serve.
  • Leverage the security, scalability, and privacy properties of the underlying L1/L2 layers while optimizing for unique customization needs.
  • Use novel architectures based on proof-carrying data and recursive proofs (for storage and computation).
  • Accompany every message with a proof that the message, and the history leading up to it, are valid.

Here’s a great article from Starkware discussing the fractal scaling architecture (https://medium.com/starkware/fractal-scaling-from-l2-to-l3-7fe238ecfb4f).

Closing Thoughts

Blockchain scaling has become more prominent over the past few years, and for good reason: the computational cost of verification on a highly decentralized chain like Ethereum has become prohibitive. As blockchain adoption grows, the computational complexity of on-chain transactions is also growing rapidly, further increasing the cost of securing the chain. Optimizations to existing layer 1 architectures, such as dynamic sharding, can be very valuable, but the dramatic increase in demand calls for a more nuanced approach to developing secure, scalable, and sustainable decentralized systems.

We envision each layer of the stack being optimized for specific behaviors, ranging from general-purpose computing to application-specific and privacy-enabled logic. Accordingly, we see rollups and other layer 2 technologies as core to scaling throughput, by enabling off-chain computation/storage together with fast verification.
