Why is “decomposition” the first principle of blockchain optimization?

Over the past two weeks I have spent a lot of time studying the latest chain-level innovations, and in my opinion ‘decomposition’ is the first principle guiding them.

What is decomposition?

The term grows out of ‘modular blockchain’ (my latest article explains why I prefer ‘decomposing’ to ‘modularizing’ blockchains), and there are some differences between the two:

  • Modularity refers to splitting the stack into layers, outsourcing at least one of the blockchain’s three core functions (execution, settlement, and data availability) away from the base chain.
  • Decomposition can happen even within a single layer: transactions can be separated by type while still living in the same layer.

The logic behind it is easy to understand: each unit’s efficiency is maximized when it only has to perform the simplest possible operations.

The Decomposition Phenomenon in the Latest Chain-Level Innovations

Ethereum – Danksharding

In the Ethereum Foundation research team’s AMA #8, Justin stated that ‘Ethereum is becoming more and more modular’:

Screenshot of Justin’s explanation on Reddit

A strategic approach to reducing complexity is modularity (see this article on encapsulated vs. systemic complexity). The good news is that Ethereum is becoming more and more modular.

Consensus and execution: The consensus layer is largely encapsulated, so execution developers like Peter rarely need to deal with its internal details.

Data and execution: The separation of data (danksharding) and execution (rollups) means that the work of execution (previously under the purview of the execution team) is outsourced to the wider rollup community.

Cryptography vs. non-cryptography: Complex low-level cryptography such as BLS12-381 is encapsulated in libraries, e.g. Peter interacts with the BLST API (and similarly for Verkle trees).

Proposer and builder: Proposer-builder separation (PBS) allows non-consensus-critical builder logic to be separated from consensus-critical proposer logic. I would like to see the emergence of two types of execution clients: proposer clients for validators and builder clients for the MEV (Miner Extractable Value) industry.

Prover and verifier: In the context of an enshrined zkEVM (zero-knowledge Ethereum Virtual Machine), non-consensus-critical prover logic can be separated from consensus-critical SNARK verification logic. Likewise, I would like to see clients become further specialized and modularized around the enshrined zkEVM.

But you should note that Justin’s ‘modularity’ goes way beyond the ‘modular blockchain’ that most people are talking about right now:

  • ‘Proposers and builders’ are not in the plans of any project other than Ethereum’s danksharding roadmap;
  • ‘Provers and verifiers’ are likewise absent from every other project’s plans, and this design is easier for other projects to integrate later.

The above two points fully fit the definition of ‘decomposition’.

Aptos, Sui

Parallel computing is at the core of the scaling innovation behind Aptos and Sui.

Parallel computing is a scaling technique that has been proven in Web2, but it comes with prerequisites, and it does not naturally fit today’s mainstream monolithic blockchains: transactions executed in parallel must not depend on one another, yet transactions within a block are often heavily interdependent.

Aptos and Sui plan to use preprocessing to work around this mismatch:

An academic approach pioneered by software transactional memory (STM) libraries is to detect and manage conflicting memory accesses. An STM library with optimistic concurrency control records memory accesses during execution, validates each transaction after execution, and aborts and re-executes a transaction when validation detects a conflict.
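
To make that loop concrete, here is a minimal sketch of optimistic concurrency in Python. Everything here is an assumption for illustration — the `Tx` class, the versioned key-value `state`, and the retry loop — and it is not the Block-STM scheduler that Aptos actually uses.

```python
# Toy optimistic-concurrency executor: run transactions speculatively, then
# validate their read sets and re-execute any transaction that conflicted.
# Illustrative only -- not the actual Block-STM algorithm.

state = {"alice": 100, "bob": 50}          # shared key-value state
versions = {k: 0 for k in state}           # a version counter per key

class Tx:
    def __init__(self, name, logic):
        self.name, self.logic = name, logic
        self.read_set = {}                 # key -> version observed while executing
        self.write_set = {}                # key -> value this tx wants to write

    def read(self, key):
        self.read_set[key] = versions[key]
        return self.write_set.get(key, state[key])   # read-your-own-writes

    def write(self, key, value):
        self.write_set[key] = value

    def execute(self):
        self.read_set.clear()
        self.write_set.clear()
        self.logic(self)

def run_optimistically(txs):
    pending = list(txs)
    while pending:
        for tx in pending:                 # speculative execution (parallel in a real system)
            tx.execute()
        retry = []
        for tx in pending:
            # Validation: every key we read must still be at the version we saw.
            if all(versions[k] == v for k, v in tx.read_set.items()):
                for k, v in tx.write_set.items():     # commit the write set
                    state[k] = v
                    versions[k] += 1
            else:
                retry.append(tx)           # conflict detected: abort and re-execute
        pending = retry

# Two transfers touching "alice" conflict; one is aborted and retried.
run_optimistically([
    Tx("t1", lambda tx: tx.write("alice", tx.read("alice") - 10)),
    Tx("t2", lambda tx: tx.write("alice", tx.read("alice") - 5)),
])
print(state["alice"])                      # 85: both debits eventually commit
```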

This preprocessing fits very well with the concept of ‘decomposition’:

  • Classify all transactions arriving over the P2P network so that each unit (in this case, an individual CPU core) only needs to execute one class of transactions (a sketch follows this list).
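
As a toy illustration of that classification step (not how Aptos or Sui actually schedule work), suppose every transaction declares up front which accounts it touches; we can then bucket transactions so that no two buckets share an account and hand each bucket to a different core:

```python
from collections import defaultdict

# Toy classifier: group transactions by the accounts they declare they touch,
# so transactions in different groups can safely run on different CPU cores.
# Purely illustrative -- real schedulers are far more sophisticated.

def group_by_touched_accounts(txs):
    """txs: list of (tx_id, accounts). Transactions that share an account
    land in the same group, so distinct groups never conflict."""
    parent = {}

    def find(a):                           # union-find with path compression
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for _, accounts in txs:                # accounts touched together form one class
        accounts = list(accounts)
        for a in accounts[1:]:
            union(accounts[0], a)

    groups = defaultdict(list)
    for tx_id, accounts in txs:            # bucket each tx by its class representative
        groups[find(next(iter(accounts)))].append(tx_id)
    return list(groups.values())

txs = [
    ("t1", {"alice", "bob"}),
    ("t2", {"carol"}),                     # no overlap with t1 -> its own group
    ("t3", {"bob", "dave"}),               # shares "bob" with t1 -> same group
]
print(group_by_touched_accounts(txs))      # [['t1', 't3'], ['t2']]
```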

Altlayer

Altlayer is a highly scalable, application-specific, pluggable execution-layer system that derives its security from an underlying L1 or L2 (layer 2 network).

If you asked me what distinguishes Altlayer from other mainstream layer 2 networks, I would say: Altlayer is an auto-scaling solution, while the others are more ‘static’.

Let’s imagine the following situation:

Many NFT projects do not actually need dedicated block space forever; they only need to occupy block space for a short time. When minting demand is expected to surge, the system automatically scales out, which solves this problem neatly.

Altlayer’s core idea also fits perfectly into the definition of ‘decomposition’:

Auto-scaling is a very common technique in the traditional cloud computing market for handling sudden demand. Burst demand is not the steady, regular kind, so it calls for some unconventional ‘decomposition designs’ to serve it (a toy sketch follows).
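
As a toy sketch of that idea (not Altlayer’s actual mechanism; the capacity figure and bounds below are invented), a threshold-based auto-scaler simply maps current demand to a number of ephemeral execution layers:

```python
# Toy threshold-based auto-scaler in the spirit of cloud auto-scaling.
# Illustrative only; the capacity figure and bounds below are invented.

MAX_TX_PER_LAYER = 1_000           # assumed throughput of one execution layer
MIN_LAYERS, MAX_LAYERS = 1, 16     # assumed scaling bounds

def desired_layers(pending_txs: int) -> int:
    """Scale the number of ephemeral execution layers with pending demand."""
    needed = -(-pending_txs // MAX_TX_PER_LAYER)    # ceiling division
    return max(MIN_LAYERS, min(MAX_LAYERS, needed))

# A quiet period keeps the baseline; an NFT-mint spike gets extra layers,
# which are torn down again once demand subsides.
for demand in (200, 12_000, 300):
    print(f"{demand} pending txs -> {desired_layers(demand)} execution layer(s)")
```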

Brief summary

The general framework of the ‘right’ blockchain is becoming clearer and clearer, but we still need plenty of optimization to make blockchains better. So how do we optimize existing blockchains?

  • One of the best optimizations is to layer the blockchain as a whole and then aggressively optimize each individual layer (this is what I call decomposition).
  • The rest comes from next-generation technologies such as zero-knowledge proofs, 10x bandwidth, 10x faster SSDs, and so on (if you ask me what is left besides decomposition).

Please note: decomposed components can also be recombined for further optimization!

The next Ethereum killer?

There has been a lot of discussion lately about who will be the biggest contender against Ethereum in the next bull run. Will the next ‘Ethereum killer’ be Celestia, or Aptos and Sui?

I would prefer to say that they may not be Ethereum killers individually, but a blockchain that includes all of their properties could be:

Why is "decomposition" the first principle of blockchain optimization?

  • Four layers: Breaking a monolithic blockchain down into four layers has proven its worth (I know we could divide a monolithic blockchain into even more layers, but four keeps things easy to follow; a rough sketch of the composition follows this list).
  • Celestia, designed around DAS (Data Availability Sampling), is the best choice for the data availability layer (in theory it is safer than the other options mentioned above).
  • Parallel computing is also valuable for further optimizing the transaction-ordering consensus layer, and is theoretically feasible there. Celestia remains the natural choice for transaction-ordering consensus (currently, having a single validator set handle both data availability and transaction ordering is the most efficient design).
  • Aptos and Sui are designed for optimal performance in global state consensus, with parallel computing and unique P2P messaging.
  • Developers can unlock more possibilities at the execution layer. And auto-scaling should be a good addition.
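
As a rough sketch of that composition (the layer names and interface below are my own illustration, not any project’s real API), the four layers can be modeled as interchangeable components:

```python
from dataclasses import dataclass

# Hypothetical four-layer composition. The field names and example values are
# illustrative only and do not correspond to any project's actual API.

@dataclass
class LayeredChain:
    data_availability: str    # e.g. a Celestia-style DAS layer
    ordering: str             # transaction-ordering consensus
    state_consensus: str      # global state consensus (parallel execution)
    execution: str            # app-specific execution, possibly auto-scaling

    def describe(self) -> str:
        return (f"DA={self.data_availability} | ordering={self.ordering} | "
                f"state={self.state_consensus} | execution={self.execution}")

# The hypothetical "best optimization" composed from decomposed parts.
candidate = LayeredChain(
    data_availability="DAS (Celestia-style)",
    ordering="shared with the DA validator set",
    state_consensus="parallel execution (Aptos/Sui-style)",
    execution="app-specific, auto-scaling (Altlayer-style)",
)
print(candidate.describe())
```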

How does Ethereum compare to this “best optimization”?

Ethereum is the safer choice, but it is less efficient than this ‘best optimization’. This is my personal opinion at the moment:

The core innovation of Aptos and Sui (parallel computing) is intertwined with the design of the global consensus layer, which means it does not fit naturally into Ethereum; Ethereum cannot take advantage of these innovations, at least not in the short term.

Will these innovations add too much complexity to the blockchain?

From Vitalik jokingly saying ‘we should de-shard’ at EthCC to @Polynya’s recent ‘4844 and Done’, there has been a lot of talk about ‘reducing blockchain complexity’.

My view is closer to ‘it is not yet time to simplify’: looking back at the history of the cloud computing industry, in its first 5-10 years cloud services became more and more useful but also more and more complicated. Only in recent years have we seen cloud services simplify, with serverless computing, IaC (Infrastructure as Code), and so on.

  • New technologies always follow a similar route: from “find a standard and meaningful framework” to “become useful but complex” to “simplify for easier use and lower cost”.
  • The premise of simplification is that things are good enough, but the current blockchain infrastructure is clearly not good enough. We are still very early in the game.

I still believe we will see more decomposition innovations during this bear market, and I will share my thoughts as I come across them.

Posted by: CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/why-is-decomposition-the-first-principle-of-blockchain-optimization/