Learn about Polygon’s new Avail, a scalable data availability layer

Polygon announces Avail and explains how it represents a whole new way of operating a blockchain.

“On June 30, Polygon, the Ethereum sidechain scaling solution, announced the launch of Avail, a general-purpose, scalable data availability solution. Avail provides a public data availability layer that can be used by different execution environments, such as standalone chains, sidechains, and off-chain scaling solutions. In the long run, it will support a wide variety of experiments and eventual implementations in terms of execution environments without requiring teams and projects to bootstrap security on their own. Chains created with the Polygon SDK, Cosmos SDK, or Substrate can benefit from Avail.”
We are very excited to announce Avail, an important part of a whole new way of how blockchains will work in the future. Avail is a general-purpose, scalable, data availability-focused blockchain for standalone chains, sidechains, and off-chain scaling solutions.

Avail provides a robust data availability layer built on well-understood mathematical primitives: it combines erasure coding for data availability checks with Kate (KZG) polynomial commitments to create a two-dimensional data availability scheme that avoids fraud proofs, does not require honest-majority assumptions, and does not rely on honest full nodes for confidence that data is available.

Avail provides a general-purpose data availability layer that can be used by different execution environments, such as standalone chains, sidechains, and off-chain scaling solutions. In the long run, it will support a variety of experiments and eventual implementations in terms of execution environments without teams and projects having to bootstrap their own security. Chains created with the Polygon SDK, Cosmos SDK, or Substrate can benefit from using Avail for this purpose.

Avail decouples transaction execution and validity from the consensus layer, so that consensus is only responsible for a) sequencing transactions and b) ensuring their data availability.

Main objectives
Enable independent chains or sidechains with arbitrary execution environments to bootstrap validator security by guaranteeing transaction data availability without having to create and manage their own validator sets

Enable Layer 2 solutions such as Validiums to achieve higher throughput by using Avail as an off-chain data availability layer

We have been quietly working on Avail since late 2020, and it is currently in the devnet phase; a testnet is under development. More detailed information about the problem, the architecture, and the solution, including references to the code base, can be found in the reference documentation.

Background
In today’s Ethereum-like ecosystems, there are three main types of nodes:

Validator nodes

Full nodes

Light clients

A block is appended to the blockchain by a validator node, which collects transactions from the mempool, executes them, and produces the block before propagating it through the network. The block contains a block header holding a summary of, and metadata about, the transactions included in the block. Full nodes throughout the network receive the block and verify its correctness by re-executing the transactions it contains. Light clients download only the block headers and fetch transaction details from neighboring full nodes as needed. The metadata in the block header allows a light client to verify the authenticity of the transaction details it receives.
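To make the last point concrete, a light client typically checks a transaction against the transactions root stored in the block header using a Merkle inclusion proof. The sketch below is a minimal illustration in Python; it ignores the details of Ethereum's actual Merkle-Patricia trie and RLP encoding, and the function names are ours rather than any client's API.

```python
import hashlib

def _hash(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_merkle_proof(leaf: bytes, siblings: list, root: bytes, index: int) -> bool:
    """Walk from the leaf up to the root, hashing with each sibling in turn.
    Bit i of `index` says whether the running node is a right (1) or left (0)
    child at level i of the tree."""
    current = _hash(leaf)
    for sibling in siblings:
        if index & 1:          # running node is the right child at this level
            current = _hash(sibling + current)
        else:                  # running node is the left child at this level
            current = _hash(current + sibling)
        index >>= 1
    return current == root
```

A light client that holds only headers can thus be convinced that a given transaction is included, but it has no way of knowing whether the rest of the block's data was ever published, which is exactly the gap described next.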

While this architecture is very secure and has been widely adopted, it has some serious practical limitations. Because light clients do not download entire blocks, they can be tricked into accepting blocks whose underlying data is unavailable. A block producer could include a malicious transaction in a block without revealing the block's full contents to the network. This is known as the data availability problem, and it poses a serious threat to light clients. To make matters worse, data unavailability is a non-attributable fault, which prevents us from building a fraud-proof mechanism through which full nodes could convincingly notify light clients of the missing data.

[Figure: Existing blockchain architecture vs. Polygon Avail]

In contrast, Avail takes a different approach to this problem: instead of verifying application state, it focuses on ensuring the availability of published transaction data and on ordering transactions. A block that reaches consensus is considered valid only if the data behind it is available. This prevents block producers from releasing block headers without releasing the data behind them, which would otherwise stop clients from reading the transactions they need to compute their application state.

Avail reduces the problem of block verification to data availability verification, which can be done efficiently at near-constant cost using data availability checks. Data availability checking relies on erasure codes, which are widely used in data-redundancy designs.

Data availability checking requires each light client to sample only a very small number of random chunks from each block in the chain. A group of light clients can collectively sample the entire blockchain in this way. A good mental model is a peer-to-peer file-sharing system like BitTorrent, where each node typically stores only certain parts of a file.
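To make the sampling argument concrete, here is a small back-of-the-envelope calculation (our own illustration, not Avail code). If a producer withholds a fraction of the erasure-coded block, the chance that s uniformly random samples all miss the withheld portion falls exponentially in s; in a two-dimensional scheme, hiding even a single original cell typically forces the producer to withhold on the order of 25% of the extended matrix.

```python
def prob_withholding_undetected(hidden_fraction: float, samples: int) -> float:
    """Probability that `samples` independent uniform queries all land in the
    published portion when `hidden_fraction` of the block is withheld."""
    return (1.0 - hidden_fraction) ** samples

# With roughly 25% of the extended block withheld, a handful of samples per
# client is enough to catch the attempt with high probability:
for s in (8, 16, 30):
    print(s, round(prob_withholding_undetected(0.25, s), 5))
# 8  -> ~0.10011
# 16 -> ~0.01002
# 30 -> ~0.00018
```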

Note that these techniques will be used heavily in systems like Ethereum 2.0 and Celestia (formerly known as LazyLedger).

This also leads to an interesting result: the more non-consensus nodes there are in the network, the larger the block size (and throughput) you can safely have, because more sampling clients collectively cover more of each block. This is a useful property, because it means that non-consensus nodes can also contribute to the throughput and security of the network.

KZG Commitment-Based Solutions
The KZG commitment-based scheme used by Avail has three main features:

Data redundancy, which makes it difficult for the block producer to hide any part of the block.

Guarantees of correct erasure coding without fraud proofs.

Vector commitments that allow full nodes to convince light clients of transaction inclusion with succinct proofs.

In simple terms, all the data in a block is arranged into a two-dimensional matrix. Data redundancy is introduced by erasure coding each column of the matrix, doubling the size of the original column. Kate (KZG) commitments are used to commit to each row, and the commitments are included in the block header. This scheme easily catches data-hiding attempts, because any light client with access only to the block header can query random cells of the matrix and obtain short proofs that can be checked against the block header (thanks to the Kate commitments). Data redundancy forces a block producer to hide large portions of the block even if it only wants to hide individual transactions, making the attempt easy to catch by random sampling. We avoid the need for fraud proofs because the binding property of Kate commitments makes it computationally infeasible for a block producer to construct false commitments without being caught. Moreover, the commitments of the extended rows can be computed using the homomorphic property of the KZG commitment scheme.
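For readers who want a little more detail on that last sentence, the homomorphic property can be stated compactly. The notation below is standard KZG algebra under the usual trusted setup, not Avail-specific notation; we include it only to show why committing and erasure-extending commute.

```latex
% Commitment to a polynomial \phi(X) = \sum_k \phi_k X^k under a setup \{[s^k]_1\}:
C(\phi) \;=\; \sum_k \phi_k \,[s^k]_1 \;=\; [\phi(s)]_1
% The map \phi \mapsto C(\phi) is linear in the coefficients. Since each
% erasure-extended row is a fixed linear combination of the original rows,
\mathrm{row}'_i \;=\; \sum_j \lambda_{ij}\,\mathrm{row}_j
% (the \lambda_{ij} are interpolation coefficients), its commitment follows
% directly from the commitments already in the block header:
C'_i \;=\; \sum_j \lambda_{ij}\, C_j
```

So a light client never needs the extended rows' commitments to be supplied separately; it can derive them itself from the header.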

[Figure: KZG commitment scheme]

Although we have described the main features of the Avail construction here, there are other features, such as partial data retrieval and collaborative availability guarantees. We have omitted the details here and will revisit them in a subsequent article.

Now might be a good time to walk through a real-world use case. Suppose a new application wants to run an application-specific standalone chain. It starts a new proof-of-stake chain using the Polygon SDK, or any similar framework such as the Cosmos SDK or Substrate, and embeds its business logic in it. However, it faces the bootstrapping problem of obtaining sufficient security through validator stake.

To avoid this, it uses Avail for transaction ordering and data availability. Application users submit transactions to the Polygon SDK chain, and those transactions are automatically forwarded to Avail, where they are ordered and their data is kept available. The ordered transactions are picked up by one or more operators, who construct the final application state according to the business logic. Application users can rest assured that the ordered data is available and that they can rebuild the application state themselves at any time, so the chain inherits the strong security guarantees provided by Avail.
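To make this division of labour concrete, here is a highly simplified sketch. Every name in it (submit_to_avail, fetch_ordered_batch, and so on) is hypothetical and only illustrates the separation between ordering/availability and execution; it is not an actual Avail or Polygon SDK API.

```python
# Hypothetical illustration of the workflow described above.
# None of these functions correspond to a real Avail / Polygon SDK API.

def submit_transaction(tx: bytes, avail_client) -> None:
    # 1. Users (or the app chain) post raw transaction data to Avail,
    #    which only orders it and guarantees its availability.
    avail_client.submit_to_avail(tx)

def run_operator(avail_client, state, business_logic):
    # 2. One or more operators read the ordered data back out...
    for tx in avail_client.fetch_ordered_batch():
        # 3. ...and execute it off-chain according to the application's
        #    own business logic to derive the next application state.
        state = business_logic.apply(state, tx)
    # 4. Anyone can repeat steps 2-3 at any time, because the ordered
    #    data is guaranteed to remain retrievable from Avail.
    return state
```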

While the example above discusses a new standalone chain using Avail for security, the platform is generic and can be used by any existing chain to ensure data availability. In the next section, we briefly describe how Avail can help existing rollups scale Ethereum.

A note on data availability for off-chain scaling solutions for Ethereum
Various Ethereum Layer 2 solutions have been proposed, such as Optimistic Rollups, ZK Rollups, and Validiums, which move execution off-chain while keeping validity checks and data availability on-chain. While an architecture based on off-chain execution improves throughput, it is still limited by the amount of data that a main chain like Ethereum can handle. This is because, while execution happens off-chain, verification or dispute resolution happens strictly on-chain. Transaction data is submitted as calldata on Ethereum to ensure that the data is available for future reconstruction. This is extremely important.

In the case of Optimistic Rollups, an operator may submit an invalid assertion and then withhold some of the block data from the rest of the network. The other full nodes in the system then cannot verify whether the submitted assertion is correct and, lacking the data, cannot generate a fraud proof or challenge to show that the assertion is indeed invalid.

In the case of zero-knowledge-based rollups, the soundness of the proofs ensures that accepted transactions are valid. However, even with that assurance, withholding the data behind the transactions can have serious side effects.

This can result in other validators being unable to compute the current state of the system, and in users being locked out of the system with their balances frozen, because they lack the information (the witness) needed to access those balances.

We recognize that, in order to achieve higher throughput, we need not only to move execution off-chain, but also a scalable data availability layer that guarantees the data remains available.

Such a blockchain design needs to address the following components; a schematic sketch follows the list.

Data hosting and ordering: this component receives transaction data and orders it without executing anything. It then stores the data and guarantees full data availability in a decentralized manner. This is the role Avail plays.

Execution: the execution component takes the ordered transactions from Avail and executes them. It produces a checkpoint/assertion/proof and submits it to the verification layer. We call this the execution layer.

Verification/Dispute resolution: this component represents the main chain to which the system anchors. The security of the design depends on the robustness and security properties of this chain. Checkpoints/assertions/proofs submitted by the execution layer are processed here to ensure that only valid state transitions are accepted (provided the data is available). We refer to this component as the verification layer.
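The three components above can be summarised as three narrow interfaces. The sketch below is our own schematic, with illustrative (not real) method names: Avail plays the first role, application operators the second, and a main chain such as Ethereum the third.

```python
from typing import Iterable, Protocol

class DataAvailabilityLayer(Protocol):          # role played by Avail
    def submit(self, data: bytes) -> int: ...   # store + order, return position
    def read(self, start: int, end: int) -> Iterable[bytes]: ...

class ExecutionLayer(Protocol):                 # role played by app operators
    # consume ordered data, return a checkpoint/assertion/proof
    def execute(self, ordered_data: Iterable[bytes]) -> bytes: ...

class VerificationLayer(Protocol):              # role played by the main chain
    # accept only valid state transitions (given the data is available)
    def verify(self, checkpoint: bytes) -> bool: ...
```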

Posted by: CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/learn-about-polygons-new-avail-a-scalable-data-availability-layer/