Focusing on the data availability layer to understand Celestia, the new public chain

You may have read that many blockchains are studying the evolution from monolithic to modular designs. You may not have heard of Celestia (formerly LazyLedger), the first blockchain designed with a modular architecture. Celestia is one of the most exciting projects in the entire crypto space, and its upcoming mainnet launch could be a milestone in reshaping the construction of blockchains as we know them today.

Celestia is a simple proof-of-stake blockchain that provides a pluggable data availability and consensus layer. It orders data and makes it available, but it doesn’t execute transactions. Celestia is optimized as a shared security layer for dedicated execution environments such as rollups. While Celestia will support all types of rollups, it is initially focused on the EVM and the Cosmos SDK. Celestia itself is built on top of the Cosmos SDK and uses Tendermint as its consensus engine. The key team members behind it each have an incredibly impressive track record in the field:

  • Mustafa Al-Bassam – CEO and Co-founder
  • Ismail Khoffi – CTO and Co-founder
  • John Adler – CRO and Co-founder
  • Nick White – Chief Operating Officer

Modular and integrated blockchain design

Many researchers have delved into this topic (Polynya in particular has a lot of posts), so I’ll keep it short here. Fundamentally, when you break down a blockchain into its core components, they do three things:

1. Execution – This is the computation required to update the chain. Get the current state, add a bunch of new transactions, and transition to the new state.

2. Consensus/Settlement – This provides security and agreement for transactions and their ordering.

3. Data Availability – You need to ensure that the transaction data behind the block header is published and available so that anyone can easily calculate the state and check the state transitions.

Looking at the current major blockchains, you have an integrated (monolithic) approach that bundles these three core components together. Split them into specialized chains and you have a modular approach. Modular design is well documented in Ethereum’s current scaling roadmap, and it’s something the Celestia team has been working on for years. Celestia upends the current model by decoupling execution from data availability and consensus, leaving execution to specialized environments such as rollups. These rollups can then turn around, publish their arbitrary data to Celestia, and rely on it for data availability and consensus.


Blockspace is currently one of the most in-demand commodities in the world, and traditional blockchains such as Ethereum are at a scaling breaking point. The core of the problem boils down to how an integrated blockchain handles transactions. Currently, for a consensus node to validate a new block, it must first check that the block follows the consensus rules (e.g., in PoW under Nakamoto consensus, is this the valid chain with the most accumulated work?). Nodes must also download and execute all transactions to ensure blocks are valid, which is computationally demanding. Doing all of this together simply doesn’t scale.

Celestia nodes are different – they don’t worry about execution at all. Nodes in Celestia only need to check whether the data behind a transaction has been published; they don’t even have to care whether it is correct. They simply order transactions and verify that the published data is available – a far more scalable task.

Data Availability Issues


To properly analyze Celestia, we must first understand the data availability problem blockchains face and why it is so important. The core question is: when a new block is produced, how do nodes determine that all the data behind that block has been published to the network? Without this data, users would be unable to detect whether a block contains invalid transactions.

How Blockchain Nodes Work

There are two types of node participants in the blockchain:

1. Full Nodes (aka fully validating nodes) – Full nodes download and verify all transactions. They are resource intensive, but also more secure. For example, in a 51% attack, full nodes cannot be fooled into accepting a chain containing a double spend – they would simply reject it as invalid.

2. Light Clients (i.e., Simplified Payment Verification, or SPV, clients) – Light clients are not fully validating nodes, so they are easier to run but less secure. Instead of checking all underlying transactions, they only validate block headers. They rely on a majority assumption – they assume that the majority of consensus is honest and that the chain preferred by the consensus algorithm contains only valid blocks. This makes them vulnerable to 51% attacks that lead to double spends.
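The difference between the two node types can be illustrated with a minimal sketch. The `Header` structure and hashing scheme here are simplified assumptions, not any real client’s format; the point is only that a light client checks that headers link together and never looks at the transactions behind them:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Header:
    prev_hash: bytes   # hash of the previous block header
    tx_root: bytes     # Merkle root committing to the block's transactions
    nonce: int

    def hash(self) -> bytes:
        return hashlib.sha256(
            self.prev_hash + self.tx_root + self.nonce.to_bytes(8, "big")
        ).digest()

def light_client_verify(headers: list[Header]) -> bool:
    """A light client only checks that headers form an unbroken chain.

    It never downloads or executes transactions, so it must trust
    that the majority consensus produced valid blocks.
    """
    for prev, cur in zip(headers, headers[1:]):
        if cur.prev_hash != prev.hash():
            return False
    return True
```

A full node would additionally download the transactions behind each `tx_root` and re-execute them – exactly the work light clients skip.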

This raises an important question – how do we get light clients to reject invalid blocks so they don’t have to trust miners? The answer lies in fraud proofs, which are small proofs that a particular transaction is invalid.

Fraud proofs

Fraud proofs and data availability proofs were formalized in 2018 by Mustafa Al-Bassam with his co-authors Vitalik Buterin and Alberto Sonnino. Their paper describes some of the key components for securely scaling a modular blockchain stack. Using these techniques, light clients can rely on full nodes to find invalid transactions and send them a succinct fraud proof whenever one is detected. This is cheap to do, since a fraud proof essentially consists of the relevant transaction itself, a pre-state root, a post-state root, and a witness for that transaction. A full node can send this to a light client, which can easily recompute that particular transaction and detect whether it is invalid – without knowing the state of the entire blockchain.
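As a rough illustration of that structure, here is a toy fraud proof check. The flat-dictionary state and the `state_root` stand-in are illustrative assumptions; a real system commits to state with a Merkle tree and the witness would be Merkle branches for the touched accounts:

```python
import hashlib
from dataclasses import dataclass

def state_root(state: dict) -> bytes:
    # Stand-in for a Merkle root over the state.
    return hashlib.sha256(repr(sorted(state.items())).encode()).digest()

def apply_tx(state: dict, tx: tuple) -> dict:
    # A toy transfer: (sender, receiver, amount).
    sender, receiver, amount = tx
    new = dict(state)
    new[sender] -= amount
    new[receiver] = new.get(receiver, 0) + amount
    return new

@dataclass
class FraudProof:
    pre_state: dict           # the witness: state touched by the transaction
    tx: tuple                 # the disputed transaction
    claimed_post_root: bytes  # post-state root the block producer committed to

def check_fraud_proof(proof: FraudProof) -> bool:
    """Re-execute the single transaction; the proof succeeds (the block
    is fraudulent) if the recomputed root differs from the committed one."""
    actual = state_root(apply_tx(proof.pre_state, proof.tx))
    return actual != proof.claimed_post_root
```

Note that the light client only re-executes one transaction against a small witness, not the whole chain.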


Data Availability Sampling (DAS)

This is where data availability issues come into play. In order for full nodes to generate fraud proofs, all underlying data needs to have been published. If the data is not available, then no one can recompute the state or prove malicious activity. What we really need, then, is a way for light clients to check if miners have posted transaction data to the chain when checking block headers. As long as this is published and available to full nodes, then they will be able to generate fraud proofs. Enter proof of data availability.

The key to Data Availability Sampling (DAS) is that block producers use erasure coding to extend a block into many chunks; light clients can then randomly sample a small portion of those chunks and, in the process, verify with statistical certainty that the entire block has been published.
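The statistical argument can be sketched in a few lines. Assume each random sample independently hits withheld data with probability `p_withheld`; with 2D Reed-Solomon encoding, making a block unrecoverable forces the producer to withhold on the order of a quarter of the extended shares, hence the 0.25 default below (the exact fraction depends on the scheme). Confidence then grows exponentially with the number of samples:

```python
def das_confidence(samples: int, p_withheld: float = 0.25) -> float:
    """Probability that at least one of `samples` random samples hits
    withheld data, assuming each sample independently does so with
    probability p_withheld.

    If no sample hits withheld data, either the block is fully
    available or we got unlucky `samples` times in a row.
    """
    return 1 - (1 - p_withheld) ** samples

# A handful of samples already gives high confidence:
for s in (10, 20, 30):
    print(s, round(das_confidence(s), 6))
```

This is why each light client only needs to fetch a few random chunks: the failure probability shrinks geometrically with every extra sample.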


Security assumptions

Using Data Availability Sampling (DAS) allows light clients to verify that all data in a block is actually downloadable, so fully validating nodes will be able to generate fraud proofs in the event of any invalid transaction. Combining these techniques, we are able to rely on weaker security assumptions. Now let’s review these three cases:

1. Full nodes – still the safest scenario, full nodes cannot be tricked into accepting invalid blocks.

2. Standard light clients – since they don’t validate blocks, they assume majority consensus is honest.

3. Light clients + fraud proofs – We can now replace the honest-majority assumption for state validity with a weaker honest-minority assumption. You only need a minimum number of light clients making enough sample requests that, together, they can rebuild the entire block.

The combination of fraud proofs and probabilistic data sampling is at the heart of enabling on-chain blockchain scaling (e.g., through sharding or block size increases) while maintaining strong guarantees of data availability and validity. Celestia’s roadmap contrasts sharply with Ethereum’s:

  • Celestia will launch with DAS and has no plans for sharding
  • Ethereum’s roadmap achieves sharding before DAS

Ethereum plans to use random sampling for sharding (randomly shuffling which validators from the validator list validate which blocks), but will not have DAS until a few years later. As Vitalik himself points out, “sharding by random sampling has weaker trust properties than the form of sharding we’ve built into the Ethereum ecosystem, but it uses simpler techniques.” The fact that Ethereum’s current roadmap has sharding long before its implementation of DAS is a subtle but important point, as sharding without DAS is less secure.

Blockchain scaling and Celestia scaling methods

Blockchains typically limit capacity based on the resource requirements of an end user’s full node. Bitcoin, for example, has a theoretical maximum block size of 4 megabytes, set deliberately low so that any ordinary user can spin up a node and validate the chain on ordinary hardware. Ethereum has a similar goal of letting regular users validate the chain, albeit with slightly higher resource requirements than Bitcoin. This ability for anyone to check the chain for themselves is crucial to the concept of self-sovereignty: you don’t have to trust any third party to verify the network. It also basically limits the network’s TPS for a given set of hardware requirements and cost of running a full node.

One notable exception can be seen in Solana – a prime example of a monolithic chain that seeks to scale without modularity. Solana’s scaling largely boils down to leveraging Moore’s Law to bet that hardware costs will continue to drop and that the network will continue to increase its hardware requirements, thereby increasing throughput. The upshot of this is that Solana’s capacity should always be greater than demand, and there’s no need for a fee market to emerge for block space. Therefore, transaction costs can be kept very low, just enough to prevent spam attacks.

The Celestia roadmap is very consistent with the idea that an average user with minimal hardware should be able to verify the chain themselves, so the team also intends to scale by making verification easier (rather than by raising hardware assumptions). Therefore, Celestia cannot guarantee that capacity will always exceed demand: there will be limits, and there will be a fee market. What Celestia offers is greater capacity than other contemporary blockchain designs, which in turn yields far greater scalability and lower fees. It can do this because it is designed so that verifying the chain is computationally easy (with no execution to worry about).

The key to Celestia’s scaling is that verifying the chain requires work that is sub-linear in the block size. More specifically, clients only need to download on the order of the square root of the amount of data they are checking. For example, say you perform DAS on a block of 10,000 chunks: you only need to download and check about 100 of them. You have now moved from a model where nodes must download and execute every transaction in a block to one where they only download and check availability for roughly the square root of the block’s data.
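A back-of-the-envelope comparison makes the gap concrete. The cost model and the 30-sample constant below are illustrative assumptions, not Celestia’s actual parameters:

```python
import math

def full_node_cost(chunks: int) -> int:
    # A full node downloads (and executes) every chunk.
    return chunks

def das_client_cost(chunks: int, samples: int = 30) -> int:
    """Sub-linear cost: square-root-sized commitments (e.g. the 2D
    scheme's row/column roots) plus a constant number of random
    samples. The sample count is independent of block size."""
    return math.isqrt(chunks) + samples

# The article's example: a 10,000-chunk block needs ~100 downloads.
print(das_client_cost(10_000, samples=0))  # 100
```

Quadruple the block and the DAS client’s cost merely doubles, while the full node’s cost quadruples – that is the sub-linear scaling the paragraph above describes.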

Making verification this simple is key to scaling: since the number of chunks you need to sample is independent of block size, the cost of checking a block is roughly constant no matter how big the block is. This lets you increase the block (or shard) size, and thus TPS, without increasing the end user’s cost of validating the chain. However, the larger the block, the more sampling users you need in the network to ensure that, collectively, everything in the block has been sampled. So the limit to safely hosting more data is having more nodes. You’ve now created a blockchain that scales with the number of users (light nodes) and made those nodes incredibly easy to run. As more nodes join the network, the block size can be safely increased without sacrificing security or decentralization. Increasing the block size on a traditional blockchain, by contrast, raises the hardware requirements for verification at the expense of decentralization and security. Rollups on top of Celestia rely on it for data availability, so improving data availability at the base layer translates directly into added scale within their own execution environments. This is how Celestia provides massive scalability.

In fact, we’ve seen similar ideas in practice. BitTorrent is a communication protocol for peer-to-peer file sharing that lets users distribute data and electronic files over the Internet. It has been one of the most scalable decentralized protocols in the world, at one point even handling more than a quarter of all internet traffic. It is scalable for much the same reason Celestia is designed to be: peers don’t execute anything, they simply share storage and distribution, with each participant contributing and storing only a small portion of the network’s data. The more users in the network, the more data it can store and distribute, scaling directly with the user base.

Rollups

Now that we have a way to create a secure base layer that provides data availability, we have a viable home on which to build rollups. A rollup is a blockchain in its own right, with its own block producers, that can be optimized as an execution environment. It can then rely on a base layer like Celestia for data availability – a place to dump its transaction data. Let’s take a quick look at the two main kinds of rollup and why they need data availability and consensus:

1. Optimistic rollups – An aggregator or sequencer for an optimistic rollup first collects transactions into a rollup block. In the case of an Ethereum rollup, the aggregator then sends the block to a smart contract at the base layer, while also posting a bond. These rollups are optimistic because their blocks are assumed valid (innocent until proven guilty). An invalid transaction can be proven invalid using the fraud proofs described above. After a block is published, there is a challenge period during which anyone can challenge the block by submitting a fraud proof. If the challenge succeeds, the aggregator’s bond is slashed and the block is rolled back. If the period ends without a challenge, the block is finalized. As mentioned earlier, submitting these fraud proofs requires data availability.

2. Zero-knowledge (ZK) rollups – ZK rollups work the other way around, requiring a cryptographic proof, called a validity proof, to show that the published block is valid (guilty until proven innocent). While the validity proof itself does not require data availability, data availability is still required for chain security. If ZK rollup block producers create blocks without publishing the data, users cannot recreate the state. For example, imagine the block producers of a ZK rollup on top of Ethereum start censoring transactions. If the data is available on the main chain, users of the rollup can recreate the state, prove their account balances, and forcibly exit the rollup to the main chain. Alternatively, other sequencers can step in, recreate the state, and start producing blocks.
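The optimistic lifecycle described in point 1 – publish with a bond, wait out a challenge window, then slash or finalize – can be sketched as a toy state machine. All names and the 100-block window are illustrative assumptions, not any real contract’s interface:

```python
from dataclasses import dataclass

CHALLENGE_PERIOD = 100  # blocks; illustrative value

@dataclass
class RollupBlock:
    data: bytes
    bond: int
    published_at: int
    challenged: bool = False

class OptimisticRollupContract:
    """Toy model of the base-layer contract's lifecycle:
    publish with bond -> challenge window -> slash or finalize."""

    def __init__(self):
        self.blocks: list[RollupBlock] = []

    def publish(self, data: bytes, bond: int, now: int) -> int:
        self.blocks.append(RollupBlock(data, bond, now))
        return len(self.blocks) - 1

    def challenge(self, idx: int, fraud_proof_valid: bool, now: int) -> bool:
        blk = self.blocks[idx]
        in_window = now - blk.published_at <= CHALLENGE_PERIOD
        if in_window and fraud_proof_valid:
            blk.challenged = True  # block rolled back
            blk.bond = 0           # aggregator's bond slashed
            return True
        return False

    def is_final(self, idx: int, now: int) -> bool:
        blk = self.blocks[idx]
        return (not blk.challenged) and now - blk.published_at > CHALLENGE_PERIOD
```

Note how the contract never executes rollup transactions itself; it only arbitrates fraud proofs, which is why published data must be available for challengers.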

So, what exactly will a Rollup on Celestia look like?

Currently, Celestia plans to provide a foundation for rollups in two main ways:

1. “Celestia-native rollups” – These rely solely on Celestia for data availability and consensus. This was the original vision: an ecosystem of rollups with client-side execution, built on top of Celestia. It remains the main long-term goal and may be the more scalable option.

2. “Ethereum-native rollups” – These are rollups that currently live on top of the Ethereum main chain but also rely on off-chain data availability solutions (or, more precisely, “Volitions” in this case, which we’ll introduce shortly). Data availability on Ethereum is still very expensive, so a hybrid rollup solution leveraging both Ethereum and Celestia may make sense in the short and medium term.

Celestia-native rollups

The main vision of Celestia is very simple at a high level – to provide a pluggable data availability and consensus layer on which to run rollups. The main difference between a rollup on Celestia and a rollup on Ethereum is that Celestia has no execution environment. This functionally affects applications that use validity proofs, such as ZK rollups, in the following ways:

  • Ethereum model – The ZK rollup publishes validity proofs to Ethereum, and a smart contract on Ethereum verifies them.
  • Celestia model – The ZK rollup likewise publishes its data and validity proofs to Celestia, but the proofs must be verified locally, since Celestia itself has no execution environment to do so. Celestia stores the data and validity proofs but outsources their verification to the rollup’s execution environment.

In Celestia, there are no two-way bridges between the base layer and the rollup layer, and neither chain runs a client of the other: Celestia is essentially agnostic to all of these rollups and doesn’t understand the meaning of the data coming from them. In the case of a Celestia-native rollup, the rollup’s sequencer may run a client of the Celestia main chain. It will follow Celestia’s blocks, submit transactions containing the rollup’s data, and pay a fee to have that data included.
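Conceptually, the sequencer’s entire interaction with Celestia reduces to “pay to publish bytes”. A hypothetical client sketch (the `pay_for_data` name, the namespace field, and the fee handling are all assumptions for illustration, not Celestia’s actual API):

```python
from dataclasses import dataclass

@dataclass
class DataTx:
    namespace: bytes  # application namespace the blob is filed under
    blob: bytes       # arbitrary rollup block data; Celestia never interprets it
    fee: int          # fee paid for inclusion and ordering

class CelestiaClient:
    """Hypothetical client sketch: the base layer only orders bytes
    and makes them available; it never executes them."""

    def __init__(self):
        self.mempool: list[DataTx] = []

    def pay_for_data(self, namespace: bytes, blob: bytes, fee: int) -> DataTx:
        tx = DataTx(namespace, blob, fee)
        self.mempool.append(tx)
        return tx

# A rollup sequencer posting a serialized rollup block:
client = CelestiaClient()
client.pay_for_data(namespace=b"rollup01", blob=b"<serialized rollup block>", fee=10)
```

Everything else – executing the blob, verifying proofs, tracking rollup state – happens in the rollup’s own execution environment, not on Celestia.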

Ethereum-native rollups

While a hybrid solution working with Ethereum was not the original plan, there is a clear product-market fit here, so Celestia has been discussing alternative solutions with various Ethereum ZK rollups. First, let’s explain Ethereum’s current data availability bottleneck. At a high level, the Ethereum main chain still faces scaling challenges, and even with rollups on top, fees can rise significantly. This is because Ethereum is still far from the data sharding and DAS that would optimize it as a data availability layer. As a result, other off-chain data availability solutions have emerged.


Celestia gets really interesting in the case of Volitions. Volitions, pioneered by StarkWare, are systems in which users can choose to operate in either of two modes:

  • ZK-rollup model – inherits all the security of Ethereum, relying on it for settlement and data availability.
  • Validium Mode – A weaker security mode where Ethereum remains the underlying settlement layer, but data availability is placed off-chain.

Volitions give users a great deal of freedom, letting them trade off higher security (rollup mode) against lower cost (Validium mode) on a per-transaction basis while maintaining full composability, because both modes share the same state. StarkNet will offer its Validium solution, and zkSync 2.0 will launch a similar solution, zkPorter, as its release roadmap continues this year. Current Validiums, like StarkWare’s permissioned scaling engine StarkEx, rely on closed committees of known, trusted parties such as ConsenSys and Nethermind to attest that data is available off-chain. Such a permissioned setup is far from ideal: all users of the Validium are at the mercy of a central committee that can freeze state and withhold data. Going forward, StarkWare and Matter Labs will launch their permissionless rollups (StarkNet and zkSync 2.0, respectively). zkSync will also include zkPorter, its Validium solution, which likewise moves data availability off-chain, secured by a proof-of-stake system of zkSync token stakers called Guardians.


If their data availability committees hold up, these Validiums can still provide greater security than sidechains or alternative layer 1 blockchains. In the worst case, where malicious actors control the sequencer and over 2/3 of the total stake, the worst they can do is sign valid state transitions while withholding the data, effectively freezing the state. Users in full rollup mode are protected from such attacks because their data is available on the Ethereum main chain, so they can always recreate the state, prove their account balances, and force an exit to Ethereum layer 1.

That’s where Celestia comes in. If Ethereum’s main-chain data availability is too costly, a ZK rollup today must either set up a permissioned data availability committee (as in StarkEx now) or bootstrap a new validator set (zkPorter’s approach). Rather than going through that process and fragmenting security across the various ZK rollups, Validiums could optionally plug into Celestia for off-chain data availability while still using Ethereum for settlement.

In practice, the Celestia validator set can issue signatures to Ethereum attesting that the data for a given Ethereum-native Validium is actually available on Celestia. Celestia blocks are organized using a so-called namespaced Merkle tree (more on this later), allowing the data specific to a given Validium to be attested to in an Ethereum smart contract. Rollup clients can then read these attestations on Ethereum and know the data is available for them to recompute the state.

In this hybrid scenario, Celestia will continue to undercut Ethereum on data availability cost and scalability because of its different structure. On Ethereum, publishing data will continue to compete with the state execution of a huge number of smart contracts, and Ethereum still has a long way to go (data sharding, and ultimately DAS) before it is a well-suited data availability layer. Celestia adds some cost for publishing the attestation to Ethereum, but this can be optimized, for example by batching attestations across different rollups. What’s more, you are only publishing a signature and Merkle root from Celestia to Ethereum, which is much cheaper than publishing the full transaction data to Ethereum.

It should be noted that the reason Celestia can only serve Ethereum-native Validiums, rather than true rollups, is that Ethereum does not currently support off-chain data availability proofs. The scenario above, where the rollup client only verifies Celestia’s signatures to establish data availability, does fit a Validium’s threat model. But to have the security assumptions of a full rollup, you want to inherit the full security assumptions of the base layer (i.e., have Ethereum itself verify data availability by supporting off-chain data availability proofs), rather than trusting an off-chain committee.

So in this Validium example relying on both Ethereum and Celestia, note that it does introduce additional security assumptions compared to full rollup mode. However, it should still be more secure than rollups that rely entirely on a less secure layer 1, or than the many Validiums relying on their own weaker data availability committees (and cheaper and more scalable than rollups relying on Ethereum alone).

A complete modular stack – leveraging Cevmos and recursive rollups

Celestia is currently working with the Evmos team to build Cevmos (Celestia/EVMos/Cosmos), an incredibly exciting, fully modular stack for hosting EVM-based rollups.

For context, Evmos is an application-agnostic chain that is interoperable with the Ethereum mainnet, EVM-compatible environments, and other BFT chains via IBC. Evmos aims to be the EVM hub for Cosmos, making it easy to deploy smart contracts and communicate within the Cosmos ecosystem.

At the core of Cevmos will be an optimized settlement layer based on the Cosmos SDK that runs a constrained EVM. It will be based on Evmos and built to host EVM recursive rollups (rollups within rollups) on top of it. This settlement layer is itself an EVM rollup running on top of Celestia, so we can call it a “settlement rollup”. The Cevmos settlement rollup will be built using Optimint (Optimistic Tendermint) instead of the Tendermint Core consensus engine used by existing Cosmos chains. Optimint is an alternative to Tendermint BFT that lets developers deploy new chains that use an existing consensus and data availability layer such as Celestia.

Essentially, any settlement layer built for a rollup is a chain with a trust-minimizing two-way bridge to the rollup, using some sort of dispute resolution contract on top of the settlement layer. This allows tokens to be transferred between the two, or routed from one rollup to another through the settlement layer, in either direction in a trust-minimized manner.

The current problem is that the Ethereum main chain is not optimized solely for rollup settlement, so rollups must always compete with other applications, which is expensive and doesn’t scale. By contrast, a Cevmos settlement rollup will be more restricted, allowing only:

  • Rollup smart contracts – handling the validity proof verification and dispute resolution required to host ZK and optimistic rollups on top
  • Simple transfers between rollups

Since the Cevmos settlement rollup will be fully EVM-equivalent, you will be able to easily port and run your favorite EVM rollups (Fuel, Optimism, Arbitrum, StarkNet, etc.) on it.


To recap, a full Cevmos stack might include:

1. Celestia – Provides data availability at the bottom.

2. Cevmos settlement rollup – This Evmos-based chain will sit on top of Celestia, optimized solely as a settlement layer for EVM-based rollups.

3. EVM-based rollups – Handle execution; potentially many execution rollups will sit at the top of the stack.

(Figure: comparison with other products.)

Quantum Gravity Bridge

Aside from having a really cool name, the Quantum Gravity Bridge is one of the more interesting developments Celestia is working on. The bridge will be a relay from Celestia to EVM-compatible chains (e.g., Ethereum, Avalanche, BSC). It will forward proofs that data on Celestia is available to the EVM-compatible chain. This is intended for Volitions built on an EVM-compatible chain that aren’t quite ready to fully deploy their code on Cevmos. They can still benefit from Celestia’s scalable data availability in this hybrid system:

1. Celestia provides data availability, relayed via the bridge

2. An EVM chain for settlement (replacing the Cevmos settlement rollup)

3. EVM-native rollups for execution

The bridge relays to the EVM-compatible chain a proof that the data is actually available on Celestia, and you can then continue to use the EVM-compatible chain for settlement.

Execution environment for Celestia Rollups

Although Celestia itself is built on the Cosmos SDK, the beauty of it is that rollups built on top retain the ability to choose any execution environment they want. In fact, building rollups with the Cosmos SDK is quite difficult at the moment, because it is hard to make their state transitions fraud-provable. The Cosmos SDK is not a narrowly specified, well-defined execution environment like the EVM, so a single transaction may touch the entire state, making it difficult to produce a fraud proof for a given transaction that a light client can check. (You’ll notice the Cevmos settlement rollup described earlier is an exception; it should work because its environment is much more restricted.)

Therefore, Celestia is actually considering environments other than the Cosmos SDK for its default execution environment. One under study is Arbitrum’s VM, a more restricted and well-defined execution environment that uses an interactive verification game instead of general state fraud proofs. In the short term, this may be more viable than making the Cosmos SDK fraud-provable. That remains an end goal, but the plan will likely start with the most readily available options and continue to add new execution environments over time. Adding a new execution environment requires two main things:

1. Current execution environments are often combined with consensus, so you need to decouple them and replace the consensus part with the ability to just dump data onto Celestia.

2. Validity proofs or some kind of state fraud proof (the two options mentioned earlier: an interactive verification game or general state fraud proofs).

Sovereignty of rollups on Celestia

One of the key visions of a modular stack is to give developers more flexibility to optimize what they want, and to give application users more say in outcomes. Celestia provides a perfectly neutral and flexible base layer for this vision. If you run a rollup on today’s world-computer model (such as Ethereum), you deploy a smart contract bridge on the base chain and therefore follow the rules of layer 1. The rollup’s logic and consensus cannot be easily upgraded without on-chain voting, so there is no option to fork.

With a rollup on Celestia, you can have a local bridge where the logic for fraud or ZK proofs runs, and is adjudicated, client-side. This can be upgraded without affecting the data availability layer. This is why Celestia rollups are more sovereign – they can hard fork easily and permissionlessly. A similar advantage exists in systems like Cosmos (which is not a world-computer model), where individual zones can rely on their own governance to hard fork without every other zone having to fork at the same time. The problem there is that the zones are a bit too segregated, because they spread out security. Now, what if you could give zones that sovereignty but with Celestia’s shared security as a common base layer, while interacting via IBC?

These systems are essentially a form of social consensus, and Celestia returns that power to the chains deployed on top of it. Suppose a situation like the DAO hack arose again: the decision to hard fork had to be made by the entire Ethereum base chain. Now imagine a world of application-specific rollups on top of Celestia, where in a similar case each hacked rollup is free to do what it wants without hard forking any other rollup.

How to organize Celestia using application namespaces

When applications deploy on Celestia, they choose their own “namespace”, and all of their messages are then associated with it. Celestia organizes its blocks using a Merkle tree sorted by the namespace of each transaction. This lets users in the network easily query Celestia’s full storage nodes for the transactions related to their application without caring about data from other applications. Contrast this with existing blockchains, where every smart contract runs on the same world computer with consensus and execution combined: users of one smart contract effectively have to watch and check every other smart contract’s transactions.
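A namespaced Merkle tree can be sketched as an ordinary Merkle tree whose nodes also track the namespace range of their subtree. This is a simplified toy assuming a power-of-two number of leaves; Celestia’s real construction differs in its details:

```python
import hashlib

def leaf(ns: bytes, data: bytes):
    # A leaf node: (min_namespace, max_namespace, hash).
    h = hashlib.sha256(b"\x00" + ns + data).digest()
    return (ns, ns, h)

def node(left, right):
    # An internal node covers the union of its children's namespace ranges.
    h = hashlib.sha256(b"\x01" + left[0] + left[2] + right[0] + right[2]).digest()
    return (min(left[0], right[0]), max(left[1], right[1]), h)

def build_nmt(leaves):
    """Build a namespaced Merkle tree over leaves sorted by namespace.

    Because every node carries its subtree's namespace range, a storage
    node can prove it returned *all* data for a queried namespace:
    an omitted leaf would be detectable from the ranges on the proof path.
    Assumes a power-of-two number of leaves for simplicity.
    """
    level = [leaf(ns, d) for ns, d in sorted(leaves)]
    while len(level) > 1:
        level = [node(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

An application querying namespace `X` only needs the leaves whose range covers `X` plus the sibling hashes along the path, never the other applications’ data.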

Token

Celestia will indeed have a token, though details are very limited at the moment. It will be used to secure the network via proof of stake and to pay transaction fees. A fee-burning mechanism like EIP-1559 is also planned, creating deflationary pressure to offset new issuance as adoption grows.

Timeline

Celestia launched its Minimum Viable Product (MVP) and private development net in 2021. The next step is to launch a testnet in early 2022 (an incentivized testnet is planned), followed by a mainnet launch later in 2022.


Closing thoughts

All in all, Celestia offers several advantages over traditional solutions:

Scalability – By separating execution from consensus and data availability, Celestia can specialize and scale linearly with the number of nodes on the network. Execution environments are then free to optimize at the layer above.

Simplicity – Celestia bills itself as a pluggable solution that wants to make deploying an application-specific blockchain as easy as clicking a button. A potentially endless long tail of blockchains will have a natural home on top of Celestia.

Shared security – No more bootstrapping your own security and validator set as a separate chain. Whether for an otherwise independent chain or a Validium that needs to bootstrap a data availability committee, those options are harder and fragment security.

Sovereignty – The beauty of Celestia’s simple design is that it gives applications built on top a great deal of freedom. They are no longer tightly bound by the execution environment and governance decisions of the chain they live on.

The Celestia team is far ahead of its time in its thinking on data availability and the modular blockchain stack. Others, such as Polygon Avail or the various layer 1 blockchains now looking to modularize, are just waking up to the inevitability of this paradigm shift. This is increasingly the direction of blockchain scaling, and Celestia will provide a first-class solution.

Posted by CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/focus-on-the-data-availability-layer-to-understand-the-new-public-chain-celestia/