Celestia data availability


This article was compiled by the CFG Labs core team and W3 Hitchhiker; most of the content is drawn from the tenth Office Hour held on the evening of September 15.

Overview

About the authors: W3 Hitchhiker is an independent crypto research team that aims to discover cutting-edge technologies and innovative projects through first-principles thinking, supported by on-chain data. The team previously focused on the secondary market with subjective, non-hedged strategies and accumulated experience in DeFi. It has gradually become interested in the primary market, where it can communicate directly with projects and develop investment methods and ideas. The team currently has more than 50 members across three departments: the technical department, the chip department, and the investment research department. The investment research department includes three colleagues in charge of technology, among them Ren Hongyi and Liu Bicheng, who are the guest speakers for this session. We mainly look at underlying infrastructure projects and are interested in the technological innovations of the next cycle: DA-layer projects, Layer 2 Rollups, ZK, and so on. After learning about Celestia at the end of last year, we hoped to contribute to the community through the primary market, technical output, and more. We spent half a month translating the 200-plus pages of the doctoral dissertation of Mustafa (founder of Celestia), which also incorporates the LazyLedger white paper (LazyLedger being the predecessor of Celestia). Mustafa appreciates our work, and we have made many friends in the community, including Chloe and Frank from CFG Labs, who invited us to interpret the project this time. The main sharing team members are Rex, Ren Hongyi and Liu Bicheng.

Data availability


The proposal of DA is a hot topic. Ethereum's development has run into performance bottlenecks: transaction confirmation takes more than ten seconds, fees are high, and so on, so there is a need to scale the blockchain. The scaling directions discussed in the Ethereum community include: 1) L2, using Rollups for execution and computation; running Rollups (the execution layer) in parallel accelerates the chain; 2) scaling the chain itself, where enlarging blocks is currently seen as the most effective way to improve the efficiency of the whole network (at the cost of state bloat). To improve network utilization, Vitalik's Endgame post also brings together block-size expansion (to reduce costs) and Rollups. However, larger blocks inevitably place additional demands on consensus nodes, so to keep the network secure, verification has to be decentralized. For consensus nodes to achieve 1) high performance and 2) support for more Rollups joining, verification must be decentralized (full nodes versus light nodes). Data availability (Celestia) fits these requirements well. The first step after the Ethereum Merge is to push EIP-4844, i.e. Proto-Danksharding, and Danksharding likewise emphasizes data availability, which shows how much importance the community attaches to DA.


The official explanation of DA is to guarantee the availability of data on the network through data sampling. How should data availability be understood? Going back to the decentralized verification mentioned earlier: light nodes do not store all the data, do not participate in consensus, and do not need to maintain the state of the whole network in real time. For such nodes there has to be an efficient way to guarantee the availability and accuracy of the data. Next, consider the difference between DA and consensus with respect to data security. The core of a blockchain is that data is immutable, and the blockchain guarantees that data is consistent across the network. To maintain performance, consensus nodes tend to become more centralized, while the other nodes obtain the data confirmed by consensus through DA. The consensus here (agreement on transaction content and transaction order) is not exactly the same as the consensus of other networks (which also covers transaction ordering, verification of execution, and so on).

Introduction to Celestia


Celestia follows the philosophy of Cosmos, open, independent and sovereign. It is a modular public chain focused on data availability, built on Tendermint (no execution environment is provided). It has the following main characteristics:

1) Provides data availability for Rollups;

2) Separates settlement from the consensus layer. A third layer for settlement needs to be created, and it is also feasible for some applications to handle their own settlement;

3) A solution for data availability: 2D Reed-Solomon erasure coding + fraud proofs;

4) High-security service for light nodes: through fraud proofs they obtain relatively accurate, verified data that the network recognizes as valid.

The following covers the Celestia workflow, a comparison with Danksharding, recent topics of concern, the current state of the Mamaki testnet, Optimint, how Celestia is used, system verification, Celestia fees, community Q&A and more.

Celestia workflow

Celestia's implementation is explained here in three parts. Consensus and P2P interaction are similar to other chains and are not introduced; the focus is on the following differences:

1) Some differences in block construction. Start by defining a share. Shares contain transaction data and the proofs associated with that batch of transactions. In the Cosmos SDK (staking, governance, the account system), consensus and execution are separated from Tendermint; Celestia itself has no execution layer and no settlement layer. So the relationship between transactions and state is not the same in Celestia as in Ethereum: Ethereum's state is the change to the entire state tree after transactions have been executed, while the state Celestia envisions is not the result of executing transactions but simply the state of all the transactions stored on chain. (The previous implementation of shares has been overturned, and the team is looking for a new design.) The share is critical: fraud proofs and sampling both operate on it. So shares can be understood as transactions plus transaction-related proof data, packed into fixed-length, fixed-format data blocks.


After introducing shares, the next difference between Celestia's block and other chains' blocks is its data root. The data root is explained here according to the white paper (it may differ from the actual implementation; this is only about the concept). The figure shows a 2k x 2k matrix. First, how does this matrix come about? We start with a k x k matrix, where k is a parameter that can be adjusted. Having prepared the k x k matrix, we take the shares, containing the transaction-related data, and place one share in each cell of the matrix, filling the cells of the k x k matrix. If there are not enough shares, the remaining cells are padded with placeholder data; if there are more than enough, the extra shares wait for the next block. The size of k therefore determines the maximum transaction capacity that a single Celestia block can hold, i.e. the block capacity. The transactions in a share can be one or several transactions from the same batch, and the fixed share length imposes a limit on transactions; k x k determines the capacity of a single Celestia block, and the concrete value is something to watch as development proceeds. After the shares are placed in the k x k matrix, it is first extended horizontally with Reed-Solomon coding, from a k x k matrix to a k x 2k matrix: the k x k part is the raw data, and the extended part is an additional k x k. The original k x k is then extended vertically in the same way, and the extended parts are extended once more, so that after this encoding we obtain a 2k x 2k square matrix with the shares encoded into it. That is how the data is constructed. What, then, is the data root? Take the 2k x 2k matrix and build a Merkle tree from each row and each column; each tree has a Merkle root, so we get 2k + 2k Merkle roots. Then build those 2k + 2k roots into another Merkle tree, and the root of that tree is the data root. The data root is placed in the block header. Celestia's DA revolves entirely around the data root; it is the key piece of data in the block. How data and data-related transactions are confirmed, and how these shares are generated, is currently being reworked.
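As a rough illustration of this construction, here is a minimal sketch based on the white-paper description above, not Celestia's actual code: the tiny prime field, the plain Lagrange-based Reed-Solomon extension and the SHA-256 Merkle tree are all illustrative choices.

```python
import hashlib

P = 2**31 - 1  # toy prime field; real systems use much larger fields

def rs_extend(values, k):
    """Reed-Solomon extend k field elements to 2k by evaluating the degree-(k-1)
    polynomial through (0, v0) .. (k-1, v_{k-1}) at x = k .. 2k-1."""
    def eval_at(x):
        total = 0
        for j, yj in enumerate(values):
            num, den = 1, 1
            for m in range(k):
                if m != j:
                    num = num * (x - m) % P
                    den = den * (j - m) % P
            total = (total + yj * num * pow(den, P - 2, P)) % P
        return total
    return list(values) + [eval_at(x) for x in range(k, 2 * k)]

def merkle_root(leaves):
    nodes = [hashlib.sha256(repr(l).encode()).digest() for l in leaves]
    while len(nodes) > 1:
        if len(nodes) % 2:
            nodes.append(nodes[-1])
        nodes = [hashlib.sha256(nodes[i] + nodes[i + 1]).digest()
                 for i in range(0, len(nodes), 2)]
    return nodes[0]

def data_root(shares, k):
    """shares: the k*k filled/padded square of shares, row-major."""
    rows = [rs_extend(shares[i * k:(i + 1) * k], k) for i in range(k)]    # k x 2k
    cols = [rs_extend([rows[i][j] for i in range(k)], k)                  # 2k columns, each 2k long
            for j in range(2 * k)]
    square = [[cols[j][i] for j in range(2 * k)] for i in range(2 * k)]   # 2k x 2k
    row_roots = [merkle_root(r) for r in square]
    col_roots = [merkle_root([square[i][j] for i in range(2 * k)]) for j in range(2 * k)]
    return merkle_root(row_roots + col_roots)  # one root over the 2k + 2k row/column roots

# e.g. a 4 x 4 block of already-padded toy shares:
print(data_root(list(range(16)), k=4).hex())
```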

2) Now that the data root has been placed in the block header, let's see how DA works.


The interaction between consensus nodes themselves (ordinary P2P) is not explained here. DA involves the interaction between consensus nodes and light nodes; nodes that do not participate in consensus are collectively called light nodes, and propagation depends mainly on the flow between them. Here is how they interact. After the consensus nodes finish a round of consensus (data confirmation), new blocks are produced and the block headers, including the data root, are sent to the light nodes. After a light node receives the data root, it performs random sampling: it picks a set of coordinates in the 2k x 2k matrix and packages them into a sample set, then sends the sampled set to the consensus nodes it is connected to, asking them to return the shares at those coordinates. A consensus node can respond in two ways. If it has what was requested, it replies with 1) the shares and 2) the Merkle proofs of those shares against the data root; the light node verifies the Merkle proofs, confirming the shares belong under the data root, and then accepts the shares. Once all of its samples have been answered, the block is essentially recognized and the data considered available. The data root commits to the block's transactions, and the ability to respond shows that the data is recognized by consensus nodes on the network. If a consensus node does not respond, the light node forwards the shares it has already received to that consensus node, helping the whole network converge quickly. In a P2P network, as the network grows, some consensus nodes may not receive the consensus result, or receive it slowly; through this mechanism light nodes can quickly help spread it across the network.
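A conceptual sketch of that light-node sampling loop follows. It is an illustration only, not Celestia's client API: `rpc`, `header` and `get_share` are hypothetical stand-ins for a node connection, a received block header and its share-request call, and the Merkle scheme is a simple SHA-256 pairing.

```python
import hashlib
import random

def verify_merkle_proof(leaf: bytes, proof, root: bytes) -> bool:
    """proof: list of (sibling_hash, sibling_is_left) pairs from leaf to root."""
    node = hashlib.sha256(leaf).digest()
    for sibling, is_left in proof:
        pair = sibling + node if is_left else node + sibling
        node = hashlib.sha256(pair).digest()
    return node == root

def sample_block(rpc, header, k: int, num_samples: int = 16) -> bool:
    """Light-node view: pick random coordinates in the 2k x 2k square, ask a
    connected node for each share plus its proof, and accept the block only if
    every sample is answered and proven against the committed data root."""
    size = 2 * k
    coords = random.sample([(r, c) for r in range(size) for c in range(size)],
                           num_samples)
    for row, col in coords:
        resp = rpc.get_share(header.height, row, col)   # hypothetical RPC call
        if resp is None:
            return False          # unanswered sample: cannot treat the data as available
        share, proof = resp
        if not verify_merkle_proof(share, proof, header.data_root):
            return False          # share does not prove into the data root
    return True
```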

There is a question here: light nodes sample a set of coordinates, but how many samples are needed? With a 2k x 2k two-dimensional erasure code, sampling k x k shares is enough to recover the block completely. Why is there no explicit requirement on how many samples a light node must take? If the entire network had only one light node and we had to guarantee recovery, that node would need to sample k x k shares to restore the original data. In practice a network will have N light nodes, and the task can be distributed across them. The official documentation gives a formula relating how many times you sample to the probability that the data is available. Light nodes may choose their own sampling amount according to the security level they require. At the same time, in Celestia, the more light nodes there are and the larger the blocks, the more efficiently the network runs: with only one light node, at least k x k samples are needed, while with k x k light nodes, ideally with no repeated sampling, each node only needs to sample once. The bandwidth and performance demands on each node stay consistent, and more nodes means a larger total number of samples. This is why more light nodes make the network more efficient.
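As a rough back-of-the-envelope version of that formula (my own sketch, not the exact expression in the official docs): if at least about a quarter of the 2k x 2k extended square must be withheld before the block becomes unrecoverable, each uniform random sample hits a withheld share with probability of roughly 1/4 or more, so the chance that s independent samples all miss the withholding is at most about (3/4)^s.

```python
# Rough sampling-confidence estimate under the 1/4-withholding assumption
# discussed later in this article; not the exact formula from the Celestia docs.
def availability_confidence(samples: int) -> float:
    """Probability that at least one of `samples` uniform draws hits a withheld
    share, assuming at least 1/4 of the extended square is withheld."""
    return 1 - 0.75 ** samples

if __name__ == "__main__":
    for s in (5, 10, 15, 20):
        print(f"{s:>2} samples -> {availability_confidence(s):.2%}")
    # 15 samples already gives roughly 99%, in line with the 15-20 sample
    # range mentioned in the system-verification section below.
```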

Why do we need fraud proofs when we already have erasure coding and sampling and can, in theory, obtain the data? Suppose I sample data from ten consensus nodes; the erasure code lets me confirm that the data is indeed what those ten nodes gave me. But there is a problem: what if those ten nodes did not give us correct data? The erasure code only proves that this is the data they wanted to give us. To verify that the data they gave us is correct, i.e. that the consensus nodes encoded it according to the intended rules, we need fraud proofs. That is what fraud proofs are for: proving whether the shares obtained by sampling, and the data recovered from them, are correct or invalid.

A fraud proof consists of three parts (a small structural sketch follows this list).

1) Which block's data the fraud proof challenges. Fraud proofs are optimistic and have a certain lag, so the challenge is not necessarily against the current block; it may be against a block several blocks back.

2) Which share is wrong. The fraud proof must point out the share in which the mistake was made, together with the root of the row/column where that share sits and the corresponding Merkle proof, showing whether the consensus node followed the rules we intend.

3) The Merkle proofs must also cover at least k shares of the row/column where the wrong share sits; with k shares that row or column can be recovered, which makes the claim verifiable.
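A minimal data-structure sketch of those three parts (my own illustration; the field names are hypothetical and this is not Celestia's wire format):

```python
from dataclasses import dataclass
from typing import List, Tuple

MerkleProof = List[Tuple[bytes, bool]]   # (sibling hash, sibling-is-left) pairs

@dataclass
class ShareWithProof:
    index: int            # position of the share within the row/column
    data: bytes           # the share itself
    proof: MerkleProof    # inclusion proof against the row/column root

@dataclass
class FraudProof:
    block_height: int             # 1) which block is challenged (may lag behind the tip)
    axis: str                     # "row" or "column"
    axis_index: int               # which row/column the bad share sits in
    axis_root: bytes              # 2) that row/column's Merkle root
    bad_share_index: int          #    position of the share claimed to be wrong
    shares: List[ShareWithProof]  # 3) at least k proven shares, enough to
                                  #    recover the whole row/column locally
```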

Let's take a look at how fraud proofs work in practice.


A consensus node answers a light node's request with the requested share data, and the light node forwards those shares to other consensus nodes (asking them to help verify). The other consensus nodes check whether the shares are consistent with their local data; if they are not, they initiate a fraud proof. How is the validity of a fraud proof judged? It has to be verified:

1) The specified block (its data root) is one I have locally. You claim a share is wrong, so you must give me the root of the row/column containing that share and the Merkle proofs, and I will verify them.

2) The row/column roots you gave are indeed under my data root.

3) Using the correct shares you gave me, I recover the whole row or column, compare it with my local data, and find that it really is different from mine.

If all three checks pass, the fraud proof is confirmed valid: the consensus node that gave me those shares earlier was misbehaving. I blacklist it and no longer accept any shares it sends. (A sketch of this check follows.)
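Here is a minimal sketch of that three-step check, reusing the hypothetical FraudProof structure and the toy verify_merkle_proof from the earlier sketches. It is not Celestia's actual implementation: root_in_data_root and rs_recover are hypothetical helpers standing in for "this row/column root sits under my data root" and "decode the full row/column from any k proven shares".

```python
def verify_fraud_proof(fp, local_header, local_square, k, root_in_data_root, rs_recover):
    """fp: a FraudProof as sketched above.
    local_header / local_square: the verifier's own copy of the block.
    Follows the three checks described in the article."""
    # 1) the challenged block must be one we hold locally
    if fp.block_height != local_header.height:
        return False
    # 2) the claimed row/column root must sit under our data root,
    #    and every supplied share must prove into that root
    if not root_in_data_root(fp.axis, fp.axis_index, fp.axis_root, local_header.data_root):
        return False
    for s in fp.shares:
        if not verify_merkle_proof(s.data, s.proof, fp.axis_root):
            return False
    if len(fp.shares) < k:
        return False                      # not enough shares to re-run the coding
    # 3) recover the whole row/column from the proven shares and compare locally
    recovered = rs_recover([(s.index, s.data) for s in fp.shares])
    local_line = (local_square[fp.axis_index] if fp.axis == "row"
                  else [row[fp.axis_index] for row in local_square])
    return recovered != local_line        # a mismatch means the fraud proof is valid
```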

This is the fraud-proof interaction process. At this point the data availability flow is more or less complete.

Let's summarize. DA encodes the transaction data, as shares, with a two-dimensional RS erasure code; the encoded data is then sampled between consensus nodes and non-consensus nodes. Once the sampled data is obtained it can be recovered, confirming that the data is available; Rollups, for example, can then restore their own transactions and do their own computation. At the same time, the erasure code only guarantees that the data is the data the other party wanted to give, so fraud proofs are introduced to guarantee that the other party encoded the data according to the expected rules, i.e. that it is valid data. Together, these two parts provide a solution that gives light nodes fast verification and assurance of data validity.

Contrast with Danksharding:


Contrast between Danksharding and Celestia: both encode the data with a 2D RS erasure code, but they take different paths. Danksharding uses KZG polynomial commitments. A KZG commitment to a polynomial can provide a proof that f(x) = y, i.e. that a given pair (x, y) lies on the polynomial's curve, and the verifier needs neither the polynomial itself nor to evaluate it; a simple check is enough to verify the proof. KZG commitments fit RS encoding well, which comes down to how RS erasure coding is implemented: it extends k pieces of data to 2k. How? Take the k pieces of data and, roughly speaking, use their indices as x and the data as y; from those k points, via Fourier transforms and similar techniques, you obtain a polynomial of degree k-1. You can picture its curve on a set of axes: positions 0 through k-1 carry the raw data, and the next k evaluations are the extended data. Any k of these 2k points recover the whole polynomial, which means any k of the 2k data recover the original k pieces of raw data. A polynomial commitment is exactly a commitment to such a polynomial together with proofs of its evaluations. Its advantages: 1) it fits this kind of encoding well; 2) a proof has a fixed size of 48 bytes; 3) because it is a validity proof, a light node can verify the proof immediately on receipt, so transaction confirmation is immediate, unlike optimistic proofs, which need a waiting period during which nobody raises a challenge before the transaction can be considered final. These are the advantages of KZG commitments.
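As a small worked example of the "any k of the 2k points recover the data" property (a toy over a small prime field with plain Lagrange evaluation; real Danksharding works over the BLS12-381 scalar field with FFTs and KZG commitments):

```python
# Toy demonstration that any k of the 2k Reed-Solomon points recover the
# original k data values.
P = 2**31 - 1

def interpolate_eval(points, x):
    """Evaluate at x the unique degree-(len(points)-1) polynomial through `points`."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

k = 4
raw = [7, 13, 42, 99]                                   # original k data values at x = 0..k-1
base = list(enumerate(raw))
extended = [(x, interpolate_eval(base, x)) for x in range(2 * k)]   # the 2k points

subset = [extended[1], extended[4], extended[6], extended[7]]       # any k of them
recovered = [interpolate_eval(subset, x) for x in range(k)]
assert recovered == raw                                  # the raw data comes back
```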

Celestia's fraud proof is an optimistic proof, and its biggest advantage is exactly that optimism: as long as nothing on the network goes wrong, efficiency is very high. If nothing goes wrong there are no fraud proofs at all; light nodes need do nothing beyond receiving the data and restoring it according to the encoding, and the whole process is very efficient, with data availability coming as a plus on top of that.

Besides data availability, Danksharding also includes PBS. PBS is a solution to the MEV problem: it separates the block-builder role from the consensus (proposer) role. The PBS scheme restricts block builders' ability to censor transactions, and crList is also part of that work. This part has little to do with DA. Celestia is not currently considering a settlement layer, nor MEV. To sum up, Celestia is a public chain built around data availability, with no execution or settlement layer, so the capacity of the whole network is used for data availability, whereas Ethereum's Danksharding is not just about data availability but also covers the settlement tier.

Recent Hot Topic Discussions:


A correction to something said just now: the data root is not the root of a 2K*2K tree; it is the root built over the 2K+2K row and column roots. Celestia also has a minimum honest node assumption, which means that as long as a light node is connected to one honest validator it can be secure; the Byzantine 2/3 assumption is not what matters in this case. A recent update: it was previously thought that multiple transactions were placed in one share, whereas now a transaction may be split across multiple shares.

The team's recent third call also explained the differences from Danksharding. Technically, we have covered them above; from a user's point of view, the differences are as follows:

1) Block size. Ethereum's blobs amount to roughly 16 MB per block, while Celestia promises to reach large blocks of 100 MB;

2) Celestia focuses on DA, with less metadata (auxiliary data) and less execution-related data, so the base fee should in theory be slightly lower than on ETH;

3) Sovereignty of the Rollups: Celestia favors freedom, and the Rollups need to ensure their own security, whereas ETH has contracts that check the validity of the data submitted by Rollups;

4) Celestia uses namespaces, so you do not have to fetch all the data on the main chain, only the data related to your own rollup.

Mamaki testnet


At present there is no incentive; one may come at the end of this year or the beginning of next year. The next testnet upgrade is expected around October and is aimed more at serving developers; the current testnet mainly lets everyone experience how Celestia works. The testnet is now running normally: bugs such as crashes are now rare, the nodes function normally, and the light nodes work well and in line with expectations, for example downloading little data and running efficiently, though their networking demands may be slightly higher. The main chain itself is not very stable, and it often takes five or ten minutes to produce a block. As for block order, under the Tendermint mechanism the probability of proposing a block is determined by the amount staked, and with similar stakes proposers should rotate, yet validators are often seen producing 3-4 blocks in a row.

Issues currently seen:

1) Validator entry and exit: no matter how small the stake, even a small validator node exiting causes network instability.

2) Nodes cannot connect to too many peers, only a small number of relatively stable ones;

3) Even with a small transaction volume, the block time is around 50 seconds per block, which raises worries about future throughput. It is still relatively early, though, and there is plenty of room for optimization.

4) The bridge node problem is relatively serious. During the Mid-Autumn Festival, after our node restarted we found that memory and network usage had skyrocketed, and we contacted other validator node operators. Some bridge nodes' memory had climbed to 20 GB, which is very abnormal: the Celestia data on disk is 14 GB, a bridge node should not be storing data, yet its memory use reaches 20 GB, which is a fairly obvious problem (the cause is still being investigated).

Optimint

Tendermint provides consensus. Optimint is what a Celestia rollup uses. The rollups on the market currently have only a single sequencer, so there is no need to run consensus; it is relatively simple, just uploading the data packaged by the sequencer to the mainnet. If rollups need their own consensus in the future, they should still build it on Tendermint; building consensus is much harder than doing a data upload, so the two are not in competition.

In terms of contracts, Celestia itself currently pays little attention to execution; it borrows the ready-made CosmWasm technology (which combines well with Cosmos and is already usable; there may be a Move VM, an EVM and so on in the future). Two examples have been built so far, and they are strongly affected by the instability of the chain: submitting a transaction sometimes takes 10 minutes, so the user experience is not great.

Optimint and the App (a Cosmos App implemented as an application chain) are currently connected through ABCI: the chain's own transactions go through the consensus engine, the upload function goes through ABCI, and other methods will be added later.

The direct connection between nodes (over TCP) is currently gRPC, which is the more modern technology, but REST and similar interfaces are more widely used and more compatible; this part is still slightly unfinished.


Let's talk about verification in the system. Light nodes do not need to download all the data to verify its validity, because the 4x erasure-code extension guarantees high reliability.

1) So how is the reliability of sampling ensured? Take 100 rows x 100 columns, i.e. 10,000 shares; a single sample obviously does not give a one-in-ten-thousand guarantee. Extending the data four times means that at least 1/4 of all shares must be unavailable before the block truly cannot be recovered; only when 1/4 is unavailable is the data genuinely unrecoverable, so drawing an unavailable share is what actually detects the error, and the probability of each draw hitting one is about 1/4. Drawing ten or fifteen times reaches roughly 99% reliability (see the back-of-the-envelope calculation earlier); the current choice is in the range of 15-20 samples.

2) Celestia does not verify the correctness of the DA content itself. For example, if your rollup transfers its data to Celestia, ETH has contracts to verify the validity of the transactions, whereas Celestia hands the data back to the rollup's nodes to verify.

Here is the role of a node. A node may play one role on a rollup and another on Celestia: it may be a full node on the rollup but a light node on Celestia. As long as it is a light node it can obtain the data; there is no need to run a full node of the base chain as with Ethereum, which may be an advantage over ETH's current solution. On ETH, if you want to verify data sent to the mainnet, you need to run a mainnet full node, while Celestia only needs light nodes, and with light nodes it is the sampling that ensures reliability.

The minimum honesty assumption just mentioned means that, as a light node, you only need to be connected to one honest node to be secure. What if you do not even believe that, i.e. what if no honest node is connected? Then you can run a full node of the rollup: you take all of your rollup's data and check it against the root released by your sequencer. As for verifying the rollup's own state, Celestia has no execution capability; your account balance, for example, has to be verified by yourself. If your sequencer misbehaves and sends two blocks at the same height to Celestia, Celestia only guarantees that both pieces of data are delivered to your rollup's nodes, so the rollup nodes receive two blocks at the same height. When ETH encounters this situation it is a fork, and how to handle it is the rollup's responsibility.

The fee for Celestia has two parts


1) The rollup's own byte fees and execution gas; if Ethereum acts as the execution layer, these are settled in ETH.

2) Celestia's fee for storing the transaction data. The sequencer packages the data and pays for it itself; how it recoups that cost from other nodes, or from the rollup's users, is something the rollup has to design itself. (There may also be state root data and the like.) Storage fees are paid in Celestia's native token, though the architecture also supports other currencies. Changes have also been made here: the data you upload is separated from the payment logic.

3) The goal is for the combined cost of the two parts to be cheaper than putting the data on Ethereum alone. Calldata on ETH is expensive; after ETH ships Danksharding the cost with KZG will be lower, but Celestia still promises lower fees (see the rough calldata arithmetic after this list).

4) The choice of storage is also more flexible: Celestia, ETH, or off-chain storage.
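For a rough sense of why calldata on Ethereum is the expensive baseline, here is a back-of-the-envelope sketch. The 16 gas per non-zero byte figure is from EIP-2028; the data size, gas prices and the all-non-zero-bytes worst case are illustrative assumptions, not a quote of either network's actual fees.

```python
# Back-of-the-envelope calldata cost for posting rollup data to Ethereum.
GAS_PER_NONZERO_BYTE = 16          # EIP-2028

def calldata_cost_eth(num_bytes: int, gas_price_gwei: float) -> float:
    gas = num_bytes * GAS_PER_NONZERO_BYTE     # worst case: all bytes non-zero
    return gas * gas_price_gwei * 1e-9         # gwei -> ETH

if __name__ == "__main__":
    size = 100 * 1024                          # a hypothetical 100 KiB batch of rollup data
    for gwei in (10, 30, 100):                 # assumed gas prices
        print(f"{gwei:>3} gwei -> {calldata_cost_eth(size, gwei):.4f} ETH")
```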


Ways to use Celestia

1) Go directly onto the mainnet, or run Rollups that do not care much about transaction order: Celestia's validators do the transaction ordering, and you have no sovereignty over it.

2) Sovereign rollups: the rollup does its own transaction ordering and deals with Celestia itself; other nodes take the data from Celestia and execute it, and maintaining security is left to the rollup itself.

3) ER, built directly into the L1 specification rather than deployed as a smart contract, used in conjunction with ETH: ETH acts as the settlement layer on top of Celestia, and other rollups connect to that settlement layer, probably not dealing with Celestia directly but going through the settlement tier.

4) The Quantum Gravity Bridge, which uses Celestia purely as a plug-in DA layer for ETH.

In theory both arrangements are achievable, ETH as Celestia's settlement layer or Celestia as ETH's DA layer; the combinations are all possible.

Community Q&A

1) What kinds of nodes does Celestia have?

Answer (Member A):

Full nodes on Celestia are storage nodes, not full-featured nodes like on ETH: they are the nodes that store all the data. Consensus nodes and light nodes are easy to understand. Bridge nodes exist to serve DA: they provide RPC services, accept connections from light nodes, and provide the special functions light nodes need for data sampling. This specially built node is called the bridge node; it sits between the consensus/full nodes and the light nodes as a second hop, similar to a gateway.

2) What is a namespace?

A rollup uploads its data to Celestia, where it is stored, and each rollup only wants to fetch its own data. Namespaces are distinguishing flags between rollups: when a rollup fetches data it carries this flag and retrieves only its own data. The ways of working differ. Sovereign rollups are: you send me the data, I only need to verify that the signature on the transaction data is the sequencer's signature, and I accept your data; the service I give your rollup is to ensure the data is delivered intact to the other rollup nodes. That is where DA does its work, and it is one way a rollup can operate. There is another way of working: ETH's smart-contract rollups, which are defined by a series of contracts. Different names are used to distinguish the two.
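A toy illustration of the namespace idea follows (my own sketch; Celestia actually uses namespaced Merkle trees, which additionally let the data provider prove that the returned shares are complete for a namespace, and the 8-byte ID and share layout here are illustrative assumptions):

```python
# Toy namespace filtering: each share is prefixed with a fixed-length
# namespace ID, and a rollup asks only for shares carrying its own ID.
NAMESPACE_LEN = 8

def make_share(namespace_id: bytes, payload: bytes) -> bytes:
    assert len(namespace_id) == NAMESPACE_LEN
    return namespace_id + payload

def shares_for_namespace(block_shares, namespace_id: bytes):
    """What a rollup node conceptually asks for: only the shares in a block
    whose namespace prefix matches its own."""
    return [s[NAMESPACE_LEN:] for s in block_shares
            if s[:NAMESPACE_LEN] == namespace_id]

rollup_a, rollup_b = b"rollup-A", b"rollup-B"
block = [make_share(rollup_a, b"tx1"), make_share(rollup_b, b"tx2"),
         make_share(rollup_a, b"tx3")]
print(shares_for_namespace(block, rollup_a))   # [b'tx1', b'tx3']
```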

3) How are sharding and scalability handled?

Blocks are pure data and Celestia does no execution, so processing capacity is relatively high and there is not a lot of verification. A 100 MB block now looks achievable with a fair degree of confidence: it is pure binary data, and Celestia does not care what is inside; as long as the sequencer has signed it, the contents do not matter.

4) Rollup data is stored on Celestia and the data is available, but how efficient is it to verify validity? With sampling verification, will the cost change more and more as the stored data grows? An Ethereum full node is already hundreds of GB, close to 1 TB. A rollup's data will keep growing, and the cost will keep rising; is there any research on this relationship?

W3 answer: Celestia blocks have a capacity, and the more transactions, the higher the cost, so how does Celestia deal with it? First of all, there is no execution layer: every rollup transaction is just binary to Celestia, purely a piece of data with no semantic meaning. Any chain has a capacity limit; for the data root, k*k is the block capacity. As for how to expand block capacity, light nodes can help the whole network converge. Celestia prefers large blocks, so how are large blocks propagated across the whole network? The more light nodes, the faster the network converges: light nodes re-forward what they have sampled, and this second round of forwarding accelerates the convergence of the whole network and improves its efficiency. The bandwidth of light nodes also contributes, helping propagate blocks between consensus nodes; growing the number of light nodes spreads the load rather than increasing the load on individual nodes.

CFG Labs: DA block size, like the bandwidth of the Internet, is a core metric. The more bytes per second a DA block can carry, the more transactions the rollup execution layer can process, and the faster the blockchain. Bytes per second, from a traffic point of view, is bandwidth.
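To make that concrete with a small worked example (the block size, block time and average transaction size below are illustrative assumptions, not Celestia or Ethereum figures):

```python
# DA throughput as "bandwidth": bytes per second available to rollups,
# divided by an assumed average transaction size.
def da_throughput_tps(block_bytes: int, block_time_s: float, avg_tx_bytes: int) -> float:
    bytes_per_second = block_bytes / block_time_s
    return bytes_per_second / avg_tx_bytes

# e.g. a 100 MB block every 30 s, with ~250-byte rollup transactions
print(f"{da_throughput_tps(100 * 1024 * 1024, 30.0, 250):,.0f} tx/s")  # ~14,000 tx/s
```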

5) Q: As the data stored in blocks gets larger, is the cost linearly related? Light nodes also have to participate in consensus, so does the cost increase, and will the time to reach consensus increase? (time, cost, experimental results)

Answer: Celestia has no execution layer, so what is my consensus? My consensus is simply that I collect the data; once the data is confirmed and the order is confirmed, my consensus is finished, and it does not need to execute anything. So as the amount of data grows, where does my overhead grow? One part is storage and the other is bandwidth; the computation is negligible. We have full nodes, nodes that specialize in storage, and there are also light nodes that, by sampling part of the data, help large blocks propagate over P2P; light-node bandwidth assists the interaction between consensus nodes.

Follow-up: For Celestia sampling, is the sampling speed not directly related to the block size? Sampling 10 points at different locations should take the same time whether the block is 100 MB or 1000 MB, so where does the extra cost come from?

Answer: You mentioned that as the amount of data grows the overall cost rises, and that the official interpretation is that more nodes means more processing capacity; I am not sure exactly how you read that sentence. With a fixed number of nodes I can only handle so much, but I can increase the number of nodes, so the light-node network expands processing capacity, and Celestia uses this other way to relieve the capacity conflict. As the Celestia block size grows, the number of light nodes grows with it, and the light nodes here are not limited to those of a particular rollup: all rollups' light nodes participate in this together, regardless of which rollup they belong to. For the whole network the cost rises, but it is amortized across the light nodes, and the amount of data per sample does not change much. Say the data is 100 MB now and then becomes 200 MB; the number of light nodes needed may also double, so the cost for a single light node does not grow much. Scaling is absorbed by the growth in numbers, so an individual light node barely perceives the scaling problem; the impact on light nodes is limited. And a 100 MB block has not yet reached the limit.
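A tiny illustration of that amortization argument (my own sketch, using the earlier rule of thumb that roughly k*k samples across the whole network are enough to guarantee recoverability; the k values and node counts are arbitrary):

```python
# If the network collectively needs about k*k samples to guarantee the
# extended block can be reconstructed, the per-node burden shrinks as
# light nodes join.
import math

def samples_per_node(k: int, num_light_nodes: int) -> int:
    total_needed = k * k                      # rough network-wide target
    return math.ceil(total_needed / num_light_nodes)

for k, nodes in [(128, 1_000), (128, 10_000), (256, 10_000), (256, 40_000)]:
    print(f"k={k:<4} light nodes={nodes:<6} -> ~{samples_per_node(k, nodes)} samples each")
# Doubling k (i.e. quadrupling block capacity) while the light-node count
# grows keeps the per-node sampling load roughly flat.
```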

As for the square-root claim: that is a long-standing rumor, spread because the earliest translators did not fully understand it. No matter how large the block, the size of the data root in the block header does not change at all. A message index may be added later, which would change with the number of messages, but the index is very small, only a few hundredths the size of the data itself.

Question: Storage nodes, light nodes, bridge nodes: how are incentives distributed among them?

W3: We have not thought about the economics in much detail. Light nodes are mainly users and will probably not share in Celestia's storage fees; the split is between the consensus nodes, storage nodes and bridge nodes. Like the set of nodes we are currently running, we have not yet worked out how to divide our own revenue. Under the PoS mechanism it is determined by the amount staked; security and attack considerations are involved, and it has to be balanced.

Question: Regarding the execution layer and the DA layer, Celestia and the existing solutions such as Ethereum, ZK rollups, Optimistic rollups, and other Cosmos ecosystem projects: if I want to build a rollup myself, how should I deploy it?

Answer: There are three ways to use Celestia. If you go directly onto the Celestia mainnet, or run a simple rollup, then you simply piggyback on Celestia's full nodes. If you run a sovereign rollup, you need to think about how to design your sequencer and the security mechanisms of your network: the sequencer is responsible for passing the data to Celestia and paying for storage, and the other nodes of your rollup can act as light nodes on Celestia and take the data from it; you run your own chain, your own business, your own logic. If it is the Quantum Gravity Bridge, that is a contract deployed on ETH, and as an ETH rollup you will not deal with Celestia directly.

8) The Cevmos question: Ethereum smart contracts can be deployed on Cevmos, i.e. Celestia + Cevmos + Rollups.

Cevmos can be understood as providing the execution and settlement layers. So how does the whole mechanism work?

Answer: Other nodes on the rollup get the data from Celestia and pass it to Cevmos to execute and update their own state. It depends on your design, because your sequencer may or may not execute transactions: if you want to verify a transaction before packaging it, you also need to run it in Cevmos, confirm it is valid, and then package and upload it. It depends on how your own rollup designs its mechanism. Rollup users send transactions to the sequencer, the sequencer uploads them to Celestia, your other nodes take the data from Celestia, and you execute it in an execution virtual machine; that is the whole cycle. The settlement layer mainly solves the problem of exchange between different assets and provides security (as Ethereum does): exchanging assets between different chains requires a bridge (a trusted bridge or a trust-minimized bridge) and a common settlement layer. At present a typical rollup has only one sequencer; in the future consensus mechanisms such as Tendermint, Avalanche and others could be introduced, allowing independent designs.

Posted by: CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/celestia-data-availability/