With the rapid emergence of cross-chain bridges, new frameworks, and other core crypto protocols, effectively navigating blockchain infrastructure remains a key challenge for users, developers, and investors. The term “blockchain infrastructure” can encompass a variety of products and services, from the underlying network stack to consensus models and virtual machines. We reserve a more in-depth analysis of the various “core” components that make up an L1/L2 chain for a future release (stay tuned!). In this article, our specific goals are:
- Provide a broad overview of the key components of blockchain infrastructure.
- Break these components down into clear, digestible subsections.
- Infrastructure Map
We define the blockchain infrastructure ecosystem as the set of protocols designed to support L1 and L2 development in the following key areas:
- Layer 0 infrastructure: (1) decentralized cloud services (storage, compute, indexing); (2) node infrastructure (RPC, staking/validators)
- Middleware: (1) data availability; (2) communication/messaging protocols
- Blockchain development: (1) security and testing; (2) developer tools (out-of-the-box tools, frontend/backend libraries, languages/IDEs)
Layer 0 Infrastructure
Decentralized Cloud Services
Cloud services have been critical to the growth of Web2: as the computing and data demands of applications grow, providers that specialize in delivering data and computation quickly and cost-effectively become essential. Web3 applications have similar needs, but aim to stay true to the decentralized ethos of blockchain. Protocols have therefore emerged to build decentralized versions of these Web2 services. A decentralized cloud has three core parts:
- Storage – Data/files are stored on servers run by many entities. These networks enable a high degree of fault tolerance as data is replicated or striped across multiple machines.
- Computation – Like storage, computation is centralized in the Web2 paradigm. Decentralized computing is concerned with distributing this computation across many nodes for greater fault tolerance (if one or a group of nodes fails, the network can still service requests with minimal disruption to performance).
- Indexing – In the Web2 world, data is already stored on a server or group of servers owned and operated by an entity, and it is relatively easy to query this data. Because blockchain nodes are distributed, data can be siloed, scattered across different regions, and often under incompatible standards. The indexing protocol aggregates this data and provides an easy-to-use and standardized API to access this data.
Several projects provide storage, computation, and indexing together (e.g., the Aleph and Akash networks), while others are more specialized (e.g., The Graph for indexing, Arweave/Filecoin for storage).
Node Infrastructure
RPC – Remote Procedure Calls (RPCs) are central to the functionality of many types of software systems. They allow one program to call or access programs on another computer. This is especially useful for blockchains, which must serve a high volume of incoming requests from machines operating in different regions and environments. Protocols such as Alchemy, Syndica, and Infura provide this infrastructure as a service, allowing builders to focus on high-level application development rather than the underlying mechanisms involved in transporting and routing calls to nodes.
Like many RPC providers, Alchemy owns and operates all of its nodes. For many in the crypto community, the dangers of centralized RPC are obvious: it introduces a single point of failure that can jeopardize the liveness of applications (i.e., if Alchemy fails, applications will not be able to retrieve or access data on the blockchain). More recently, decentralized RPC protocols like Pocket have seen growth to address these issues, but the effectiveness of this approach remains to be tested at scale.
Staking/Validator – Blockchain security relies on a distributed set of nodes validating transactions on the chain, but someone must actually run the nodes participating in the consensus. In many cases, the time, cost, and energy required to run a node is prohibitive, causing many to opt out and instead rely on other nodes to take responsibility for securing the chain.
However, this attitude poses a serious problem: if everyone defers security to everyone else, no one actually validates. Services such as P2P and Blockdaemon run infrastructure that allows less sophisticated or undercapitalized users to participate in consensus, often by pooling capital. Some argue that these staking providers introduce an unnecessary degree of centralization, but the alternative could be worse: without such providers, the barrier to entry for running a node would be too high for ordinary network participants, which could lead to even greater concentration.
Data Availability
Applications consume a lot of data. In the Web2 paradigm, this data usually comes directly from users or third-party providers in a centralized manner (data providers get paid for aggregating and selling data to specific companies and applications, such as Amazon, Google, or other machine-learning data providers).
DApps are also heavy consumers of data, but require validators to make this data available to users or applications running on-chain. In order to minimize trust assumptions, it is important to provide this data in a decentralized manner. Applications can access high-fidelity data quickly and efficiently in two main ways:
Data oracles such as Pyth and Chainlink provide access to data streams, allowing crypto networks to interface with legacy systems and other external information in a reliable and decentralized manner. This includes high-quality financial data (e.g., asset prices). This service is critical to expanding DeFi to a wide range of use cases in trading, lending, sports betting, insurance, and many other areas.
A data availability layer is a chain that specializes in ordering transactions and making data available to the chains it supports. Typically, by sampling a small fraction of a block, these layers generate proofs that give clients high-probability assurance that all block data has been published on-chain. Data availability proofs are key to keeping rollup sequencers honest and to reducing the cost of rollup transaction processing. Celestia is a good example of this layer.
Communication and Messaging
As the number of Layer 1s and their ecosystems grows, the need for cross-chain composability and interoperability becomes even more pressing. Cross-chain bridges enable otherwise isolated ecosystems to interact in meaningful ways, similar to how new trade routes helped connect otherwise disparate regions, ushering in a new era of knowledge sharing! Wormhole, LayerZero, and other cross-chain bridge solutions support generic messaging, allowing all types of data and information (including assets) to move across multiple ecosystems; applications can even make arbitrary function calls across chains, enabling them to reach other communities without having to deploy there. Other protocols such as Synapse and Celer are limited to cross-chain transfers of assets or tokens.
On-chain messaging remains a key component of blockchain infrastructure. As DApp development and retail demand grows, the protocol’s ability to interact with its users in a meaningful but decentralized way will be a key driver of growth. Here are a few potential areas where on-chain messaging could be useful:
- Token claim notification.
- In-wallet communication and messaging between users.
- Notifications of important protocol updates.
- Tracking notifications for critical issues (e.g., risk metrics for DeFi applications, security breaches).
Some notable projects developing on-chain communication protocols include Dialect, Ethereum Push Notification Service (EPNS), and XMTP.
Blockchain Development
Security and Testing
Security and testing practice in crypto is still relatively nascent, but it is undeniably critical to the success of the entire ecosystem. Crypto applications are particularly sensitive to security risks because they often handle user assets directly; small mistakes in design or implementation can have severe economic consequences.
There are seven main security and testing approaches:
- Unit testing is a core part of most software test suites. Developers write tests to check the behavior of small, atomic parts of a program. Useful frameworks include Waffle and Truffle on Ethereum; on Solana, the Anchor testing framework is the standard.
- Integration testing exercises multiple software modules as a group. Libraries and high-level drivers often interact with each other as well as with lower-level modules (for example, a TypeScript library interacting with a set of low-level smart contracts), so testing the flow of data and information between these modules is critical.
- Auditing has become a core part of the blockchain security process. Protocols typically engage third-party auditors to review and verify every line of code before releasing a smart contract to the public. Trail of Bits, OpenZeppelin, and Quantstamp are some of the trusted names in blockchain auditing.
- Formal verification involves checking that a program or software component satisfies a set of properties. Typically, someone writes a formal specification detailing how the program should behave; a formal verification framework converts this specification into a set of constraints, which are then solved and checked. Certora and Runtime Verification are leading projects applying formal verification to smart contract security.
- Simulations — Quant trading firms have long used agent-based simulations to backtest algorithmic trading strategies. Given the high cost of conducting experiments on live blockchains, simulation provides a way to parameterize protocols and test hypotheses cheaply. Chaos Labs and Gauntlet are two high-quality platforms that use scenario-based simulations to secure blockchains and protocols.
- Bug bounties help address large-scale security challenges by leveraging the decentralized spirit of the crypto space. High bounties incentivize community members and hackers to report and resolve critical vulnerabilities, and bounty programs play a unique role in turning “gray hats” into “white hats.” For example, Wormhole’s bug bounty program on the Immunefi platform offers rewards of up to $10 million! We encourage anyone to get involved!
- Testnets provide an environment similar to mainnet, letting developers test and debug in an R&D setting. Many testnets use Proof-of-Authority or other lightweight consensus mechanisms with a small validator set for speed, and testnet tokens have no real value, so users acquire them through faucets. Many testnets mimic a mainnet L1 (e.g., Ethereum’s Rinkeby, Kovan, and Ropsten).
Each approach has its own advantages and disadvantages, certainly not mutually exclusive, and different testing styles are often used at different stages of project development:
- Phase 1: Write unit tests when building the contract.
- Phase 2: Once the higher level program abstraction is built, integration tests are very important for testing the interaction between modules.
- Phase 3: Code audits are conducted on testnet/mainnet releases or large feature releases.
- Phase 4: Formal verification is often paired with code audits and provides additional security guarantees. Once the program is specified, the rest of the process can be automated, making it easy to pair with continuous integration/continuous deployment tools.
- Phase 5: Launch the application on the test network to check throughput, traffic, and other scaling parameters.
- Phase 6: Launch a bug bounty program after deployment to mainnet, leveraging community resources to find and fix issues.
Developer Tools
The growth of any technology or ecosystem depends on the success of its developers, especially in the crypto space. We divide developer tools into four main categories:
- Out-of-the-box tools
- SDKs for developing new L1s help abstract away the process of creating and deploying consensus models. Pre-built modules allow flexibility and customization while optimizing for development speed and standardization. A good example is the Cosmos SDK, which enables rapid development of new Proof-of-Stake blockchains within the Cosmos ecosystem; Binance Chain and Terra are well-known Cosmos-based chains.
- Smart contract development – Many tools help developers build smart contracts quickly. For example, Truffle boxes contain simple, useful example Solidity contracts (voting, etc.), and the community can contribute additions to the repository.
- Frontend/backend tools – Many tools simplify application development by connecting the application to the chain (e.g., ethers.js, web3.js).
- Upgrading and interacting with contracts (e.g., the OpenZeppelin SDK) – There are also various ecosystem-specific tools (e.g., Anchor IDL for Solana smart contracts, ink! for Parity smart contracts) that handle writing RPC request handlers, emitting an IDL, and generating clients from the IDL.
- Languages and IDEs — The programming model of blockchain is often very different from that of traditional software systems. The programming languages used for blockchain development facilitate this model. For EVM compatible chains, Solidity and Vyper are heavily used. Other languages like Rust are heavily used in public chains like Solana and Terra.
Blockchain infrastructure can be an overloaded and confusing term, often used as shorthand for a range of products and services covering everything from smart contract auditing to cross-chain bridges. As a result, discussions of crypto infrastructure are either too broad and disorganized, or too specific and narrow for the average reader. We hope this article strikes the right balance for those just entering the crypto industry as well as those looking for a more in-depth overview.
Of course, the crypto industry is changing rapidly, and it is likely that the protocols cited in this article will no longer constitute a representative sample of the ecosystem in two or three months. Even so, we believe the main goal of this piece, breaking infrastructure down into more accessible and understandable parts, will remain relevant. As the blockchain infrastructure landscape evolves, we will also make sure to provide clear and consistent updates on our thinking.
Posted by: CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/jump-crypto-detailed-explanation-of-the-subdivision-track-and-layout-of-blockchain-infrastructure/