Progress of Consensus Algorithms

In the past two years, with the growth of the ecosystems of public chains such as Ethereum, many applications have emerged. Although DeFi and NFT applications are relatively “simple”, the overall volume of on-chain activity is large, which keeps users and developers confident in these platforms. It also invites comparison between the processing power of Ethereum and that of other public chains.

However, other public chains hold one important advantage over Ethereum: Ethereum’s gas fees are very high, while the gas fees of other public chains are extremely low.

The main reason lies in the consensus algorithm. Ethereum still verifies transactions on its 1.0 chain using the PoW algorithm, while most other public chains use PoS or improved variants of PoS and PoW.

In this article, we will analyze several consensus algorithms to show the differences between different algorithms.

A distributed system is composed of multiple nodes that exchange messages over the network and, following the protocol they run, reach consensus on a given message and then execute it consistently. Many types of errors can occur in this process.

The first type covers node crashes, network failures, packet loss, and so on. Nodes exhibiting these errors are not malicious; these are non-Byzantine faults.

The second type is nodes that act maliciously and do not obey the protocol rules. For example, validator nodes can delay or drop messages on the network, propose invalid blocks, or send different messages to different peers. In the worst case, malicious nodes may collude with one another. These are called Byzantine faults.

In the face of these two kinds of faults, the system must maintain two properties: safety and liveness.

Safety: even when the above two types of faults occur, the consensus system cannot produce incorrect results. In blockchain terms, this means there will be no double spending and no forks.

Liveness: the system can keep producing commits. In blockchain terms, this means consensus keeps progressing and does not get stuck. If a blockchain’s consensus stalls at a certain height, new transactions receive no response; that is, liveness is violated.


BFT (Byzantine Fault Tolerance) protocols guarantee the safety and liveness of a distributed system even when malicious nodes are present. Following Lamport’s paper, all BFT protocols share a basic assumption: when the total number of nodes is greater than 3f, with at most f of them malicious, the honest nodes can still reach a consistent and correct result.
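The n > 3f bound can be made concrete with a few lines of arithmetic. The sketch below (in Python, using the conventional 2f + 1 quorum size, which guarantees that any two quorums intersect in at least one honest node) computes the fault tolerance and quorum size for a given network size:

```python
# Sketch of the classic BFT bound n > 3f, assuming the conventional
# quorum size of 2f + 1 (any two quorums overlap in >= 1 honest node).

def max_faulty(n: int) -> int:
    """Largest f tolerated by n nodes under n > 3f: f = floor((n - 1) / 3)."""
    return (n - 1) // 3

def quorum_size(n: int) -> int:
    """Votes needed for a quorum certificate: 2f + 1."""
    return 2 * max_faulty(n) + 1

for n in (4, 7, 10, 100):
    print(f"n={n}: tolerates f={max_faulty(n)}, quorum={quorum_size(n)}")
```

For example, the smallest useful BFT network has n = 4 nodes: it tolerates f = 1 fault and needs 3 matching votes for a quorum certificate.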


Practical Byzantine Fault Tolerance (PBFT) was one of the first Byzantine fault-tolerant protocols deployed in the real world that can handle both fault types at the same time. Built on the partial-synchrony model, it solved the inefficiency of earlier BFT algorithms: communication complexity drops from exponential in the number of nodes to quadratic, making Byzantine fault tolerance feasible in practical systems.

The normal-case operation of PBFT is a 3-phase protocol:

pre-prepare: the primary node (Primary) broadcasts a pre-prepare message to each replica node (Replica).

prepare: in this phase each node tells the other nodes that it has seen the message. Once a node collects n − f prepare messages (we will call such a set a QC, or Quorum Certificate, below), it enters the prepared state.

commit: in this phase each node learns that the other nodes also know the message. Once a node collects n − f commit messages (a QC), it enters the committed state.
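The three phases above can be sketched as a small state machine. This is a toy model, not the full PBFT replica logic (it tracks a single slot, ignores views and digests, and counts the node’s own vote toward the n − f threshold), but it shows how collecting quorums drives the phase transitions:

```python
from dataclasses import dataclass, field

@dataclass
class Replica:
    """Toy PBFT replica state for a single consensus slot.
    Threshold is n - f (equivalently 2f + 1 when n = 3f + 1)."""
    n: int
    phase: str = "idle"
    prepares: set = field(default_factory=set)
    commits: set = field(default_factory=set)

    @property
    def threshold(self) -> int:
        f = (self.n - 1) // 3
        return self.n - f  # quorum certificate size

    def on_preprepare(self):
        self.phase = "pre-prepared"

    def on_prepare(self, sender: int):
        self.prepares.add(sender)
        if self.phase == "pre-prepared" and len(self.prepares) >= self.threshold:
            self.phase = "prepared"

    def on_commit(self, sender: int):
        self.commits.add(sender)
        if self.phase == "prepared" and len(self.commits) >= self.threshold:
            self.phase = "committed"

# With n = 4 (f = 1) the threshold is 3: after three prepare and three
# commit messages the replica reaches the committed state.
r = Replica(n=4)
r.on_preprepare()
for s in (0, 1, 2):
    r.on_prepare(s)
for s in (0, 1, 2):
    r.on_commit(s)
print(r.phase)  # committed
```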

View-change is the most critical design in PBFT. When the primary node fails (times out without responding), or the replica nodes collectively decide that the primary is faulty, a ViewChange event is triggered and the view-change phase begins.

Communication complexity has a serious impact on PBFT’s consensus efficiency and greatly restricts its scalability.

Reducing communication complexity and improving consensus efficiency is the main challenge BFT protocols face in blockchain scenarios. Optimizations fall into the following categories: aggregated signatures, communication-mechanism optimization, and view-change optimization.

Protocols such as PBFT and SBFT have an independent view-change procedure that is triggered when the primary fails. In Tendermint, HotStuff, and similar protocols, there is no explicit view-change procedure; view change is folded into the normal path, improving its efficiency and reducing its communication complexity.

Tendermint integrates round change (similar to view change) into the normal path, so a round change proceeds exactly like an ordinary block-commit round. Unlike PBFT, there is no separate view-change procedure, so communication complexity is reduced.

Following Tendermint, HotStuff also merges view switching into the normal path, so there is no longer a separate view-change procedure. It introduces two-phase voting to lock blocks, and has the leader aggregate votes using BLS signatures.

HotStuff replaces traditional BFT’s two rounds of all-to-all communication with a three-round chained BFT. There are no explicit prepare and commit phases: each block needs only one round of QC, the prepare phase of a new block doubles as the pre-commit phase of the previous block, and the pre-commit phase of a new block doubles as the commit phase of the block before that. Producing each block therefore requires only low communication complexity, while the chaining across rounds achieves the same effect as before.
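The chained pipelining above can be illustrated with a minimal sketch. It assumes the happy path of chained HotStuff (every block immediately earns a QC, no forks or timeouts): under the three-chain rule, the QC formed on block k simultaneously advances k through prepare, k − 1 through pre-commit, and k − 2 through commit.

```python
# Sketch of chained HotStuff's "one QC per block" pipelining, assuming a
# contiguous QC chain (happy path, no forks). A QC on the tip block acts
# as a different phase for each of the last three blocks.

def committed_height(qc_chain_tip: int) -> int:
    """With a contiguous QC chain up to `qc_chain_tip`, blocks up to
    tip - 2 have passed all three phases and are committed."""
    return qc_chain_tip - 2

phases = ["prepare", "pre-commit", "commit"]
tip = 5
for k in range(tip, tip - 3, -1):
    # The QC on the tip advances block k through phase number (tip - k).
    print(f"QC on block {tip} acts as {phases[tip - k]} for block {k}")
print("committed up to block", committed_height(tip))
```

So each new QC commits one more block, even though every individual block still passes through all three logical phases.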

Protocols such as PBFT and Tendermint offer instant finality; forks are essentially impossible. In PBFT, the next block can only be produced after the current block is confirmed. Tendermint additionally introduces the concept of block locking, which further guarantees instant finality: once a node casts a pre-commit vote for a block in some round, in subsequent rounds it may only pre-commit that same block, unless it receives an unlock certificate for another block from a new proposer.

This type of BFT consensus protocol is essentially synchronous, tightly coupling block production and confirmation: the next block can only be produced after the current one is confirmed, and each block must wait out the maximum possible network delay, so consensus efficiency is heavily constrained.


Based on a partially synchronous mesh communication model, CBFT proposes a parallel Byzantine fault-tolerant protocol with three-phase consensus. The mesh model is better suited to the weak network environment of the public internet.

CBFT’s normal path resembles HotStuff’s, with prepare, pre-commit, commit, and decide phases. But CBFT also makes key improvements: multiple blocks can be proposed consecutively within one view window, and producing the next block does not require the previous block to reach a QC. Each node can receive votes for the previous block while executing the transactions of the next block in parallel, voting on and confirming blocks in a pipelined fashion, which greatly increases block production speed.
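A toy latency model makes the benefit of this decoupling visible. The numbers below are illustrative assumptions, not measurements: it compares a coupled design (each proposal waits for the previous block’s QC, as in PBFT-style protocols) against a pipelined design where proposals stream continuously and only the final QC sits on the critical path.

```python
# Toy latency model: coupled production/confirmation vs CBFT-style
# pipelining. PROPOSE_MS and QC_MS are illustrative assumptions.

PROPOSE_MS, QC_MS = 50, 200

def sequential_time_ms(blocks: int) -> int:
    """Each block waits for its own QC before the next proposal."""
    return blocks * (PROPOSE_MS + QC_MS)

def pipelined_time_ms(blocks: int) -> int:
    """Proposals stream back-to-back; QCs arrive one round behind, so
    only the last QC adds to the critical path."""
    return blocks * PROPOSE_MS + QC_MS

print(sequential_time_ms(10), pipelined_time_ms(10))  # 2500 700
```

Under these assumed round-trip times, pipelining ten blocks takes 700 ms instead of 2500 ms; the gap widens as the QC round-trip dominates the proposal time.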

CBFT has an adaptive view-switching mechanism: within a view window, once a node has received enough blocks and affirmative votes (more than 2/3 of nodes, i.e., a QC), it automatically advances to the next window, with no view-change voting required. Otherwise, the node starts the viewchange process, applying the same two-phase locking voting rules as HotStuff during the view-change phase, and uses BLS aggregate signatures to complete the window switch with low communication complexity.

CBFT performs viewchange only outside the normal path, so its view-switching overhead is lower than HotStuff’s.

In CBFT, the validator set is updated every 430 blocks (called an Epoch), according to the following rules:

New validators may be unable to participate in consensus immediately because of network connectivity or block synchronization, so no more than 14 nodes are replaced each time. If there are fewer than 14 candidate validators, the number replaced equals the total number of candidates. New validators are selected at random from the candidates using a VRF.
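The rotation rule can be sketched as follows. This is a hypothetical simplification: the real protocol derives its randomness from a VRF, whereas here a seeded PRNG stands in for it, and the function names are invented for illustration.

```python
import random

EPOCH_BLOCKS = 430   # from the text: validator set updates every 430 blocks
MAX_REPLACED = 14    # at most 14 validators replaced per epoch

def rotate_validators(validators: list, candidates: list, seed: int) -> list:
    """Hypothetical sketch of CBFT's epoch rotation. A seeded PRNG
    stands in for the VRF used by the real protocol."""
    rng = random.Random(seed)
    k = min(MAX_REPLACED, len(candidates))    # fewer candidates -> replace fewer
    incoming = rng.sample(candidates, k)      # "VRF"-selected new validators
    outgoing = rng.sample(validators, k)      # validators rotated out
    kept = [v for v in validators if v not in outgoing]
    return kept + incoming

current = [f"v{i}" for i in range(25)]
pool = [f"c{i}" for i in range(8)]            # only 8 candidates, so k = 8
new_set = rotate_validators(current, pool, seed=42)
print(len(new_set))  # 25: set size is unchanged
```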

Conflux: GHAST

Conflux achieves the same level of decentralization and security as Bitcoin and Ethereum, while offering more than two orders of magnitude improvement in transaction throughput (TPS) and finality latency.

Its main advances are its consensus protocol, authenticated storage, and transaction relay protocol. In the Conflux ledger, blocks are organized as a tree-graph in which each block references several other blocks, one of which is its parent. Looking only at parent edges, the ledger is a tree (the parent tree); looking at all edges, it is a directed acyclic graph. This is why Conflux’s ledger structure is called a Tree-Graph.

Conflux’s consensus algorithm, Greedy-Heaviest-Adaptive-SubTree (GHAST), lets all nodes in the network agree on a pivot chain by applying the heaviest-subtree rule to the parent tree, and then agree on a total order of all blocks derived from that pivot chain. GHAST also allows Conflux nodes to detect certain attacks on liveness (for example, a balancing attack that tries to maintain two equally weighted subtrees) and to defeat them by adaptively adjusting block weights, preserving the ability to confirm transactions.
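The heaviest-subtree rule at the core of pivot-chain selection can be sketched in a few lines. This is a simplified GHOST-style model, not GHAST itself: every block carries weight 1, and the adaptive weight adjustment the text describes is omitted.

```python
from collections import defaultdict

# Sketch of pivot-chain selection by the heaviest-subtree rule on the
# parent tree (the GHOST-style core of GHAST; adaptive weights omitted).
# Subtree weight = the block itself plus all of its descendants.

def pivot_chain(parent: dict, genesis: str) -> list:
    children = defaultdict(list)
    for blk, par in parent.items():
        children[par].append(blk)

    def subtree_weight(blk: str) -> int:
        return 1 + sum(subtree_weight(c) for c in children[blk])

    chain, cur = [genesis], genesis
    while children[cur]:
        # Greedily descend into the heaviest child subtree.
        cur = max(children[cur], key=subtree_weight)
        chain.append(cur)
    return chain

# A small fork: B's subtree (B, D, E) outweighs C's (just C),
# so the pivot chain follows B even though C is also a child of G.
parents = {"B": "G", "C": "G", "D": "B", "E": "B"}
print(pivot_chain(parents, "G"))  # ['G', 'B', 'D']
```

Because the rule weighs entire subtrees rather than single branches, an attacker must outweigh all honest blocks under the pivot, not just the tip, to revert it.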

The Tree-Graph ledger and the GHAST consensus algorithm allow Conflux nodes to generate new blocks quickly without worrying that forks in the ledger will compromise the network’s security, so the system achieves both high throughput and low transaction confirmation latency.

Other ideas


Dfinity changes the consensus algorithm by replacing full participation, in which all consensus nodes take part in every round, with a committee of nodes selected via computed random numbers, which alone speeds up consensus verification. More importantly, the selected committee confirms transactions using a non-interactive BLS signature scheme: each node signs and returns its confirmation independently rather than in coordination with others, so nodes avoid the repeated interaction rounds of BFT-style consensus and achieve a similar “parallel” acceleration effect.


IOTA’s modification of the algorithm is more thorough. Instead of a blockchain, IOTA uses the Tangle data structure as its ledger. Its defining feature is that each transaction attaches to two previous transactions, completely removing the chain structure’s dependence of confirmation time on sequential blocks. The result is an unbounded web of mutually confirming transactions, which enables parallel processing.
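The attachment rule can be illustrated with a toy model. This sketch makes strong simplifying assumptions: each new transaction approves up to two earlier ones chosen uniformly at random, whereas IOTA’s real tip selection uses weighted random walks over the DAG.

```python
import random

# Toy sketch of the Tangle: each new transaction approves up to two
# earlier ones (here chosen uniformly at random among all prior
# transactions; real IOTA tip selection uses weighted random walks).

def attach(tangle: dict, tx: str, rng: random.Random) -> None:
    prior = list(tangle)
    tangle[tx] = rng.sample(prior, min(2, len(prior)))  # approve <= 2

rng = random.Random(0)
tangle = {"genesis": []}
for i in range(6):
    attach(tangle, f"tx{i}", rng)

# Tips = transactions not yet approved by anyone; new arrivals attach to
# independent parts of the DAG, which is what permits parallel confirmation.
approved = {t for refs in tangle.values() for t in refs}
tips = [t for t in tangle if t not in approved]
print(tips)
```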


Filecoin’s parallelism targets storage tasks: the storage part of Filecoin fully processes the data being stored, and this process is, by comparison, extremely long, so parallel speed-up matters greatly. It currently uses the updated NSE algorithm.

Breaking the NSE algorithm down, data is processed window by window (a window can be understood as a unit) and layer by layer; once processing completes, the data is stored and the subsequent PoSt proofs are performed. With NSE, there is little dependence between layers during processing, so the layers can be processed in parallel, which again amounts to a parallel speed-up adjustment.

Posted by: CoinYuppie