Why is blockchain so slow? Where is the slowness?
Editor’s note: Many people complain that Bitcoin transfers are slower than snail mail, but there are reasons for the slowness. Below, the chief architect of FISCO BCOS explains why blockchain transfers are so slow, where exactly the slowness lies, and whether they can be made faster.
Imagine counting money — say, counting 100 million in cash (exciting, isn’t it?). There are several ways to do it:
1. One person counts alone. It is slow, but with focus and full effort the job can be finished in a visible amount of time. This is called single-threaded intensive computing.
2. N people count together: the money is split evenly, everyone counts their share at the same time, and the subtotals are added up at the end. The time taken is roughly 1/N of the first case — far more efficient. This is called parallel computing, or MapReduce.
3. N people count together, but because they do not trust one another, they watch each other. First, one person is chosen by lot to count a stack, then sign and seal it; several others then recount the same stack. Only when everyone who counted has signed and sealed it is the stack considered correctly counted. Then lots are drawn again to pick someone for the next stack, and so on. While one person counts, the others just watch; and once a stack is counted, sealed, and signed, everyone else still has to count it again and sign to confirm. It is easy to see that this method must be the slowest. This is called blockchain.
But look at it from another angle. In method 1, the lone counter may miscount, fall ill, or go on vacation, leaving no one to do the work; worse, this person might swap in counterfeit bills or hide some money and report a wrong total.
In method 2, some fraction of the N people will miscount, and any one of them going on vacation or slacking off can prevent the final result from ever coming out.
Method 3 is very slow but very safe: everyone checks the whole process, so nothing is miscounted. If one person drops out, someone else can pick up a new stack and keep counting, so the work is never interrupted. Every stack that has been counted carries seals and signatures and cannot be tampered with, and if something goes wrong, the person responsible can be identified and held accountable. The safety of the funds is fully guaranteed unless all participants collude — and in this mode, the more people involved, the safer the funds.
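The three counting methods above map directly onto code. The sketch below is purely illustrative — the bill values, worker counts, and "seal" strings are all invented — but it contrasts single-threaded counting, a MapReduce-style split, and blockchain-style redundant recounting:

```python
from concurrent.futures import ThreadPoolExecutor

bills = [100] * 10_000    # a hypothetical stack of 10,000 bills

# Method 1: one person counts alone (single-threaded intensive computing).
def count_alone(stack):
    total = 0
    for bill in stack:
        total += bill
    return total

# Method 2: N people split the stack and count simultaneously (MapReduce).
def count_parallel(stack, n=4):
    chunk = len(stack) // n
    parts = [stack[i * chunk: (i + 1) * chunk if i < n - 1 else len(stack)]
             for i in range(n)]
    with ThreadPoolExecutor(max_workers=n) as pool:   # map step
        return sum(pool.map(count_alone, parts))      # reduce step

# Method 3: blockchain-style — one person counts, the others all
# recount the same stack and "sign" it before it is accepted.
def count_with_verification(stack, verifiers=3):
    claimed = count_alone(stack)            # the counter drawn by lot
    seals = []
    for v in range(verifiers):              # everyone repeats the same work
        assert count_alone(stack) == claimed, "a verifier disagrees"
        seals.append(f"seal-of-verifier-{v}")   # hypothetical signature
    return claimed, seals
```

Note that method 3 does strictly more total work than method 1 — the redundancy is the point, and also the cost.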
Therefore, the blockchain approach is committed to achieving transaction security and fairness in a distributed network that lacks mutual trust — a high degree of data consistency, tamper resistance, fraud resistance, and traceability. The price paid is performance.
The most famous network, Bitcoin, can process only 5~7 transactions per second on average, produces one block every 10 minutes, requires 6 blocks — about an hour — to reach transaction finality, and burns enormous computing power producing blocks (PoW mining). Ethereum, billed as the “world computer”, handles only double-digit transactions per second and produces a block roughly every ten seconds; it too currently uses compute-hungry PoW mining and plans to migrate gradually to a PoS consensus mechanism. Both networks can fall into congestion when enthusiasts trade explosively, and a flood of transactions may take a day or two — or longer — to be packaged and confirmed.
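A quick back-of-the-envelope check on the figures above. The block size and per-block transaction count used here are rough, commonly cited approximations, not protocol constants:

```python
# Back-of-the-envelope confirmation math for the figures in the text.

def finality_seconds(block_interval_s: int, confirmations: int) -> int:
    """Seconds until a transaction is commonly considered final."""
    return block_interval_s * confirmations

# Bitcoin: ~10-minute blocks, 6 confirmations for finality -> 1 hour.
btc_finality = finality_seconds(block_interval_s=600, confirmations=6)

# A ~1 MB Bitcoin block holding roughly 2,500 typical transactions,
# sealed every ~600 s, gives throughput on the order of the 5~7 TPS
# figure quoted above:
btc_tps = 2500 / 600    # ~4.2 transactions per second
```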
But in scenarios where financial security is life itself, some properties are “must-haves”, so even if it is slow, the blockchain is still worth choosing.
Why is blockchain slow
There is a well-known result in distributed systems called the CAP theorem. In 2000, Professor Eric Brewer conjectured that a distributed system cannot simultaneously satisfy consistency, availability, and partition tolerance — at most two of the three can hold.
A rough explanation of CAP:
Consistency: every data update is synchronized, and all nodes see the same data
Availability: the system responds promptly
Partition tolerance: the system keeps working despite network partitions and failures
Although the theorem has drawn some debate, from an engineering standpoint it behaves like the speed of light: the limit can be approached ever more closely but is hard to break through. Blockchain systems push consistency and reliability to the extreme, but “good response performance” has always been their weak point.
The “consortium chain” field we work in differs from public chains in access standards, system architecture, number of participating nodes, consensus mechanism, and more, and its performance is far higher than that of public chains. Even so, on today’s mainstream blockchain platforms, measured on ordinary PC-class server hardware, TPS is generally in the thousands and transaction latency generally between 1 and 10 seconds. (Heard of blockchains claiming hundreds of thousands or even tens of millions of TPS? Well, let’s look forward to that.)
The author worked for many years at a large Internet company. In the field of massive-scale services, the C10K problem (10,000 concurrent connections) has long had well-understood solutions. For a typical e-commerce or content-browsing service, an ordinary PC-class server delivering tens of thousands of TPS with average latency under 500 milliseconds is simply the norm — after all, a laggy Internet product loses users. Fast-growing Internet projects can face tidal waves of traffic with almost no upper limit through parallel scaling, elastic scaling, and scaling in multiple dimensions at once.
By contrast, blockchain is slower than Internet services and hard to scale. The root cause is its design philosophy of “trading computation for trust”.
Where is it slow?
Looking inside a “classical” blockchain system:
1. For security, tamper resistance, leak prevention, and traceability, cryptographic algorithms are applied to transaction data, adding CPU overhead: hashing, symmetric and asymmetric encryption, elliptic-curve or RSA algorithms, data signing and verification, CA certificate checks, and even homomorphic encryption and zero-knowledge proofs — the last two still dauntingly slow. In terms of data format, the blockchain’s own data structures carry signatures, hashes, and other verification data alongside each transaction, so packing, unpacking, transmitting, and verifying the data is relatively cumbersome.
Internet services also encrypt data and pack and unpack protocols, but there the rule is the simpler the better: everything is optimized to the extreme, and no unnecessary computational burden is ever added.
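To make the “cumbersome data format” point concrete, here is a toy sketch of a transaction wrapped in verification metadata. The field names and sizes are invented for illustration and do not match any real chain’s wire format:

```python
import hashlib
import json
import os

# A toy transaction, and the verification envelope a blockchain wraps it in.
payload = {"from": "alice", "to": "bob", "amount": 10}
raw = json.dumps(payload).encode()

envelope = {
    "payload": payload,
    "tx_hash": hashlib.sha256(raw).hexdigest(),  # 32-byte digest, hex-encoded
    "signature": os.urandom(64).hex(),           # stand-in for an ECDSA signature
    "pubkey": os.urandom(33).hex(),              # stand-in compressed public key
}
wrapped = json.dumps(envelope).encode()

# The envelope is several times larger than the payload it protects,
# and every node must hash (and, in reality, cryptographically verify)
# every transaction it receives.
overhead = len(wrapped) / len(raw)
```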
2. To guarantee transactional integrity, transactions execute serially — completely serially. Transactions are first sorted, then the smart contracts are executed on a single thread, avoiding the chaos and data conflicts that out-of-order execution would cause. Even though each server has a multi-core CPU, the operating system supports multi-threading and multi-processing, and the network contains many nodes and servers, all transactions still march in strict order through a single thread on a single core of each machine — while the other cores may sit completely idle.
Internet services, by contrast, use every core of every server they can get: fully asynchronous processing, multi-process and multi-threaded designs, coroutines, caching, and optimized I/O wait all squeeze out the hardware’s full computing power.
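The serial-execution rule can be sketched in a few lines: two transfers spending the same balance conflict, so every node applies the sorted transaction list one at a time on a single thread and reaches the same deterministic result. This is a minimal illustration, not any real chain’s execution engine:

```python
# Toy ledger state and two conflicting transfers from the same account.
balances = {"alice": 100, "bob": 0, "carol": 0}
txs = [
    ("alice", "bob", 70),
    ("alice", "carol", 70),   # conflicts with the first: only one can succeed
]

def apply_serial(state, ordered_txs):
    """Apply sorted transactions strictly one by one (single thread).
    Out-of-order or parallel application of these two transfers could
    leave different nodes with different states; serial order cannot."""
    receipts = []
    for sender, receiver, amount in ordered_txs:
        if state[sender] >= amount:
            state[sender] -= amount
            state[receiver] += amount
            receipts.append(True)
        else:
            receipts.append(False)        # deterministic, reproducible failure
    return receipts

receipts = apply_serial(balances, txs)    # every node computes the same result
```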
3. To keep the whole network available, blockchains adopt a P2P architecture with a Gossip-like transmission mode. All block and transaction data is broadcast to the network indiscriminately, and receiving nodes relay it onward, so the data reaches as many participants as possible — even those in different regions or subnets. The price is high transmission redundancy, which consumes extra bandwidth, and unpredictable propagation times: delivery may be very fast, or, after many relays, very slow.
Internet services, by contrast, keep network transmission as lean as possible (barring errors and retransmissions), carrying massive data over limited bandwidth and striving for optimal point-to-point transmission paths.
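A toy gossip simulation illustrates the redundancy described above: every node that receives a new block relays it to a few random peers, so the total number of transmissions is a multiple of the number of nodes reached. The topology and fanout here are invented purely for illustration:

```python
import random

def gossip(n_nodes=50, fanout=4, seed=7):
    """Flood a block through a random network, Gossip-style.
    Every node that sees the block relays it once to `fanout` random
    peers; duplicates are received, counted as bandwidth, and dropped."""
    random.seed(seed)
    seen = {0}                 # node 0 produced the block
    frontier = [0]
    transmissions = 0
    while frontier:
        nxt = []
        for node in frontier:
            for peer in random.sample(range(n_nodes), fanout):
                transmissions += 1          # bandwidth is spent either way
                if peer not in seen:
                    seen.add(peer)
                    nxt.append(peer)        # newly reached peers relay further
        frontier = nxt
    return len(seen), transmissions

reached, sent = gossip()
# Every reached node relays exactly once, so sent == fanout * reached:
# several times more messages than nodes — the redundancy in the text.
```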
4. To support smart contracts, Ethereum-like blockchains need a sandbox — a secure execution environment that shields the contract from inconsistencies — so the contract engine is either an interpreted VM such as the EVM or a Docker-encapsulated compute unit. Neither the startup speed of the contract engine nor its instruction execution speed is anywhere near top class, and its memory consumption is far from optimal.
Massive Internet services, implemented directly in conventional languages such as C++, Java, Go, or Rust, usually face no such restrictions.
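The per-instruction dispatch overhead of an interpreted contract VM is visible even in a toy stack machine like the one below. The opcodes are invented for illustration; real EVM bytecode is far richer, but the shape — fetch an opcode, branch on it, touch a stack — is the same, and it is work that natively compiled code simply does not do:

```python
def run(program, stack=None):
    """Interpret a list of (opcode, argument) tuples on a value stack.
    Every instruction pays a dispatch cost (the if/elif chain) that a
    natively compiled program would not."""
    stack = stack if stack is not None else []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"bad opcode {op!r}")
    return stack

# (3 + 4) * 2, expressed as bytecode-like tuples:
result = run([("PUSH", 3), ("PUSH", 4), ("ADD", None),
              ("PUSH", 2), ("MUL", None)])
# result == [14]
```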
5. To make data easy to verify and tamper-evident, beyond the extra data in the block structure mentioned in point 1, transaction inputs and outputs are organized into complex tree structures such as Merkle trees and Patricia tries, whose layer-by-layer hashing produces data proofs used for rapid verification in later processing. The details of these trees are not expanded here; their mechanics are easy to find online.
Generating and maintaining such trees is extremely tedious, consuming both CPU cycles and storage. Once the trees are in place, the effective data payload (the transaction data sent by clients versus what is finally stored) can drop sharply to a few percent; in extreme cases, accepting 10 MB of transaction data may cost hundreds of megabytes of maintenance overhead on the blockchain’s disk. As stored data grows, I/O multiplies, and the demands on I/O performance rise accordingly.
Because Internet services rarely need distributed mutual verification between mutually distrusting parties, such proof trees are seldom used there; at most, an MD5 or other hash serves as a protocol checksum.
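A minimal Merkle root computation shows the layer-by-layer hashing described in point 5. This is a simplified sketch — real chains (and Patricia tries) add much more machinery — but it demonstrates both the extra work and the payoff, that changing any transaction changes the root:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash each leaf, then hash adjacent pairs layer by layer until a
    single 32-byte root remains. An odd node is paired with itself."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])          # duplicate the odd node out
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

txs = [b"tx-1", b"tx-2", b"tx-3", b"tx-4"]
root = merkle_root(txs)

# Tampering with any single transaction changes the root:
assert merkle_root([b"tx-1x", b"tx-2", b"tx-3", b"tx-4"]) != root
```

Note the cost: n leaves require roughly 2n hash computations per block, on top of storing the intermediate nodes — the CPU and storage overhead the text describes.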
6. To achieve network-wide consistency and credibility, all blocks and transactions are driven by the consensus framework and broadcast across the network, where every node runs multiple rounds of complex verification and voting. Only data approved by a majority of nodes is finally confirmed.
Adding nodes to the network does not increase system capacity or processing speed — which completely overturns the conventional Internet mindset of “throwing hardware at performance problems”. The root cause is that every blockchain node repeats the same computation rather than reusing other nodes’ results, and since nodes’ computing capabilities are uneven, new nodes may even slow final confirmation down.
Adding nodes to a blockchain system only increases fault tolerance and the credibility of the network; it does not improve performance, so the possibility of parallel scaling within a single chain is essentially absent.
Most Internet services are stateless: data can be cached and reused, the path from request to response is relatively simple, parallel scaling is easy, and more resources can be dispatched quickly to serve traffic, with virtually unlimited elasticity.
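The “approved by most nodes” rule from point 6 can be sketched with the quorum arithmetic used by BFT-style consensus, where n = 3f + 1 nodes tolerate f faulty ones. Note how adding nodes raises the required vote count rather than the throughput — this is a simplified illustration of the general principle, not FISCO BCOS’s actual implementation:

```python
def quorum(n_nodes: int) -> int:
    """Votes needed to commit a block under BFT-style consensus:
    with n = 3f + 1 nodes tolerating f faulty ones, at least 2f + 1
    matching votes are required."""
    f = (n_nodes - 1) // 3          # maximum faulty nodes tolerated
    return 2 * f + 1

def committed(votes_for: int, n_nodes: int) -> bool:
    return votes_for >= quorum(n_nodes)

# 4 nodes tolerate 1 fault and need 3 votes; 7 nodes tolerate 2 and
# need 5. Growing the network only grows the voting work per block.
```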
7. Because of the block data structure and the consensus mechanism, transactions arriving at the blockchain are first sorted and then packed into a block, and consensus confirms one small batch at a time — not each transaction immediately on receipt. For example, if each block holds 1,000 transactions and consensus runs every 3 seconds, a transaction may wait 1~3 seconds to be confirmed.
Worse, under queue congestion a transaction may sit unpacked in the queue for a long time, stretching the confirmation delay further. This latency is generally far above the 500 ms response standard of Internet services, so blockchain is actually ill-suited for direct use in real-time scenarios that demand fast responses. When the industry speaks of “improving transaction efficiency”, it usually includes final settlement time: for example, shortening the day or two of T+1 reconciliation and clearing delays down to tens of seconds or minutes — a “quasi-real-time” experience.
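The batching arithmetic from point 7 can be worked through directly. The 3-second interval and 1,000-transaction capacity are the example figures from the text; the model below ignores consensus and propagation time, so it is purely illustrative:

```python
BLOCK_INTERVAL_S = 3.0      # the example consensus interval from the text
BLOCK_CAPACITY = 1000       # the example per-block transaction count

def confirmation_delay(arrival_offset_s: float) -> float:
    """Time until the next block seals, for a transaction arriving
    `arrival_offset_s` seconds into the current block window."""
    return BLOCK_INTERVAL_S - (arrival_offset_s % BLOCK_INTERVAL_S)

def delay_with_backlog(queued_txs: int) -> float:
    """Rough wait when `queued_txs` transactions are already queued
    ahead: whole blocks must clear before ours is packed."""
    blocks_ahead = queued_txs // BLOCK_CAPACITY
    return (blocks_ahead + 1) * BLOCK_INTERVAL_S

# Arriving right after a block seals costs the full 3 s; arriving late
# in the window costs less — hence the 1~3 s range in the text. A
# 2,500-transaction backlog pushes the wait to ~9 s.
```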
To sum up, a blockchain system is born carrying several mountains on its back: heavy per-machine computation and storage overhead, the original sin of serial execution, a complex and highly redundant network structure, and the long latency imposed by the rhythm of block packaging and consensus. And in terms of scalability, it is hard to add hardware for parallel expansion, so both scale up and scale out hit obvious bottlenecks.
Scale out (horizontal scaling): adding independent new machines to the existing system, using more machines to increase service capacity.
Scale up (vertical scaling): adding CPU and memory to existing machines, increasing the processing power inside each machine.
Faced with blockchain’s speed dilemma, the developers of FISCO BCOS, in the spirit of the “foolish old man who moved mountains”, set about optimizing relentlessly. After a period of hard work, mountains have been moved, seas crossed, and one high-speed channel after another built, giving the blockchain a path into the era of extreme speed (see the next article for details). This is what our series of articles will analyze in depth.
Posted by: CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/the-speed-performance-dilemma-of-blockchain-slow-deserves-its-expensive-in-trust/