The Future of Disruptive Computing

Within the next 100 years, all-optical personal computers may come into widespread use, while hybrid classical-quantum computers find broad application in businesses, research centers, and other settings.


The speed at which a computer processes data depends mainly on two factors: the speed of data transfer within the system and the speed of data processing. Modern computers based on the von Neumann architecture have hit bottlenecks in both, and in 2017 the journal Nature declared that "Moore's Law is no longer applicable." In today's era of big data, artificial intelligence, and especially artificial neural network research, ever higher demands are placed on computer performance. Von Neumann electronic computers seem unable to meet the growing need for higher speed, lower power consumption, and smaller size. Researchers and many high-tech companies have therefore begun to explore new kinds of computing, including heterogeneous computing, the More-Moore route, the More-than-Moore route, photonic computing, and quantum computing. This paper surveys the current state of these cutting-edge new computers and offers an outlook on the future form of the computer.

A variety of new computing routes are racing forward. Multiple development routes built on the modern electronic computer are blossoming, competing with one another, and further routes may yet emerge. Until photonic computing and quantum computing enter widespread commercial use, these new routes will fill the gap between computing demand and computing power.

The outline of radically new computers is beginning to take shape. Both photonic and quantum computers have now produced semi-finished products or reached the early stages of commercialization. Because photons hold many advantages over electrons, optoelectronic hybrid AI chips currently attract the largest photonic-computing R&D effort. Quantum-computer research has likewise produced prototypes, and a growing number of companies offer quantum cloud-computing services; for now, however, these quantum products serve mainly scientific research. In the future, classical-quantum computers are expected to move gradually out of the laboratory.

Light will play an irreplaceable role in the future. Light is gradually taking on a more important role in communications and computing. Starting with fiber-optic networks, the trend of "light in, copper out" in this field is clear, and information is increasingly propagated at the speed of light inside computers. Beyond photonic computing and optical interconnects, the photon, as a quantum particle, can also serve as the basis of optical quantum computing and may earn a place in quantum computing in the future.

I. Technical background: the need to study new types of computers

1.1 The bottleneck of von Neumann structure
Modern computers are based on the von Neumann architecture, in which the processing unit (the central processing unit, CPU) is separated from the storage unit (random-access memory, RAM), while instructions and data reside together in the storage unit.


(Source: Intel, CIMB Research, compiled by Benwing Capital) Figure 1: The bottleneck of the von Neumann architecture – the bandwidth wall

Computer hardware has evolved to the point where the CPU's computing speed far exceeds memory access speed, typically by a factor of more than 200, so the CPU must wait for data between instructions. Modern computers add multi-level caches close to the CPU to prefetch batches of data; pipelining and branch prediction reduce the CPU's waiting time; and some machines add multi-core and multi-CPU structures in the hope of improving performance. Even so, the bandwidth of data transfer between CPU and memory has remained the bottleneck limiting computer performance. This is known as the "bandwidth wall" between processor and memory.
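The bandwidth wall can be illustrated with a back-of-envelope roofline estimate. The peak throughput and memory bandwidth figures below are assumptions chosen for illustration, not measurements of any real CPU:

```python
# Roofline sketch of the bandwidth wall (all figures are assumptions).
PEAK_FLOPS = 200e9   # assumed CPU peak throughput: 200 GFLOP/s
MEM_BW = 25e9        # assumed DRAM bandwidth: 25 GB/s

def attainable_flops(intensity):
    """Attainable throughput = min(compute roof, bandwidth * intensity).

    `intensity` is arithmetic intensity in FLOPs per byte moved.
    """
    return min(PEAK_FLOPS, MEM_BW * intensity)

# A float32 dot product does 2 FLOPs (multiply + add) per 8 bytes loaded:
dot = attainable_flops(2 / 8)
print(dot / 1e9)  # 6.25 GFLOP/s: memory-bound code reaches ~3% of peak
```

Under these assumed numbers, any kernel below 8 FLOPs per byte is limited by memory bandwidth rather than by the processor, which is the bandwidth wall in miniature.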

1.2 The End of Moore’s Law


(Source: Ray Kurzweil, Steve Jurvetson, compiled by Benwing Capital) Figure 2: Moore’s Law Evolution

Gordon Moore, co-founder of Intel, predicted in 1965 that, at constant price, the number of components on an integrated circuit would double roughly every 18-24 months, with a corresponding doubling of performance. With the development of chip technology, a single chip can now integrate tens of billions of transistors; in the race between Samsung and TSMC, TSMC took the lead in announcing plans to begin production of 3 nm chips in 2021. Higher precision, however, is pushing against not only the limits of technology but also those of theory and practicality.

1.2.1 Technology Limits – Photolithography

The precision of photolithography determines the integration density of a chip. Photolithography works like a slide projector: light is projected through a photomask bearing the circuit pattern onto a wafer coated with photoresist. To keep pace with Moore's Law, lithography must reduce the exposure critical dimension (CD) by 30%-50% every two years, and according to the Rayleigh criterion, higher precision means a shorter light-source wavelength, a higher numerical aperture, and a smaller process factor. Shortening the source wavelength, the most direct means of improving precision, has become the main focus of competition among manufacturers.
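The Rayleigh relation above, CD = k1 · λ / NA, can be checked numerically. The k1 and NA values below are representative assumptions, not the specification of any particular machine:

```python
def critical_dimension(k1, wavelength_nm, na):
    """Rayleigh criterion: CD = k1 * wavelength / NA (result in nm)."""
    return k1 * wavelength_nm / na

# Assumed, representative parameters for the two lithography generations:
duv = critical_dimension(0.30, 193.0, 1.35)  # immersion ArF (DUV)
euv = critical_dimension(0.40, 13.5, 0.33)   # extreme ultraviolet (EUV)
print(round(duv, 1), round(euv, 1))  # ~42.9 nm vs ~16.4 nm
```

Even with a lower numerical aperture and a less aggressive k1, the shorter EUV wavelength alone drives the printable feature size far below what 193 nm light can reach.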

Currently, the highest-precision lithography machines use the EUV (extreme ultraviolet) process, shortening the source wavelength from 193 nm to 13.5 nm. Yet even if lithography precision achieves further breakthroughs one day, the physical properties of the electron will once again limit the further integration of chips.

1.2.2 Electron Limit – Electron Tunneling

A transistor is a semiconductor device that, as part of an integrated circuit, forms the basic building block of all modern electronics. As transistors keep shrinking, the channel between source and drain keeps shortening. Below a certain size, electrons exhibit the quantum tunneling effect and cross the channel freely. A transistor that has lost its switching function can no longer be combined into a logic circuit, and the modern electronic computer, which relies on 0s and 1s to compute, would lose its computing ability entirely.
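The scale of the tunneling problem can be sketched with the standard WKB estimate T ≈ exp(−2κd) for a rectangular barrier; the 1 eV barrier height below is an illustrative assumption, not a device parameter:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # one electron volt, J

def tunneling_probability(barrier_nm, barrier_ev=1.0):
    """WKB estimate T ~ exp(-2*kappa*d) for a rectangular barrier."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * barrier_nm * 1e-9)

t_5nm = tunneling_probability(5.0)  # negligible leakage
t_1nm = tunneling_probability(1.0)  # leakage grows enormously
```

Shrinking the barrier from 5 nm to 1 nm raises the tunneling probability by more than fifteen orders of magnitude under these assumptions, which is why the switch eventually stops switching.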

1.2.3 Practical Limits – Power Consumption Wall

Metal interconnects on an integrated circuit are resistive devices, so signals decay continuously during transmission. As the size and spacing of copper interconnects between transistors shrink and microprocessor frequencies rise, the transmitter must consume more energy to ensure the signal reaches the receiver over a given distance; this is the other wall between processor and memory, the "power consumption wall." Moving data from memory to the processor consumes far more energy than shuffling data within memory or computing on it in the processor, a gap of more than 100 times. Intel research shows that at the 7 nm node, data transfer and memory access account for 63.7% of total power consumption.
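The "more than 100 times" gap matches widely quoted per-operation energy figures (after Horowitz's ISSCC 2014 survey, at a 45 nm node); treat the numbers below as order-of-magnitude values rather than exact costs:

```python
# Representative per-operation energies at 45 nm (after Horowitz, ISSCC 2014).
FP32_MULT_PJ = 3.7     # one 32-bit floating-point multiply
DRAM_READ_PJ = 640.0   # fetching one 32-bit word from off-chip DRAM

ratio = DRAM_READ_PJ / FP32_MULT_PJ
print(f"moving one word costs ~{ratio:.0f}x the arithmetic performed on it")
```

On these figures the energy budget of a data-intensive workload is dominated by movement, not arithmetic, which is exactly the power consumption wall.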


(Source: Intel, CIMB Research, compiled by Benwing Capital) Figure 3: The "power wall" between memory and processor

II. Industry trends: the contradiction between demand and supply

2.1 Demand side
2.1.1 Exponential growth of data volume in the data era

In the current era of big data, the amount of data generated on the Internet each day grows at an exponential rate. The white paper "Data Age 2025," published by IDC (International Data Corporation) in 2017, predicts that global data volume will reach 175 ZB (one zettabyte is equivalent to one trillion GB) by 2025. To put that in perspective, downloading 175 ZB at 80 Mb/s (the global average broadband download speed) would take 550 million years; if everyone in the world downloaded together, it would take about 25 days. Faced with the vast amount of data generated every day, Internet companies must buy servers in bulk. Tencent revealed at its Techo developer conference at the end of 2019 that it operates more than 1 million servers; Tencent Cloud's elastic computing resource pool has a scale of 200,000 machines, and its big-data platform runs 15 million analysis tasks and 30 trillion real-time computations per day, with 35 trillion data access entries per day.
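The download figures quoted from the IDC white paper can be reproduced directly; the 8-billion world population used here is our assumption:

```python
ZB = 1e21                          # bytes in one zettabyte
GLOBAL_DATA_BYTES = 175 * ZB       # IDC forecast for 2025
SPEED_BPS = 80e6                   # 80 Mb/s average download speed (from the text)
POPULATION = 8e9                   # assumed world population

seconds = GLOBAL_DATA_BYTES * 8 / SPEED_BPS
years = seconds / (365.25 * 24 * 3600)
days_if_shared = years * 365.25 / POPULATION
print(round(years / 1e6), round(days_if_shared))  # ~555 million years; ~25 days
```

The arithmetic lands close to the white paper's "550 million years" and "25 days," confirming the scale of the figures rather than adding anything new.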


(Source: IDC "Data Age 2025", compiled by Benwing Capital) Figure 4: Size of the global datasphere

2.1.2 Deep learning puts higher demands on computers

Deep learning systems based on artificial neural network algorithms are currently a hot topic for companies and academic centers alike. Unlike the logic operations generally run on general-purpose chips, deep learning systems spend most of their time on low-precision matrix multiplication, and modern electronic computers, which compute serially, are relatively inefficient at it. Current architectures for such systems generally fall into three models: CPU + GPU (graphics processing unit), CPU + FPGA (field-programmable gate array), and CPU + ASIC (application-specific integrated circuit). According to OpenAI, a Silicon Valley AI research organization, the floating-point computation used by deep neural networks grew at an exponential rate over the 8 years from 2012 to 2020, doubling every 3.4 months on average, far outpacing Moore's Law for integrated circuits and bringing alarming energy-consumption problems that limit AI's development. As a result, companies and research centers are now designing hardware specifically for AI deep learning, with participants ranging from abroad to China, and from industry giants to startups. They are involved not only in every stage of chip production but also in a wide range of application scenarios, and some are even attempting to break out of the modern electronic computer framework.
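The two growth rates quoted above compound very differently; a two-line check (treating both doubling periods as exact) makes the gap concrete:

```python
# Compound growth over the 8 years (96 months) from 2012 to 2020.
months = 8 * 12
ai_growth = 2 ** (months / 3.4)    # doubling every 3.4 months (OpenAI figure)
moore_growth = 2 ** (months / 24)  # doubling every ~2 years (Moore's Law)
print(f"{ai_growth:.1e}x vs {moore_growth:.0f}x")  # ~3e8x vs 16x
```

A hardware curve that grows 16x while demand grows by a factor of hundreds of millions is the quantitative core of the "contradiction between demand and supply" this section describes.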


(Source: Semiwiki, organized by Benwing Capital) Figure 5: Global AI hardware industry

2.2 Supply side
2.2.1 Optical in and copper out

In 1966, Dr. Charles Kao published a paper analyzing and proving the feasibility of optical fiber as a transmission medium. Having essentially achieved full coverage in long-distance transmission, optical fiber is now extending toward servers, desktops, and even board-level and chip-level optical interconnects. Some personal computers, high-performance servers, cell phones, and other products have already begun to use optical interfaces. Mi Lei, founding partner of CSTC, has proposed a "Mi 70 Law": today, 70% of the cost in the communications field lies in optical devices, and in the future optics will account for 70% of the cost of all technology products. "Light in, copper out" is, in short, the general trend.


(Source: Soochow Securities, McMasters Consulting, compiled by Benwing Capital) Figure 6: Development process and forecast for light in communications and technology

III. New computers: the transition to the computers of the future

3.1 Breakthrough of new computers
In the face of the many bottlenecks to the continued increase in computing power of modern computers mentioned above, a variety of new types of computers are actively being developed.

More-Moore: FinFET, GAA, FD-SOI, EUV, and other technologies seek to keep shrinking device feature sizes, working toward 5 nm, 3 nm, or even 1 nm chips and continuing Moore's Law.

More-than-Moore: 2.5D, 3D, and other advanced packaging solutions aim for higher-density, higher-frequency signal interconnects that reduce latency and raise data transfer rates, breaking through the bandwidth wall and the power consumption wall.

Integrated storage and computing: Technologies such as computational storage and in-memory computing seek a more cost-effective, "seamless" interface between storage and computing, bypassing the inherent limitations of the von Neumann architecture.

Beyond-CMOS: Photonic computing and quantum computing gradually replace electronics in computers, breaking the limits of traditional electronic components and providing new impetus for breakthroughs in computing performance.

3.2 More-than-Moore
Since the birth of the world's first CPU, chip design and fabrication have been carried out in two dimensions, with research focused on increasing the number of components per unit area. In recent years, chip manufacturers have also been developing technologies for stacking single chips vertically in space.

2.5D stacking technology connects multiple chips through interconnects on a silicon interposer. Because interconnect density on the interposer can be much higher than on a traditional circuit board, high-performance interconnects can be achieved; essentially, however, 2.5D stacking remains a two-dimensional, planar arrangement.

At the 24th Annual Technology Symposium in Santa Clara, California, in April 2018, TSMC first announced its System-on-Integrated-Chips (SoIC) multi-chip stacking technology, which uses through-silicon via (TSV) technology to integrate many adjacent chips of different kinds. Current 3D chip technology falls mainly into four types: 1) die-stacking 3D technology, still widely used in system-in-package (SiP) and common in cell phones; 2) active-TSV 3D technology, in which finished chips are stacked to form a 3D chip; 3) passive-TSV 3D technology, which places an intermediate silicon substrate between the SiP substrate and the bare die, with through-silicon vias in the interposer connecting the metal layers of the upper and lower chips; 4) monolithic 3D technology, used mainly in 3D NAND flash, which can now reach 64 layers or more, thanks chiefly to Toshiba's Bit Cost Scalable (BiCS) process and Samsung's Terabit Cell Array Transistor (TCAT) process. 3D chips improve storage density and expand capacity, and by widening the parallel interface or using serial transmission to raise storage bandwidth, they can ease both the bandwidth wall and the power consumption wall.

AMD's plan is to insert a thermoelectric cooler (TEC) between the memory or logic dies of a 3D stack. With funding from the U.S. Defense Advanced Research Projects Agency, IBM in 2018 researched embedded cooling methods that pump a heat-extracting dielectric fluid through tiny gaps.

3.3 Integrated storage and computing
Integrated storage and computing means moving operations from the central processor into memory, reducing data transfer and data access energy during computation. Current techniques fall into two categories: 1) near-memory computing, where a computing chip or logic unit is embedded in the memory, bringing compute and logic closer to storage; 2) in-memory computing, where the memory itself gains computational function by embedding algorithmic weights in the cells, achieving storage-compute integration in the true sense. Since the distance from memory to logic and compute units is greatly reduced, more and more people in industry and academia believe that integrated storage and computing can ultimately solve the "bandwidth wall" and "power wall" problems of the von Neumann architecture.


(Source: public information, organized by Benwing Capital) Table 1: The main technology classification of storage and computing integration

At present there are two types of storage-compute chips: one based on volatile, mature SRAM or DRAM, the other on non-volatile new memory devices or materials. Among them, non-volatile storage-compute chips offer high computing power, low power consumption, and low cost, and have great prospects in the artificial intelligence Internet of Things (AIoT). A NOR-flash-based storage-compute chip can perform full-precision matrix convolution operations (multiply-accumulate) directly in the memory cell by exploiting the analog characteristics of NOR flash. A flash cell can store a neural network weight and also perform the multiply-accumulate associated with that weight, combining computation and storage in a single cell: 1 million flash cells can store 1 million weights and perform 1 million multiply-accumulate operations in parallel. Compared with a deep learning chip of traditional von Neumann architecture, this approach is highly efficient and low-cost, because DRAM, SRAM, and on-chip parallel computing units are eliminated, simplifying the system design.
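Electrically, the flash-cell multiply-accumulate described above is Ohm's law plus Kirchhoff's current law: weights live as cell conductances G, inputs arrive as word-line voltages V, and each bitline sums the products as a current. A minimal numerical sketch (illustrative values, not device physics):

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))  # "conductances": 4 inputs x 3 bitlines
V = rng.uniform(0.0, 1.0, size=4)       # input voltages on the word lines

# Each bitline current I_j = sum_i V[i] * G[i, j]; the array computes this
# in a single analog step, with no separate fetch of the weights.
bitline_currents = V @ G
```

One array read thus performs an entire matrix-vector product; the weights never move, which is precisely the data traffic the von Neumann design pays for.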

The market for in-memory computing chips is vast: Gartner forecasts that the global in-memory computing market will grow at a compound annual rate of more than 20%, reaching $13 billion by the end of 2020.

3.3.1 Challenges of integrated storage and computing

Integrated storage and computing faces two major challenges. First, its application scenarios are constrained: because memory devices are costly, it suits only scenarios with high storage demand. Second, the industrialization of storage-compute chips has only just begun and faces insufficient upstream support and mismatched downstream applications; widespread adoption will also require the development of supporting services, tools, new applications, and new scenarios.

3.3.2 The development direction and application of integrated storage and computing

In the IoT era, ever more terminals and edge devices need to process data, and storage-compute chips have great development potential in this field. These devices generate enormous amounts of data; sending it all to the cloud would strain both the network and the cloud, yet only a portion of it is truly meaningful. If useless data could be filtered out at endpoints with limited processing power, product efficiency would improve substantially. For portable wearables with long standby times, battery life and privacy protection are two key user-experience issues: a storage-compute chip can significantly cut energy consumption and extend battery life, and if future chips also gain strong processing power, they can avoid transmitting data to the cloud, protecting user privacy.


(Source: Public information, organized by Benwing Capital) Table 2: Major players in integrated storage and computing

3.4 Beyond-CMOS
Beyond-CMOS tries to break free from the constraints of electronic devices and can be regarded as a transitional stage toward the computer of the future. If the computers of the future are photonic, quantum, and biological computers, then computers combining optoelectronics are the first step away from the purely electronic digital computer, and the classical-quantum computer may become a principal productivity tool in the not-too-distant future.

IV. Photonic computing

Optical communication, optical interconnects, optical computing: applications of light at every level of computing are being hotly discussed in academia and industry. From Dr. Kao's 1966 paper analyzing and proving the feasibility of optical fiber as a transmission medium, to the recent exploration of all-optical networks; from Bell Labs' 1990 demonstration of the world's first digital optical computer, to the launch of prototype photonic chip boards: thanks to the many advantages of photons over electrons, light is penetrating the computing field from the microscopic to the macroscopic level.


(Source: Public information, collated by Benwing Capital) Table 3: Photons vs. electrons


(Source: Public information, organized by Benwing Capital) Figure 7: Classification of photonic computing

4.1 Optical analog computers
We usually understand computers as digital computers, those that use logic gates and 0/1 information to perform calculations. In fact, besides digital computers, there is a class of computers that do not rely on logic gates: analog computers. In optical analog computing, operations can be performed simultaneously with the propagation of the signal, and optical signals can be transmitted without interfering with one another.

Research on optics in analog computing began as early as the 1970s. Initial work focused on coherent image processing, showing that a variety of image processing functions (enhancement, deblurring, phase subtraction, recognition, synthetic-aperture radar data processing, Fourier transforms, convolution, correlation, and so on) could be implemented by various filtering processes. Such computations take considerable time on an ordinary digital computer, whereas a Fourier transform performed through a lens and filter takes essentially no time at all. But the fatal weakness of coherent processing is coherent noise, which is difficult to eliminate and severely limits practical application.
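The "Fourier transform in no time" remark reflects the standard identity behind 4f optical filtering: one lens Fourier-transforms the field, a mask multiplies it pointwise, and a second lens transforms back, which amounts to circular convolution. Digitally, the same identity reads conv = IFFT(FFT(x) · FFT(h)):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])    # "image" signal
h = np.array([0.25, 0.5, 0.25, 0.0])  # filter kernel

# Fourier-plane route (what the optics does in a single pass):
via_fft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

# Direct circular convolution, for comparison:
n = len(x)
direct = np.array([sum(x[m] * h[(k - m) % n] for m in range(n))
                   for k in range(n)])
assert np.allclose(via_fft, direct)
```

The optics evaluates the left-hand route at the speed of light propagation; the digital comparison merely verifies that the two routes agree.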

Later research in analog optical computing focused on incoherent processing, which uses spatial light modulators and other devices to implement operations such as directional filtering and image coding. Incoherent processing reduces the coherence of the light and suppresses coherent noise; on the other hand, it cannot directly implement operations on negative or complex numbers. Hybrid optoelectronic processing systems can fill this gap and open up many practical applications: they exploit the parallelism of light to a degree, increase system flexibility, and improve processing accuracy, but the large number of optoelectronic conversions limits computing speed once again.

4.2 Optical Digital Computers
Optical digital computers were first built in the last century. Optical logic gates can be realized by combining devices such as lenses, mirrors, prisms, filters, and optical switches. In 1990, Bell Labs was the first to combine laser diode arrays as light sources, self-electro-optic effect devices as logic gate arrays, and free-space beams for interconnection into a digital optical computer with an optically interconnected pipeline structure. In the following decades, research institutions around the world developed a variety of optical devices, but all-optical digital computers have never been commercialized, mainly because they are difficult to integrate and the whole machine is too large.

Research on optical digital computing did not stop there: the idea of combining optoelectronic devices arose in academia. Some scientists proposed gradually replacing peripheral electronic devices with photonic devices while retaining electronic chips: from optical communication between computers, to rack-to-rack, board-to-board, chip-to-chip, and finally on-chip interconnects, that is, all-optical chips. Thanks to advances in integration technology, experience from fiber-optic communication, and recent breakthroughs in silicon photonics, photonic integrated circuits (PICs) are now commercially available. Other scientists work from the opposite direction, first building the most critical computing devices with light, producing optically assisted electronic digital computers.

4.2.1 Combined optical and electrical AI chips

Light is inherently suited to linear computation, offers high-dimensional parallelism, and has higher fault tolerance than electronics; using photons for the matrix multiplications of artificial neural networks could hardly be more fitting. Companies at home and abroad are focusing on this field: among them, Heiji Technology and Lightmatter use optical interferometers to realize matrix multiplication, while Optalysys uses spatial light modulators to handle intensive convolution operations.
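A common way interferometer meshes realize an arbitrary matrix (as in the Reck/Clements layouts used in this line of work) is via the singular value decomposition M = U Σ V†: the two unitaries map onto meshes of Mach-Zehnder interferometers and Σ onto per-channel attenuation. The sketch below is a numerical check of the factorization, not a model of any vendor's device:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4))   # arbitrary real matrix to "program"
x = rng.normal(size=4)        # input signal vector

U, s, Vh = np.linalg.svd(M)   # M = U @ diag(s) @ Vh

# "Optical" path: first unitary mesh, per-channel scaling, second mesh.
y_optical = U @ (s * (Vh @ x))
assert np.allclose(y_optical, M @ x)
```

Because U and Vh are unitary, each factor is physically realizable as a lossless interferometer mesh, and only the diagonal stage needs gain or attenuation.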


(Source: “China Laser”, organized by Benwing Capital) Figure 8: Matrix multiplication and convolution operations of artificial neural network operation process

4.2.2 The crystallization of integration and interconnection – silicon photonic technology

Over the past few decades, fiber optics has achieved extremely high coverage in China, catalyzing the extension of light toward the user side and the advance of photonic integration technology. Progress in silicon photonics, which combines optical paths and electronic circuits on a single chip and enables global wiring of microprocessor chips, has also shown the feasibility of chip-to-chip and on-chip optical interconnects.


(Source: Soochow Securities, organized by Benwing Capital) Table 4: Silicon light application scenarios

Because of its intrinsic properties, the photon is difficult to use for interaction, cell caching, and logic operations conveniently and efficiently, and photonic device integration cannot yet reach the scale of electronic integrated circuits; it is therefore impractical to replace "electronic computing" with "photonic computing" outright. "Silicon-based optoelectronic computing" is a new interdisciplinary field combining optoelectronics, microelectronics, photonics, mathematics, algorithms, and computer systems, and is expected to achieve ultra-high-performance computing through deep co-design of software and hardware. Yole predicts that the silicon photonics market will grow at more than 40% per year, exceeding $3.9 billion by 2025, with more than 90% coming from data center applications, while the market for optical interconnects will reach $18 million.


(Source: Yole, organized by Benwing Capital) Figure 9: Silicon Photonics Market Size Forecast 2019 to 2025

Silicon photonic technology etches micrometer-scale optical components onto traditional CMOS chips, playing roles similar to those of reflectors, prisms, and displays in free-space optics, and greatly improves the integration of optical components. Heiji's photonic chip uses silicon photonics for data transmission and matrix multiplication. With today's mature semiconductor processes, such photonic chips need only a 45-90 nm process to achieve the desired performance.

The architecture must contain the basic units shown in the figure, of which the optoelectronic computing unit (OECU) is the key to improving computational performance, implementing high-speed matrix operations and analog computation. Operations that are inconvenient in the optical domain, such as signal delay, data caching, and logic, are still implemented in the electronic processing unit through the arithmetic and logic unit (ALU), control unit, registers, and cache. Interconnection among the compute, control, and storage units, and the hardware system's I/O, are realized through optical interconnects.


(Source: China Laser, organized by Benwing Capital) Figure 10: A basic silicon-based optoelectronic computing system

Over the past 50 years, silicon photonics technology has gone through three phases: technology exploration (1960-2000), technology breakthrough (2000-2008), and integrated application (2008-present). During this period, a group of traditional integrated circuit and optoelectronic giants in Europe and the United States quickly entered the silicon photonics field through mergers and acquisitions to seize the high ground, and the global silicon photonics industry pattern led by the traditional semiconductor powers has quietly taken shape.


(Source: Yole, compiled by Benwing Capital) Figure 11: Development stage of each company in the silicon photonics industry

In the United States, companies represented by IBM, Intel, and Luxtera have all achieved good results in optical interconnect R&D in recent years. Silicon photonics has also received wide attention in Europe, with projects such as PICMOS, WADIMOS, STREP, and PLAT4M established one after another. Japan developed optoelectronics technology early as well: in 2010 it began the Funding Program for World-Leading Innovative R&D on Science and Technology (FIRST), supported by the Japanese Cabinet Office; the Photonics-Electronics Convergence System Technology (PECST) project, part of FIRST, aims to achieve a "data center on a chip" by 2025.

On April 9, 2015, the U.S. Department of Commerce issued a notice rejecting Intel's application to sell "Xeon" chips using silicon photonics technology to China's Guangzhou Supercomputing Center for the Tianhe-2 system upgrade. On March 7, 2016, ZTE was sanctioned for the first time by the U.S. Department of Commerce, which not only barred it from purchasing chips in the U.S. but also required suppliers to stop technical support for ZTE for a full year.

At present, Accelink, Huawei, and Hisense have laid out deployment plans in the silicon photonics industry; Accelink has invested in R&D to explore a collaborative pre-research model for silicon photonics integration projects, striving to open up cooperation at the levels of silicon optical modulation and silicon optical integration. Overall, however, domestic technology still lags well behind that of developed countries: China's R&D investment in optoelectronic device manufacturing equipment is scattered, and no systematic silicon-based or InP-based optoelectronics R&D platform has been established. As the overall strength of domestic companies grows, and with the support of the national integrated circuit industry, domestic manufacturers need to keep accelerating silicon photonics projects. Shanghai is also a source of silicon photonics development: in 2017 the municipal government included silicon photonics in its first batch of major municipal special projects; in January 2018 the first domestic silicon photonics process platform was established in Shanghai; and in July 2018 Zhangjiang Laboratory began building China's first silicon photonics R&D pilot line.


Figure 12: The silicon photonics industry chain is gradually taking shape (Source: Yole; compiled by Benwing Capital)

4.2.3 Optical neuromorphic computing

Unlike the multilayer artificial neural networks (ANNs) used in deep learning, neuromorphic computing builds spiking neural networks (SNNs), which achieve intelligence by simulating biological neural networks. Such a system is itself a vehicle for processing information and no longer depends on a conventional computer. Karlheinz Meier, a neuromorphic engineer and physicist at the University of Heidelberg in Germany, noted that the human brain has three major advantages over computers. First, low energy consumption: the brain runs on about 20 watts, while current supercomputers attempting to simulate it require megawatts. Second, fault tolerance: the brain loses neurons all the time without affecting its information processing, while a microprocessor can fail from the loss of a single transistor. Third, no programming is required: the brain learns and changes spontaneously as it interacts with the outside world, rather than following the predetermined algorithms and branches of the programs that implement artificial intelligence.

Research on optical neuromorphic computing mainly covers three areas: 1) developing photonic neurons that match the biological properties of real neurons; 2) designing optical spike-based learning algorithms grounded in the dynamic physics of optical devices; and 3) designing large-scale, integrable optical neuromorphic network architectures. Influential international projects in neuromorphic processing include IBM's SyNAPSE project, the FACETS/BrainScaleS project at the University of Heidelberg in Germany, and the SpiNNaker project at the University of Manchester in the UK.
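
For reference, the unit these networks are built from can be sketched with the textbook leaky integrate-and-fire model. This is a generic abstraction with illustrative parameters, not any particular photonic neuron:

```python
def lif_neuron(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    # Leaky integrate-and-fire: the membrane potential v leaks toward 0
    # with time constant tau, integrates the input, and emits a spike
    # (then resets) whenever it crosses the threshold.
    v, spikes = 0.0, []
    for i in input_current:
        v += dt * (-v / tau + i)
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes

spikes = lif_neuron([0.3] * 20)  # constant drive produces periodic spikes
print(sum(spikes))               # → 5
```

Information is carried in the timing and rate of such spikes rather than in continuous activations, which is what distinguishes an SNN from the ANNs of deep learning.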

Among the latest international developments, a research team at Queen's University published a paper in early 2020 describing an optical neuromorphic chip built with silicon photonics, in which photonic neurons are linked by waveguides.

Optical neuromorphic computing has clear advantages: it exploits the adaptiveness, robustness, and speed of ultrafast optical pulse signals and can avoid the chip-integration and noise-accumulation problems of traditional digital optical computing. But neuromorphic chips and AI accelerators have completely different value propositions and do not necessarily compete at present. The neuromorphic chip is a future-oriented technology that aims to create a new architecture and establish new models and systems of intelligence. The AI accelerator, by contrast, is grounded in today's industry: it aims to implement in hardware the artificial neural networks currently built with "computer + software" and thereby improve operational efficiency.

4.2.4 Ternary optical computer

Traditional electronic computers use "1" and "0" to represent the high and low potential states of a device and operate in binary. In the history of computer development, the first 4-bit CPU appeared in 1971, but 64-bit CPUs did not become widespread until the 2000s. Because of insulation bottlenecks in electronic devices, further increasing the word length of conventional processors is increasingly difficult, and the industry has had to rely on stacking multiple cores together, rather than improving a single core, to increase computing power.

Shanghai University built a ternary optical computer in 2017. In this design, light has a "dark state" and a "bright state," and the bright state can be further divided into two orthogonal polarization states, so that one optical data bit can take three values, breaking the limits of "1" and "0." The ternary optical computer uses a liquid crystal array to control the polarization direction of the light beam and polarizers to complete information processing. Because liquid crystal arrays contain a huge number of pixels and suffer no insulation bottleneck, a ternary optical computer can have an enormous number of data bits, easily reaching a million. Meanwhile, the power consumption of a multi-million-pixel LCD is only in the milliwatt range, so the ternary optical computer consumes very little energy. Moreover, exploiting the properties of light, researchers developed a special adder based on the modified signed-digit (MSD) representation, which eliminates carry propagation in addition. With this "carry-free" addition, adding numbers millions of bits long takes no longer than computing "1+1." Shanghai University is currently the only entity capable of producing a complete hardware and software system for the ternary optical computer, and it holds all the patents in this field.
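
The carry-free property can be sketched in software. Below is a minimal Python model of signed-digit addition (radix 2, digits {-1, 0, 1}, little-endian lists), following the standard two-step transfer/weight transform from the signed-digit arithmetic literature; it illustrates the principle only, not Shanghai University's actual optical implementation:

```python
def msd_to_int(digits):
    # Little-endian signed digits in {-1, 0, 1}; value = sum d_i * 2^i.
    return sum(d * (1 << i) for i, d in enumerate(digits))

def msd_add(a, b):
    n = max(len(a), len(b)) + 2
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    s = [a[i] + b[i] for i in range(n)]   # digit sums, each in {-2..2}
    t = [0] * (n + 1)                     # transfer digits (local "carries")
    w = [0] * n                           # interim weight digits
    for i in range(n):
        lower = s[i - 1] if i > 0 else 0  # one-position lookahead only
        if s[i] == 2:
            t[i + 1], w[i] = 1, 0
        elif s[i] == 1:
            t[i + 1], w[i] = (1, -1) if lower >= 0 else (0, 1)
        elif s[i] == -1:
            t[i + 1], w[i] = (0, -1) if lower >= 0 else (-1, 1)
        elif s[i] == -2:
            t[i + 1], w[i] = -1, 0
    # Every digit depends only on a constant-size neighborhood, so all
    # positions can be computed in parallel: no carry ripples through.
    return [w[i] + t[i] for i in range(n)]

print(msd_to_int(msd_add([1, 1, 1], [1, 0, 1])))  # 7 + 5 → 12
```

Because each output digit depends on at most two neighboring positions, a million-digit addition takes the same number of parallel steps as "1+1", which is the point of the MSD adder.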

4.3 Featured companies
Lightelligence (Xizhi Technology)

Xizhi Technology, also known as Lightelligence, is a startup focused on photonic AI chips. Its ambitions span hardware to algorithms: over the next two to three years it aims to build out a complete photonic computing ecosystem covering chip design, core algorithms, transmission, and more.

The company has attracted capital since its establishment. Within three years of its founding, Lightelligence received a $10.7 million seed round at the end of 2017, followed by a $26 million Series A in 2020. The Series A was led by Matrix Partners China and Centrin Silicon Valley Fund, a subsidiary of Centrin Capital, with participation from Xiangfeng Investment, Centrin, and China Merchants Venture Capital, and continued additions from existing shareholders Baidu Ventures and Fengrui Capital. The Series A brings Lightelligence's cumulative funding to $36.7 million, making it the best-funded photonic computing startup in the world. In 2019, Lightelligence was selected as one of MIT Technology Review's 50 Smartest Companies alongside Alibaba Cloud, Baidu, Huawei, and others.

Hardware and algorithms advance in tandem. Lightelligence has adopted a strategy of first realizing high-density optoelectronic hybrid chips and then optimizing continuously, actively developing system design, chip design, package design, and test technology. Its core technologies include:

(1) Design technologies include PDK, simulation software, and special optoelectronic cell libraries.

(2) Package design technology includes high channel (64 or more) package technology, substrate and tool design technology.

(3) Test technology includes test software and test tools, and the company provides PCIe boards and full-stack software based on its optoelectronic hybrid architecture chips.

In April 2019, Dr. Yichen Shen, founder and CEO of Lightelligence, released the world's first photonic chip prototype board, which uses optical interferometers as its basic matrix-operation units, effectively replacing traditional electronic transistors. In testing, the photonic chip ran Google TensorFlow's built-in convolutional neural network model on the MNIST dataset, with over 95% of the model's operations performed on the photonic chip. The accuracy of the photonic chip was close to that of an electronic chip (over 97%), and the photonic chip completed matrix multiplications in a small fraction (reportedly around 1%) of the time required by state-of-the-art electronic chips.
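
The basic building block can be sketched numerically: an ideal, lossless Mach-Zehnder interferometer with two phase shifters implements an arbitrary 2x2 unitary, and meshes of such devices compose larger matrix multiplications. The parameterization below is a generic textbook model, not Lightelligence's design:

```python
import numpy as np

def mzi(theta, phi):
    # Ideal lossless Mach-Zehnder interferometer: an external phase
    # shifter (phi) on one arm, followed by a 50:50 coupler, an internal
    # phase shift (theta), and a second 50:50 coupler. The result is a
    # 2x2 unitary acting on the two waveguide modes.
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50:50 coupler
    inner = np.diag([np.exp(1j * theta), 1])          # internal phase
    outer = np.diag([np.exp(1j * phi), 1])            # external phase
    return bs @ inner @ bs @ outer

# A matrix-vector product is carried out by light propagating through
# the mesh; here we just verify that the device conserves optical power.
U = mzi(0.7, 1.3)
print(np.allclose(U.conj().T @ U, np.eye(2)))  # → True
```

Tuning theta and phi reconfigures the matrix, which is how a fixed photonic mesh can run different neural-network layers.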


Figure 13: Principle of the Ising algorithm developed by Yichen Shen's team (Source: Roques-Carmes C., Shen Y. et al., "Heuristic recurrent algorithms for photonic Ising machines," 2020; compiled by Benwing Capital)

In a study published in Nature Communications in January 2020, Dr. Yichen Shen proposed a new Ising algorithm designed to tackle NP-complete problems. The research suggests that photonics can be far more efficient at such optimization problems than the quantum approaches implemented so far. Some apparent disadvantages of optical computing, such as its natural dynamic noise, actually help this kind of machine find answers faster.

Commercialization is in progress, with a wide range of application scenarios. Lightelligence's goal is to make chips that are commercially available, widely used, and compatible, and to bring them to market within the next few years. Beyond the hardware challenges, Lightelligence hopes to optimize the photonics, electronics, and peripherals at the system level to form a new software ecosystem that customers can accept.

Data centers and servers are the preferred landing scenario for Lightelligence's photonic chip: the data center environment is relatively controlled, and that market weighs added value more heavily, so the chip's computing advantages can be better exploited. Photonics' strength at solving Ising-type problems applies to the optimization problems that arise throughout science and engineering, from biological research to drug discovery to route optimization, so the technology may eventually help industries as different as biotechnology and transportation. Edge devices could also benefit, such as drones, sensors, and other power-sensitive hardware.

The company plans to make the photonic chip available to a number of partners and potential customers for testing, and has already been approached by customers at the level of Google, Facebook, AWS, and BAT.


Lightmatter is a Boston-based AI chip company founded by a team of researchers from MIT, of whom Yichen Shen was one. Lightmatter's goal is to combine electronics, photonics, and new algorithms to create a next-generation computing platform for artificial intelligence workloads such as deep neural networks. Its cumulative funding stands at $33 million: following an $11 million investment from Spark and Matrix in early 2018, Lightmatter locked in a $22 million investment from Alphabet's venture fund in early 2019. Lightmatter has drawn recognition from many quarters: in 2019, WeWork's BostInno site named it one of the top 20 startups to watch for the year, and in 2020 CB Insights, the market data research platform, named it one of 30 companies with the potential to change the world.

The release date of its prototype board is not yet clear. Lightmatter is developing a chip based on existing CMOS processes, using mature, basic silicon photonics, and building only the most critical operation, matrix multiplication, on Mach-Zehnder interferometers. By using light only for matrix multiplication, Lightmatter does not need to wait for technological breakthroughs in optical storage and other areas. Since Lightmatter and Lightelligence were founded on the same paper, the two companies are following much the same path, but Lightmatter has not yet released its first AI chip. Harris, one of Lightmatter's founders, has said the company will launch a commercially available optical chip within five years.


Optalysys is a spin-out of Cambridge University founded in 2013, and was among the first companies in the world to develop and patent optical computing technology. On March 7, 2019, Optalysys launched the world's first optical co-processing system, the FT:X 2000, enabling the first convolutional neural network implemented with its technology.


Figure 14: Principle of Optalysys' coprocessor (Source: Optalysys; compiled by Benwing Capital)

Unlike Lightelligence and Lightmatter, the strength of Optalysys' AI coprocessor system lies in high-speed processing of large volumes of image data. The Optalysys architecture uses spatial light modulators (SLMs) to manipulate light signals, computing Fourier transforms at the speed of light, and is compatible with the high-resolution microdisplays that the display industry continues to develop. Optalysys CEO Nick New says the system achieves over 70 percent accuracy, runs more than 300 times faster than a GPU, and consumes only a quarter of the power. Beyond the core technology, Optalysys has also sought to make the optical processor seamless to integrate, giving users the easiest possible path to using and quickly programming it. The product's main application scenarios include autonomous vehicles, medical image analysis, and security systems.
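
The optical approach rests on the convolution theorem: convolution becomes elementwise multiplication in the Fourier domain. A NumPy sketch, with the FFTs standing in for the lenses of a 4f optical system, looks like this:

```python
import numpy as np

def fft_conv2d(image, kernel):
    # Convolution theorem: conv(image, kernel) = IFFT(FFT(image) * FFT(kernel)).
    # In a 4f optical correlator the two transforms happen "for free" as
    # light propagates through lenses; here NumPy stands in for the optics.
    h, w = image.shape
    K = np.fft.fft2(kernel, s=(h, w))      # zero-pad kernel to image size
    return np.real(np.fft.ifft2(np.fft.fft2(image) * K))

image = np.random.rand(8, 8)
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])  # a toy edge-detection kernel
out = fft_conv2d(image, kernel)               # circular convolution
print(out.shape)  # (8, 8)
```

Note this computes a circular convolution; a CNN layer would pad the image first, but the Fourier-domain multiplication at the heart of the optical speedup is the same.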

The SLMs and associated optical components used by Optalysys are expected to become smaller and more efficient in the future, potentially allowing the Optalysys architecture to become compact enough for consumer-facing applications such as cell phones. Beyond image processing, Optalysys is also exploring tasks not normally associated with Fourier transforms, such as weather forecasting and solving partial differential equations. In addition, Optalysys is working with the Earlham Institute with the intention of developing a new genetic search system.


Optical AI chip startup Luminous grew out of a research team at Princeton University. In 2019, Luminous received $9 million in investment from Bill Gates, the 10100 fund of Uber's co-founder, and Uber's CEO.

The scheme they use is not based on the Mach-Zehnder interferometer but is called the broadcast-and-weight scheme (hereafter, the B&W scheme). Input signals are encoded at different wavelengths, and the intensity of each wavelength is modulated by a microring optical filter, a process corresponding to multiplication. The optical signals are then detected at a photodetector and converted into a photocurrent, which corresponds to addition. This current is injected into a laser, producing an optical output that connects to the next neuron, a process corresponding to a nonlinear activation function. So far the B&W scheme has been demonstrated with 2 neurons and 4 weights.
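
A simplified numerical model of one broadcast-and-weight neuron might look like the following. The sigmoid stands in for the laser's nonlinear response, and all parameters are illustrative assumptions rather than Luminous's actual design:

```python
import numpy as np

def bw_neuron(x, w, laser_bias=0.5):
    # Broadcast-and-weight neuron (simplified model):
    # 1) each input x_i rides on its own wavelength;
    # 2) a tuned microring filter scales it by a weight w_i in [-1, 1]
    #    (multiplication; balanced photodetection supplies the sign);
    # 3) the photodetector sums all wavelengths into one photocurrent
    #    (addition);
    # 4) that current drives a laser whose output responds nonlinearly
    #    (the activation function).
    photocurrent = np.dot(w, x)                                  # steps 2-3
    return 1.0 / (1.0 + np.exp(-(photocurrent - laser_bias)))   # step 4

x = np.array([0.2, 0.8, 0.5, 0.1])   # four wavelength channels
w = np.array([0.9, -0.4, 0.6, 0.3])  # four microring weights
print(bw_neuron(x, w))
```

The key architectural point, reflected in the code, is that two optical-electrical conversions happen inside every neuron: once at the detector and once at the laser.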

Compared with the Mach-Zehnder-type scheme (deep learning on a silicon photonic chip), the main differences are as follows.

(1) No optical-electrical conversion occurs in the Mach-Zehnder-type scheme, while the B&W scheme involves two such conversions.

(2) The Mach-Zehnder scheme is based on interference of the optical field and is therefore phase-sensitive, while the B&W scheme is non-coherent: the signal is carried in the light's intensity, so it is insensitive to phase.

(3) The Mach-Zehnder-type scheme uses a single wavelength, while the B&W scheme must use multiple wavelengths.

(4) The B&W scheme uses microrings as tunable filters, greatly reducing chip size and power consumption.

According to the company’s executives, their current prototype is three orders of magnitude more energy efficient than other state-of-the-art AI chips.


LightOn, founded in 2016, has received a total of $5 million in funding. LightOn has produced a co-processor, Aurora, embedding a highly efficient optical core, Nitro. The core of its technology is multiple light scattering.

Currently, LightOn offers two options: users can pay for LightOn Cloud, available since April 2020, or acquire a LightOn Appliance, shipping in the second half of 2020, and import LightOn's libraries to use the coprocessor much as they would a GPU. In LightOn's tests, for the same amount of computation, users pairing their own CPU or GPU with LightOn consumed one-twentieth the energy and spent one-eighth the time of users relying on a CPU or GPU alone.
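
Mathematically, multiple light scattering implements a fixed random projection followed by intensity detection. The NumPy sketch below models that idea (this is an assumed mathematical model with illustrative sizes, not LightOn's API):

```python
import numpy as np

def random_projection_features(X, n_features, seed=0):
    # Model of multiple light scattering: a fixed complex Gaussian random
    # matrix R mixes the input (the scattering medium), and the camera
    # records the intensity |x R|^2 (a nonlinear random feature map).
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    R = (rng.normal(size=(d, n_features)) +
         1j * rng.normal(size=(d, n_features))) / np.sqrt(2 * d)
    return np.abs(X @ R) ** 2

X = np.random.rand(32, 64)                               # 32 samples, 64 dims
features = random_projection_features(X, n_features=256)
print(features.shape)  # (32, 256)
```

Because the scattering medium is fixed, the random matrix never has to be stored or multiplied electronically, which is where the energy savings come from; the features can then be fed to an ordinary linear classifier.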


Figure 15: LightOn performance comparison (Source: LightOn; compiled by Benwing Capital)

Photonic Arithmetic

Photonic Arithmetic is a developer and manufacturer of general-purpose photonic AI chips, founded in Beijing in 2017. Its business covers chip design and machine learning instruction sets, and it is committed to providing users with services spanning general-purpose photonic AI chip design, production and sales, AI instruction sets, transmission, and the surrounding ecosystem. Photonic Arithmetic has reportedly closed an A+ financing round estimated at RMB 10 million.

The CEO of Photonic Arithmetic, Bing Bai, who holds a PhD from the School of Electronic Engineering at Beijing Jiaotong University, says he was inspired to found the company in 2017 after reading the paper published by the MIT research team behind Lightelligence and Lightmatter. The company currently has a photonic AI chip patent under review. The chip described in the patent includes a modulator that converts electrical signals into optical signals; an optical beam splitter, connected to the modulator, that splits the optical signal into multiple sub-signals; optical transmission media (silicon waveguides or optical fiber), connected to the beam splitter, that carry the sub-signals; and a computational module that receives the sub-signals and performs calculations on them. Rather than innovating in the core computing module, however, the chip reduces the number of modulators in order to shrink the photonic AI chip's area and pin count, thereby easing packaging and testing.


Figure 16: Photonic Arithmetic's photonic AI chip patent (Source: International Intellectual Property Office; compiled by Benwing Capital)

Fathom Computing

Fathom Computing is another photonic hardware startup targeting artificial neural network training. Fathom Computing says it currently holds over 300 patents and published articles.

Ayar Labs

Ayar Labs is a startup providing chip-to-chip optical interconnect solutions, with a research team from MIT. Ayar Labs received a $24 million investment from GlobalFoundries and Intel Capital in 2018.

Ayar Labs brings the benefits of integrated silicon photonics to multi-chip modules at the edge with its now-commercialized "direct-to-chip" optical solution, enabling high-bandwidth, low-latency, low-power optical communications for its partners' chips. According to Ayar Labs, it can increase interconnect bandwidth by a factor of 1,000 while consuming only one-tenth the power, and it is targeting optical interconnects in industries such as artificial intelligence, supercomputing, cloud computing, communications, aerospace, the military, and autonomous vehicles.

As mentioned above, many computer industry giants are also getting into optical computing. Some are investing in promising startups, and some have their own R&D groups. For example, Alphabet's venture fund has invested in Lightmatter, IBM's research teams have made breakthroughs in all-optical computing and optical neuromorphic computing, and Huawei recently disclosed progress on photonic chips: rather than pursuing advanced photolithography or foundry production at TSMC, Huawei has sought another route that addresses the problem at its root. Huawei's photonic chip integrates the light-emitting properties of indium phosphide with silicon light paths in a single hybrid chip driven by laser electrical signals, greatly reducing reliance on photolithography.

V. Quantum computing
Modern electronic computers use the binary digits "0" and "1," known as bits, for computing. Over time, physicists and engineers have built functional bits from smaller and smaller devices: vacuum tubes and electromagnetic relays gave way to modern integrated circuits, which gather billions of transistors on a chip the size of a fingertip. However, as miniaturization approaches the size of an atom, the physical laws of that enclosed world change and we enter the wonderland of quantum mechanics. A quantum bit can be any combination of 0 and 1. Used for storage, N quantum bits can in theory hold 2^N values simultaneously; 250 quantum bits, for example, can represent 2^250 values, more than the number of atoms in the known universe. Those 250 quantum bits can likewise act on all 2^250 values in a single operation, which a classical computer could only match by repeating a calculation 2^N times. A quantum computer could therefore, in theory, factor a 1000-bit number in a few seconds, while a conventional computer would need on the order of 10^25 years.
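
The exponential cost of classical simulation is easy to see in code: a statevector simulator must track 2^N complex amplitudes for N quantum bits. A minimal sketch:

```python
import numpy as np

def uniform_superposition(n_qubits):
    # Applying a Hadamard gate to every qubit puts the register into an
    # equal superposition of all 2^N basis states; a classical simulator
    # must store one complex amplitude per state.
    dim = 2 ** n_qubits
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

state = uniform_superposition(10)
print(state.size)                                    # → 1024
print(np.isclose(np.vdot(state, state).real, 1.0))   # → True (normalized)
```

At 50 qubits the array would need 2^50 amplitudes, roughly 16 petabytes at complex128 precision, which is why classical simulation breaks down around that scale.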

Compared with a classical computer built from bits, a quantum computer built from quantum bits may be far more powerful, but it is also more complex and fragile. Quantum computing is currently one of the most active frontier research directions. For now, though, a quantum bit remains much larger than a classical bit based on a modern transistor.

Quantum computers fall into three main categories in terms of applications: quantum annealing, quantum simulation, and general-purpose quantum computing. Among them, quantum annealing is suitable for solving optimization problems with high operational efficiency for some specific problems, such as the optimization of paths. Quantum simulation is suitable for exploring complex phenomena in chemistry, biology, and other sciences, such as protein folding. General quantum computing, on the other hand, is similar to general artificial intelligence and aims to solve various complex problems, however, this kind of quantum computer is also the most difficult to develop.

Algorithmically, quantum computing also divides into three main families: Shor's algorithm, Grover's algorithm, and the HHL algorithm. Shor's algorithm can quickly factor large numbers, Grover's algorithm can quickly find a specific element in unsorted data, and HHL can quickly solve systems of linear equations.
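
Grover's algorithm can be illustrated with a small statevector simulation: an oracle flips the sign of the marked entry, and "inversion about the mean" amplifies it, so the marked item dominates after about (pi/4)*sqrt(2^N) iterations instead of the ~2^N probes a classical search needs. This is the textbook algorithm simulated classically, not a real quantum run:

```python
import numpy as np

def grover_search(n_qubits, marked):
    # Classical simulation of Grover's search over 2^n entries.
    dim = 2 ** n_qubits
    state = np.full(dim, 1 / np.sqrt(dim))           # uniform superposition
    iterations = int(round(np.pi / 4 * np.sqrt(dim)))
    for _ in range(iterations):
        state[marked] *= -1                          # oracle: flip the sign
        state = 2 * state.mean() - state             # inversion about mean
    return int(np.argmax(np.abs(state)))             # measure: most likely

print(grover_search(8, marked=99))  # → 99
```

With 8 qubits the loop runs only 13 times over 256 entries, and the marked index ends up holding almost all of the probability mass.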


Table 5: Major technical schools of quantum computing (Source: Soochow Securities; compiled by Benwing Capital)

5.1 Current Achievements
The quantum computers currently being built are mainly for research purposes.

In October 2019, Google unveiled a quantum computing prototype called Sycamore and claimed it had decisively solved a problem currently intractable for the best supercomputers. Google calls this milestone quantum supremacy: the ability of a quantum computer to accomplish, in a short time, a task that a typical classical computer cannot. Remarkably, Google's machine consists of only a few dozen low-quality ("noisy") quantum bits, yet on this task it outperformed the most advanced classical computers, which are built from tens of billions of high-quality bits.

IBM, with its pioneering position in computing and decades of experience in quantum research, also reached a milestone in September 2019, launching what it described as the world's most powerful quantum computer, with 53 quantum bits. IBM also operates IBM Q, a free cloud-based quantum computing platform: anyone can apply for an account to do quantum computing research and explore new algorithms, though the freely accessible machines offer only 5 quantum bits of computing power. Alibaba, Google, Microsoft, and Amazon likewise launched their own quantum cloud service platforms between 2017 and 2019.

Beyond Google and IBM, many large companies, startups, and universities are pursuing different approaches to quantum computing. The quantum bits used by Google, IBM, and Rigetti are micro- and nano-scale resonant circuits etched from superconducting metals.

Although China started late in quantum computing, it also has a series of achievements.

In July 2015, Alibaba established a quantum computing laboratory jointly with the Chinese Academy of Sciences. In May 2017, Pan Jianwei's team, together with Wang Haohua's team, announced breakthroughs in quantum computing research in both photonic and superconducting systems; in the optical system, building on ten-photon entanglement manipulation, they used a high-quality quantum-dot single-photon source to construct the world's first single-photon quantum computer to surpass early classical computers. In October 2017, Tsinghua University, the Alibaba-CAS lab (Yaoyun Shi's team), and Origin Quantum (Guo Guangcan's team) released their respective quantum cloud platforms on the same day. In February 2018, CAS and Alibaba Cloud released cloud access to an 11-qubit superconducting quantum computing service, making it the second system in the world, after IBM, to offer the public cloud access to quantum computing above 10 qubits; the service runs on a quantum computing cloud platform providing a complete back-end experience of both classical simulation environments and real quantum processors. In March 2018, Baidu announced the establishment of its Quantum Computing Institute to research quantum software and information technology applications; Professor Runyao Duan, founding director of the Centre for Quantum Software and Information at the University of Technology Sydney, became its director, reporting directly to Baidu president Zhang Yaqin. The plan is to build a world-class quantum computing institute within five years and gradually integrate quantum computing into Baidu's business in the five years after that.

Judging by patent counts, the leaders are currently U.S. companies such as IBM, Google, and Microsoft; only one Chinese startup, Origin Quantum, is in the top 20, a very obvious gap.


Table 6: Global ranking of quantum computing patent counts (Source: incoPat; compiled by Benwing Capital)

5.1.1 Origin Quantum

Founded on September 11, 2017, Hefei-based Origin Quantum Computing is the first quantum computing startup in China. Beyond quantum computers themselves, Origin Quantum pursues R&D across quantum chips, the quantum cloud, quantum measurement and control, quantum software, and quantum applications.

Currently, the company's core products include two quantum chips, Xuanwei and Kuafu. The Xuanwei XW S2-200 is Origin's second-generation silicon-based spin two-qubit chip, which realizes single-qubit and two-qubit universal quantum logic gates on silicon-based semiconductor spin qubits by applying ultrafast electrical pulses to the gate electrode together with microwave pulses. The Kuafu KF C6-130 is Origin's first-generation superconducting six-qubit processor. The superconducting chip builds its qubits from modified superconducting Josephson junctions and couples any two of the six qubits through a "quantum data bus." Using precisely designed pulse sequences, it can achieve high-fidelity quantum logic gate operations, enabling quantum algorithms to be designed and demonstrated.

On the cloud side, Origin has launched four platforms, all available through its website: a 32-qubit quantum virtual machine, a 64-qubit quantum virtual machine, a semiconductor quantum computer, and a superconducting quantum computer. In measurement and control, Origin provides standardized quantum measurement and control instruments, customized quantum computer control system solutions, cryogenic electronics, and quantum functional devices. In software, Origin has released QPanda, a quantum software development kit based on its self-developed OriginIR quantum instruction set, along with the EmuWare quantum virtual machine and the Qurator quantum software development plug-in. Origin's quantum software can be applied to machine learning, big data, biochemical manufacturing, and more, and the company offers application design assistance to customers unfamiliar with the technology. On the application side, Origin has launched ChemiQ, a chemistry application that runs quantum programs, mainly used to simulate the energy of chemical molecules at different bond lengths and to view and analyze historical calculation results. The company has also produced its own "Introduction to Quantum Computing and Programming" textbook, the Origin Quantum online education platform, and a series of quantum computing science comics.

5.2 A Double-Exponential Moore's Law
Hartmut Neven, director of Google's Quantum AI Lab, first described what is now called Neven's Law at Google's Spring Symposium in 2019. He predicted that the computing power of quantum computers will grow at a "double-exponential" rate, far faster than Moore's Law: instead of growing like 2^n, it grows like 2^(2^n). Growth this fast is hard to grasp, and real-world examples are hard to find; the growth of quantum computing may be the first.
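
The difference is easy to tabulate: exponential growth doubles each step, while double-exponential growth squares the whole exponent each step (the figures are purely illustrative of the two growth laws):

```python
# Moore's law vs. Neven's law (illustrative): exponential growth doubles
# each step; double-exponential growth squares the exponent each step.
moore = [2 ** n for n in range(1, 6)]          # 2, 4, 8, 16, 32
neven = [2 ** (2 ** n) for n in range(1, 6)]   # 4, 16, 256, 65536, 4294967296
print(moore)
print(neven)
```

After only five steps the double-exponential sequence has passed four billion while the exponential one has reached 32, which conveys why the claim is so striking.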


Figure 17: Double-exponential growth (Source: publicly available information; compiled by Benwing Capital)

Neven's Law is not an empty claim. In December 2018, Google researchers could replicate the calculations performed on its best quantum processor with an ordinary laptop. By January 2019, replicating calculations on an improved version of the quantum chip required a powerful desktop computer. And by February 2019, no classical computer in the building could simulate the quantum computers under study; researchers had to request computing time on Google's vast network of servers to do so.

5.3 The heavy challenges ahead
5.3.1 Quantum Error Correction

Quantum bits can be in any combination of the states 0 and 1 simultaneously, but this intermediate state decoheres in an extremely short time. Quantum bits are also very fragile: even extremely weak interactions with the surroundings introduce errors. For the computational results of quantum bits to be meaningful, researchers must find ways to correct these errors, and quantum error correction is an immensely difficult task. In Google's quantum supremacy demonstration, 53 quantum bits completed a calculation that would take a supercomputer thousands of years, but 99% of the output was noise and only 1% was real signal. Quantum error correction will therefore be the next major milestone after quantum supremacy.

In the early days of computing, bits built from vacuum tubes or banks of relays would sometimes flip without warning. To overcome this problem, von Neumann pioneered computer error correction. He used redundancy, making copies of each output and then using parity checks to find and correct errors; greater redundancy means greater error-correcting capability. In fact, the transistors that modern microchips use to encode bits are so reliable that such error correction is rarely needed.
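
Von Neumann's redundancy idea survives today as the classical repetition code: store each bit three times and correct any single flip by majority vote. A minimal sketch:

```python
import random

def encode(bit):
    # Classical repetition code: three redundant copies of one bit.
    return [bit, bit, bit]

def correct(copies):
    # Majority vote recovers the original bit despite one flipped copy.
    return 1 if sum(copies) >= 2 else 0

word = encode(1)
word[random.randrange(3)] ^= 1   # flip one copy at random
print(correct(word))             # → 1: the error is corrected
```

Note that the decoder works by freely copying and reading the bits, which is exactly what quantum mechanics forbids, as the next subsection explains.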

For a quantum computer, such as one built from superconducting quantum bits, error correction is not simply a matter of redundancy and parity checks. In quantum mechanics, the no-cloning theorem tells us that it is impossible to copy the state of one quantum bit onto others without disturbing the original, which means a classical error-correcting code cannot be converted directly into a quantum one. Instead, researchers spread the state of one bit across other bits by means of quantum entanglement: a single logical quantum bit is encoded into a lattice array of physical bits, and the fidelity of the logical bit increases with the scale of the array.
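
The quantum workaround can be sketched by classically simulating the textbook three-qubit bit-flip code: parity checks Z1Z2 and Z2Z3 reveal which qubit flipped without ever reading (or cloning) the encoded amplitudes. This is a simplified statevector illustration of the principle, not any lab's implementation:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])  # bit-flip (Pauli X)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

def encode(alpha, beta):
    # Three-qubit bit-flip code: |psi> -> alpha|000> + beta|111>.
    state = np.zeros(8, dtype=complex)
    state[0b000], state[0b111] = alpha, beta
    return state

def syndrome(state):
    # Stabilizer parities Z1Z2 and Z2Z3: +/-1 eigenvalues that locate an
    # error without revealing alpha or beta.
    Z = np.diag([1, -1])
    z12, z23 = kron3(Z, Z, I), kron3(I, Z, Z)
    return (np.vdot(state, z12 @ state).real,
            np.vdot(state, z23 @ state).real)

def correct(state):
    s12, s23 = syndrome(state)
    if s12 < 0 and s23 > 0:
        return kron3(X, I, I) @ state   # flip on qubit 1
    if s12 < 0 and s23 < 0:
        return kron3(I, X, I) @ state   # flip on qubit 2
    if s12 > 0 and s23 < 0:
        return kron3(I, I, X) @ state   # flip on qubit 3
    return state                        # no error detected

alpha, beta = 0.6, 0.8
damaged = kron3(X, I, I) @ encode(alpha, beta)   # flip the first qubit
print(np.allclose(correct(damaged), encode(alpha, beta)))  # → True
```

Real codes must also handle phase flips and measurement errors, which is why surface codes need many physical qubits per logical one, but the syndrome-measurement idea is the same.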

On June 8, 2020, Professor Andreas Wallraff of ETH Zurich and his collaborators reported in Nature Physics the detection of errors in a logical quantum bit encoded in a four-bit square lattice, though no corrections were yet applied. While this suffices to demonstrate the principle of quantum error correction, building a practical quantum computer will require physicists to control very large numbers of quantum bits: a fully-fledged machine with thousands of logical quantum bits would ultimately require several million physical bits.

Yet quantum error correction is not the last challenge for quantum computers either. Once scientists master it, they will have to repeat almost all of the development done so far in quantum computing, but this time on the more robust, and more complex, logical quantum bits rather than on physical ones.

5.3.2 Dilution refrigerators

Two types of quantum computer can operate only at extremely low temperatures: superconducting quantum computers and spin quantum computers based on semiconductor quantum dots. Both exploit extremely fine energy-level structures. To maintain the coherence of quantum states at such small level splittings, the thermal noise of the environment must be well below the energy-level difference. To clearly observe the coherent evolution of quantum states in a quantum circuit, the ambient temperature must be below 30 mK. The only refrigeration technology currently capable of meeting such a requirement is dilution refrigeration.

Dilution refrigeration was invented some 60 years ago and commercialized early on. Dilution refrigerators are no longer rare, although they remain very expensive equipment. Several top Chinese teams working on cryogenic transport and quantum computing together own more than a dozen of them, yet China cannot yet produce dilution refrigerators independently; the units it owns come essentially from four suppliers: Oxford Instruments, Bluefors, Janis, and Leiden Cryogenics.

5.4 Current Direction
5.4 Current Direction
As with photonic computing, at the current level of technology a head-on pursuit of fully quantum computing is premature. Not only does scientific and technological progress take time, but quantum computing is not necessarily more efficient than modern computing for simple operations. The field has therefore also developed a hybrid classical-quantum direction, in which a classical computer calls a relatively small quantum "coprocessor" for a few critical calculations, much as a GPU assists deep-learning workloads in artificial intelligence today.
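The hybrid pattern is easiest to see in code. The sketch below is a hypothetical, purely classical mock: `qpu_expectation` stands in for a quantum circuit evaluation that a real backend would perform, while the classical optimizer drives the loop, which is the division of labor used by variational hybrid algorithms.

```python
def qpu_expectation(theta):
    # Stand-in for the quantum coprocessor: a real backend would prepare a
    # parameterized quantum state and measure an observable. Here we mock
    # the measured cost with a simple classical function.
    return (theta - 1.3) ** 2 + 0.5

def classical_optimizer(evaluate, theta=0.0, lr=0.1, steps=200, eps=1e-4):
    # The classical host: finite-difference gradient descent on whatever
    # cost the coprocessor reports back.
    for _ in range(steps):
        grad = (evaluate(theta + eps) - evaluate(theta - eps)) / (2 * eps)
        theta -= lr * grad
    return theta, evaluate(theta)

theta, cost = classical_optimizer(qpu_expectation)
print(round(theta, 2), round(cost, 2))  # converges near theta = 1.3
```

The classical side does the bookkeeping and optimization it is good at; only the expensive inner evaluation is delegated, exactly as a CPU delegates matrix multiplication to a GPU.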


Figure 18: Comparison of CPU, GPU and QPU computational efficiency (source: public information, compiled by Benwing Capital)

As the figure shows, the time complexity of solving a problem depends on the algorithm. For a problem solvable by CPU, GPU, and QPU alike, the classical algorithm runs in O(N²); after GPU parallelization the complexity falls to O(N); and a quantum algorithm exploiting superposition keeps it at O(1). For small problems the CPU is still the most efficient: the QPU must repeat its measurements many times to obtain the probability distribution of the results, and the GPU loses time transferring data from the CPU. As the problem size grows, however, the final running efficiency becomes QPU > GPU > CPU.
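The point that complexity belongs to the algorithm rather than the hardware already holds classically. A small illustrative example (not from the figure): detecting a duplicate in a list takes O(N²) with pairwise comparison, but a hash set reduces the same problem to O(N) in a single pass.

```python
def has_duplicate_quadratic(xs):
    # Naive O(N^2): compare every pair of elements.
    n = len(xs)
    return any(xs[i] == xs[j] for i in range(n) for j in range(i + 1, n))

def has_duplicate_linear(xs):
    # O(N): a hash set trades memory for a single pass.
    seen = set()
    for x in xs:
        if x in seen:
            return True
        seen.add(x)
    return False

data = list(range(10_000)) + [42]
assert has_duplicate_quadratic(data) == has_duplicate_linear(data) == True
```

For ten thousand elements the quadratic version performs up to ~50 million comparisons versus ten thousand set lookups, which is the same kind of gap the figure attributes to moving from CPU to GPU to QPU.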

5.5 Applications and problems of quantum computing
Quantum computing is expected to advance the following research areas:

Chemical engineering and molecular modeling
Factor decomposition and cryptography
New space discovery and search for extraterrestrial civilizations
Artificial intelligence and machine intelligence
Civil engineering and urban planning
Facial recognition and pattern recognition
Particle physics modeling
Genetic engineering and genetic mapping
Weather forecasting and climate prediction
However, because quantum computing can perform a vast number of operations in a short time, it could also break modern ciphers and other security measures quickly. The hardest cipher to crack today is the 1024-bit public-key cryptosystem, the highest grade of cryptography; breaking it would take the most powerful current supercomputers millions of years, whereas a 1024-bit quantum computer would need only a few days. While quantum computers are being developed, researchers therefore also need to develop new quantum cryptographic systems, such as quantum communication, which stands out among encryption methods for properties such as irreproducibility and the ability to detect eavesdroppers. Quantum communication uses quantum media to carry information and mainly includes quantum key distribution and quantum teleportation. Quantum cryptography is a cryptographic system built on the properties of quantum mechanics: unlike traditional systems, its security rests on quantum-mechanical properties (unmeasurability, unclonability, and so on) rather than on mathematical complexity theory.
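Why does fast factoring break public-key cryptography? A toy RSA example with deliberately tiny numbers makes it concrete: the private key is trivially recoverable by anyone who can factor the modulus, and Shor's algorithm would perform that factoring step in polynomial time.

```python
from math import gcd

# Toy RSA: in practice p and q are secret primes hundreds of digits long,
# and the security assumption is that n = p*q cannot be factored.
p, q = 61, 53
n, e = p * q, 17              # public key: (n, e)
phi = (p - 1) * (q - 1)
assert gcd(e, phi) == 1       # e must be invertible modulo phi
d = pow(e, -1, phi)           # private exponent: falls out once n is factored

message = 1234
cipher = pow(message, e, n)   # anyone can encrypt with the public key
assert pow(cipher, d, n) == message   # whoever knows d (i.e. p, q) decrypts
```

The entire secrecy of the scheme is concentrated in the difficulty of the factoring step that produced `p` and `q`; a machine that factors quickly dissolves it, which is why post-quantum algorithms rebuild security on different mathematical problems. (`pow(e, -1, phi)` requires Python 3.8+.)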

While many experts do not believe that quantum computers will reach the sophistication needed to break modern standard cryptography within the next decade, the National Institute of Standards and Technology (NIST) is already ahead of the curve and plans to have a new cryptographic standard ready by 2022. The agency is reviewing the second phase of its "post-quantum cryptography standardization program" to narrow down the best candidate algorithms that resist quantum cracking and can replace modern cryptography. The candidates fall into two categories. The first covers key-establishment schemes, with which two parties can agree on a shared secret message; this category also takes in the role of public-key encryption algorithms such as RSA and elliptic curve cryptography. The second category includes digital signature algorithms, which are used to ensure the authenticity and reliability of data. Such signatures are important in applications like code signing, where one must be sure a program was produced by its intended developer and not by a hacker.

VI. Looking to the future

Currently, research teams at universities and companies are working intensively to find directions for new computers as Moore's Law comes to an end. For now, research ideas such as cloud computing, EUV lithography, 3D chips, and in-memory computing are all blossoming. The most futuristic and forward-looking direction among them is Beyond-CMOS, which includes photonic computing and quantum computing. With today's technology, all-optical and all-quantum computers cannot yet be fully realized; optoelectronic hybrid computers may become mainstream within the next few decades, while classical-quantum computers may not arrive for 50 years or more. Both are likely to be put into use gradually, following a route from data centers to personal computers.

Before optoelectronic and classical-quantum computers reach commercial use, new computers along the More-Moore, More-than-Moore, and in-memory-computing routes will compete with one another and serve different needs in different fields: in-memory computing has great potential in edge computing, for example, while 3D stacked chips have a large potential market in high-end consumer electronics. New routes beyond these three may well emerge. All three are currently developed on the basis of modern electronic computers, but their ideas can also inform the future task of improving the computing speed and integration of optoelectronic and classical-quantum computers once those machines appear.

Technically, the optoelectronic AI chip is at the forefront of photonic computing: companies such as Heiji Technology and Lightmatter are commercializing it or putting commercialization on the agenda, and all-optical AI chips are being incubated in IBM's labs. 5G has strongly boosted silicon photonics, optical interconnects have been commercialized under Ayar Labs' promotion, and Chinese companies such as Huawei are catching up with government support. Optical neuromorphic research remains confined to the lab, but major research teams, such as Quansheng Ren's team at Peking University, are actively seeking breakthroughs. Although each team's point of attack differs, the common idea is the same: do not blindly attempt an all-optical personal computer in one step, but gradually replace electricity with light from the outside in or from the inside out, while gradually improving the surrounding supporting software. The experience these teams accumulate as optoelectronic technology is deployed at scale should help realize all-optical computers in the future. Quantum computing, by contrast, is technically harder and requires further theoretical breakthroughs in quantum physics. As with photonic computers, combining classical and quantum machines can raise computing power to meet the growing demands of big data and artificial intelligence computing while accumulating experience toward a general-purpose quantum computer.

In terms of industry, startups around the world are working to improve computing power from every angle. AI chip startups are currently the most active, and the optical AI chip companies mentioned in this article are just the tip of the iceberg. Meanwhile, industry giants such as Huawei, Intel, and IBM are also very active in this field: they have their own research teams and actively invest in promising startups. At a public MIT Technology Review event, Wenwei Xu, director of Huawei's Strategic Research Institute, made clear that Huawei will invest $300 million a year in universities and labs around the world over the next 5 to 10 years, across several emerging technology directions including optical computing. Overall, their investments are broad and large, and they are not placing bets on any single technology.

Posted by: CoinYuppie. Reprinted with attribution to: