At the Inel Accelerated conference, Pat Gelsinge and his technical team not only shared the company’s process roadmap, but also talked about the company’s packaging, foundry, and even the company’s EUV process planning.
Since Pat Gelsinger returned and became Intel’s CEO, the chip giant has embarked on the fast lane, and quickly moved down multiple lines to strive for the lead.
At the Inel Accelerated conference held today, Pat Gelsinge and his technical team not only shared the company’s process roadmap, but also talked about the company’s packaging, foundry, and even the company’s EUV process. planning.
Now, we synthesize some of the content of our Intel leadership and some of the essence of foreign media reports to satisfy our readers.
Process roadmap: 4nm, 3nm, 20A and 18A
As Pat Gelsinger said in his speech, initially, the name of the process “node” corresponds to the gate length of the transistor, and is measured in microns . As transistors get smaller and smaller, and the length of the gate gets smaller and smaller, we begin to use nanometers as the unit of measurement.
He went on to say that in the past years of development, Intel has made a lot of contributions to the process. For example, in 1997, Intel introduced strained silicon (strained silicon) technology, coupled with other technological innovations, and then continue to shrink transistors, making them faster, cheaper and more energy efficient has also become equally important.
“From then on, the traditional naming method no longer matches the actual gate length of the transistor.” Pat Gelsinger emphasized.
In 2011, Intel also took the lead in launching FinFET technology. This is a new way to build transistors, with unique shapes and structures. Thanks to this innovative technology, Moore’s Law continues to take effect, but Pat Gelsinger said that with the emergence of this technology, the industry is further divided.
In Pat Gelsinger’s view, the entire industry, including Intel, uses different process node naming and numbering schemes. These various schemes no longer refer to any specific measurement methods, nor can they fully demonstrate how to achieve energy efficiency and performance. The best balance.
“For this reason, Intel wants to update its naming system to create a clear, consistent and meaningful framework to help our customers have a more accurate understanding of the evolution of the entire industry’s process nodes, and then make more informed decisions. Decision.” Pat Gelsinger emphasized.
Based on this idea, after launching Intel’s most powerful 10-nanometer SuperFin node with enhanced performance within a single node last year, Intel launched the next node—we called it Enhanced SuperFin before—now renamed to Intel 7. Intel 4 and Intel 3 followed. Intel named the node after Intel 3 as 20A instead of Intel 1.
According to foreign media anandtech, Intel has also inherited some past traditions in craftsmanship. As shown in the figure below, there is a difference between Intel’s use in production and retail; Intel refers to certain technologies as “being ready”, and other technologies as “acceleration”. “(Ramping’), so this timetable is only the dates mentioned. As you can imagine, each process node may exist for several years. This figure just shows Intel’s leading technology at any given time.
In summary, Intel’s detailed planning and timing are as follows:
In 2020, 10nm SuperFin (10SF): This process has achieved mass production: Tiger Lake and Intel’s Xe-LP discrete graphics solutions (SG1, DG1) based on this process have been launched;
2021 H2, Intel 7: This node was formerly known as 10nm Enhanced Super Fin or 10ESF. Alder Lake (in mass production) and Sapphire Rapids are products of this generation process. Due to transistor optimization, the performance per watt of this generation process is 10-15% higher than that of 10SF. In addition, Intel’s Xe-HP will now be referred to as Intel 7 products.
2022 H2, Intel 4: This connection was previously called Intel 7nm. Intel said earlier this year that its Meteor Lake processor will use a computing block based on the process node technology, and the chip has now returned to the laboratory for testing. Intel predicts that under this node, the chip’s performance per watt will be 20% higher than the previous generation, and the technology will use more EUV, mainly for BEOL. Intel’s next Xeon scalable product, Granite Rapids, will also use Intel 4 for production. It needs to be emphasized that Intel 4 is Intel’s first process node that uses extreme ultraviolet lithography (EUV) technology;
2023 H2, Intel 3: Formerly known as Intel 7+. Increase the use of EUV and new high-density libraries. This is where Intel’s strategy becomes more modular-Intel 3 will share some of the features of Intel 4, but is new enough to describe this new full node, especially the new high-performance library. Nevertheless, it is expected to follow up soon. Another advancement in EUV usage is that Intel expects its manufacturing volume to increase in the second half of 2023, and its performance per watt is 18% higher than that of Intel 4.
In 2024, Intel 20A: formerly known as Intel 5nm. But the new road is eager to turn to two-digit naming, A represents Ångström, or 10A equals 1nm. There are few details about this node, but at this node, Intel will switch from FinFET to its Gate-All-Around (GAA) transistor called RibbonFET. In addition, Intel will also introduce a new PowerVia technology.
In 2025, Intel 18A: This is not listed in the figure above, but Intel expects that there will be an 18A process in 2025. 18A will use ASML’s latest EUV machine, called High-NA machine, which can perform more precise lithography. Intel stated that it is ASML’s main partner in High-NA and is preparing to receive the first high-NA machine. ASML recently announced that High-NA has been postponed-when asked if this is a problem, Intel said it will not, because the timetable for High-NA and 18A is where Intel hopes to cross and have an unquestionable leadership position.
When talking about why Intel renamed nodes, anandtech emphasized that one of the elements is that they must match other foundry products. Intel’s competitors TSMC and Samsung both use smaller numbers to compare similar density processes. As Intel now changes its name, they are more aligned with the industry. Having said that, anandtech hinted that Intel’s 4nm may be on par with TSMC’s 5nm. By 3nm, we expect there will be a good parity point, but this will depend on Intel and TSMC’s release schedule matching.
A key point to note is that the new Intel 7 node (previously called the 10ESF node) is not necessarily a “full” node update as we usually understand it. This node is derived as a 10SF update, as shown in the figure above, will have “transistor optimization”. From 10nm to 10SF, this means that SuperMIM and the new thin film design provide an additional 1 GHz+, but the exact details from 10SF to the new Intel 7 are still unclear. However, Intel said that migrating from Intel 7 to Intel 4 will be a regular full-node jump. Intel 3 uses Intel 4’s modular parts and new high-performance libraries and chip improvements to achieve another jump in performance.
When asked whether these process nodes of Intel will have additional optimization points, Intel responded that whether any of them will be explicitly productized will depend on the characteristics. Individual optimization may increase the performance per watt by 5-10%. We are told that even 10SF (retaining its name) has several additional optimization points, but they are not necessarily public. Therefore, it is not clear whether these updates will be sold in the form of 7+ or 7SF or 4HP, but as with any manufacturing process, as updates occur to help improve performance/power/yield, assuming the design follows the same rules, they will be utilized.
“The last name (20A) reflects that Moore’s Law is still in effect. As we get closer and closer to the “1 nanometer” node, we will adopt a name that better reflects the new era, that is, the era of manufacturing devices and materials at the atomic level. ——The Amy Era of Semiconductors.” Pat Gelsinger said. He further pointed out that Intel has a clear path towards innovation beyond the “1 nanometer” node in the next ten years .
In Pat Gelsinger’s view, Moore’s Law will not fail until the periodic table is exhausted, and Intel will continue to use the magical power of silicon to continuously promote innovation. Intel’s latest naming system is based on the key technical parameters that our customers value, namely performance, power and area.
But Anandtech pointed out that one of the problems here is the difference between the process node being ready (ready), the ramping production of the product release, and the actual availability (available). For example, Alder Lake (now using Intel 7nm) will come out this year, but Sapphire Rapids will become more of a 2022 product. Similarly, there are reports that Raptor Lake on Intel 7 will be launched in 2022 to replace Alder Lake with tiled Meteor Lake on Intel 4 in 2023. Although Intel is happy to discuss the process node development time frame, the product time frame is not open (if there is no doubt that if the specified time is missed, customers will feel frustrated).
Two innovative technologies: RibbonFET and PowerVia
In the speech, Dr. Ann Kelleher, head of Intel’s global technology development team, said that the company’s Intel 20A, which will be launched in the first half of 2024, will become another watershed in process technology. It has two pioneering technologies- RibbonFET ‘s new transistor architecture, an unprecedented innovative technology called PowerVia , which optimizes power transmission.
As mentioned above, when moving to 20A, Intel’s process name refers to Angstroms rather than nanometers. It is at this moment that Intel will transition from its FinFET design to a new type of transistor called the Gate-All-Around transistor or GAAFET. In Intel’s case, the marketing name they provided for their version was RibbonFET.
It is widely expected that once the standard FinFET failure to move power, semiconductor manufacturing industry will shift GAAFET design. Each leading supplier claims that their implementation is different (RibbonFET, MCBFET), but they all use the same basic principle-flexible width transistors with multiple layers help drive the transistor current. FinFET relies on the cell height of multiple quantized fins and multiple fin tracks for the source/drain, while GAAFET supports a single fin of variable length, allowing each individual cell device to be optimized in terms of power, performance or area的current.
For many years, Intel has been discussing GAAFETs at semiconductor technology conferences. At the International VLSI conference in June 2020, Dr. Mike Mayberry, then Intel CTO, showed a chart that included enhanced static electricity for GAA design. At that time, we asked Intel about the timetable for mass implementation of GAA and was told that it was expected to be “within 5 years.” At present, Intel’s RibbonFET will use the 20A process. According to the above roadmap, it is likely to be commercialized by the end of 2024.
In the Intel RibbonFET chart at this event, they showed both PMOS and NMOS devices, as well as structures that clearly looked like 4-stack designs. Given that I saw Intel’s presentation at industry conferences covering anything from 2-stack to 5-stack, we confirmed that Intel will indeed use 4-stack implementation. The more stacks are added, the more process node steps are required for manufacturing. To quote Intel’s Dr. Kelleher, “It is easier to remove a stack than to add a stack!” For any given process or function, what exactly is the correct number of stacks is still an active area of research, but Intel seems to be keen on four.
According to Dr. Sanjay Natarajan, head of Intel process technology, RibbonFET is a Gate All Around transistor. As a technology that has been developed in the industry for many years, the name Gate All Around comes from the structure of the transistor. From the design point of view, this new design completely wraps the grid around the channel, which can achieve better control and obtain higher drive current at all voltages.
The new transistor architecture speeds up the switching speed of transistors, and ultimately can create higher performance products. By stacking a plurality of through channels, i.e. nanoribbon, a plurality of fins may be implemented with the same driving current, but take up less space. Through the deployment of nanobelts, Intel can make the width of the belts adjustable to suit a variety of applications.
Looking at other competitors, TSMC is expected to transition to GAAFET design on its 2nm process. At the annual technical seminar in August 2020, TSMC confirmed that it will continue to use FinFET technology until its 3nm (or N3) process node, because it has been able to find major updates to the technology to achieve performance and leakage expansion beyond initial expectations ——Compared with TSMC N5, N3 has up to 50% performance improvement, 30% power consumption reduction or 1.7 times density improvement. TSMC said that the continued use of FinFET provides comfort to its customers. It should be emphasized that the details of TSMC N2 have not been disclosed.
In contrast, Samsung said it will introduce its GAA technology in its 3nm process node. As early as the second quarter of 2019, the Samsung foundry announced to provide its first v0.1 development kit using GAAFET’s new 3GAE process node to major customers. At that time, Samsung predicted mass production by the end of 2021, and the latest announcement indicated that although 3GAE will be deployed internally in 2022, major customers may have to wait until 2023 to obtain more advanced 3GAP processes.
According to this indicator, Samsung may be the first to enter the GAA gate, although there are internal nodes, and TSMC will first get a lot of benefits from the N5, N4 and N3 nodes. Around the end of 2023, everything will become interesting, because TSMC may consider its N2 design, while Intel is committed to the 2024 time frame. The official slide shows the first half of 2024, although as technical announcements and product announcements, there is usually some lag between the two.
PowerVia is Intel’s new back-side power transmission network. This is a unique technology developed by Intel engineers and will also be adopted for the first time in Intel 20A.
We know that the manufacturing process of modern circuits starts with the transistor layer M0 as the smallest layer. On top of this, additional metal layers are added in larger and larger sizes to solve all the wiring required between the transistor and the different parts of the processor (cache, buffer, accelerator). Modern high-performance processors designed through- often metal layers 10 and 20, placing the top layer external connection. The chip is then flipped (called flip chip) so that the chip can communicate with the outside world through the connections on the bottom and the transistors on the top.
But as Dr. Sanjay Natarajan said, this traditional interconnection technology is interconnecting on the top of the transistor layer. The resulting mixing of power lines and signal lines leads to low wiring efficiency, which will affect performance and functionality. Consumption. For this reason, the industry has turned to “back-side-powered technology”, which is what Intel calls PowerVias.
In the new process, Intel puts the power line under the transistor layer, in other words on the back of the wafer. By eliminating the need for power wiring on the front side of the wafer, more resources can be freed up for optimizing signal wiring and reducing latency. By reducing sagging and reducing interference, it also helps to achieve better power transmission. This allows us to optimize performance, power consumption or area based on product requirements.
In other words, in a brand new design, we now place the transistor in the middle of the design. On the side of the transistor, we placed a communication line to allow the various parts of the chip to communicate with each other. On the other hand are all power-related connections (and power gating). Essentially, we turned to sandwiches, where transistors are fillers.
“PowerVia will be the industry’s first back-side power transmission network to be deployed. When we implement this innovation into our products, its defect density, performance and reliability convince us that it will be ready to go.” Dr. Sanjay Natarajan emphasized .
From an overall point of view, we can be sure that the benefits of this design start with simplifying the power cord and connection lines. Usually, these must be designed to ensure that there is no signal interference, and one of the main sources of interference is high-power transmission lines, so by placing them on the other side of the chip, they can be excluded. It also works in another way—interference from interconnecting data lines will increase power transmission resistance, leading to energy and heat loss. In this way, PowerVias can help a new generation of transistors when the drive current increases, because it can be powered directly there instead of routing around the connection.
But as anandtech said, there are several obstacles to be aware of.
Usually we start manufacturing transistors first, because they are the most difficult and most likely to have defects-if defects are found early in the measurement (defect detection during manufacturing), they can be reported as early as possible in the cycle. By placing transistors in the middle, Intel can now manufacture several layers of power supplies before entering the difficult stage. Now technically speaking, compared with transistors, these power layers will be very easy and will not go wrong, but this needs to be considered.
The second obstacle to consider is power management and thermal conductivity. Modern chips first build transistors into dozens of layers, ending with power and connections, and then the chip is flipped, so the power-consuming transistors are now on top of the chip and can manage heat. In a sandwich design, the heat will pass through anything on top of the chip, which is most likely an internal communication line. Assuming that the increase in heat from these wires will not cause any problems in production or regular use, then this may not be a big problem, but it needs to be considered when heat must be conducted away from the transistor.
It is worth noting that this “back-side power supply” technology has been developed for many years. Among the five research papers published at the VLSI Symposium in 2021, imec published multiple papers on the technology, showing the latest progress when using FinFET, and in 2019, Arm and imec announced that they are in imec research Similar technical facilities on Arm Cortex-A53 built on the equivalent 3nm process.
In general, this technology reduces the IR pressure drop on the design, which is increasingly difficult to achieve in more advanced process node technologies to improve performance. It will be interesting when the technology is used in large quantities on high-performance processors.
Next-generation packaging: EMIB and Foveros
In addition to the progress of process nodes, Intel must also advance next-generation packaging technology. Because of the market’s demand for high-performance chips and the increasingly difficult development of process nodes, an environment has been created in which the processor is no longer a single silicon chip, but relies on properties of the encapsulated together a plurality of smaller (and possibly optimized) chips or small blocks, and the power of the final product.
In other words, a single large chip is no longer a wise business decision—because they may end up being difficult to be defect-free, or the technology used to make them is not optimized for any specific function on the chip. However, dividing the processor into separate silicon chips creates additional obstacles to moving data between these chips-if the data must transition from the silicon chip to something else (such as a package or interposer), then there is power to consider Cost and delay cost.
The trade-off is optimized silicon built for a specific purpose, such as logic chips manufactured on a logic process, memory chips manufactured on a memory process, and smaller chips usually have better voltage/frequency characteristics than larger chips when combined . But what supports all of this is how the chips are put together,
Intel’s two main specialized packaging technologies are EMIB and Foveros. Intel explained the future of both related to future node development.
1. EMIB: Embedded Multi-chip Interconnect Bridge
Intel’s EMIB technology is designed for chip-to-chip connections laid out on a 2D plane.
The easiest way to communicate with each other two chips on the same substrate through the substrate is the use of the data communication path. The substrate is a printed circuit board composed of layers of insulating material, interspersed with metal layers etched into tracks and traces. Depending on the quality of the substrate, the physical protocol, and the standards used, data transmission through the substrate consumes a lot of power and reduces bandwidth. However, this is the cheapest option.
The alternative to the substrate is to place both chips on the interposer. The interposer is a large piece of silicon, large enough to allow the two chips to be fully bonded, and the chip is directly combined with the interposer. Similarly, the interposer also has a data path, but because the data moves from the silicon chip to the silicon chip, the power loss is not as much as that of the substrate, and the bandwidth can be higher. The disadvantage of this is that the interposer must also be manufactured (usually at 65nm), the chips involved must be small enough to fit, and can be quite expensive. To this end, interposer and active interposers are a good solution.
Intel’s EMIB solution is a combination of interposer and substrate. Instead of using a large interposer, Intel uses a small silicon chip and embeds it directly into the substrate, which Intel calls a bridge. The bridge is actually two halves, with hundreds or thousands of connections on each side, and the chip is built to connect to one half of the bridge. Now, both chips are connected to the bridge, which has the benefit of transmitting data through silicon without the possible limitations of a large interposer. If more bandwidth is needed, Intel can embed multiple bridges between two chips, or embed multiple bridges for designs that use more than two chips. In addition, the cost of the bridge is much lower than that of a large interposer.
With these explanations, it sounds like Intel’s EMIB is a win-win situation. However, this technology has some limitations-it is actually a bit difficult to embed the bridge into the substrate. Intel has spent years and a lot of money trying to perfect the technology to achieve low-power operation. The most important thing is that every time you add multiple elements together, the process will produce related yield problems-even if the yield rate of connecting the chip to the bridge is 99%, a dozen are used in a single design Chips will reduce the overall yield rate to 87%, even starting with known good chips (which have their own benefits). When you hear that Intel has been working to bring this technology to the market, they are working hard to improve these numbers.
Intel currently has EMIB on several products on the market, most notably its Stratix FPGA and Agilex FPGA series, but it is also part of the Kaby G mobile processor series, which connects Radeon GPUs to high-bandwidth memory. Intel has stated that it will launch a number of future products based on it, including Ponte Vecchio (supercomputer-level graphics), Sapphire Rapids (the next generation of Xeon enterprise processors), Meteor Lake (2023 consumer-level processors) and other graphics-related products .
In terms of EMIB’s roadmap, Intel will reduce the bump pitch in the next few years. When the chips are connected to the bridge embedded in the substrate, they are connected by bumps. The distance between the bumps is called the pitch-the smaller the bump pitch, the more connections can be established in the same area. This allows the chip to increase bandwidth or reduce bridge size.
The first generation of EMIB technology in 2017 used a 55-micron bump pitch, and the upcoming Sapphire Rapids still seems to be the case, but Intel is aligning itself with the 45-micron EMIB that surpasses Sapphire Rapids, leading to the third-generation 36-micron EMIB. These times The table did not disclose, but after Sapphire Rapids will be Granite Rapids, so at this time, a 45-micron design may be introduced.
2. Foveros: Die to Die stack
Intel introduced its chip-to-chip stacking technology through Lakefield in 2019, which is a mobile processor designed for low idle power consumption. Although the processor has since come to the end of its life, the idea is still an indispensable part of Intel’s future product portfolio and the future of foundry products.
Intel’s die-to-die stacking is very similar to the interposer technology mentioned in the EMIB section to a large extent.
We put one (or more) silicon wafers on another silicon wafer. However, in this case, the interposer or substrate has active circuitry related to the complete operation of the main computing processor in the top silicon chip. Although the core and graphics are on Lakefield’s top chips and built on Intel’s 10-nanometer process node, the basic chip has all PCIe channels, USB ports, security, and all low power consumption related to IO, and is built on 22FFL low power Consume the process node.
Therefore, although EMIB technology separates the silicon wafers from each other to work is called 2D scaling, but by placing the silicon wafers on top of each other, we have entered a complete 3D stacking method. This brings some benefits, especially in terms of scale, where you can get the advantage of shorter data paths, less power loss due to shorter wires, but also better delays. The chip-to-chip connection is still a bond connection, and the first-generation pitch is 50 microns.
But there are two key limitations here: heat and power consumption. In order to avoid heat dissipation problems, Intel makes the basic chip almost no logic and uses a low-power process. In terms of power supply, the problem is to let the top computing chip power its logic-this involves high-power through silicon vias (TSV) from the package up through the base chip to the top chip, and those TSVs that carry power become interference caused by high currents And cause local data signaling problems. It is also hoped that in the future process, it will be reduced to a smaller bump pitch, so as to achieve a higher bandwidth connection, and more attention needs to be paid to power transmission.
The first announcement related to Foveros today is about the second-generation product. Intel’s 2023 consumer processor Meteor Lake has been described above as using Intel’s 4nm computing block. Intel also said today that it will use its second-generation Foveros technology on the platform to achieve a 36-micron bump pitch, which effectively doubles the connection density compared to the first-generation. Another tile in Meteor Lake has not yet been made public (what it has or on which node it is), but Intel also stated that Meteor Lake will expand from 5 W to 125 W.
3. Foveros Omni: The third generation of Foveros
For those who have been paying close attention to Intel packaging technology, the name “ODI” may be familiar. It stands for Omni-Directional Interconnect, which is the name of Intel’s previous packaging technology roadmap. It will now be sold as Foveros Omni.
This means that the first-generation Foveros requires that the top tie be smaller than the base die limit is now cancelled. The top tie can be larger than the base die, or if there are multiple dies on each layer, they can be connected to any number of other silicon chips. Foveros Omni target true positive solution must issue an initial portion of the power Foveros discussed – will cause a lot of interference in the local carrier signal TSV as power, so they are placed over the outside position of the base die. Foveros Omni is a technology that allows the top die to dangle from the base die, and the copper pillars extend from the substrate to the top die to provide power.
Using this technology, if you can introduce power from the edge of the top die, you can use this method. However, I do want to know whether the power supply will be better fed from the middle if a large silicon chip is used. Intel once stated that the Foveros Omni works with separate base dies, so that if the base die is designed to be used in this relatively Substrate used on the lower layer.
By moving the power TSV outside of the base die, this can also improve the die-to-die bump pitch. Intel claims that the Omni is 25 microns, and compared with the second-generation Foveros, the bump density has increased by 50%. Intel expects Foveros Omni to be ready for mass production in 2023.
Four, Foveros Direct: the fourth generation of Foveros
One of the problems with any chip-to-chip connection is the connection itself. In all of these techniques mentioned so far, we are dealing with microbump bonding connections-small copper pillars with solder caps, which are put together and “glued” to create the connection. Because these technologies are increasing copper and deposited tin solder, it is difficult to scale them down, and the power loss of electronic devices will also be transferred to different metals.
Foveros Direct solves this problem by directly performing copper-to-copper bonding.
For many years, people have been studying the concept of direct connection between silicon and silicon instead of relying on the combination of pillars and bumps. If one piece of silicon is directly aligned with another piece, then there is almost no need for additional steps to grow copper pillars, etc. The problem is to ensure that all connections have been completed, to ensure that the top die and base die are very flat, without any obstacles. In addition, the two pieces of silicon must be combined into one and be permanently bonded together without separating.
Foveros Direct is a technology that helps Intel reduce its chip-to-chip connection bump pitch to 10 microns, which is 6 times the density of Foveros Omni. By achieving flat copper-to-copper connections, bump density increases, and the use of all-copper connections means low-resistance connections and reduced power consumption. Intel recommends using Direct, functional chip partitioning becomes easier, and functional blocks can be split into multiple levels as needed.
Technically speaking, Foveros Direct, as a chip-to-chip bond, can be considered as a complement to Foveros Omni. It has an external power connection to the base die-both can be used independently of each other. Direct bonding will make the internal power connection easier, but there may still be interference issues, and Omni will take care of these issues.
It should be pointed out that TSMC has a similar technology called Chip-on-Wafer (or Wafer-on-Wafer), and its customer products will be introduced to the market using a 2-layer stack in the next few months. TSMC demonstrated a 12-layer stack in mid-2020, but this is a test tool for signals, not products. The problem in the stack is still heat, and what goes into each layer.
Intel predicts that Foveros Direct, like Omni, will be ready for mass production in 2023.
“As we continue to promote the development of advanced packaging, we will transition from electronic packaging to integrated silicon photonics optical packaging in the next few generations of technology. Of course, we will continue to work closely with industry partners including Leti, IMEC and IBM. Cooperation to further develop process and packaging technologies in the above and many other innovative fields.” Intel emphasized.
EUV lithography machine and foundry customers
In its presentation today, Intel emphasized that the company will become a major customer of ASML’s next-generation EUV technology, i.e., High-NA EUV. NA refers to the “numerical aperture” of the EUV machine, or simply, how wide the EUV beam can be inside the machine before it reaches the wafer. The wider the beam before it hits the wafer, the greater its intensity at the wafer, which improves the accuracy of the printed lines.
Generally, to print finer lines in photolithography, the industry moves from single patterning to double (or quadruple) patterning, which reduces yield. Moving to High-NA means the ecosystem can stay on single patterning for longer, which some believe will keep the industry “in line with Moore’s Law for a longer period of time.”
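The yield cost of multi-patterning can be sketched with a toy model (the numbers below are illustrative assumptions of our own, not figures from the article): each extra exposure on a critical layer is another opportunity for overlay error or defects, so per-layer yield compounds roughly as a power of the single-pass yield.

```python
# Toy model of multi-patterning yield loss. The 0.99 single-pass yield is a
# hypothetical value for illustration only.

def layer_yield(single_pass_yield: float, passes: int) -> float:
    """Approximate yield of one critical layer printed with `passes` exposures."""
    return single_pass_yield ** passes

y = 0.99  # assumed yield of a single exposure/etch pass
print(f"single:    {layer_yield(y, 1):.4f}")  # 0.9900
print(f"double:    {layer_yield(y, 2):.4f}")  # 0.9801
print(f"quadruple: {layer_yield(y, 4):.4f}")  # 0.9606
```

Even small per-pass losses compound across the many critical layers of a modern process, which is why staying on single patterning longer is economically attractive.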
First of all, Intel said that putting EUV into mass production requires building a complete supply-chain ecosystem centered on the tool: photoresists, mask generation, mask attachment, and metrology and testing. Intel has made great efforts to build this ecosystem.
It is understood that IMS, an Intel subsidiary, is a major global supplier of EUV multi-beam mask writers, an indispensable tool for making high-resolution masks; the mask is a key part of realizing EUV lithography. This mask-writing technology gives Intel a strong competitive edge and is also a key driving force for the industry.
At the same time, Intel is working with ASML to define, build, and deploy a next-generation EUV tool called High-NA EUV. High-NA integrates higher-precision lenses and mirrors to improve resolution, allowing smaller patterns to be printed on the silicon wafer. Intel is expected to receive the industry’s first High-NA EUV lithography machine, and plans to become the first chip manufacturer to actually use High-NA EUV in production, in 2025.
Intel emphasized that these developments also depend on close cooperation with other key players in the industry; partnerships with equipment suppliers including Applied Materials, Lam Research, and TEL are key to its leading technology roadmap.
At present, the NA of current EUV systems is 0.33, while the new systems will have an NA of 0.55. ASML’s latest update indicates that it expects customers to be able to use High-NA equipment in production in 2025/2026, which means Intel may get the first machine in mid-2024 (we think it is the ASML NXE:5000). To be precise, it is unknown how many High-NA machines ASML intends to produce in that period, so owning the first machine may not be a decisive victory. However, if High-NA ramps slowly, Intel will still hold an advantage.
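The resolution gain from raising NA from 0.33 to 0.55 can be estimated with the Rayleigh criterion, CD = k1 · λ / NA. This is a standard lithography rule of thumb rather than anything stated in the article, and the k1 value below is a typical illustrative choice, not an Intel or ASML figure.

```python
# Rayleigh-criterion estimate of printable feature size. k1 = 0.3 is an
# assumed, typical value for illustration; only the NA values (0.33 and 0.55)
# and the 13.5 nm EUV wavelength are established figures.

def min_feature_nm(wavelength_nm: float, na: float, k1: float = 0.3) -> float:
    """Smallest printable half-pitch per the Rayleigh criterion: k1 * lambda / NA."""
    return k1 * wavelength_nm / na

EUV_WAVELENGTH_NM = 13.5

cd_033 = min_feature_nm(EUV_WAVELENGTH_NM, 0.33)  # current EUV tools
cd_055 = min_feature_nm(EUV_WAVELENGTH_NM, 0.55)  # High-NA tools

print(f"NA 0.33: {cd_033:.2f} nm")
print(f"NA 0.55: {cd_055:.2f} nm")
print(f"Improvement: {cd_033 / cd_055:.2f}x")  # 0.55/0.33 = ~1.67x finer features
```

The ratio depends only on the two NA values, so regardless of the assumed k1, High-NA prints features roughly 1.67x smaller in a single exposure.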
Finally, Intel also disclosed its progress in foundry services.
Pat Gelsinger said that one of the advantages of Intel Foundry Services (IFS) is that the company can not only provide leading process and packaging technology innovation, but also serve customers in new ways with its existing mature technologies. Customers have shown strong interest in IFS, particularly in Intel’s mature advanced packaging technology.
Based on this, Intel announced that it has signed a contract with AWS, which will be the first customer to use IFS packaging solutions. In addition, Intel is cooperating with Qualcomm, which will use the Intel 20A process technology.
Both companies firmly believe that leadership in mobile computing platforms will usher in a new era for semiconductors. What is clear is that Intel Foundry Services has set sail!
Posted by: CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/intels-latest-roadmap-4nm-3nm-20a-and-18a/