How does the semiconductor manufacturing process node evolve?
How did the transistor architecture develop into what it is today?
Here’s what you need to know…
The semiconductor manufacturing process node involves many aspects: manufacturing technology and equipment, transistor architecture, materials, and so on. Below, we introduce and analyze them in detail for your reference.
First of all, what does a technology node actually mean? We often hear things like an Nvidia GPU built on TSMC's 16nm process, or an Intel i5 built on a 14nm process. Explaining exactly what that length refers to requires a detailed structural diagram of the transistor. Simply put, in the early days it could be taken as the size of the transistor.
This size matters because the function of a transistor, simply put, is to send electrons from one end (the source, S) through a channel to the other end (the drain, D); once that happens, a piece of information has been transmitted. Electrons have a finite speed, and in modern transistors they generally travel at their saturation velocity, so the time required is essentially set by the length of the channel: the shorter, the faster.
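To put rough numbers on this (my own back-of-envelope figures, not values from the original text), here is a minimal sketch assuming a roughly 20 nm channel and silicon's saturation velocity of about 1e7 cm/s:

```python
# Back-of-envelope sketch (assumed values): time for a carrier to cross the channel
# at saturation velocity.
L_channel = 20e-9   # channel length in metres (~20 nm, assumed)
v_sat = 1e5         # silicon saturation velocity, ~1e7 cm/s = 1e5 m/s

transit_time = L_channel / v_sat
print(f"transit time ~ {transit_time:.1e} s")   # ~2e-13 s, a fraction of a picosecond
```

Halving the channel length halves this transit time, which is the sense in which shorter means faster.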
This channel length and the transistor size mentioned earlier can roughly be regarded as the same thing, but there is a difference: channel length is a physical property of the transistor, while the size used to name a technology node is a manufacturing concept. They are related, but cannot be completely equated.
In the micron era, the smaller the technology node number, the smaller the transistor and the shorter the channel. But after the 0.25 µm process in 1997, Intel began to introduce more aggressive gate-length scaling. For example, their 0.25 µm process had Lg = 0.20 µm and, likewise, their 0.18 µm process had Lg = 0.13 µm (a node ahead). At these nodes the "process node" was effectively larger than the gate length.
The term itself, as we know it today, dates back to the 1990s when microprocessor development was driven by higher frequency while DRAM development was dominated by the ever-growing demand for higher capacities.
Here are three questions:
1. Why do we keep shrinking transistors?
2. Why does the current technology node no longer reflect the transistor size?
3. How has transistor design evolved as the nodes have shrunk?
Below I will try my best to answer them. Please correct me if I'm wrong.
For the first question, part of the answer was given above: smaller is faster, and that speed translates directly into higher performance for chips built from transistors. Take the CPU as an example, as shown in the following figure.
This figure packs in a lot of information. The green dots represent CPU clock frequency; higher is faster. You can see that up to about 2004, CPU clock frequency rose essentially exponentially, and the main reason behind this was shrinking transistor size.
Another important reason is that shrinking the size raises integration density (the number of transistors per unit area). This has multiple benefits: first, more functionality fits on a chip; second, according to Moore's Law, higher integration drives cost down.
This is also why the semiconductor industry has pursued Moore's Law for 50 years: if you fail to keep up, your product will cost more than those of competitors who do, and your company will go under.
Another reason is that shrinking a transistor reduces the power consumption of that individual transistor, because under the classical (Dennard) scaling rules the chip's supply voltage scales down along with it, lowering power consumption.
But there is a catch: by the same physical principles, the power consumption per unit area does not decrease. This becomes a very serious problem as transistors shrink, because the theoretical scaling assumes an idealized situation; in reality, power density not only fails to decrease but actually rises as integration increases.
Around 2000, people predicted that if Moore's Law continued without any technological breakthrough, then by about 2010 transistors would have shrunk to the point where the chip's power density approached that of a rocket engine nozzle. Such a chip obviously could not work normally, and even well short of that level, high temperature degrades transistor performance.
In fact, the industry has never found a truly thorough solution to the transistor power problem. What it does in practice is, on the one hand, lower the voltage (power consumption is proportional to the square of the voltage) and, on the other, stop chasing clock frequency. That is why, in the figure above, CPU frequency stops rising after about 2005 and performance gains come mainly from multi-core architectures. This is the "power wall", which still exists today; it is why you cannot buy a 5 GHz processor and even 4 GHz parts are rare.
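As a hedged illustration of the "square of the voltage" point, here is a sketch using the standard CMOS dynamic-power relation P ≈ α·C·V²·f (the formula and the numbers are textbook assumptions, not figures from this article):

```python
# Sketch of CMOS dynamic (switching) power: P = activity * capacitance * V^2 * frequency.
def dynamic_power(alpha, c_load, v_dd, freq):
    return alpha * c_load * v_dd ** 2 * freq

p_high = dynamic_power(alpha=0.1, c_load=1e-15, v_dd=1.2, freq=3e9)  # ~0.43 uW per gate
p_low  = dynamic_power(alpha=0.1, c_load=1e-15, v_dd=0.9, freq=3e9)

print(p_high, p_low, p_low / p_high)  # ratio = (0.9/1.2)^2 ~ 0.56: ~44% saved from voltage alone
```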
These are the three main incentives for shrinking transistors. As you can see, they are all heavyweight ways to improve performance and functionality and to reduce cost, which is why the industry has kept at it until now.
So how do you shrink? The simple, brute-force answer: cut the area in half each generation. Area goes as the square of the linear size, so the size shrinks by a factor of about 0.7. If you look at the numbers of the transistor technology nodes:
130nm 90nm 65nm 45nm 32nm 22nm 14nm 10nm 7nm (5nm)
you will find that they form a geometric series with a ratio of about 0.7. By now, though, this is just a naming convention, and the names have drifted away from the actual dimensions.
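A small sketch of my own illustrates the point: start from 130 nm and repeatedly multiply by 1/√2 ≈ 0.7. The early names track the ideal series closely, while the later names drift away from any real dimension.

```python
# Compare the named nodes with an ideal geometric series of ratio 1/sqrt(2) (~0.7 per step).
named_nodes = [130, 90, 65, 45, 32, 22, 14, 10, 7, 5]

ideal = 130.0
for name in named_nodes:
    print(f"named {name:>3} nm | ideal 0.7x series {ideal:5.1f} nm")
    ideal *= 0.5 ** 0.5   # halve the area -> shrink the linear size by ~0.707
```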
The second question: why does the current technology node no longer reflect the transistor size?
The reason is simple: it is no longer possible to shrink at that rate. There are three main causes.
First, we are approaching atomic scale. The natural unit of measurement at atomic scale is the angstrom, which is 0.1 nm.
In a 10 nm channel there are fewer than 100 silicon atoms end to end. The usual physical model of a transistor works like this: quantum-mechanical band theory is used to calculate the electron distribution, but classical current theory is used to calculate electron transport.
Once the electron distribution is determined, the electron is still treated as a classical particle rather than accounting for its quantum effects. At large sizes that is a fine approximation; at small sizes it breaks down, and all sorts of complicated physical effects have to be considered.
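A quick order-of-magnitude check (my own numbers, using silicon's lattice constant of about 0.543 nm and a Si-Si bond length of about 0.235 nm) shows how few atoms a 10 nm channel really contains:

```python
# How many silicon atoms fit end-to-end along a 10 nm channel?
channel_length_nm = 10.0
si_lattice_constant_nm = 0.543   # silicon unit-cell edge
si_bond_length_nm = 0.235        # nearest-neighbour Si-Si distance

print(channel_length_nm / si_lattice_constant_nm)  # ~18 unit cells
print(channel_length_nm / si_bond_length_nm)       # ~40 atoms end to end: well under 100
```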
Second, even within the classical model, performance runs into trouble. This is the short-channel effect, and its effect is to degrade transistor performance.
The short-channel effect is easy to understand. Simply put, a transistor is a three-terminal switch. It works by sending electrons from one end (the source) to the other end (the drain) through a channel, while the third terminal (the gate) decides whether that channel is open or closed. All of this is done by applying specific voltages to the terminals.
A transistor's usefulness hinges on one thing: it must turn on strongly and shut off tightly. Short-channel devices have no trouble turning on, but they cannot shut off tightly. The reason is that at such small sizes there are many internal disturbances to the electric field which could previously be ignored; now they prevent the gate's field from doing its full job, so the channel cannot be closed tightly. The consequence is leakage current: simply put, current that serves no purpose and is pure waste.
Do not underestimate this current. It flows while the transistor is idle, doing nothing, yet still burning power. Today, leakage accounts for close to 50% of total power consumption, making it one of the main problems in transistor design and manufacturing.
Third, the manufacturing process itself finds it ever harder to reach such small sizes.
What sets the minimum feature size of a manufacturing process is the lithography machine. Its job is to project a pre-made circuit pattern onto the wafer surface, much like developing a photograph. In my opinion it is an almost unfairly powerful piece of equipment, because its throughput is enormous; otherwise, how could such complex integrated circuits be manufactured at all? Intel's Pentium 4 processor, for example, reportedly required 30 to 40 different mask layers, exposed one after another, to print the complete design.
But a lithography machine, as the name suggests, uses light; not visible light, of course, but light all the same.
And as anyone with basic physics knows, everything that uses light runs into one problem: diffraction. Lithography machines are no exception.
Because of diffraction, the minimum feature any lithography machine can print is essentially proportional to the wavelength of its light source. Shorter wavelength, smaller feature; the principle really is that simple.
The current mainstream production process uses step-and-scan lithography machines made by ASML in the Netherlands. The light source is the 193 nm output of an argon fluoride (ArF) excimer laser, and it is used for the finest-feature layers.
By contrast, the smallest mass-produced transistor feature at the time of writing is around 20 nm (the 14 nm node), roughly a factor of ten smaller than that wavelength.
Someone will ask: why doesn't diffraction ruin this?
The answer is that the industry has poured enormous sums into lithography for more than a decade and developed a series of almost magical brute-force techniques, such as immersion lithography (placing the optical path in a liquid whose refractive index is higher than air's, since the minimum printable size is inversely proportional to the refractive index) and phase-shift masks (using 180-degree phase reversal so that the diffracted light cancels itself out, improving resolution). These techniques have carried every technology node since around 60 nm.
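For a feel of the numbers, here is a sketch based on the standard Rayleigh resolution criterion, minimum half-pitch ≈ k1·λ/NA (the k1 and NA values are my assumptions, chosen to be typical, not data from the article):

```python
# Rayleigh criterion sketch: resolution scales with wavelength / numerical aperture.
def min_half_pitch(wavelength_nm, numerical_aperture, k1=0.30):
    return k1 * wavelength_nm / numerical_aperture

print(min_half_pitch(193, 0.93))   # "dry" ArF scanner     -> ~62 nm
print(min_half_pitch(193, 1.35))   # water-immersion ArF   -> ~43 nm (NA boosted by n ~ 1.44)
print(min_half_pitch(13.5, 0.33))  # EUV (discussed below) -> ~12 nm
```

Immersion helps precisely because the liquid raises the effective numerical aperture, which is the "inversely proportional to the refractive index" effect mentioned above.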
Others will ask: why not just use a shorter-wavelength light source?
The answer is that, for now, the technology simply isn't there.
Yes, the light source of high-end lithography machines is a world-class industrial problem.
The above describes the current mainstream deep-ultraviolet (DUV) exposure technology. The industry generally believes the 7 nm node is its limit, and even 7 nm may not reach mass production with it. The next-generation technology, still under development, is extreme ultraviolet (EUV), which shortens the light source wavelength to about 13.5 nm. But don't celebrate too soon: at this wavelength no suitable medium can refract the light to form the required optical path, so the optics are entirely reflective, and at such precision, designing so complex a reflective optical path is a staggering engineering problem.
Yet even that is not the worst of it (the optics problem has in fact been overcome). The hardest part is the light source: it can produce the required light, but at an intensity far below what industrial production demands, so an EUV machine's wafer throughput cannot meet requirements; in other words, running it loses money. One of these machines costs hundreds of millions of dollars. So EUV, for now, still belongs to the future.
For these three reasons, transistor scaling entered deep water quite early on and became harder and harder. After 22 nm, large-scale shrinkage was no longer feasible, so instead of chasing a fixed shrink factor, the industry turned to more optimized transistor designs, combined on the CPU side with multi-core, multi-threading and a string of other architectural techniques, to keep delivering what consumers experience as generational performance upgrades.
Today the technology node numbers keep shrinking, but they are no longer equivalent to any transistor dimension; rather, each one stands for the whole set of technologies and process specifications that make up that node.
The third question is how transistor design has evolved in the process of shrinking technology nodes.
First of all, we need to understand what transistor design is trying to achieve. There are really only two goals: first, improve the switching response; second, reduce the leakage current.
The clearest way to explain this is with a picture. For transistor physics one plot is basically enough: the drain current versus gate voltage curve, such as the following:
The horizontal axis is the gate voltage and the vertical axis is the drain current, usually plotted on a logarithmic scale.
As mentioned earlier, the gate voltage controls the transistor's switching. You can see that the ideal transistor would switch from completely off (essentially zero current) to fully on (current at its saturation value) over a vanishingly small change in gate voltage. This property brings many advantages, explained below.
Obviously no such transistor exists on this planet. The reason is that in classical transistor physics, the measure of this switching response is called the subthreshold swing (SS), and it has a theoretical lower limit of about 60 mV per decade at room temperature.
According to Intel's data, its 14 nm transistors achieve a value of about 70 mV per decade (lower is better).
Reducing this value goes hand in hand with lowering leakage current, raising the on-current (and thus speed), and cutting power consumption: the lower it is, the lower the leakage at a given voltage, and the lower the voltage needed to reach the same on-current, which in turn reduces power. That makes it the most important figure of merit in transistor design.
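The 60 mV/decade figure is not arbitrary; it comes from the thermal voltage. A minimal sketch of the calculation (standard constants, room temperature assumed):

```python
import math

# Thermionic limit of the subthreshold swing: SS_min = (kT/q) * ln(10).
k_B = 1.380649e-23   # Boltzmann constant, J/K
q = 1.602177e-19     # elementary charge, C
T = 300.0            # room temperature, K

ss_limit = (k_B * T / q) * math.log(10) * 1e3   # in mV per decade
print(f"SS limit at 300 K ~ {ss_limit:.1f} mV/decade")   # ~59.6, i.e. "about 60"
```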
So what has actually been done around this figure of merit and the broader goals of transistor design?
Let's look at industry first; after all, practice is the ultimate test of truth. What follows is from memory and may not line up exactly with the right nodes, but the specific descriptions should be correct:
65nm introduced Ge strained channel.
The principle of strain is to incorporate a small amount of germanium into the silicon in the appropriate places. Germanium and silicon have different lattice constants, so the silicon lattice is distorted, and according to band theory this distortion increases carrier mobility along the channel; higher mobility means higher on-current. In practice, this method turned out to work better for hole-channel transistors (PMOS) than for electron-channel transistors (NMOS).
45nm introduced the high-k dielectric/metal gate configuration.
This was another milestone achievement. When I was in school, one of the professors I did grunt work for had been a core member of the team that developed this technology at Intel, so he talked about it often and I picked up a fair amount.
These are two technologies, but they are actually for solving the same problem: how to ensure that the gate works effectively at a very small size.
I didn’t explain the structure of the transistor in detail before, so I’ll add a picture below:
This is a schematic of the most basic transistor structure. Today's transistors no longer look like this, but every treatment of semiconductor physics starts from here, so this is the "standard version" of the transistor, also called the bulk transistor.
The terminal labeled Gate is the gate electrode.
Below it is the oxide, an insulator I have not mentioned yet, though it is one of the most critical parts of the whole transistor. Its job is to isolate the gate from the channel: the gate switches the channel by means of an electric field, and that field is produced by applying a voltage to the gate, but Ohm's law tells us that where there is voltage there is current. If current flowed from the gate into the channel, what kind of switch would that be? It would simply leak.
So an insulator is needed. Why is it called an oxide (or dielectric) rather than just an insulator? Because the earliest such insulator was silicon dioxide, which forms naturally on silicon. Its relative permittivity (the dielectric constant; for the gate capacitance, higher is better for transistor performance) is about 3.9. A good gate insulator is the lifeline of a transistor. I won't dwell on what "good" means here, but note that silicon comes with this property for free: it naturally grows a superb insulator, which for the semiconductor industry has been of historic importance.
Some have marveled that God seems to be helping humanity invent: first so much sand (the raw material for silicon wafers), and then a perfect natural insulator on top of it. Silicon remains very hard to replace to this day, and one important reason is that, as a material for building transistors, its all-around properties are simply too good.
Silicon dioxide is excellent, but once it is thinned past a certain point, problems arise. Remember that under scaling the electric field strength stays roughly constant. In that regime, viewed from the energy-band picture, the wave nature of electrons means that when the insulating layer becomes extremely thin there is a finite probability that electrons will tunnel through its energy barrier and produce leakage current.
You can picture it as walking straight through a wall taller than yourself. The magnitude of this current falls as the insulator gets thicker and as its barrier gets higher, so the thinner the layer and the lower the barrier, the larger the leakage and the worse it is for the transistor.
On the other hand, switching performance, on-current and so on all call for a large gate-insulator capacitance; in fact, if this capacitance were infinite, the SS would reach its idealized 60 mV/decade limit.
Capacitance here means capacitance per unit area, which equals the permittivity divided by the insulator thickness. Obviously a thinner layer and a higher permittivity are both better for the transistor.
You can see the design goals are already in conflict: should the insulator keep getting thinner or not? In fact, by the nodes before this one, the silicon dioxide had already been thinned to under two nanometers, only a dozen or so atomic layers, and the leakage problem had overtaken raw performance as enemy number one.
So clever people started looking for a way out. Being greedy, they were unwilling either to give up the performance that comes with large capacitance or to live with the leakage. So they asked: if there were a material with both high permittivity and a high energy barrier, could we keep increasing the capacitance (better switching) without thinning the layer (keeping leakage in check)?
So the search began, by almost brute-force screening of all sorts of exotic candidates, until after verification the industry settled on a material called HfO2, hafnium dioxide. I had never even heard of the element hafnium before; that is how obscure it was. This class of materials is called high-k, where k is the relative permittivity and "high" means higher than silicon dioxide's.
Of course, the real effort far exceeded this simple description. Many materials have a high k, but the one finally chosen also had to have excellent electrical properties in every other respect, because silicon dioxide really was a near-perfect gate insulator whose processing integrated effortlessly with every other step of IC manufacturing. Finding a high-performance insulator that satisfies all the requirements of a semiconductor manufacturing flow was a great engineering achievement.
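A small sketch shows why this works, using the parallel-plate formula C = k·ε0/t and an "equivalent oxide thickness" comparison (the k ≈ 20 for HfO2 is a commonly quoted textbook value, an assumption on my part rather than a figure from this article):

```python
# High-k idea: match the capacitance of an ultra-thin SiO2 layer with a much thicker HfO2 layer.
eps0 = 8.854e-12           # vacuum permittivity, F/m
k_sio2, k_hfo2 = 3.9, 20.0 # relative permittivities (HfO2 value is an assumed typical figure)

t_sio2 = 1.2e-9            # a ~1.2 nm SiO2 gate oxide: great capacitance, but tunnel-leaky
c_per_area = k_sio2 * eps0 / t_sio2
t_hfo2 = t_sio2 * k_hfo2 / k_sio2   # HfO2 thickness giving the SAME capacitance per area

print(c_per_area)          # ~2.9e-2 F/m^2
print(t_hfo2)              # ~6.2e-9 m: same capacitance from a ~5x thicker barrier, so far less tunneling
```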
As for the metal gate, it is the companion technology to high-k. In the earliest transistors the gate was made of aluminum; later it was switched to heavily doped polysilicon, which is simple to process and performs well. But once high-k arrived, people found the new material had two side effects: it inexplicably reduced the on-current, and it shifted the transistor's threshold voltage. The threshold voltage is the minimum gate voltage needed to open the channel, a very important device parameter.
I won't explain the mechanism in detail here (honestly, it may not even be fully understood, haha). The gist is that high-k materials reduce the mobility of carriers in the channel and shift the position of the Fermi level at the interface. Lower mobility means lower on-current; the Fermi level, roughly speaking, describes how a semiconductor's electrons are distributed in the band diagram, and shifting it at the interface changes the threshold voltage.
Both problems are tied to the dipoles inside the high-k material. A dipole is a pair of charges, positive at one end and negative at the other, that reorients itself with the external electric field. The high permittivity of a high-k material owes a great deal to these internal dipoles, so they are a double-edged sword.
The fix was to make the gate out of metal: a metal has a very high free-carrier concentration (above 10^20 per cm³) and produces an image-charge effect that screens out the influence of the high-k layer's dipoles on the channel and on the Fermi level. That way you get the best of both worlds.
As for which metal or metals are actually used, I'm sorry: outside the handful of companies that have mastered the technology, nobody knows. It's a trade secret.
A friend suggested the metal is tungsten, and I did find sources that mention tungsten (tungsten is also used in back-end vias), but I have reservations, mainly for four reasons:
First, when I was in class, several professors stated clearly that the outside world knows very little about this metal gate; at the very least they did not know, or for some reason would not say.
Second, as a matter of principle, NMOS and PMOS require different work functions, so a single metal cannot satisfy the whole high-k process; even if tungsten really is used, work-function engineering would still be needed.
Third, plenty of sources mention other materials, such as the TiN family, as the metal gate.
Fourth, and perhaps most puzzling, in the material I consulted, although Intel said early on that it uses HfO2 as the high-k material, Intel itself never revealed which metal or metals it uses. Its 2008 IEDM paper named no specific material and referred only to a "metal gate", and Mark Bohr's 2007 article stated the following explicitly:
“Because the electrical characteristics of the gates of NMOS and PMOS transistors are different, they actually needed not one metal but two—one for NMOS and one for PMOS.”
“But by themselves, none had exactly the work function of the doped silicon, so we had to learn to change the work function of metals to suit our needs.”
“We cannot disclose the exact makeup of our metal layers, because after all, the IC industry is very competitive!”
In more recent material I still find nothing definitive; the literature simply refers to a WFM (work-function metal). Although many research papers have been published on candidates such as W and TiN, I cannot confirm a source for what the production metal-gate material actually is. Since I do not work in transistor manufacturing or design, I cannot answer this question myself; I hope someone who knows can add a source.
So Moore’s Law wins again.
32nm introduced the second-generation high-k/metal-gate process.
In the 45 nm era Intel scored a great success (in many transistor roadmap charts the 45 nm generation shows a sudden jump in power and performance improvements), and at 32 nm it kept swapping in better materials while continuing down the familiar road of shrinking dimensions. The earlier Ge strain process was, of course, carried over as well.
22nm introduced the FinFET (Intel calls it Tri-Gate), the three-gate transistor.
This generation brought a change in architecture. Its earliest design can be traced to the tri-gate and gate-all-around transistor models proposed around 2000 by Professor Chenming Hu of Berkeley, which Intel later turned into reality.
FinFET generally looks like this. It essentially adds a gate.
Why do this? Intuitively, if you look at the "standard version" structure diagram above, a very short transistor leaks badly because of the short-channel effect, and most of that leakage flows through the region below the channel.
The channel is not marked in the picture; it sits just below the oxide insulator at the silicon surface, an extremely thin layer only a nanometer or two thick. The region beneath the channel is the depletion layer, most of the blue area in the figure.
So people started to wonder: since the electrons move in the channel, why keep such a large depletion region under it? There is a reason, of course; the physical model needs that region to balance charge. But in a short-channel device there is no need to leave the depletion layer sitting right under the channel, waiting for leakage current to flow through.
So someone (IBM) had a wild idea: remove that silicon outright and replace it with an insulator, leaving the rest of the silicon underneath, so that the channel and the depletion layer are separated. The electrons come from the two terminals, and those terminals are now cut off from the region below by the insulator, so apart from the channel there is nowhere for leakage current to flow. For example:
This is called SOI (silicon-on-insulator). It never became the mainstream, but because it has its own advantages, some manufacturers still pursue it.
Then someone (Intel) went a step further: if you are already removing depletion-layer silicon and inserting an oxide layer, why keep a slab of useless silicon underneath at all? Put another gate directly under the oxide and clamp the channel from both sides. Wouldn't that be better? See, IBM, you weren't thinking big enough.
But even that did not satisfy Intel, which thought again: why bury the oxide inside the silicon at all? Stand the silicon up, wrap it in insulator like a sandwich filling, and put the gate around the outside. Wouldn't that be something?
And so the FinFET was born, looking like the figure above. Its brilliance is that it not only slashes leakage current but also adds an extra gate; the gates are normally tied together, which greatly increases the gate-insulator capacitance discussed earlier and thus greatly improves switching performance. It was another revolutionary step.
The design itself is not hard to think of; the hard part is actually making it. Why? Because the upright sliver of silicon used as the channel is extremely thin, under 10 nanometers, which is not only far smaller than the transistor's other minimum dimensions but also far smaller than anything the most advanced lithography machines can print. How to form this fin, and form it well, became a real problem.
Intel's approach is very clever. It would take a stack of process-flow diagrams to explain properly, so I won't go into detail, but the basic principle is that the fin is not printed by lithography; it is grown. Ordinary-precision lithography first defines a set of "frames"; a film is then deposited over them so that a very thin layer grows along the edge of each frame; selective etching removes the excess material and leaves these standing, ultra-thin features, which define the silicon fins. When I first heard of this method I was floored; the sheer ingenuity is crushing.
The process flow of FinFET
So what does the FinFET process flow look like? I am not an expert in this area and my knowledge is limited, but after consulting what material I could, I have organized some relevant information below for reference.
In an interview in August 2016, Intel's Mark Bohr (senior fellow and director of process architecture and integration) discussed Intel's FinFET technology and the outlook for the 10 nm process, and mentioned that Intel would continue to use the SADP (self-aligned double patterning) process.
Double patterning is a technique for improving the minimum resolution of lithography, now mainstream among many manufacturers and available in several variants. The principle is easy to grasp. Intel, for example, uses 193 nm immersion lithography for the steps with the tightest requirements; on its own this gives a minimum printable size of roughly 80-90 nm, and with double patterning that can be pushed to about 40 nm.
The principle: if you first print one set of features at 80 nm precision, then print a second set staggered relative to the first, the combined pattern after the two exposures has half the original pitch. The process is explained on Wikipedia; you can look up
Multiple patterning.
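A toy sketch of the pitch-splitting arithmetic (the numbers are purely illustrative): two interleaved exposures, each limited to an 80 nm pitch, combine into a 40 nm pitch.

```python
# Two exposures on the same 80 nm grid, the second offset by half a pitch.
litho_pitch = 80  # nm, single-exposure limit

exposure_1 = [i * litho_pitch for i in range(5)]          # lines at 0, 80, 160, 240, 320 nm
exposure_2 = [x + litho_pitch // 2 for x in exposure_1]   # same grid shifted by 40 nm

combined = sorted(exposure_1 + exposure_2)
print(combined)   # 0, 40, 80, 120, ... -> effective pitch is halved to 40 nm
```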
Self-aligned double patterning is one of these variants; it needs only a single lithography exposure, and in principle it can be used to make fins (the fin-making step is called active fin formation). When I took device and process courses, my professor also said this process is used to make fins. However, I found no direct statement from Intel or the other big manufacturers describing how their active fin formation is actually done, so treat this as a reasonable guess.
In this process, a layer of hard-mask material, often called the mandrel (Si3N4, for example), is deposited first and patterned by ordinary-precision lithography. Over the patterned mandrel another layer, an insulating material such as silicon dioxide, is then grown as a conformal film; the parts of this film clinging to the mandrel sidewalls will become the spacers.
The fin width W is set by controlling the duration of this film growth. The film is then etched anisotropically to remove all of the horizontal material, leaving only the portions along the mandrel edges; the mandrel material is then selectively etched away, leaving only this sidewall film; and finally the silicon underneath is etched, using the sidewall film as the mask.
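Here is a toy sketch of the geometry this produces (illustrative numbers only; a hypothetical 80 nm mandrel pitch and 8 nm film): every mandrel line ends up flanked by two spacers, so the fin pitch is half the mandrel pitch, and the fin width is set by the film growth rather than by lithography.

```python
# SADP geometry sketch: fins form on both sidewalls of each mandrel line.
mandrel_pitch = 80      # nm, printable with ordinary-precision lithography
film_thickness = 8      # nm, controls the fin width W
mandrel_width = mandrel_pitch // 2 - film_thickness  # 32 nm, chosen so the fins come out evenly spaced

fins = []
for i in range(4):                      # four printed mandrel lines
    left = i * mandrel_pitch
    right = left + mandrel_width
    fins.append((left - film_thickness, left))     # spacer on the left sidewall
    fins.append((right, right + film_thickness))   # spacer on the right sidewall

print(len(fins))                                   # 8 fins from 4 printed lines
print([b - a for a, b in fins])                    # every fin is 8 nm wide
print([(a + b) / 2 for a, b in fins])              # centres 40 nm apart: half the mandrel pitch
```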
Next, to ensure isolation, another layer of insulating silicon dioxide must be deposited. This step is demanding because the gaps between fins have a high aspect ratio and the oxide must fill them completely, which is why it is referred to as a conformal coating step.
Obviously the wafer surface is uneven after this, so CMP (chemical mechanical polishing) is required: mechanical polishing with an abrasive slurry that planarizes the entire wafer surface.
Finally, the silicon dioxide is etched back; the duration of this etch sets the exposed fin height H. The high-k material and the rest of the gate stack are then deposited on the fin by ALD (atomic layer deposition) and related steps, which essentially completes this part of the fabrication.
The process above is real and is in use. There is one caveat, however: the information I can find suggests it is what the IBM and Samsung camp use for SOI FinFETs.
I mentioned SOI earlier; I should add that SOI and FinFET are not competing technologies. The earlier comparison was only a convenient way to explain, from the standpoint of transistor physics, the idea behind each. A FinFET can perfectly well be built on an SOI wafer, and that is Samsung's approach.
Intel, however, does not appear to take that route. For cost reasons (SOI wafers are more expensive), Intel builds bulk FinFETs with no buried insulating layer under the channel. Whether active fin formation in that flow also uses SADP or something similar, I found no direct evidence either way.
Intel has mentioned in interviews and reports that it uses SADP, but the technique is not limited to fins; it can also be used to pattern gates and back-end via interconnects, so I cannot be sure exactly how Intel does it.
Samsung published several figures in its latest 7 nm transistor report at the IEDM conference that outline the Samsung/IBM process-flow thinking (those two are basically family…):
You can see that Samsung uses SAQP (self-aligned quadruple patterning, which follows essentially the same flow as SADP but repeats the spacer step once more, shrinking the minimum pitch further) to make its 7 nm fins, as shown in the following figure:
The report also walked through the whole flow, though frankly I cannot follow all of it.
TSMC also presented 7 nm at the same conference, though only vaguely, and Intel presented nothing. In Intel's own paper on its 14 nm transistor there is just a single sentence saying the SADP process was used; unlike Samsung, it does not explain the process steps in detail but goes straight to transistor performance, so there is very little public information here.
It is worth noting that none of these processes was originally invented by these companies. Professor Chenming Hu, for example, had published an article related to SADP back in 2006, many years before FinFET reached mass production.
What these companies do is judge a process's prospects (whether it will scale, what it costs, and so on), integrate it into the production flows they have accumulated over many years, and roll it out as a new node. Because the cost of bringing up the equipment for a process is enormous, this often has to be planned many years in advance.
14 nm continues with FinFET. Below is an SEM cross-section of Intel's 14 nm transistor; you can see the fin width averages only about 9 nm.
Of course, each new technology node also carries forward the earlier techniques, so what you generally hear about in industry and academia today is the high-k/metal-gate, Ge-strained 14 nm FinFET, the distillation of many years of work.
In academia, meanwhile, all sorts of imaginative new designs have emerged in recent years: tunneling transistors, negative-capacitance transistors, carbon-nanotube devices, and so on.
These designs fall into four broad directions: materials, mechanisms, processes, and structures. And all of them can really be summarized by one simple idea, namely the formula that determines the SS value mentioned earlier, which is the product of two factors:
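The original figure containing the formula is not reproduced here; the standard textbook form it refers to is (my reconstruction, under the usual MOSFET assumptions):

$$
\mathrm{SS} \;=\; \frac{\partial V_G}{\partial \psi_s}\times\frac{\partial \psi_s}{\partial(\log_{10} I_D)} \;=\; \left(1+\frac{C_{\mathrm{dep}}}{C_{\mathrm{ox}}}\right)\cdot\frac{kT}{q}\ln 10 \;\approx\; \left(1+\frac{C_{\mathrm{dep}}}{C_{\mathrm{ox}}}\right)\times 60~\mathrm{mV/decade}
$$

The first factor is the electrostatics (body factor), which approaches 1 as the gate capacitance grows; this is the "infinite capacitance gives 60" remark above. The second is the transport term, pinned at kT/q·ln10 as long as carriers are injected thermionically.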
So you either improve the transistor's electrostatics (the first factor) or improve the channel's transport properties (the second).
Besides switching behavior, transistor design must also consider the saturation-current problem. Many people misunderstand this and assume saturation doesn't matter. In fact, current saturation is the fundamental reason a transistor can work effectively: without it, the device cannot maintain signal levels and therefore cannot drive a load. In other words, it may look good on paper but is useless in practice; it cannot work properly in a circuit.
For example, graphene transistors were once all the rage. Using graphene as the channel attacks the second factor, transport, since graphene's electron mobility far exceeds silicon's. But graphene transistors have made little progress to date, because graphene has a fatal flaw: its current does not saturate. That said, last year I heard reports that researchers can now switch graphene's band gap open and closed, so graphene is no longer strictly zero-band-gap; that may yet prove a positive development for transistor materials.
At last year's IEDM conference, TSMC beat Intel to the punch and presented 7 nm technology-node transistor samples, while Intel postponed its 10 nm launch. Their node definitions differ, of course; TSMC's 7 nm is roughly equivalent to Intel's 10 nm, but TSMC produced the goods first. Samsung also appeared to present its own 7 nm work at the meeting.
Moore's Law has indeed slowed. 22 nm arrived around 2010, and as of this writing the technology node has not moved below 10 nm. Last year the ITRS announced it would no longer draw up new technology roadmaps; in other words, the authoritative international semiconductor body no longer believes Moore's-Law-style shrinkage can continue.
That, in broad strokes, is the current state of technology nodes.
Is it necessarily a bad thing if technology nodes stop advancing? Not necessarily. The 28 nm node is not even part of the standard Dennard-scaling sequence mentioned earlier, yet to this day it holds a large share of the semiconductor manufacturing market.
Large foundries such as TSMC and SMIC are all doing brisk business at 28 nm. Why? Because this node has proven to be a well-optimized combination of cost, performance, and demand in many respects; plenty of chip products simply do not need expensive FinFET technology, and 28 nm meets their needs.
But some products, mainstream CPUs, GPUs, FPGAs, memory and so on, owe much of their performance improvement to process improvement. How to keep improving those products in the future is a question mark in many people's minds, and also a new opportunity.