Fascinating. I’m truly excited to see how much more energy-efficient these chips will be. I was blown away by the leap forward in battery life M1 was capable of at launch. If we can start to bring those efficiency gains to data centres we can start to crunch numbers on serious problems like climate change.
M1 gets most of its performance-per-watt advantage from two things: running much farther down the voltage curve than Intel or AMD usually tune their silicon for, and a really wide core design that exploits the extra instruction-level parallelism that can be extracted from the ARM instruction set relative to x86. It’s a great design, but the relatively minor gains from M1 to M2 suggest there isn’t much optimization headroom left in the architecture. Meanwhile, the x86 manufacturers have closed a big chunk of the gap in their subsequent products, raising their own IPC with things like extra cache and better branch prediction while ramping down power targets to put their competing thin-and-light laptop parts in better parts of the power curve, where they aren’t hitting diminishing returns.
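To put rough numbers on the voltage-curve point, here’s a back-of-the-envelope sketch using the standard CMOS dynamic-power relation P ≈ C·V²·f. The capacitance, voltages, and frequencies below are made up for illustration, not measurements of any real chip; the takeaway is just that energy per operation scales roughly with V², so a modest frequency sacrifice buys a large power saving.

```python
# Back-of-the-envelope CMOS dynamic power: P ~ C_eff * V^2 * f.
# All numbers are illustrative placeholders, not real chip measurements.

def dynamic_power(c_eff_farads: float, v_volts: float, f_hertz: float) -> float:
    """Switching power of a CMOS block: P = C_eff * V^2 * f."""
    return c_eff_farads * v_volts**2 * f_hertz

C_EFF = 1e-9  # effective switched capacitance in farads (made up)

# A "performance-tuned" operating point vs an "efficiency-tuned" one.
# Frequency falls far more slowly than power as the voltage comes down.
hi = dynamic_power(C_EFF, v_volts=1.2, f_hertz=5.0e9)
lo = dynamic_power(C_EFF, v_volts=0.9, f_hertz=3.5e9)

print(f"high point: {hi:.2f} W, {hi / 5.0e9 * 1e9:.2f} nJ/cycle")
print(f"low point:  {lo:.2f} W, {lo / 3.5e9 * 1e9:.2f} nJ/cycle")
print(f"frequency given up: {1 - 3.5 / 5.0:.0%}, power saved: {1 - lo / hi:.0%}")
```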
The really dismal truth of the matter is that semiconductor fabrication is reaching maturity, and there aren’t any more huge gains to be made in transistor density in silicon. ASML is pouring Herculean effort into shrinking feature sizes, yet progress is much slower than in years past, and each step forward increases cost and complexity by eye-watering amounts. We’re reaching the physical limits of silicon now, and if there’s going to be another big, sustained leap forward in performance, efficiency, or density, it’s probably going to have to come in the form of a new semiconductor material with more advantageous quantum behavior.
Is there anything looking even remotely promising to replace silicon? Manufacturing base aside, what’s the most likely candidate so far?
Manufacturing is actually the name of the game with chip design. Even if a quantum computing design becomes feasible, the exotic nature of its construction will turn any discovery into an engineering nightmare.
As for the type of technology, here’s what a competitor in the race for the first blue LED said about the Nobel Prize winners: “It’s like I say to people: they had been working on the steam engine for 100 years, but they never could make one that really worked, until James Watt showed up. It’s the guy who makes it really work who deserves the Nobel Prize. They certainly deserve it.”
Not really: you have to keep in mind the amount of expertise and resources that have already gone into silicon, as well as the geopolitics and sheer availability of silicon. The closest currently available competitor is probably gallium arsenide, which has a couple of disadvantages compared to silicon:
- It’s more expensive (both because of economies of scale and because silicon is simply far more abundant).
- GaAs crystals are less stable, leading to smaller boules.
- GaAs is a worse thermal conductor.
- GaAs has no native oxide (compare SiO₂ on silicon) that can be used directly as an insulator.
- GaAs hole mobility is worse (roughly 500 cm²/V·s for Si vs 400 for GaAs), which means p-channel FETs are naturally slower in GaAs, which makes efficient CMOS structures impractical (see the sketch after this list).
- GaAs is a compound rather than a pure element, which means you get into trouble keeping the two elements in the right ratio during growth and processing.
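As a rough illustration of the mobility point, here’s a sketch using the long-channel square-law MOSFET model with the hole mobilities quoted above. Everything except mobility is an identical placeholder on both sides, so the ratio of the results is just the mobility ratio; real short-channel devices are messier, so treat this as a toy comparison.

```python
# Long-channel square-law MOSFET drive current:
#   I_D ~ 0.5 * mu * C_ox * (W/L) * (V_gs - V_t)^2
# Only the mobility differs between the two calls; C_ox, W/L, and the
# overdrive voltage are identical placeholders, so the output ratio is
# just the hole-mobility ratio.

def drive_current(mu_cm2_per_vs: float, c_ox=1.0, w_over_l=1.0, v_ov=0.5) -> float:
    mu_m2 = mu_cm2_per_vs * 1e-4  # cm^2/(V*s) -> m^2/(V*s)
    return 0.5 * mu_m2 * c_ox * w_over_l * v_ov**2

i_p_si = drive_current(500.0)    # approximate Si hole mobility
i_p_gaas = drive_current(400.0)  # approximate GaAs hole mobility

# -> 80%: the pull-up half of every GaAs CMOS gate would be the slow side,
# on top of all the process problems listed above.
print(f"GaAs p-FET drive relative to Si: {i_p_gaas / i_p_si:.0%}")
```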
You usually see GaAs grown on germanium substrates for solar panels, but rarely outside of that niche (GaAs is simply bad for logic circuits).
In short: It’s not really useful for logic gates.
Germanium itself is another potential candidate, especially since it can be alloyed with silicon, which makes it interesting from an integration point of view.
SiGe is very interesting from a logic POV given its high forward and low reverse gain, which suits low-current, high-frequency applications. You also naturally get heterojunctions, which let you tune the band gap (though that brings back the same problem as GaAs: it’s not a pure element, so you have to control the composition to get the band gap you want).
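To give a feel for that composition-controlled band-gap tuning, here’s a deliberately crude sketch: a straight-line (Vegard-style) interpolation between the room-temperature band gaps of bulk Si (~1.12 eV) and Ge (~0.66 eV). Real Si₁₋ₓGeₓ has bowing and strain effects this ignores, so it’s an illustration of the tuning knob, not a device model.

```python
# Crude band-gap estimate for unstrained Si(1-x)Ge(x): linear interpolation
# between the bulk Si and Ge values. Real alloys show bowing and strain
# dependence, so this only illustrates "tuning by composition".

E_G_SI = 1.12  # eV, bulk silicon at ~300 K
E_G_GE = 0.66  # eV, bulk germanium at ~300 K

def band_gap_sige(x_ge: float) -> float:
    """Estimated band gap in eV for a germanium fraction x_ge in [0, 1]."""
    if not 0.0 <= x_ge <= 1.0:
        raise ValueError("germanium fraction must be between 0 and 1")
    return (1 - x_ge) * E_G_SI + x_ge * E_G_GE

for x in (0.0, 0.2, 0.5, 0.8, 1.0):
    print(f"x_Ge = {x:.1f}: Eg ~ {band_gap_sige(x):.2f} eV")
```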
One problem specifically for MOSFETs is that you don’t get stable silicon-germanium oxides, which means you can’t use the established silicon-on-insulator techniques.
Cost is also a limiting factor: before even starting to grow crystals you have the pure material cost, which is roughly $10/kg for silicon and $800/kg for germanium.
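Feeding those per-kilogram prices into a quick wafer-mass calculation shows how the gap compounds. The densities and the 775 µm thickness are standard bulk/wafer values; the prices are the rough figures above, and everything about crystal growth, sawing, polishing, and yield is ignored.

```python
# Raw material cost of one 300 mm wafer in Si vs Ge, using the rough $/kg
# figures above. Standard bulk densities and a typical 775 um wafer
# thickness; crystal growth, sawing, polishing, and yield are all ignored.
import math

DIAMETER_M = 0.300
THICKNESS_M = 775e-6
volume_m3 = math.pi * (DIAMETER_M / 2) ** 2 * THICKNESS_M

materials = {
    # name: (density in kg/m^3, price in $/kg)
    "Si": (2330.0, 10.0),
    "Ge": (5323.0, 800.0),
}

for name, (density, price) in materials.items():
    mass_kg = density * volume_m3
    print(f"{name}: {mass_kg * 1000:.0f} g of material, ~${mass_kg * price:.2f} per wafer")
# Ge works out to well over 100x the Si figure before any processing at all.
```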
That’s why, despite the fact that the early semiconductors all relied on germanium, germanium-based systems never really became practical: it’s harder to mass-produce, and even when you can, it’s very expensive (which is why, when you do see germanium-based tech, it’s usually low-volume runs of high-cost specialised components).
There’s some research going into commercialising these techniques, but that’s still years away.
Easier question: What behavior exactly would allow for better ICs? The story you read in pop-sci is about quantum behavior showing up at feature scale, which seems like it should be only somewhat affected by material choice.
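For a concrete handle on “quantum behavior at feature scale”, here’s a WKB-style estimate of direct tunneling through a thin rectangular barrier. Feature size sits in the exponent, but so do two material-dependent quantities (effective mass and barrier height), which is one way to see where the material does and doesn’t enter. The parameter values are rough textbook-style numbers for an SiO₂-like barrier, for illustration only.

```python
# WKB transmission through a rectangular barrier:
#   T ~ exp(-2 * d * sqrt(2 * m_eff * phi) / hbar)
# Barrier width d (the "feature size") multiplies the whole exponent;
# effective mass and barrier height are where the material comes in.
import math

HBAR = 1.054_571_8e-34  # J*s
M_E = 9.109_383_7e-31   # kg, free electron mass
EV = 1.602_176_6e-19    # J per eV

def tunneling_probability(d_nm: float, phi_ev: float, m_rel: float) -> float:
    """WKB estimate of transmission through a d_nm-wide barrier."""
    kappa = math.sqrt(2 * m_rel * M_E * phi_ev * EV) / HBAR
    return math.exp(-2 * kappa * d_nm * 1e-9)

# Rough SiO2-like barrier: ~3.1 eV height, ~0.4 m_e effective mass.
# Thinning the barrier by a nanometre raises leakage by orders of magnitude.
for d in (2.0, 1.5, 1.0):
    print(f"d = {d} nm: T ~ {tunneling_probability(d, phi_ev=3.1, m_rel=0.4):.1e}")
```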