In 2007, Intel adopted a model for microprocessor development that it called “Tick-Tock” (not to be confused with the social media app TikTok). The idea is that two different types of innovation are needed to keep producing better and better microprocessor chips. The first type is to shrink the manufacturing process, using tighter design rules to achieve better transistor performance and smaller die sizes. Intel called this the “Tick” step; an example is the move from a 32 nm to a 22 nm manufacturing process in 2012. The second type of innovation is the “Tock” step, which introduces a new processor microarchitecture on the same process to improve performance and energy efficiency and to incorporate new features. An example is the move from the Ivy Bridge to the Haswell microarchitecture in 2013.
Intel does this because it is simply too difficult to innovate on both fronts at the same time. If a bug appeared in a chip with both a new process and a new microarchitecture, how would they determine whether the problem came from the manufacturing process or from the microarchitecture? Intel has used this strategy successfully over the past 15 years and has kept up the pace of introducing new generations of microprocessors on roughly an annual basis. Although the engineering work for each step takes longer than one year, by overlapping the activities Intel can alternate the processor introductions and maintain this one-year cadence. In recent years, Intel has updated the model from a two-step “Tick-Tock” to a three-step “Tick-Tock-Optimize” pattern, but the basic approach remains the same. You can read more about this development model on a page that describes it on Intel’s website here, and another description is available on Wikipedia here.
Although quantum processor development is not exactly the same, elements of this strategy make sense, and IBM appears to have adopted it as part of its quantum processor roadmap. The challenge in quantum is that one would like to make continued improvements in both qubit scaling and qubit quality. But as in the microprocessor case, doing both at the same time is hard, and one might make faster progress by adopting elements of this “Tick-Tock” strategy. With IBM’s recent announcement of its 433-qubit Osprey processor, some may be disappointed that while the increase in the number of qubits is substantial, there probably won’t be a very significant change in the qubit gate fidelity and other quality measures. We suggest that this introduction should be considered a “Tick”: it was designed to introduce some very specific manufacturing and design technologies to improve scalability.
Next year, IBM will announce another processor, the 133-qubit Heron, which we would call the “Tock” stage. This chip is expected to show significant improvements in qubit fidelity measures. It will incorporate additional microarchitecture structures called tunable couplers that help isolate neighboring qubits, reduce cross-talk, and improve gate speed. Because of the more sophisticated design, the Heron may have almost as many components on the chip as the Osprey, even though its raw number of qubits is smaller. Later in 2023, IBM will then implement another “Tick” cycle and is expected to announce the Condor chip with 1,121 qubits, which combines the innovations of the preceding two chips. And so the cycle continues.
Although IBM’s quantum “Tick-Tock” is not quite the same as Intel’s microprocessor “Tick-Tock”, there are parallels between the approaches, and it shows that lessons can be learned from looking at how development occurs in the classical world.
November 12, 2022