In 1965, Gordon Moore wrote an article called “Cramming More Components Onto Integrated Circuits.” Sometimes, the title of an article sort of writes itself.
Moore was the Director of Research and Development at Fairchild Semiconductor, a truly pivotal company in the history of computing and in the birth of Silicon Valley. It had been founded by the so-called “Traitorous Eight,” scientists and engineers who had left Shockley Semiconductor because of William Shockley’s authoritarian and abrasive style of management.
Shockley himself had been one of the inventors of the transistor while working at Bell Labs, and people were in awe of his accomplishments. A Nobel Prize in 1956 pushed Shockley’s already legendary status over the top, and as a result, Shockley had little trouble recruiting brilliant minds, including Moore and the rest of the Traitorous Eight. Unfortunately, he drove them away within just a couple of years.
When Moore penned his piece (here’s the original article), Fairchild Semi was capable of making transistors that were about a tenth of a millimeter in size. In 1959, Moore’s brilliant co-founder Robert Noyce had developed something called the planar integrated circuit, which could hold multiple transistors on a single chip. In 1965, Moore and Fairchild could get perhaps a few dozen transistors onto one chip.
Moore wrote about an important pattern he had observed: the number of transistors that could be economically placed on an integrated circuit was doubling roughly every year. He made this observation just six years after the integrated circuit came into existence, but once he pointed it out, the pattern was there for all to see.
Moore went on in his paper to make a simple prediction: that this trend would continue at the same pace for the next decade. Stunningly, the prediction came true, and by 1975 there were several thousand transistors on a single silicon chip. Noting that the trend was slowing, Moore revised his prediction that same year to a doubling every two years instead of every one.
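Moore’s observation is really just compound doubling. Here’s a minimal back-of-the-envelope sketch (the starting count of 50 transistors is an illustrative assumption standing in for “a few dozen,” not a figure from Moore’s paper) showing how much the doubling period matters over a decade:

```python
# Project a transistor count forward under steady doubling.
# The starting count of 50 is an assumption, not a figure from Moore's paper.

def projected_count(start, years, doubling_period_years):
    return start * 2 ** (years / doubling_period_years)

start = 50  # "a few dozen" transistors on a chip in 1965 (assumed)

# Moore's original 1965 prediction: a doubling every year for a decade.
print(projected_count(start, 10, 1))  # 51200.0 -- a thousandfold increase

# His 1975 revision: a doubling every two years.
print(projected_count(start, 10, 2))  # 1600.0 -- a 32x increase over the same decade
```

Ten annual doublings multiply the count by 1,024; five biennial doublings multiply it by only 32, which is why the 1975 revision was such a significant change of pace.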
This revised guidance has held true for the five decades since then.
Today, there are tens of billions of transistors on the most advanced single chips. These transistors are so small that we have to measure them in billionths of a meter, and even that unit won’t be small enough for much longer. There is speculation that Moore’s Law will have to end in the near future due to physical constraints at the quantum level, and for good reason: presumably, you can’t make a transistor any smaller than an atom, and quantum effects make manipulating matter at near-atomic scales increasingly difficult.
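As a sanity check on those numbers, here’s a rough sketch (both endpoints are rounded, illustrative figures rather than counts for any particular chip):

```python
import math

# How many doublings separate "a few dozen" transistors from "tens of billions"?
start = 50              # assumed 1965 count, "a few dozen"
today = 25_000_000_000  # assumed present-day count, "tens of billions"

doublings = math.log2(today / start)
print(round(doublings, 1))   # 28.9 doublings

# At one doubling every two years (a simplification -- the first decade ran
# closer to a doubling per year), that's roughly the span since 1965.
print(round(doublings * 2))  # ~58 years
```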
How on earth did we get from a couple dozen transistors on a circuit to tens of billions on the same little dime-sized space?
Part of this certainly has to do with the concept of a self-fulfilling prophecy. Moore created an excellent roadmap for the semiconductor industry, one that was ambitious but attainable. He saw the engineering challenges firsthand, and he was able to extrapolate how those challenges would evolve as the technology scaled.
Naturally, chip makers set their goals to align with Moore’s predictions. Competition came from firms that had spun off from Fairchild (people called them “Fairchildren”), including Intel and AMD, market leaders in semiconductor development for decades. Both companies steered themselves toward these milestones as each one approached, and the rest of the market followed suit.
In the six decades since Moore made his initial observation, we’ve seen exponential growth in the number of transistors out in the material world. Your phone alone has several billion of them, and the transistors around the world are counted not in the trillions but in the sextillions. A sextillion is a one followed by 21 zeros.
We’ve also seen the effects of all these little on-and-off switches. Computers (and smartphones) have followed something like Moore’s Law too, with the price of computing power falling at a similar rate. This means that the same $666.66 from the 1970s buys you something like a million times more computing power today.
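That million-fold figure lines up neatly with the doubling story. Here’s a quick sketch of the arithmetic (the fifty-year window is an assumption, roughly “the 70s to today”):

```python
import math

# A million-fold gain in price-performance: how many doublings is that?
doublings = math.log2(1_000_000)
print(round(doublings, 1))       # 19.9 -- about 20 doublings

# Spread over roughly 50 years (an assumed window, "the 70s to today"),
# that's one doubling every ~2.5 years, close to Moore's revised cadence.
print(round(50 / doublings, 1))  # 2.5 years per doubling
```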
As technology has sped things up in the world, it has become more and more difficult to keep up with the latest innovations. Generative AI is the latest example of this phenomenon: there’s a new white paper published every day, and more ideas about what to do with our technology come about because of the technology itself, a virtuous cycle of ever-increasing innovation.
Each new paradigm of communication seems to have a shorter run than the last, and the pace of change keeps quickening. Even if Moore’s Law officially comes to an end due to quantum effects or economic concerns, this parallel pattern of accelerating returns doesn’t necessarily have to slow down; innovations in other areas might make up for the shortfall.
I don’t know where all this is going, but it sure looks like we’re in for at least another decade of rapidly accelerating advances in technology. The most recent waves that have come about (the internet, smartphones and social media, and AI) have been plenty disruptive to our ways of life, and we can certainly expect even more change in the years ahead.
There is promise and peril in all this change. The disruption caused by entire job sectors disappearing isn’t a new thing for us, but it’s happening at a much faster rate today. War is being fought in an entirely new way, and government surveillance is now possible at a level unimaginable to us a few decades ago.
At the same time, Arthur C. Clarke’s famous line immediately comes to mind: “Any sufficiently advanced technology is indistinguishable from magic.”
The idea that we might soon have access to much better medical treatment, that those with disabilities may soon be able to overcome them, and that we can essentially perform magic in our daily lives is intoxicating and enticing. Being able to research a hundred times faster than when I was a kid is something I experience every single day when I write for you.
We need to keep the double-edged nature of technology in mind. We need to remember that there are trade-offs for everything, and to ask ourselves whether there’s a different approach than the one we’re taking at any given moment. At the same time, we have to accept that we may not be able to stop all this change; instead, we need to be able to adapt to it.
Think about all those little transistors everywhere, all sextillions of them. Now think about how that number is likely to double at least every two years for the next decade. We’re going to need more zeros.
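For anyone who wants to check that closing arithmetic, here’s a quick sketch (the sextillion starting point is just the rough worldwide estimate from above):

```python
# A sextillion is a 1 followed by 21 zeros. A decade of doubling every
# two years is five more doublings -- a factor of 32.
sextillion = 10 ** 21
after_a_decade = sextillion * 2 ** 5

print(f"{after_a_decade:.1e}")  # 3.2e+22 -- one more zero than we started with
```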