It’s getting kind of weird out there.
Thirty years ago, I was introduced to dial-up internet. This completely changed my world forever, but it was mind-numbingly slow by today’s standards. Gen Xers, do you remember how exciting it was when they upgraded us from 28.8 kbps to 56 kbps?
It took several minutes to display a high-quality still image on the screen, and still images were pretty much all you could get back then.
Today’s connections are hundreds (and sometimes thousands) of times faster, and we watch videos in real time. And while connection speeds have kept climbing, computer processing speeds have done a marvelous job of keeping up with them (or vice versa).
This blend of ever-faster, better, and cheaper hardware and software is a close cousin of the phenomenon known as Moore’s Law, where the number of transistors on a chip has roughly doubled every two years for more than four decades.
All this doubling means the change isn’t linear—it’s exponential. Every change builds on the previous change, as today’s computer chips help us design tomorrow’s, and today’s software helps us write better code in the future.
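To get a feel for what repeated doubling actually does, here’s a minimal Python sketch. The 1971 starting point and the two-year doubling period are rough, illustrative assumptions, not precise industry figures:

```python
# Rough illustration of Moore's Law-style doubling (illustrative numbers only).
# Assumes roughly 2,300 transistors per chip in 1971 and a doubling every two years.
start_year, start_count = 1971, 2_300
doubling_period_years = 2

for year in range(start_year, 2032, 10):
    doublings = (year - start_year) / doubling_period_years
    count = start_count * 2 ** doublings
    print(f"{year}: ~{count:,.0f} transistors per chip")
```

Run it and the output jumps from thousands to billions within a single working lifetime, which is the whole point: each doubling adds as much as all the doublings before it combined.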
If you zoom out a little bit, this has been going on for a very, very long time. Computing power beyond the human brain really began picking up pace at the turn of the 20th century, when Herman Hollerith designed a working punch-card-driven machine to help tabulate the 1890 census.
Hollerith’s machine was a sort of computer, with those punch cards as the data input. It could handle about 50 cards in one minute of operation, and each card could hold as many as 80 data points. If every field on every card was filled in, that meant this proto-computer could process 4,000 data points per minute.
By the time ENIAC showed up on the scene in 1946, it was capable of about 5,000 calculations per second. This was thanks to ENIAC’s design: it was digital and electronic, and it represented a new way of computing. From here on out, all of the fastest and most powerful machines would be both digital and electronic, and general-purpose: capable of being programmed to do any task.
Your phone can probably do billions of calculations per second, depending on how old it is. The Fugaku supercomputer, today’s most powerful machine, can do hundreds of quadrillions of calculations per second.
If you zoom out and consider a few observations, your eyebrows might go up one at a time at first, but eventually both of them are going to go up at once. That’s because the conclusions hiding in plain sight are nothing short of shocking.
Zoom out and you can see that our ability to compute has grown exponentially over time. Hollerith’s machine handled fewer than 100 items per second, and 56 years later, ENIAC could do 5,000. Another 56 years after ENIAC’s debut, the fastest computer in the world, the Earth Simulator, could do about 36 trillion calculations per second.
This is not linear growth. If you plot this data on a chart, it looks like this:
That’s not terribly helpful, because it looks like everything is happening right now: quadrillions of additional calculations per second are needed to move the needle today, and soon that number will be in the quintillions. Switch the vertical axis to a logarithmic scale, though, so that steady exponential growth shows up as a straight line, and you get this much more useful graph:
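If you’d like to see that difference for yourself, here’s a quick matplotlib sketch using the approximate figures quoted in this piece (about 67 calculations per second for Hollerith’s machine in 1890, 5,000 for ENIAC in 1946, and 36 trillion for the Earth Simulator in 2002); the numbers are rounded illustrations, not careful benchmarks:

```python
import matplotlib.pyplot as plt

# Approximate figures from the text: Hollerith's tabulator (1890),
# ENIAC (1946), and the Earth Simulator (2002).
years = [1890, 1946, 2002]
calcs_per_second = [67, 5_000, 36_000_000_000_000]

fig, (linear_ax, log_ax) = plt.subplots(1, 2, figsize=(10, 4))

linear_ax.plot(years, calcs_per_second, marker="o")
linear_ax.set_title("Linear scale")
linear_ax.set_ylabel("Calculations per second")

log_ax.plot(years, calcs_per_second, marker="o")
log_ax.set_yscale("log")  # exponential growth plots as a roughly straight line
log_ax.set_title("Logarithmic scale")

plt.tight_layout()
plt.show()
```

On the linear panel the curve hugs zero until the very end; on the logarithmic panel the same three points fall along something close to a straight line, which is the signature of exponential growth.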
You may note that “per constant dollar” has been added in here, and that other data points are also included. The price aspect is incredibly important to our conversation today, since if nobody could afford computers, nobody would be using them. Fortunately for us, the price per calculation has fallen even more precipitously than the performance has increased.
For $666.66 in 1976, you could get an Apple I computer, which could perform somewhere north of 250,000 calculations per second. By 1996, those 250,000 calculations per second cost you about $4, since laptops of that era were capable of many thousands of times more calculations per second than the original Apple. By 2016, that cost was around 2 cents.
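A quick back-of-the-envelope calculation shows just how steep that decline is. The prices here are the rough figures from the paragraph above, so treat the result as an illustration rather than a measurement:

```python
import math

# Approximate cost of ~250,000 calculations per second, per the figures above.
cost_by_year = {1976: 666.66, 1996: 4.00, 2016: 0.02}

years = sorted(cost_by_year)
for earlier, later in zip(years, years[1:]):
    ratio = cost_by_year[earlier] / cost_by_year[later]
    halving_time = (later - earlier) / math.log2(ratio)
    print(f"{earlier}-{later}: cost fell about {ratio:,.0f}x, "
          f"halving roughly every {halving_time:.1f} years")
```

Both twenty-year spans work out to the cost halving every two to three years or so, which is the same exponential story as Moore’s Law, just told in dollars instead of transistors.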
The same is true of memory. Every important metric by which computers improve continues to improve at an exponential rate, and at an even faster rate when you factor in price. In a nutshell, computers are getting cheaper, faster, and more powerful at an ever-faster rate.
This brings up one really, really important question: how long can this go on?
After all, trees don’t grow to the sky, and there seems to be some kind of hard limit to nearly any process we know of in nature. It seems evident that Moore’s Law must end sometime in the next few years. The self-fulfilling prophecy that has guided Silicon Valley (and the global economy) isn’t going to work for much longer.
Even so, there are tantalizing reasons to believe our exponential acceleration might continue, even once it’s no longer possible to double the number of transistors on a chip. That doubling won’t be possible for much longer because two atoms can’t occupy the same space, and transistor features are fast approaching those sorts of quantum limits.
Could we figure out another way to keep speeding things up just as quickly as we have in the past? There’s some speculation about 3D chips, quantum computing, and other paradigm-expanding concepts that could pick up the baton Moore’s Law will inevitably drop.
There’s also the idea of the virtuous cycle. This is the concept that better hardware can allow us to create better AI, which can then help us to design better hardware, which can then help us to create more powerful AI systems, and so on. This has been happening ever since the first programs were devised, and there’s no good reason to believe Moore’s Law is the only way it can continue.
If download and upload speeds continue to increase, and if our devices keep getting faster and more powerful, and yet cheaper, what does that mean? For one thing, instead of trillions of computers in the world, we are very likely going to have quadrillions of them. What will it mean to have a million computers for every human on the planet? How about a billion computers per person?
It might mean implantable devices in our bodies, for one thing. Ubiquitous hardware in every room might also mean our every need can be met instantly, and we might even be able to interact with those tiny machines using our bodies’ own tiny machines.
Our minds are already augmented by technology, but it’s possible that the level of augmentation might look like that exponential chart, increasing rapidly as anyone who doesn’t choose to augment is left in the dust, utterly unable to keep up with the rest of us.
This moment is what Ray Kurzweil refers to as the Singularity. The term is borrowed from physics, where it describes a point at which certain physical quantities become infinite or undefined, as at the center of a black hole or at the Big Bang. A singularity in physics challenges our understanding at a fundamental level, often necessitating new theories to explain it properly.
The idea, then, is that if we reach this hypothetical moment, nobody knows what might happen next. Technology will be improved so quickly that we just can’t predict what sorts of changes will be wrought.
We’ve seen a lot of improvements in a lot of different areas lately, and that’s not a coincidence. When an idea improves a process, that improvement can now be shared with every other field far more quickly than in the past. Progress in one industry often means progress in several other industries, and the pace of change itself tends to speed up.
This trend is almost certain to continue, with ever-shorter times between significant innovations, and with a decade’s worth of change happening in a year, and eventually in a month, and so on.
So, is this moment getting close, where we won’t be able to understand the changes that are taking place? I might argue that it’s a slippery slope, and we are clearly on it today.
Nobody quite understands exactly how Large Language Models (LLMs) like ChatGPT work. While we certainly understand a great deal about their architecture, there’s still a “black box” inside, wherein they produce answers using emergent properties that weren’t explicitly programmed. We don’t know how these properties arise, nor how LLMs arrive at many of their answers.
The same “black box” powers all of today’s generative AI, including image and video generators. One reason this matters so much is that everything runs on code, and LLMs can now help you write better code (professional coders, don’t shoot me in the comments!).
Similarly, no human is going to design a chip with a trillion transistors on it. The work is vastly too complex for an individual human, so we have AI to help us with this. AI designs better chips by a process we don’t fully understand, and then those better chips help us to create better AI systems we surely understand even less.
You can see how a cycle like this could quickly spin out faster than we can adjust to it, and we may well be living through the early stages of this kind of liftoff. Here on Earth, we’re constrained by resources and physical laws, so there must be some point at which the whole thing slows down, but I frankly don’t have any idea when that might be.
This thing (technology) may go on speeding up until it’s way, way out of our hands. That is not to say I’m a doomsayer; mind you, there’s an equal amount of promise and peril here, and the promise is very exciting. We’re already seeing people walk or see for the first time, and others communicate in ways that were unimaginable a decade ago. All of this is thanks to this exponentially accelerating technology, and there’s going to be a great deal more of it in the near future.
I wrote a bit about where we are today in this journey up the technological ladder, and I quoted Guns N’ Roses in doing so.
So, are we close to some kind of liftoff whereby we won’t know what’s what? My answer is that it’s complicated, and I really don’t know. We’re clearly living through faster times than ever before, but are we keeping up as well as we have in the past? Again, I have no idea.
Do you notice the faster pace of change out there? Is it noticeable how quickly new ideas get rolled out into the world, and is it going to get so fast that we won’t be able to keep up? Let’s talk!