

People are pretty skeptical about AI.
ChatGPT hallucinates. DALL-E can’t draw hands. Even our old friends Siri, Google, and Alexa have made a lot of frustrating mistakes over the years.
We’re being told that some kind of liftoff is taking place in 2023, with mind-blowing advances in all sorts of fields just around the corner. It feels like we’ve seen this movie before.
Four of the most dangerous words in the English language are: “this time, it’s different.” Nevertheless, that’s the case I want to make to you today.
This skepticism is not without reason; we've been promised significant breakthroughs before, only to be left waiting. In fact, the phenomenon of technology's promises outrunning its actual progress is so common it has a name: "Amara's Law," after the futurist Roy Amara, which posits:
We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.
In the realm of AI, the cycle of hype and disillusionment has happened more than once. But there really is something about "this time" that might make it different.
To appreciate this, let's take a brief detour to the AI landscape before 2017. AI was a fractured field, composed of many different sub-disciplines: computer vision for interpreting images, natural language processing for understanding and generating text, reinforcement learning for training game-playing AIs, and many more. Each of these fields had its own specialized methods, techniques, and advancements. Progress in one didn't necessarily translate to progress in another. In some ways, it was less a single field than a confederation of related ones.
Then, in 2017, an architecture called the "Transformer" was introduced, setting off a shift that's been called the "Great Consolidation." Transformers were first applied to one of AI's sub-disciplines, natural language processing (NLP), but they were applied in such a way that their impact rippled out to the other sub-disciplines as well.
The key insight of Transformers was treating many different kinds of data as a form of language. When we think of language, we usually think of words and sentences, not images or DNA sequences or stock market fluctuations. But at a fundamental level, all of these things are just sequences of data, which can be processed and understood much like sentences in a text.
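To make the "sequences of data" idea concrete, here's a minimal Python sketch. The tokenizers below are illustrative toys of my own invention, not how production models actually work: text splits into words, an image splits into pixel patches (the approach popularized by Vision Transformers), and a DNA strand splits into overlapping three-letter "words". Three very different subjects end up in one common representation that a single sequence model could consume.

```python
# Toy illustration: very different kinds of data can all be
# represented as sequences of tokens, the common currency of
# Transformer-style models. (Illustrative only, not a real library.)

def tokenize_text(sentence):
    """Text is already a sequence: one token per word (simplified)."""
    return sentence.lower().split()

def tokenize_image(pixels, patch_size=2):
    """An image becomes a sequence by slicing it into fixed-size
    patches of pixel values."""
    return [tuple(pixels[i:i + patch_size])
            for i in range(0, len(pixels), patch_size)]

def tokenize_dna(strand):
    """A DNA strand becomes a sequence of overlapping 3-mers."""
    return [strand[i:i + 3] for i in range(len(strand) - 2)]

# Three different subjects, one shared representation:
text_tokens = tokenize_text("this time it is different")
image_tokens = tokenize_image([0, 255, 128, 64, 32, 16])
dna_tokens = tokenize_dna("GATTACA")

for seq in (text_tokens, image_tokens, dna_tokens):
    print(len(seq), seq)
```

Once everything is "just a sequence," the same modeling machinery applies to all of it, which is the whole point of the consolidation.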
With this paradigm shift, the many became one. Computer vision, which had its own specialized methods and models, could now be handled using the same techniques as NLP. The same went for bioinformatics, or financial analysis, or any number of other fields. Each of these areas could now advance in lockstep because they were all using the same fundamental approach. Instead of a confederation of related fields, AI became a unified discipline.
This consolidation has brought natural language processing, the original domain of Transformers, to the center of the AI universe. By revolutionizing NLP, Transformers have indirectly revolutionized almost everything else.
While skepticism of AI is understandable, it misses this bigger picture. Yes, AI models like ChatGPT and DALL-E have their flaws. But it's important to remember that these models aren't just hallucinating or drawing hands poorly in isolation. They're part of a larger trend of consolidation and advancement across AI as a whole.
Each improvement to these models doesn't just make the specific task better; it potentially improves a wide range of applications in various fields. As this unification and advancement progress, we need to take a minute to understand the broad implications of these changes.
Tristan Harris and Aza Raskin of the Center for Humane Technology provide an engaging discussion of this consolidation and the dilemmas it brings, offering a valuable perspective on the potential impacts and ethical considerations of accelerating AI development. It's worth a watch to understand the implications of the advancements we're discussing. Thanks to a regular reader for bringing this presentation to my attention!

So, if there's an advancement in one field, there is now an advancement in all fields.
That’s why 2023 is such a “before and after” year for me. I wrote about this phenomenon here in case you’re curious.
The possibilities for the future are staggering. The AI liftoff that's being promised doesn't merely imply incremental advances in individual fields but the convergence of those fields into a single, unified discipline. This not only multiplies the potential impact of each discovery but also opens up new avenues of exploration that were unimaginable in the age of fragmented AI research.
The central role of language in this new era of AI cannot be overstated. Just as the invention of writing systems revolutionized human societies thousands of years ago, so too might the interpretation of everything as language revolutionize AI and, by extension, our world. It's as if the AI research community has discovered a Rosetta Stone that deciphers the languages of DNA sequences, video generation, and everything else, transforming them into something that can be understood and manipulated using the same set of tools.
Despite the very real concerns about AI's hallucinations and limitations, the "Great Consolidation" underway signals that we are at a turning point in the history of artificial intelligence. Rather than viewing AI as a collection of separate and specialized tools, we can now see it as an interconnected whole, like a global brain where each advancement in one area can benefit all others.
So yes, we have been here before. We've had AI winters, periods of disillusionment, and moments of over-hype. But it's also true that the landscape of AI research today is markedly different from what it was a few years ago. This time, there's a genuine cause for optimism. And this optimism isn't born out of the promises of the next big thing, but out of a recognition of the paradigm shift that's already happening.
Every AI, Everywhere, All At Once.
The boundaries that used to separate the different branches of AI are dissolving, leading to a future where AI advances might be universally applicable.
If ever I’ve seen one, this is a clear case of the “gradually, then suddenly” phenomenon I wrote about earlier:
So, all of this is happening very, very fast. It’s all coming together, but this consolidation isn’t unambiguously good. Regular readers will know full well that I’m interested in the double-edged sword nature of AI and technological change, and with all of this promise comes plenty of peril. Beware the binary!
For balance, I’ll leave you with this piece I wrote recently about the creepy stuff that’s right around the corner, just in case this piece leaves you feeling a little too optimistic.
Great article.
I’m very disappointed about you not making the connection between “Transformers” and there being “more than meets the eye”. 🤣 but all kidding aside, I feel it’s apt. A lot of the practical applications of AI in society today are essentially small advances, but... they’re everywhere. A new reality is growing around us and many are unaware because it’s happening on small levels and, as fast as it is, it’s not instantaneous. There will be no single magic moment where it all changes. It’s already happening. There needs to be more recognition of that so it can be better managed and not run wild like a weed.