I’ve watched the debates surrounding generative artificial intelligence intensify over the past few years. The truly big questions aren’t really new, though: ever since antiquity, people have been wondering whether we humans might one day be able to make a machine that could think.
In the ancient world, these questions were asked in the form of myths and legends. In Greece, the god Hephaestus created mechanical servants to help him in his workshop. Maybe that’s what inspired Heron of Alexandria to build a working steam engine and a very early version of a self-driving car 2,000 years ago.
Even back then, people told cautionary tales of the hubris of creating something with agency, something that could think and act on its own. Jewish folklore has an analog in the Golem, a creature made from clay and brought to life by humans using magic. The Golem was powerful, but also potentially incredibly dangerous.
In more recent literature, Mary Shelley crafted a similar creature at the height of nineteenth-century scientific hubris, when there was a feeling that science had already given us, or soon would give us, an answer to every important question. Frankenstein’s monster raises all kinds of ethical questions about creating something that can think and act on its own.
All of these stories and ideas predate actual artificial intelligence by centuries, but maybe you can see how they’ve set the stage for the way we think about AI today. Very smart people have been contemplating these same ideas for a very long time now.
The reason I bring up antiquity and the more recent past here is simply to show that we’ve had a lot of time to wrap our minds around what this might mean for humanity, and the discussions you see everywhere on the internet today are often echoes of conversations that have taken place a thousand times already.
The questions today are things like:
Is an AI going to be conscious? Will it have agency? Is it intelligent? Can AI be creative?
Long before you get to any sort of Skynet scenario, you run into these very interesting (and very fundamental) questions. What I want to point out today is that not only are these questions difficult to answer about AI, but we don’t even know how to answer these questions about ourselves.
The way I see it, artificial general intelligence is the culmination of thousands of years of human thought. It is a very, very human invention—maybe the most human invention we have ever come up with, for it consolidates all of our existing knowledge (or at least as much of it as it can get its mind on) and then tells us things about ourselves that we can’t easily see ourselves.
Now, that might sound like woo and profundity, but I don’t really think of it that way. Instead, ChatGPT is a bit like the Library of Alexandria insofar as you can ask the librarian any question about something humans have done or thought about, and the generative AI librarian will immediately bring you a book with the answer, turned to the right page with their finger pointing at the thing they want you to read.
The magic librarian does all this in an instant, and it has access to billions of times more information than the actual Library of Alexandria ever held. It’s true that this librarian will make mistakes, but they are now fewer and farther between than a human assistant’s, and our librarian is getting better every day.
This is where most of the value of generative AI comes from for me: search, only a hundred times better than search used to be.
All of this is pretty boring compared to the tantalizing idea that we might be creating a new form of life, isn’t it?
I couldn’t agree more that this would be very exciting, but I have a question: how do we measure whether consciousness has been achieved? Is there any way to tell if an AI system is conscious or not?
Consciousness is way too thorny for today. I will certainly be writing about it in the future, at the right time, but for now, let’s focus on a lower-hanging fruit: creativity.
Can AI be creative? This might be the single most debated question among artists today. If not, it’s up there.
Let’s zoom out far enough that we can see the entire puzzle here. We’re asking if something we’ve created ourselves can be creative, which might seem kind of silly on the face of it, but why should that be so?
And while we’re here, what does creation even mean? If we’re talking about making something that wasn’t there before, all living things create, and so do plenty of non-living things. Animals build nests, and plants create a habitable atmosphere for us. Stars and planets self-organize into beautiful geometric forms we can recognize, dictated by the laws of physics.
No, the idea is that with creativity, we expect the unexpected. Instead of making something that should be there, we make something surprising and call it creative. We don’t follow the same boring process every time; we come up with something that will surprise folks.
And down the rabbit hole we go: doesn’t an AI generate images, words, or video that surprise us? Sure, but there’s a case to be made that creativity requires intention. Did the AI system have intention? If so, how?
Notice: I said there’s a case to be made. These words are heavy, folks. This indicates that there’s debate as to what the word creativity even means.
If you believe that creativity requires intention to be considered creativity, I take no issue with that whatsoever. However, it’s important to understand that not everyone who describes creativity is talking about something that requires intention.
This specific conversation is usually framed as a philosophical debate. One side says that intention is needed in order to create something new and valuable, while the other side claims that whatever you end up with can be creative, regardless of how you got there. I think there are genuine philosophical stakes here, but something else is going on, too.
People are just arguing over the definition of creative or creativity.
If someone asks me whether I think AI can be creative, I will always ask them what they mean by creativity. This almost always opens a doorway to a much richer, deeper conversation, because we’re no longer running the risk of talking past one another. Instead, we’ve first calibrated our ideas of how to define something so complex.
Then, a funny thing happens: we’re not usually talking about AI anymore, but instead about ourselves. How does one define creativity in the first place? If it’s through a human lens, there’s still plenty of complexity and potentially crossed wires contained in this very loaded word.
AI philosophy quickly becomes, simply put, philosophy.
This type of argument, where two people enter in good faith and, instead of engaging with each other’s ideas, fight over an incomplete understanding of the other’s point of view, is pernicious and pervasive. Political discourse is full of outright misdirection and shenanigans, but it’s also riddled with this type of argument.
The same thing happens frequently with regard to philosophical or theological ideas, and that’s more or less what’s happening here with this “debate.”
Have you seen similar traps people have fallen into, where folks are just arguing over definitions? Have you fallen into one of these traps yourself in the past? Let’s talk!