LLMs are pretty wild. You ask a question, some processing happens inside a black box, and an answer comes out the other side.
The human brain doesn’t work exactly like this, but you may have more in common with Gemini or ChatGPT than you think. Don’t get me wrong: I’m going to do plenty of simplifying here, mainly to make the analogy clearer, so if you’re itching to correct something I’ve said, go right ahead:
When the question goes into the black box, the LLM processes the information, then spits something out at the other end. This at least rhymes with how your brain does it.
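If you want to poke at that black box yourself, here’s a minimal sketch using Hugging Face’s transformers library. The model name (gpt2) is just a small, public stand-in I’ve picked for illustration; any causal language model would do. Text goes in one side, generated text comes out the other:

```python
# A toy peek inside the "black box": a question goes in, text comes out.
# "gpt2" is just a small stand-in model for something like Gemini or ChatGPT.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

question = "Why is the sky blue?"
# The pipeline returns a list of candidate outputs; take the first one.
answer = generator(question, max_new_tokens=40)[0]["generated_text"]
print(answer)
```

Everything interesting happens inside that one `generator(...)` call, which is rather the point: from the outside, it’s inputs and outputs all the way down.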
Instead of typed characters as inputs, you have sensory detectors constantly reporting to your brain what’s going on in the world. Seeing, hearing, tasting, smelling, and touching are the ones we commonly cite, but there are plenty of other ways for information to get in.
Proprioception tells you where your limbs are, while chronoception gives you a sense of how much time has passed. We have little gyroscope-like canals inside our ears that tell us when we’re off balance, and sensors inside our bodies that let us know when we’re hungry or thirsty.
All of these signals pass through the dark, moist computer smashed inside your skull. There are no exceptions: if you experience anything at all, it is processed in the brain.
Even wilder, when you experience one of these senses (something feels hot to the touch, you feel a little off today, you inhale deeply inside the bathrooms of the Roman Colosseum), all of it is presented to you by way of the brain. “Take a look out there on my behalf,” you might say to your eyes, which gather some raw data and convert it into an electrical signal.
Your brain then decides which bits to pay attention to and which parts to edit out. That’s not all it does, though: it also translates those simple electrical signals into something more useful. It takes the raw data and turns it into a portrait of a girl with a pearl earring:
When we’re kids, we’re taught that seeing is believing: what’s out there in the world is faithfully represented by our five (it was always only five) senses, and that’s that.
But that’s not quite right. Instead, it’s more like your brain gets a kind of prompt from your inputs and converts it into something useful, something that makes sense. Things are omitted, simplified, or summarized so that you end up with something like a useful map of the world, but it’s important to remember that the map is not the territory.
Some 2,400 years ago, Plato made a bold claim about the way we perceive reality, likening all of us to prisoners in a cave who can see only shadows. It took science a long time to come around to his point.
Reality’s New Clothes
The prisoners stared intently at the dark, two-dimensional forms playing out a drama for them to observe.