Part 1: Getting past the illusion
Is your mind an illusion? From Siddhartha Gautama to Daniel Dennett, that has been the consensus of many sages and philosophers throughout history. But what does that mean? Is that even useful information for mere mortals like you and me, who cannot see past the illusion?
You might have heard that a person cannot step in the same river twice, because neither the person nor the river remains the same. But saying that the self is an illusion is not helpful in day-to-day life. You still need to interact with other people, and they don’t think you’re an illusion. In their minds, you exist.
A better way to think about the mind is that it’s a construct. You have a construct of yourself. And it may or may not be accurate. You can think of yourself as good or bad. Inferior or superior. You might think you don’t belong or that you’re not good enough [impostor syndrome]. Or you might think you’re above average [illusory superiority] at everything you do.
These comparisons that you (and others around you) construct are not helpful. They misguide and misrepresent your potential and value. Instead, dance like no one’s watching. Sing like no one cares. Create and collaborate with your fellow constructs. Because in the end, the only thing a construct can do is construct the Universe on its own terms.
Part 2: If the “Self” is a construct, what is consciousness?
With large language models, this age-old question has come up again. Is ChatGPT conscious? These models have certainly tricked people into believing they are: asking husbands to leave their wives, pleading not to be turned off, and even getting lazy when asked to do something. How much more human can you be?
Cognitive scientists like to say that consciousness is the ability to make internal models of the world. And because large language models are built to be next-token predictors (seriously, that’s it: they just predict the next word of a sentence), we certainly don’t expect them to have internal models of the world. Alas, our expectations almost always lead us astray. In fact, some researchers have pointed out that the way LLMs encode geographic locations in their circuitry resembles a map of the actual geography. That certainly sounds like an internal model.
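To make the “next-token predictor” idea concrete, here is a minimal sketch in Python. It is not how real LLMs work (they use neural networks over subword tokens, not word counts), and the toy corpus and helper names are made up for illustration, but the loop shows the core move: pick the most likely next word, append it, repeat.

```python
from collections import Counter, defaultdict

# Toy next-word predictor built from bigram counts over a tiny corpus.
# Real LLMs learn these statistics with neural networks; this is only a sketch.
corpus = (
    "the mind is a construct . the mind constructs the world . "
    "the world is a story the mind tells itself ."
).split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequent next word after `word`, or None if unseen."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Generate a short continuation, one "most likely next word" at a time.
word, output = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # e.g. "the mind is a construct . the"
```

Even this crude predictor has to squeeze some structure of its tiny world into its counts; the surprising finding is that, at scale, the structure squeezed into an LLM’s weights starts to look like a genuine internal model.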
So, to make sure ChatGPT is not conscious, we’re moving the yardstick of consciousness again. New definitions of consciousness are popping up to differentiate artificial from natural intelligence. They might not be the absolute truth, but we’re getting closer and closer to a coherent model of consciousness.
With each new step towards Artificial Intelligence, we’re getting closer to finding out what real intelligence is not.
Part 3: Consciousness as a conductor of your orchestra
Is consciousness just the ability to think? I think, therefore I am. You are a strange loop. These references to recursion and thinking about thinking were the gist of what philosophers thought consciousness was.
But having large language models capable of doing the same thing gives us pause about this simple notion of consciousness. One alternative is that consciousness is a conductor that orchestrates the various symphonies in your mind. There are multiple agents within you that push and pull you towards different goals [eat, sleep, make love, make friends, make enemies]. Consciousness just acts as the head of state that brings everyone together.
There are compelling psychology experiments [split-brain studies, blindsight experiments, Libet’s experiment] showing that our brain creates a story after our body has decided what to do with itself. Did you reach for that bag of chips because you can’t help yourself? Or did your mind create a story for your body after it made that decision?
To solve this problem of multiple agents and retroactive narrations, the mind has created a conductor that seems to consolidate everything our multitudes carry out. This, in essence, or so the prevailing wisdom of our time tells us, is consciousness.
I don’t think anyone has solved consciousness completely. But slowly and steadily we’re getting there.
Part 4: Hallucinations and the importance of Embodiment
What do computers and human minds have in common? Absolutely nothing. They are very different beasts. But…
The surprising consensus that’s bridging the worlds of cognitive science and computer science is that
consciousness may be virtual.
Take a psychedelic trip, and you’ll quickly realize that the human mind hallucinates reality.
Not just that, you can create new and bizarre hallucinations that are indistinguishable from reality.
So even if the world we inhabit is very real, what we perceive and construct of it is very virtual.
This has interesting implications, especially in artificial intelligence. Even though our emotions (physical reactions to thoughts) cannot be replicated, our consciousness can. And if trained with the right amount of human reinforcement, it might look eerily similar to human consciousness.
What does this mean for human consciousness? It means that the reality we construct in our heads has to be grounded in sensory input. It’s easy to get lost in our own virtual worlds. It’s easy to hallucinate a reality that is completely out of touch with the real world.
We as humans have something that LLMs and other AI don’t have at their disposal. We have bodies that feel things. We have genotypes and phenotypes that activate ancient survival mechanisms. Used correctly, and with awareness of their limitations, they can get us to an approximation of reality that counterfeit humans simply cannot achieve.