Can AI be conscious? Some thoughts about intelligence, our brain and perception.
Posted on: 2025-03-26

This week, I've been thinking about a question that's more philosophical than my usual fare: How likely are AI models to be conscious? The premise is simple. Some people believe that AI models may one day surpass us, while others believe that human beings possess something unique, a human spark if you will, that can never be replicated. But is that the case? Do we have any data, any scientific evidence one way or the other? What do philosophers think of the question? It seems to me that this characteristic, being conscious, might hint at where things are headed as AI models improve, especially large language models (LLMs), which have already become a part of our lives.
The first step to answering this question is to define what consciousness even means. The dictionary definition is surprisingly simple:
consciousness /kŏn′shəs-nĭs/
A sense of one's personal or collective identity, including the attitudes, beliefs, and sensitivities held by or considered characteristic of an individual or group.
This supposes that there are specific criteria for deeming someone or something conscious: self-awareness, perception, emotions, and even experience itself. It supposes subjective experience and reasoning abilities, but it's more than just cognitive output. Most philosophers argue that consciousness is linked to something called qualia: the subjective, felt qualities of individual experiences. Examples of qualia include the perceived pain of a headache, the taste of wine, and the redness of an evening sky. On the surface, one would need abstract qualities like subjectivity, integrated information, and self-awareness to be deemed conscious, something we would tend to consider unique to humanity. In fact, when I discussed this topic with several AI models, they all came to the initial conclusion that AI models have a very low probability of being conscious, with Claude 3.5 even giving a quantitative estimate of a 0.01% to 0.1% chance for current models.
The materialistic view
But that didn't sit well with me. I tend to focus on physical evidence, so I wanted to deconstruct what consciousness actually means. Traditionally, consciousness has been viewed as a magical, ineffable quality: something that cannot be reduced to mere physical processes or to specific features of an intelligent being. In that view, you don't simply go from a non-conscious state to a conscious one by learning to be self-aware. So the key is to find out whether the presence of consciousness can be explained through purely materialistic means, or if it can only be linked to something patently impossible to replicate in a machine, a supernatural phenomenon, something like a soul.
To this end, I found a groundbreaking experiment by neuroscientists Itzhak Fried, Christof Koch and Gabriel Kreiman at Caltech that provides compelling evidence for a materialist perspective. Researchers working with epilepsy patients made a stunning discovery. They could predict when a patient would become consciously aware of a stimulus by observing specific patterns of neural activity. Even more remarkably, they found individual neurons that would fire for incredibly specific concepts. Imagine a single neuron that only activates when you see an image of Jennifer Aniston. This finding suggests that our seemingly complex conscious experiences are ultimately the result of precise neural interactions. No different, in principle, from how a computer processes information.
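To make that claim a bit more concrete, here is a minimal sketch of the general idea, not the researchers' actual method: generate hypothetical spike counts in which a few "concept neurons" fire more strongly on consciously perceived trials, then train a simple linear classifier to decode awareness from the firing rates. Every neuron, trial, and number below is made up for illustration.

```python
# Purely illustrative: synthetic spike counts, not the actual patient data.
# The idea: if awareness corresponds to specific activity patterns, a simple
# classifier over firing rates can, in principle, predict it.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_trials, n_neurons = 400, 50

# Hypothetical baseline firing rates for each recorded neuron.
baseline = rng.uniform(1.0, 10.0, size=n_neurons)

# Label each trial: 1 = stimulus consciously perceived, 0 = missed.
perceived = rng.integers(0, 2, size=n_trials)

# Simulate spike counts; on "perceived" trials, a handful of
# concept-selective neurons fire more strongly.
rates = rng.poisson(baseline, size=(n_trials, n_neurons)).astype(float)
selective = rng.choice(n_neurons, size=5, replace=False)
rates[np.ix_(np.flatnonzero(perceived == 1), selective)] += 8.0

# Train a linear decoder and test it on held-out trials.
X_train, X_test, y_train, y_test = train_test_split(
    rates, perceived, test_size=0.25, random_state=0
)
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Awareness decoded from firing rates with accuracy {decoder.score(X_test, y_test):.2f}")
```

The point is only that nothing mystical is required for the prediction step: if awareness leaves a pattern in physical activity, a plain statistical model can latch onto it.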
Christof Koch even believes that consciousness is distinctly physical, that it can be described by existing neurological theories. This is corroborated by firsthand accounts from people with epilepsy. In one comment I read, someone who had suffered a strong seizure described how part of his brain was trying to hear while another part was talking, yet his conscious mind was confused as to why his mouth was speaking. The words were gibberish, and he tried to yell out for help, but nothing came out. Another patient, who had suffered a major heart attack, described regaining consciousness as being like a computer booting up. He could feel the different parts of his body waking up: he first saw the doctor's face and could tell that the person was talking, but had no way to process the words until that part of his brain woke up as well, and only then could he understand them.
More studies support this materialistic view. New research from Rockefeller University suggests that a uniquely human variant of the NOVA1 protein may have played a key role in the emergence of human speech. Scientists introduced this variant into mice and observed altered vocalizations, indicating a potential role in vocal communication. Further studies show that modern humans possess a set of 267 genes absent in Neanderthals, many of which are associated with self-awareness and self-control, and which may have contributed to the cognitive and behavioral differences observed between the two species.
The human spark
Perhaps focusing on consciousness is the wrong way to go about it, then. Maybe it's just a semantic discussion, and what makes humanity special is that inexplicable human spark. What makes humans seem so uniquely creative and unpredictable? I think the answer may lie in the incredible diversity of our inputs. Unlike current AI models, which are limited to a text prompt and their training data, humans experience a vast range of sensory and biochemical inputs:
- Visual, auditory, tactile, olfactory, and gustatory sensations
- Proprioceptive awareness of our body's position and movement
- Complex hormonal and neurotransmitter fluctuations
- Genetic predispositions
- Cumulative life experiences
- Emotional and physiological states
A human's reasoning can be influenced by something as subtle as blood sugar levels, current emotional state, or a barely remembered childhood experience. This rich, multifaceted input system creates the illusion of a unique spark: the ability to make unexpected connections, to be creative, to seemingly defy logical expectations. In the scientific paper "The Plastic Human Brain Cortex," researchers suggest that diverse sensory input directly contributes to our cognitive flexibility. By constantly reconfiguring neural pathways in response to new information, our brain creates unique, adaptive responses that aren't predetermined by genetics.
Similarly, it's been shown that artistic talent is very much rooted in how our brain is wired. Studies of twins have examined the genetic influence on artistic skills. Training, practice, and motivation are important for success in any field, but the results showed that identical twins, who share the same genetic makeup, often exhibit more similar levels of creativity than fraternal twins do, suggesting a hereditary component.
What the future holds
It's clear that AI is advancing at a rapid pace, and several improvements already in the works will allow AI models to make much better sense of the world around them:
- Robotic Embodiment: AI systems are increasingly being integrated with physical bodies, allowing for direct sensory interaction with the environment.
- Multimodal Learning: Modern AI models can now process multiple types of input: text, images, sound, and even tactile information (see the toy sketch after this list).
- Advanced Reasoning Models: New AI architectures are developing more sophisticated reasoning capabilities, moving beyond simple pattern matching.
- Integrated Sensory Systems: Emerging AI technologies are creating more complex, interconnected processing systems that mimic biological neural networks.
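To give a rough feel for the multimodal point above, here is the toy sketch referenced in the list: each modality is mapped into an embedding, the embeddings are concatenated, and a shared head reasons over the combined representation. The dimensions, class count, and layer layout are arbitrary placeholders, not any particular production architecture.

```python
# Toy multimodal fusion: hypothetical dimensions, not a real production model.
import torch
import torch.nn as nn

class TinyMultimodalNet(nn.Module):
    def __init__(self, text_dim=64, image_dim=128, audio_dim=32, hidden=64, n_classes=4):
        super().__init__()
        # One small encoder per modality maps raw features to a shared-size embedding.
        self.text_enc = nn.Linear(text_dim, hidden)
        self.image_enc = nn.Linear(image_dim, hidden)
        self.audio_enc = nn.Linear(audio_dim, hidden)
        # A fusion head reasons over the concatenated embeddings.
        self.fusion = nn.Sequential(
            nn.Linear(hidden * 3, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, text, image, audio):
        fused = torch.cat(
            [self.text_enc(text), self.image_enc(image), self.audio_enc(audio)],
            dim=-1,
        )
        return self.fusion(fused)

# One hypothetical batch of pre-extracted features for each modality.
model = TinyMultimodalNet()
text = torch.randn(8, 64)
image = torch.randn(8, 128)
audio = torch.randn(8, 32)
print(model(text, image, audio).shape)  # torch.Size([8, 4])
```

Real systems are far more elaborate (cross-attention, pretrained encoders, and so on), but the basic move, projecting different senses into a shared representation, is the same.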
These developments suggest that the gap between human and artificial intelligence is narrowing. The very characteristics we tend to consider uniquely human (creativity, unexpected reasoning, adaptive thinking) are becoming increasingly achievable by advanced AI systems. We already know that a large amount of compute paired with a large amount of training data gives us a model that can be surprisingly coherent, through a process no human fully understands. As we add more and more of these capabilities to such systems, they will keep surprising us with new, lifelike abilities.
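As a deliberately tiny illustration of that last point, the core objective behind those surprisingly coherent models is just "predict the next token given what came before." The sketch below does it by counting word bigrams over a throwaway sentence; real LLMs replace the counting with a neural network, billions of parameters, and vastly more text, which is where the surprising coherence comes from.

```python
# A deliberately tiny next-token model: count follower frequencies and sample.
# Real LLMs use neural networks and vastly more data, but the objective
# (predict the next token given context) is the same in spirit.
import random
from collections import Counter, defaultdict

corpus = "the brain builds a model of the world and the model talks back".split()

# Count how often each word follows each other word (a bigram table).
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def generate(start, length=8):
    word, output = start, [start]
    for _ in range(length):
        options = followers.get(word)
        if not options:
            break
        # Sample the next word proportionally to how often it followed this one.
        word = random.choices(list(options), weights=options.values())[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))
```

Scale that same simple objective up by many orders of magnitude and you get behavior nobody explicitly programmed.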
Ultimately, the question of AI consciousness may be more semantic than substantive. It depends entirely on how we define consciousness. If we view consciousness as a binary state (you either have it or you don't), then it seems to me that either humans and computers can both be conscious, or neither of us is. If there is no supernatural presence, if we truly are the sum of our parts, then any argument that AI models are faking intelligence, emotions, or consciousness, that at the end of the day they are nothing but dumb machines reasoning from their training data and input prompts, must apply to humans as well.