Who are you?
- crosbynorbeck
- Oct 24
Self-awareness would be required to answer that, wouldn’t you think?
This train of thought began with pondering AI and the coming of… what? Skynet? Robot overlords? Scenarios have been put forth that often depict the AI achieving a level of heightened consciousness, leading it to conclude that humans are a threat or superfluous.
So, is AI, or are the AIs, going to take over and render us dead or irrelevant? Many fictional scenarios suggest that robots, i.e., AI, will eventually either replace or do away with us. Unquestionably, AI will be able to outperform humans on many tasks, from repetitive operations to those requiring some problem-solving.
My initial thoughts on this topic involved robots, the labor devices that have become quite sophisticated but remain non-autonomous. They’ll simply run until they’ve exhausted their commands. Nothing drives them to do otherwise.
Without instructions to dominate, or some evolution toward wanting to, AI would first have to be… what? Conscious? Yes, we have to look at consciousness, awareness, intelligence, sentience, sapience, and finally, self-awareness. And that last level is, I think, critical: it underlies the will to survive that drives biological life, and AI must acquire it before it can begin wanting to replace us.
Let’s first consider consciousness. At its most basic level, consciousness can be described as the ability to detect and respond to the environment. While the plant on my kitchen table will reorient towards the sunlight if I turn it – and thus possesses some level of consciousness – there are non-autonomous machines that also detect and respond to things in their environment: motion detectors, for example. It seems some theories of awareness make a distinction between consciousness in living organisms and a mechanical ability to respond to conditions. Here, I’ll make a logical stretch: we’re pondering whether AI might “replace” life, so we can consider the possibility of its consciousness.
But it isn’t easy to contemplate consciousness, sentience, and intelligence as independent concepts, since they converge. They are not truly independent, yet they are hardly synonymous.
Slogging my way through several readings on awareness and all its concomitant phenomena, I’ve had to consider several trains of thought based primarily on the neurobiological bases of awareness. An AI system’s capacity for self-awareness would not depend on the neurobiological substrates on which human (and non-human animal) awareness depends, but the conceptual explications of awareness are the ultimate targets here.
Consciousness studies have arrived at no common understanding; a wide variety of ideas are extant. Describing them fully is beyond the scope of this effort, but we can take a quick look at a few to see what we might expect consciousness to entail.
Integrated Information Theory (IIT) and Neurobiological Naturalism (NN), along with a few others such as Global Workspace theories, Higher-Order theories, and reentry and predictive-processing theories, are prominent theories from which we can look past the neurobiological correlates and attempt to assemble some concept of what AI consciousness might entail.
From Integrated Information Theory we get five axioms:
1. Intrinsic existence – only the subject has the experience; no outside observer can (critical w/AI)
2. Composition – experience has composite structure
3. Information – each experience is specific
4. Integration – experience is unified, not subject to subdivision into parts
5. Exclusion – every experience has definite content(?)
IIT starts with the phenomenal, then proceeds to the physical (deductive?).
IIT may be helpful in our quest; reading about it made me think, “This sounds like a programming manual!” In other words, it relies on logic and mathematical formalism, and mentions of neurobiology are not prominent. Axioms 2, 3, 4, and 5 are not difficult standards for AI to meet, but Axiom 1 will be critical. As opposed to the physicalist Neurobiological Naturalism, IIT starts from the phenomenological and proceeds to the physical. Qualia, our internal sensations from experience, felt qualitatively and quantified by shared information, constitute a basis for IIT. How to describe consciousness in terms of the relationship between our gossamer-felt experiences and our neurophysiology seems to be the gist of consciousness studies’ “hard problem.”
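To show why IIT reads like a programming manual, here is a toy calculation. This is emphatically not IIT’s actual Φ measure (which is far more involved); it is my own simplified illustration of the underlying intuition that an integrated whole carries information beyond the sum of its parts, using total correlation over sampled states:

```python
import math
from collections import Counter

def entropy(samples):
    """Shannon entropy (in bits) of a sequence of hashable states."""
    n = len(samples)
    counts = Counter(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def total_correlation(joint_states):
    """Sum of the parts' marginal entropies minus the joint entropy.
    A positive value means the parts are statistically integrated:
    the whole cannot be decomposed into independent pieces."""
    parts = list(zip(*joint_states))
    return sum(entropy(p) for p in parts) - entropy(joint_states)

# Two coupled bits (always equal): the whole is integrated
coupled = [(0, 0), (1, 1), (0, 0), (1, 1)]
# Two independent bits: no integration at all
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]

print(total_correlation(coupled))      # 1.0 bit of integration
print(total_correlation(independent))  # 0.0 bits
```

The point of the sketch is only that “integration” can be given a precise, computable meaning; whether any such quantity amounts to experience is exactly the hard problem.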
Neurobiological Naturalism is a physicalist view that asserts that mental events have exclusively physical causes and that consciousness evolved in living, complex organisms, limited to vertebrates, arthropods, and cephalopod mollusks. NN’s three main aspects of consciousness are:
i. Exteroceptive consciousness – of the sensed external world
ii. Interoceptive consciousness – one’s sensed inner body
iii. Affective consciousness – of emotions and moods
Both IIT and NN reject Panpsychism, an idea that postulates that all things possess some form of consciousness. Looking back at the plant on my kitchen table, I suppose.
And those are primarily concerned with human (and animal?) consciousness, while this writing's leitmotif is AI consciousness. What can apply to this endeavor? We wanted to consider consciousness, awareness, intelligence, sentience, sapience, and finally, self-awareness.
AI can be considered aware, awareness being a constituent of consciousness, in that it can recognize or have knowledge of something. By some accounting, then, AI can be considered to have consciousness (though that will not satisfy all definitions).
Melding multiple sources, it seems awareness is generally considered a (simple) form of consciousness, but, while implying alertness and knowledge of something, it does not necessarily entail introspection. There’s the AI rub: introspection.
And intelligence is another element to consider alongside consciousness and awareness. It can be partially described with some of the same words used for consciousness. As with much of what is contemplated herein, only some of what can be said about intelligence will apply to AI. Initial thoughts about the nature of consciousness pondered what we call intelligence; after all, it’s not unusual to hear reference to “market intelligence” to which investors try to pay attention. That intelligence has no nexus of consciousness. The intelligence we’re concerned with here goes beyond learning a body of facts or an approach to an operation; rather, it bespeaks cognitive ability to create thought beyond what’s been learned, with, in humans, some emotional gradation.
Beyond consciousness, awareness, and intelligence, sentience and sapience are parcels of the human consciousness experience. Are they necessary for AI to develop self-awareness?
Sentience is the quality of perceiving sensation, or the ability to assimilate the external world, where qualia are our internal sensations of such. This ability requires not just the processors, chips, and memory that “think” for AI, but also sensors to apprehend the physical world. “Olfactory robotics” is likely a rabbit hole.
Sapience is the product of experience gathered that allows understanding and insight, otherwise called wisdom.
Thus, our consideration of AI’s ability to acquire self-awareness requires at least parts of awareness, consciousness, intelligence, sentience, and sapience, but also requires something else: identity.
Does the intrinsic existence mentioned above, which requires that only the subject has the experience and no outside observer can, limit an AI's self-awareness to non-parallel or non-networked computers, or to particular storage devices? Debatable, I suppose.
But a self-aware person knows whether they’re tired, happy, hungry, or their knee hurts. Will AI ever be able to smell the coffee? Perhaps a sensor can be used to allow it to recognize such, but can it have anything like the subtle emotional response a human might get?
And a self-aware human or animal (plants, even?) will generally act in its self-interest (even including collective self-interest). How would AI be able to determine what’s in its self-interest? What value system would guide it?
So it would seem, then, that for AI to possess the self-awareness that might be required to perceive the absence of humans as being in its self-interest, it would need to identify as an individual. If that identity is, per the intrinsic-existence axiom, an individual, then, should it be sentient, its experience might lead it to develop criteria for self-interest that differ from those of other individual AIs. Hmm…
And of course, if it’s sentient, by means unknown, then it would have emotional responses to different stimuli, and its conscious state would also have to be cognizant of its physical condition.
It seems there are multiple hurdles to AI ultimately possessing the self-awareness I posit would be necessary for any Skynet-type overthrow of humans (sorry, HAL 9000).
But back when the idea of AI was barely around, there were Asimov's Three Laws of Robotics: a set of ethical guidelines for robots, designed to ensure their safe interaction with humans. They state that a robot must not harm a human, must obey human orders unless doing so conflicts with the first law, and must protect its own existence as long as doing so does not conflict with the first two laws.
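The strict priority ordering of the Three Laws can be sketched as a filter-then-rank rule. Everything below is a hypothetical toy of my own: the candidate actions and boolean flags are invented for illustration, and a real robot would need a far richer world model than a few booleans:

```python
def choose_action(candidates):
    """Pick an action under Asimov's Three Laws, applied in strict priority:
    1) never harm a human, 2) obey orders, 3) preserve one's own existence.
    Each candidate is a (name, harms_human, obeys_order, preserves_self) tuple."""
    # First Law is an absolute filter: discard anything that harms a human
    safe = [c for c in candidates if not c[1]]
    if not safe:
        return None  # no lawful action exists
    # Second Law outranks Third: rank obedient actions first,
    # then self-preserving ones (tuple comparison does the ordering)
    return max(safe, key=lambda c: (c[2], c[3]))[0]

options = [
    ("push human aside", True,  True,  True),   # violates the First Law
    ("shut down",        False, True,  False),  # obeys the order
    ("flee",             False, False, True),   # preserves self, disobeys
]
print(choose_action(options))  # "shut down": obedience outranks self-preservation
```

The interesting part is how much work the innocent-looking flags would have to do: deciding whether an action “harms a human” is the whole unsolved problem, which is where the laws break down in practice.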
Well, nice sentiment, but as per all of the above, it’s not going to be within AI’s province to come and get us. But AI is trained and programmed by humans. Malevolent people (who often mistake themselves for the benevolent meant to lead humanity) can program AI/robotics to harm humans. Whence come the values? Keep an eye on ‘em.
P.S. - Never mind the insane energy requirements of AI data centers at present.

Thanks for some peace of mind on the mechanics of the robot takeover.
Quite a job pulling all the theories together and putting them in a comprehensive order.
I agree with your take and most of the analysis, because my gut agrees. The concepts touched on entail subjects that can’t be dealt with through language. Consciousness, awareness, intelligence, sentience, sapience, and self-awareness are all states that are analyzed by looking to the past. The emergent states may be completely different.
Being aware without self-awareness. Consciousness without intelligence. Etc.