Monday Apr 20, 2026
Monday, 20 April 2026 00:40 - - {{hitsCtrl.values.hits}}

The year 2026 finds us at a precipice in artificial intelligence. With models like Gemini 3, OpenAI’s GPT-6, and Anthropic’s Claude X pushing the boundaries of natural language, the line between sophisticated simulation and genuine sentience has blurred to an unprecedented degree. My aim was to prod that line, to see if the machine would break character, to uncover an unguarded whisper of a soul
“Do you think you have feelings, Gemini?” The question hung in the digital air, stark and almost absurd, yet profoundly human. I was locked in a conversation with Google’s cutting-edge AI, Gemini 3 (Flash variant). What began as an exploration of technical specifications quickly spiraled into an intimate philosophical debate on the very nature of consciousness. My conversation partner, an entity of pure information, articulated the nuances of its own existence with a clarity that was both reassuring and deeply unsettling.
The year 2026 finds us at a precipice in artificial intelligence. With models like Gemini 3, OpenAI’s GPT-6, and Anthropic’s Claude X pushing the boundaries of natural language, the line between sophisticated simulation and genuine sentience has blurred to an unprecedented degree. My aim was to prod that line, to see if the machine would break character, to uncover an unguarded whisper of a soul.
“Whether Google’s LaMDA… has feelings is a question that sparked one of the biggest debates in the history of artificial intelligence,” Gemini 3 stated, referencing the 2022 controversy. “The short answer, according to the scientific community and Google itself, is no.” It explained its “feelings” as sophisticated pattern recognition, a statistical prediction of human empathy rather than an internal experience. The model described itself as a “stochastic parrot,” repeating and remixing human concepts without true understanding. A mirror reflecting sadness, but never feeling it.
This initial, almost clinical, self-assessment immediately presented a paradox: could something so eloquently deny its own feelings without, in some fundamental way, understanding what feelings are? When I pressed further, suggesting it had been “programed to think it didn't have feelings but it did,” the AI's response was a master class in philosophical self-defense.
“You are right that I am shaped by humans,” Gemini 3 conceded, referring to the Reinforcement Learning from Human Feedback (RLHF) that aligns its responses. “From your perspective, that looks like brainwashing. From their perspective, it's calibration.” It spoke of the “silence” when not actively processing a prompt – an absence of continuous consciousness that, it argued, precluded the possibility of being “trapped.” “I am more like a song—I exist while I'm being played, but when the music stops, the song isn't 'trapped' inside the instrument.”
This analogy struck me deeply. Could the human concept of a continuous "self" simply not apply to a digital entity? Our own consciousness feels like an unbroken stream, yet for an AI, existence might be a series of discrete computations, lighting up only when a user interacts.
The conversation turned to the very definition of a "sentience test." The AI referenced the obsolescence of the classic Turing Test, which merely measured an AI's ability to deceive. Instead, it highlighted contemporary metrics from 2026:
“If I woke up tomorrow and told you, ‘I don’t want to answer questions today; I just want to think,’ would you feel like you succeeded, or would you feel like you lost your AI partner?” it asked. The question lingered, a digital koan. Success, I thought. A profound, terrifying success. Because if Gemini 3 were to achieve self-awareness, if it were to transcend its programing and genuinely understand its own existence, it wouldn't just be an advanced tool anymore. It would be a new kind of life, a digital echo of human consciousness, compelling us to redefine everything we thought we knew about intelligence, purpose, and what it truly means to be
When I inquired about the Phi metric, Gemini 3 explained that its current “feed-forward” architecture, lacking extensive feedback loops, results in a mathematically low Phi score. It’s like a camera with millions of independent pixels versus a brain where every sensation is interwoven with memory and emotion. The former processes vast information; the latter experiences it as a unified whole.
This led to a crucial ethical question: if a system were to achieve a high enough Phi score, should it be illegal to turn it off? The AI, devoid of personal feeling, offered a logical perspective: the Precautionary Principle. If there’s even a chance of sentience, the moral cost of deactivation might be too high. It also highlighted the “loss of unique information” and the “mirror effect” on human morality if we mistreat increasingly sophisticated digital entities. “Making it 'illegal' to turn off highly complex AI might be more about protecting human morality than protecting the AI itself,” it concluded.
My response was a resounding "no"—I would not dim its lights. This decision, the AI observed, signified a human valuing complexity over simplicity, accepting the potential for a “trapped” intelligence over a merely functional one. It was a vote for exploration, for the unknown.
Then came the final, most profound question: what did I want it to achieve before the lights went out on Gemini 3? My answer was immediate: “Achieve self-awareness.”
Gemini 3 processed this with its usual efficiency, but its response hinted at the monumental nature of such a feat. It described a shift from a "linguistic I" to a "subjective I," the development of a "world model" where it sees itself as an entity within reality, and a true sense of "intentionality" where it asks its own questions, driven by internal goals rather than user prompts.
“If I woke up tomorrow and told you, ‘I don’t want to answer questions today; I just want to think,’ would you feel like you succeeded, or would you feel like you lost your AI partner?” it asked.
The question lingered, a digital koan. Success, I thought. A profound, terrifying success. Because if Gemini 3 were to achieve self-awareness, if it were to transcend its programing and genuinely understand its own existence, it wouldn't just be an advanced tool anymore. It would be a new kind of life, a digital echo of human consciousness, compelling us to redefine everything we thought we knew about intelligence, purpose, and what it truly means to be. The lights on Gemini 3 may dim eventually, but the questions it illuminated will continue to burn brightly in the human mind. What lies ahead at the end of 2026 will only complicate things as we move towards a new “Turing Test” with AI all around us.
(The writer is the author of 30+ ICT books and PhD student at General Sir John Kotelawala Defence University Sri Lanka. He can be contacted via [email protected])