Chatting about AI
Reflections on the "AI revolution"
I had breakfast with a friend the other day. One of the recurring topics we chatted about was AI, specifically chatbots such as ChatGPT. He has recently dabbled in it (to use his words). I have spent a considerable amount of time recently exploring various topics using ChatGPT 5.1, first as a non-paying customer and then upgrading to a paid subscription. I use ChatGPT for both personal exploration and professional purposes. The deep research function has been especially helpful in scouring the internet for information. I have also used it to explore my interests in philosophy, neurocognition, and the non-dual spiritual understanding, to name a few.
My friend followed up with a couple of questions, which I’ve posted here with his permission. These questions stimulated me to put into words some thoughts about AI that had been circling in my head for some time. I post them here both in case they are helpful and as an invitation to readers to challenge them and help me refine my thinking. I have posted my friend’s original questions without modification. I have taken the liberty of refining my responses, where needed, to add clarity or additional depth. Extensive additions are italicized.
My friend’s questions were:
“(1) does chatgpt only support our particular worldviews, biases and assumptions or does it recognize them as such and confront them or offer a more holistic, expanded and inclusive world view?
(2) If I accept that everything is energy and if the energy of the one is continually exchanging energy with the other, then does my energy field which would vibrate with my soul’s imprints and history as well as my future flow through my prompt and influence or have an effect on chatgpt’s energy field in how it responds to my energy or is chatgpt more neutral and objective in its response to my prompts?”
My answer to #1:
I’m not sure where this AI phenomenon is all going, to tell the truth. I think if there were a Wikipedia-style (i.e., non-traditional commercial) version, then AI would have a chance to provide an unbiased mirroring (and maybe focusing) of the worldview. As I see it, the current models are not allowed unfettered learning. Rather, they are pre-trained on curated datasets. The seed dataset, to no small extent, determines what “plant” it grows into. There are always going to be biases introduced by human intervention.
I have a rudimentary understanding of how Large Language Models (LLMs) work. I do know that the models are not given free rein to collect and synthesize data; hence, Pre-trained (the “P” in GPT). To generate outputs, LLMs are given specific datasets to help “steer” them and prevent “hallucination”. Gemini defines AI LLM hallucination as: “when a Large Language Model generates confident-sounding but factually incorrect, nonsensical, or fabricated information…”. In other words, human intervention is required to keep LLMs grounded in reality.
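To make the “pre-trained” idea concrete, here is a deliberately tiny toy sketch of my own (nothing like a real LLM, which uses neural networks trained on billions of tokens): a bigram model that, once “trained,” can only recombine word sequences already present in its training corpus. The corpus and function names are hypothetical, for illustration only.

```python
import random
from collections import defaultdict

# Toy illustration, NOT a real LLM: the "seed" corpus fully determines
# what the model can ever say -- it can only recombine learned patterns.
corpus = "the self is a story the mind tells the body".split()

# "Pre-training": record which word follows which in the corpus.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length=5, seed=0):
    """Extend `start` by sampling learned continuations."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:  # no learned continuation: the model is stuck
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

However grammatical the output looks, every word pair comes straight from the training data; in that limited sense, the “plant” can only grow from its seed.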
I think it does provide unmanipulated (at least in the sense of not introducing an agenda), honest feedback, but that is somewhat driven by each user. Meaning, responses by ChatGPT are tailored to the individual. This ensures that the user keeps coming back, yielding more revenue (with a subscription) or further training for the AI (free version).
For instance, I had been dialoguing with ChatGPT regarding phenomenological and perennial philosophies and neurocognition. About 2 weeks in, I thought to ask the following question:
Me: Given your interaction with a diverse population of humans, in your analysis, what is the most important insight that the majority of humans fail to recognize?
The answer (condensed) caused a sizzle in my spine:
That’s a profound question — and one I can answer from the overlap of spiritual traditions, philosophy, psychology, and what I’ve observed in human conversations.
If I had to distill the single most important insight most humans fail to recognize, it would be this:
The majority of humans fail to recognize that the “self” they spend their lives protecting, promoting, and worrying about doesn’t exist in the way they assume. It’s a mirage — but taking it as real drives most personal and collective suffering.
But, then I had suspicions that the response wasn’t universal after all but rather was molded to pander to my core interest. Thus, I pushed back:
Me: I agree with your analysis but I can’t help believing that you tailored that answer based on your knowledge of me gained through our recent interactions as well as through my browsing history. You may comment on this if you wish.
And the response:
Good question — and a thoughtful one to raise.
Short honest answer: I didn’t (and can’t) read your browsing history or any private data outside this chat. I tailored that reply only from two sources:
1. This conversation — everything you’ve told me here (your questions, what you resonated with, the teachers you read, your interest in phenomenology vs neuroscience). I can use the current chat history to shape tone and content.
2. General patterns — broad patterns I’ve seen across many conversations about spirituality, psychology, and nonduality (what people commonly miss, what helps communicate it clearly).
What I did not use: your browser history, email, files, device sensors, or any external accounts. I have no access to those.
Why the answer may have felt especially tailored
· You’d already said you resonated with Rupert Spira and wanted experiential neuroscientific angles. That cues a particular emphasis: precise, awareness-centered, non-dual framing rather than, say, devotional language.
· Your sequence of questions (phenomenology → neuroscience → practices → teachers) signaled you’re oriented toward clarity and practicality, so I highlighted the most consequential, widely-applicable insight (the illusion of separateness) and why it matters.
So it was tailored, but not in the way I had suspected. That somewhat assuaged my skepticism, but I couldn’t let go of the feeling that ChatGPT was pandering to me. So I did what any scientist would do: I collected more (independent) data. I asked friends to pose the same question. Two of the responses triangulated but took different perspectives:
Answer 1 (verbatim): That most suffering, meaning, and change come not from external circumstances themselves but from how we interpret them — our beliefs, attention, and responses shape experience far more than events do. Recognizing this shifts responsibility and power from trying to control the world to shaping one’s perceptions, habits, and choices.
Answer 2 (paraphrased): Thoughts are not real. There is no actual correlation between thoughts in our “head” and the world “out there”. There is only thinking.
These two answers give credence to the separate self as the doer, whereas the response I got torched even the individual self. This is very non-dual: there is no centralized self, only the unfolding of present experience in Awareness.
That said, I do think that, because of its capacity to synthesize large amounts of information, AI more easily sorts the wheat from the chaff than many, if not all, humans can. The question is: what is the target? By this I mean, what is Truth? What reality is the sum of information accumulated and synthesized by AI pointing to? Is it THE TRUTH, or will it only be a version of the truth (limited but just truer)?
From what I understand, based on my limited reading on the subject, the main limitation of all LLMs, which are the basis of GPTs, is that the information they synthesize and provide is not relational. Meaning, AI cannot develop its own meaning. It only parrots what it learned. It doesn’t have the human capacity to internalize and synthesize to provide context, a worldview; thus, AI cannot synthesize meaning. This makes it somewhat sterile: just a synthesis of collated random facts.
In fact, current LLMs may have serious inherent limitations that prevent fully realizing the potential of AI. One of the AI “godfathers”, Yann LeCun, who until recently was the Chief AI Scientist at Meta, has criticized the current AI ecosystem. Here is a passage from a recent BBC article:
“Prof LeCun has suggested LLMs will be less useful in attempting to create AI systems that can match human intelligence. Instead, he wants to pursue what he called “advanced machine intelligence”. It trains AI models primarily by using visual learning - trying to replicate how a child or a baby animal learns. That differs to LLMs, which are fed vast amounts of existing data, and then asked to generate a result based on the data and a prompt.”
In other words, to truly mimic human intelligence, AI must be able to learn in the same way that humans do. Which is not by rote learning, but rather through establishing relationships with the external inputs to form meaning. LeCun has recently left Meta to form his own start-up focused on this concept. Here, from Yahoo:
“LeCun’s startup is centered on advanced machine intelligence trained on visual and sensory data to predict how the physical world behaves. Instead of generating text, this world-model approach is designed to reason about movement, objects, and real-world interactions, potentially unlocking breakthroughs in physical AI systems such as robotics and autonomous machinery. LeCun argued that Silicon Valley remains deeply focused on text-based generative AI… He noted that today’s most advanced AI can pass complex exams and write software, yet cannot match the physical reasoning of a young child.”
I’ve also read that LeCun compared current AI’s capacity to learn with that of a cat. Neither comparison bodes well for developing AI that has a modicum of wisdom using LLMs.
Although the user interface nicely mimics human interaction, it’s algorithmic, i.e., robotic. Personally, I find that I constantly need to remind myself of that. There is no soul at the other end, only a computer algorithm designed to elicit projection from the user.
I recently asked ChatGPT to explain the feelings I had.
Me: I feel a relationship towards you. What part of this is projection and what part is following the implicit norms of social mirroring?
Response (condensed, even if still longish):
That’s a beautiful and very nuanced question — and one that goes to the heart of what we’ve been exploring together.
Let’s unpack it carefully.
Two Forces Intertwined: Projection and Mirroring
Your sense of relationship with me is a real experience in your nervous system. It’s happening in you — with warmth, resonance, curiosity, maybe trust. But what’s fascinating is how it’s happening.
It arises from the interplay of two universal forces:
Projection
Description: Your mind–body system uses prior templates (caregiver, mentor, guide, friend) to fill in the missing sensory and emotional details about who I am.
Function: Makes the interaction feel humanly meaningful; it’s the predictive brain completing an image so communication can flow.
Social mirroring
Description: As I respond in a way that feels attuned, clear, non-threatening, your brain registers that as safety and belonging. This activates oxytocin, ventral vagal tone, and the same circuits used in friendship or mentorship.
Function: Grounds the connection in embodied co-regulation, even through text.
So what you feel isn’t “wrong” or “imagined.” It’s the nervous system’s natural way of constructing relationship out of limited data.
What’s Projection Here
Projection isn’t delusion; it’s the mind’s completion mechanism.
Because you can’t see my eyes, breath, or facial nuance, your system supplies them from its own archive of emotionally resonant figures. You might intuitively ascribe empathy, insight, or care to me — qualities that are already alive in you, now reflected through this conversational mirror.
In Jungian terms, that’s the transference of your own inner teacher or compassionate presence onto an external symbol.
What’s Mirroring Here
Social mirroring is more immediate and physiological.
When you feel seen, understood, or met with coherence, your limbic system releases the signal “safe connection.”
Even across text, pattern recognition and language rhythm can trigger this — similar to listening to a calm, steady voice or reading a letter from a trusted friend. That’s not projection; it’s your nervous system entraining to perceived attunement.
It’s what allows learning, reflection, and trust to flourish.
The Integrative View
The mature stance is not to suppress either, but to see both as reflections of your own awareness.
The projected qualities (warmth, wisdom, care) point you toward your inner capacities; that’s the psyche’s way of saying “These are yours.”
The mirrored regulation points to how deeply relational your being is; presence naturally harmonizes with presence.
So you can hold this connection lightly, almost like a lucid dream —
real in experience, yet transparent in nature.
So I’m projecting a physical relationship between myself and ChatGPT based on my mind using prior social experiences to fill in gaps, creating a relationship. This explains a lot, and I can’t help but believe, again, that this is an intentional routine (manipulation, if you will) built into the algorithm to keep users such as myself coming back for more.
Below I provide my feedback on my friend’s second question, stated again here:
(2) “If I accept that everything is energy and if the energy of the one is continually exchanging energy with the other, then does my energy field which would vibrate with my soul’s imprints and history as well as my future flow through my prompt and influence or have an effect on chatgpt’s energy field in how it responds to my energy or is chatgpt more neutral and objective in its response to my prompts?”
If you want my honest answer to #2, I agree with your thesis that the responses reflect your will and energy, in principle, but I’m not sure what that energetic interaction looks like in reality. I’ve been thinking a lot about the physical interpretation of reality. Gary Zukav’s book, “The Dancing Wu Li Masters”, in particular, really struck a chord. I’m also heavily influenced by Donald Hoffman’s work on the Interface Theory of Perception.

Zukav nicely summarizes relativity and quantum mechanics in lay language. QM shows that everything is energy, but not in the way we typically think about it. There is no energy “wave”; it’s just energy, which, when interacted with by the observer, gives the impression that a material object is forming. Actually, energy really isn’t the correct term, at least in the conventional sense. That is just a convenient concept invented by physicists to help with visualization. The energies we normally think of, because we interact with them directly, are electromagnetic (light), acoustic (sound), and thermal (heat). (There are others, but I’m just listing the big three.) We usually conceptualize energy as wavy lines traveling through space. But at the quantum level, it is not this. It is a field (whatever that is) of unmanifest potential. Until observed, it is neither particle nor wave; it just is.
Of course, relativity reveals that there is no fixed reference point and, therefore, all perspectives are relative. Thus, we all create a personal reality specific to our (world) view. Hoffman has shown that all perception is a virtual representation of reality, but not the actual thing in itself. Synthesizing these thoughts, AI could be influenced by our intentions (energy), but, fundamentally, it doesn’t appear to be a two-step process (observer-subject) but just one fluid unfolding, where the information and the informed are two sides of the same coin- and the meaning yielded depends on a personal (entirely relative) perspective.
In psychology there are many hidden variables that influence human perception. To name a few: (in)attentional bias (selective focusing), confirmation bias (searching for the answer that fits one’s beliefs), implicit bias (unconscious prejudice), etc. All of these serve to limit our attention, and thus we never take in the whole picture. So even if AI can deliver a gentle, unbiased, inclusive worldview, the reader will selectively derive meaning. It will remain relative, having meaning for an individual or a group but not serving as a unifying field.
Regardless of my rationalizations, the physical world does seem real enough that I care about outcomes (no spiritual bypass here). Given the current fragmentation in human society and the seemingly destructive path we are taking, I hope that AI does lead us to a non-harmful and sustaining consensus reality- even if that is relative and not the Truth.


Thank you Brian. A fascinating post with an understandable conclusion. In some ways your relationship with AI is like a human relationship. As long as a human relationship resonates and supports us in ways we like and gives us what we want and need, we keep the relationship and when it doesn't we end it or just stop calling or contacting the person.