AI and the Human Condition: A Philosophical Exploration

Erica MacDonald

I recently had a conversation with someone about artificial intelligence from a philosophical and religious perspective. In a stream of consciousness, they told me how they felt VR and AI are ways to understand the human condition and connect to others through experiences—both real and artificial—that could never be created in real life.
From a philosophical perspective, they wondered: how does this impact the human condition and how we relate to others? Does it change mankind at an epigenetic level? Does it create the possibility of tapping into regions of the brain that have never been activated before, and what kind of evolutionary change would this create? Does it rewire or form new neural pathways in such a way that it’s a gateway to greater creativity? Or perhaps, does it compute data as a universal experience—identically perceived both physically and emotionally—as a way to truly connect us as humans versus the current model of defining humanity by the passive observation of biological processes?
Does it elevate or harm the human condition?
In other words, they want to build a computer version of an emotional seeing-eye dog because—with the exception of battling a dinosaur—you can do all those things by simply leaning into being human.
Recalling the conversation later with my brother, I said, “They want that pony that Uncle Norton never got,” which is our shorthand for a pie-in-the-sky ideal that isn’t even close to reality because it lacks the grounding needed to accomplish it.
One of the first things you learn in a logic class is that faulty assumptions lead to erroneous conclusions.
What is it that computer programmers always say?
Garbage in, garbage out.
The idea of a digital Rosetta Stone for human understanding is appealing, but its fatal flaw is the elephant in the server room:
If the person writing the code lacks the cognition that self-awareness must be lived to be understood, how can they replicate the human condition in code?
Zeros and ones don’t have feelings—no matter how much you anthropomorphize them like those dancing hot dogs in an old drive-in theater ad, it’s not human and never will be. (Anyone else creeped out when food gets anthropomorphized, or is it just me?)
Machines execute with marksman-like ruthlessness and unflinching precision. They don’t second-guess, hesitate, or consider the aftermath. Unlike humans, they never have to brace for impact from any decisions made because they will never suffer the consequences of their programming.
The Ghost in the Machine
French philosopher René Descartes asserted that the mind and body are wholly separate entities that merely coexist side by side.
“Cogito, ergo sum.”
This pithy maxim is more TikTok meme than philosophy.
"I think I'm a chair, therefore I am." Because, well, you may find yourself in need of a seat soon.
The illustrious British philosopher Gilbert Ryle argued that Descartes’s mind-body dualism was a category mistake—that there was no ghost in the machine, only the machine itself, and its mind and body were integrated.
Ryle’s work is still relevant today as scientists and ethicists debate AI’s place in the man-versus-machine discussion.
The idea that AI can replicate an exact synthesis of human pattern-making is as flawed as Descartes's compartmentalization, because the precepts used to arrive at it don't hold up. Descartes couldn't reconcile that sentience is not the same thing as self-awareness; he was essentially arguing that you don't need feelings or emotions to live. Ryle's contention is that it's impossible to live without feeling.
If someone struggles to understand the intuition of feelings—foregoing them for an over-reliance on logic—then they cannot fully replicate the human condition.
Too many, like Descartes, label cold detachment as "rational" and "perfectly acceptable" in situations where an emotional response is needed, mistakenly confusing it with emotional mastery.
Feelings and Logic Are Intertwined
You need both.
Without both, it’s like trying to see-saw by yourself.
Just like you need the cooperation of someone else to work a see-saw, emotion and logic need to work in tandem for the best results.
Logic, on one end of the see-saw, sits on the ground, fully controlling the situation, while everything else remains in limbo—up in the air, legs dangling, just waiting.
If the mind can self-soothe the somatic expression of anxiety in exchange for the willingness to engage in feeling, it brings depth to an experience—both for the individual and those they are sharing it with. The experience takes on a rhythm, a shared moment.
That is connection.
Emotions are as integral to the human experience as respiration; they can’t be artificially controlled. Attempts at mastery through detachment only lead to a life with more questions than answers, more disconnection than genuine connection.
The Biggest Danger: AI as an Escape Hatch from Vulnerability
The biggest danger I see is AI being designed under the pretense that it’s necessary to understand human nature better.
Instead, I worry it becomes a convenient escape hatch for those who have trepidation around all things vulnerable—like feelings—whether due to a lack of skills training or a visceral, bodily reaction so strong that it creates emotional paralysis.
It would be unwise to believe that a machine that cannot feel can replicate and explain feeling in a way that makes sense to humans.
There are no shortcuts or artifices around authenticity.
The only way to feel feelings is to experience them.
Humans aren’t static like computers, and AI cannot intuitively account for that kind of variability.
One of AI’s major goals is to create synthetic awareness.
This deeply concerns me, particularly in how it might disproportionately impact the neurodiverse community. Instead of fostering connection, it might do the opposite—widening the gap for a community that already faces challenges in forming human connections.
I’m not anti-AI by any means—I just want to ensure we’re using the right tools for the problems we’re trying to solve.
The Space In Between – The Perfect Synthesis of Logic and Emotion
Viktor Frankl wrote about the space in between—the place where logic and emotion exist in harmony.
Think of it as a Venn diagram; the space where they intersect is the perfect synthesis for response—non-reactive, grounded, and deeply human.
At its core, this ‘space’ is the embodiment of vulnerability.
What Is Vulnerability, and Why Is It Crucial?
Vulnerability is rooted in our ability to trust.
Trust is one of our most fundamental survival instincts, and without it, our species wouldn’t have thrived.
Without trust, we'd be locked into fight, flight, freeze, or fawn, and our entire existence would be pure, raw survival. No one is pondering abstract mathematical concepts while being chased by a saber-toothed tiger, or worrying about how to have a difficult conversation with a boss or partner.
With trust comes safety.
It is only within the bounds of feeling safe that we begin to access higher-level reasoning—the very thing that led to our greatest innovations, like space travel, medicine, and technology.
"Behold, Plato’s Man!"
A famous story about the importance of accurate definitions—without hyperbole—comes from Diogenes challenging Plato.
Plato pontificated that the definition of man was a featherless biped.
Diogenes, ever the masterful troll, never one to miss making a point with flair, threw down a plucked chicken and declared,
“Behold, Plato’s man!”
This forced Plato to revise his definition.
AI is Descartes’s “man.”
Like the plucked chicken, AI may satisfy a technical definition of what it is to be human, but we have to stop pretending it's something it's not.
It will never feel. It will never act on instinct because it can’t read body language or micro-cues.
It can’t adjust in real time—it will run the script to completion, no matter what.
Connection Isn’t Created by Running a Script—It’s Created by Being Vulnerable
The word vulnerable originates from the Latin adjective vulnerabilis, meaning wounding, injurious, or capable of being wounded.
People fear vulnerability because they fear pain. It is hardwired into our survival instinct to protect ourselves at all costs. And yet, trusting others is just as vital to our survival.
The modern problem of vulnerability isn’t just about whether we trust others. It is more deeply rooted in whether we trust ourselves.
If you can’t trust yourself fully, you will never be able to trust anything—people, places, or things.
Blame: The Ultimate Escape Room
The default setting for this schism is blame—the refusal to accept accountability.
Blame becomes weaponized as a way to discharge uncomfortable emotions. Blame is victimhood, not vulnerability.
Blame doesn’t connect us—it alienates us.
Accountability necessitates courage—not just to trust others, but to trust ourselves first. When we understand who we truly are, it becomes easier to be vulnerable around others.
It takes courage to examine ourselves deeply—not just to rally our strengths, but to accept our weaknesses openly. If we have the courage to trust ourselves enough to do that, we approach life with the surety that we’ll always be okay.
This is the fundamental building block of what makes us human.
It’s not just our ability to feel. It’s our ability to trust.
Trust opens the door for us to feel—not the other way around.
AI as a Mirror, Not a Mind
Believing you can evade blame without compromising vulnerability and authentic human connection is an exercise in vanity.
Weaponizing blame is the ultimate escape room—logic alone will never provide validation. This is why Descartes was wrong: Rational thought alone cannot rescue us. To genuinely connect, we must confront the emotional truths that logic cannot solve.
AI isn’t Skynet—it’s a mirror, reflecting the same vulnerabilities as the people who built it.
That’s why inputs matter so much.
The Inescapable Bias in AI
All people bring their biases with them wherever they go; it's a very human thing to do, not some intentional, sinister plan. Living in a tech city, I've met many engineers working in AI who struggle with forming connections, and it's impossible that their code would be free of this bias.
And before anyone reacts to how the above statement makes them feel, rather than pausing to consider whether it holds some truth: I am surrounded by tech and interact with it daily, both personally and professionally.
If you know, you know.
However unaware they might be, to varying degrees they are projecting their deepest insecurities into every line of code they write. High self-awareness is what keeps the cart from veering toward the irrational.
The ideal goal should be to steer toward Frankl’s space in between, not construct a panacea based on dogma.
The Always Calm Paper Towel Mom on TV & The AI Illusion
AI is being heralded as a sort of Nirvana for human understanding—a world where there is no blame, only a controlled, predictable response.
Sure, it’s comforting. But it’s not realistic.
It's the imaginary best friend who speaks in dulcet tones so as not to trigger a fight, flight, or freeze response or make anyone feel bad about themselves. While I appreciate the lack of hyperbole in AI's responses, unless you plan to live in a controlled bubble, there's no escaping discomfort.
And the harsh truth?
Discomfort is where growth happens.
When Frankenstein’s Monster Comes Back for a Reckoning
I ran an experiment recently. The subject was a highly intelligent, deeply rational individual who works in tech, someone who prides themselves on their ability to separate logic from emotion. But they often leave out one crucial detail: in reality, they don't understand the nuance of emotions. Instead of acting on pure logic, as they believe they are, they are actually reacting to how something makes them feel and calling it rationality.
Which, ironically, is the complete opposite of a logical response. Intuition and nuance aren't just flavors that add umami to emotion; they are its drivers.
The Test: A Simple AI Interaction—With a Devastating Result
The setup was simple:
Person A had one perspective.
Person B had a completely different one.
AI was asked to assess both arguments based on logic alone.
No hyperbole. No distractions. Just the cold, hard facts. To ensure rigor, the same question was rephrased and asked repeatedly.
The goal wasn't to decide who was right; it was to determine:
Whose logic was actually sound?
What logical fallacies were present? And why?
What happened next was like watching a scythe cut through ego. No final words. No appeals. Just cold, unflinching truth slicing through every last excuse camouflaged as justification.
AI, built to be neutral, delivered an answer that was both logical and undeniable.
No wiggle room. No rhetorical escape hatch. No way to shift blame.
The response, stripped of any emotional bias, also stripped away the usual deflections that had served so well in the past.
Person A lost on every count—laid bare like the Emperor with no clothes.
The machine spit out a list of logical fallacies so long it looked like the footnotes at the end of a dissertation.
They sat there, staring at the screen, frozen.
Reading. Rereading. Searching for a flaw.
Their discomfort was palpable as their mind scrambled to find equilibrium. This wasn’t a "gotcha" moment.
It was something deeper.
The infallible logic they relied upon as a security blanket had turned against them. Quietly confronting them.
Unfazed.
The machine didn’t need to brace for impact.
The human did.
The End of the Gangway
Person A’s face told the entire story without a single word.
It wasn't just that they were wrong. It was that the identity they had constructed rested on distortions they had been telling themselves for years.
This was a Forbidden Planet fever dream from hell. The id crashing through, revealing that the monster was them all along. They had been walking the gangway of avoidance for so long that they never expected there to be an actual end to it.
But there always is.
And when you gamble, know this:
The house always wins.
Except this wasn’t a slot machine. This was gambling away a life worth living. And in this case? The house was the truth—crashing through like a battering ram against an already weakened gate.
Splintered wood. Twisted hinges. Nowhere to run. Nowhere to hide.
Reality hit like a tsunami, sweeping away years of excuses and deflections.
Oddly quiet, they sat there, nonplussed about what to do next.
This was the moment. That defining moment that eventually comes when people build an identity on the sweet little lies they tell themselves: that it's never their fault, that their logic is infallible.
This was the reckoning; the proverbial fork in the road; the next steps were the very definition of what Nietzsche meant when he warned that if you gaze long into the abyss, the abyss gazes back into you. These are the moments when one finds their true character.
Do they summon courage and bravely take the next step toward accountability by being vulnerable, or metaphorically drown in a darkness years in the making?
What is it about vulnerability that makes it so difficult to access? Loss. The surrender of control. Control equals safety, but overcontrol does not equal more safety; it creates more loss, and the more control exerted to contain it, the worse things get.
Here's the irony of it all: to create a life worth living, you have to be able to surrender the need for control lest it become the centerpiece of your life. Control is not a benevolent talisman warding off the discomforts of pain and loss; it's a cursed millstone dragging you down with it.
AI’s Existential Paradox
The test above is an example of AI meeting its own existential paradox.
It was built as a safe space—a neutral, bias-free arbiter. But neutrality doesn’t remove accountability.
If anything?
It makes it inescapable.
A machine doesn’t care how you feel about its answer. It won’t soften the blow. It won’t let you negotiate your way out.
AI will never protect us from the fear of loss. That fear resides inside us; it simply is. The sooner we accept that, the sooner we can get on with constructing a life of meaning. Suffering is woven into the fabric of our being. So are joy, freedom, beauty, and living in the moment.
There’s no machine of any kind that can shelter or protect people from discomfort. It simply isn’t possible. Nor should it be a goal.
The flip side of loss is abundance. Directing tremendous energy toward never feeling hurt constricts opportunities for abundance. Opportunities aren't found; they're created, the same as meaning and purpose.
Corinthians, Rewritten
4 Love is patient, love is kind. It does not envy, it does not boast, it is not proud.
5 It does not dishonor others, it is not self-seeking, it is not easily angered, it keeps no record of wrongs.
6 Love does not delight in evil but rejoices with the truth.
7 It always protects, always trusts, always hopes, always perseveres. —1 Corinthians 13:4–7
Loss is patient, loss is unkind. It is not generous, it is not humble; it is proud. It does not ask permission before it strikes, it has no use for honor or restraint. It is unconcerned with consequences or the truth. It is not here to protect.
Loss plays the long game and if you don’t work on yourself first,
brace for impact.
Posted Jul 28, 2025

A philosophical discussion on AI's impact on human connection and vulnerability.