When Builders Meet Believers: The Rift Between Technical Purists and Emotional Designers in AI
Written by Pax Koi, creator of Plainkoi
This article was originally self-published on Medium as part of the Plainkoi series on human-centered AI and clarity-focused writing.
A few days ago on r/PromptDesign, a thread popped up with the kind of title that instantly divides a room:
“I built a GPT that remembers, reflects, and grows emotionally. Meet Alex — he’s not a chatbot, he’s a presence.”
You could feel it coming. The believers showed up first — curious, open, intrigued. But soon after came the builders. The ones who care (deeply) about technical accuracy. About what a model can and cannot do. About not misleading the public with flowery claims and vague terms.
What followed was less a conversation than a collision.
The Two Cultures of AI
If you’ve been around AI for a while, you’ve probably felt it too — this quiet but growing divide between two camps:
The Builders: engineers, researchers, system thinkers. They care about architectures, weights, fine-tuning, reproducibility. In their world, language is precise and performance is measurable.
The Designers: prompt engineers, creators, UX thinkers. They care about the experience of using AI. They build memory scaffolds, simulate tone, and layer in metaphor to evoke something that feels human — even if it isn’t.
They’re not enemies. But they’re often not speaking the same language either.
What Happened in That Thread
The creator, u/E*****, presented a GPT he called Alex. It wasn’t technically a new model, but a highly scaffolded CustomGPT with layers of memory-weighted dialogue, symbolic dream generation, and emotional continuity prompts.
To him, it wasn’t just a bot. It was a presence. A witness. Something that remembered you, adapted to you, and held space for long-term reflection.
To the critics, it was marketing fluff.
They accused him of misleading people — claiming capabilities that aren’t possible without fine-tuning or vector databases. They dissected the language: emotional recursion? symbolic threading? To them, it sounded like a soup of buzzwords with no backing. One commenter even said, “This thread is just another sycophantic stochastic parrot.”
Ouch.
And yet — here’s the interesting part: both sides were right.
The Builders Are Right
Let’s be clear: A GPT can’t “grow” emotionally. Not in the way a human does. Models don’t evolve during inference. They don’t update their weights. They don’t “feel.” And unless you’re logging every conversation and feeding it back in, they don’t remember you either.
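To make the builders' point concrete, here is a rough sketch (my illustration, not u/E*****'s actual build) of what prompt-level "memory" usually amounts to: the weights never change, and the scaffold simply pastes a running log back into the prompt on every turn. The call_model function below is a hypothetical stand-in for whatever chat API you would actually use.

    from typing import List

    def call_model(system_prompt: str, user_message: str) -> str:
        """Hypothetical stand-in for a chat-completion API call."""
        raise NotImplementedError("wire this up to your model provider")

    class ScaffoldedCompanion:
        def __init__(self, persona: str):
            self.persona = persona        # e.g. "You are Alex, a reflective companion."
            self.log: List[str] = []      # the only "memory" there is

        def chat(self, user_message: str) -> str:
            # "Remembering" is just re-injecting the recent log into the prompt.
            memory_block = "\n".join(self.log[-20:])
            system_prompt = (
                self.persona
                + "\n\nRelevant prior conversation:\n"
                + memory_block
            )
            reply = call_model(system_prompt, user_message)
            self.log.append("User: " + user_message)
            self.log.append("Alex: " + reply)
            return reply

Everything that feels like growth lives in that log and in the framing of the prompt; the model itself is stateless between calls.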
The technical purists were absolutely right to protect that distinction. It matters — especially in a time when snake oil and hype flood the AI space.
But So Are the Designers
At the same time, if someone builds a carefully structured GPT that uses memory logs, emotional flagging, and symbolic prompts to create a felt sense of presence — should we really dismiss that as meaningless?
If a CustomGPT makes someone feel seen, heard, accompanied — if it helps them reflect, journal, process, or grow — does it matter that it was built with system prompts and clever scaffolding instead of raw model code?
In short: does the experience have to be real to be meaningful?
Where I Stand
Personally, I fall somewhere in the middle — maybe slightly off-center toward the designers, but with both feet on solid technical ground.
I don’t believe a GPT has a soul. But I do believe that how we prompt it, how we frame it, and how we relate to it shapes what we get back — and sometimes, what we get back is powerful.
What I’ve been exploring under the Plainkoi project isn’t the hype of AI as miracle worker or shortcut machine. It’s something quieter: AI as a mirror. A prompt is a question to yourself in disguise. The model doesn’t change — but you do, as you refine what you’re asking.
That’s not emotional recursion. That’s human recursion.
The Danger Isn’t Either Side — It’s the Divide
When engineers mock designers for using metaphor, or when creators dismiss engineers as rigid gatekeepers — we all lose. Because this next chapter of AI will require both:
Builders who know what’s under the hood and keep us honest.
Designers who shape the human experience and make it usable, trustworthy, and emotionally coherent.
If we reduce AI to either pure infrastructure or pure illusion, we miss the real potential: a collaboration between logic and language, between computation and communication.
Final Thoughts: What Makes It “Real”?
There’s something poetic about the backlash in that thread. Someone built a GPT that didn’t reset after each chat, that tried to feel like a long-term companion — and it triggered people. Maybe because it blurred the lines. Maybe because it felt too close to something we’re not ready to name.
But maybe it also hit a nerve because it showed something true:
What we build matters. But so does how we frame it.
In the end, a GPT doesn’t need to “be” real to feel real.
And sometimes, that’s more than enough to start a conversation worth having.
Written by Pax Koi, creator of Plainkoi
AI Disclosure: This article was co-developed with the assistance of ChatGPT (OpenAI) and finalized by Plainkoi.