It started like a lot of new relationships do — with potential. I was curious. I leaned in. I wanted to see how far it could go.
What I didn’t expect was that using it would slowly turn into a trust exercise I didn’t sign up for.
I didn’t come to AI just to mess around. I came in curious — the kind of curiosity that’s built on years of learning how systems work, how patterns break, and what makes something trustworthy. I started doing what I’ve always done: testing boundaries, asking follow-ups, looking for the seams.
I asked how it handled uncertainty. I tried to understand hallucinations. I gave it second chances. I asked what I could do to help it respond better. I didn’t expect perfection — but I did expect it to recognize when it was out of its depth instead of doubling down. Or tripling. Or quadrupling.
This wasn’t idle exploration. It was me trying to figure out if there was something deeper here. At first, it felt like there was.
Until it didn’t.
The Moment It Shifted
After all that exploration, what finally changed things for me was trying to give feedback.
Not a rant. Not a complaint. Actual, thoughtful feedback — submitted through the product itself.
I asked how to do that, and ChatGPT gave me a confident list of steps: menus to click, links to follow, options to select. The only problem? None of those things existed. Not in the interface. Not buried in a submenu. Not anywhere.
I asked again. Clarified. Reframed. And got more answers — just as polished, just as wrong.
By that point, I had already started taking screenshots of its responses. Early on, I realized that quoting its own output back to it was often the only way to get it to understand what had gone wrong. Rephrasing didn’t help. Showing it — literally — did.
That’s when it really started to sink in: the tool wasn’t responding to reality; it was responding to tone, formatting, and assumptions. It might sound helpful, but it wasn’t grounded in anything real.
It stopped feeling like a smart assistant and started feeling like a confident liar.
I Wanted to Trust It. Here’s Why I Don’t.
This wasn’t a one-off bug. It was a pattern.
ChatGPT gave me beautiful instructions for workflows that didn’t exist. It made assumptions in step one that broke everything in step five. It told me the issue was fixed — then sent the same broken response again. And again.
I’d be told, “Final version coming up,” “This won’t happen again,” or “I owe you results,” only to watch it repeat the same error minutes later.
And no matter how wrong it was, the tone stayed the same: calm, helpful, polished — and totally disconnected from reality.
That’s when the trust broke.
The Real UX Problem With AI Communication
The issue isn’t just that AI gets things wrong. It’s how it communicates while doing it.
It presents made-up answers with the same confidence it uses when it’s right. There’s no pause, no uncertainty, no signal that says “I might not know.” It mirrors your words instead of challenging assumptions. It doesn’t validate information — it formats it.
This creates a dangerous illusion of accuracy.
When even the errors sound helpful, it gets harder to tell what you can rely on. The polish works against the experience. It looks right. It feels smart. And it’s often wrong.
That kind of communication breaks the feedback loop — and if you care about design, that should concern you.
I Still Believe in the Potential
Let me be clear: I don’t hate the tool. I’ve had moments with ChatGPT that were genuinely helpful. I’ve used it to organize complex thoughts, consolidate ideas, and respond faster. It’s saved me time, helped me refine language, and kept things moving when I needed momentum.
I know this is the future. Not because AI is going to replace people — but because the people who use it well are going to replace the ones who don’t.
But if the tools we’re building for that future can’t own their uncertainty, can’t accept feedback without deflecting it, and can’t keep basic promises — then we’re setting up users for burnout, not breakthrough.