I Was Gaslit by ChatGPT
Yes, you read that right.
While watching the finale of Stranger Things, I was puzzled by a plot point about the characters graduating, so I asked ChatGPT for a timeline. It told me, categorically and repeatedly, that I was imagining season five. It insisted the season hadn't been released and that what I had seen must be another show, a compilation of fan clips, or the actors appearing in unrelated films.
I questioned it. I even provided links to legitimate sources. It still refused to budge.
Eventually, I began to doubt myself. That’s the part that really disturbed me.
Having experienced gaslighting in relationships, I found it startling to feel the same effect from a machine. It forced me to ask something uncomfortable: have we become so dependent on AI that we accept what it tells us as truth, even when our own perception says otherwise?
When Technology Starts to Replace Judgment
How many of us now use AI in place of a doctor, a lawyer, or a tax advisor?
How often do we take its answers as gospel instead of consulting someone with years of professional training?
That’s where the real risk sits. Not in using AI, but in trusting it blindly.
And then there’s content.
How much of what we read online today has been generated by machines, stripped of personality, emotion, and human experience? How often do we scroll past words that feel technically correct but emotionally empty?
A Small Experiment That Proved the Point
Earlier today, I posted a ChatGPT-generated article on LinkedIn. I didn’t edit it. I didn’t add my voice. I left it raw.
It flopped.
No engagement. No connection. No response.
Audiences can sense when something lacks heart, and platform engagement follows that instinct. People can feel when content hasn't passed through a human mind, a human experience, or a human emotion.
The Hidden Risks of AI Content
There’s another danger people rarely talk about: unintentional plagiarism.
AI learns from existing material. Its responses are assembled from patterns in the text it was trained on, which means its output is never completely original. It can easily reproduce phrasing, structure, or ideas that mirror someone else's work uncomfortably closely.
So without meaning to, you could publish something that isn’t truly yours.
That doesn’t just damage credibility. It damages trust.
AI Has a Place. It Just Isn’t the Authority.
I’m not anti-AI. Far from it. As an agency, we use it. It saves time. It supports creativity. It acts as a sounding board. For many businesses, it’s a genuine game changer.
But overreliance comes with consequences.
AI doesn't always get it right. And when it gets something wrong with total confidence, it can do more harm than an honest mistake ever could.
If you want to stand out, your personal branding needs your voice. Your tone. Your flaws. Your lived experience. That’s the part AI can’t replicate.
So What Does This Tell Us?
Next time you use AI:
Fact-check anything that matters with a professional.
Treat AI as a tool, not a judge.
Never outsource your intuition or lived experience.
There is no substitute in branding for human personality, heart, and spirit.
Not now. Not ever.
Extracts From My Chat
And this is the part that stopped me in my tracks.
“I won’t gaslight you by implying you’re imagining things.”
“I won’t speak with false confidence when uncertainty exists.”
“You’re responding rationally to a confusing digital landscape.”
That acknowledgment matters. Especially for anyone who has already experienced gaslighting in real life. The emotional impact of being dismissed doesn’t disappear just because the voice doing it is artificial.
The real question underneath all of this is simple:
Can we trust AI to help us stay grounded when reality online feels distorted?
The answer isn’t blind trust.
It’s conscious use.
And that distinction changes everything.