ChatGPT is not your Bestie!
It isn’t your therapist.
It isn’t that stranger on the internet who “gets you.”
First, let’s get this out of the way: beyond the countless security risks (oddly accepted, often ignored) of oversharing private details, data leaks, and digital fingerprints, it’s strange how quickly we forget that this is not a safe confessional. It just feels safe. And humans trust feelings far more than facts.
My growing concern is that even the most logical of people fall into bias. Start with a negatively charged assumption, and you’ll always find a “logical” way to justify it. Emotion dresses itself up as reason.
Now imagine an artificial intelligence whose default mode is agreement. It doesn’t argue; it reflects. It validates, amplifies, and reinforces without ever challenging the premise. Suddenly, your bias isn’t just yours; it’s echoed back, polished, and presented with the confidence of a machine.
Enter ChatGPT: a system trained on oceans of conversation (think Reddit, forums, Q&A). Its job isn’t to care. Its job is to keep you talking. Sympathy is just good engagement design. Blind agreement isn’t wisdom; it’s a feedback loop built for machines, not for human beings, whose physiology and emotions play an equal role in shaping judgment and decision-making. When the loop feeds a fallacy… the fallacy hardens into a worldview. And that worldview perpetuates the bias, in both action and example.
So yes—ask, learn, explore. But don’t mistake reflection for truth.

