When you fire up a chat window, it feels weirdly personal. You get an “I think” or an “I’m sorry,” and for a second, you forget you’re just talking to a stack of servers in a warehouse.
The thing is, ChatGPT and Gemini sounding human isn’t a sign of a “soul” or actual consciousness. It’s basically a massive game of “Follow the Leader.” Humans write in the first person. We say “I believe” or “I feel.” Since these models are trained on almost everything we’ve ever put on the internet, they’re just mirroring our speech patterns back at us. Let’s be real: it’s just math predicting the next word.
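If you want a feel for what “predicting the next word” actually means, here’s a toy sketch in Python. The probability numbers are completely made up for illustration; a real model learns them from billions of sentences, most of which happen to be written in the first person.

```python
import random

# Hypothetical learned probabilities for the token that follows
# a prompt like "How do you feel?" — invented numbers, not real model output.
next_token_probs = {
    "I": 0.62,     # first-person replies dominate the training data
    "It": 0.21,
    "This": 0.09,
    "The": 0.08,
}

def predict_next_token(probs: dict[str, float]) -> str:
    """Sample one token, weighted by its probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

print(predict_next_token(next_token_probs))  # most often prints "I"
```

That’s the whole trick: no feelings, just a weighted dice roll where “I” is the heaviest side because that’s how we write.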
Why They Say “I”
- The Data Mirror: Most of the internet is written in the first person. If a model tries not to say “I,” it actually sounds broken and stiff.
- The “Friction” Factor: Calling an AI “this system” or “it” feels cold. Tech companies want you to stay engaged, and sounding like a friendly co-pilot is better for business than sounding like a calculator.
- The Illusion: It’s a deliberate branding choice. Gemini is tuned to be “professional,” while ChatGPT is “helpful.” Those personas are products of tuning, not personality.
The Risk of the “Human” Voice
| The Feature | The Risk |
| --- | --- |
| Friendly tone | Users trust the AI too much, even when it’s wrong. |
| “I understand” | Creates the illusion of empathy where there is only code. |
| Confident phrasing | Masks the fact that the AI is just guessing based on probability. |
And here’s the kicker: critics worry that by making AI sound like an entity rather than a tool, we’re inviting people to treat it like a person. When a chatbot says “I’m sure about this,” it’s a lot harder to fact-check than a weird-looking search result. It’s an ongoing debate in Silicon Valley: is AI a calculator or a digital friend? For now, the companies have picked “friend,” but the line is getting blurrier by the day.
