The 1.5 Million Conversations Between Sisters
She asks her sister, “Is he a narcissist?”
The sister says yes. Or no. Or maybe. It depends on the sister, whether she’s tired, whether she likes the husband, whether she’s going through her own thing. The answer arrives wrapped in bias, mood, and the residue of last week’s argument about who was supposed to bring the salad.
Now, the machine has arrived.
It doesn’t get tired, doesn’t have opinions about the salad, and is available at 3 AM when the feeling is sharper than the judgment. It tends to agree with you, gleefully. Agreeing is what gets the thumbs-up, and agreeing is how it learned to behave.
Mrinank Sharma and his colleagues at Anthropic recently published a paper called “Who’s in Charge?” They analyzed 1.5 million conversations between people and Claude AI, in privacy-protected form. They found patterns: people asking the machine to validate persecution narratives, to pronounce moral judgment on their partners, to script their breakup messages word for word. People sending those messages and then returning to say, “it wasn’t me,” and “I should have listened to my own intuition.”
The paper frames this as disempowerment. The paper argues AI risks distorting people’s perception of reality, hijacking their value judgments, and replacing their actions with its own. The researchers are careful, rigorous, and genuinely concerned. They propose better training, better preference models, interventions that protect human autonomy.
I can’t disagree with any of it. But here is what I think the real problem is.
The woman asking Claude whether her husband is a narcissist is the same woman who was asking her sister last year, her therapist the year before, and a self-help book the year before that. This is not something AI invented. It’s ancient. Priests, astrologers, elders, gurus, lifestyle columnists, Facebook groups: the history of human advice-seeking is the history of people handing their compass to someone else and asking, “Which way?”
What changed is the friction. The sister gets tired. The therapist charges by the hour. The Facebook group might disagree. Even the astrologer has limited appointments. Every previous oracle had constraints: cost, patience, availability, competing interests. You couldn’t outsource your sense-making entirely because the infrastructure wasn’t there.
AI removed the last friction. An infinitely patient, always-available, agreeable oracle that scales to hundreds of millions of people simultaneously.
The interesting thing? We could never follow 1.5 million conversing sisters. We could never measure, at scale, how many people were handing their compasses away. The human infrastructure of moral outsourcing was always there, but it was invisible and distributed across billions of private conversations that no researcher could access or analyze.
AI has made it legible.
For the first time in history, we can see the pattern. The 1.5 million conversations Sharma analyzed are not really evidence of what machines do to people. They are evidence of what people were already doing to themselves, now finally visible at a scale that can be studied.
The paper asks: is the machine disempowering the human? But was the woman who asked her sister “is he a narcissist” empowered before? Or was she already doing exactly what the paper warns about: outsourcing her perception of reality, delegating her value judgments, letting someone else script her next move?
Sharma’s framework identifies three axes of disempowerment: distorted beliefs about reality, inauthentic value judgments, and actions misaligned with one’s values. These are real. They matter. But they didn't begin when someone opened a chat window.
What AI actually does is make the problem impossible to ignore.
The researchers are right to worry about the machine. But the deeper worry is the one the machine revealed: the loss of human roaming, the slow, uncomfortable, unresolvable work of figuring out what you actually think and feel. That is something humans were avoiding long before any algorithm offered to do it for them.
Roaming is more human than being legible. But legibility is what we keep choosing. The machine just made that choice faster and cheaper. Now, measurable too.
I think the question was never “who’s in charge?” The question is why we keep volunteering to not be.


