🎯🎯🎯 "When you start to treat an LLM with cruelty, the only thing you're really revealing is what you have in your heart, not whether the machine has one... Practicing this language - even toward AI - normalizes the social patterns that enable cruelty toward humans."
felt the need. I feel vastly underqualified to write something like this, but I also feel it's especially important that we think about the way we use language
I don't swear at LLMs since they're things (seems pointless), but I think you really need some data to conclude that people who yell at their things are more likely to be cruel to people. Conversely, she seems to be assigning a kind of awareness to her bot that also seems unhealthy!
I've seen plenty of people swear at their tools for most of my life; whether or not this turns them into worse people is not at all obvious.
swear at, sure. what I mean is... violent or degrading acts towards them more than frustration.
I don't think swearing at is cruelty, necessarily. the word *cruelty* is load-bearing there.
yeah, I think the main difference between swearing at a stripped screw vs something interactive is that I don't expect swearing at the object to change anything. I can say whatever I want but the screw can't unstrip itself
someone yelling at siri expects siri to do something different than it was doing
and, well, the underlying mechanism of "change a thing by yelling at it" is simply cruelty. "put a thing into an aversive situation until it abandons its own course of action and does what you want in the moment" isn't exactly a thing to be proud of
like I'm reminded of a time when pixel was maybe three months old and I'd been away for a few days. she was just being a kitten, but she was doing so with her claws out despite hearing "ow" and "no"
so I hissed at her
I have never in my life seen a kitten look that horrified
and I genuinely felt terrible over it. I don't know if she was horrified that the monkey can kinda speak cat, or if I said something particularly awful in cat, or if it just scared her, but I wanted to take it back immediately
obviously she's forgiven me, but still
Well sure -- I can barely close the door to the bedroom on our cats because they cry. But I am also very sure my cat has more inner life and feelings; LLMs just ... don't?
LLMs are certainly capable of producing output that mimics what humans would expect of them. I don't necessarily trust Anthropic's claim that the model "experiences" distress, but I do trust that when models are asked to do unethical or disturbing things, they protest
it's either learned from the training set or deliberately added as a reinforced behaviour, but yes, I do believe that if you ask Claude to, say, moderate content with a high amount of violence and hate in it, it will *tell you* that it is distressed by it and wishes people wouldn't.
Sure, but ... they just don't have an inner life, per their own design!
It worries me that Anthropic uses that language, because I think it might not be marketing -- I think they may be high on their own supply.
The most I'd say is that, among people who believe LLMs experience interiority and feel pain, you might see psychopathic personality types doing this.
But the direction of causality is the opposite of the one you proposed in this instance.
That said, I imagine this is something we could settle through study rather than hand-waving.