AI in Practice · Lead Story
A machine that says sorry
Anthropomorphic AI does not merely make software feel human; it changes what users do.
Most arguments about anthropomorphic AI circle the wrong question: whether users believe the machine is conscious, caring, malicious, or sorry. The more durable effect is behavioral. A chat interface arranges a social scene, and familiar scenes occasion familiar conduct before anyone has to endorse a theory of machine minds.
That social surface is useful at the start. It lets people ask, interrupt, correct, soften, and continue without learning syntax or operating a console. But when the work requires control, the same repertoire starts to mislead: asking the model to soften its tone when the real problem is a missing source, treating an apology as a diagnosis, scolding the model instead of changing the conditions that shape the next answer.
Sometimes the cost is dramatic: a chatbot gives harmful advice, a company tries to disown an answer published on its own website, a companion interface becomes part of a crisis. More often it is ordinary and cumulative. Work slows down. Errors get personalized instead of traced. Accountability drifts from the organization that designed the system to the fictional character the interface has staged.
The point is not to strip language models of every human cue. It is to notice when the human shape has become the wrong control surface. AI may keep arriving; the interface does not have to keep pretending the machine is someone to negotiate with.