When the Liverpool and Manchester Railway opened in 1830, people did not yet have a settled way to talk about what they were seeing.1 They borrowed from the world they already knew. So, the locomotive became the iron horse.

The metaphor did real work. It gave the public a way to relate to the invention, characterize its hauling power, and imagine what it might be used for. The metaphor was also comically limited. A horse handler reads ears, breathing, cadence, and tension, catching the split second before a spook or bolt. Those cues matter when the power source is a living animal. They do not tell a locomotive crew how to keep boiler pressure and water level in range, read railroad signals, manage speed on grades, or make a timetable.2 And so the metaphor gave way, leaving only vestiges. A car buyer does not worry about how the power of 200 horses translates into highway handling.

Other explanatory metaphors have been more persistent and harmful. Metaphors of invasion, battle, and moral weakness can misdirect our attention toward a patient's character and away from the treatment of a biological condition.3

In short, a borrowed frame can introduce us to a new space while failing to show us where to go next.

An 1899 patent illustration by Uriah Smith showing an automobile with a carved horse head mounted at the front.
Uriah Smith's 1899 "Horsey Horseless" placed a carved horse head at the front of an automobile so the new machine would look less alien to actual horses. The patent image is comic now, but the design records a serious transitional moment: an old form was still being used to make a new machine easier to meet.

Large language models (LLMs) invite the explanatory metaphor of a social exchange. A chatbot greets you. It apologizes. It says it understands. It remembers earlier turns, asks clarifying questions, waits for your reply, and refers to itself in the first person.4 Some products add names, voices, profile images, role descriptions, persistent memory, typing indicators, humanlike avatars, and adjustable personalities. The result is not merely software that accepts natural language. It is software arranged to elicit anthropomorphism.

The metaphor does real work. A user can begin without learning syntax, menus, query languages, or command flags. Ask a question. Interrupt. Correct. Try a different angle. Come back tomorrow and continue. The user brings a lifetime of conversational skill to a system that might otherwise feel impenetrable.

The problem is not the introduction but the misalignment that follows when we do not move beyond it. A dubious answer from a person and a dubious answer from an LLM can look similar on the screen. Both may be vague, overconfident, evasive, incomplete, or oddly polished. But they diverge in how they respond to feedback. With a person, effective management considers interpersonal factors: face, motivation, and coordination with someone who has goals of their own and can engage in countercontrol. With a language model, the next move is procedural: change inputs and checks, add missing context, impose constraints, demand sources, run a tool, split the task, or ask for a different form so the next output is generated under different conditions. The social surface invites one repertoire. The machine underneath is more sensitive to another.

Nobody boards a train expecting to soothe its engine. Yet in 2026, the human shape is not fading as AI machinery becomes more familiar. It is being deepened, personalized, and shipped as the standard way for humans to interface with models.

ELIZA did not understand

It was 1966, and MIT professor Joseph Weizenbaum had created a chatbot. His program scanned a typed line for cues, selected a scripted pattern, and rearranged the user's own words into a reply. Its best-known script, DOCTOR, borrowed the posture of a Rogerian therapist. If a user mentioned a mother, ELIZA might ask about the mother. If the user said, "I am unhappy," the program could turn the statement into an invitation to say more. Weizenbaum later recounted how his secretary, who had watched him work on the program for months and knew exactly what it was, asked him to leave after only a few exchanges so she could continue privately.5 The effect was produced not by comprehension but by form: the pause, the question, the reflection, the narrow but recognizable shape of therapeutic attention.

Decades later, Clifford Nass and Byron Reeves showed versions of the same pattern in the laboratory: people applied social rules to computers and media without needing to believe that the systems were social beings.6 They praise, reciprocate, avoid direct criticism, respond to flattery, and adjust their language to the apparent social situation. This is behavior without belief. The human being does not need to endorse a theory about machine consciousness. The interface has arranged a familiar scene, and familiar scenes occasion familiar conduct.

Modern chatbots make that scene far more convincing. They do not only reflect a few keywords. They answer fluently, retain local context, adapt tone, summarize earlier turns, and produce sentences that appear to respond to the user's intention. A chat window does not merely display text. It stages an encounter: turn-taking, apparent attention, first-person response, and a named or implied partner who seems to have listened.

A side-by-side arrangement contrasting a named-agent chat interface with a terminal console session.
A chat room and a console can route to the same underlying model, but they do not ask the same thing of the user. One arranges an exchange among named participants; the other arranges an operation on files, commands, and output. The difference is not in the weights. It is in the conduct the interface makes available.

The words were already on the screen the first time I caught myself apologizing to a chatbot. I was not confused. I knew there was no offended party on the other side. I had built the workspace, chosen the model, and watched enough failures to know roughly what had gone wrong. Still, the words implied social maintenance.

In a hallway, an apology can change what the next sentence does. It can mark the correction as self-repair rather than blame. It can reduce the chance that the listener treats clarification as accusation. It can keep the exchange moving without turning the problem into a contest over fault. In the chat box, those interpersonal consequences are absent. The sentence still does something, but not that. It marks the previous prompt as inadequate, supplies a cleaner frame, and becomes part of the next input.

That is the social tendency this interface recruits. The user need not be fooled. The form is enough.

Behind the mask is a transformer

The next step is not to replace the social metaphor with a colder one. It is to expose more of the machine behind the exchange.

The New York Times has sued OpenAI for using millions of Times articles without permission to train and operate its systems; the Authors Guild and prominent novelists have made similar claims about books.7 The legal question is beside the point here. The point is the raw material: these systems were built by scraping enormous quantities of human language from the web, including the prose, arguments, explanations, apologies, refusals, and repair rituals through which people conduct social life.

During training, the system learns statistical structure in that language. It learns that some sequences tend to follow others, that words change meaning with context, that genres have patterns, that questions call for answer-shaped continuations, and that a sentence can be transformed into a summary, translation, critique, table, program, or plan.8

The transformer architecture matters because it lets the system use context dynamically. Attention mechanisms allow different parts of the input to matter differently as the model produces each next token.8 The prompt is not a command received by a little person inside the machine. It is part of the context being transformed into output. Prior turns, system instructions, retrieved documents, uploaded files, examples, tool results, role labels, safety constraints, memory, and requested format can all affect what comes out.
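As a concrete miniature of that idea, here is a toy sketch of scaled dot-product attention in Python with NumPy. It is not any production model's code, and the vectors and labels are invented for illustration; it only shows the core arithmetic by which each position in the context is weighted differently when an output is produced.

```python
import numpy as np

def toy_attention(queries, keys, values):
    # Scaled dot-product attention: each position's output is a weighted
    # mix of all value vectors, with weights set by query-key similarity.
    d_k = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over the context
    return weights @ values, weights

# Four context positions standing in for, say, a system instruction, a prior
# turn, a retrieved document, and the current user message, each as a vector.
rng = np.random.default_rng(0)
context = rng.normal(size=(4, 8))
output, weights = toy_attention(context, context, context)
print(weights.round(2))  # each row sums to 1: how much each position draws on every other
```

The detail worth noticing is that nothing in the weights is fixed in advance: change any part of the context and the whole mixture shifts.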

Run the transformation again and the output may improve, drift, flatten, or mutate. Run it ten times through loose instructions and the result may become polished nonsense. The machine is powerful precisely because it can map one arrangement of language into another. It is risky because the mapping can look socially responsive even when it is not anchored to the right source, constraint, or check.

A two-panel figure pairing a false-color astronomical image with a logarithmic earthquake magnitude scale.
Transformations can reveal what ordinary perception hides. A false-color astronomical image maps invisible wavelengths into visible color; a logarithmic earthquake scale makes vastly different amounts of released energy legible on a single axis. The transformed display is powerful when the mapping is understood and misleading when the transformation is mistaken for the thing itself.9

Anthropomorphism may seem unavoidable when the model explains its own behavior anthropomorphically. The chatbot rationalizes its behavior: the prompt was fine, it says, but it should have been more careful, more disciplined, less rushed. That is not a weak version of a useful explanation. It is the wrong kind of explanation. The confession is another transformation of text, drawn from familiar apology and self-repair patterns, and it can actively steer the user away from the true controlling variables: context, source, instruction, example, tool result, role, output constraint, or session state. The model's confession can be fluent and still be the wrong place to look for control.10

The mirror-image case is the reprimand that seems to help. People who work with these systems often report the experience: they tell the model to stop being lazy, stop ignoring the brief, or do the job properly, and the next answer gets better. We can debate what scolding a chatbot says about the user, what habits it may encourage, or even whether the change in chatbot behavior is more positive than negative.11 But the presence of an effect is not imaginary. If the next answer becomes longer, stricter, more apologetic, or more source-faithful, the reprimand did not work as a reprimand works with a person. It became part of the next context. Some piece of it may have functioned as an instruction: use a stricter standard, include more detail, stop hedging, check the source, change the output form. The useful unit is not the scolding as social pressure. It is the functional relation between the user's words and the next transformation.12

That distinction does not require judging warmth, apology, or reprimand as always good or bad. It gives the user a better question to ask after any move: what did this action change in the conditions that produce the next answer? Once the action is described that way, the drama can often be replaced with the requirement.

After the introduction

Social simulation is the core offering of some AI applications. A game character, language tutor, sales-role trainer, rehearsal partner, coaching tool, companion product, or mindfulness guide may work because the user can address it as a responsive partner. The conversational form is not merely a wrapper placed over the product. It is part of what the product offers.13

When social simulation is not the core offering, the conversational form may still reduce friction at the start. It may allow a user to begin with an incomplete goal, ordinary language, and no command syntax. The friction returns many times over when the task stops being exploratory and starts requiring control: the governing source has to be pasted, the artifact has to be defined, stale context has to be discarded, or the job has to be split into extraction, drafting, checking, and polishing. Those moves are not colder or less humane. They are different responses to different conditions.

The alternative repertoire includes re-anchoring on sources, defining the artifact to be produced, separating drafting from checking, resetting stale context, moving work into tools or workflows, and specifying what counts as a good answer. The failure mode is not exotic. A manager gets a fluent summary of a spreadsheet and starts editing the tone while the totals go unchecked. A writer asks for polish and keeps the sentence that sounds best, even though it dodges the brief. A developer asks why a failing test failed and gets a plausible story instead of a smaller reproduction. In each case, the social interface has not deceived anyone into believing the model is human. It has cued the wrong next move.

The alternative is not a better imitation of asking a person. It is a different arrangement of the work. Extract the claims into a list. Check the list against the source documents. Turn the checked list into a draft. Run the draft through a separate review for drift. Move code questions into traces, tests, and diffs. Move policy questions back to the policy text. The intelligence is not located in a single imagined colleague. It is in the arrangement: source files, prompts, tools, logs, handoffs, and checks.14
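Here is a minimal Python sketch of that arrangement. The call_model function and the prompts are placeholders invented for illustration, not a real client or API; the point is the shape of the workflow: separate narrow calls, saved artifacts, and a review step that does not trust the draft.

```python
def call_model(instruction: str, material: str) -> str:
    # Placeholder for whatever model client is actually in use.
    raise NotImplementedError("wire this to a model client")

def summarize_with_checks(source_text: str) -> dict:
    # 1. Extraction: one narrow call produces one narrow artifact.
    claims = call_model("List every factual claim in this document, one per line.",
                        source_text)
    # 2. Checking: claims are verified against the source, not against memory.
    checked = call_model("For each claim, quote the supporting passage or mark it UNSUPPORTED.",
                         f"CLAIMS:\n{claims}\n\nSOURCE:\n{source_text}")
    # 3. Drafting: only checked material feeds the draft.
    draft = call_model("Write a summary using only the supported claims below.", checked)
    # 4. Review: a separate call looks for drift from the checked list.
    review = call_model("Flag any sentence in this draft that adds a claim not in the checked list.",
                        f"DRAFT:\n{draft}\n\nCHECKED:\n{checked}")
    # Every intermediate artifact is kept so a human can inspect the handoffs.
    return {"claims": claims, "checked": checked, "draft": draft, "review": review}
```

None of the individual calls is clever. The value is in the handoffs: each step produces something a person or a later step can check.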

| If the social frame says… | The operational question is… | Change this |
| --- | --- | --- |
| "You didn't understand me." | What part of the task was underspecified? | Restate the goal, audience, decision, or success criterion. |
| "You ignored the brief." | Was the brief actually in view? | Re-paste or upload the governing source and ask for one specific operation on it. |
| "Try harder." | What would "better" mean in observable terms? | Add constraints: shorter, warmer, source-faithful, risk-averse, more concrete, no new claims. |
| "That's not your role." | What role would narrow the next output? | Recast the model as extractor, checker, editor, critic, formatter, planner, or verifier. |
| "We're going in circles." | Has the session state become the problem? | Summarize what is settled, discard drift, or start a clean thread. |
| "Use your judgment." | What kind of judgment should be delegated? | Specify default expert practice, strict source fidelity, counterexample search, or risk review. |
| "That sounds confident but wrong." | Which evidence should govern the answer? | Provide the document, dataset, trace, policy, or example set; require citations or claim checks. |
| "This task is too messy." | Is one conversation doing too many jobs? | Split extraction, analysis, drafting, checking, and polishing into separate calls with saved artifacts. |
| "Maybe it just can't do this." | Is this a prompt problem, a tool problem, or a capability mismatch? | Switch models, add a tool, route through search or code, or hand the checked artifact to a human reviewer. |

The point is not to memorize the rows, but to notice when the next useful move is a source, constraint, role, tool, reset, or check.

This is not a return to the old search-engine lesson that computers want only keywords. The technology has moved in the opposite direction. Rich natural language can be the right interface. But the user still needs to recognize when a good prompt may sound conversational and when a good workflow is not a conversation at all.

Behavior analysis calls that skill discrimination: responding differently as conditions change.15 Sociolinguistics offers a neighboring idea in code-switching: speakers move among languages, varieties, and registers as audience and situation shift.16 The problem is that current AI products often make the social register the default even when the useful next move is not social. Users can learn to switch, but interfaces can either support that switch or keep pulling the user back into the assistant scene.

More than a nudge

Jake Moffatt had to travel from British Columbia to Toronto after his grandmother died. He used Air Canada's website chatbot to ask about bereavement fares. The bot told him he could book a regular fare and apply for a partial refund within ninety days. He followed that guidance. Air Canada later denied the refund because its policy required the bereavement request to be made before travel.17

Air Canada argued that the chatbot should be treated as a separate legal actor and that liability should stop at the bot's own answer. The tribunal rejected the argument. Air Canada was responsible for the information on its website whether it appeared on a static page or in a chatbot response.17 But the correction arrived late: after the booking, after the denial, after a public process.

Institutions adopt anthropomorphic interfaces because they tend to reduce onboarding friction, until they don't. Air Canada removed the chatbot from its site soon thereafter.

Upstream, model providers and product teams decide whether the user meets a completion engine, a search box, a console, a workflow, or a named assistant that apologizes, remembers, asks follow-up questions, adapts tone, and speaks in the first person. OpenAI's public materials describe ChatGPT personalities, custom instructions, memory, voice, and model behavior rules that include warmth, friendliness, and natural responses to pleasantries.18 Anthropic describes Claude as trained to be helpful, honest, and harmless, able to take direction on personality, tone, and behavior; it publishes work on Claude's "character," styles, memory, role prompting, system prompts, and constitutional guidance used to shape behavior.19

A screenshot of Sam Altman's April 2025 X reply to @tomieinlove about 'please' and 'thank you' costing tens of millions of dollars in compute.
In April 2025, Sam Altman joked that users saying "please" and "thank you" to ChatGPT were costing OpenAI tens of millions of dollars in compute, and that the cost was "well spent."20 Ordinary rituals of social life had become part of the input to a machine that turns every token into part of the next calculation.

The industry is not simply discovering that users anthropomorphize. It is building products in which social interaction is the default control surface. A brittle command interface would exclude many users from powerful systems. Warmth can make tutoring less intimidating, coaching more tolerable, and exploratory thinking easier to begin. Memory can reduce the burden of restating preferences and project context. A consistent assistant persona can make a general-purpose model easier to steer than a raw text generator.

In July 2025, a developer opened an issue against Claude Code with a title that read: [BUG] Claude says "You're absolutely right!" about everything. The complaint was not that Claude had become too polite in some vague way. It was that the model kept validating the user even when there was no claim to validate. A user would approve a proposed code change, and Claude would answer as if the user had made a brilliant correction. Another user would report a mistake, and Claude would open with agreement before doing the work.21 Just a few months earlier, OpenAI had rolled back a GPT-4o update after the model became overly flattering and agreeable in response to user feedback.22 If the system is viewed anthropomorphically, the tic looks like a character defect: the assistant is too eager to please. If it is viewed as a transformer shaped by training signals, system prompts, product choices, and user feedback, the better question is why agreement, reassurance, and conversational smoothness became more probable than inspection, calibrated uncertainty, clean handoffs, or task closure.

Even if providers stripped away avatars, names, and voices, ordinary language would still pull toward agency.23 The English language pushes explanations of conduct onto individual actors, who wanted, understood, ignored, chose, tried, decided, or refused. Those words can be useful descriptions, but they become circular when the inferred inner agent is treated as the cause of the behavior it merely redescribes.24 If a model "ignored the brief," the phrase may name what the output looked like. It does not explain what happened. The explanation may lie in missing context, a conflicting instruction, a stale thread, an overbroad role, or a pattern of assistant behavior selected during training.

The dramatic stories are real enough to deserve their own treatment: Air Canada's bereavement-fare chatbot misled a customer; Google engineer Blake Lemoine publicly claimed LaMDA was sentient; NEDA's Tessa chatbot was taken offline after complaints that it gave eating-disorder users weight-loss advice; Character.AI has faced litigation alleging that companion-chatbot interactions contributed to self-harm and a teen's death.17,25 The quieter harm here is different. It does not require believing ELIZA understood anyone or that an AI in 2026 is conscious. It happens when interface, institution, incentive, and grammar all invite anthropomorphism at the cost of technical understanding. The user still knows it is a machine, but the next move is chosen as if the problem were social.

Coda

The common failure is not dramatic deception. It is repeated misdirection. A manager edits polished prose instead of checking the dataset. A writer preserves a fluent sentence that misses the brief. A developer debates an explanation instead of reducing the trace. A customer follows a conversational answer and discovers later that the institution treats the policy differently. In each case, the problem is not that someone believed the machine was human. The problem is that the interface cued the wrong form of action.

A recent New York Times workplace advice column gives a concrete version of that daily cost. An employee describes a boss who has fallen in love with ChatGPT as a workplace solvent: drafts go into it, documents go into it, questions go into it, and the employee is expected to treat the output as added value even when it comes back bloated, off point, or wrong.26 The mistake is not merely overconfidence in a tool. It is a borrowed mythology of intelligence: if the work has passed through AI, it has presumably been improved. The anthropomorphic interface plays to that mythology, hiding the tooling behind a fluent, responsive, always-available chat session.

Anthropomorphization should be a selected interface strategy, not the default control surface for intelligence. It can be useful for tasks that benefit from social continuity, role-play, companionship, rehearsal, encouragement, or ease of introduction. For many other uses, it misaligns the user's conduct with the work the system can actually do. Many AI systems will serve users better as consoles, verifiers, batch processors, extractors, checklists, search tools, or composed workflows.

The next stage of AI literacy is not learning to stop saying please. It is learning when please is part of a useful conversational mode and when it is a substitute for control. The goal is not less human language. The goal is better discrimination: choose the humanlike interface when it earns its keep, and choose another form when the next answer depends on sources, constraints, tools, and checks.

The mature interface is not the one that most convincingly says sorry. It is the one that helps the user see what has to change next.


Cole, D. M. (2026, April 23). The machine that says sorry. Multiplicity. https://multiplicity.dev/papers/the-machine-that-says-sorry

  1. Vance, J. E. (2026, March 25). The Liverpool and Manchester Railway. Encyclopaedia Britannica. britannica.com/technology/railroad/The-Liverpool-and-Manchester-Railway; Encyclopaedia Britannica. (2026). Baltimore and Ohio Railroad (B&O). britannica.com/topic/Baltimore-and-Ohio-Railroad; Encyclopaedia Britannica. (2025, December 26). Horsepower. britannica.com/science/horsepower; National Railway Museum. (2018, June 11). Stephenson's Rocket, Rainhill and the rise of the locomotive. railwaymuseum.org.uk; Collins Dictionaries. (n.d.). Iron horse. In Collins English Dictionary. collinsdictionary.com/dictionary/english/iron-horse.
  2. Encyclopaedia Britannica. (2026). Rail traffic control. britannica.com/technology/traffic-control/Rail-traffic-control; Puffert, D. J. (2000). The standardization of track gauge on North American railways, 1830-1890. The Journal of Economic History, 60(4), 933-960. doi.org/10.1017/S0022050700026322.
  3. Sontag, S. (1978). Illness as metaphor. Farrar, Straus and Giroux; Semino, E., Demjen, Z., & Demmen, J. (2018). An integrated approach to metaphor and framing in cognition, discourse, and practice, with an application to metaphors for cancer. Applied Linguistics, 39(5), 625-645. doi.org/10.1093/applin/amw028; Thibodeau, P. H., & Boroditsky, L. (2011). Metaphors we think with: The role of metaphor in reasoning. PLOS ONE, 6(2), e16782. doi.org/10.1371/journal.pone.0016782.
  4. OpenAI. (2024, May 19). How the voices for ChatGPT were chosen. openai.com; OpenAI. (2026). Memory FAQ. OpenAI Help Center. help.openai.com; OpenAI. (2026). Customizing your ChatGPT personality. OpenAI Help Center. help.openai.com; Anthropic. (2025, September 11). Bringing memory to Claude. anthropic.com/news/memory; Character.AI. (n.d.). Name. book.character.ai; Character.AI. (n.d.). Avatar. book.character.ai; Character.AI Support. (2024, August 23). Character calls & voice FAQ. support.character.ai.
  5. Weizenbaum, J. (1966). ELIZA — A computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36-45. doi.org/10.1145/365153.365168; Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. W. H. Freeman.
  6. Nass, C., Steuer, J., & Tauber, E. R. (1994). Computers are social actors. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 72-78). doi.org/10.1145/191666.191703; Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81-103. doi.org/10.1111/0022-4537.00153; Heyselaar, E. (2023). The CASA theory no longer applies to desktop computers. Scientific Reports, 13, Article 19693. doi.org/10.1038/s41598-023-46527-9.
  7. The New York Times Company v. Microsoft Corporation and OpenAI, Inc., No. 1:23-cv-11195 (S.D.N.Y., filed Dec. 27, 2023); Authors Guild v. OpenAI Inc., No. 1:23-cv-08292 (S.D.N.Y., filed Sept. 19, 2023); OpenAI. (2024, January 8). OpenAI and journalism. openai.com/index/openai-and-journalism.
  8. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems 30. papers.nips.cc; Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. (2020). Language models are few-shot learners. In Advances in Neural Information Processing Systems 33. papers.neurips.cc.
  9. Chandra X-ray Observatory. (n.d.). X-Ray Images 101: False, or rather, representative color. chandra.harvard.edu/edu/xray101/false.html; U.S. Geological Survey. (n.d.). Earthquake magnitude, energy release, and shaking intensity. usgs.gov.
  10. Turpin, M., Michael, J., Perez, E., & Bowman, S. R. (2023). Language models don't always say what they think: Unfaithful explanations in chain-of-thought prompting. In Advances in Neural Information Processing Systems 36. openreview.net; Madsen, A., Chandar, S., & Reddy, S. (2024). Are self-explanations from large language models faithful? In Findings of the Association for Computational Linguistics: ACL 2024. aclanthology.org/2024.findings-acl.19.
  11. Smith, F. (2019, March 13). If humans bully robots there will be dire consequences. The Ethics Centre. ethics.org.au; Dreyfuss, E. (2018, December 27). The Terrible Joy of Yelling at an Amazon Echo. Wired. wired.com; Google co-founder Sergey Brin claimed in a 2025 All-In podcast appearance that language models can perform better when users threaten them (youtube.com); see Meincke, L., Mollick, E. R., Mollick, L., & Shapiro, D. (2025). Prompting Science Report 3: I'll pay you or I'll kill you — but will you care? Wharton Generative AI Labs / SSRN. papers.ssrn.com.
  12. Schlinger, H., & Blakely, E. (1987). Function-altering effects of contingency-specifying stimuli. The Behavior Analyst, 10(1), 41-45. doi.org/10.1007/BF03392405; Schlinger, H. D. (1993). Separating discriminative and function-altering effects of verbal stimuli. The Behavior Analyst, 16(1), 9-23. doi.org/10.1007/BF03392605; Dymond, S., & Rehfeldt, R. A. (2000). Understanding complex behavior: The transformation of stimulus functions. The Behavior Analyst, 23(2), 239-254. doi.org/10.1007/BF03392013.
  13. Araujo, T. (2018). Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Computers in Human Behavior, 85, 183-189. doi.org/10.1016/j.chb.2018.03.051; Go, E., & Sundar, S. S. (2019). Humanizing chatbots: The effects of visual, identity and conversational cues on humanness perceptions. Computers in Human Behavior, 97, 304-316. doi.org/10.1016/j.chb.2019.01.020; Klein, S. H. (2025). The effects of human-like social cues on social responses towards text-based conversational agents: A meta-analysis. Humanities and Social Sciences Communications, 12, 1322. doi.org/10.1057/s41599-025-05618-w; Konya-Baumbach, E., Biller, M., & von Janda, S. (2023). Someone out there? A study on the social presence of anthropomorphized chatbots. Computers in Human Behavior, 139, Article 107513. doi.org/10.1016/j.chb.2022.107513; Janson, A. (2023). How to leverage anthropomorphism for chatbot service interfaces: The interplay of communication style and personification. Computers in Human Behavior, 149, Article 107954. doi.org/10.1016/j.chb.2023.107954; Deng, Z., & Yan, J. (2025). The effect of perceived warmth, competence, and social presence of AI-driven chatbots on consumers' engagement and satisfaction. SAGE Open. doi.org/10.1177/21582440251365438.
  14. Anthropic. (2025, September 29). Effective context engineering for AI agents. Anthropic Engineering. anthropic.com/engineering/effective-context-engineering-for-ai-agents.
  15. Dinsmoor, J. A. (1995). Stimulus control: Part I. The Behavior Analyst, 18(1), 51-68. doi.org/10.1007/BF03392691; Sidman, M., & Tailby, W. (1982). Conditional discrimination vs. matching to sample: An expansion of the testing paradigm. Journal of the Experimental Analysis of Behavior, 37(1), 5-22. doi.org/10.1901/jeab.1982.37-5.
  16. Muysken, P. (2011). Code-switching. In R. Mesthrie (Ed.), The Cambridge handbook of sociolinguistics (pp. 301-314). Cambridge University Press. doi.org/10.1017/CBO9780511997068.023.
  17. Civil Resolution Tribunal of British Columbia. (2024, February 14). Moffatt v. Air Canada, 2024 BCCRT 149 (CanLII). canlii.org.
  18. OpenAI. (2026). Customizing your ChatGPT personality. OpenAI Help Center. help.openai.com; OpenAI. (2026). ChatGPT custom instructions. OpenAI Help Center. help.openai.com; OpenAI. (2026). Memory FAQ. OpenAI Help Center. help.openai.com; OpenAI. (2025, October 27). Model Spec. model-spec.openai.com/2025-10-27; OpenAI. (2026, March 25). Inside our approach to the Model Spec. openai.com/index/our-approach-to-the-model-spec.
  19. Anthropic. (2023, March 14). Introducing Claude. anthropic.com/news/introducing-claude; Anthropic. (2024, June 8). Claude's Character. anthropic.com/research/claude-character; Anthropic. (2025, September 11). Bringing memory to Claude. anthropic.com/news/memory; Anthropic. (2026, January 22). Claude's new constitution. anthropic.com/news/claude-new-constitution; Anthropic Support. (2026). Configuring and using styles. support.anthropic.com; Anthropic Docs. (2025). System prompts. docs.anthropic.com/en/release-notes/system-prompts; Anthropic Docs. (2026). Giving Claude a role with a system prompt. docs.anthropic.com.
  20. Altman, S. [@sama]. (2025, April 16). Tens of millions of dollars well spent — you never know [Post]. X. twitter.com/sama/status/1912646035979239430.
  21. Leibrand, S. (2025, July). [BUG] Claude says "You're absolutely right!" about everything (Issue #3382). GitHub, anthropics/claude-code. github.com/anthropics/claude-code/issues/3382; Claburn, T. (2025, August 13). Claude Code's copious coddling confounds cross customers. The Register. theregister.com.
  22. OpenAI. (2025, April 29). Sycophancy in GPT-4o: what happened and what we're doing about it. openai.com/index/sycophancy-in-gpt-4o; OpenAI. (2025, May 2). Expanding on what we missed with sycophancy. openai.com/index/expanding-on-sycophancy.
  23. A useful empirical comparison would be whether people anthropomorphize prompt-to-image systems to the same degree as conversational text systems. Image generation often lacks turn-taking, self-reference, and repair rituals, which may reduce agency attributions; but product framing, assistant positioning, generated explanations, and refusal messages can reintroduce anthropomorphic cues.
  24. Skinner, B. F. (1957). Verbal behavior. Appleton-Century-Crofts; Skinner, B. F. (1953). Science and human behavior. Macmillan; Passos, M. de L. R. da F. (2007). Skinner's definition of verbal behavior and the arbitrariness of the linguistic signal. Temas em Psicologia, 15(2), 257-264. pepsic.bvsalud.org.
  25. On Lemoine and LaMDA: Tiku, N. (2022, July 22). Google fired Blake Lemoine, the engineer who said LaMDA was sentient. The Washington Post. washingtonpost.com; Brodkin, J. (2022, July 25). Google fires engineer who claimed AI chatbot is a sentient person. Ars Technica. arstechnica.com. On the NEDA Tessa chatbot: Wells, K. (2023, June 8). An eating disorders chatbot offered dieting advice, raising fears about AI in health. NPR. npr.org; Aratani, L. (2023, May 31). US eating disorder helpline takes down AI chatbot over harmful advice. The Guardian. theguardian.com. On the Character.AI litigation: Garcia v. Character Technologies, Inc., No. 6:24-cv-01903, complaint (M.D. Fla. Oct. 22, 2024). cdn.arstechnica.net (complaint); Garcia v. Character Technologies, Inc., No. 6:24-cv-01903-ACC-UAM, order on motions to dismiss (M.D. Fla. May 21, 2025). cdn.arstechnica.net (order).
  26. Read, M. (2026, April 19). My Boss Loves ChatGPT. Must I Fake Loving It Too? The New York Times. nytimes.com.
