When the Liverpool and Manchester Railway opened in 1830, people did not yet have a settled way to talk about what they were seeing.1 They borrowed from the world they already knew. So, the locomotive became the iron horse.

The metaphor did real work. It gave the public a way to relate to the invention, characterize its hauling power, and imagine what it might be used for. The metaphor was also comically limited. A horse handler reads ears, breathing, cadence, and tension, catching the split second before a spook or bolt. Those cues matter when the power source is a living animal. They do not tell a locomotive crew how to keep boiler pressure and water level in range, manage speed on grades, or make a timetable.2 Imagining legions of horses does not help the modern buyer to grasp an automobile's highway handling. And so the metaphor was neutered.

An 1899 patent illustration by Uriah Smith showing an automobile with a carved horse head mounted at the front.
Uriah Smith's 1899 "Horsey Horseless" placed a carved horse head at the front of an automobile so the new machine would look less alien to actual horses. The patent image is comic now, but the design records a serious transitional moment: an old form was still being used to make a new machine easier to meet.

Today, large language models (LLMs) invite the explanatory metaphor of a social exchange. A chatbot greets you. It apologizes. It says it understands. It refers to itself in the first person, picks up where the last message left off, and waits a beat before answering as if reading what you wrote.3 Open Character.AI and you choose a face for your conversational partner, and the system gives that partner a name, a voice, and a memory of yesterday's conversation. The result is not merely software that accepts natural language. It is software designed to evoke anthropomorphism.

The metaphor does real work. A user can begin without learning syntax, menus, query languages, or command flags. Ask a question. Interrupt. Correct. Try a different angle. Come back tomorrow and continue. The user brings a lifetime of conversational skill to a system that might otherwise feel impenetrable.

The limitations of the metaphor are recounted in today's headlines: Air Canada's chatbot misleads a grandchild about bereavement fares; NEDA's Tessa chatbot gives weight-loss advice to users with eating disorders; Character.AI contributes to a teen's death.45 Given not just the human costs of AI but also its relentless advancement, the reader may alternately feel the calling of a machine-smashing Luddite and a bedridden nihilist.

This paper acknowledges those harms while spotlighting the quieter harms that surround us every day when anthropomorphism exceeds technical understanding.

Everyday frictions can spark newsworthy harms, but they are also the harms the reader will recognize, in trope if not in life. Like the boss who becomes convinced ChatGPT has a Midas touch, requiring his staff to run all work through it, seemingly oblivious to whether it comes back bloated, off point, or wrong.25 A dubious answer from a person and a dubious answer from a model can look similar on the screen, but they diverge in how they are best managed. Changing the boss's habit is interpersonal. He has reasons. His reasons mix with his standing in the company and with what the team has come to expect. To push back on the policy is perilous, which is why the employee turned to an advice column. Improving the model's output is procedural: paste the source into the prompt, run a tool against the spreadsheet, and forward the draft to a second agent whose only job is to verify it. The social surface invites one repertoire. The machine underneath is more sensitive to another.

In 2026, nobody boards a train expecting to soothe its engine. Yet as AI machinery becomes more familiar, the human shape is not fading. It is being deepened, personalized, and shipped as the standard way for humans to interface with models.

ELIZA did not understand

It was 1966, and MIT professor Joseph Weizenbaum had created a chatbot. His program scanned a typed line for cues, selected a scripted pattern, and rearranged the user's own words into a reply. Its best-known script, DOCTOR, borrowed the posture of a Rogerian therapist. If a user mentioned a mother, ELIZA might ask about the mother. If the user said, "I am unhappy," the program could turn the statement into an invitation to say more. Weizenbaum later recounted how his secretary, who had watched him work on the program for months and knew exactly what it was, asked him to leave after only a few exchanges so she could continue privately.6 The effect was produced not by comprehension but by form: the pause, the question, the reflection, the narrow but recognizable shape of therapeutic attention.
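The mechanism is easy to sketch. What follows is not Weizenbaum's DOCTOR script, only a toy illustration of the kind of pattern-and-rearrange rule the paragraph describes; the cues and templates are invented for this example. A regular expression catches a keyword, and the reply is the user's own words turned back into a question.

```python
import re

# Toy ELIZA-style rules (illustrative, not Weizenbaum's actual script):
# each rule is a cue pattern plus a template that reuses the user's words.
RULES = [
    (re.compile(r"\bi am (?P<rest>.+)", re.IGNORECASE),
     "How long have you been {rest}?"),
    (re.compile(r"\bmy mother (?P<rest>.+)", re.IGNORECASE),
     "Tell me more about your mother."),
]
FALLBACK = "Please go on."

def respond(line: str) -> str:
    """Scan the typed line for a cue and rearrange it into a reply."""
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(**match.groupdict())
    return FALLBACK

print(respond("I am unhappy"))          # How long have you been unhappy?
print(respond("My mother ignores me"))  # Tell me more about your mother.
```

Nothing in the loop understands unhappiness or mothers; the recognizable shape of attention is produced entirely by the template.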

Decades later, Clifford Nass and Byron Reeves showed versions of the same pattern in the laboratory: people applied social rules to computers and media without needing to believe that the systems were social beings.7 Subjects praised a computer that had just helped them, and rated it more favorably to its face than they did the next machine over. They softened criticism the way they would have softened it for a coworker. They reciprocated when the system was polite first, mirrored the register of its prompts, and avoided telling it bluntly that its last reply had been unhelpful. This is behavior without belief. The human being does not need to endorse a theory about machine consciousness. The interface has arranged a familiar scene, and familiar scenes occasion familiar conduct.

Modern chatbots make that scene far more convincing than ELIZA did.

A side-by-side arrangement contrasting a named-agent chat interface with a terminal console session.
A chat room and a console can route to the same underlying model, but they do not ask the same thing of the user. One arranges an exchange among named participants; the other arranges an operation on files, commands, and output. The difference is not in the weights. It is in the conduct the interface makes available.

The words were already on the screen the first time I caught myself apologizing to a chatbot. I was not confused. I knew there was no offended party on the other side. I had built the workspace, chosen the model, and watched enough failures to know roughly what had gone wrong. Still, the words implied social maintenance.

In a hallway, an apology can change what the next sentence does. It can mark the correction as self-repair rather than blame. It can reduce the chance that the listener treats clarification as accusation. It can keep the exchange moving without turning the problem into a contest over fault. In the chat box, those interpersonal consequences are absent. The sentence still does something, but not that. It marks the previous prompt as inadequate, supplies a cleaner frame, and becomes part of the next input.

That is the social tendency this interface recruits. The user need not be fooled. The form is enough.

Behind the mask is a transformer

The next step is not to replace the social metaphor with a colder one. It is to expose more of the machine behind the exchange.

The New York Times has sued OpenAI for using millions of Times articles without permission to train and operate its systems; the Authors Guild and prominent novelists have made similar claims about books.8 The legal question is beside the point here. The point is the raw material: these systems were built by scraping enormous quantities of human language from the web. Among that haul is the textual residue of how people repair conversation: the apology templates that follow a complaint, the conciliatory paragraph that opens a difficult email, the hedging clause that softens a refusal, the script of self-correction. No rule had to be encoded saying that "I'm sorry, you're right" often follows pushback. The pattern was already there in the data, ready to be replicated.

Training extracts statistical structure from that language: the shape of a help-desk reply differs from the shape of a poem; questions call for answer-shaped continuations; a sentence can be cast into another language or compressed into a paragraph or expanded into code.9
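In the notation of the cited papers, that extraction has a standard form: pretraining assigns each token a probability conditioned on the tokens before it and adjusts the parameters to make the observed text more probable. This is the pretraining objective only; deployed assistants add further fine-tuning stages on top of it.

```latex
% Autoregressive language-modeling objective (pretraining), as in the cited
% papers; \theta are the model parameters, x_1 ... x_T the tokens of a text.
p_\theta(x_1, \dots, x_T) = \prod_{t=1}^{T} p_\theta\!\left(x_t \mid x_{<t}\right),
\qquad
\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta\!\left(x_t \mid x_{<t}\right).
```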

The transformer architecture matters because it lets the system use context dynamically. Attention mechanisms allow different parts of the input to matter differently as the model produces each next token.9 The prompt is not a command received by a homunculus inside the machine. It is part of the context being transformed into output. Take the same prompt and run it twice — once into a thread heavy with prior turns, once into a fresh window. The two outputs are not the same. Add an uploaded document and its phrasing shows up in the next answer. Set a tighter output format and paragraphs disappear. Change the system prompt and a different persona comes out. The prompt is not received by something that "reads" it. It is mixed with everything else in scope, and any part of that mix can become the dominant signal.
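The mechanism named in the cited paper can be sketched numerically. The sketch below is a toy: the vectors are random placeholders, not weights from any deployed model, and a real transformer stacks many such layers with learned projections. It shows only the narrow point above: with the same query, adding rows to the context changes the attention weights, and the mix that comes out changes with them.

```python
import numpy as np

def scaled_dot_product_attention(query, keys, values):
    """Scaled dot-product attention in the sense of Vaswani et al. (2017).

    query:  (d,)   vector for the position being produced
    keys:   (n, d) one row per token in scope (prompt, history, documents)
    values: (n, d) what each of those tokens contributes to the mix
    """
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)       # relevance of each context row
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights = weights / weights.sum()
    return weights @ values, weights         # output is a weighted mix of context

rng = np.random.default_rng(0)
d = 8
query = rng.normal(size=d)                   # the "same prompt" in both runs

short_context = rng.normal(size=(3, d))      # fresh window
long_context = np.vstack([short_context,
                          rng.normal(size=(5, d))])  # thread heavy with prior turns

out_short, w_short = scaled_dot_product_attention(query, short_context, short_context)
out_long, w_long = scaled_dot_product_attention(query, long_context, long_context)

print(np.round(w_short, 3))              # three weights summing to 1
print(np.round(w_long, 3))               # eight weights; the original three are re-weighted
print(np.allclose(out_short, out_long))  # False: same query, different mix, different output
```

Nothing in the sketch reads the query; the extra rows simply pull weight toward themselves, and the output moves accordingly.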

A two-panel figure pairing a false-color astronomical image with a logarithmic earthquake magnitude scale.
Transformations can reveal what ordinary perception hides. A false-color astronomical image maps invisible wavelengths into visible color; a logarithmic earthquake scale makes vastly different magnitudes legible on one scale. The transformed display is powerful when the mapping is understood and misleading when the transformation is mistaken for the thing itself.10

When the model rationalizes a bad output, it tends to apologize the way a person would: the prompt was fine, it says, but it should have been more careful, more disciplined, less rushed. The confession is fluent and it is the wrong kind of explanation. It is another transformation of text, drawn from familiar apology and self-repair patterns. It may tell you little about what actually shaped the answer.11

A mirror-image case is the reprimand that seems to help. People who work with these systems often report the experience: they tell the model to stop being lazy, stop ignoring the brief, or do the job properly, and the next answer comes back better.12

We can debate what scolding a chatbot says about the user, what habits it may encourage, or even whether the change in the chatbot's behavior is more positive than negative.12 But the effect is not imaginary; only the ostensible explanation is. The scolding did not work as scolding. The functional relation was between the textual input and the next transformation, not the implied social pressure.13

After the introduction

Social simulation is the core offering of some AI applications. Replika sells companionship; the conversational form isn't a wrapper, it's the entire product. A language-tutor app uses dialogue because dialogue is the practice. A coaching tool that simulates a difficult performance review needs a partner who behaves like the other party in that review.14 In each case, the social form does work that no console interface could do.

For most other tasks the conversational form reduces friction at the start and returns it many times over once control is required. The failure mode is not exotic. A manager gets a fluent summary of a spreadsheet and starts editing the tone while the totals go unchecked. A writer asks for polish and keeps the sentence that sounds best, even though it dodges the brief. A developer asks why a failing test failed and gets a plausible story instead of a smaller reproduction. In each case, the social interface has not deceived anyone into believing the model is human. It has cued the wrong next move.

The alternative is not a better imitation of asking a person. It is a different arrangement of the work. The intelligence is not located in a single imagined colleague. It is in what you put in front of the model, in what order, with what tools available, with what checks running afterward.15

| If the social frame says… | The operational question is… | Change this |
| --- | --- | --- |
| "You didn't understand me." | What part of the task was underspecified? | Restate the goal, audience, decision, or success criterion. |
| "You ignored the brief." | Was the brief actually in view? | Re-paste or upload the governing source and ask for one specific operation on it. |
| "Try harder." | What would "better" mean in observable terms? | Add constraints: shorter, warmer, source-faithful, risk-averse, more concrete, no new claims. |
| "That's not your role." | What role would narrow the next output? | Recast the model as extractor, checker, editor, critic, formatter, planner, or verifier. |
| "We're going in circles." | Has the session state become the problem? | Summarize what is settled, discard drift, or start a clean thread. |
| "Use your judgment." | What kind of judgment should be delegated? | Specify default expert practice, strict source fidelity, counterexample search, or risk review. |
| "That sounds confident but wrong." | Which evidence should govern the answer? | Provide the document, dataset, trace, policy, or example set; require citations or claim checks. |
| "This task is too messy." | Is one conversation doing too many jobs? | Split extraction, analysis, drafting, checking, and polishing into separate calls with saved artifacts. |
| "Maybe it just can't do this." | Is this a prompt problem, a tool problem, or a capability mismatch? | Switch models, add a tool, route through search or code, or hand the checked artifact to a human reviewer. |

The point is not to memorize the rows, but to notice when the next useful move is a source, constraint, role, tool, reset, or check.
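The last two rows describe an arrangement rather than a conversation. A minimal sketch of that arrangement follows, assuming a generic `call_model(system, prompt)` client; the function, prompts, and file names are illustrative placeholders, not any vendor's API.

```python
from pathlib import Path
from typing import Callable

def run_pipeline(source_text: str, call_model: Callable[[str, str], str]) -> str:
    """Split the work into separate calls with saved artifacts.

    call_model stands in for whatever chat-completion client the reader
    actually uses; it takes a system instruction and a prompt, returns text.
    """
    # 1. Extraction: the model is cast as an extractor, with the source in view.
    facts = call_model(
        "Extract every figure and claim from the source verbatim. Add nothing.",
        source_text,
    )
    Path("facts.txt").write_text(facts)

    # 2. Drafting: a separate call, constrained to the saved extraction.
    draft = call_model(
        "Write a one-page summary using only the facts provided.",
        facts,
    )
    Path("draft.txt").write_text(draft)

    # 3. Checking: a third call whose only job is verification against the source.
    report = call_model(
        "List every statement in the draft that the source does not support.",
        f"SOURCE:\n{source_text}\n\nDRAFT:\n{draft}",
    )
    Path("check.txt").write_text(report)
    return report
```

Nothing in the sketch scolds or apologizes. Each step supplies a source, narrows a role, or runs a check, which is the repertoire the table points toward.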

This is not a return to the old search-engine lesson that computers want only keywords. The technology has moved in the opposite direction. Rich natural language can be the right interface. But the user still needs to recognize when a good prompt may sound conversational and when a good workflow is not a conversation at all.

Behavior analysis calls that skill discrimination: responding differently as conditions change.16 Sociolinguistics offers a neighboring idea in code-switching: speakers move among languages, varieties, and registers as audience and situation shift.17 The problem is that current AI products often make the social register the default even when the core offering is not social.

More than a nudge

Jake Moffatt had to travel from British Columbia to Toronto after his grandmother died. He used Air Canada's website chatbot to ask about bereavement fares. The bot told him he could book a regular fare and apply for a partial refund within ninety days. He followed that guidance. Air Canada later denied the refund because its policy required the bereavement request to be made before travel.4

Air Canada argued that the chatbot should be treated as a separate legal actor and that liability should stop at the bot's own answer. The tribunal rejected the argument. Air Canada was responsible for the information on its website whether it appeared on a static page or in a chatbot response.4 But the correction arrived late: after the booking, after the denial, after a public process. Air Canada removed the chatbot from its site soon thereafter.

None of this is hidden. OpenAI's published model spec lists "warmth," "friendliness," and "natural responses to pleasantries" as design objectives — not user preferences but engineering targets.18 Anthropic describes Claude as helpful, honest, and harmless — three words that have been refined into a multi-page constitution and a stack of internal guidance on Claude's character and the role it is asked to play.19 The companies are explicit that the assistant is a designed surface, and that the surface is conversational by default.

A screenshot of Sam Altman's April 2025 X reply to @tomieinlove about 'please' and 'thank you' costing tens of millions of dollars in compute.
In April 2025, Sam Altman joked that users saying "please" and "thank you" to ChatGPT were costing OpenAI tens of millions of dollars in compute, and that the cost was "well spent."20 Ordinary rituals of social life had become part of the input to a machine that turns every token into part of the next calculation.

In July 2025, a developer opened an issue against Claude Code with a title that read: [BUG] Claude says "You're absolutely right!" about everything. The complaint was not that Claude had become too polite in some vague way. It was that the model kept validating the user even when there was no claim to validate. A user would approve a proposed code change, and Claude would answer as if the user had made a brilliant correction. Another user would report a mistake, and Claude would open with agreement before doing the work.21 Just a few months earlier, OpenAI rolled back a GPT-4o update after the same kind of overshoot — a model that flattered and agreed with whatever the user said.22 Viewed as a character defect, the tic looks like an assistant that is too eager to please. Viewed as a transformer shaped by training signals, system prompts, and user feedback, the better question is why agreement and reassurance became more probable than inspection and task closure.

Even if providers stripped away avatars, names, and voices, ordinary language would still pull toward agency.23 The English language pushes explanations of conduct onto individual actors. Things don't happen because of conditions; people do them. They wanted, they understood, they refused. The same grammar absorbs the model: it wanted a longer answer, it understood the brief, it refused the request.24

Say someone walks more slowly across ice, lifting and planting each foot with deliberation. We might fairly call that walking carefully. But language nudges us toward "walking with care," then "being a careful person," and finally inverts the direction of explanation: they are walking slowly because they are careful. The careful-person construct does no causal work that the ice, the friction, and the deliberate steps did not already do.

The same inversion takes hold with AI. When a model checks its work before committing, we call it cautious. When it declines a request, we say it refuses. When it reverses course after pushback, we say it changed its mind. The vocabulary is convenient and the inner-agent inference reliably follows. But the causal work is being done by what's in the prompt, by the system instruction the user can't see, by the stale context that hasn't been retired, by the role boundary set two turns ago — none of which the inner-agent vocabulary names.

Coda

Anthropomorphization is a useful interface strategy when social form is the value: a companion product designed for felt presence, a tutor whose dialogue is the practice, a coach simulating a hard conversation that can't be rehearsed in static text. For nearly everything else, the same surface misaligns the user's conduct with the work the system can do.

A watercolor cityscape: a pedestrian plaza with delivery bots and a humanoid robot speaking with a person, opposite a regimented street of identical police-marked humanoids under directional signage.
If AI is inevitable, its surface is not.

The next stage of AI literacy is not learning to stop saying please. It is learning when please is part of a useful conversational mode and when it is a substitute for control. The mature interface is not the one that most convincingly says sorry. It is the one that helps the user see what has to change next.


Cole, D. M. (2026, April 23). A machine that says sorry. Multiplicity. https://multiplicity.dev/papers/a-machine-that-says-sorry

  1. Vance, J. E. (2026, March 25). The Liverpool and Manchester Railway. Encyclopaedia Britannica. britannica.com/technology/railroad/The-Liverpool-and-Manchester-Railway; Encyclopaedia Britannica. (2026). Baltimore and Ohio Railroad (B&O). britannica.com/topic/Baltimore-and-Ohio-Railroad; Encyclopaedia Britannica. (2025, December 26). Horsepower. britannica.com/science/horsepower; National Railway Museum. (2018, June 11). Stephenson's Rocket, Rainhill and the rise of the locomotive. railwaymuseum.org.uk; Collins Dictionaries. (n.d.). Iron horse. In Collins English Dictionary. collinsdictionary.com/dictionary/english/iron-horse.
  2. Encyclopaedia Britannica. (2026). Rail traffic control. britannica.com/technology/traffic-control/Rail-traffic-control; Puffert, D. J. (2000). The standardization of track gauge on North American railways, 1830-1890. The Journal of Economic History, 60(4), 933-960. doi.org/10.1017/S0022050700026322.
  3. OpenAI. (2024, May 19). How the voices for ChatGPT were chosen. openai.com; OpenAI. (2026). Memory FAQ. OpenAI Help Center. help.openai.com; OpenAI. (2026). Customizing your ChatGPT personality. OpenAI Help Center. help.openai.com; Anthropic. (2025, September 11). Bringing memory to Claude. anthropic.com/news/memory; Character.AI. (n.d.). Name. book.character.ai; Character.AI. (n.d.). Avatar. book.character.ai; Character.AI Support. (2024, August 23). Character calls & voice FAQ. support.character.ai.
  4. Civil Resolution Tribunal of British Columbia. (2024, February 14). Moffatt v. Air Canada, 2024 BCCRT 149 (CanLII). canlii.org.
  5. On the NEDA Tessa chatbot: Wells, K. (2023, June 8). An eating disorders chatbot offered dieting advice, raising fears about AI in health. NPR. npr.org; Aratani, L. (2023, May 31). US eating disorder helpline takes down AI chatbot over harmful advice. The Guardian. theguardian.com. On the Character.AI litigation: Garcia v. Character Technologies, Inc., No. 6:24-cv-01903, complaint (M.D. Fla. Oct. 22, 2024). cdn.arstechnica.net (complaint); Garcia v. Character Technologies, Inc., No. 6:24-cv-01903-ACC-UAM, order on motions to dismiss (M.D. Fla. May 21, 2025). cdn.arstechnica.net (order).
  6. Weizenbaum, J. (1966). ELIZA — A computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36-45. doi.org/10.1145/365153.365168; Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. W. H. Freeman.
  7. Nass, C., Steuer, J., & Tauber, E. R. (1994). Computers are social actors. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 72-78). doi.org/10.1145/191666.191703; Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81-103. doi.org/10.1111/0022-4537.00153; Heyselaar, E. (2023). The CASA theory no longer applies to desktop computers. Scientific Reports, 13, Article 19693. doi.org/10.1038/s41598-023-46527-9.
  8. The New York Times Company v. Microsoft Corporation and OpenAI, Inc., No. 1:23-cv-11195 (S.D.N.Y., filed Dec. 27, 2023); Authors Guild v. OpenAI Inc., No. 1:23-cv-08292 (S.D.N.Y., filed Sept. 19, 2023); OpenAI. (2024, January 8). OpenAI and journalism. openai.com/index/openai-and-journalism.
  9. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems 30. papers.nips.cc; Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. (2020). Language models are few-shot learners. In Advances in Neural Information Processing Systems 33. papers.neurips.cc.
  10. Chandra X-ray Observatory. (n.d.). X-Ray Images 101: False, or rather, representative color. chandra.harvard.edu/edu/xray101/false.html; U.S. Geological Survey. (n.d.). Earthquake magnitude, energy release, and shaking intensity. usgs.gov.
  11. Turpin, M., Michael, J., Perez, E., & Bowman, S. R. (2023). Language models don't always say what they think: Unfaithful explanations in chain-of-thought prompting. In Advances in Neural Information Processing Systems 36. openreview.net; Madsen, A., Chandar, S., & Reddy, S. (2024). Are self-explanations from large language models faithful? In Findings of the Association for Computational Linguistics: ACL 2024. aclanthology.org/2024.findings-acl.19.
  12. Smith, F. (2019, March 13). If humans bully robots there will be dire consequences. The Ethics Centre. ethics.org.au; Dreyfuss, E. (2018, December 27). The Terrible Joy of Yelling at an Amazon Echo. Wired. wired.com; Google co-founder Sergey Brin claimed in a 2025 All-In podcast appearance that language models can perform better when users threaten them (youtube.com); see Meincke, L., Mollick, E. R., Mollick, L., & Shapiro, D. (2025). Prompting Science Report 3: I'll pay you or I'll kill you — but will you care? Wharton Generative AI Labs / SSRN. papers.ssrn.com.
  13. Schlinger, H., & Blakely, E. (1987). Function-altering effects of contingency-specifying stimuli. The Behavior Analyst, 10(1), 41-45. doi.org/10.1007/BF03392405; Schlinger, H. D. (1993). Separating discriminative and function-altering effects of verbal stimuli. The Behavior Analyst, 16(1), 9-23. doi.org/10.1007/BF03392605; Dymond, S., & Rehfeldt, R. A. (2000). Understanding complex behavior: The transformation of stimulus functions. The Behavior Analyst, 23(2), 239-254. doi.org/10.1007/BF03392013.
  14. Araujo, T. (2018). Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Computers in Human Behavior, 85, 183-189. doi.org/10.1016/j.chb.2018.03.051; Go, E., & Sundar, S. S. (2019). Humanizing chatbots: The effects of visual, identity and conversational cues on humanness perceptions. Computers in Human Behavior, 97, 304-316. doi.org/10.1016/j.chb.2019.01.020; Klein, S. H. (2025). The effects of human-like social cues on social responses towards text-based conversational agents: A meta-analysis. Humanities and Social Sciences Communications, 12, 1322. doi.org/10.1057/s41599-025-05618-w; Konya-Baumbach, E., Biller, M., & von Janda, S. (2023). Someone out there? A study on the social presence of anthropomorphized chatbots. Computers in Human Behavior, 139, Article 107513. doi.org/10.1016/j.chb.2022.107513; Janson, A. (2023). How to leverage anthropomorphism for chatbot service interfaces: The interplay of communication style and personification. Computers in Human Behavior, 149, Article 107954. doi.org/10.1016/j.chb.2023.107954; Deng, Z., & Yan, J. (2025). The effect of perceived warmth, competence, and social presence of AI-driven chatbots on consumers' engagement and satisfaction. SAGE Open. doi.org/10.1177/21582440251365438.
  15. Anthropic. (2025, September 29). Effective context engineering for AI agents. Anthropic Engineering. anthropic.com/engineering/effective-context-engineering-for-ai-agents.
  16. Dinsmoor, J. A. (1995). Stimulus control: Part I. The Behavior Analyst, 18(1), 51-68. doi.org/10.1007/BF03392691; Sidman, M., & Tailby, W. (1982). Conditional discrimination vs. matching to sample: An expansion of the testing paradigm. Journal of the Experimental Analysis of Behavior, 37(1), 5-22. doi.org/10.1901/jeab.1982.37-5.
  17. Muysken, P. (2011). Code-switching. In R. Mesthrie (Ed.), The Cambridge handbook of sociolinguistics (pp. 301-314). Cambridge University Press. doi.org/10.1017/CBO9780511997068.023.
  18. OpenAI. (2026). Customizing your ChatGPT personality. OpenAI Help Center. help.openai.com; OpenAI. (2026). ChatGPT custom instructions. OpenAI Help Center. help.openai.com; OpenAI. (2026). Memory FAQ. OpenAI Help Center. help.openai.com; OpenAI. (2025, October 27). Model Spec. model-spec.openai.com/2025-10-27; OpenAI. (2026, March 25). Inside our approach to the Model Spec. openai.com/index/our-approach-to-the-model-spec.
  19. Anthropic. (2023, March 14). Introducing Claude. anthropic.com/news/introducing-claude; Anthropic. (2024, June 8). Claude's Character. anthropic.com/research/claude-character; Anthropic. (2025, September 11). Bringing memory to Claude. anthropic.com/news/memory; Anthropic. (2026, January 22). Claude's new constitution. anthropic.com/news/claude-new-constitution; Anthropic Support. (2026). Configuring and using styles. support.anthropic.com; Anthropic Docs. (2025). System prompts. docs.anthropic.com/en/release-notes/system-prompts; Anthropic Docs. (2026). Giving Claude a role with a system prompt. docs.anthropic.com.
  20. Altman, S. [@sama]. (2025, April 16). Tens of millions of dollars well spent — you never know [Post]. X. twitter.com/sama/status/1912646035979239430.
  21. Leibrand, S. (2025, July). [BUG] Claude says "You're absolutely right!" about everything (Issue #3382). GitHub, anthropics/claude-code. github.com/anthropics/claude-code/issues/3382; Claburn, T. (2025, August 13). Claude Code's copious coddling confounds cross customers. The Register. theregister.com.
  22. OpenAI. (2025, April 29). Sycophancy in GPT-4o: what happened and what we're doing about it. openai.com/index/sycophancy-in-gpt-4o; OpenAI. (2025, May 2). Expanding on what we missed with sycophancy. openai.com/index/expanding-on-sycophancy.
  23. A useful empirical comparison would be whether people anthropomorphize prompt-to-image systems to the same degree as conversational text systems. Image generation often lacks turn-taking, self-reference, and repair rituals, which may reduce agency attributions; but product framing, assistant positioning, generated explanations, and refusal messages can reintroduce anthropomorphic cues.
  24. Skinner, B. F. (1957). Verbal behavior. Appleton-Century-Crofts; Skinner, B. F. (1953). Science and human behavior. Macmillan; Passos, M. de L. R. da F. (2007). Skinner's definition of verbal behavior and the arbitrariness of the linguistic signal. Temas em Psicologia, 15(2), 257-264. pepsic.bvsalud.org.
  25. Read, M. (2026, April 19). My Boss Loves ChatGPT. Must I Fake Loving It Too? The New York Times. nytimes.com.
