Vol. I · No. 4 · Front Page · Updated May 2, 2026

Multiplicity

Writing, building, and consulting on complex systems through behavior analysis, artificial intelligence, experimental and research methods, and executive leadership.

By David M. Cole
Editorial · Operator notes · Practical AI · Subscribe via RSS

AI in Practice · Lead Story

A machine that says sorry

Anthropomorphic AI does not merely make software feel human; it changes what users do.

An illustrated figure at a chat interface apologizing, rendered in a warm editorial style.
Illustration for “A machine that says sorry.” The interface evokes apology, scolding, and trust in the model’s own confessions.

Most arguments about anthropomorphic AI circle the wrong question: whether users believe the machine is conscious, caring, malicious, or sorry. The more durable effect is behavioral. A chat interface arranges a social scene, and familiar scenes occasion familiar conduct before anyone has to endorse a theory of machine minds.

That social surface is useful at the start. It lets people ask, interrupt, correct, soften, and continue without learning syntax or operating a console. But when the work requires control, the same repertoire starts to mislead: editing tone while the source is missing, treating an apology as a diagnosis, scolding the model instead of changing the conditions that shape the next answer.

Sometimes the cost is dramatic: a chatbot gives harmful advice, a company tries to disown the answer on its own website, a companion interface becomes part of a crisis. More often it is ordinary and cumulative. Work slows down. Errors get personalized instead of traced. Accountability drifts from the organization that designed the system to the fictional character the interface has staged.

The point is not to strip language models of every human cue. It is to notice when the human shape has become the wrong control surface. AI may keep arriving; the interface does not have to keep pretending the machine is someone to negotiate with.

AI in Practice · Continued

Three from the desk · Behavior, decisions, persistence

Paper

A child and therapist interacting in a therapy room with sensing displays nearby.

The field that taught machines to learn

Modern AI inherited reinforcement learning from a line running through Thorndike, Skinner, and a century of studying behavior as a function of environment. Now AI is rebuilding measurement in clinical settings.

Preview

A retro Oregon Trail-style scene: a wagon leader asks 'We need to cross. What's the best way?' and an AI replies 'Stay on this side. Crossing is risky with no sure benefit.'

The decision before the decision

AI decision debates usually ask who clicked “approve.” But the direction was shaped before options were even presented — when a system turned a messy file into a frame, a ranking, a draft reason.

Preview

A hand flipping through a paper flip book where a drawn robot appears to move across successive pages.

Context and persistence

The AI you used on Tuesday does not remember you on Thursday. What we call persistence is really scheduled context loading — files, notes, and instructions re-fed each session.
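The "scheduled context loading" the teaser describes can be sketched in a few lines. This is an illustrative assumption, not the publication's actual tooling: the file names and the `build_context` helper are hypothetical, and the point is only that each session's prompt is rebuilt from stored material rather than remembered by the model.

```python
from pathlib import Path

# Hypothetical files re-fed at the start of every session; nothing
# persists inside the model itself between Tuesday and Thursday.
SESSION_FILES = ["profile.md", "project_notes.md", "instructions.md"]

def build_context(workdir: Path, user_message: str) -> str:
    """Rebuild the session prompt from saved files plus the new message."""
    parts = []
    for name in SESSION_FILES:
        f = workdir / name
        if f.exists():  # skip files that were never written
            parts.append(f"## {name}\n{f.read_text()}")
    parts.append(f"## Current message\n{user_message}")
    return "\n\n".join(parts)
```

On this view, "memory" is just which files the loader happens to read before the conversation starts.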

Neuroscience · Motor control

Motor preparation for compensatory reach-to-grasp responses

Measuring brain activity while participants are unexpectedly released from a support cable — testing how the nervous system prepares protective responses before a threat actually arrives.

Bolton, D. A. E., Cole, D. M., Butler, B., Mansour, M., Schwartz, S., McDannald, D. W., & Rydalch, G. (2019). Cortex, 117, 135–146. Article · Data set · DOI

Neuroscience · Balance

Staying upright by shutting down?

Evidence the brain globally suppresses the motor system during reactive balance recovery — shutting down competing actions so the protective response can execute cleanly.

Goode, C., Cole, D. M., & Bolton, D. A. E. (2019). Gait & Posture, 70, 260–263. Article · DOI

From the Wires

Talks · Symposia · Software

Talks

David Cole presenting at the Best of ABA Conference in Cagnes-sur-Mer.
David Cole presenting at the Best of ABA Conference, Cagnes-sur-Mer, 2024.

Early presentations on AI in ABA

Soon after ChatGPT’s release, Cole presented on the opportunities and ethical challenges arriving with the new tools. Talk at Best of ABA, Cagnes-sur-Mer, 2024; panel at EABA, Brno, 2024.

Elsewhere in this publication

Section

About & CV

A short professional narrative and the structured public record — degrees, positions, papers, talks.

Read

Section

Projects

Software, tools, and ongoing work — including Formative Grapher and ClawSuite Relay.

Browse

Section

Build

Custom behavioral engineering and AI systems for organizations that want to own their measurement and decision infrastructure.

Latest

Offer

Executive AI

A closed-cohort intensive for leaders implementing AI in real operating contexts.

Programme