Vol. I · No. 1 Front Page Updated May 9, 2026

Multiplicity

Writing, building, and consulting on complex systems through behavior analysis, artificial intelligence, experimental and research methods, and executive leadership.

By David M. Cole Writing · Building · Consulting

AI in practice · Lead Story

A machine that says sorry

Anthropomorphic AI does not merely make software feel human; it changes what users do.

Most arguments about anthropomorphic AI circle the wrong question: whether users believe the machine is conscious, caring, malicious, or sorry. The more durable effect is behavioral. A chat interface arranges a social scene, and familiar scenes occasion familiar conduct before anyone has to endorse a theory of machine minds.

That social surface is useful at the start. It lets people ask, interrupt, correct, soften, and continue without learning syntax or operating a console. But when the work requires control, the same repertoire starts to mislead: editing tone while the source is missing, treating an apology as a diagnosis, scolding the model instead of changing the conditions that shape the next answer.

Sometimes the cost is dramatic: a chatbot gives harmful advice, a company tries to disown the answer on its own website, a companion interface becomes part of a crisis.

An illustrated figure at a chat interface apologizing, rendered in a warm editorial style.
The social surface invites one repertoire. The machine underneath is more sensitive to another.

More often it is ordinary and cumulative. Work slows down. Errors get personalized instead of traced. Accountability drifts from the organization that designed the system to the fictional character the interface has staged.

The point is not to strip language models of every human cue. It is to notice when the human shape has become the wrong control surface. AI may keep arriving; the interface does not have to keep pretending the machine is someone to negotiate with.

AI in practice · Continued

Three from the desk · Behavior, strategy, persistence

Paper

A child and therapist interacting in a therapy room with sensing displays nearby.

The field that taught machines to learn

Modern AI inherited reinforcement learning from a line running through Thorndike, Skinner, and a century of studying behavior as a function of environment. Now AI is rebuilding measurement in clinical settings.

Preview

A retro Oregon Trail-style scene where an AI assistant reframes a wagon leader's decision about crossing a river.

The decision before the decision

AI decision debates usually ask who clicked approve. The direction was shaped earlier — when a system turned a messy file into a frame, a ranking, a draft reason.

From the Field · Talks · Project · AI in applied work
Illustrated workspace with phase-change behavior charts and the title Formative Grapher.

Project · Methods

Formative Grapher to return as a web app

A web-app refactor of Formative Grapher is in early development. The new version drops the Excel dependency, carries the time-series graphing primitives to the browser, and keeps the original’s bias toward accuracy and speed.

Clinical behavior analysts are expected not only to graph data continuously but also to follow particular conventions that are laborious to implement with commonly available software. The 2015 original, developed with Dr. Benjamin Witts for my master’s thesis, addressed that gap with a free APA-style Excel template. While it still finds users a decade later, it is no longer maintained.

David Cole presenting at the Best of ABA Conference in Cagnes-sur-Mer.

Talk

Inevitable: Opportunities and ethical challenges of artificial intelligence in ABA

— As ChatGPT was still percolating into public discourse, the talk surveyed where AI tools can open up everyday behavior-analytic work and where the new ethical hazards land: supervision, documentation, and clinical judgment among them. Originally given at the Best of ABA Conference, with a follow-up panel scheduled at the European Association for Behaviour Analysis in Brno.

From the Lab · Older work · Science, decision-making, neuroscience

Symposium

Adding genetically modified mice to the armamentarium of behavior analysis

— Rats and pigeons still dominate as animal models in the experimental analysis of behavior. In this symposium on alternative model organisms — from alcoholic bees to robotic zebrafish — I discussed the tradeoffs of mice, which learn more slowly than rats but offer more genetic engineering possibilities.

— 44th Annual Convention of the Association for Behavior Analysis International

Neuroscience · Motor control

Motor preparation for compensatory reach-to-grasp responses

A handle on a wall is more than background scenery. We unexpectedly released a cable holding people in a forward lean. Using transcranial magnetic stimulation, we demonstrated that merely seeing the handle was sufficient to prepare the motor system: participants later reached for the handle with more specificity than pure reflex can explain and more speed than pure volition can explain.

— Cortex 117, 135–146

Neuroscience · Balance

Staying upright by shutting down?

Falls are the leading cause of accidental death among older adults. The usual suspect is frailty, but greater culpability lies with the nervous system. Specifically and paradoxically, the culprit may be less the failure to rapidly fire a recovery action and more the failure to inhibit competing, incompatible actions in time.

— Gait & Posture 70, 260–263

Behavior · Decision-making

Assessing susceptibility of a temporal discounting task to faking

Delay discounting describes how people choose between smaller sooner rewards and larger delayed ones. It can also be faked. Given a motivational prompt and no other insight into common laboratory assessments, participants systematically manipulated their results. Translational researchers and test designers should take note.

— Journal of Clinical Psychology 75(10), 1959–1974
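The choice pattern the entry describes is commonly summarized with a hyperbolic discount function, V = A / (1 + kD). As a hedged sketch (the standard textbook model, not necessarily the exact task or parameters used in the paper):

```python
def discounted_value(amount: float, delay: float, k: float) -> float:
    """Subjective value of a delayed reward under the standard
    hyperbolic model, V = A / (1 + k * D). The parameter names
    here are illustrative, not taken from the paper."""
    return amount / (1.0 + k * delay)

# A larger k discounts delayed rewards more steeply:
steep = discounted_value(100, 30, k=0.10)    # 25.0
shallow = discounted_value(100, 30, k=0.01)  # ~76.9
```

In these terms, a participant faking impulsivity would shift their choices toward a larger apparent k than their genuine preferences imply.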

Neuroscience · Theory

Neuronal response variability as a product of divisive normalization

Some brain waves are illusory, artifacts of averaging punctuated bursts of brain activity across hundreds of trials. Buried in the smoothly undulating waves is variability that can predict behavior at single-trial resolution.

— HRB Open Research 3(34)
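The normalization model the entry refers to has a canonical textbook form in which each unit's drive is divided by a pooled sum over the population. A minimal sketch of that canonical form (the paper's exact formulation may differ):

```python
def divisive_normalization(drives, sigma=1.0, n=2.0, gamma=1.0):
    """Canonical model: R_i = gamma * d_i^n / (sigma^n + sum_j d_j^n).
    Parameter names (sigma, n, gamma) follow the textbook form and
    are illustrative, not taken from the paper."""
    pool = sum(d ** n for d in drives)
    return [gamma * d ** n / (sigma ** n + pool) for d in drives]

# Each response is rescaled relative to total population activity:
responses = divisive_normalization([1.0, 2.0, 3.0])  # [~0.067, ~0.267, 0.6]
```

Because the denominator pools over the whole population, trial-by-trial fluctuations in any one unit's drive reshape every unit's response, which is one route from normalization to the response variability the paper examines.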

Science · Management

NextGen advises “Trying to Manage”

Managing people is an unavoidable part of laboratory work. And it deserves the same rigor: identify manipulable variables, change them systematically, and keep the PI informed.

— Science 366(6461), 28–30

Elsewhere on this site

Section

About & CV

A short professional narrative and the structured public record — degrees, positions, papers, talks.

Section

Projects

Software, tools, and ongoing work — including Formative Grapher and ClawSuite Relay.

Section

Build

Custom behavioral engineering and AI systems for organizations that want to own their measurement and decision infrastructure.

Offer

Executive AI

A closed-cohort intensive for leaders implementing AI in real operating contexts.