Writing, building, and consulting on complex systems through behavior
analysis, artificial intelligence, experimental and research methods,
and executive leadership.
By David M. Cole
Anthropomorphic AI does not merely make software feel human; it changes what users do.
Most arguments about anthropomorphic AI circle the wrong question: whether users believe the machine is conscious, caring, malicious, or sorry. The more durable effect is behavioral. A chat interface arranges a social scene, and familiar scenes occasion familiar conduct before anyone has to endorse a theory of machine minds.
That social surface is useful at the start. It lets people ask, interrupt, correct, soften, and continue without learning syntax or operating a console. But when the work requires control, the same repertoire starts to mislead: editing tone while the source is missing, treating an apology as a diagnosis, scolding the model instead of changing the conditions that shape the next answer.
Sometimes the cost is dramatic: a chatbot gives harmful advice, a company tries to disown the answer on its own website, a companion interface becomes part of a crisis.
The social surface invites one repertoire. The machine underneath is more sensitive to another.
More often it is ordinary and cumulative. Work slows down. Errors get personalized instead of traced. Accountability drifts from the organization that designed the system to the fictional character the interface has staged.
The point is not to strip language models of every human cue. It is to notice when the human shape has become the wrong control surface. AI may keep arriving; the interface does not have to keep pretending the machine is someone to negotiate with.
AI in practice · Continued
Three from the desk · Behavior, strategy, persistence
Modern AI inherited reinforcement learning from a line running through
Thorndike, Skinner, and a century of studying behavior as a function
of environment. Now AI is rebuilding measurement in clinical settings.
AI decision debates usually ask who clicked approve. The direction
was shaped earlier — when a system turned a messy file into a
frame, a ranking, a draft reason.
From the Field
Talks · Project · AI in applied work
A web-app refactor of Formative Grapher is in early development.
The new version drops the Excel dependency, carries the
time-series graphing primitives to the browser, and keeps the
original’s bias toward accuracy and speed.
Clinical behavior analysts are expected not only to graph data
continuously but also to follow particular conventions that are
laborious to implement with commonly available software. The 2015
original, developed with Dr. Benjamin Witts for my master’s thesis, addressed that gap with a free APA-Style Excel template. While it still finds users a decade later, it is no longer maintained.
Talk
Inevitable: Opportunities and ethical challenges of artificial intelligence in ABA
D. M. Cole & S. Carstens · 2024
Cagnes-sur-Mer — As ChatGPT
was still percolating into public discourse, the talk surveyed
where AI tools open up everyday behavior-analytic work, and where
the new ethical hazards land: supervision, documentation, and
clinical judgment among them. Originally given at the Best of ABA
Conference, with a follow-up panel scheduled at the European
Association for Behaviour Analysis in Brno.
From the Lab
Older work · Science, decision-making, neuroscience
Symposium
Adding genetically modified mice to the armamentarium of behavior analysis
D. M. Cole · 2018
San Diego — Rats and pigeons
still dominate as animal models in the experimental analysis of
behavior. In this symposium on alternative model organisms —
from alcoholic bees to robotic zebrafish — I discussed
tradeoffs of mice, which learn more slowly than rats but offer more
genetic engineering possibilities.
Motor preparation for compensatory reach-to-grasp responses
D. A. E. Bolton, D. M. Cole, B. Butler, et al. · 2019
A handle on a wall is more than background scenery. We unexpectedly
released a cable holding people in a forward lean. Using transcranial
magnetic stimulation, we demonstrated that merely seeing the handle
was sufficient to prepare their motor system, such that participants
later reached for the handle with greater specificity than pure
reflex explains and with greater speed than pure volition explains.
Falls are the leading cause of accidental death among older adults.
The usual suspect is frailty, but greater culpability lies with the
nervous system. Specifically and paradoxically, the culprit may be
less the failure to rapidly fire a recovery action and more the
failure to inhibit competing, incompatible actions in time.
Assessing susceptibility of a temporal discounting task to faking
D. M. Cole, J. M. Rung, & G. J. Madden · 2019
Delay discounting describes how people choose between smaller sooner
rewards and larger delayed ones. It can also be faked. Given a
motivational prompt and no other insight into common laboratory
assessments, participants systematically manipulated their results.
Translational researchers and test designers should take note.
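For readers new to the concept, delay discounting is commonly modeled with Mazur's hyperbolic function, V = A / (1 + kD): subjective value V of amount A falls as delay D grows, at a rate set by k. A minimal sketch in Python (the dollar amounts and k values are illustrative, not taken from the paper):

```python
def discounted_value(amount, delay, k):
    """Hyperbolic discounting (Mazur, 1987): subjective value falls
    with delay, and a larger k means steeper discounting."""
    return amount / (1 + k * delay)

# A patient chooser (small k) still values $100 in 30 days above $60 now;
# an impulsive chooser (large k) does not.
patient = discounted_value(100, 30, 0.01)    # 100 / 1.3  ~= 76.9
impulsive = discounted_value(100, 30, 0.25)  # 100 / 8.5  ~= 11.8
```

A faked result, in these terms, is a reported k that does not reflect the chooser's actual preferences.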
Neuronal response variability as a product of divisive normalization
K. L. Ruddy, D. M. Cole, C. Simon, & M. Bächinger · 2020
Some brain waves are illusory, artifacts of averaging punctuated
bursts of brain activity across hundreds of trials. Buried in the
smoothly undulating waves is trial-by-trial variability that can
predict behavior with trial-by-trial resolution.
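The averaging artifact is easy to demonstrate. In this sketch (pure Python, with made-up parameters), every simulated trial contains a single brief 20 Hz burst at a random time; averaging across trials produces a smooth, low-amplitude wave that no individual trial contains:

```python
import math
import random

random.seed(0)
T = [i / 500 for i in range(500)]  # one second, 500 samples

def trial(burst_time):
    # One brief 20 Hz burst under a narrow Gaussian envelope.
    return [math.exp(-((t - burst_time) ** 2) / (2 * 0.02 ** 2))
            * math.sin(2 * math.pi * 20 * t) for t in T]

trials = [trial(random.uniform(0.3, 0.7)) for _ in range(300)]
average = [sum(vals) / len(trials) for vals in zip(*trials)]

# Each trial peaks near 1; the average peaks far lower and undulates
# smoothly -- a sustained "wave" present in no single trial.
single_trial_peak = sum(max(tr) for tr in trials) / len(trials)
```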
D. Steinhauff, J. H. Ellwanger, … D. M. Cole, et al. · 2019
Managing people is an unavoidable part of laboratory work. And it
deserves the same rigor: identify manipulable variables,
systematically change them, and keep the PI informed.