On a Tuesday afternoon in a clinic in Dayton, Ohio, a BCBA named Rachel Bennett is on the floor beside a three-year-old named Marcus. [1. Rachel is a composite. The specifics are altered; the room is not unusual.] She is running natural-environment teaching. Marcus reaches for a toy car just out of his grasp. Rachel pauses, prompts the vocal mand, and when Marcus approximates the word, she hands him the car. On the tablet propped on her knee, she records the teaching opportunity: prompted vocal mand, approximate response, car delivered. She has been a Board Certified Behavior Analyst since 2018, and by her count she has collected data this way for eight years.
While Rachel is scoring Marcus's teaching opportunities, a company called Frontera Health is raising $42 million in seed funding to build artificial intelligence (AI) for autism care. [2. Inspired Capital Job Board. Clinical prompt engineer (Board Certified Behavior Analyst — remote) @ Frontera.] Frontera's software records sessions like Rachel's and analyzes video, audio, movement, speech, and interaction at thirty frames per second — beyond what a human observer can reliably see. [3. Frontera Health, “Frontera Health launches, bringing AI solutions to transform autism care and advance health equity”, press release, February 18, 2025; The Company Check, Frontera Health company profile.] Its clinical-report tools promise to cut hours from assessment writing. Its own clinics in rural New Mexico serve as the testing ground. [4. Frontera Health, ABA clinical report automation, early autism detection digital phenotyping technology, assessment builder, diagnosis builder, and “Introducing Frontera”, February 18, 2025.]
Frontera is hiring. The listing is for a role called Clinical Prompt Engineer. Requirements: a current BCBA credential and years of clinical experience. The work is to develop and refine AI prompts so the software produces clinical output a behavior analyst would recognize. The field's expertise is genuinely needed. But it is being hired downstream — after the architecture, after the funding, after the product decisions have already been made. An ordinary hybrid title tells you where the new territory begins; this one tells you where the old territory ends.
The company is not led by behavior analysts. Its chief executive officer, a former partner at Kleiner Perkins, one of Silicon Valley's most powerful venture firms, has a son with autism. [5. Frontera Health, launch press release, February 18, 2025; About Frontera; Manu Kohli, “Frontera Health: My mission to transform autism care”, LinkedIn, March 11, 2025; CogniAble, About Us.] One co-founder and AI leader is an IIT-Delhi researcher who previously co-founded an AI autism-screening platform. The leadership is venture capital, product, engineering, and machine learning. Behavior analysts appear in the public story as users, validators, and now prompt engineers.
This is not a story about one company. Frontera may build useful tools. Families in rural New Mexico may benefit. The story is that the field whose science is built around behavioral measurement did not build the measurement stack first. Someone else noticed the problem, raised the money, hired the engineers, and then came back for the BCBAs.
That should bother us.
A lineage unclaimed
Behavior analysis has an unusual relationship to artificial intelligence. It is not merely another health profession being asked to adopt AI. At the frontier of AI, reinforcement learning is doing work behavior analysts should recognize: not as metaphor, but as a training arrangement in which consequences shape what a system does next.
Pretraining gives a language model a vast repertoire. Reinforcement learning is increasingly used to organize that repertoire into extended behavior: checking work, persisting through difficult problems, deciding when to call tools, recovering from failed attempts, and producing answers that survive external tests. The target is not only a better sentence. It is a pattern of action over time.
That is familiar territory. Reinforcement learning is the branch of machine learning concerned with how an agent learns from the consequences of its actions. Sutton and Barto's foundational textbook traces the lineage through Thorndike's Law of Effect and devotes a full chapter to connections with behavioral psychology. [6. Sutton, R. S., & Barto, A. G. (2018). Reinforcement Learning: An Introduction (2nd ed.). MIT Press.] Reinforcement learning from human feedback (RLHF) made the public version legible: responses are generated, evaluated, and then selected toward the consequences arranged by human preference, reward models, or verifiable task success. [7. Christiano, P. F., et al. “Deep reinforcement learning from human preferences” (2017); Ziegler, D. M., et al. “Fine-tuning language models from human preferences” (2019); Ouyang, L., et al. “Training language models to follow instructions with human feedback” (2022); OpenAI, “Learning to reason with LLMs” (2024); OpenAI, “Introducing OpenAI o3 and o4-mini” (2025); Guo, D., et al. “DeepSeek-R1 incentivizes reasoning in LLMs through reinforcement learning”, Nature (2025).]
The vocabulary differs. The scale is industrial. The training environment is made of data centers, benchmarks, human raters, tool calls, tests, and reward models rather than levers, lights, pellets, and cumulative records. But the contingency logic is not foreign. A system acts; the environment differentially selects; the future distribution of behavior changes.
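That three-beat loop (a system acts, the environment differentially selects, the future distribution of behavior changes) is concrete enough to run. A minimal sketch, assuming nothing about any production system: the action names, learning rate, and reward arrangement below are invented for illustration, in the style of a gradient-bandit update over a small repertoire.

```python
import math
import random

def softmax(prefs):
    """Convert numeric preferences into a probability distribution over actions."""
    exps = {a: math.exp(p) for a, p in prefs.items()}
    total = sum(exps.values())
    return {a: e / total for a, e in exps.items()}

def train(prefs, reinforced_action, steps=200, lr=0.5, rng=None):
    """Act, let the environment differentially reinforce, update preferences.

    Reward is 1 only when `reinforced_action` is emitted -- that is the
    environment's contingency. Preferences shift toward actions whose
    emission was followed by reward (a Law-of-Effect-style update)."""
    rng = rng or random.Random(0)
    baseline = 1.0 / len(prefs)  # simple fixed baseline: chance level
    for _ in range(steps):
        probs = softmax(prefs)
        actions = list(probs)
        action = rng.choices(actions, weights=[probs[a] for a in actions])[0]
        reward = 1.0 if action == reinforced_action else 0.0
        for a in prefs:
            grad = (1.0 if a == action else 0.0) - probs[a]
            prefs[a] += lr * (reward - baseline) * grad
    return softmax(prefs)

prefs = {"press_lever": 0.0, "groom": 0.0, "rear": 0.0}
final = train(prefs, reinforced_action="press_lever")
```

Reinforcing only `press_lever` shifts the emitted distribution toward it; the preference values play the role of the organism's history, and the update is driven entirely by arranged consequences.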
The people telling this story are computer scientists, computational neuroscientists, and AI labs. Harvard's Kempner Institute titled one explainer “From Lab Rats to Chatbots.” [8. Kempner Institute, Harvard University, “From lab rats to chatbots: On the pivotal role of reinforcement learning in modern large language models.”] AI researchers routinely cite operant conditioning, animal behavior, and the Law of Effect when explaining reinforcement learning and RLHF. Behavior analysis has not made this lineage central to its own public account of AI. A field that helped give AI one of its central training logics is not substantially present in the room where that logic became infrastructure.
That is the first strange fact. The second is that this absence would have been hard to predict at the beginning of the field.
When behavior analysts were builders
Skinner was not only a theorist. He was a builder.
The operant chamber was not borrowed from experimental psychology with a new label attached. It was engineered to solve a specific measurement problem: how do you observe behavior continuously, automatically, without relying on a human observer's memory, patience, or fatigue? In “A Case History in Scientific Method,” Skinner described the device as born partly from an engineer's impatience — “some ways of doing research are easier than others” — and partly from accidental discoveries that only continuous automated recording could have caught. [9. Skinner, B. F. (1956). A case history in scientific method. American Psychologist, 11(5), 221-233.] The food magazine jammed. Pellets stopped coming. The rat pressed harder and faster — the cumulative line climbing steeply. Then pressing slowed — gradually at first, then more sharply, with an occasional burst of renewed activity, until the line bent toward horizontal. Skinner had not set out to discover extinction. The science followed the instrument.
The cumulative recorder was not decoration. Skinner compared it to the microscope, X-ray camera, and telescope — instruments that made visible what ordinary observation could not. [10. Skinner, B. F. (1972). Cumulative Record. B. F. Skinner Foundation.] Before the recorder, behavior was counted in trials. After it, behavior was a slope: continuous, real-time, visually inspectable. That was not a stylistic choice. It was a measurement revolution.
In 1948, Skinner extended the engineering ambition from hardware to society. His novel Walden Two imagined a community designed on behavioral principles — positive reinforcement, designed environments, reduced aversive control. [11. Skinner, B. F. (1948). Walden Two. Macmillan.]
That ambition did not carry forward evenly into applied work. The move from laboratory chambers to classrooms, clinics, homes, and psychiatric wards made automatic recording harder. Applied behavior analysis gained social reach, but much of its measurement became manual again: observers with clipboards, interval sheets, counters, and later tablets. The data apps enhanced the clipboard. They did not recreate the automatic measurement ambition Skinner had already demonstrated.
Skinner was not an outlier. Building was normal in early behavior analysis. Ogden Lindsley built a human operant laboratory and framed free-operant measurement in engineering terms. [12. Lindsley, O. R. (1996). The four free-operant freedoms. The Behavior Analyst, 19(2), 229-240; Lindsley, O. R. (2003). Skinner boxes for psychotics. Journal of Applied Behavior Analysis, 36(4), 545-559.] Charles Ferster built automated programming equipment and ran tens of thousands of hours of automated behavioral research. [13. Ferster, C. B., & Skinner, B. F. (1957). Schedules of Reinforcement. Appleton-Century-Crofts.] Nathan Azrin — who used the language of behavioral engineering — built token-economy systems, toilet-training apparatus, habit-reversal protocols, and a community reinforcement approach that treated entire social environments as designable systems. [14. Azrin, N. H., & Powell, J. (1969). Behavioral engineering: The use of response priming to improve prescribed self-medication; Ayllon, T., & Azrin, N. H. (1968). The Token Economy; Azrin, N. H., & Foxx, R. M. (1971). A rapid method of toilet training the institutionalized retarded; Azrin, N. H., & Nunn, R. G. (1973). Habit-reversal; Azrin, N. H. (1976). Improvements in the community-reinforcement approach to alcoholism; Fuqua, R. W., Poling, A., & Normand, M. P. (2016). Nathan H. Azrin: A biographical sketch.] Sidney Bijou engineered classroom arrangements. [15. Bijou, S. W., Peterson, R. F., & Ault, M. H. (1968). A method to integrate descriptive and experimental field studies at the level of data and empirical concepts; Ghezzi, P. M. (2010). In memoriam: Sidney W. Bijou.] The founders treated instrumentation as continuous with science.
In 1968, Baer, Wolf, and Risley warned that applied work could degenerate into “a collection of tricks” if it lost principled connection to the science that generated its procedures. [16. Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1(1), 91-97.] We are now living inside that warning.
Growth without instruments
In 1999, there were twenty-eight BCBAs. [17. Deochand, N., Lanovaz, M. J., & Costello, M. S. (2024). Assessing growth of BACB certificants (1999-2019). Perspectives on Behavior Science, 47(1), 251-282.] Today the Behavior Analyst Certification Board (BACB) credentialing system includes approximately 342,000 certificants across BCBAs, Board Certified Assistant Behavior Analysts (BCaBAs), and Registered Behavior Technicians (RBTs). [18. Behavior Analyst Certification Board, BACB certificant data, retrieved April 17, 2026.] In the United States, state-by-state autism insurance reforms — beginning with Indiana's mandate in 2001 and closing with Tennessee in 2019 — helped make ABA a reimbursed healthcare service nationwide, [19. Barry, C. L., et al. (2019). State mandate laws for autism coverage and high-deductible health plans. Pediatrics, 143(6); National Conference of State Legislatures, Autism and insurance coverage state laws (September 30, 2025); Autism Speaks, press release, August 2, 2019; Bernhard, B., “Autism insurance coverage now required in all 50 states,” Disability Scoop, October 1, 2019.] and that reimbursement system became the commercial engine of the modern ABA industry.
A profession can grow faster than its science. It can grow through billing codes, credential maintenance, utilization review, and compliance infrastructure while its measurement tools barely change.
That mismatch is easier to see from outside behavior analysis than from inside it. In neuroscience, the pace of change can feel almost punitive because the science advances through its instruments. New recording methods, imaging pipelines, cell-type tools, open datasets, machine-learning models, and analytic conventions change not only what researchers know, but what they can ask next. The BRAIN Initiative was built around exactly this premise: that progress in neuroscience depends on developing tools for mapping circuits, monitoring neural activity, and intervening causally in nervous-system function. [21. National Institutes of Health, BRAIN 2025: A Scientific Vision; Bargmann, C. I., & Newsome, W. T. (2014). The Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative and neurology. JAMA Neurology, 71(6), 675-676; Ecker, J. R., et al. (2017). The BRAIN Initiative Cell Census Consortium: Lessons learned toward generating a comprehensive brain cell atlas. Cell, 170(6), 1098-1110.] Behavior analysis has had a more tractable interface for a long time: a person, an environment, observable behavior, a data sheet, and a graph. That tractability was a strength. It made the science portable. But it also reduced the pressure to rebuild the instrument. ABA could keep producing useful work with familiar tools while adjacent sciences entered a feedback loop in which better sensors, computation, and models generated better questions, better data, and faster cycles of description, prediction, and control.
Rachel was credentialed in 2018. In the eight years since, the RBT column in the chart above has more than doubled. Her caseload has grown with it. The app on her tablet has not.
Private equity entered the field heavily in the mid-2010s. More than sixty firms have been active in ABA acquisitions, and a large share of autism-related mergers and acquisitions between 2017 and 2022 involved private equity. [22. Batt, R., & Appelbaum, E. (2023). Pocketing money meant for kids: Private equity in autism services. Center for Economic and Policy Research; Herman, B. “The private equity firms, like Blackstone and KKR, behind 8 of the biggest names in autism therapy,” STAT, August 15, 2022.] The Center for Autism and Related Disorders became emblematic: acquired by Blackstone for roughly $600 million, it later collapsed into bankruptcy after operational strain, training reductions, and clinic closures, and was repurchased for a small fraction of that price. [23. Blackstone, “Blackstone to acquire Center for Autism and Related Disorders (CARD),” press release, April 30, 2018; Chuck, E. “The Center for Autism and Related Disorders grew to 265 clinics. Then private equity took over,” NBC News, October 5, 2023; Batt & Appelbaum (2023), CEPR.] RBT turnover runs between 77% and 103% annually; Jenna is Rachel's third RBT this year. BCBA burnout is reported at 72%. [24. CentralReach, 2025 Autism and IDD Care Market Report; Slowiak, J. M., & DeLongchamp, A. C. (2022). Self-care strategies and job-crafting practices among behavior analysts. Behavior Analysis in Practice, 15(4), 1300-1313.] The causes are multifactorial and the numbers deserve careful reading, but the broad pattern is real: ABA is a labor-intensive service delivered inside systems that reward billable volume more clearly than scientific instrumentation. Claims data show ABA visit volumes grew by roughly 267% between 2019 and 2024, with the bulk of billing concentrated in a single technician-delivery code. [25. Trilliant Health, “ABA therapy utilization grew nearly 300%, driven by increases in Medicaid,” December 22, 2025.]
The dominant commercial incentive is to move more hours through that code, not to measure what happens during those hours with greater fidelity.
The scientific methods that produced ABA's evidence base produced the advocacy, produced the state-by-state mandates, and produced the reimbursement stream that followed. The industry has been harvesting that dividend for two decades. It has not been reinvesting it in the measurement science that earned the dividend in the first place. A field that grows through mandate and does not reinvest in the instruments of its science gets the worst of both worlds: market dependence on the science alongside declining capacity to renew it.
The measurement problem sits inside that system. Many clinical data collectors have competing responsibilities during sessions — teaching, prompting, reinforcing, redirecting — while simultaneously recording what happened. [26. Morris, C., Conway, A. A., Becraft, J. L., & Ferrucci, B. J. (2022). Toward an understanding of data collection integrity. Behavior Analysis in Practice, 15(4), 1361-1372.] The measurement methods vary by program: trial-by-trial accuracy, frequency, duration, latency, momentary time sampling, partial-interval recording, whole-interval recording, narrative ABC data. The important claim is not that one method dominates. It is that much applied measurement is constrained by human bandwidth, and those constraints are treated as normal rather than as a problem to engineer away.
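The bandwidth constraint is easy to make concrete. A sketch with invented timestamps: given the continuous event record an automated instrument would produce, the human-bandwidth methods above are lossy summaries of it. Partial-interval recording, for example, can say only whether each interval contained the behavior, not how much of it occurred.

```python
def partial_interval(events, session_len, interval=10.0):
    """Score partial-interval recording: an interval is marked if any
    response falls anywhere inside it, however many actually occurred."""
    n = int(session_len // interval)
    marked = [any(i * interval <= t < (i + 1) * interval for t in events)
              for i in range(n)]
    return sum(marked), n

# Hypothetical continuous event record: response timestamps (s) in a 90-s session.
events = [3.1, 3.4, 3.8, 47.0, 51.2, 88.9]

frequency = len(events)                      # the full record: 6 responses
marked, n = partial_interval(events, 90.0)   # what interval scoring preserves
# The burst at 3.1-3.8 s collapses into one marked interval: a run of three
# responses and a single response are scored identically.
```

The continuous record can always be reduced to the interval summary; the interval summary can never be expanded back into the record. That asymmetry is the case for instrumenting the recording side.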
The apps that replaced clipboards — CentralReach, Catalyst, Motivity, Raven, and others — improved many workflows and continue to add features. [27. CentralReach, Enterprise product page, APIs, and 2025 market report release, March 26, 2025.] But the difference between digitizing clinical paperwork and rebuilding the measurement apparatus is the difference between a faster clipboard and a new instrument. Skinner built an instrument. Neuroscience keeps rebuilding its instruments. The field has built the clipboard. The question now is what it is prepared to build next.
The instruments are being built elsewhere
Elsewhere, the instruments are being built — just not by behavior analysts.
The pattern closest to ABA's daily work is in automated behavioral coding.
Animal-behavior researchers already have open-source tools — SLEAP, DeepLabCut, A-SOiD — for markerless pose tracking, multi-animal identification, and expert-guided behavior classification that are more sophisticated than anything clinical ABA uses. [28. Mathis, A., et al. (2018). DeepLabCut: Markerless pose estimation of user-defined body parts with deep learning. Nature Neuroscience, 21, 1281-1289; Pereira, T. D., et al. (2022). SLEAP: A deep learning system for multi-animal pose tracking. Nature Methods, 19, 486-495; Tillmann, J. F., et al. (2024). A-SOiD, an active-learning platform for expert-guided, data-efficient discovery of behavior. Nature Methods, 21, 703-711. Horse-10 benchmark and the “important tool for measuring behavior” framing: Mathis, A., Biasi, T., Schneider, S., Yuksekgonul, M., Rogers, B., Bethge, M., & Mathis, M. W. (2021). Pretraining boosts out-of-domain robustness for pose estimation. WACV 2021. Human model (full_human) and other shared checkpoints: DeepLabCut project page, Mathis Lab.] The DeepLabCut team describes pose estimation as “an important tool for measuring behavior,” widely used in technology, medicine, and biology; their Horse-10 benchmark shows generalization from ten labeled horses to twenty held-out individuals, and a full-body human model already ships in their Model Zoo. These systems do not just count behavior. They track movement at frame-level resolution, classify behavioral states in real time, and let researchers define the response classes that matter rather than accepting vendor defaults. The DeepLabCut project alone has grown into an open-source ecosystem — shared models, active extensions, training resources, and a community of contributors — of a kind that clinical ABA has no counterpart for. The technology for continuous automated measurement of complex behavior exists. It is being used on zebrafish, mice, primates, and humans. It is not being used on therapy sessions.
Ambient medical scribes are already reducing documentation burden in medicine, with a Permanente Medical Group deployment reportedly saving physicians close to sixteen thousand hours of documentation time and improving patient-physician interaction, [29. Tierney, A. A., et al. (2025). Ambient artificial intelligence scribes: Learnings after 1 year and over 2.5 million uses. NEJM Catalyst Innovations in Care Delivery, 6(5); Feldheim, B., “AI scribes save 15,000 hours — and restore the human side of medicine,” American Medical Association, June 12, 2025; The Permanente Federation analysis, April 7, 2025.] and peer-reviewed evaluations of ambient AI scribes showing meaningful reductions in clinician burnout. [30. Olson, K. D., et al. (2025). Use of ambient AI scribes to reduce administrative burden and professional burnout. JAMA Network Open, 8(10), e2534976.] Open-source speech and vision models can transcribe and tag clinical interactions. The infrastructure for turning a session into structured, searchable, analyzable data is no longer a research prototype. It is commodity technology waiting to be configured for behavior-analytic problems.
The screening and assessment layer tells the same story from the outside in. At Duke University, Geraldine Dawson and Guillermo Sapiro developed SenseToKnow, a tablet app using computer vision to quantify social attention, facial dynamics, head movements, blink rate, and response to name in toddlers. Published in Nature Medicine in 2023, it achieved strong autism-screening performance from a ten-minute assessment, and the technology has been licensed to Apple. [31. Perochon, S., et al. (2023). Early detection of autism using digital behavioral phenotyping. Nature Medicine, 29(10), 2489-2497.] Dawson is a clinical psychologist. Sapiro is an electrical engineer. No behavior analysts appear in the project. A nine-university consortium funded by a $20 million NSF grant — the AI Institute for Exceptional Education — is building AI tools for children with speech and language disorders, including autism. [32. National Science Foundation, “NSF announces new AI institute,” press release, January 9, 2023.] These are not ABA projects, and behavior analysts are not usually diagnosticians. But the broader pattern is clear: the measurement layer around autism and developmental disability is being rebuilt by engineers, psychologists, and AI labs, and it will not stop at screening. It will reach treatment delivery, outcome tracking, and service authorization — the places where ABA lives.
The venture-capital picture confirms the direction: global digital mental health attracted $2.7 billion in 2024, a 38% year-on-year increase, with autism-specific companies raising tens of millions each. [33. Galen Growth, “Mental health's investment resurgence: A market ripe for innovation,” March 5, 2025.] Meanwhile, the technology layer inside the ABA market itself is still dominated by practice management — scheduling, billing, documentation — not measurement science.
Capital has noticed that behavioral measurement is a solvable problem. It has not waited for behavior analysts to solve it.
When the dial belongs to someone else
The quieter threat is not competition. It is loss of control over the instruments that define clinical reality.
Payers are deciding which measures count. In 2025, the National Academies concluded that TRICARE's ABA outcome measures — the Vineland-3, SRS-2, and PDDBI — were not validated for the purpose of measuring outcomes in the autism population to which they were applied, and recommended halting the requirement. [34. National Academies of Sciences, Engineering, and Medicine (2025). The comprehensive autism care demonstration: Solutions for military families. National Academies Press.] These were payer-chosen assessments imposed on a behavior-analytic intervention.
The next generation of this problem is algorithmic. RethinkFirst markets a patent-pending AI dosage calculator trained on a large dataset to recommend ABA service hours to utilization reviewers. [35. RethinkFirst, “Streamlining utilization review for applied behavior analysis,” February 23, 2023; RethinkBH medical necessity assessment.] The provenance of such training data matters: historical authorization patterns already reflect payer preference, regional variation, and the biases of previous reviewers, which an algorithm trained on them will inherit and generalize.
Rachel knows what Marcus needs. She drafts the TRICARE appeal. Somewhere in a server rack, a model grades her appeal. It has been trained on a decade of prior authorization decisions — records shaped less by any child's clinical need than by payer caps, state mandates, incomplete documentation, and whichever reviewers happened to approve them.
And families will not protect the field from that shift. The parents who fought for ABA mandates fought for their children, not for a credential. Retention in ABA services stands at 46% at twenty-four months. [36. Choi, K. R., Bhakta, B., & Knight, E. A. (2022). Patient outcomes after applied behavior analysis for autism spectrum disorder. Journal of Developmental & Behavioral Pediatrics, 43(1), 9-16.] Over half the families who started with Rachel's clinic two years ago have left. Naturalistic developmental behavioral interventions (NDBIs) are gaining evidence and adoption. Speech-language pathologists are delivering NDBI strategies under their own billing codes, bypassing ABA's insurance structure. [37. Lee, J., Sone, B., Rooney, T., & Roberts, M. Y. (2023). The role of naturalistic developmental behavioral interventions in early intervention for autistic toddlers; D'Agostino, S. R., et al. (2023). Toward deeper understanding and wide-scale implementation of naturalistic developmental behavioral interventions; ASHA speech-language pathology billing codes.] If a parent sees a competing service that offers continuous progress data, clearer communication, and a therapist whose full attention is on their child, they will not ask which field has the stronger theoretical lineage. They will ask whether their child is helped. The pressure from the payer side and the pressure from the family side point in the same direction: a field that does not shape its own measurement instruments will find its authority squeezed between the two.
Form is not function
A camera can detect that a child fell to the floor. A model can learn that floor-dropping happens more often during academic demands. A dashboard can count duration, latency, and co-occurring events. All useful. None sufficient.
Suppose the floor-drop produces escape from demands. The intervention is to modify the demand or teach an alternative response that achieves the same outcome. Now suppose the same floor-drop produces adult attention — the therapist turns, leans in, speaks. Same topography. Different controlling relation. Opposite intervention. A system that cannot distinguish what the behavior does in the environment cannot distinguish these two cases. It sees a child on the floor. It does not see why.
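The two cases can be shown in miniature. A descriptive sketch (the event logs below are invented for illustration; real descriptive data would come from coded sessions): both records contain the same topography, four floor-drops each, and only the consequence column separates them.

```python
from collections import Counter

def consequence_tally(abc_log, behavior="floor_drop"):
    """Tally which consequences follow each occurrence of the target behavior."""
    return Counter(c for (_a, b, c) in abc_log if b == behavior)

# Two hypothetical session logs as (antecedent, behavior, consequence) triples.
escape_log = [
    ("demand", "floor_drop", "demand_removed"),
    ("demand", "floor_drop", "demand_removed"),
    ("demand", "floor_drop", "demand_removed"),
    ("demand", "floor_drop", "adult_attention"),
]
attention_log = [
    ("low_attention", "floor_drop", "adult_attention"),
    ("demand", "floor_drop", "adult_attention"),
    ("low_attention", "floor_drop", "adult_attention"),
    ("low_attention", "floor_drop", "demand_removed"),
]

# A topography-only system sees the same thing in both logs: four floor-drops.
same_count = (sum(b == "floor_drop" for _, b, _ in escape_log)
              == sum(b == "floor_drop" for _, b, _ in attention_log))
```

A tally like this narrows the candidate functions; it does not establish the controlling relation, which still has to be tested by arranging the contingency and watching whether the behavior responds.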
Behavior analysts work with black boxes. So, it is worth noting, do the AI engineers whose systems now dominate the technology conversation. Both traditions face the same practical problem: how do you steer a system whose internals you cannot inspect? The behavior-analytic answer has been to work at the level you can actually change — antecedent, response, consequence, and the contingencies that hold them together.
In 1959, Teodoro Ayllon and Jack Michael turned the psychiatric nurse into a behavioral engineer. [38. Ayllon, T., & Michael, J. (1959). The psychiatric nurse as a behavioral engineer. Journal of the Experimental Analysis of Behavior, 2(4), 323-334; McIlvane, W. J., & Kledaras, J. B. (2012). Some things we learned from Sidman and some we did not (we think).] On a locked ward where patients had been labeled unteachable, they redesigned the environment: prompts were changed, reinforcement was made contingent on behavior that had previously been ignored, tasks were broken into smaller steps that could be taught and reinforced in sequence. Patients began to learn. The point was not that these patients had no inner life. It was that the environment was where the leverage sat, and that a cohesive, repeatable method for arranging environmental contingencies was what the clinical team could actually do.
What changes now is not the level of analysis. It is the range of variables at that level. A room that can sense gaze, vocal affect, proximity, and response timing at frame resolution is a room where a far larger set of environmental variables becomes legible to a clinician. A room that can cue, prompt, or respond automatically is a room where the environment itself can be made to play its part. These are expansions of what an environmental approach can do — which is to say, expansions of what behavior analysis has been doing all along.
That same logic is the foundation of functional analysis, the method the field uses to decide what to do about a problem behavior. Its payoff is not philosophical; it is clinical. Treatments tied to a correctly identified function repeatedly outperform treatments that are not. [39. Hurl, K., Wightman, J., Haynes, S. N., & Virues-Ortega, J. (2016). Does a pre-intervention functional assessment increase intervention effectiveness? A meta-analysis; Pelios, L., Morren, J., Tesch, D., & Axelrod, S. (1999). The impact of functional analysis methodology on treatment choice; Geiger, K. B., Carr, J. E., & LeBlanc, L. A. (2010). Function-based treatments for escape-maintained problem behavior.] The empirical base is substantial: the method's foundational report aggregated more than one hundred and fifty single-subject analyses, and reviews spanning four decades find reliably differentiated outcomes in the large majority of cases. [40. Iwata, B. A., et al. (1994). The functions of self-injurious behavior: An experimental-epidemiological analysis; Melanson, I. J., & Fahmie, T. A. (2023). Functional analysis of problem behavior: A 40-year review.] That is not a methodological preference. It is an outcome difference.
What functional analysis does that prediction alone cannot is arrange the contingency on purpose and watch whether the behavior responds. Rich observational variability can narrow the candidate set — if a behavior sometimes precedes escape, sometimes attention, and sometimes tangible delivery, a good model may even propose a primary function. But identifying which of those arrangements would actually change the behavior, for this person, in this setting, requires changing one of them and observing the result. Judea Pearl put the formal version succinctly: association asks what correlates with what; intervention asks what happens when you change something. [41. Pearl, J., & Mackenzie, D. (2018). The Book of Why. Basic Books; Peyrard, M., & West, R. (2021). A ladder of causal distances.] Most machine learning lives on the first rung. Functional analysis lives on the second.
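The second rung can be sketched as a toy simulation (everything here is invented: the condition names, the probabilities, and the generative model of the child). The point is the structure of the test, not the numbers: arrange each contingency deliberately, and let differentiated responding across arranged conditions identify the function.

```python
import random

def run_condition(condition, true_function, trials=50, rng=None):
    """Simulate one arranged test condition for a child whose floor-drops
    are actually maintained by `true_function` (a toy generative model).

    Responding is probable only when the condition delivers the consequence
    that in fact maintains the behavior -- differentiated responding."""
    rng = rng or random.Random(0)
    maintains = {"escape": "demand_removed", "attention": "adult_attention"}
    delivers = {
        "escape_condition": "demand_removed",      # demands presented; drop removes them
        "attention_condition": "adult_attention",  # attention contingent on the drop
        "play_control": None,                      # enriched control; no contingency
    }
    p = 0.7 if delivers[condition] == maintains[true_function] else 0.1
    return sum(rng.random() < p for _ in range(trials))

rng = random.Random(42)
rates = {c: run_condition(c, true_function="escape", rng=rng)
         for c in ("escape_condition", "attention_condition", "play_control")}
identified = max(rates, key=rates.get)
```

This mirrors the logic of a multielement functional analysis: the conclusion comes from what happened when each arrangement was imposed, not from correlations observed in passing.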
How current language models produce output matters here. They work by conditional prediction over training patterns — structured compression at scale — which can yield something coherent enough to read as a functional analysis while identifying no controlling relation at all. The rigor behavior analysis expects has to be built around the pattern match, not assumed from it.
The field's comparative advantage, then, is not a better prediction algorithm, and it is not a monopoly on the idea of function — adjacent fields reason about function too. It is that behavior analysis has worked longer and more systematically at building the structures that identify controlling relations reliably: experimental conditions, single-subject designs, contingency analyses, treatment-fidelity checks, and the institutional habit of tying treatment to what the test showed. That advantage is only real if it shows up in the instruments. If the field does not carry those structures into the measurement systems that clinicians, payers, and families rely on, those systems will default to what they measure most easily, which is topography. The field's most important contribution will become invisible at the exact moment measurement becomes more powerful.
The case against building
The counter-case is real. ABA built the current system without rebuilding its measurement apparatus, and the system works: state-by-state mandates passed, coverage became near-universal, and children who would otherwise have had no access to structured services are being reached. The clipboard was good enough for that.
Clinical time is already compressed. The BCBA reading this paper is supervising a caseload of dozens. New measurement tools historically survive when they fit the existing workflow and pay off within the clinician's own day, and stall when they ask for something harder and slower in exchange for a payoff that materializes on someone else's timeline. Premature automation in medicine has harmed patients. “Move fast and break things” translates poorly into settings where the thing being broken is a child's service plan.
Each of those is correct. None of them argues the field should stand still. The compression of clinical time is a reason to automate the recording burden, not a reason to preserve it. The fit-the-workflow constraint is a design requirement, not a verdict. Prior harm from premature automation in medicine is an argument for careful instrumentation — designed, validated, and deployed by the people who understand the behavior the instrument is meant to measure. The critics of fast deployment are right about the risk. They are wrong about the alternative, which is not no tools but tools built by someone else.
A Tuesday afternoon, rebuilt
What would it look like to build instead?
A Tuesday afternoon, four years from now. Marcus is seven. Rachel is twelve years credentialed. They are in the same clinic in Dayton — or one like it — but the room is different. Multiple cameras are mounted to reduce blind spots, including during physical prompting. A small edge-computing device — roughly the size of a book — runs video and audio analysis locally. Pose estimation — software that tracks body position — is calibrated to clinically relevant response classes, such as falls, head movement, proximity, or self-injury. Speech analysis is tuned to Marcus's limited and idiosyncratic vocalizations. Both run on the device. Nothing leaves the room as raw video. Rachel wears a small earpiece. She is not looking at a tablet. Her attention stays on Marcus.
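The constraint that nothing leaves the room as raw video is an architectural decision, and a small one to state. A sketch under stated assumptions: `pose_model` and `response_classes` are placeholders for whatever on-device models and clinician-defined detectors a clinic actually deploys, not real APIs.

```python
def process_frame_locally(frame, pose_model, response_classes):
    """Run pose estimation on-device and export only derived event records.

    The raw frame never leaves this function; the privacy boundary is the
    return value, a list of small structured events. All names here are
    hypothetical placeholders, not a real product's interface.
    """
    keypoints = pose_model(frame)  # on-device inference, e.g. body landmarks
    events = [
        {"class": name, "t": frame["t"]}
        for name, detector in response_classes.items()
        if detector(keypoints)     # clinician-defined response classes
    ]
    del frame                      # raw video is not retained or transmitted
    return events
```

The design choice worth noticing is that the response classes are arguments, not baked-in categories: what counts as a fall, a proximity change, or self-injury is supplied by the clinician, which is the behavior-analytic decision the surrounding text describes.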
The system tracks Rachel too — not as a supervisor watching for mistakes, but as a co-therapist executing the program she has already designed. It checks prompt timing, reinforcement latency, and prompt hierarchy against the plan. When the planned next trial is due, it cues her. When a maintenance target has gone too long without a probe, it flags it. When Marcus begins to show the precursors to escalation Rachel asked it to watch for, the earpiece delivers a quiet word. Rachel sets the rules. The system keeps them.
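The arrangement in which Rachel sets the rules and the system keeps them is, mechanically, a clinician-authored rule set evaluated against the live event stream. A minimal sketch; every field name and threshold below is hypothetical, chosen for illustration only.

```python
from dataclasses import dataclass

@dataclass
class SessionRules:
    """Clinician-authored thresholds; all values are illustrative."""
    max_reinforcement_latency_s: float = 3.0   # cue if reinforcer is late
    maintenance_probe_gap: int = 5             # sessions without a probe
    precursors: tuple = ("whining", "table_push")

def check_event(rules, event):
    """Return the earpiece cues, if any, triggered by one session event."""
    cues = []
    kind = event.get("type")
    if kind == "reinforcement":
        latency = event["delivered_at"] - event["response_at"]
        if latency > rules.max_reinforcement_latency_s:
            cues.append(f"reinforcement latency {latency:.1f}s over plan")
    elif kind == "behavior" and event.get("label") in rules.precursors:
        cues.append(f"precursor observed: {event['label']}")
    elif kind == "target_status":
        if event["sessions_since_probe"] >= rules.maintenance_probe_gap:
            cues.append(f"probe due: {event['target']}")
    return cues
```

The design point is where the numbers come from: the clinician writes them, so the device enforces her plan rather than a vendor default.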
After the session, Rachel opens a dashboard. The session appears as a dense time series. Problem behavior increased when demand density rose after minute twelve. Reinforcement latency drifted from two seconds to five over the session — treatment-integrity data that would have been easy to miss while teaching. Unprompted manding climbed in the final ten minutes, and the trend across the last eight sessions shows as a slope rather than a score. A pattern appears that ordinary session data would not have revealed: problem behavior did not rise with demands in general; it rose when demands followed rapid therapist attention, and fell when the same demands followed a quieter transition. Another pattern, quieter but just as useful: Marcus's vocal approximations of a target request increased fourfold during the minutes when the environmental noise floor dropped below a threshold Rachel had flagged as relevant. Neither pattern was the one the program was primarily written to detect. Both are inspectable because the density of the data makes them inspectable. Rachel reads the patterns through a functional lens, because she is the one who told the system what to surface and what to ignore.
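One of those dashboard findings, the reinforcement-latency drift from two seconds to five, is cheap arithmetic once timestamps are dense. A sketch assuming the log stores a (response time, reinforcer time) pair per trial; the slope is ordinary least squares against trial index.

```python
def reinforcement_latencies(trials):
    """Seconds between each recorded response and its reinforcer."""
    return [reinforced - responded for responded, reinforced in trials]

def latency_drift(latencies):
    """Least-squares slope of latency against trial index, in s/trial.

    A positive slope means reinforcement is arriving later as the session
    wears on: treatment-integrity drift that is easy to miss while teaching.
    """
    n = len(latencies)
    mean_x = (n - 1) / 2                      # mean of indices 0..n-1
    mean_y = sum(latencies) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(latencies))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den
```

A session drifting from 2 s to 5 s shows up as a clearly positive slope; a steady session sits near zero, so the dashboard only has to threshold the number rather than ask anyone to watch for it.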
None of this displaces what only a human can do in this room. Autism spectrum disorder is diagnostically defined by persistent deficits in social communication and interaction alongside restricted and repetitive patterns of behavior. In practice, some associated behavior patterns — elopement, self-injury, aggression — can pose concrete safety risks.4242. American Psychiatric Association (2022). Diagnostic and Statistical Manual of Mental Disorders (5th ed., text rev.); NIMH, Autism Spectrum Disorder; Anderson, C., et al. (2012). Occurrence and family impact of elopement in children with autism spectrum disorders. Pediatrics, 130(5), 870-877. A room full of sensors is still a room where Marcus's repertoire is being built, moment to moment, by another person. The sensors do not replace Rachel. They are there so her attention can stay on Marcus — not on a data sheet.
The hardware is not exotic. Cameras, microphones, and edge devices are cheap enough for serious pilots — small, deliberately limited trials designed to learn whether a system is useful before a clinic scales it. The minimum hardware can be under two thousand dollars.4343. NVIDIA, Jetson Orin Nano Super Developer Kit; Logitech, Brio 4K; Louisiana R.S. 17:1948 (2025); Texas Education Agency, special education video surveillance. Louisiana has passed legislation requiring cameras in special education classrooms, effective February 2026. Texas allows them on parent request. The recording infrastructure is arriving whether the field builds on it or not.
What sets that infrastructure apart from a generic surveillance rig is the design work behind it. What counts as a response class, how antecedent events are tagged, whether the system tracks therapist behavior, how treatment integrity is operationalized, how generalization is measured, how the family accesses the data — these are behavior-analytic decisions, not procurement choices. If behavior analysts make them, they build demand for behavior-analytic expertise. If they do not, demand shifts to whoever owns the instrument.
And ownership matters. If every clinic rents this future from a proprietary vendor whose categories, defaults, and data access rules it did not define, behavior analysts become dependent on private systems they cannot inspect or modify. The field should aim at open protocols, shared data schemas, interoperable exports, and open-source reference implementations where possible. Companies can and should build products. But behavior analysts have to demand exportability, inspectability, and behavior-analytic categories before procurement locks in someone else's defaults. A science of behavior cannot be comfortable if the basic instruments of behavioral measurement exist only as vendor property.
ABA serves many young, disabled, nonverbal, or dependent clients. The data are intimate. The power asymmetry is real. A sensor-rich therapy room raises privacy and governance questions that a field taking its work seriously will have to answer — and better to answer them at design time than at deployment.4444. U.S. Department of Health and Human Services, HIPAA guidance on encrypted ePHI and individuals' right to access; U.S. Department of Education, FERPA guidance on student photo and video records.
Two versions of the job
Imagine the behavior analyst's job four years from now.
In one version, the trend toward remote, software-mediated supervision continues.4545. Sipila-Thomas, E. S., & Brodhead, M. T. (2024). A survey of barriers experienced while providing supervision via telehealth; Zoder-Martell, K. A., et al. (2020). Technology to facilitate telehealth in applied behavior analysis. RBTs collect data in apps that remain conceptually close to clipboards. AI drafts reports, mostly to satisfy payers. A utilization-review algorithm recommends hours before the behavior analyst has made the clinical case. The profession survives for a while as a service layer inside systems built by others. Over time, the parts of the role that are easiest to specify — documentation, summaries, dosage recommendations, compliance checks — become the parts easiest to automate or move elsewhere.
In the other version, some behavior analysts become behavioral engineers. That title should not mean a clinician who knows how to ask ChatGPT for a task analysis. It means someone who understands behavior analysis deeply enough to know what must be measured, and understands modern tools well enough to make the measurement real. They know that a form-based classifier is not a functional analysis. They know why treatment integrity matters. They know when data density is meaningful and when it is noise dressed as precision. They can look at an AI-generated pattern and ask the behavior-analytic question: what environmental relation would make this useful?
That kind of clinician has to come from a different training pathway. Baer, Wolf, and Risley's warning applies directly here:16 techniques without generative principles can be repeated, but they cannot adapt when the problem changes, and they cannot produce new techniques when the tools do. A behavioral engineer is only as useful as the principles they can reason from. The field that wants more of these clinicians has to train people in the generative science — not a branded curriculum — deeply enough to build from it. That means conceptual systems alongside computational literacy: enough understanding of the principles to know what the measurement should look like, and enough fluency with modern tools to help build it. Neither half is optional. A clinician with deep principles and no tools designs measurement that never ships. A clinician with tools and no principles builds fast, confident instruments that measure the wrong things.
That is the clinician I would want when a family I cared about needed a behavior analyst. That person is hard to find in ABA right now.
The reason is partly the shape of mid-career professional development. Behavior analysis has a layer of brand-named method certifications — fidelity-checklist programs built around a single protocol or originator, with tiered levels and supervisor sign-offs — that clinicians pursue to advance.4646. Examples of such certifications include the Early Start Denver Model certification (tiered therapist and trainer levels, supervised submissions, minimum fidelity scores); Practical Functional Assessment and Skill-Based Treatment credentialing from FTF Behavioral Consulting (seven credential levels, each verified by document review and direct observation under a credentialed supervisor); and the PEAK Relational Training System with tiered practitioner certifications. These certifications train clinicians to implement a specific protocol with high fidelity. Fidelity matters; it is also a technician's standard. A behavior analyst's standard is to experiment from the principles. A credentialing culture that treats permission-to-modify as a gift from the originator, rather than the ordinary work of the analyst, produces high-fidelity technicians rather than analysts.
Neither version arrives pure. The actual future will be a mixture — large platforms that embed behavior-analytic expertise as a service layer, boutique practices and university labs that build their own stacks, payer-owned decision tools that keep operating whether or not the field engages with them, a small open-source community that builds reference implementations used by a few. The question is not which version wins. It is where inside that mixture the field's authority will end up, and what share of the design decisions will have been made by people trained in the science.
The barriers are real. Current billing codes do not reimburse time spent building measurement infrastructure.4747. ABA Coding Coalition, billing codes and FAQ. The BACB task list includes no technology competencies.4848. Behavior Analyst Certification Board, BCBA test content outline (6th ed., 2024); test content outlines. Healthcare AI has a poor deployment record: the Epic Sepsis Model, deployed across hundreds of hospitals, was later found to perform poorly in independent validation.4949. Wong, A., et al. (2021). External validation of a widely implemented proprietary sepsis prediction model in hospitalized patients. JAMA Internal Medicine, 181(8), 1065-1070. Computer vision degrades with occlusion — common during therapy with young children.5050. de Belen, R. A. J., Bednarz, T., Sowmya, A., & Del Favero, D. (2020). Computer vision in autism spectrum disorder research: A systematic review; Kojovic, N., et al. (2021). Using 2D video-based pose estimation for automated prediction of autism spectrum disorders in young children. Education has its own cautionary failures.5151. Blume, H. “LAUSD shelves its hyped AI chatbot to help students after collapse of firm that made it,” Los Angeles Times, July 3, 2024.
Those are not reasons to stop. They are reasons for ordinary clinical organizations — the typical multi-site provider, the single-clinic BCBA owner, the university training program — to start small. A clinic can test whether AI-assisted documentation reduces report time. A provider can test whether automated transcription improves supervision. A university clinic can test whether computer vision detects treatment-integrity drift. These are not moonshots. They are the kind of controlled comparison behavior analysts already know how to run.
The Permanente template is a working example. A single medical group piloted ambient AI scribes on its own terms, iterated through the predictable failure modes, and published the result: close to sixteen thousand hours of physician documentation time returned, with measurable improvements in patient interaction and measurable reductions in clinician burnout.29 A decade ago the same pilot could not have been run; the tools did not exist. They do now.
The reason to start is not abstract. It is competitive. The provider that discovers a workflow where one behavior analyst with better measurement tools supervises more effectively than two behavior analysts with clipboards has a real advantage over the clinic next door. If a new role — say, a behavior analyst with technical fluency embedded in the supervision workflow — returns its cost through better authorization evidence, lower documentation burden, or earlier detection of treatment-integrity drift, that provider has a reason to invest more. If it does not return its cost, the provider stops. That is not failure. That is the selection process the field should understand.
Scale the logic up. If enough providers run enough small experiments, the ones that work get copied. The ones that fail get dropped. The field as a whole becomes more diverse — not just autism therapy, but behavioral measurement, system design, outcome architecture, treatment-integrity engineering. Some behavior analysts will sit closer to engineering teams and build the instruments. Some will sit closer to policy and define what the instruments are allowed to do. Some will keep doing high-fidelity clinical work with better tools than they have today. The monoculture breaks by distributing expertise across those roles, not by forcing every clinician down the same pathway. That diversification is not a luxury. For a field that has concentrated its professional identity in a single service line inside a single payer system, it is survival. A species that occupies one niche dies when the niche changes. A field that understands selection by consequences should not be afraid to let consequences select its next professional forms.
Diagnostic, not insult
There is a temptation to frame all of this as a threat story. AI will come for ABA. That is too crude.
The more interesting possibility is that AI will expose what behavior analysis has and what it has neglected.
It will expose that the field has a more serious account of environmental arrangement than most adjacent disciplines — that its best work has always been about finding leverage in the relation between person, action, and world. It will expose that functional analysis is not mere prediction — it is prediction tested through intervention. Asking what happens when the environment changes is a different and more useful question than asking only what correlates with what. It will expose that single-case logic is suited to individualized, streaming, intervention-sensitive data.
It will also expose that the field has allowed too much of its data to be privatized — enormous clinical data streams inside platforms, no public research commons, no shared dataset comparable to those that transformed other fields.5252. PMC Journal List — Journal of Applied Behavior Analysis; NLM Catalog — JEAB; Wiley's data sharing policies; Jennings, A. M., Breeman, S. L., Vladescu, J. C., & Cox, D. J. (2025). Survey of open science practices in behavior analysis (preprint). The open-science gap is concrete: as of April 2026, JABA is listed by PubMed Central as no longer participating, with coverage ending in 2012, and NLM Catalog marks both JABA and JEAB as PMC Inactive.5353. PMC Journal List; NLM Catalog — JABA; NLM Catalog — JEAB; PubMed Central policies and guidelines. Status and coverage only; no direct publisher or society statement explaining the reason for inactivity is cited. Adjacent disciplines moved toward open access. The field's flagship journals moved away from it.
The public agenda-setting gap is equally concrete. The Association for Behavior Analysis International (ABAI) holds one of the largest annual gatherings in the behavioral sciences. Between 2019 and 2025, its presidential and Presidential Scholar addresses covered climate change, domestic violence prevention, nonviolent social action, science communication, giant rats trained for landmine detection, cultural evolution, and translational research — all important topics, none about the technological transformation reshaping every adjacent field.5454. Association for Behavior Analysis International, 51st annual convention preview (April 29, 2025); 2025 annual convention highlights (June 6, 2025); 2022 President's column; 2024 Presidential Scholar essay (McKibben); 2022 Presidential Scholar essay (Fast / APOPO). In its July 2024 newsletter, the BACB illustrated generative AI use with an example of refining a task analysis for toothbrushing. Compare that with the American Psychological Association's Council of Representatives, which approved a formal AI policy by a reported 156-2 vote, or the American Speech-Language-Hearing Association, which maintains an Innovations in Technology convention topic area that explicitly includes AI, machine learning, and large language models.5555. Behavior Analyst Certification Board, July 2024 newsletter and newsletters index; Wessel, J. “Highlights of August 2024 APA conference and Council of Representatives meeting,” The Industrial-Organizational Psychologist, Society for Industrial and Organizational Psychology; American Psychological Association, Artificial intelligence and the field of psychology (2024, policy statement); American Speech-Language-Hearing Association, Innovations in Technology topic area. The 156-2 vote figure is sourced to SIOP's summary of the APA Council of Representatives meeting. The disparity is not about blame. It is about priorities made visible.
These exposures are not insults. They are diagnostics. A field that knows what it has — and is honest about what it has neglected — can still choose to build.
Frontera's Clinical Prompt Engineer job posting is useful precisely because it is not an insult. It is ambivalent. It shows that behavior-analytic expertise has market value. It also shows where that expertise may be placed if the field does not build its own instruments: after the model, after the product, after the capital, after the architecture.
Behavior analysis was born when its founder built an apparatus because existing methods could not answer the questions he cared about. It matured through people who treated measurement, intervention, and engineering as continuous with science. Its defining paper warned against becoming a collection of tricks. Its core contribution is not a credential, a billing code, or a therapy brand. It is a way of treating behavior as lawful without pretending the inner machinery is transparent.
The field does not need to chase every AI trend. It does not need a chatbot for everything. It does not need to pretend that all technological change is progress. It needs to build the instruments that make its own science visible in the world now arriving.
If it does, the future behavior analyst may be more important than the current one: not less human-centered, but more capable; not replaced by measurement, but freed by it; not a prompt engineer in someone else's stack, but an architect of systems that know what behavior analysts mean by behavior.
If it does not, behavior analysis persists — but as vocabulary. Reinforcement, shaping, stimulus control, functional analysis — terms that get absorbed, translated, and rebranded by the engineers, psychologists, and AI labs doing the building. The field's ideas show up in other people's citations the way operant conditioning now shows up in RLHF papers: mentioned once, politely, on the way to somewhere else.