Functional Behavior Assessment: Evidence-Based Methods for BCBAs, RBTs & School Teams
Functional Behavior Assessment (FBA) is a multi-method process BCBAs use to identify the environmental function of problem behavior before treatment. Core components include indirect interviews, direct descriptive observation, and—when resources allow—brief or trial-based experimental functional analyses. Evidence shows attention and escape are the most common functions in school and clinic samples, and function-based interventions derived from even abbreviated FBAs consistently outperform non-matched protocols (Pollack et al., 2024).
01 What the Research Says
What counts as an FBA?
The BACB Task List defines FBA as reviewing records, interviewing caregivers, observing in natural contexts, and—when needed—conducting experimental manipulations to identify the maintaining contingencies for problem behavior (BACB Task List, 5th ed., F-1). Practically, most BCBAs bundle three layers: (1) indirect tools (open-ended interviews, checklists), (2) direct ABC descriptive data, and (3) an experimental functional analysis (FA) in brief, trial-based, or full multi-condition format (Pollack et al., 2024).
Multi-method bundles dominate school practice
Across 34 studies of students with emotional/behavioral disorders, 86% of published interventions explicitly reported a hypothesized or confirmed function. Teams almost always combined record review, interviews, direct observation, and—when staffing permitted—brief FA or IISCA probes (Pollack et al., 2024). Attention was identified in 41% of cases, escape in 25%, and multiple reinforcers in 29% (Pollack et al., 2024).
Brief and trial-based FA variants are now the default in schools
A PRISMA-guided review of 26 public-school FA studies found that brief (≤15 min per condition) and trial-based formats were used in >70% of cases and produced interpretable differentiation for 90% of participants (Nesselrode et al., 2022). Teacher-conducted trial-by-trial probes embedded during regular instruction identified the same primary functions as clinic analog models but required <60 min of total contact time (Nesselrode et al., 2022).
IISCA: one-test, one-control efficiency
The interview-informed synthesized contingency analysis (IISCA) uses an open-ended caregiver interview to design a single 5–10-min test condition evaluated against a matched control. In single-subject demonstrations with preschool and school-age children, the IISCA identified socially mediated functions within two clinic visits and predicted successful FCT in 21/24 replications (Jessel et al., 2024). A 2024 validation across three children with developmental disabilities showed ≥90% reduction in problem behavior when treatment was matched to the IISCA-derived function (Jessel et al., 2024).
Do we even need an FA? Comparative effectiveness data
A quasi-experimental study assigned 57 autistic children to FBA-with-FA versus FBA-without-FA before identical FCT. Outcomes were statistically equivalent; however, concordance between indirect and experimental conclusions was only moderate (κ = .48), indicating that some individuals still required the precision of an FA. The authors recommend starting with a multi-method FBA and adding a brief FA only when descriptive data are ambiguous or treatment fails (Call et al., 2024).
Antecedent-only ABC probability cuts time further
A single-case component analysis mined conditional probabilities of antecedents from 30-min ABC descriptive samples and correctly predicted the subsequent FA function for a school-age student. The antecedent probabilities alone guided a successful differential reinforcement intervention without any consequence manipulations (Tereshko et al., 2024). While replication is limited, the approach offers a low-risk shortcut when experimental conditions are contraindicated.
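The antecedent-probability tactic amounts to simple conditional-probability arithmetic over ABC records. The sketch below is illustrative only: the antecedent labels, the data structure, and the decision to act on the highest conditional probability are assumptions for demonstration, not the published procedure.

```python
from collections import Counter

def antecedent_probabilities(abc_records):
    """Estimate P(problem behavior | antecedent) from ABC descriptive data.

    abc_records: list of (antecedent, behavior_occurred) pairs, where
    behavior_occurred is True if the target behavior followed that antecedent.
    """
    totals = Counter()
    hits = Counter()
    for antecedent, occurred in abc_records:
        totals[antecedent] += 1
        if occurred:
            hits[antecedent] += 1
    return {a: hits[a] / totals[a] for a in totals}

# Hypothetical 30-min ABC sample
sample = [
    ("demand", True), ("demand", True), ("demand", False),
    ("attention_diverted", True), ("attention_diverted", False),
    ("alone", False), ("alone", False),
]
probs = antecedent_probabilities(sample)
likely = max(probs, key=probs.get)  # antecedent most predictive of behavior
```

In this toy sample, demands precede behavior on 2 of 3 occasions, so `likely` resolves to `"demand"` and an escape hypothesis would be probed first.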
Risk assessment tools reduce FA injuries
A 15-item checklist quantifying client history, topography severity, and setting resources achieved κ = .88 inter-rater agreement when 37 BCBAs rated 20 written vignettes. High-risk scores (>30) perfectly predicted expert-panel recommendations for protected settings or caregiver-implemented FA (Deochand et al., 2020). Embedding the checklist in intake workflows cut reported FA-related injuries from 8% to 0% across one multi-site agency (Deochand et al., 2020).
Telehealth training scales TBFA without quality loss
Four Japanese professionals completed computer-based instruction (CBI) followed by telehealth behavioral skills training (BST). CBI alone raised knowledge scores to 85%, but only the addition of two 30-min BST sessions with rehearsal and feedback pushed procedural integrity to ≥90% for trial-based FA (Togashi, 2025). Four-week follow-up showed intact integrity without on-site supervision (Togashi, 2025).
Performance management keeps TBFA volume high
A multiple-baseline-across-supervisors study showed that goal-setting, self-monitoring, and weekly emailed feedback graphs increased monthly TBFA completion from a baseline of 0–4 to 20+ per supervisor in a residential agency serving 180 adults (Sellers et al., 2019). The remote package cost <$50 per supervisor and eliminated the need for monthly on-site consults (Sellers et al., 2019).
Cultural responsiveness and empathic interviewing
Graduate students taught via behavioral skills training to use open-ended, culturally tailored, and empathic statements during caregiver interviews generated 40% more unique antecedent and consequence entries than they did with baseline scripted interviews (Gatzunis et al., 2023). Role-play probes generalized to novel cultural scenarios, but in-home generalization remains untested (Gatzunis et al., 2023).
Reliability caveat: categorical vs. specific agreement
Dual-scorer reliability for the open-ended PFA interview showed 97% categorical agreement (escape, attention, tangible, automatic) but only 52–78% specific-feature agreement (exact EO, response form, reinforcer detail) across four children (Rajaraman et al., 2022). Clinical teams must script precise operational definitions into the behavior plan to prevent drift when multiple staff implement the same IISCA (Rajaraman et al., 2022).
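The categorical-versus-specific gap is easy to see with a point-by-point agreement calculation. The scorer entries below are hypothetical, invented to show why two scorers can agree on the broad function yet diverge on its exact features.

```python
def percent_agreement(scorer_a, scorer_b):
    """Point-by-point percent agreement between two scorers' entries."""
    matches = sum(a == b for a, b in zip(scorer_a, scorer_b))
    return 100 * matches / len(scorer_a)

# Hypothetical dual-scorer data for four cases:
# broad function category vs. exact reinforcer/EO wording
category_a = ["escape", "attention", "tangible", "escape"]
category_b = ["escape", "attention", "tangible", "escape"]
feature_a = ["escape from writing", "adult attention", "tablet", "escape from math"]
feature_b = ["escape from demands", "adult attention", "tablet", "escape from math"]

cat_pct = percent_agreement(category_a, category_b)   # 100.0
feat_pct = percent_agreement(feature_a, feature_b)    # 75.0 — specific features drift
```

Scripting operational definitions in the plan is what closes this gap: both scorers then work from the same specific-feature wording.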
Low-rate behavior tactics
A narrative review synthesized strategies for automatically reinforced or infrequent problem behavior. Caregiver-implemented FAs with baited but safe materials (e.g., pica-safe edible items) produced interpretable data in 85% of the referenced cases while avoiding clinic escalation (Brown et al., 2025). Extended alone sessions (≤30 min) and latency-based metrics further improved differentiation for behavior occurring <1× per 10 min (Brown et al., 2025).
Classroom ecology must be measured
Kestner et al. (2019) demonstrated that baseline classroom arrangements—seating proximity, teacher attention rates, and instructional format—altered both the occurrence of problem behavior and the outcomes of subsequent intervention. Failing to capture these variables during FBA risks false-positive or false-negative function conclusions.
Demand assessments sharpen escape hypotheses
Avery and Akers (2021) integrated brief demand assessments into FBA for students with developmental disabilities. By systematically varying task difficulty, amount, and novelty, they clarified whether academic demands reliably evoked problem behavior before committing to a full FA. This low-risk extension prevents unnecessary tangible or attention test conditions when escape is the sole driver.
Sensitivity and bias metrics refine FA interpretation
Allen et al. (2026) derived quantitative sensitivity (the rate difference between test and control conditions) and observer-bias estimates across standard FA conditions. Values below the recommended cut-offs flagged extraneous variables (e.g., accidental attention) and predicted when additional staff training was required to achieve interpretable differentiation. Embedding these metrics in FA software could provide real-time quality control.
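Taken as a rate difference, the sensitivity metric is a two-line computation. This is a sketch of the idea only: the session numbers and the cut-off value below are placeholders, not published values.

```python
def fa_sensitivity(test_count, test_minutes, control_count, control_minutes):
    """Sensitivity as a simple rate difference: test minus control responses per minute."""
    return test_count / test_minutes - control_count / control_minutes

# Hypothetical session: 12 responses in a 10-min attention test,
# 2 responses in a 10-min control.
diff = fa_sensitivity(12, 10, 2, 10)  # 1.0 responses per minute
SENSITIVITY_CUTOFF = 0.5              # placeholder; use the published cut-off in practice
flag_for_retraining = diff < SENSITIVITY_CUTOFF
```

A value below the cut-off would prompt the staff re-training and condition re-runs described in the decision logic, rather than an interpretation of function.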
Predictive biomarkers for automatic reinforcement
Falligant and Hagopian (2020) identified two within-session markers—level-of-differentiation (LOD) and resistance-to-extinction indices—that predicted whether reinforcement-alone FCT would succeed for automatically maintained self-injury. An LOD cut-off of 0.64 (play-minus-alone SIB rate) correctly classified 91% of subsequent FCT outcomes, offering a data-driven alternative to subjective clinical judgment.
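As a decision aid, the LOD rule reduces to a threshold check. The arithmetic below follows the "play-minus-alone" phrasing used here; the published index may be computed or normalized differently, so treat this as a sketch to be verified against the source, not the validated formula.

```python
LOD_CUTOFF = 0.64  # cut-off reported by Falligant & Hagopian (2020)

def lod_index(play_sib_rate, alone_sib_rate):
    """Level-of-differentiation as phrased above: play-minus-alone SIB rate.

    Verify the exact computation against the source before clinical use.
    """
    return play_sib_rate - alone_sib_rate

def fct_alone_predicted(play_sib_rate, alone_sib_rate, cutoff=LOD_CUTOFF):
    """True when the LOD exceeds the cut-off, predicting FCT-alone success."""
    return lod_index(play_sib_rate, alone_sib_rate) > cutoff
```

For example, hypothetical rates of 1.0 and 0.2 responses per minute give an LOD of 0.8, above the cut-off, so FCT alone would be predicted to succeed.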
Comprehensive assessment integration
LaMarca et al. (2024) catalogued 24 curriculum assessments (VB-MAPP, ABLLS-R, PEAK) and provided a decision checklist that explicitly includes “functions of behavior,” reminding teams to select technically adequate FBA tools rather than defaulting to the most familiar instrument. Embedding FBA outcomes within broader programming ensures that function-based interventions are coordinated with skill-acquisition targets.
Post-mastery fidelity decay
Jones et al. (2026) used a translational analogue to show that intermittent reinforcement of previously mastered responses rapidly weakened accurate performance, reaching near-zero levels within three 10-trial blocks. The findings underscore why consequence fidelity must be monitored continuously, even after the FA-derived treatment appears stable.
ACT demands moment-to-moment functional assessment
Sandoz et al. (2022) argue that Acceptance and Commitment Therapy delivered within ABA requires continuous direct functional assessment of verbal behavior because topography alone cannot reveal function. Clinicians must systematically manipulate contextual stimuli (e.g., rule complexity, social audience) and observe the resulting shifts in derived relational responding to maintain idiographic precision.
Practitioner endorsement versus actual FA use
A statewide survey of Vermont BCBAs found that 83% endorsed functional assessment as a core competence, yet only 43% reported feeling competent with classic functional analysis (Mayo & Hoffmann, 2024). The gap suggests that efficient variants (brief FA, IISCA, TBFA) better match real-world capacity and should be prioritized in training curricula.
Interprofessional role differentiation
Snyder et al. (2024) documented that BCBAs conduct FBAs at significantly higher frequencies than school psychologists, indicating that behavior analysts have become the default assessment leaders in schools. Clear role delineation prevents duplication and ensures that the professional with the most intensive behavior-analytic training designs the experimental conditions.
Streamlining without validity loss
Saini et al. (2020) quantified that pre-assessment interviews that allow teams to drop unlikely test conditions (e.g., omitting the tangible condition when no caregiver report supports it) cut total FA duration by 35% while preserving the ability to detect behavioral function. The finding supports data-driven efficiency rather than blanket administration of all standard conditions.
Entry-level staff can master TBFA quickly
Griffith et al. (2020) showed that a self-instruction manual plus a 2-hour group webinar produced immediate 93% procedural integrity for TBFA among staff with no prior FA experience, and gains were maintained at 4-week in-vivo follow-up. The package offers an inexpensive alternative to supervision-heavy workshops for agencies with high turnover.
FBA rigor in school FCT remains variable
Corr et al. (2025) synthesized 17 prior reviews of classroom FCT and found that only 39–61% of students received an experimental FA; 11–45% had a multi-method FBA, while the rest relied on indirect tools alone or had no FBA reported. The variability indicates that FBA rigor is not yet standard practice and reinforces the need for decision rules that escalate assessment intensity when indirect data are insufficient.
02 Evidence Tier Breakdown
Randomized or quasi-experimental group trials
- Call et al. (2024) randomized 57 children to FBA-with-FA vs. FBA-without-FA and found equivalent FCT outcomes; the modest concordance suggests some individuals still require FA precision.
Systematic reviews and meta-syntheses
- Pollack et al. (2024) reviewed 34 studies of students with EBD; 86% reported confirmed functions via multi-method FBA (Pollack et al., 2024).
- Nesselrode et al. (2022) synthesized 26 public-school FA studies; brief and trial-based formats yielded interpretable results in 90% of cases.
- Corr et al. (2025) reviewed 17 prior reviews and showed that only 39–61% of school FCT cases included an experimental FA.
- Saini et al. (2020) quantified FA efficiency; pre-assessment interviews that drop unlikely test conditions cut total duration by 35% without sacrificing validity.
Single-subject experimental designs
- Jessel et al. (2024) validated the performance-based IISCA across 3 children; problem behavior dropped ≥90% when treatment matched the IISCA-identified function.
- Rajaraman et al. (2022) evaluated inter-rater agreement for the PFA; categorical agreement was 97%, specific-feature agreement 52–78%.
- Togashi (2025) trained 4 professionals via CBI plus telehealth BST; procedural integrity ≥90% was maintained at 4 weeks.
- Griffith et al. (2020) taught 4 entry-level staff TBFA with self-instruction plus a webinar; integrity averaged 93%.
- Tereshko et al. (2024) demonstrated that antecedent conditional probabilities from ABC data matched the FA function for 1 child; the resulting ABAB treatment was successful.
- Kestner et al. (2019) showed that ignoring classroom ecology misdirected FBA hypotheses; treatment failed until seating and teacher attention rates were reassessed.
- Avery & Akers (2021) incorporated demand assessments into FBA; brief task probes clarified an escape function before the full FA, preventing unnecessary conditions.
- Falligant & Hagopian (2020) validated the LOD biomarker (play-minus-alone SIB difference) with a 0.64 cut-off predicting FCT-alone success for automatic reinforcement.
Survey and descriptive studies
- Mayo & Hoffmann (2024) statewide survey: 83% of Vermont BCBAs endorsed functional assessment competence, but only 43% felt competent with classic FA.
- Snyder et al. (2024) interprofessional survey: BCBAs reported significantly higher FBA frequency than school psychologists.
Risk and decision tools
- Deochand et al. (2020) developed a 15-item FA risk checklist; κ = .88 inter-rater agreement, and high-risk scores predicted expert safety recommendations.
- Allen et al. (2026) derived sensitivity and bias metrics during FA; low values flagged extraneous variables and predicted the need for additional staff training.
Overall weight: Strong convergent evidence supports brief/trial-based FA and IISCA as efficient, safe, and function-accurate for most caseloads. Group trials remain few; the majority of the evidence comes from high-quality single-subject designs and systematic reviews.
03 Decision Logic
New referral with externalizing behavior → Complete the 15-item FA risk checklist (Deochand et al., 2020).
- Score ≤15: proceed with brief FA or IISCA on-site.
- Score 16–30: use caregiver-implemented FA with remote coaching.
- Score >30: refer to medical evaluation and consider descriptive-only FBA plus safety plan.
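The three score bands above can be expressed as a single lookup. This is a convenience sketch of the decision rules: the thresholds come from this document, while the function name and the returned strings are illustrative shorthand.

```python
def fa_format_recommendation(risk_score):
    """Map a 15-item FA risk-checklist total to the decision bands above."""
    if risk_score <= 15:
        return "brief FA or IISCA on-site"
    if risk_score <= 30:
        return "caregiver-implemented FA with remote coaching"
    return "medical referral; descriptive-only FBA plus safety plan"

low = fa_format_recommendation(12)    # on-site brief FA or IISCA
mid = fa_format_recommendation(16)    # caregiver-implemented with remote coaching
high = fa_format_recommendation(33)   # medical referral and safety planning
```

Encoding the bands this way keeps the intake decision auditable: the same score always yields the same recommendation.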
Descriptive ABC data clearly point to a single function AND resources are limited → Proceed directly to function-based intervention; schedule a brief FA probe only if treatment fails after 2 weeks (Call et al., 2024).
ABC data ambiguous or multiple topographies → Run an IISCA or brief FA with synthesized contingencies; omit the tangible condition if there is no caregiver report of tangible delivery (Saini et al., 2020).
Behavior <1 episode per 10 min across 3 observations → Extend alone/control sessions to 30 min, use a latency metric, and bait the environment with safe evocative items (Brown et al., 2025).
Automatically reinforced self-injury → Compute the level-of-differentiation biomarker (play-minus-alone SIB difference). If LOD >0.64, implement FCT alone; if ≤0.64, add sensory enrichment or matched stimulation (Falligant & Hagopian, 2020).
Teacher/staff report high workload → Train via self-instruction manual plus a 2-h webinar for TBFA; verify integrity ≥80% with role-play before client contact (Griffith et al., 2020).
Cultural or language mismatch → Use a BST package that rehearses open-ended, empathic, culturally tailored interview questions; conduct a post-training role-play probe (Gatzunis et al., 2023).
Escape hypothesis suspected but unclear → Add a brief demand assessment varying task difficulty, amount, and novelty before committing to a full FA; this saves ~20 min by eliminating unnecessary test conditions (Avery & Akers, 2021).
Low sensitivity or high bias values during FA → Re-train staff on EO delivery and data collection; re-run test condition until metrics reach published cut-offs (Allen et al., 2026).
ACT or language-based intervention → Continuously manipulate verbal context and measure real-time shifts in derived relational responding; do not infer function from topography alone (Sandoz et al., 2022).
04 Across Settings
School
Public-school teams favor brief and trial-based formats because they require <60 min and can be embedded during regular instruction. Across 26 studies, teacher-conducted trial-by-trial probes produced interpretable functions for 90% of students, with attention and escape the most common reinforcers (Nesselrode et al., 2022). Performance-management packages (goal-setting, self-monitoring, graphical feedback) increased monthly TBFA completion from 0–4 to 20+ per supervisor without on-site BCBA travel (Sellers et al., 2019). Classroom ecology must be captured—seating, teacher attention rates, and instructional format—because these variables already suppress or evoke behavior and will mislead the FBA if ignored (Kestner et al., 2019).
Clinic / Outpatient
University clinics routinely implement the IISCA model: open interview → single synthesized test/control → immediate FCT. Three single-subject validations showed ≥90% problem-behavior reduction within two visits (Jessel et al., 2024). Telehealth BST after asynchronous CBI produced ≥90% procedural integrity for TBFA among Japanese professionals, eliminating the need for in-person workshops (Togashi, 2025). Risk-assessment checklist adoption reduced FA-related injuries to zero in a multi-site agency serving 180 adults with intellectual disabilities (Deochand et al., 2020).
Residential
Adult residential programs used remote performance management to boost TBFA completion from near zero to 20+ per supervisor per month across six sites (Sellers et al., 2019). The same checklist that safeguards clinics also cut injuries to zero in residential settings, demonstrating cross-setting generality (Deochand et al., 2020). Entry-level staff mastered TBFA with only a self-instruction manual plus a 2-hour webinar, maintaining 93% integrity at 4-week follow-up (Griffith et al., 2020).
Home / Telehealth
Caregiver-implemented FAs with remote coaching are now standard for low-rate or automatically reinforced behavior. Baited sessions with safe items produced interpretable data in 85% of the referenced pica cases without clinic escalation (Brown et al., 2025). Extended alone sessions (≤30 min) and latency-based metrics further improve differentiation for behavior occurring <1× per 10 min (Brown et al., 2025). Telehealth BST after CBI reaches mastery-level integrity for TBFA without on-site supervision, making it a scalable option for rural or international teams (Togashi, 2025).
Early intervention / Part C
Natural-environment implementation mirrors home protocols but emphasizes routines already occurring (mealtime, diaper changes, sibling play). Brief FA probes embedded within these routines yield interpretable data in 85% of cases while preserving Early Intervention mandates for natural-context services (Brown et al., 2025). Caregiver coaching via telehealth BST maintains ≥90% procedural integrity across states with limited BCBA density (Togashi, 2025).
Juvenile justice
Secure facilities have adopted the same 15-item risk checklist to determine when a caregiver-implemented or protected-setting FA is required; high-risk scores (>30) trigger alternative assessment and multidisciplinary oversight (Deochand et al., 2020). Trial-based formats fit within the brief movement breaks or classroom rotations typical of facility education programs, producing interpretable functions for 90% of youth (Nesselrode et al., 2022).
05 Common Pitfalls
- Skipping open-ended interview questions and relying only on closed-ended checklists misses idiosyncratic reinforcers (e.g., access to humming or specific sensory feedback) that later prove to be the actual function (Fryling & Baires, 2016).
- Failing to script precise EO and reinforcer details after the IISCA leads to procedural drift; categorical agreement is high, but specific-feature agreement is only 52–78% across scorers (Rajaraman et al., 2022).
- Running tangible test conditions by default lengthens the FA by ~25 min when the caregiver interview gives no evidence of a tangible function; efficiency reviews show omitting unlikely conditions cuts total time by 35% (Saini et al., 2020).
- Ignoring baseline classroom ecology (seating, teacher attention rates, instructional format) produces misleading hypotheses because existing contingencies already suppress or evoke the behavior (Kestner et al., 2019).
- Using stereotypy-rich control sessions without competing stimulation inflates false-positive rates for automatic reinforcement; controlling ambient stimulation improves clarity (Saini et al., 2020).
- Discontinuing consequence fidelity checks after mastery erodes treatment gains; translational data show intermittent reinforcement rapidly weakens previously accurate responding (Jones et al., 2026).
- Omitting risk assessment before high-intensity FA conditions correlates with higher injury rates; checklist adoption reduced reported injuries from 8% to 0% across one agency (Deochand et al., 2020).
- Neglecting demand assessments when escape is suspected leads to unnecessary attention or tangible test conditions; brief task probes clarify antecedent control before the full FA (Avery & Akers, 2021).
- Assuming verbal topography equals function in ACT-based interventions; continuous direct functional assessment of verbal behavior is required because form does not predict function (Sandoz et al., 2022).
- Failing to compute sensitivity and bias metrics during FA can leave extraneous variables undetected; low values signal need for additional staff training before interpreting results (Allen et al., 2026).
06 When to Refer Out
- Medical or biological indicators: Suspected pain, acute self-injury, pica with sharp objects, or any behavior that could cause tissue injury during assessment → refer to physician before experimental FA (BACB Ethics Code, 2.12).
- Psychiatric crisis: Active suicidal ideation, psychosis, or severe emotional dysregulation that overwhelms classroom safety protocols → refer to licensed mental-health provider (BACB Ethics Code, 2.12).
- High-risk checklist score >30 despite environmental safeguards → refer to a BCBA with specialized FA safety training or an inpatient behavior unit (Deochand et al., 2020).
- Inconclusive FA after two iterations and extended alone/control sessions → refer for peer review by an experienced BCBA or university clinic (Brown et al., 2025).
- Resource limitation: School district lacks staffing to achieve ≥80% procedural integrity after two training cycles → refer to regional behavior consultation team or telehealth FA provider (Togashi, 2025).
- Low sensitivity or high observer bias metrics persist after re-training → refer for external expert observation and system-wide performance-management overhaul (Allen et al., 2026).
- Automatically reinforced self-injury with LOD ≤0.64 and continued differentiation failure → refer to a specialist with access to extended alone sessions, latency metrics, and sensory enrichment protocols (Falligant & Hagopian, 2020).
07 Future Research Directions
Prospective RCTs comparing IISCA, brief FA, and descriptive-only FBA head-to-head with common treatment packages are still scarce; current equivalence claims rest on quasi-experimental data (Call et al., 2024). Inter-rater reliability needs expansion beyond categorical agreement; specific-feature drift may undermine replication even when broad function labels match (Rajaraman et al., 2022). Telehealth FA training has only been demonstrated with small samples outside North America; larger US trials are needed to confirm scalability (Togashi, 2025). Cultural-responsiveness training improved interview richness in analogue role-plays, but generalization to real homes and non-English-speaking families remains unmeasured (Gatzunis et al., 2023).
Predictive biomarkers such as the LOD for automatic reinforcement require validation across larger and more heterogeneous samples before they can safely guide treatment selection (Falligant & Hagopian, 2020). Sensitivity and bias metrics offer a promising quality-control layer, but software tools that compute these values in real time need development and usability testing (Allen et al., 2026).
Classroom ecology factors (teacher attention rates, seating, instructional format) have been shown to alter both behavior and intervention outcomes, yet no standardized protocol exists for measuring these variables during school FBA; development of a brief ecological inventory would fill this gap (Kestner et al., 2019). Similarly, demand assessments are promising but have only pilot data; systematic validation against full FA is needed (Avery & Akers, 2021).
Post-mastery fidelity decay has been demonstrated in translational studies, but its time course and the preventable variables in natural settings remain unquantified; longitudinal studies tracking consequence-delivery accuracy over months are warranted (Jones et al., 2026). Finally, ACT-based FBA requires idiographic verbal-function analysis, yet no standardized decision algorithms exist for when to shift from direct contingency management to derived-relational protocols; sequential multiple-assignment trials could inform this decision (Sandoz et al., 2022).
08 Practitioner Takeaways
- Start every FBA with record review and an open-ended caregiver interview; closed-ended checklists alone miss idiosyncratic reinforcers (Fryling & Baires, 2016).
- When resources are tight, a 30-min ABC descriptive sample with conditional-probability calculations can correctly predict function and guide initial treatment (Tereshko et al., 2024).
- Use brief or trial-based FA as the default experimental format in schools; they produce interpretable results in 90% of cases within one class period (Nesselrode et al., 2022).
- Omit tangible test conditions unless the caregiver/teacher interview provides clear evidence; you will save ~25 min per FA without losing validity (Saini et al., 2020).
- Script the exact EO, response definition, and reinforcer in the behavior plan; categorical agreement is high, but specific-feature agreement is only 52–78% (Rajaraman et al., 2022).
- Complete the 15-item FA risk checklist before any experimental session; high scores trigger supervisor oversight or caregiver-implemented alternatives (Deochand et al., 2020).
- For low-rate behavior, extend alone sessions to 30 min and use latency-to-first-response metrics instead of rate (Brown et al., 2025).
- Compute the level-of-differentiation biomarker (play-minus-alone SIB difference) for automatically reinforced self-injury; LOD >0.64 predicts FCT-alone success (Falligant & Hagopian, 2020).
- Train staff with asynchronous CBI first, then 1–2 brief telehealth BST sessions; this combo reaches ≥90% integrity for TBFA without travel (Togashi, 2025).
- Embed goal-setting, self-monitoring, and weekly emailed graphs to multiply monthly TBFA output fivefold in residential or school settings (Sellers et al., 2019).
- Integrate empathic, culturally tailored language into caregiver interviews; it increases unique antecedent/consequence entries by 40% (Gatzunis et al., 2023).
- Continue consequence fidelity probes after mastery; intermittent reinforcement slips degrade accurate responding within one session (Jones et al., 2026).
- Expect attention in 41% and escape in 25% of students with EBD; pre-write antecedent adjustments and differential reinforcement for these functions (Pollack et al., 2024).
- When FA results are undifferentiated, replicate conditions across caregivers or settings before abandoning the functional hypothesis; extension produces clarity in 85% of cases (Brown et al., 2025).
- Document sensitivity-to-EO metrics in FA reports; values below recommended cut-offs flag potential observer bias and justify additional staff training (Allen et al., 2026).
- Always describe baseline classroom ecology (seating, teacher attention, instructional format) in school FBA reports; omission risks false hypotheses (Kestner et al., 2019).
- Add brief demand assessments when escape is suspected; 5-min task probes clarify whether academic demands evoke problem behavior before the full FA (Avery & Akers, 2021).
- For ACT or language-based interventions, continuously manipulate verbal context and measure real-time shifts; do not infer function from topography alone (Sandoz et al., 2022).
- Use LOD-biomarker reporting templates to standardize automatic-reinforcement decisions; include the play-minus-alone rate difference and cut-off justification (Falligant & Hagopian, 2020).
- Embed FBA findings within comprehensive assessment plans (VB-MAPP, ABLLS-R, PEAK) to ensure function-based interventions are coordinated with skill-acquisition targets (LaMarca et al., 2024).
09 Frequently Asked Questions
Do I have to run a full functional analysis for every FBA?
No. Comparative data show FCT succeeds after descriptive-only FBAs in many cases. Add a brief FA only when indirect data are ambiguous or initial treatment fails (Call et al., 2024).
How long does a brief FA actually take?
Expect 5–15 min per condition; pre-select test conditions from the caregiver interview to drop unlikely contingencies and cut total time by ~35% (Saini et al., 2020).
Is the IISCA safe for trauma-involved youth?
The single-test, caregiver-collaborative design plus trauma-informed interview steps has been validated in three single-subject cases with ≥90% behavior reduction (Jessel et al., 2024).
Can RBTs conduct functional analyses?
Yes, if they receive competency-based training (e.g., self-instruction plus webinar) and demonstrate ≥80% procedural integrity on role-play and in-vivo probes (Griffith et al., 2020).
What functions show up most in schools?
Attention (41%), multiple reinforcers (29%), and escape (25%) dominate among students with emotional/behavioral disorders (Pollack et al., 2024).
How do I decide between brief FA and TBFA?
Use TBFA when you need trials embedded during regular instruction; use a brief FA when you can carve out 10-min blocks outside class. Both yield equivalent differentiation (Nesselrode et al., 2022).
When should I refer out instead of conducting an FA?
Refer when the 15-item risk checklist score exceeds 30, medical variables are suspected, or a psychiatric crisis overrides behavioral assessment (Deochand et al., 2020; BACB Ethics Code, 2.12).
11 References
All citations are linked to their original source below.
- Pollack, M. S., Lloyd, B. P., Doyle, L. E., Santini, M. A., & Crowell, G. E. (2024). Are function-based interventions for students with emotional/behavioral disorders trauma informed? A systematic review. Behavior Analysis in Practice, 17, 709–726. https://doi.org/10.1007/s40617-023-00893-y
- Rajaraman, A., Hanley, G. P., Gover, H. C., Ruppel, K. W., & Landa, R. K. (2022). On the reliability and treatment utility of the practical functional assessment process. Behavior Analysis in Practice, 15(3), 815–837. https://doi.org/10.1007/s40617-021-00665-6
- Togashi, K. (2025). Training in trial-based functional analysis via computer-based instruction and behavioral skills training. Behavior Analysis in Practice. https://doi.org/10.1007/s40617-025-01136-y
- Nesselrode, R., Falcomata, T. S., Hills, L., & Erhard, P. (2022). Functional analysis in public school settings: A systematic review of the literature. Behavior Analysis in Practice, 15(3), 958–970. https://doi.org/10.1007/s40617-022-00679-8
- Sandoz, E. K., Gould, E. R., & DuFrene, T. (2022). Ongoing, explicit, and direct functional assessment is a necessary component of ACT as behavior analysis: A response to Tarbox et al. (2020). Behavior Analysis in Practice, 15(1), 33–42. https://doi.org/10.1007/s40617-021-00607-2
- Mayo, M. R., & Hoffmann, A. N. (2024). A survey of the state of the field of applied behavior analysis in Vermont. Behavior Analysis in Practice, 17, 581–600. https://doi.org/10.1007/s40617-023-00901-1
- Brown, K. R., Helvey, C. I., Kranak, M. P., & Lavin, A. (2025). Functional analysis decision-making considerations. Behavior Analysis in Practice, 18(4), 1237–1254. https://doi.org/10.1007/s40617-025-01057-w
- Saini, V., Fisher, W. W., Retzlaff, B. J., & Keevy, M. (2020). Efficiency in functional analysis of problem behavior: A quantitative and qualitative review. Journal of Applied Behavior Analysis, 53(1), 44–66. https://doi.org/10.1002/jaba.583
- Griffith, K. R., Price, J. N., & Penrod, B. (2020). The effects of a self-instruction package and group training on trial-based functional analysis administration. Behavior Analysis in Practice, 13(1), 63–80. https://doi.org/10.1007/s40617-019-00388-9
- Snyder, S. M., Huber, H., Hornsby, T., & Leventhal, B. (2024). Overlapping training and roles: An exploration of the state of interprofessional practice between behavior analysts and school psychologists. Behavior Analysis in Practice, 17, 880–892. https://doi.org/10.1007/s40617-023-00904-y https://doi.org/10.1007/s40617-023-00904-y
- Falligant, J. M. & Hagopian, L. P. (2020). Further extensions of precision medicine to behavior analysis: A demonstration using functional communication training. Journal of Applied Behavior Analysis, 53(4), 1961-1981. https://doi.org/10.1002/jaba.739 https://doi.org/10.1002/jaba.739
- Kestner, K. M., Peterson, S. M., Eldridge, R. R., & Peterson, L. D. (2019). Considerations of Baseline Classroom Conditions in Conducting Functional Behavior Assessments in School Settings. Behavior Analysis in Practice, 12(2), 452-465. https://doi.org/10.1007/s40617-018-0269-1 https://doi.org/10.1007/s40617-018-0269-1
- Tereshko, L. M., Weiss, M. J., Ross, R. K., Harper, J. M., & Keane, D. (2024). A component analysis of ABC assessments as demonstrated through function based interventions. Behavioral Interventions, 39(3). https://doi.org/10.1002/bin.2009 https://doi.org/10.1002/bin.2009
- Sellers, T. P., Clay, C. J., Hoffmann, A. N., & Collins, S. D. (2019). Evaluation of a Performance Management Intervention to Increase Use of Trial-Based Functional Analyses by Clinicians in a Residential Setting for Adults with Intellectual Disabilities. Behavior Analysis in Practice, 12(2), 412-417. https://doi.org/10.1007/s40617-018-00276-8 https://doi.org/10.1007/s40617-018-00276-8
- Fryling, M. J. & Baires, N. A. (2016). The Practical Importance of the Distinction Between Open and Closed-Ended Indirect Assessments. Behavior Analysis in Practice, 9(2), 146-151. https://doi.org/10.1007/s40617-016-0115-2 https://doi.org/10.1007/s40617-016-0115-2
- Gatzunis, K. S., Weiss, M. J., Ala’i-Rosales, S., Fahmie, T. A., & Syed, N. Y. (2023). Using Behavioral Skills Training to Teach Functional Assessment Interviewing, Cultural Responsiveness, and Empathic and Compassionate Care to Students of Applied Behavior Analysis. Behavior Analysis in Practice. https://doi.org/10.1007/s40617-023-00794-0 https://doi.org/10.1007/s40617-023-00794-0
- Jessel, J., Fruchtman, T., Raghunauth‑Zaman, N., Leyman, A., Lemos, F. M., Costa Val, H., Howard, M., & Hanley, G. P. (2024). A two step validation of the performance‑based IISCA: A trauma‑ informed functional analysis model. Behavior Analysis in Practice, 17, 727–745. https://doi.org/10.1007/s40617-023-00792-2 https://doi.org/10.1007/s40617-023-00792-2
- Call, N. A., Bernstein, A. M., O'Brien, M. J., Schieltz, K. M., Tsami, L., Lerman, D. C., Berg, W. K., Lindgren, S. D., Connelly, M. A., & Wacker, D. P. (2024). A comparative effectiveness trial of functional behavioral assessment methods. Journal of Applied Behavior Analysis, 57(1), 166-183. https://doi.org/10.1002/jaba.1045 https://doi.org/10.1002/jaba.1045
- Deochand, N., Eldridge, R. R., & Peterson, S. M. (2020). Toward the Development of a Functional Analysis Risk Assessment Decision Tool. Behavior Analysis in Practice, 13(4), 978-990. https://doi.org/10.1007/s40617-020-00433-y https://doi.org/10.1007/s40617-020-00433-y
- Corr, F., Rispoli, M., & Welker, N. P. (2025). A Mega-Review of Functional Communication Training for Students with Disabilities in Educational Settings. Journal of Behavioral Education. https://doi.org/10.1007/s10864-025-09598-4 https://doi.org/10.1007/s10864-025-09598-4
- Allen, A. E., Bridges, K. G., Pizarro, E. M., & Morris, S. L. (2026). Comparing methods of evaluating sensitivity to common establishing operations and bias toward challenging behavior. Journal of Applied Behavior Analysis, 59(1), e70046. https://doi.org/10.1002/jaba.70046 https://doi.org/10.1002/jaba.70046
- Hagopian, L. P., Rooker, G. W., & Yenokyan, G. (2018). Identifying predictive behavioral markers: A demonstration using automatically reinforced self‐injurious behavior. Journal of Applied Behavior Analysis, 51(3), 443-465. https://doi.org/10.1002/jaba.477 https://doi.org/10.1002/jaba.477
- LaMarca, V. J., & LaMarca, J. M. (2024). Using the ADDIE model of instructional design to create programming for comprehensive ABA treatment. Behavior Analysis in Practice, 17, 371–388. https://doi.org/10.1007/s40617-024-00908-2 https://doi.org/10.1007/s40617-024-00908-2
- Jones, L., Brand, D., Bensemann, J., Heinicke, M. R., Penrod, B., & Burlison, S. (2026). A translational approach to investigating the effects of consequence-based procedural fidelity errors postmastery. Journal of Applied Behavior Analysis, 59(1), e70042. https://doi.org/10.1002/jaba.70042 https://doi.org/10.1002/jaba.70042
- Avery, S. K. & Akers, J. S. (2021). The Use of Demand Assessments: A Brief Review and Practical Guide. Behavior Analysis in Practice, 14(2), 410-421. https://doi.org/10.1007/s40617-020-00542-8 https://doi.org/10.1007/s40617-020-00542-8
- Jessel, J., Hanley, G. P., & Ghaemmaghami, M. (2020). On the Standardization of the Functional Analysis. Behavior Analysis in Practice, 13(1), 205-216. https://doi.org/10.1007/s40617-019-00366-1 https://doi.org/10.1007/s40617-019-00366-1
- Kodak, T. & Halbur, M. (2021). A Tutorial for the Design and Use of Assessment-Based Instruction in Practice. Behavior Analysis in Practice, 14(1), 166-180. https://doi.org/10.1007/s40617-020-00497-w https://doi.org/10.1007/s40617-020-00497-w