This continuing education event examines whether structured, research-supported decision-making tools—such as the Systematic Worksheet for the Evaluation of Effective Prompting Strategies (SWEEPS)—can be extended beyond prompt selection to support supervision of more complex clinical decision-making, including when and how RBTs should conduct independent skill probes and remove prompts. The training is framed around the experimental question: Can a validated decision-support tool designed for prompting strategy selection be applied to higher-level clinical implementation skills, such as evaluating skill acquisition, determining readiness for independent responding, and reducing prompt dependency risk in applied settings?
Participants will review the development and empirical evaluation of decision-support tools in applied behavior analysis and analyze how these tools address real-world risks such as inconsistent staff decision-making, defaulting to familiar prompting strategies rather than selecting them systematically, and variability in probe implementation. The course will explore whether similar structured frameworks could improve RBT implementation of probe trials, stimulus control transfer procedures, and independence verification within skill acquisition programs.
Through guided discussion and applied case analysis, participants will evaluate common field scenarios involving probe errors, prompt dependency development, and misleading acquisition data. The event will emphasize the role of the supervising BCBA in translating complex clinical decision-making into clear, teachable systems for RBTs while maintaining flexibility for individualized treatment planning.
By the end of the event, participants will be able to describe the research foundation for structured decision-making tools in ABA, evaluate potential applications of these tools to complex skill acquisition supervision tasks, and design supervision supports that improve staff consistency, data integrity, and client independence outcomes.