
Single-subject design

Single-subject design, also known as single-case or single-subject experimental design, is a rigorous methodology that evaluates the effects of interventions on individual participants by using each participant as their own control, through repeated measurements of behavior or outcomes across baseline and intervention phases to establish experimental control and demonstrate functional relationships. This approach places the unit of analysis at the individual level, distinguishing it from group-based designs by emphasizing intra-subject variability and replication within the same participant rather than statistical aggregation across groups. Originating in the fields of behavior analysis and communication sciences and disorders in the mid-20th century, single-subject design gained prominence in the 1960s with early applications in fluency treatments and behavior therapy, and it flourished in special education and counseling from the late 1970s onward, particularly through pioneering work at university-based research programs. Key features include the establishment of baseline stability with at least three data points, systematic manipulation of the independent variable (e.g., introduction or withdrawal of an intervention), and visual analysis of graphed data to evaluate changes in level, trend, and variability of the dependent variable. Common design types encompass the simple A-B design (baseline followed by intervention), reversal or withdrawal designs (e.g., A-B-A-B to reinstate baseline for comparison), multiple-baseline designs (intervention staggered across behaviors, settings, or subjects), and changing-criterion designs (gradually shifting performance criteria). Single-subject designs are particularly valuable in evidence-based practice for their strong internal validity in demonstrating causal effects at the individual level, enabling quick data-driven adjustments in clinical or educational settings, and supporting generalization through replication across multiple participants or contexts, though they have limited external validity without such extensions. Applications span psychology, education, and rehabilitation sciences, such as assessing behavioral interventions for autism (e.g., positive reinforcement in picture exchange communication systems), increasing speech volume in clients with communication disorders, or evaluating language intervention effects, often serving as a precursor to larger randomized controlled trials. Despite their utility, challenges include ethical considerations in withdrawal designs and historically inconsistent reporting, underscoring the importance of standardized reporting guidelines like those from the What Works Clearinghouse.

Fundamentals

Definition and Purpose

Single-subject design is an experimental methodology that focuses on a single participant, who serves as their own control, to evaluate the effects of an independent variable—such as an intervention—on a dependent variable, such as behavior. This approach employs repeated measurements over time to capture an individual's variability and assess changes attributable to the intervention. The primary purpose of single-subject design is to demonstrate functional relationships between interventions and individual outcomes, emphasizing an idiographic perspective that prioritizes unique, person-specific responses over generalizations derived from group data. By enabling detailed analysis of intra-individual changes, it supports evidence-based practices in fields like special education and counseling, particularly when group studies are impractical due to small or heterogeneous samples. At its core, the logic of single-subject design involves systematic manipulation of experimental conditions through within-subject comparisons, such as alternating baseline and intervention phases, to establish experimental control and rule out alternative explanations for observed effects. For example, in education, researchers might use this method to test the impact of paced reading instruction on a third-grade student's acquisition of reading skills, tracking improvements in reading rate from baseline levels to post-intervention performance.

Key Principles

Single-subject designs rely on a set of foundational principles derived from baseline logic to establish experimental control and infer causality from intervention effects on behavior. These principles—prediction, verification, and replication—enable researchers to systematically demonstrate that observed changes are due to the independent variable rather than extraneous factors. The principle of prediction involves establishing a baseline phase through repeated measurement of the dependent variable prior to introducing the intervention, typically with at least three data points to ensure stability characterized by low variability and minimal trends. This baseline provides a reliable forecast of how the behavior would continue in the absence of any manipulation, allowing researchers to anticipate the expected pattern if no change occurs. The principle of verification tests the accuracy of the prediction by implementing the intervention and observing whether the dependent variable changes in a manner consistent with the expected impact. If the behavior shifts predictably upon intervention introduction—such as an increase in a target skill or decrease in an undesired response—this verifies that the intervention is responsible for the alteration, rather than coincidental events. Verification strengthens internal validity by confirming the intervention's role in disrupting the predicted trajectory. The principle of replication further bolsters evidence by repeating the conditions of prediction and verification multiple times, either within the same phase sequence or across different elements of the study. This repetition demonstrates the consistency of effects, reducing the likelihood of spurious results and building a robust case for causality. Seminal work by Sidman emphasized replication as the cornerstone of the experimental analysis of behavior, arguing that repeated demonstrations within and across studies are necessary to evaluate behavioral relations reliably. Replication occurs in two primary forms: intra-subject replication, which involves repeating the intervention effects within the same participant across successive phases (e.g., alternating baseline and intervention conditions), and inter-subject replication, which extends the effects across multiple individuals, behaviors, or settings to generalize findings. Intra-subject replication provides strong evidence of experimental control for a single case by showing consistent behavioral changes tied to phase transitions, while inter-subject replication enhances external validity by confirming the intervention's efficacy beyond one participant. Together, these forms ensure that effects are not idiosyncratic but reflect functional relations applicable in applied contexts. Control in single-subject designs is achieved primarily through rigorous baseline measurement, which requires ongoing, frequent observation to establish a clear reference point for comparison. Stable baselines, obtained via repeated probes under consistent conditions, allow researchers to isolate intervention influences by contrasting pre- and post-manipulation data patterns. Without such controlled baselines, variables like maturation or history could mimic intervention effects, undermining the design's ability to demonstrate functional relations.
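Because judgments of baseline stability recur throughout these designs, a small worked example can make the idea concrete. The following Python sketch applies one common heuristic (requiring 80% of baseline points to fall within 25% of the baseline median); the envelope width and proportion are illustrative assumptions rather than fixed standards, and researchers set such thresholds a priori.

```python
# Minimal sketch of a baseline-stability heuristic: a baseline is judged
# stable if a set proportion of points falls inside an envelope around the
# baseline median. Thresholds here (25% envelope, 80% of points) are
# illustrative assumptions, not universal standards.
def is_stable(baseline, envelope=0.25, proportion=0.80):
    """Return True if enough baseline points fall inside the stability envelope."""
    if len(baseline) < 3:                      # at least three data points required
        return False
    median = sorted(baseline)[len(baseline) // 2]
    lower, upper = median * (1 - envelope), median * (1 + envelope)
    inside = sum(lower <= y <= upper for y in baseline)
    return inside / len(baseline) >= proportion

print(is_stable([12, 11, 13, 12]))   # True: low variability around the median
print(is_stable([2, 9, 4, 14]))      # False: high variability
```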

Types of Designs

Reversal Design

The reversal design, also known as the withdrawal or ABAB design, is a single-subject experimental approach that systematically alternates between a baseline condition (A) and an intervention condition (B) to evaluate the effects of an independent variable on behavior. The structure typically progresses from an initial baseline phase, where the target behavior is measured without intervention, to an intervention phase, followed by withdrawal back to baseline conditions, and finally reinstatement of the intervention to replicate observed effects. This sequence, often requiring at least four phases with multiple data points per phase for rigor, enables within-subject comparisons to isolate the intervention's impact. Procedures for implementing the reversal design emphasize establishing baseline stability prior to intervention introduction, typically through collection of at least three data points demonstrating consistent levels, trends, or variability in the dependent variable. Once stability is confirmed, the intervention is applied, and ongoing measurement tracks changes in behavior; if a reliable effect emerges, the intervention is withdrawn to assess reversion toward baseline levels. The intervention is then reintroduced to verify replication, with data visually inspected for immediate, clear shifts across phases to confirm experimental control. Stability in each phase is crucial, often requiring extended measurement to rule out extraneous influences. The rationale underlying the reversal design centers on demonstrating functional relations, wherein predictable behavioral changes occur contingent on the introduction, withdrawal, and reintroduction of the intervention, thereby establishing the intervention as the controlling variable. It is particularly suited to reversible behaviors, such as those modifiable through contingent reinforcement, as it provides strong internal validity through intra-subject replication without needing multiple participants. This design aligns with the analytic dimension of applied behavior analysis, prioritizing experimental demonstration of causality over correlational evidence. A representative example involves evaluating a token reinforcement program to decrease disruptive classroom behaviors, such as out-of-seat and talking-out incidents, among elementary students in an adjustment class. During the baseline phase, observations revealed high disruption rates (e.g., averaging over 80% of intervals for talking-out); introduction of tokens exchangeable for privileges reduced these to near-zero levels, with behaviors returning to baseline upon withdrawal and decreasing again during reinstatement, thus confirming the program's efficacy. Variations of the reversal design address practical constraints, such as brief reversals that limit withdrawal duration to ethical minimums or non-reversal adaptations (e.g., using differential reinforcement of alternative behaviors) when full withdrawal risks harm or irreversibility. These modifications maintain the design's core logic while accommodating real-world applications in educational or clinical settings.
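To make the phase logic concrete, the Python sketch below summarizes a hypothetical A-B-A-B dataset loosely modeled on the token-program example; the disruption percentages are invented, and a real analysis would also weigh trend, variability, and immediacy rather than phase means alone.

```python
# Minimal sketch of an A-B-A-B phase summary with hypothetical disruption
# percentages (share of observation intervals containing disruptive behavior).
phases = {
    "A1 (baseline)":      [82, 85, 80, 84],
    "B1 (token program)": [35, 18, 10, 7],
    "A2 (withdrawal)":    [70, 76, 79],
    "B2 (reinstatement)": [20, 9, 6, 5],
}
for name, data in phases.items():
    mean = sum(data) / len(data)
    print(f"{name}: mean = {mean:.1f}% of intervals disruptive")
```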

Alternating Treatments Design

The alternating treatments design, also referred to as the multielement design, enables the comparison of two or more interventions within a single subject through rapid alternation of conditions, often beginning with an initial probe rather than a prolonged baseline phase. This structure facilitates direct side-by-side evaluation of treatment effects without the need for sequential phases typical in other designs. For example, conditions might alternate in a sequence such as A-B-A-B or a counterbalanced order like A-B-B-A to ensure each treatment is applied multiple times. Unlike the reversal design, which involves introducing and withdrawing a single intervention sequentially, the alternating treatments design applies multiple treatments concurrently across sessions, thereby avoiding complete withdrawal of effective treatments. Procedures for implementing this design emphasize minimizing biases through careful sequencing and counterbalancing. Treatments are alternated rapidly—often daily or even within the same day (e.g., morning versus afternoon sessions)—using random or systematic orders with counterbalancing to reduce sequence effects, such as order of presentation influencing outcomes. Distinct discriminative stimuli, like different settings or materials, signal each condition to the participant. Data on the target behavior are gathered per session and plotted separately for each treatment, allowing visual comparison of differences in levels, trends, or variability. This approach supports replication across conditions by providing multiple exposures to each treatment within the same subject. The rationale for the alternating treatments design lies in its efficiency for identifying the most effective intervention among options, particularly when reversal is impractical due to ethical concerns or irreversible changes. By comparing effects in close proximity, it accelerates decision-making in applied settings like education or clinical practice, reducing the time and resources needed compared to designs requiring stable baselines or staggered introductions. In contrast to the multiple baseline design, which delays interventions across behaviors, subjects, or settings to demonstrate control, this design focuses comparisons within a single subject or setting for quicker differentiation. A representative example involves evaluating reinforcers for a child's low task completion rates in a classroom setting. One condition applies verbal praise contingent on completed tasks, while the other uses token rewards; these are alternated across sessions, revealing verbal praise as more effective if completion rates are consistently higher under that condition, guiding selection of the optimal strategy. Specific limitations include potential confounding from sequence effects, where the order of treatments impacts results, or carryover effects, where one intervention's influence persists into the next—though these can be mitigated via counterbalancing or spacing.
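A schedule generator shows how such counterbalancing might be operationalized. This Python sketch, under the assumed restriction that no condition runs more than twice in a row (actual rules vary by study), randomizes session order for two hypothetical conditions:

```python
import random

# Minimal sketch: build a randomized session schedule for two hypothetical
# treatments ("praise" and "tokens") with an assumed restriction that no
# condition may run more than twice consecutively; specific counterbalancing
# rules differ across studies.
def alternating_schedule(treatments=("praise", "tokens"), sessions=10, max_run=2, seed=1):
    rng = random.Random(seed)
    schedule = []
    while len(schedule) < sessions:
        choice = rng.choice(treatments)
        if schedule[-max_run:] == [choice] * max_run:   # would create too long a run
            continue
        schedule.append(choice)
    return schedule

print(alternating_schedule())
```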

Multiple Baseline Design

The multiple baseline design involves the staggered introduction of an intervention across multiple concurrent baselines, typically three or more, to establish experimental control in applied research. This approach originated as an alternative to reversal designs in early applied behavior analysis, as described by Baer, Wolf, and Risley in their seminal 1968 paper outlining dimensions of the field. By applying the intervention sequentially to different baselines while keeping others unchanged, the design demonstrates that behavior changes occur only in response to the intervention, not extraneous factors. Procedures for implementing a multiple baseline design begin with simultaneous measurement to establish stable baselines across all selected conditions, ensuring sufficient data points (often 3–5) to demonstrate stable variability and trend before intervention. Once stability is achieved in the first baseline, the intervention is introduced there, while measurement continues on the remaining baselines without change. The process repeats sequentially for each subsequent baseline, with the timing of intervention staggered to allow replication of effects, typically after 3–7 sessions of stable responding in prior conditions; a minimal sketch of this staggered logic follows the example below. This staggered application verifies control through the temporal alignment of behavior changes with intervention onset across baselines. The rationale for the design lies in its ability to control for threats to internal validity, such as maturation, history, or external events, by showing that effects are specific to the intervened baseline and do not generalize prematurely to untreated ones. It is particularly suitable for behaviors where withdrawal or reversal of the intervention would be unethical or impractical, such as skill acquisition that is irreversible once learned. Unlike reversal designs, it avoids potential carryover effects from treatment removal, making it ideal for demonstrating functional relations in applied settings like education or clinical practice. Multiple baseline designs are categorized by the dimension along which baselines are applied:
  • Across behaviors: The intervention targets different but related behaviors within the same subject, such as improving social initiations and compliance sequentially in a child with autism.
  • Across subjects: The same intervention is applied to identical behaviors in multiple participants, staggered by individual, as in evaluating a prevention program across three schools where treatment reduced aggressive incidents only after implementation in each.
  • Across settings: The intervention addresses the same behavior in one subject across different environments, such as increasing a student's on-task performance sequentially across different settings like the classroom and home.
A representative example is the use of self-monitoring to enhance on-task behavior in elementary students. In a multiple baseline across subjects design, Moore, Anderson, and Cross (2008) implemented a tactile self-monitoring device (MotivAider) for three students with attention difficulties; baselines showed low on-task rates (20–40%), with immediate increases to 80–95% following staggered introduction, maintaining effects during follow-up.
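The Python sketch below, using invented on-task percentages and onset sessions, illustrates the staggered logic referenced above: each series holds at its baseline level until its own intervention onset, so the change aligns with onset rather than with any shared external event.

```python
# Minimal sketch of staggered intervention onset across three hypothetical
# students (invented on-task percentages): behavior stays at the baseline
# level until each student's own onset session, then shifts.
starts = {"Student 1": 4, "Student 2": 7, "Student 3": 10}   # onset index per student
sessions = 13
for name, start in starts.items():
    data = [30] * start + [88] * (sessions - start)   # flat A level, then B level
    a, b = data[:start], data[start:]
    print(f"{name}: A mean = {sum(a)/len(a):.0f}%, "
          f"B mean = {sum(b)/len(b):.0f}%, onset = session {start + 1}")
```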

Changing Criterion Design

The changing criterion design is a single-case experimental design used to evaluate interventions that shape target behaviors through successive approximations by systematically altering reinforcement criteria across phases. It typically follows a structure of an initial baseline phase (A) followed by multiple intervention phases (B1, B2, B3, etc.), where each intervention phase introduces a progressively modified criterion for reinforcement, such as increasing the required frequency or duration of the behavior. This stepwise progression allows researchers to demonstrate functional control by showing that behavior changes correspond directly to the criterion adjustments, with stable responding at each level before advancing. Procedures for implementing the design begin with establishing a stable baseline to determine the initial behavior level, from which the first criterion is set—often as the next achievable increment above the baseline mean, such as a small percentage increase (e.g., 10-25%) to ensure success and avoid frustration. Reinforcement is provided contingent on meeting the criterion for a sufficient number of sessions (typically three or more) until stability is achieved, after which the criterion is raised or lowered for the next phase; phase lengths and step sizes are adjusted based on behavior variability and intervention goals to maintain experimental validity. At least three criterion changes are recommended to provide multiple demonstrations of effect, with optional mini-reversals (brief returns to prior criteria) to further verify control. The rationale for this design lies in its suitability for behaviors requiring gradual modification, such as skill acquisition or habit formation, where abrupt changes might be ineffective or unethical; it establishes experimental control by linking observed behavior shifts exclusively to the criterion manipulations, replicating the effect across phases similar to phase shifts in other single-subject designs. This approach is particularly effective for accelerating or decelerating quantifiable behaviors, providing clear evidence of intervention efficacy through predictable, stepwise improvements. A representative example involves gradually increasing daily exercise duration in an individual from a baseline of 10 minutes to 30 minutes, with criteria set at 15 minutes (B1), 20 minutes (B2), and 25 minutes (B3), reinforced by contingent tokens or praise at each level until stability, demonstrating how the intervention systematically builds the habit. Variations of the design include the range-bound changing criterion, which uses upper and lower limits per phase to accommodate natural variability in performance, and multiple changing criteria applied simultaneously to different dimensions of a complex behavior, such as both frequency and accuracy in a skill-building task.
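The stepwise contingency can be expressed compactly in code. This Python sketch uses the exercise example's criterion levels (15, 20, and 25 minutes) with invented session durations, and checks whether each session meets the phase criterion, the condition that would determine reinforcement delivery in practice:

```python
# Minimal sketch of a changing-criterion schedule for the exercise example:
# criteria step upward across phases, and each session's duration is checked
# against the current criterion (all durations are invented for illustration).
criteria = {"B1": 15, "B2": 20, "B3": 25}                    # minutes required
observed = {"B1": [15, 16, 15], "B2": [20, 21, 20], "B3": [24, 26, 25]}

for phase, criterion in criteria.items():
    met = [minutes >= criterion for minutes in observed[phase]]
    print(f"{phase}: criterion = {criterion} min, "
          f"sessions meeting it = {sum(met)}/{len(met)}")
```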

Standards and Best Practices

Design Quality Criteria

The quality of single-subject designs is evaluated through established standards that ensure internal validity, reliability of measurements, and the potential for broader applicability. The What Works Clearinghouse (WWC), developed by the Institute of Education Sciences, provides a widely adopted framework for assessing single-case research, categorizing studies as meeting standards, meeting standards with reservations, or not meeting standards based on design features that control for alternative explanations of results. Central to WWC standards are requirements for sufficient data collection and phase stability to demonstrate experimental control. Each phase, including baseline and intervention, must include at least three data points, with a preference for five or more to allow for clear assessment of level, trend, and variability; initial baseline phases in designs like multiple baseline often require at least six points for higher ratings. Baseline stability is essential, defined as a consistent pattern without systematic trends or excessive variability that could confound intervention effects, enabling reliable comparisons across phases. Intervention consistency demands that the independent variable be applied uniformly, with procedural fidelity documented to rule out implementation errors as explanations for outcomes. Internal validity in single-subject designs is strengthened by structural elements that minimize common threats such as history (external events influencing outcomes) and maturation (natural changes over time). Reversal designs address these by withdrawing and reintroducing the intervention, replicating effects to confirm control and distinguish intervention impact from extraneous factors. Multiple baseline designs stagger intervention onset across behaviors, settings, or participants, preventing maturation or history from uniformly affecting all cases and thus isolating the intervention's role. Consistent replication within a study—typically at least three demonstrations of effect—further bolsters internal validity by countering threats like testing (familiarity with measures) or instrumentation (changes in measurement tools). External validity, or the generalizability of findings beyond the specific case, is achieved primarily through systematic replication rather than large samples. Effects must be demonstrated across at least three participants, settings, or behaviors within a study to suggest broader applicability, with further replication in independent studies enhancing confidence in generality. Detailed operational definitions of participants, interventions, and contexts facilitate this extension, allowing researchers to assess how well conditions match real-world scenarios. Best practices for rigorous single-subject designs include incorporating randomization where feasible, particularly in alternating treatments designs, to assign intervention order randomly and reduce sequence effects or bias. Clear operational definitions of dependent and independent variables are also critical, specifying exact measurement procedures and criteria to ensure replicability and minimize ambiguity in interpreting results.
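As a simplified illustration of the data-point thresholds only (a full WWC review also weighs design type, fidelity documentation, and interobserver agreement), a minimal Python screen might look like this:

```python
# Simplified sketch of the WWC data-point screen: each phase needs at least
# three points to meet standards, and five or more for the higher rating.
# This is one criterion among several, not a complete WWC review.
def wwc_phase_rating(phases):
    counts = [len(phase) for phase in phases]
    if min(counts) >= 5:
        return "meets standards"
    if min(counts) >= 3:
        return "meets standards with reservations"
    return "does not meet standards"

print(wwc_phase_rating([[4, 5, 4, 5, 6], [9, 10, 11, 12, 11]]))  # meets standards
print(wwc_phase_rating([[4, 5, 4], [9, 10, 11, 12]]))            # with reservations
```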

Effect Size Measures

In single-subject designs, effect size measures provide quantitative estimates of the magnitude of intervention effects, enabling objective evaluation and comparison across studies beyond visual analysis. These non-parametric and parametric indices address limitations of visual inspection by accounting for data overlap, trends, and variability, facilitating meta-analyses and evidence synthesis in fields like education and behavior analysis. Seminal developments emphasize non-overlap methods for their simplicity and robustness to small sample sizes typical in single-subject research. The percentage of non-overlapping data (PND) is a foundational non-parametric measure that quantifies the proportion of intervention-phase data points that exceed the highest baseline-phase value (for behaviors where increases are desirable). It is calculated using the formula: \text{PND} = \left( \frac{\text{number of intervention data points exceeding highest baseline point}}{\text{total intervention points}} \right) \times 100 PND ranges from 0% (complete overlap, no effect) to 100% (no overlap, strong effect), with interpretations classifying scores of 90% or higher as large effects, 70–89% as moderate, and below 70% as small or ineffective. Introduced for aggregating single-subject data in meta-analyses, PND is straightforward but sensitive to baseline extremes and ignores trends. The percentage of all non-overlapping data (PAND) extends PND by incorporating all data points from both phases to assess overlap more comprehensively, calculating the minimal number of points that must be removed to achieve complete non-overlap, then deriving the percentage of remaining non-overlapping data. PAND = 100 - (% overlap), where % overlap is determined via a 2×2 table of phase comparisons, often yielding a related phi coefficient for effect size estimation. This approach remedies PND's overreliance on a single extreme, providing a more stable estimate linked to recognized effect sizes like phi, complete with confidence intervals (e.g., a PAND of 78.6% corresponds to phi ≈ 0.57). PAND is particularly useful for complex designs like multiple baseline, enhancing reliability in effect quantification. Tau-U is a trend-adjusted non-overlap measure that integrates between-phase non-overlap with within-phase trend control, particularly baseline trend, to better capture intervention impacts in trending data. It is computed as the proportion of pairwise comparisons where intervention-phase values exceed baseline-phase values, minus the Kendall's tau correlation within the baseline phase: \tau_U = \frac{U}{n_A n_B} - \tau_A where U is the Mann-Whitney U statistic (number of favorable pairs), n_A and n_B are the lengths of the baseline and intervention phases, and \tau_A is the baseline trend. Ranging from -1 (complete deterioration) to 1 (complete improvement), Tau-U outperforms PND by avoiding ceiling effects in no-trend scenarios and handling baseline trend, making it suitable for autocorrelated time-series data common in single-subject studies. Field tests across hundreds of series show it yields modest but consistent adjustments for trends compared to unadjusted non-overlap indices. Other notable metrics include the improvement rate difference (IRD), a robust non-overlap index calculated as the difference between the proportion of pairwise improvements in the intervention phase and the baseline phase: IRD = P(B > A) - P(B < A), where P denotes the proportion of favorable cross-phase comparisons (assuming no ties). IRD ranges from -1 to 1, with values above 0.70 indicating large effects, and is valued for its resistance to outliers and trends, facilitating synthesis in systematic reviews.
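The non-overlap indices above translate directly into code. This Python sketch computes PND and the Tau-U formula as defined in this section, using invented data; published tools such as the SingleCaseES R package handle ties, trend corrections, and inference more carefully than this illustration.

```python
# Minimal sketch of two non-overlap indices defined above, with invented data.
def pnd(a, b):
    """Percent of intervention (B) points exceeding the highest baseline (A) point."""
    return 100 * sum(y > max(a) for y in b) / len(b)

def tau_u(a, b):
    """Tau-U per the formula above: cross-phase non-overlap minus baseline trend."""
    u = sum(yb > ya for ya in a for yb in b)              # favorable cross-phase pairs
    pairs = [(i, j) for i in range(len(a)) for j in range(i + 1, len(a))]
    tau_a = sum((a[j] > a[i]) - (a[j] < a[i]) for i, j in pairs) / len(pairs)
    return u / (len(a) * len(b)) - tau_a

baseline, treatment = [3, 4, 3, 5], [6, 7, 9, 8, 10]
print(f"PND = {pnd(baseline, treatment):.0f}%")     # 100: no overlapping points
print(f"Tau-U = {tau_u(baseline, treatment):.2f}")  # 0.50 after trend adjustment
```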
For time-series data, the standardized mean difference (SMD) offers a parametric alternative comparable to Cohen's d in group designs, defined as: d = \left(1 - \frac{3}{4m - 5}\right) \frac{\bar{y}_B - \bar{y}_A}{s_A} where m is the baseline length, \bar{y}_B and \bar{y}_A are phase means, and s_A is the baseline standard deviation (with a pooled variant available if variances are equal). SMD quantifies effects in standard deviation units, enabling cross-design comparisons, though it assumes normality and requires longer phases for stability. Guidelines for single-subject research advocate using multiple effect size indices—such as combining non-overlap measures like PND or Tau-U with parametric ones like SMD—for robust interpretation, as no single metric fully captures variability, trends, or overlap. This multi-index approach mitigates biases inherent to any one method and complements visual trend assessments by prioritizing quantification.
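A corresponding sketch for the baseline-standardized SMD, again with invented data, shows the small-sample correction in action:

```python
import statistics

# Minimal sketch of the baseline-standardized mean difference defined above;
# the correction factor shrinks d when the baseline (length m) is short.
def smd(a, b):
    m = len(a)                                  # baseline length
    correction = 1 - 3 / (4 * m - 5)            # small-sample bias correction
    return correction * (statistics.mean(b) - statistics.mean(a)) / statistics.stdev(a)

print(f"d = {smd([3, 4, 3, 5], [6, 7, 9, 8, 10]):.2f}")   # ≈ 3.23 for these data
```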

Reporting Guidelines

Reporting guidelines for single-subject designs emphasize transparency, completeness, and replicability to allow readers to evaluate the rigor and generalizability of findings. The 2016 Single-Case Reporting guideline In BEhavioural interventions (SCRIBE) provides a comprehensive 26-item checklist to standardize reporting across studies, ensuring key elements such as study rationale, participant details, and procedural integrity are clearly documented. Developed through a consensus process involving experts in behavioral interventions, SCRIBE addresses common reporting gaps identified in prior reviews of single-case literature. Central to SCRIBE are requirements for describing participant characteristics, including demographic details (e.g., age, gender) and relevant clinical or behavioral features pertinent to the intervention, while maintaining confidentiality. Authors must also justify the choice of design (e.g., reversal or multiple baseline) by outlining the phases, their sequence, and criteria for transitioning between them, such as specific behavioral thresholds. Intervention fidelity, or the extent to which the independent variable was implemented as intended, requires detailed reporting on assessment methods across each phase, including any deviations and their impact. Journal-specific standards, such as those from the American Psychological Association (APA) and the Journal of Applied Behavior Analysis (JABA), build on these principles by mandating visual and tabular representations of data. APA guidelines recommend including graphs that display all data points across phases for each participant, with clear axes, legends, and annotations to facilitate visual analysis, alongside effect size estimates to quantify impact. JABA requires high-resolution figures submitted separately, summarizing data efficiently while ensuring replicability through detailed methods sections that operationalize dependent variables and describe procedures. Both emphasize tables for raw or summary data when graphs alone are insufficient, and JABA specifically calls for discussing results at the individual level. Transparency in reporting extends to procedural details and reliability measures, such as interobserver agreement (IOA), which should be calculated for at least 25% of sessions and reported as both averages and ranges, with a minimum of 80% to confirm data accuracy. Full access to raw data via public repositories (e.g., Open Science Framework) is increasingly expected to enable verification and secondary analyses. Post-2020 developments in open-science practices further stress preregistration of study protocols, including design elements and analysis plans, to mitigate bias in single-case experimental designs. Handling missing data, common due to repeated measures, must be explicitly described—such as through imputation methods or sensitivity analyses—and visualized in graphs to avoid distorting phase trends. These updates align with broader calls for contextualized reporting of behavioral changes, enhancing the credibility of single-subject findings.

Data Analysis and Interpretation

Visual Analysis Techniques

Visual analysis serves as the foundational method for interpreting data in single-subject designs (SSDs), relying on the graphical inspection of plotted data to identify functional relations between independent and dependent variables. This approach emphasizes subjective yet systematic evaluation of patterns to determine whether an intervention produces reliable changes, prioritizing the idiographic nature of SSDs over group-level aggregation. By examining line graphs that display sequential observations over time, researchers assess the presence and strength of effects without distributional assumptions. The core dimensions evaluated in visual analysis include level, trend, variability, and immediacy of change. Level refers to the mean or median value of data points within a phase, indicating the overall height of the data path on the y-axis. Trend captures the direction and slope of data over time, such as increasing, decreasing, or flat patterns, often estimated by the best-fitting line through the points. Variability measures the spread or fluctuation of data around the level or trend, quantified by range or standard deviation, with low variability suggesting stable conditions. Immediacy assesses the rapidity of behavioral change at phase transitions, typically expecting noticeable shifts within the first three to five data points. These dimensions are examined both within phases (for stability) and between phases (for contrasts), providing a holistic view of intervention effects. Procedures for visual analysis begin with constructing clear line graphs, where the x-axis represents time or sessions and the y-axis the dependent variable, with vertical lines demarcating changes in conditions such as baseline (A) and intervention (B). Data from adjacent phases are overlaid or compared side-by-side to evaluate contrasts, including the degree of overlap between datasets—no overlap between phases often signals a strong intervention effect, while substantial overlap suggests weak or absent effects. Researchers systematically compare all relevant phase pairs, ensuring at least three data points per phase for reliable interpretation, and integrate findings across the entire design to confirm replication of effects. Decision rules for establishing functional relations, as outlined in guidelines by Kratochwill and colleagues, focus on consistent changes in level, trend, or variability across replications, such as reliable level shifts upon intervention introduction in multiple baseline designs without corresponding changes in untreated tiers. The What Works Clearinghouse standards require at least three demonstrations of effect (e.g., consistent phase contrasts) with minimal non-effects to infer strong experimental control, incorporating low overlap and high immediacy as supporting criteria. A minimum experimental control score of 3 out of 5, derived from yes/no evaluations of these dimensions, indicates a probable functional relation. Common pitfalls in visual analysis include overreliance on individual data points, which can mislead interpretations in the presence of outliers, and neglecting variability, leading to false positives for effects in unstable baselines. High interrater disagreement (e.g., agreement coefficients around 0.58–0.60) often arises from subjective judgments without standardized protocols, potentially undermining conclusions about functional relations. To mitigate these, analysts should prioritize designs with sufficient data points and stable phases. Basic graphing software, such as Microsoft Excel or Google Sheets, facilitates visual analysis by enabling the creation of line graphs with phase demarcations and overlay features, ensuring accessibility for practitioners while maintaining the precision needed for SSD evaluation.
Specialized tools like GraphPad Prism or SSD-specific software (e.g., calculators for Tau-U) may enhance overlay and trend estimation but are not essential for core graphical inspection.
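A minimal matplotlib sketch with hypothetical A-B data illustrates the graphing conventions described above: separate data paths per phase, a dashed vertical phase-change line, and labeled axes.

```python
import matplotlib.pyplot as plt

# Hypothetical A-B data; phases are plotted as separate data paths so the
# line does not cross the phase-change boundary, per SSD graphing convention.
baseline = [3, 4, 3, 5, 4]
intervention = [6, 7, 9, 8, 10, 9]

plt.plot(range(1, 6), baseline, marker="o", color="black")
plt.plot(range(6, 12), intervention, marker="o", color="black")
plt.axvline(x=5.5, linestyle="--", color="gray")   # vertical phase-change line
plt.text(1.5, 9, "A (baseline)")
plt.text(8, 4, "B (intervention)")
plt.xlabel("Session")
plt.ylabel("Responses per minute")
plt.show()
```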

Statistical Methods

Statistical methods in single-subject design (SSD) provide inferential tools to quantify effects, model temporal dependencies, and test hypotheses, supplementing visual techniques by offering objective confirmation of trends and changes. These approaches address limitations of visual analysis, such as subjectivity, by incorporating parametric and non-parametric models tailored to the sequential, within-subject nature of SSD data. Time-series analysis, particularly autoregressive integrated moving average (ARIMA) models, is widely used for modeling behavioral trajectories and detecting intervention impacts in SSD. ARIMA models account for autocorrelation and non-stationarity in repeated measures, enabling researchers to estimate pre-intervention trends and assess whether observed changes deviate significantly from expected patterns post-intervention. For instance, in reversal designs, ARIMA can model the reversion to baseline levels to evaluate the stability and magnitude of effects. Multilevel modeling, often implemented through hierarchical linear models (HLM), analyzes within-subject trends across phases while accommodating variability in repeated observations. HLM treats repeated observations as nested within subjects, allowing estimation of level changes, slope shifts, and immediate effects attributable to interventions. A basic HLM equation for SSD data is: Y_{ti} = \beta_0 + \beta_1 \text{Time}_{ti} + \beta_2 \text{Intervention}_{ti} + \beta_3 (\text{Time}_{ti} \times \text{Intervention}_{ti}) + e_{ti} where Y_{ti} is the outcome for subject i at time t, \beta_0 is the intercept, \beta_1 captures the pre-intervention time trend, \beta_2 represents the immediate intervention effect on level, \beta_3 the change in slope due to intervention, and e_{ti} is the error term. This framework is particularly effective for multiple baseline designs, where staggered interventions across behaviors or subjects can be modeled hierarchically to infer causal impacts. Randomization tests, including permutation methods, offer non-parametric alternatives to assess intervention effects in SSD without relying on distributional assumptions. These tests generate an empirical null distribution by randomly reassigning phase labels or treatment timings within the design constraints, then comparing the observed effect statistic (e.g., mean difference) to this distribution to compute p-values. In alternating treatments designs, randomization tests evaluate condition differences by permuting sequence orders, providing robust control of Type I error rates even with autocorrelated data. Their flexibility makes them suitable for small sample sizes typical in SSD. Post-2020 advancements have emphasized Bayesian approaches for handling small samples and uncertainty in SSD, incorporating prior knowledge to estimate intervention effects and autocorrelations via Markov chain Monte Carlo methods. Bayesian time-series models, such as those extending ARIMA with hierarchical priors, quantify posterior probabilities of change points, offering probabilistic interpretations of effect sizes in designs like multiple baseline. Complementing this, machine learning techniques, including support vector machines and random forests, have been adapted for pattern detection in SSD graphs, training algorithms on simulated data to classify effects with high accuracy while minimizing false positives. These methods excel in identifying non-linear trends or heterogeneous responses, as demonstrated in replications showing controlled error rates across design types. More recent developments as of 2025 include advancements in meta-analytic techniques for aggregating single-case experimental design data and increased utilization of SSDs in neuroscience research to enhance individual-level insights.
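An exact randomization test is simple enough to sketch directly. The Python example below enumerates every admissible intervention start point of a hypothetical A-B series as the null distribution for the mean difference; in a real randomized A-B design the actual start point is drawn at random before the study, and Monte Carlo sampling replaces full enumeration for larger designs.

```python
# Minimal sketch of an exact randomization test for a two-phase (A-B) series:
# each admissible start point (at least min_phase points per phase) serves as
# one element of the null distribution for the A-versus-B mean difference.
def randomization_test(data, actual_start, min_phase=3):
    def mean_diff(start):
        a, b = data[:start], data[start:]
        return sum(b) / len(b) - sum(a) / len(a)

    observed = mean_diff(actual_start)
    starts = range(min_phase, len(data) - min_phase + 1)   # admissible start points
    null = [mean_diff(s) for s in starts]
    # one-sided p: proportion of start points with an effect at least as large
    return sum(d >= observed for d in null) / len(null)

series = [3, 4, 3, 5, 4, 8, 9, 8, 10, 9, 10]               # invented A-B data
print(f"p = {randomization_test(series, actual_start=5):.3f}")   # 1/6 ≈ 0.167
```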
Software integration facilitates these analyses through R packages like SCRT for randomization tests, which computes exact p-values via permutations tailored to SSD structures. Similarly, SingleCaseES supports multilevel and time-series approaches by calculating standardized effect sizes and related estimates, enabling implementation of frequentist and Bayesian extensions in applied research (version 0.7.3, as of July 2025). These tools promote reproducibility and accessibility, bridging advanced statistics with practical SSD applications.

Applications

Interdisciplinary Uses

Single-subject designs have found extensive application in psychology, particularly within applied behavior analysis (ABA) for interventions in individuals with autism spectrum disorder (ASD) and attention-deficit/hyperactivity disorder (ADHD). In ABA interventions for ASD, single-subject studies have reported enhancements in adaptive and social-communication behaviors through targeted behavioral strategies. Similarly, for children with ADHD, a single-subject A-B-A design evaluated a cognitive-functional intervention, demonstrating improvements in executive functioning across participants aged 9-10 years. These designs enable precise measurement of individual progress, supporting personalized ABA plans that address core symptoms like social deficits in ASD or impulsivity in ADHD. Recent advocacy as of 2023 has promoted greater use of single-subject designs in neuroscience to enhance individual-level analysis in brain research. In special education, single-subject designs underpin the development and assessment of individualized education programs (IEPs), allowing educators to test the efficacy of tailored interventions for students with disabilities. By focusing on individual response patterns, these designs identify evidence-based practices that align with IEP goals, such as skill acquisition in reading or mathematics. For instance, a single-subject study examined one-on-one tutoring's impact on academic self-perceptions for a high school student with a learning disability, revealing increased perceptions of academic competence through repeated baseline-intervention probes. A 2024 study further applied single-subject designs to evaluate digital technology's role in improving text comprehension for students with mild intellectual disabilities. This approach ensures interventions are adjusted based on direct, observable changes, enhancing outcomes in educational settings. Rehabilitation sciences, especially physical therapy, utilize single-subject experimental designs to evaluate recovery, such as post-stroke rehabilitation where group studies may be impractical due to patient variability. Modified constraint-induced movement therapy for lower extremity function in elderly chronic stroke survivors employed a single-subject design, showing significant gains in walking speed and motor capacity during intervention phases. These designs facilitate intensive monitoring of progress in small cohorts, confirming effects like improved upper-limb function through repeated measures. In neurorehabilitation contexts, they highlight adaptive plasticity, guiding personalized recovery protocols after neurological events. In clinical medicine, single-case experimental designs (SCEDs), including n-of-1 trials, are critical for studying rare disorders and individualized treatments, where traditional group trials face challenges. SCEDs assess treatment responses in individual patients with heterogeneous conditions, enabling causal inferences about drug efficacy or dosage adjustments. For pharmacotherapy in rare diseases, n-of-1 designs evaluate symptom changes under alternating conditions, addressing issues like outcome selection and recruitment in low-prevalence populations. This methodology supports accelerated therapy evaluation for genetic or orphan disorders by prioritizing individual-level evidence. Post-2020 applications have expanded single-subject designs to telehealth interventions amid the COVID-19 pandemic, allowing remote assessment of behavioral and therapeutic efficacy while minimizing in-person risks. A series of SCED studies investigated telehealth-delivered interventions for various conditions, demonstrating sustained behavioral improvements comparable to traditional formats through virtual baseline and intervention phases.
In neurofeedback, protocols using single-case research designs have targeted anxiety and post-pandemic symptoms, with individualized sessions yielding reductions in self-reported anxiety levels via real-time brain activity feedback. A 2024 single-subject study illustrated the utility of neurofeedback for alleviating post-COVID brain fog symptoms, including cognitive challenges related to fatigue, anxiety, and depression.

Software and Tools

Several software tools facilitate the graphing, analysis, and reporting of data in single-subject designs, ranging from general-purpose applications to specialized packages tailored for single-case experimental research. For graphing, Microsoft Excel and Google Sheets provide accessible options for creating basic line plots and phase-change designs, enabling researchers to visualize trends and effects through simple data entry and chart customization. More advanced graphing is supported by GraphPad Prism, which offers templates for multiple baseline and reversal designs, including features for adding phase lines, annotations, and publication-quality outputs. In terms of analysis software, the R programming language hosts several packages dedicated to single-case data. The scdhlm package estimates hierarchical linear models and design-comparable standardized mean difference effect sizes, accommodating multilevel structures common in single-subject time-series data. The SingleCaseES package computes non-overlap measures such as Tau-U, which adjusts for baseline trend to quantify intervention effects, along with other parametric and non-parametric indices. Additional R tools like the scan package support regression-based modeling, visualization, and management of single- and multiple-baseline designs. For Python users, general libraries such as statsmodels enable time-series analysis through autoregressive integrated moving average (ARIMA) models, while pingouin provides tools for repeated-measures ANOVA and effect size calculations adaptable to single-subject sequences. Comprehensive platforms streamline design planning, analysis, and reporting. SingleCase.org offers an interactive web-based tool for training and assessing visual analysis skills, simulating graphs to evaluate interrater agreement in detecting effects. Similarly, the Single Case Research website provides free online calculators for effect sizes like Tau-U and percentage of non-overlapping data, supporting both visual and quantitative interpretation. Post-2020 developments include open-source mobile applications for remote data collection, which deploy customizable apps for behavioral monitoring in single-subject studies, facilitating continuous measurement through smartphone-based tracking. Emerging AI-assisted tools leverage machine learning to automate visual analysis of single-case graphs, reducing subjectivity by classifying effect presence with high accuracy, as demonstrated in proof-of-concept studies achieving interrater agreement improvements. Tutorials and resources often integrate with the Single-Case Reporting guideline In BEhavioural interventions (SCRIBE) 2016, ensuring software outputs align with its 26-item checklist for transparent reporting of design procedures, data handling, and effect evaluation.

Comparisons and Limitations

Comparison with Group Designs

Single-subject designs (SSDs), also known as single-case experimental designs, fundamentally differ from group designs in their approach to establishing causality and generalizability. SSDs adopt an idiographic perspective, emphasizing detailed analysis of individual behavior through repeated measures within the same participant, who serves as their own control via within-subject manipulations. In contrast, group designs pursue a nomothetic approach, aggregating data across multiple participants to derive population-level inferences, typically employing between-subjects controls such as random assignment to conditions. This distinction arises from SSDs' focus on intra-individual variability and temporal sequences, whereas group designs prioritize inter-individual comparisons to mitigate confounding factors like maturation or history. SSDs offer several advantages over group designs, particularly in sensitivity to individual differences. They provide higher sensitivity by enabling precise detection of effects within a single participant, avoiding the masking of unique responses that can occur in group averages. Implementation is often faster and more resource-efficient, requiring fewer participants and allowing rapid iteration in applied settings. Additionally, SSDs are ethically preferable for rare conditions or populations where withholding treatment from a control group would be impractical, such as in personalized clinical interventions for neurodevelopmental disorders. Despite these strengths, SSDs have notable disadvantages relative to group designs, especially regarding broader applicability. Their generalizability is inherently limited to the individuals studied, necessitating systematic replication across cases to infer broader effects, which can be time-intensive. Selection bias may also arise if the chosen participant is atypical, potentially skewing interpretations without diverse sampling. Group designs, by contrast, excel in establishing normative patterns through statistical power from larger samples, though they may overlook clinically meaningful variations. The choice between SSDs and group designs depends on research goals and context. SSDs are particularly suited for personalized interventions in clinical or educational settings, where demonstrating functional relations for a specific individual is paramount, as seen in behavior therapy for autism. Group designs, such as randomized controlled trials (RCTs), are preferred for establishing population norms and testing general hypotheses, ideal for pharmacological or epidemiological studies requiring broad applicability. Hybrid approaches integrate SSDs with group data to leverage complementary strengths, enhancing both individual relevance and population insights. For instance, initial SSDs can identify promising interventions for rare cases, followed by group studies to validate effects across demographics, as recommended in mixed-methods frameworks for evidence-based practice. This combination mitigates SSDs' generalizability limitations while addressing group designs' insensitivity to individual heterogeneity.

Methodological and Ethical Limitations

Single-subject designs face several methodological challenges that can compromise their validity and reliability. One key issue is the irreversibility of effects, where interventions produce permanent changes, such as the acquisition of a new skill, preventing behaviors from reverting to baseline levels during withdrawal phases in reversal designs like ABAB. This limitation is particularly evident in educational or therapeutic contexts, where learned behaviors do not easily reverse, undermining the demonstration of experimental control. Carryover and sequence effects also pose risks, as residual influences from prior phases can confound subsequent conditions, especially in multiple-treatment designs where one treatment interferes with another due to sequence or additive impacts. Additionally, demand characteristics may arise, with participants altering behaviors based on perceived expectations, though this is mitigated through blinded assessments where feasible. Generalizability from single-subject designs is inherently limited by their focus on individual cases, raising concerns about overgeneralizing results to broader populations without sufficient replication. Unlike group designs, findings from a single participant cannot reliably extend beyond that context due to the small sample size, potentially leading to idiosyncratic interpretations. To address this, systematic replication across multiple participants, settings, or behaviors is essential, as direct replication confirms reliability within similar conditions while systematic variations test broader applicability. For instance, interventions like functional communication training have demonstrated generality only after decades of replications across diverse cases. Ethical considerations in single-subject designs center on participant welfare, particularly the potential harm from withholding or withdrawing effective interventions. During baseline phases, delaying treatment—as in multiple-baseline designs—can prolong suffering for individuals with pressing needs, such as those with disabilities or mental health issues. Withdrawal of beneficial interventions in reversal designs raises further concerns, especially if reversibility causes distress. Obtaining informed consent is complicated in vulnerable populations, like children or those with cognitive impairments, requiring surrogate decision-making and heightened safeguards to ensure autonomy and minimize harm. Equity issues also emerge, as access to these intensive designs may favor certain demographics, exacerbating disparities in research participation. Ethical guidelines from bodies like the American Psychological Association emphasize avoiding harm, promoting fairness, and ensuring informed consent in research involving diverse populations. The 2025 revision of the APA Ethics Code introduces principles such as "Respect for Persons and Peoples" and "Justice," advocating for inclusive practices, transparency, and equitable access for vulnerable communities. These guidelines highlight the importance of transparent reporting of participant characteristics to enhance validity across populations. Mitigation strategies include selecting non-reversal designs, such as multiple-baseline, for irreversible effects to avoid withdrawal altogether while still establishing control through staggered introductions. Therapeutic baselines—providing minimal supportive interventions during initial phases—can ethically reduce withholding harms without compromising results. Counterbalancing of phase orders and extended final phases with the most effective treatment further counteract carryover effects and ensure lasting benefits. For generalizability, prioritizing systematic replications in research protocols aligns with APA-endorsed practices to build cumulative evidence.

Historical Development

Origins in Behavior Analysis

Single-subject design emerged from the principles of operant conditioning developed by B.F. Skinner in the 1930s and 1940s, emphasizing the study of individual behavior through controlled environmental manipulations rather than group averages. Skinner's experiments, conducted primarily with rats and pigeons in operant conditioning chambers (commonly known as Skinner boxes), focused on how reinforcements shaped responses in single subjects, laying the groundwork for analyzing behavioral changes within individuals over time. A key innovation was the cumulative recorder, invented by Skinner in 1933, which produced real-time graphical depictions of an individual's response rate, enabling precise visual analysis of behavioral patterns without aggregating data across subjects. The formalization of single-subject methodologies advanced significantly with Murray Sidman's 1960 book, Tactics of Scientific Research: Evaluating Experimental Data in Psychology, which introduced the multiple baseline design as a rigorous alternative to reversal designs for establishing functional relations in behavioral interventions. Sidman argued that by staggering the introduction of interventions across behaviors, settings, or subjects, researchers could demonstrate causal effects without withdrawing treatments, addressing ethical and practical limitations in studying individual behavior. This approach built directly on Skinner's emphasis on replication and steady-state responding in single organisms, providing a systematic framework for experimental validation in behavior analysis. Early applications of single-subject design were rooted in the Experimental Analysis of Behavior (EAB), a laboratory-based field pioneered by Skinner in the 1950s, which prioritized intensive study of individual organisms to uncover basic behavioral principles. The Journal of the Experimental Analysis of Behavior, founded in 1958, exemplified this focus by publishing studies using single-case methods to explore reinforcement contingencies in controlled settings. By the mid-1960s, EAB principles transitioned to applied contexts, addressing real-world problems like child development and education, as researchers sought to extend laboratory findings to socially significant behaviors. Parallel to these developments in behavior analysis, single-subject designs gained early traction in communication sciences and disorders (CSD) during the 1960s, particularly in fluency treatments for stuttering. Seminal studies included Martin and Siegel's 1966 examination of contingent shock effects on stuttering and Haroldson et al.'s 1968 investigation of timeout procedures as punishment. Further applications in the 1970s, such as Reed and Godden's 1977 study on verbal punishment for preschool stutterers and Hanson's 1978 analysis of light-flash interventions, demonstrated the methodology's utility in clinical settings. These efforts were supported by tutorials and reviews in the field, including McReynolds and Kearns' 1983 book Single-Subject Experimental Designs in Communicative Disorders, solidifying SSEDs in CSD research. A pivotal development occurred in 1968 with the establishment of the Journal of Applied Behavior Analysis (JABA) by the Society for the Experimental Analysis of Behavior, which dedicated itself to single-case studies demonstrating practical interventions. This marked the institutionalization of applied behavior analysis (ABA), with JABA's inaugural issue featuring seminal work that outlined the field's core dimensions.
Influential figures Montrose Wolf and Todd Risley, working at the University of Washington under Sidney Bijou, played central roles in this shift; Wolf co-founded the Juniper Gardens Children's Project in 1965 to apply EAB methods to urban youth, while Risley contributed to early ABA programs emphasizing naturalistic teaching. Together with Donald Baer, they defined ABA's standards in their 1968 JABA article, ensuring single-subject designs remained central to ethical, effective behavioral interventions.

Modern Advancements

During the 1990s and 2000s, efforts to standardize single-subject designs gained momentum, culminating in the development of quality indicators to enhance methodological rigor. In 2005, Horner and colleagues outlined 21 indicators for single-case research, categorized into areas such as participant description, dependent variable measurement, baseline evaluation, experimental control, and social validity, which have since become benchmarks for evaluating single-case studies in special education. These indicators facilitated the integration of single-subject designs into broader evidence-based practice frameworks, allowing researchers to systematically identify effective interventions by emphasizing replicability and rigor. By the early 2010s, this standardization supported the recognition of single-case studies as a viable alternative to group designs in education and psychology, promoting their use in policy decisions for interventions. Post-2010, single-case experimental designs (SCEDs) experienced significant growth in clinical psychology, with annual publications increasing steadily and reflecting their utility in evaluating individualized treatments for conditions like anxiety and depression. This expansion paralleled advancements in combining SCEDs with neuroimaging techniques, enabling single-subject analyses of brain activity changes in response to interventions, as seen in studies predicting psychiatric disorders from individual fMRI data. Such integrations have allowed for precise mapping of neural mechanisms at the individual level, enhancing the applicability of SCEDs in psychiatric research where group averages may obscure heterogeneity. In the 2020s, the COVID-19 pandemic prompted adaptations in single-subject designs, including remote data collection via mobile applications to maintain experimental integrity during lockdowns, as highlighted by tools like SCD-MVA that support virtual graphing and analysis. Concurrently, incorporation of big data and artificial intelligence has revolutionized SCED analysis, with machine learning algorithms improving the detection of functional relations in time-series data and reducing errors in visual inspections compared to traditional methods. These advancements, including automated graph analysis, have increased the scalability of SCEDs for large datasets in behavioral health. Global expansion has also accelerated, with growing adoption in non-Western educational contexts, such as special education programs in developing countries like Ghana and India, where SCEDs inform culturally adapted interventions for students with disabilities. As of 2025, further methodological progress includes multilevel model selection for SCED data analysis, enhancing statistical power in heterogeneous samples, and systematic quality reviews in fields like occupational therapy and school psychology. Meta-visual-analyses have also refined visual analysis techniques for detecting functional relations, while discussions on questionable research practices emphasize improved reporting standards. Key publications have further solidified these developments, including the third edition of Ledford and Gast's Single Case Research Methodology (2018), which provides updated guidelines on design implementation and analysis for special education and behavioral sciences. Recent meta-analyses underscore the efficacy of SCEDs, demonstrating moderate to large effect sizes across interventions in education and psychology, while highlighting the need for improved reporting standards to aggregate findings reliably. These syntheses confirm SCEDs' role in establishing evidence-based practices globally.

References

  1. [1]
    Single-Subject Experimental Design: An Overview - ASHA TLR Hub
    The essence of single-subject design is using repeated measurements to really understand an individual's variability, so that we can use our understanding of ...
  2. [2]
    Single Subject Research | Educational Research Basics by Del Siegle
    Single subject research (also known as single case experiments) is popular in the fields of special education and counseling.
  3. [3]
    Single-Subject Experimental Design for Evidence-Based Practice
    Single-subject experimental designs (SSEDs) represent an important tool in the development and implementation of evidence-based practice in communication ...
  4. [4]
    Encyclopedia of Research Design - Single-Subject Design
    The defining feature of single-case research is the use of each participant (subject) as his or her own experimental control. This approach to ...
  5. [5]
    Single-Subject Research in Psychiatry: Facts and Fictions - Frontiers
    Nov 12, 2020 · Although single-subject and case reports both focus on individuals (i.e., are both idiographic; see Figure 2), there are some major differences.
  6. [6]
    [PDF] The Use of Single-Subject Research to Identify Evidence-Based ...
    ABSTRACT: Single-subject research plays an important role in the development of evidence-based practice in special education. The defining features of ...
  7. [7]
    Single-Subject Research Designs – Research Methods in Psychology
    Single-subject research designs typically involve measuring the dependent variable repeatedly over time and changing conditions (e.g., from baseline to ...
  8. [8]
    [PDF] Single-Case Experimental Research: A Methodology for ... - ERIC
    Baseline logic encompasses three factors: prediction, verification, and replication Cooper et ... This prediction was observed across participants 1 and 3.
  9. [9]
    Applied Behavior Analysis - Google Books
    Applied Behavior Analysis, by John O. Cooper, Timothy Heron, and William L. Heward. Pearson UK, 2020. ISBN 1292324651, 9781292324654.
  10. [10]
  11. [11]
  12. [12]
    Single-Subject Design - Sage Publishing
    A single-subject design assesses intervention effectiveness by observing changes from before to during and after, with repeated measurements, a baseline, and ...
  13. [13]
    (PDF) Single-subject research designs - ResearchGate
    Single-subject research designs (SSDs) are experimental strategies used to evaluate the effects of an intervention on a target behavior in one or more ...
  14. [14]
  15. [15]
    [PDF] The Effects of Self-monitoring with a MotivAider® on the On-task
    A multiple baseline across students design was used to examine the effects of a self-monitoring procedure on the on-task behavior of three students. The ...
  16. [16]
    The changing criterion design - PMC - NIH
    This article describes and illustrates with two case studies a relatively novel form of the multiple-baseline design called the changing criterion design.
  17. [17]
    Best Practices in Utilizing the Changing Criterion Design - PMC - NIH
    Verification can be accomplished by varying ... baseline design which requires replication demonstrated across multiple persons, settings, or behaviors.
  18. [18]
    WWC | Single-Case Design Technical Documentation
    The Standards are bifurcated into Design and Evidence Standards (see Figure 1). The Design Standards evaluate the internal validity of the design. Reviewers ...
  19. [19]
    [PDF] Key Criteria Used in WWC Reviews of Single-Case Design Research
    In 2015, the WWC worked with a panel of experts to develop criteria for determining the rating of effectiveness for an intervention, based on the single-case ...
  20. [20]
  21. [21]
    The Single-Case Reporting Guideline In BEhavioural Interventions ...
    This report describes the methods used to develop the Single-Case Reporting guideline In BEhavioural interventions (SCRIBE) 2016.
  22. [22]
    Reporting Guideline In - BEhavioural - Interventions - (SCRIBE)
    Feb 18, 2025 · The Single-Case Reporting Guideline In BEhavioural Interventions (SCRIBE) 2016 Statement. Reporting guidelines for main study types.
  23. [23]
    [PDF] Scribe Checklist
    The Single-Case Reporting guideline In BEhavioural interventions (SCRIBE) 2016 Checklist.
  24. [24]
    The Single-Case Reporting Guideline In BEhavioural Interventions ...
    Apr 14, 2016 · The SCRIBE 2016 Explanation and Elaboration article describes a set of 26 items to guide and structure the reporting of SCED research.
  25. [25]
    Journal of Applied Behavior Analysis Author Guidelines
    All required sections should be contained in your manuscript, including abstract, introduction, methods, results, and conclusions. Figures and tables should ...
  26. [26]
    Reporting single-case design studies: Advice in relation to ... - Elsevier
    The current text provides advice on the content of an article reporting single-case design research. The advice is drawn from several sources, such as the ...
  27. [27]
    (PDF) Handling Missing Data in Single-Case Studies - ResearchGate
    Oct 12, 2017 · The occurrence of missing data is prevalent in SCED studies due to the repeated observation and assessment of an outcome behavior in such settings.
  28. [28]
    Extensions of open science for applied behavior analysis
    Aug 9, 2025 · We discuss potential concerns with and unique considerations for preregistering experiments that use single-case designs, with practical ...
  29. [29]
    A Description of Missing Data in Single-Case Experimental Designs ...
    Feb 19, 2024 · Missing data is inevitable in single-case experimental designs (SCEDs) studies due to repeated measures over a period of time.
  30. [30]
    [PDF] WWC Single-Case Design Technical Documentation
    Although the basic SCD has many variations, these designs often involve repeated, systematic measurement of a dependent variable before, during, and after the ...
  31. [31]
    Systematic Protocols for the Visual Analysis of Single-Case ... - NIH
    Visual analysis is the primary method by which researchers analyze SCR data to determine whether a causal relation (i.e., functional relation, experimental ...
  32. [32]
    (PDF) The Relationship Between Visual Analysis and Five Statistical ...
    Aug 5, 2025 · using single-case designs are discussed. Keywords: visual analysis ... single-subject research had little influence on visual analysis.
  33. [33]
    Behavioral Interventions for Autism Spectrum Disorder - NIH
    In Korea, single-subject design studies have reported increased joint attention [86] and improvements in social interaction and communication behaviors [87,88] ...
  34. [34]
    Effects of a Cognitive-Functional Intervention Method on Improving ...
    Method: A single-subject A-B-A research design was employed in this study. Three children aged 9-10 years who were diagnosed with ADHD were selected. A total ...
  35. [35]
    Applied Behavior Analysis in Children and Youth with Autism ...
    Evidence-based practices for young children with autism: Contributions for single-subject design research. Focus on Autism & Other Developmental Disabilities.
  36. [36]
    [PDF] How Does One-on-One Tutoring Support Student Self-Efficacy? A ...
    This study utilized a single-subject case study design to investigate the perceptions of a high school student with a learning disability of one-on-one.
  37. [37]
    (PDF) Overview of Single-Subject Design in Special Education Studies
    Oct 14, 2024 · This paper provides an insightful overview of the application of single-subject design within special education studies.
  38. [38]
    Modified Constraint-Induced Therapy for the Lower Extremity in ...
    Dec 22, 2014 · Modified Constraint-Induced Therapy for the Lower Extremity in Elderly Persons With Chronic Stroke: Single-Subject Experimental Design Study.
  39. [39]
    Single-case experimental designs to assess intervention ...
    In order to meet the standards of a SCED, the study must include at least three attempts to demonstrate an intervention effect (e.g. at least three phase change ...
  40. [40]
    Rehabilitation of Motor Function after Stroke: A Multiple Systematic ...
    Sep 13, 2016 · Constraint-induced movement therapy (CIMT) is a therapeutic approach that applies motor skill learning principles to stroke rehabilitation. ...
  41. [41]
    The Family of Single-Case Experimental Designs - PubMed - NIH
    Single-case experimental designs (SCEDs) represent a family of research designs that use experimental methods to study the effects of treatments on outcomes.
  42. [42]
    Overcoming treatment implementation barriers for individuals with ...
    However, designing and performing SCEDs in rare diseases comes with specific challenges related to heterogeneity, selection of outcome measures, accessibility ...
  43. [43]
    Accelerating therapy development, application, and evaluation for ...
    May 18, 2025 · Single-case experimental designs (SCEDs), including n-of-1 trials, offer a solution. These designs offer benefits that are also applicable ...
  44. [44]
    Exploring Single-Case Research Design With Individualized Anxiety ...
    Sep 30, 2023 · We employed a single-case research design (SCRD) methodology to highlight the individual variations or change across participants' neurofeedback ...
  45. [45]
    Effect of neurofeedback therapy on neurological post-COVID-19 ...
    Jul 27, 2022 · In this pilot study, we investigated the effectiveness of neurofeedback (Othmer method) for treatment of fatigue, anxiety, and depression after COVID-19.
  46. [46]
    Analysis resources
    The following list of resources provides an overview of key methods for analyzing single-case experimental designs.
  47. [47]
    Creating Single-Subject Research Design Graphs with Google ... - NIH
    Nov 29, 2021 · Data analysis and graphing applications like Excel, Prism, Numbers, and SigmaPlot are fully featured applications that are well-suited to the ...
  48. [48]
    Prism tip - Creating a multiple baseline design chart - GraphPad
    This example shows how to make a multiple baseline design plot. This is accomplished by customizing the appearance of one graph, cloning this graph appearance ...
  49. [49]
    CRAN: Package scdhlm
    Feb 25, 2024 · The scdhlm package estimates hierarchical linear models and effect sizes for single-case designs, calculating standardized mean difference ...
  50. [50]
    SingleCaseES: A calculator for single-case effect size indices
    This package provides R functions for calculating basic effect size indices for single-case designs, including several non-overlap measures and parametric ...
  51. [51]
    [PDF] scan: Single-Case Data Analyses for Single and Multiple Baseline ...
    Sep 11, 2025 · SingleCaseES: A Calculator for Single-Case Effect Sizes. R package version 0.7.1.9999, https://jepusto.github.io/SingleCaseES/.
  52. [52]
    Assessing Visual Analysis of Single Case Research Designs
    The purpose of Singlecase.org is to provide researchers with a tool for assessing and improving their skills at visual analysis of single-case research designs.
  53. [53]
    Tau-U Calculator - Single Case Research
    Tau-U is a method for measuring data non-overlap between two phases (A and B). It is a “distribution free” nonparametric technique, with statistical power of 91 ...
  54. [54]
    schema: an open-source, distributed mobile platform for deploying ...
    This paper introduces schema, an open-source, distributed, app-based platform for researchers to deploy behavior monitoring and health interventions onto mobile ...
  55. [55]
    Machine learning to analyze single‐case graphs - PubMed Central
    The results suggest that machine learning may support researchers and practitioners in making fewer errors when analyzing single‐case graphs, but replications ...
  56. [56]
    The Single-Case Reporting Guideline In BEhavioural Interventions ...
    SCRIBE 2016 is a reporting guideline with a 26-item checklist for single-case research, aiming to improve clarity and accuracy of reporting.
  57. [57]
  58. [58]
  59. [59]
  60. [60]
    Ethical principles of psychologists and code of conduct
    General Principles, Section 1: Resolving Ethical Issues, Section 2: Competence, Section 3: Human Relations, Section 4: Privacy and Confidentiality.
  61. [61]
    Operant Conditioning - PMC - NIH
    The term operant conditioning was coined by B. F. Skinner in 1937 in the context of reflex physiology, to differentiate what he was interested in—behavior that ...
  62. [62]
    [PDF] CUMULATIVE RECORD - Definitive Edition - B. F. Skinner Foundation
    As the title for a collection of papers by B. F. Skinner, the person who first used the cumulative recorder in studying behavior, Cumulative Record is a pun of ...
  63. [63]
    A MISSING LINK IN THE EVOLUTION OF THE CUMULATIVE ... - NIH
    Skinner's experimental analysis of behavior was the cumulative record, which graphically depicted individual responses in real time. Cumulative records were ...
  64. [64]
    Tactics of scientific research; evaluating experimental data in ...
    May 29, 2013 · Tactics of scientific research; evaluating experimental data in psychology. by: Sidman, Murray. Publication date: 1960. Topics: Psychology ...
  65. [65]
    Lessons worth repeating: Sidman's Tactics of Scientific Research
    Aug 7, 2025 · Murray Sidman's (1960) Tactics of Scientific Research: Evaluating Experimental Data in Psychology just celebrated its sixtieth anniversary.
  66. [66]
    Tactics of Scientific Research Evaluating Experimental Data in ...
    Discussing the major themes of replication, variability, and experimental design, Sidman describes the step-by-step planning of experiments.
  67. [67]
    Journal of the Experimental Analysis of Behavior - Wiley Online Library
    Journal of the Experimental Analysis of Behavior (JEAB) publishes research relevant to the behavior of individual organisms.
  68. [68]
    A Study in the Founding of Applied Behavior Analysis Through Its ...
    This article reports a study of the founding of applied behavior analysis through its publications. Our methods included hand searches of sources (e.g., ...
  69. [69]
    Journal of Applied Behavior Analysis - Wiley Online Library
    Journal of Applied Behavior Analysis publishes research about applications of the experimental analysis of behavior to problems of social importance.
  70. [70]
    Some current dimensions of applied behavior analysis - PMC - NIH
    91 Articles from Journal of Applied Behavior Analysis are provided here courtesy of Society for the Experimental Analysis of Behavior
  71. [71]
    Montrose M. Wolf (1935–2004) - PMC - NIH
    The Origins of Applied Behavior Analysis. In the early summer of 1962, Wolf joined Sidney Bijou at the Institute of Child Development at the University of ...
  72. [72]
    The Use of Single-Subject Research to Identify Evidence-Based ...
    Aug 6, 2025 · A concurrent multiple-baseline across-participants design (Horner et al., 2005) was employed across four sites. After 4-6 stable baseline data ...
  73. [73]
    Single-Case Experimental Designs: A Systematic Review of ...
    This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer ...
  74. [74]
    Single Subject Prediction of Brain Disorders in Neuroimaging
    More than 500 studies have been published during the past quarter century on single subject prediction focused on multiple brain disorders.
  75. [75]
    Psychiatric neuroimaging designs for individualised, cohort, and ...
    Aug 14, 2024 · Single-participant and individualised designs analyse a single participant or a series of individuals over repeated sessions, and often multiple ...
  76. [76]
    SCD‐MVA: A mobile application for conducting single‐case ...
    Sep 30, 2020 · The purpose of this article is to introduce the mobile application, SCD-MVA (2019), developed to assist in the design of an SCD, data gathering ...
  77. [77]
    Single-case design meta-analyses in education and psychology
    Nov 12, 2023 · This paper examines the single-case meta-analyses within the Education and Psychology fields. The amount of methodological studies related to the meta-analysis ...
  78. [78]
    Advancements in meta-analysis of single-case experimental designs
    Nov 23, 2023 · A mini-series of special issues devoted to the current state of the art in meta-analysis of single-case experimental designs (SCEDs).