Sep to Dec 2024 · R · RCT Simulation

Boosting Dog Adoption Rates at ASPCA

A randomized controlled experiment designed for the American Society for the Prevention of Cruelty to Animals (ASPCA) to evaluate three actionable interventions and quantify their causal impact on dog adoption rates over a three-month window.

Overview

In 2023, 6.5 million cats and dogs entered shelters across the US. Animal shelters face persistent capacity constraints, and while outreach and fee promotions are common tactics, few organizations rigorously test which levers actually move adoption rates. Intuition isn't enough when resources are limited and shelter overcrowding has real welfare consequences.

This project designed a full randomized controlled experiment for the ASPCA to evaluate three modifiable factors: adoption fee reductions, increased social media exposure, and vaccination status. The goal wasn't just to establish statistical significance — it was to determine which interventions are practically detectable, scalable, and cost-effective enough to justify broader rollout under real operational constraints.

Experimental Design

Each intervention was tested as an independent randomized controlled trial. To reduce confounding from health or behavioral factors, eligibility was restricted to healthy, non-aggressive dogs aged 1–5 years and weighing 5–30kg. Dogs with repeated aggression, significant health conditions, prior return history, pregnancy, or service dog status were excluded — ensuring comparability across groups and isolating the treatment effect cleanly.

Intervention 01
Adoption Fee Reduction
Treatment dogs received a 10% discount off standard fees. Control dogs retained standard pricing. Cost is a documented decision-making factor in prior literature.
n = 506 · 253 per group · d = 0.20
Intervention 02
Social Media Exposure
Treatment dogs received higher-frequency, higher-quality Instagram posts. Control dogs received minimal or no promotion. Smaller sample justified by larger expected effect size.
n = 128 · 64 per group · d = 0.45
Intervention 03
Vaccination Status
Treatment dogs were fully vaccinated prior to listing. Control dogs were not vaccinated during the study period. Prior research shows vaccinated pets are adopted up to 20% faster.
n = 226 · 113 per group · d = 0.38

Randomization was implemented in R using set.seed() for reproducibility. Each experiment followed a parallel treatment–control structure with adoption outcome measured as a binary indicator: adopted or not within three months.
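
The assignment step can be sketched in a few lines of base R. The dog IDs below are illustrative, not the study's real roster, and the arm sizes follow the fee-reduction experiment:

```r
# Hypothetical sketch of randomized assignment (fee-reduction arm sizes)
set.seed(2024)                          # fixed seed for reproducibility

n_dogs <- 506                           # total eligible dogs in this experiment
dog_id <- sprintf("dog_%03d", 1:n_dogs) # placeholder IDs for illustration

# Simple randomization: permute an even split of the two arm labels
arm <- sample(rep(c("treatment", "control"), each = n_dogs / 2))

assignments <- data.frame(dog_id = dog_id, arm = arm)
table(assignments$arm)                  # 253 dogs per group
```

The fixed seed means the same assignment can be regenerated for auditing, which is what makes the design reproducible.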

Statistical Analysis

The primary analysis method is a two-sample t-test for each intervention, comparing mean adoption rates between treatment and control groups. T-tests were chosen over ANOVA or logistic regression deliberately — the goal is clean causal inference for individual policy levers, not a predictive model. The t-test produces interpretable effect estimates and p-values that are directly actionable for non-technical stakeholders.
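
With adoption coded 0/1, the group means are adoption rates, so the comparison reduces to a Welch two-sample t-test. A minimal sketch, using simulated outcomes under the planning assumptions rather than real shelter records:

```r
# Synthetic illustration: 60% baseline rate, assumed +10pp treatment lift
set.seed(1)
control   <- rbinom(253, 1, 0.60)   # 1 = adopted within 3 months, 0 = not
treatment <- rbinom(253, 1, 0.70)

# Welch two-sample t-test on the binary outcomes; the estimate is the
# difference in adoption rates between the arms
result <- t.test(treatment, control)
result$p.value
unname(result$estimate[1] - result$estimate[2])   # estimated lift
</ -> # (treatment rate minus control rate)
```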

Power Analysis & Sample Sizing

Sample sizes were determined through power analysis targeting 90% power at α = 0.05. Effect sizes (Cohen's d) were specified from prior literature and historical shelter data, not assumed arbitrarily. Fee reduction carries the smallest expected effect (d = 0.20), requiring the largest sample. Social media carries a larger expected effect (d = 0.45), so fewer dogs are needed to detect it reliably.
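
Base R's power.t.test() illustrates the calculation, treating Cohen's d as a standardized mean difference with sd = 1. Generic formulas like this can return per-group sizes that differ from the study's final figures, which also depend on the baseline-rate assumptions described below:

```r
# Per-group n for 90% power at alpha = 0.05, two-sided, with Cohen's d
# entered as the standardized mean difference (sd = 1)
n_fee    <- power.t.test(delta = 0.20, sd = 1, sig.level = 0.05, power = 0.90)$n
n_social <- power.t.test(delta = 0.45, sd = 1, sig.level = 0.05, power = 0.90)$n

ceiling(c(fee = n_fee, social = n_social))   # smaller d requires far larger n
```

The inverse-square relationship between d and n is the key intuition: halving the expected effect roughly quadruples the required sample.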

Anchoring baseline adoption probability at 0.60, the target lifts were: fee reduction → 0.70 (+10pp), social exposure → 0.80 (+20pp), vaccination → 0.75 (+15pp). These assumptions were used only for power planning — not to predetermine outcomes.

Each intervention was simulated 1,000 times to validate false positive rates, false negative rates, and mean effect estimates under both null and alternative scenarios. This simulation layer confirms the sample sizes are sufficient before any real data collection begins — a critical step that moves the study from theory to operational planning.
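
The null-scenario check can be sketched as a Monte Carlo loop. The seed and arm sizes below are illustrative (fee-reduction experiment); the logic is the same for the other two interventions:

```r
# Monte Carlo check of the false positive rate under the null
set.seed(42)
n_sims      <- 1000
n_per_group <- 253
p_base      <- 0.60                 # baseline adoption probability

p_values <- replicate(n_sims, {
  control   <- rbinom(n_per_group, 1, p_base)
  treatment <- rbinom(n_per_group, 1, p_base)   # null: no true lift
  t.test(treatment, control)$p.value
})

fp_rate <- mean(p_values < 0.05)    # should hover near alpha = 0.05
fp_rate
```

Swapping the treatment arm's rate for the target lift (e.g. 0.70) turns the same loop into an empirical power estimate: the share of simulations with p < 0.05 under a true effect.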

Results

Under the expected effect scenarios, all three interventions produced statistically significant results (p < 0.05), with simulated true positive rates of 76–80%: somewhat below the 90% planning target, but high enough to detect meaningful adoption rate differences. Under null scenarios (no true effect), false positive rates ranged from 4.1% to 6.3%, consistent with the nominal α = 0.05 once Monte Carlo variation across 1,000 simulations is taken into account.

Intervention    Scenario       False Pos.   True Neg.   False Neg.   True Pos.   p-value
Fee Reduction   No effect      4.1%         95.9%       n/a          n/a         0.12
Fee Reduction   +10pp lift     n/a          n/a         23.6%        76.4%       0.03
Social Media    No effect      5.0%         95.0%       n/a          n/a         0.15
Social Media    +20pp lift     n/a          n/a         20.1%        79.9%       0.01
Vaccination     No effect      6.3%         93.7%       n/a          n/a         0.10
Vaccination     +15pp lift     n/a          n/a         22.3%        77.7%       0.02

Social media exposure produced the strongest signal relative to its sample size — the highest true positive rate (79.9%) from just 128 dogs. This reflects its larger assumed effect size and suggests it may be the highest-ROI lever for a first rollout. Fee reduction required the most dogs to detect a smaller effect, which has direct cost implications when scaling.

Cost Analysis

Beyond statistical significance, the practical question is which intervention delivers the most adoption lift per dollar spent. Cost per 100 dogs was estimated for each lever to directly inform implementation priority.

Fee Reduction: $1.5K per 100 dogs · smallest unit cost, largest sample needed
Social Media: $1.0K per 100 dogs · lowest cost, highest effect size, best ROI
Vaccination: $4.0K per 100 dogs · highest cost, but addresses adopter health concerns directly

Social media exposure emerges as the most cost-effective starting point: lowest per-dog cost, largest detectable effect, and smallest required sample. Fee reduction is a viable secondary lever, particularly targeted at long-stay dogs during peak intake periods. Vaccination is the most expensive per dog but may be justified for segments where health uncertainty is a known barrier to adoption.
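
Combining the cost estimates with the target lifts gives a rough cost per additional adoption. This is a back-of-envelope calculation using only the planning figures above:

```r
# Cost per 100 treated dogs, and target lift in percentage points
# (i.e. expected extra adoptions per 100 treated), from the planning
# estimates in this study
cost_per_100 <- c(fee = 1500, social = 1000, vaccination = 4000)
lift_per_100 <- c(fee = 10,   social = 20,   vaccination = 15)

cost_per_extra_adoption <- cost_per_100 / lift_per_100
round(cost_per_extra_adoption)   # fee 150, social 50, vaccination 267
```

On these assumptions, each additional social-media-driven adoption costs roughly a third of a fee-reduction adoption and a fifth of a vaccination-driven one, which is what makes it the natural first rollout.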

Key Charts

Visual summaries of the core findings across all three interventions — from true positive rates and adoption lift to cost efficiency. These charts translate the simulation outputs into a format that supports operational decision-making.

True Positive Rate by Intervention

Under the expected effect scenarios, all three interventions exceeded 76% true positive rate, confirming the study design is sensitive enough to detect real differences in adoption outcomes. Social media exposure led with the highest detection rate despite having the smallest sample.

True positive rate (%) — expected effect scenario, 1,000 simulations
Social Media: 79.9%
Vaccination: 77.7%
Fee Reduction: 76.4%

Projected Adoption Rate: Control vs. Treatment

Each intervention targets a specific lift above the 60% baseline adoption rate. Vaccination and social media exposure are expected to deliver the largest absolute gains, while fee reduction provides a more modest but still meaningful improvement.

Fee Reduction: control 60% → treatment 70% (+10 percentage points)
Social Media: control 60% → treatment 80% (+20 percentage points)
Vaccination: control 60% → treatment 75% (+15 percentage points)

Cost vs. Effect Size

This chart plots each intervention's estimated cost per 100 dogs against its Cohen's d effect size. Social media stands out as the clear efficiency winner — delivering the largest effect at the lowest price point. Vaccination carries the highest cost but addresses a distinct and often-cited barrier: adopter anxiety about a pet's health.

Cost per 100 dogs vs. effect size (Cohen's d)
Social Media: d = 0.45 · $1.0K
Fee Reduction: d = 0.20 · $1.5K
Vaccination: d = 0.38 · $4.0K

Limitations

No study design is without constraints. Acknowledging these openly helps calibrate confidence in the findings and identify where future research should focus before full-scale operational rollout.

Sample Selection
Restricting eligibility to healthy, non-aggressive dogs aged 1–5 years and 5–30 kg improves internal validity but reduces generalizability. Older dogs, medically complex animals, and smaller breeds are underrepresented — and may respond differently to each intervention. Results should not be extrapolated to those populations without additional study.
RCT Design
Even in a randomized trial, unmeasured confounders can introduce noise. Factors like dog personality, coat color, breed popularity, and seasonal adoption trends were not systematically controlled. Shelter staff behavior — if they unconsciously treat treatment animals differently — could also contaminate the results.
Testing Method
Three separate t-tests are run without adjustment for multiple comparisons (e.g., Bonferroni correction). This increases the family-wise error rate — the chance that at least one false positive appears across the three tests. Future analyses bundling all three interventions should apply appropriate corrections.
Interaction Effects
The three interventions are tested independently, so synergistic or antagonistic effects between them are unknown. A dog receiving both a fee reduction and social media promotion may not see a simple additive benefit. A factorial design would be necessary to model those interaction terms.
Three-Month Window
Adoption outcomes are measured within a 90-day observation period. Longer-term effects — such as whether fee-waived adoptions lead to higher return rates, or whether vaccinated dogs sustain health advantages post-adoption — are outside the study scope.
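
The multiple-comparison point can be made concrete with R's built-in p.adjust(). The inputs below are the scenario-level p-values from the results table, used here purely as an illustration:

```r
# Bonferroni multiplies each p-value by the number of tests (capped at 1)
p_raw <- c(fee = 0.03, social = 0.01, vaccination = 0.02)
p_adj <- p.adjust(p_raw, method = "bonferroni")
p_adj   # fee 0.09, social 0.03, vaccination 0.06
```

After correction, only the social media result stays below the 0.05 threshold, which is exactly the family-wise inflation risk this limitation describes; less conservative methods such as Holm (method = "holm") are also available.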

Takeaways

Power analysis is not a formality — it determines whether a study can actually answer its question. Specifying effect sizes from prior literature rather than defaults made the sample sizes meaningful and defensible to stakeholders.

Simulating 1,000 experiments before collecting real data is how you validate a design. It surfaces whether your false positive rate is under control and whether your sample is large enough to detect the effect you care about — before committing resources.

Choosing t-tests over more complex methods was a deliberate decision, not a limitation. The goal was clean, communicable causal inference — not a predictive model. The right method depends on the decision being made, not on technical sophistication for its own sake.

Results translate directly into a prioritization playbook: start with social media exposure (best ROI, smallest sample), layer in fee reductions for long-stay dogs during peak intake, and apply vaccination support where health concerns are a documented barrier.

The study design intentionally excludes interaction effects between interventions. If ASPCA's goal evolves toward bundled strategies — fee reduction combined with social promotion — a factorial design or logistic regression framework would be the natural next step.