Boosting Dog Adoption Rates at ASPCA
A randomized controlled experiment designed for the American Society for the Prevention of Cruelty to Animals (ASPCA) to evaluate three actionable interventions and quantify their causal impact on dog adoption rates over a three-month window.
Overview
In 2023, 6.5 million cats and dogs entered shelters across the US. Animal shelters face persistent capacity constraints, and while outreach and fee promotions are common tactics, few organizations rigorously test which levers actually move adoption rates. Intuition isn't enough when resources are limited and shelter overcrowding has real welfare consequences.
This project designed a full randomized controlled experiment for the ASPCA to evaluate three modifiable factors: adoption fee reductions, increased social media exposure, and vaccination status. The goal wasn't just to establish statistical significance — it was to determine which interventions are practically detectable, scalable, and cost-effective enough to justify broader rollout under real operational constraints.
Experimental Design
Each intervention was tested as an independent randomized controlled trial. To reduce confounding from health or behavioral factors, eligibility was restricted to healthy, non-aggressive dogs aged 1–5 years and weighing 5–30kg. Dogs with repeated aggression, significant health conditions, prior return history, pregnancy, or service dog status were excluded — ensuring comparability across groups and isolating the treatment effect cleanly.
Randomization was implemented in R using set.seed() for reproducibility. Each experiment followed a parallel treatment–control structure with adoption outcome measured as a binary indicator: adopted or not within three months.
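The study implemented randomization in R with set.seed(); a minimal Python sketch of the same reproducible 1:1 assignment logic (dog IDs and the count of 526 are illustrative, not the study's roster):

```python
import numpy as np

def assign_treatment(dog_ids, seed=42):
    """Reproducibly split eligible dogs into treatment and control (1:1).
    A fixed seed plays the role of R's set.seed() in the actual study."""
    rng = np.random.default_rng(seed)
    shuffled = rng.permutation(dog_ids)
    half = len(shuffled) // 2
    return {"treatment": list(shuffled[:half]),
            "control": list(shuffled[half:])}

groups = assign_treatment([f"dog_{i}" for i in range(526)])
```

Because the seed is fixed, rerunning the assignment yields the identical split, which is what makes the randomization auditable.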
Statistical Analysis
The primary analysis method is a two-sample t-test for each intervention, comparing mean adoption rates between treatment and control groups. T-tests were chosen over ANOVA or logistic regression deliberately — the goal is clean causal inference for individual policy levers, not a predictive model. The t-test produces interpretable effect estimates and p-values that are directly actionable for non-technical stakeholders.
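In code, the per-intervention comparison reduces to a two-sample t-test on binary adoption indicators. A sketch with simulated data (the adoption rates and group size of 263 are illustrative, not study results):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Binary adoption outcomes: 1 = adopted within three months
control = rng.binomial(1, 0.60, size=263)    # baseline rate (illustrative)
treatment = rng.binomial(1, 0.70, size=263)  # lifted rate (illustrative)

# Welch two-sample t-test comparing mean adoption rates
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
effect = treatment.mean() - control.mean()   # lift in percentage points
```

The difference in means is itself the adoption-rate lift, which is why the output is directly readable by non-technical stakeholders.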
Power Analysis & Sample Sizing
Sample sizes were determined through power analysis targeting 90% power at α = 0.05. Effect sizes (Cohen's d) were specified from prior literature and historical shelter data, not assumed arbitrarily. Fee reduction carries the smallest expected effect (d = 0.20), requiring the largest sample. Social media carries a larger expected effect (d = 0.45), so fewer dogs are needed to detect it reliably.
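The mechanics of that calculation can be sketched with the standard normal-approximation formula for a two-sample t-test; the study's exact sizes may differ slightly (e.g., if it used proportion-based power calculations), so the numbers below illustrate the method rather than reproduce the study's figures:

```python
from math import ceil
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.90):
    """Approximate per-group sample size to detect standardized effect d
    with a two-sample t-test (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided critical value
    z_beta = norm.ppf(power)           # power quantile
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

n_fee = n_per_group(0.20)     # smallest expected effect -> largest sample
n_social = n_per_group(0.45)  # largest expected effect -> smallest sample
```

The inverse-square dependence on d is why halving the expected effect size roughly quadruples the required sample.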
Anchoring baseline adoption probability at 0.60, the target lifts were: fee reduction → 0.70 (+10pp), social exposure → 0.80 (+20pp), vaccination → 0.75 (+15pp). These assumptions were used only for power planning — not to predetermine outcomes.
Each intervention was simulated 1,000 times to validate false positive rates, false negative rates, and mean effect estimates under both null and alternative scenarios. This simulation layer confirms the sample sizes are sufficient before any real data collection begins — a critical step that moves the study from theory to operational planning.
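The core of that simulation layer looks like the following sketch: repeatedly generate binary outcomes under a null or alternative scenario, run the t-test, and count rejections (the group size of 64 and the rates mirror the social media scenario but are illustrative):

```python
import numpy as np
from scipy import stats

def rejection_rate(p_control, p_treatment, n_per_group,
                   reps=1000, alpha=0.05, seed=7):
    """Share of simulated experiments where a two-sample t-test on binary
    adoption outcomes rejects H0. Under the null this estimates the false
    positive rate; under an alternative it estimates power."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        c = rng.binomial(1, p_control, n_per_group)
        t = rng.binomial(1, p_treatment, n_per_group)
        _, p = stats.ttest_ind(t, c, equal_var=False)
        rejections += p < alpha
    return rejections / reps

fpr = rejection_rate(0.60, 0.60, 64)    # null scenario: should sit near alpha
power = rejection_rate(0.60, 0.80, 64)  # +20pp lift: should be high
```

Running both scenarios before data collection is exactly the check described above: the null run validates the false positive rate, the alternative run validates power.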
Results
All three interventions produced statistically significant results under the expected effect scenario (p < 0.05), confirming the study is adequately powered to detect meaningful adoption rate differences. Under null scenarios (no true effect), false positive rates stayed at or below 6.3%, consistent with the nominal α = 0.05 once Monte Carlo error from 1,000 simulations is accounted for (roughly ±1.4pp at the 95% level).
| Intervention | Scenario | False Positives | True Negatives | False Negatives | True Positives | p-value |
|---|---|---|---|---|---|---|
| Fee Reduction | No Effect | 4.1% | 95.9% | — | — | 0.12 |
| Fee Reduction | +10pp Lift | — | — | 23.6% | 76.4% | 0.03 |
| Social Media | No Effect | 5.0% | 95.0% | — | — | 0.15 |
| Social Media | +20pp Lift | — | — | 20.1% | 79.9% | 0.01 |
| Vaccination | No Effect | 6.3% | 93.7% | — | — | 0.10 |
| Vaccination | +15pp Lift | — | — | 22.3% | 77.7% | 0.02 |
Social media exposure produced the strongest signal relative to its sample size — the highest true positive rate (79.9%) from just 128 dogs. This reflects its larger assumed effect size and suggests it may be the highest-ROI lever for a first rollout. Fee reduction required the most dogs to detect a smaller effect, which has direct cost implications when scaling.
Cost Analysis
Beyond statistical significance, the practical question is which intervention delivers the most adoption lift per dollar spent. Cost per 100 dogs was estimated for each lever to directly inform implementation priority.
Social media exposure emerges as the most cost-effective starting point: lowest per-dog cost, largest detectable effect, and smallest required sample. Fee reduction is a viable secondary lever, particularly targeted at long-stay dogs during peak intake periods. Vaccination is the most expensive per dog but may be justified for segments where health uncertainty is a known barrier to adoption.
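The prioritization logic reduces to a cost-per-incremental-adoption calculation. The per-dog costs below are hypothetical placeholders (the actual figures come from ASPCA's cost estimates, which are not reproduced here); only the relative ordering reflects the analysis above:

```python
# Hypothetical per-dog costs (USD) for illustration only -- not the
# study's actual cost estimates. Lifts are the planned effect targets.
interventions = {
    # name: (cost_per_dog_usd, expected_lift_pp)
    "social_media": (5.0, 20),
    "fee_reduction": (40.0, 10),
    "vaccination": (75.0, 15),
}

def cost_per_incremental_adoption(cost_per_dog, lift_pp):
    """Dollars per additional adoption: treating 100 dogs costs
    100 * cost_per_dog and yields roughly lift_pp extra adoptions."""
    return (100 * cost_per_dog) / lift_pp

ranking = sorted(interventions,
                 key=lambda k: cost_per_incremental_adoption(*interventions[k]))
```

Under these placeholder costs the ranking matches the rollout order recommended above: social media first, fee reduction second, vaccination third.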
Key Charts
Visual summaries of the core findings across all three interventions — from true positive rates and adoption lift to cost efficiency. These charts translate the simulation outputs into a format that supports operational decision-making.
True Positive Rate by Intervention
Under the expected effect scenarios, all three interventions exceeded 76% true positive rate, confirming the study design is sensitive enough to detect real differences in adoption outcomes. Social media exposure led with the highest detection rate despite having the smallest sample.
Projected Adoption Rate: Control vs. Treatment
Each intervention targets a specific lift above the 60% baseline adoption rate. Vaccination and social media exposure are expected to deliver the largest absolute gains, while fee reduction provides a more modest but still meaningful improvement.
Cost vs. Effect Size
This chart plots each intervention's estimated cost per 100 dogs against its Cohen's d effect size. Social media stands out as the clear efficiency winner — delivering the largest effect at the lowest price point. Vaccination carries the highest cost but addresses a distinct and often-cited barrier: adopter anxiety about a pet's health.
Limitations
No study design is without constraints. Acknowledging these openly helps calibrate confidence in the findings and identify where future research should focus before full-scale operational rollout.
Takeaways
Power analysis is not a formality — it determines whether a study can actually answer its question. Specifying effect sizes from prior literature rather than defaults made the sample sizes meaningful and defensible to stakeholders.
Simulating 1,000 experiments before collecting real data is how you validate a design. It surfaces whether your false positive rate is under control and whether your sample is large enough to detect the effect you care about — before committing resources.
Choosing t-tests over more complex methods was a deliberate decision, not a limitation. The goal was clean, communicable causal inference — not a predictive model. The right method depends on the decision being made, not on technical sophistication for its own sake.
Results translate directly into a prioritization playbook: start with social media exposure (best ROI, smallest sample), layer in fee reductions for long-stay dogs during peak intake, and apply vaccination support where health concerns are a documented barrier.
The study design intentionally excludes interaction effects between interventions. If ASPCA's goal evolves toward bundled strategies — fee reduction combined with social promotion — a factorial design or logistic regression framework would be the natural next step.
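A minimal sketch of what that next step could look like: a 2×2 factorial simulation analyzed with logistic regression including an interaction term, fit here by Newton–Raphson so the example needs no ML library. All coefficients are illustrative assumptions, not study estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# 2x2 factorial: each dog randomized independently to fee reduction
# and social media exposure (all effect sizes below are illustrative)
fee = rng.integers(0, 2, n)
social = rng.integers(0, 2, n)
# Intercept 0.4 gives a ~60% baseline adoption rate on the logit scale
logit = 0.4 + 0.4 * fee + 0.9 * social + 0.3 * fee * social
adopted = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Design matrix with an interaction column
X = np.column_stack([np.ones(n), fee, social, fee * social])

# Logistic regression via Newton-Raphson iterations
beta = np.zeros(4)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)                      # IRLS weights
    grad = X.T @ (adopted - p)           # score vector
    hess = (X * W[:, None]).T @ X        # observed information
    beta += np.linalg.solve(hess, grad)
# beta[3] estimates the fee x social interaction on the log-odds scale
```

A nonzero interaction estimate is what a bundled-strategy analysis would look for, and it is precisely the quantity the current single-lever t-tests cannot measure.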