Part 1: Research Proposal

Memo to Decision Makers

Authors (Names and Percentages): Dae Hwan Kim 100%

Subject: Research Proposal to Enhance Dog Adoption Rates at ASPCA Shelters

Date: Dec 7, 2024

Dear Decision Makers,

The ASPCA shelters in New York City face critical challenges in increasing adoption rates, contributing to overcrowding and limiting their capacity to rescue more animals. With over 6.5 million cats and dogs entering shelters nationwide in 2023 (Shelter Animals Count, 2024), addressing this issue is vital to enhancing animal welfare and advancing ASPCA’s mission.

This research aims to evaluate three strategies to boost adoption rates: reducing adoption fees by 10%, increasing social media campaigns with engaging content, and determining whether fully vaccinated dogs are adopted at higher rates.

The research will involve a randomized controlled trial with groups designed to achieve robust statistical power. For the Fee Reduction intervention, each group (treatment and control) will include 506 dogs. The Social Media Exposure intervention will have 128 dogs per group, while the Vaccination Status intervention will require 226 dogs per group. These interventions will focus on fee adjustments, enhanced social media exposure, and vaccination incentives to evaluate their impact on adoption rates.

Collaborating on this research will empower ASPCA to implement evidence-based strategies that align with its mission of improving animal welfare, while also strengthening community engagement and operational efficiency. This research will provide actionable insights to:

Increase Efficiency: Implement proven methods to maximize adoptions.

Engage the Community: Address adopter concerns around cost and health.

Expand Impact: Reduce overcrowding and enable more rescues.

We look forward to discussing how this research can drive meaningful change and collaborating to bring these strategies to life for the benefit of shelter animals and adopters alike.

Sincerely, Research Team

Statement of the Problem

Authors (Names and Percentages): Miaomiao Xie 100%

The American Society for the Prevention of Cruelty to Animals (ASPCA) faces a significant challenge in increasing pet adoption rates. Based on Shelter Animals Count estimates, 6.5 million cats and dogs entered shelters and rescues across America in 2023, creating a persistent capacity crisis (Shelter Animals Count, 2024). The ASPCA has made ongoing efforts to promote pet adoption by collaborating with animal welfare partners, seeking direct financial support from willing donors, and hosting various types of campaigns. However, current practices still fall short, hampered by limited community engagement and inadequate marketing strategies. Additionally, many potential adopters remain unaware of the benefits of adopting pets from shelters, an issue compounded by societal misconceptions surrounding shelter animals, leading to stigma and hesitance. As a result, countless animals remain in shelters longer than necessary, not only increasing their vulnerability and burdening shelters but also preventing other pets in need from taking their place.

Addressing this problem is essential for both easing the strain on shelters and improving animal welfare. By developing innovative strategies to promote adoptions like decreasing adoption fees, improving vaccination status and social media exposure, the ASPCA can better serve its mission, improve community awareness, and ultimately make a life-changing difference for animals. This research aims to identify and implement best practices to boost adoption rates and improve overall organizational sustainability.

Literature Review

Authors (Names and Percentages): Ruixin Cao 50%, Jiarong Guo 50%

Every day, millions of abandoned or lost pets endure harsh conditions in shelters. Reducing the suffering of these animals is a key goal of animal welfare programs worldwide. However, shelters can become overwhelmed when the number of animals exceeds their capacity, leading to increased euthanasia rates (Zadeh et al., 2022). To mitigate this, it is essential to identify the factors that boost adoption rates and shorten the time animals wait to be adopted. Protopopova and Gunter conducted a study to optimize animal adoption by modeling the length of stay in shelters using machine learning algorithms such as gradient boosting and random forest. The research identified key predictors that would increase adoption rates and decrease relinquishment and return rates. In terms of a dog’s breed, size, and age, results showed that lighter-colored dogs are 30% more likely to be adopted. Additionally, dogs that remained calm in the kennel had higher chances of adoption, underlining the value of behavioral training programs. Despite these findings, the researchers stressed the need for further behavioral data and cross-country studies to refine the accuracy of adoption predictions (Protopopova & Gunter, 2017).

Videira, Nogueira, and Gomes investigated the effectiveness of social marketing strategies in stimulating animal adoption with a specific focus on the role of social media. Using two methods, they surveyed 312 respondents and conducted a case study featuring 90 social media publications from a Portuguese Animal Shelter Association (ASA). From the survey, the research revealed that social media played an important role because 29.4% of people surveyed adopted animals they saw online. Empathy for the animal’s situation (59.6%) was the top motivator, while lack of space (M = 3.39) and time (M = 3.37) were the main barriers to ownership. As for the second approach, posts featuring high-quality photos and videos elicited greater engagement such as likes, shares, and comments, which correlated with higher adoption inquiries. The study also highlighted that consistent posting schedules and interactive features, such as polls and direct messages, further enhanced community involvement and sustained interest in adoption campaigns. While the study provides valuable insights into leveraging social media campaigns for animal adoption, it underscores the need for more detailed studies to address the limited literature on social marketing in animal welfare (Videira, Nogueira, & Gomes, 2023).

Zadeh et al. extended this research by examining the impact of vaccination status, sterilization, and adoption fees on adoption rates. Their study demonstrated that vaccinated pets were adopted 20% faster, and sterilized animals experienced a 30% quicker adoption rate. Although recent studies have shifted attention to modifiable factors such as vaccination status and adoption fees (Zadeh et al., 2022), gaps remain in the evidence available to individual shelter systems. To address these gaps, this study aims to assess the effectiveness of vaccination status, adoption fees, and social media exposure on adoption rates at the ASPCA in New York City.

Research Questions, Hypotheses, and Effects

Authors (Names and Percentages): Bihan Wang 70%, Dae Hwan Kim 30%

Research Question 1: Relative to maintaining current adoption fees, would decreasing the adoption fee for animals of various ages, breeds, and health statuses increase the adoption rate at the adoption center within a three-month period?

-H0: Reducing adoption fees will not lead to a statistically significant increase in the adoption rate compared to maintaining current fees.

-Ha: Reducing adoption fees will lead to a statistically significant increase in the adoption rate compared to maintaining current fees.

-Effect: A 10% decrease in the adoption fee is expected to increase the adoption rate from 60% to 70% – an absolute increase of 10 percentage points.

Research Question 2: Relative to standard promotional efforts, would increasing social media exposure (e.g. frequency and number of posts) for adoptable pets lead to a higher adoption rate at the adoption center over a three-month period?

-H0: Increasing social media exposure for adoptable pets will not lead to a statistically significant increase in the adoption rate compared to standard promotional efforts.

-Ha: Increasing social media exposure for adoptable pets will lead to a statistically significant increase in the adoption rate compared to standard promotional efforts.

-Effect: Being posted on social media is expected to increase the adoption rate from 60% to 80% – an absolute increase of 20 percentage points.

Research Question 3: Relative to unvaccinated or partially vaccinated animals, would offering fully vaccinated animals lead to a higher adoption rate at the adoption center over a three-month period?

-H0: Offering fully vaccinated animals will not lead to a statistically significant increase in the adoption rate compared to offering animals that are not fully vaccinated.

-Ha: Offering fully vaccinated animals will lead to a statistically significant increase in the adoption rate compared to offering animals that are not fully vaccinated.

-Effect: Offering fully vaccinated animals is expected to increase the adoption rate from 60% to 75% – an absolute increase of 15 percentage points.

Importance of the Study and Social Impact

Authors (Names and Percentages): Ruixin Cao 100%

Benefits of the study to organization

This study investigates the effects of adoption fees, social media exposure, and vaccination status on dog adoption rates at ASPCA shelters in New York City. The findings could help the ASPCA update their adoption policy and make data-driven decisions.

Specifically, by investigating the impact of reduced adoption fees, the organization could set a more reasonable adoption price and attract more adopters. Moreover, understanding the role of social media exposure could help the ASPCA design more efficient campaigns in terms of frequency and the number of posts. Finally, the organization could benefit from offering fully vaccinated dogs. This has the potential to attract more adopters if the positive impact of vaccination is confirmed.

Benefits to Society

In addition, this study may also have social benefits. First, low-income adopters may be more willing to adopt a dog. Second, fully vaccinating dogs could protect adopters’ health and prevent potential illnesses. Lastly, policymakers could make data-driven policies by either funding vaccinations or encouraging social media advertisements, which may increase adoption rates across society.

In terms of sustainability, the study may reduce waiting times and increase adoption rates for dogs in shelters. This could mitigate overcrowding and, hence, help shelters balance their resources more efficiently and sustainably. Additionally, social awareness of dog adoption may be raised by increasing social media exposure. This would improve the overall well-being of dogs in shelters.

Research Plan

Authors (Names and Percentages): Ruixin Cao 20%, Jiarong Guo 20%, Dae Hwan Kim 20%, Bihan Wang 20%, Miaomiao Xie 20%

Population of Interest

Authors (Names and Percentages): Ruixin Cao

A randomized controlled trial will be carried out. Only dogs without health issues will be eligible to participate. The population of interest is all adoptable dogs in New York City adoption shelters, both now and in the future.

Sample Size

Authors (Names and Percentages): Ruixin Cao

The dogs would be randomly divided into experimental and control groups. For the fee reduction, there would be 506 dogs in each group (1,012 in total). For the social media exposure, there would be 128 dogs per group (256 in total). For the vaccination comparison, there would be 226 dogs in each group (452 in total). The specific criteria are outlined as follows:

-Experimental group 1: Dogs in the adoption center with a discounted or fully waived adoption fee.
-Control group 1: Dogs in the adoption center with the standard adoption fee.
-Experimental group 2: Dogs in the adoption center receiving high social media exposure.
-Control group 2: Dogs in the adoption center with little to no social media exposure.
-Experimental group 3: Dogs in the adoption center that are fully vaccinated.
-Control group 3: Dogs in the adoption center that are not fully vaccinated.

Sample Selection

Authors (Names and Percentages): Ruixin Cao

The sample would be the current dogs in the ASPCA adoption shelters. The inclusion criteria would be as follows: 1. Vaccination status: dogs that have not been vaccinated before entering the center. 2. Age range: dogs aged between 1 and 5 years. 3. Behavior: dogs without a history of aggression. 4. Dietary health: dogs without special diet requirements. 5. Size and weight range: dogs typically between 5 and 30 kg, to ensure uniformity.

The exclusion criteria would be as follows: 1. Behavioral issues: dogs with a history of repeated aggression or severe anxiety. 2. Health issues: dogs with obvious disabilities. 3. Multiple returns: dogs returned to the adoption center multiple times, which may indicate behavioral issues. 4. Pregnant or nursing dogs: female dogs currently pregnant or nursing are excluded to avoid stress and health risks. 5. Service dogs: service dogs currently in the adoption center are excluded.

Operational Procedures

Authors (Names and Percentages): Bihan Wang

All dogs will be randomized using a randomization table generated in R. To conduct this study, we will implement three operational procedures over a three-month period. First, a 10% reduction in adoption fees will be applied to a treatment group, with a control group maintaining standard fees, to assess whether lower fees significantly increase adoption rates. Second, increased social media exposure (e.g., more frequent and diverse posts) will be tested against standard promotional efforts to evaluate its impact on adoption rates. Finally, offering pre-adoption vaccination services will be tested to determine whether this incentive raises adoption rates compared with not receiving the offer.
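As a minimal sketch of the randomization step, assuming a simple two-arm design; the column names (`dog_id`, `arm`) are illustrative, not from the ASPCA registration system.

```r
# Sketch of generating a randomization table in R for one intervention;
# the dog identifiers and column names here are hypothetical.
library(data.table)

set.seed(1031)
n_per_group <- 253  # arm size used in the fee-reduction simulation in Part 2

dogs <- data.table(dog_id = seq_len(2 * n_per_group))
# Randomly permute a balanced vector of labels to assign arms
dogs[, arm := sample(rep(c("Treatment", "Control"), each = n_per_group))]

table(dogs$arm)
```

Permuting a balanced label vector guarantees exactly equal arm sizes, unlike assigning each dog independently with probability 0.5.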

Brief Schedule

Authors (Names and Percentages): Jiarong Guo

| Key Milestone | Start Date | End Date |
|---|---|---|
| Brainstorm potential research questions | 9/30 | 10/13 |
| Finalize research questions and hypotheses; conduct literature review and refine methodology | 10/14 | 10/20 |
| Revise the research proposal | 10/21 | 11/3 |
| Discuss and divide work among team members; run simulations | 11/4 | 11/17 |
| Interpret results and compile findings into a structured report | 11/18 | 12/1 |
| Finalize report and presentation | 12/2 | 12/8 |

Data Collection

Authors (Names and Percentages): Jiarong Guo

In this study, we will collect prospective data by downloading datasets from the ASPCA official website and making observations from social media. Based on real-world variables that may influence adoption rates, we will generate randomized datasets for hypothetical scenarios, run statistical tests, and compare the interventions and factors that contribute to changes in adoption rates at animal shelters. To accomplish this, each dataset will be randomly split into treatment and control groups so that we can measure whether our hypothesized percentages are met.

Data Security

Authors (Names and Percentages): Miaomiao Xie

To maintain the confidentiality and security of the records involved in this research, several precautions will be implemented. First, all collected data, including adoption records, vaccination statuses, and social media exposure metrics, will be securely stored on encrypted servers managed by the ASPCA. Access to the data will be limited to authorized personnel and researchers who have completed data security training. In addition, all records will be anonymized: identifying information related to the animals, adopters, and shelter personnel will be removed prior to analysis, and unique identifiers will be used to track data while maintaining confidentiality. All researchers involved in the study will complete training in ethical data management and confidentiality protocols. As for data retention and disposal, data will be retained only for the duration necessary to complete the study and validate results. Upon conclusion, all sensitive records will be securely deleted in accordance with data protection regulations.

Variables

Authors (Names and Percentages): Miaomiao Xie

The outcome of this study would be measured as the adoption rate at the ASPCA NYC adoption center, defined as the percentage of dogs adopted over the total number of dogs currently in the shelter in a three-month period. It will be recorded by the registration system managed by the ASPCA.
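As a minimal sketch, the outcome defined above could be computed from registration records like this; the toy records and column names are assumptions for illustration.

```r
# Toy illustration of the outcome measure: adoption rate = dogs adopted /
# dogs in the shelter over the period. Records and columns are hypothetical.
library(data.table)

records <- data.table(
  dog_id   = 1:10,
  Adoption = c(1, 0, 1, 1, 0, 1, 0, 1, 1, 0)  # 1 = adopted within 3 months
)

# With a 0/1 indicator, the mean is exactly the adoption rate
adoption_rate <- records[, mean(Adoption)]
adoption_rate  # 0.6
```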

Depending on the research question, three treatments will be applied. First, a decreased adoption fee gives adopters a 10% discount, hypothesized to influence decisions since cost is a key factor. Second, social media exposure is measured by the number of social media posts (in this study, Instagram posts) about an individual dog over three months, aiming to boost visibility and emotional engagement to encourage adoptions. Lastly, a pre-adoption vaccination service ensures all mandatory vaccines are completed before adoption, addressing potential adopters’ concerns about vaccination fees and the dog’s health status.

Statistical Analysis Plan

Authors (Names and Percentages): Dae Hwan Kim 100%

Overview of Analytical Methods

The analysis plan is designed to evaluate the impact of three interventions—fee reductions, increased social media exposure, and vaccination status—on adoption rates of shelter animals. The primary outcome of interest is the adoption rate, defined as the percentage of animals adopted over a three-month period.

Primary Statistical Method: T-Test

The T-test is the chosen analytical tool for this study due to its simplicity, robustness, and alignment with the research design. Specifically, the T-test evaluates mean differences in adoption rates between two groups:

Treatment group (e.g., reduced adoption fees).
Control group (e.g., standard adoption fees).

Each hypothesis compares two groups independently:

Fee Reduction: Adoption rates under reduced fees vs. standard fees.
Social Media Exposure: High exposure vs. no/little exposure.
Vaccination Status: Fully vaccinated animals vs. not fully vaccinated animals.
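A minimal sketch of one such comparison, using simulated binary outcomes under the fee-reduction assumptions (adoption probabilities 0.6 vs. 0.7, 506 dogs per group); the data are simulated for illustration, not study results.

```r
# Hedged sketch: Welch two-sample t-test on 0/1 adoption indicators,
# with simulated data mirroring the fee-reduction assumptions.
set.seed(1031)
treatment <- rbinom(n = 506, size = 1, prob = 0.7)  # reduced fees
control   <- rbinom(n = 506, size = 1, prob = 0.6)  # standard fees

tt <- t.test(x = treatment, y = control, alternative = "greater")
tt$p.value

# Because the outcome is binary, a two-sample proportion test is a
# natural cross-check that should lead to a similar conclusion:
pt <- prop.test(x = c(sum(treatment), sum(control)), n = c(506, 506),
                alternative = "greater")
pt$p.value
```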

Justification for the T-Test

Directly Addresses Hypotheses: The T-test assesses whether the differences in mean adoption rates between groups are statistically significant. For example, it determines if reduced fees significantly improve adoption rates compared to standard fees.
Ease of Interpretation: Outputs such as mean differences and p-values are intuitive and actionable for stakeholders, ensuring clarity for decision-makers.
Robust to Unequal Sample Sizes: The T-test remains valid when group sizes differ, which is essential given the varying sample sizes across interventions.
Focused Scope: Unlike ANOVA or logistic regression, which are suitable for multi-factor or predictive studies, the T-test isolates the impact of individual factors, aligning with the study’s objectives.

Effect Size Selection

The study’s sample sizes were determined based on the required statistical power (90%) and significance level (α=0.05) to detect meaningful differences in adoption rates between treatment and control groups. Using Cohen’s d to calculate effect sizes, we estimated the necessary sample sizes for each intervention. The sample sizes ensure robust detection of statistically significant differences, allowing the study to provide actionable insights.

Fee Reduction
Effect Size: d ≈ 0.20
Treatment Probability: 0.7 (a 10-percentage-point increase from the control probability of 0.6).
Required Sample Size: 506 per group (1,012 in total).
This sample size reflects the small effect size associated with financial incentives in adoption rates. Studies such as “Free to a Good Home” by the Best Friends Animal Society (2022) support this assumption, citing significant increases in adoption rates during fee-waived events. Additionally, ASPCA data on adoption trends further validates the chosen probabilities (“Adoption Promotions,” ASPCA, 2023).

Social Media Exposure
Effect Size: d ≈ 0.41
Treatment Probability: 0.8 (a 20-percentage-point increase from the control probability of 0.6).
Required Sample Size: 128 per group (256 in total).
The smaller sample size required for this intervention is due to the larger effect size observed in prior studies, such as the ASPCA’s research indicating a 66% increase in adoptions through targeted social media campaigns (“New Research Points to Social Media as Important Tool for Animal Shelters and Rescues,” ASPCA, 2018).

Vaccination Status
Effect Size: d ≈ 0.31
Treatment Probability: 0.75 (a 15-percentage-point increase from the control probability of 0.6).
Required Sample Size: 226 per group (452 in total).
The selected sample size accounts for the strong influence of pre-adoption vaccinations on adopter confidence, supported by findings from the AAHA and ASPCA (“Vaccination in Shelter Animal Populations,” ASPCA, 2024; “Vaccination of Shelter Dogs and Puppies,” AAHA, 2022).

Sample Size and Statistical Power

Authors (Names and Percentages): Dae Hwan Kim 50%, Bihan Wang 50%

Statistical Power Calculation

The required sample sizes ensure at least 90% power to detect the minimum effect sizes for each intervention:

Fee Reduction: Detects a small effect (d ≈ 0.20) with 90% power.
Social Media Exposure: Detects a moderate effect (d ≈ 0.41) with 90% power.
Vaccination Status: Detects a medium effect (d ≈ 0.31) with 90% power.

Sample sizes were determined to achieve 90% power at a 5% significance level, ensuring adequate sensitivity to detect differences in adoption rates. Effect sizes (d) were approximated using Cohen’s effect size for proportions: d = (p_treatment − p_control) / √(p_control × (1 − p_control)).

Sample Size Calculation

Fee Reduction: d ≈ 0.20, 506 per group.
Social Media Exposure: d ≈ 0.41, 128 per group.
Vaccination Status: d ≈ 0.31, 226 per group.

options(repos = c(CRAN = "https://cloud.r-project.org"))

# Load necessary libraries
library(pwr)

# Effect size calculation
effect_sizes <- list(
  fee = (0.7 - 0.6) / sqrt(0.6 * (1 - 0.6)),       
  social = (0.8 - 0.6) / sqrt(0.6 * (1 - 0.6)),  
  vaccine = (0.75 - 0.6) / sqrt(0.6 * (1 - 0.6))
)

# Sample size calculation
sample_sizes <- sapply(effect_sizes, function(d) {
  pwr.t.test(d = d, sig.level = 0.05, power = 0.9, type = "two.sample")$n
})

# Create and display results
results <- data.frame(
  Scenario = names(effect_sizes),
  Effect_Size = unlist(effect_sizes),
  Required_Sample_Size_Per_Group = ceiling(sample_sizes)
)

print(results)
##         Scenario Effect_Size Required_Sample_Size_Per_Group
## fee          fee   0.2041241                            506
## social    social   0.4082483                            128
## vaccine  vaccine   0.3061862                            226
Hypothesis Testing

The following hypotheses will be tested for each intervention:

Fee Reduction:
Null Hypothesis (H0): Reduced adoption fees do not significantly increase adoption rates.
Alternative Hypothesis (Ha): Reduced adoption fees significantly increase adoption rates.

Social Media Exposure:
H0: High social media exposure does not significantly increase adoption rates.
Ha: High social media exposure significantly increases adoption rates.

Vaccination Status:
H0: Vaccination status does not significantly influence adoption rates.
Ha: Fully vaccinated animals have significantly higher adoption rates.

Secondary Analyses

For exploratory purposes, additional analyses may include:

Interaction Effects: Using ANOVA to explore interactions between interventions, such as combining fee reductions with increased social media exposure.
Logistic Regression: If predictive modeling is required, logistic regression will evaluate the likelihood of adoption based on multiple predictors.
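A hedged sketch of the logistic-regression option on simulated data; the predictor effect sizes below are illustrative assumptions, not study findings.

```r
# Illustrative logistic regression: likelihood of adoption as a function
# of all three interventions at once. Data and effects are simulated.
set.seed(1031)
n <- 500
dat <- data.frame(
  fee_reduced = rbinom(n, 1, 0.5),
  high_social = rbinom(n, 1, 0.5),
  vaccinated  = rbinom(n, 1, 0.5)
)
# Hypothetical log-odds: 60% baseline plus assumed positive effects
log_odds <- with(dat, qlogis(0.6) + 0.4 * fee_reduced +
                   0.8 * high_social + 0.6 * vaccinated)
dat$adopted <- rbinom(n, 1, plogis(log_odds))

fit <- glm(adopted ~ fee_reduced + high_social + vaccinated,
           data = dat, family = binomial)
summary(fit)$coefficients
```

Exponentiating the fitted coefficients (`exp(coef(fit))`) yields odds ratios, which are often easier for stakeholders to interpret than log-odds.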

Justification for Treatment Probabilities

Baseline Control Probability (0.6)

The baseline adoption rate of 60% is reflective of real-world data from the ASPCA and national trends, where 55-65% of shelter animals are adopted annually (“Shelter Intake and Surrender,” ASPCA, 2024). This provides a realistic and representative starting point for the study.

Fee Reduction Effect (0.7 Treatment Probability)

A 10% increase in adoption rates due to reduced fees is supported by research showing the significant influence of financial incentives on adoption decisions. For example, shelters implementing fee-waived events reported adoption increases of up to 50% (“Free to a Good Home,” Best Friends Animal Society, 2022).

Social Media Exposure Effect (0.8 Treatment Probability)

Increased social media exposure, particularly through targeted and engaging campaigns, has been shown to significantly boost adoption rates, as evidenced by ASPCA’s findings of a 66% increase from such efforts (“New Research Points to Social Media,” ASPCA, 2018).

Vaccination Status Effect (0.75 Treatment Probability)

Research consistently demonstrates the appeal of fully vaccinated pets to adopters, with a 15% increase in adoption rates due to reduced health risks and lower immediate expenses (“Vaccination of Shelter Dogs and Puppies,” AAHA, 2022).

Possible Recommendations

Authors (Names and Percentages): Ruixin Cao 100%

1. Impact of Decreasing Adoption Fees on Adoption Rates
  1. If the null hypothesis is not rejected: There is not enough evidence suggesting that decreasing the adoption fee could increase the adoption rate significantly. Therefore, ASPCA could keep the current adoption fee and investigate other factors that could increase the adoption rate.
  2. If the null hypothesis is rejected: Decreasing the adoption fee could increase the adoption rate significantly in the pet shelter. Therefore, ASPCA could consider decreasing the adoption fee, especially during the overwhelming time period or for those dogs that have long waiting times in shelters.
2. Impact of Increased Social Media Exposure on Adoption Rates
  1. If the null hypothesis is not rejected: There is not enough evidence suggesting that Increasing social media exposure could increase the adoption rate significantly. Therefore, ASPCA should update their social media marketing strategy. For example, focusing more on the quality of social media content may be more effective. Additionally, traditional advertisement may also be a choice.
  2. If the null hypothesis is rejected: Increasing social media exposure could increase the adoption rate significantly. Therefore, ASPCA could consider increasing the number of posts, such as updating the images or story of dogs regularly. This could attract more adopters.
3. Impact of Vaccination Status on Adoption Rates
  1. If the null hypothesis is not rejected: There is not enough evidence suggesting that there is a difference of adoption rate between the fully vaccinated dog and not fully vaccinated dog. ASPCA could consider providing more health information about dogs with adopters. The vaccination statutes should not be the only criteria to attract adopters.
  2. If the null hypothesis is rejected: This means providing fully vaccinated dogs can significantly increase adoption rates. ASPCA should give vaccination to all of the dogs in the shelter and highlight this advantage. This could alleviate adopters’ concern about the dog’s health and increase the adoption rate.
Conclusion

Based on the simulation outcomes, the ASPCA should reduce adoption fees, increase vaccination rates, and expand social media exposure. Specifically, reducing the adoption fee lowers the financial barrier for adopters; increasing social media exposure attracts more attention from potential adopters; and providing vaccinations alleviates concerns about dogs’ health status. By doing so, the ASPCA could better fulfill its mission of improving animal welfare and contribute to a more sustainable society.

Limitations and Uncertainties

Authors (Names and Percentages): Miaomiao Xie 100%

While this study provides valuable insights into factors influencing adoption rates, several limitations could affect the completeness of the findings and the interpretation of the results. First, there is potential sample selection and population bias: by excluding dogs with health issues, aggressive behavior, or other complexities, the study focuses on a subset of “easier-to-adopt” animals, which may limit the applicability of the findings to the broader population of shelter animals. As for generalizability, the results may not apply to all shelters, particularly those with different operational practices or adopter demographics; caution should be exercised when extrapolating the findings beyond New York City shelters or to animals not meeting the inclusion criteria. In addition, while the randomized controlled design strengthens causal inference, unmeasured confounding factors may still influence the outcomes, requiring careful interpretation of the findings. Finally, because our primary statistical method is a t-test, it will not reveal synergistic or interaction effects among the three factors, such as the combination of increased social media exposure and completed vaccination status. ANOVA would be a good complement, depending on the goal of the analysis.
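The interaction check mentioned above could be sketched as follows, on simulated data; all probabilities and effects here are assumptions for illustration only.

```r
# Sketch of a two-way ANOVA with an interaction term between two of the
# interventions; the additive adoption probabilities are assumed.
set.seed(1031)
n <- 400
sim <- expand.grid(fee = c("standard", "reduced"),
                   social = c("low", "high"))
sim <- sim[rep(1:4, each = n / 4), ]
# Additive (no-interaction) adoption probabilities under the assumptions
p <- 0.6 + 0.05 * (sim$fee == "reduced") + 0.10 * (sim$social == "high")
sim$adopted <- rbinom(n, 1, p)

fit_aov <- aov(adopted ~ fee * social, data = sim)
summary(fit_aov)  # the fee:social row tests the interaction
```

A significant `fee:social` term would suggest the interventions reinforce (or undercut) each other, which separate t-tests cannot detect.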

Part 2: Simulated Studies

Authors (Names and Percentages): Ruixin Cao 20%, Jiarong Guo 20%, Dae Hwan Kim 20%, Bihan Wang 20%, Miaomiao Xie 20%

Research Question 1:

Scenario 1: Fee Reduction - No Effect

Authors (Names and Percentages): Bihan Wang 50%, Jiarong Guo 50%

Simulation

In the first simulation scenario, the effect of reducing adoption fees was modeled under the assumption that it has no impact on the adoption rates. A dataset was generated with a consistent adoption rate of 60% for both the control group (standard fees) and the treatment group (reduced fees). The results of the analysis showed no significant difference in adoption rates between the two groups, with the simulated mean difference close to zero and the confidence interval crossing zero. This suggests that if fee reductions truly have no effect, shelters would not observe a meaningful increase in adoptions, emphasizing the need to focus on other strategies to increase adoption rates.

#Fee reduction no effect
##Simulation
n <- 506
library(data.table)
library(DT)
set.seed(1031)
control <- 0.6
treatment1 <- 0.6

adoption.dat1 <- data.table(Group = c(rep.int(x = "Treatment", times = n/2), rep.int(x = "Control", times = n/2)))

adoption.dat1[Group == "Control", Adoption := rbinom(n = .N, size = 1, prob = control)]
adoption.dat1[Group == "Treatment", Adoption := rbinom(n = .N, size = 1, prob = treatment1)]


analyze.experiment <- function(the.dat, type = "t.test",
                               iv.name, dv.name, value.treatment, value.control,
                               alternative = "two.sided", alpha = 0.05) {
  require(data.table)
  setDT(the.dat)
  
  if (type == "t.test") {
    the.test <- t.test(x = the.dat[get(iv.name) == value.treatment, get(dv.name)],
                       y = the.dat[get(iv.name) == value.control, get(dv.name)],
                       alternative = alternative, conf.level = 1 - alpha)
  }
  if (type == "prop.test") {
    x.treatment <- the.dat[get(iv.name) == value.treatment,
                           sum(get(dv.name), na.rm = TRUE)]
    x.control <- the.dat[get(iv.name) == value.control,
                         sum(get(dv.name), na.rm = TRUE)]
    n.treatment <- the.dat[get(iv.name) == value.treatment &
                             !is.na(get(dv.name)), .N]
    n.control <- the.dat[get(iv.name) == value.control &
                           !is.na(get(dv.name)), .N]
    the.test <- prop.test(x = c(x.treatment, x.control),
                          n = c(n.treatment, n.control),
                          alternative = alternative, conf.level = 1 - alpha)
  }
  
  the.effect1 <- the.test$estimate[1] - the.test$estimate[2]
  p <- the.test$p.value
  result <- data.table(effect1 = the.effect1, p = p)
  
  return(result)
}

#Testing the Function by calculating effect and p-value
analyze.experiment(the.dat = adoption.dat1, type = "t.test", iv.name = "Group",
                   dv.name = "Adoption", value.treatment = "Treatment", value.control = "Control",
                   alternative = "greater", alpha = 0.05)
##       effect1         p
##         <num>     <num>
## 1: -0.0513834 0.8825185
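Because the outcome is binary, the t-test here approximates a comparison of two proportions. As a quick sanity check (a sketch using the simulated adoption.dat1 from above, not part of the original analysis), the same comparison can be run directly through prop.test; its continuity-corrected p-value will differ slightly from the t-test's.

```r
# Cross-check: two-sample proportion test on the same simulated data.
# keyby sorts alphabetically, so row 1 is Control and row 2 is Treatment.
counts <- adoption.dat1[, .(adopted = sum(Adoption), total = .N), keyby = Group]
prop.test(x = counts$adopted, n = counts$total, alternative = "two.sided")
```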
#Repetition
B <- 1000
n <- 506
RNGversion(vstr = "3.6")
set.seed(1031)
Experiment <- 1:B
Group <- c(rep.int(x = "Treatment", times = n/2), rep.int(x = "Control", times = n/2))

sim.dat1 <- as.data.table(expand.grid(Experiment = Experiment, Group = Group))
setorderv(x = sim.dat1, cols = c("Experiment", "Group"), order = c(1,1))
sim.dat1[Group == "Control", Adoption := rbinom(n = .N, size = 1, prob = control)]
sim.dat1[Group == "Treatment", Adoption := rbinom(n = .N, size = 1, prob = treatment1)]
dim(sim.dat1)
## [1] 506000      3
Analysis
#Analyzing All of the Simulated Experiments
exp.results1 <- sim.dat1[, analyze.experiment(the.dat = .SD,
                                             type = "t.test", iv.name = "Group", dv.name = "Adoption",
                                             value.treatment = "Treatment", value.control = "Control",
                                             alternative = "greater", alpha = 0.05), keyby = "Experiment"]


#Further Analyses
##False positive rate (Type I error rate) of this simulated experiment
false_positive_rate1 <- exp.results1[, mean(p < 0.05)] * 100
false_positive_rate1
## [1] 4.1
true_negative_rate1 <- exp.results1[, mean(p >= 0.05)] * 100
true_negative_rate1
## [1] 95.9
##Mean Effect in Simulated Data
mean.effect1 <- exp.results1[, mean(effect1)]
mean.effect1
## [1] -0.002529644
##95% confidence interval for the effect
alpha = 0.05
ci1 <- exp.results1[, quantile(x = effect1, probs = c(alpha/2, 1 - alpha/2))]
ci1
##        2.5%       97.5% 
## -0.07905138  0.08300395
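As a rough analytic check on this interval (a normal-approximation sketch, not part of the original code): under the null, both groups share a 60% adoption rate with 253 dogs per group, so the middle 95% of simulated effects should span about ±1.96 standard errors of the difference in proportions.

```r
# Standard error of the difference between two independent proportions,
# each estimated from 253 dogs with true rate 0.6.
p0 <- 0.6
n.per.group <- 253
se <- sqrt(2 * p0 * (1 - p0) / n.per.group)
c(lower = -1.96 * se, upper = 1.96 * se)  # approximately -0.085 and 0.085
```

This agrees closely with the simulated quantiles of (-0.079, 0.083).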

Scenario 2: Fee Reduction - Expected Effect

Authors (Names and Percentages): Bihan Wang 50%, Jiarong Guo 50%

Simulation

In this scenario, a 10 percentage point increase in adoption rates (from 60% to 70%) was assumed for the treatment group with reduced fees. Across the simulated experiments, the mean difference in adoption rates was close to the assumed 0.10, and the 95% interval for the difference excluded zero. Under this assumption, the design detects the effect in most repetitions, supporting fee reductions as a strategy worth testing, particularly during high-traffic periods or for harder-to-adopt animals.

#Fee reduction expected effect size 
##Simulation
n <- 506
library(data.table)
library(DT)
set.seed(1031)
control <- 0.6
treatment2 <- 0.7

adoption.dat2 <- data.table(Group = c(rep.int(x = "Treatment", times = n/2), rep.int(x = "Control", times = n/2)))

adoption.dat2[Group == "Control", Adoption := rbinom(n = .N, size = 1, prob = control)]
adoption.dat2[Group == "Treatment", Adoption := rbinom(n = .N, size = 1, prob = treatment2)]


analyze.experiment <- function(the.dat, type = "t.test",
                               iv.name, dv.name, value.treatment, value.control,
                               alternative = "two.sided", alpha = 0.05) {
  # Compares the adoption outcome between the treatment and control groups
  # using either a two-sample t-test or a two-sample proportion test.
  require(data.table)
  setDT(the.dat)
  
  if (type == "t.test") {
    the.test <- t.test(x = the.dat[get(iv.name) == value.treatment, get(dv.name)],
                       y = the.dat[get(iv.name) == value.control, get(dv.name)],
                       alternative = alternative, conf.level = 1 - alpha)
  }
  if (type == "prop.test") {
    x.treatment <- the.dat[get(iv.name) == value.treatment,
                           sum(get(dv.name), na.rm = TRUE)]
    x.control <- the.dat[get(iv.name) == value.control,
                         sum(get(dv.name), na.rm = TRUE)]
    n.treatment <- the.dat[get(iv.name) == value.treatment &
                             !is.na(get(dv.name)), .N]
    n.control <- the.dat[get(iv.name) == value.control &
                           !is.na(get(dv.name)), .N]
    the.test <- prop.test(x = c(x.treatment, x.control),
                          n = c(n.treatment, n.control),
                          alternative = alternative, conf.level = 1 - alpha)
  }
  
  the.effect2 <- the.test$estimate[1] - the.test$estimate[2]
  p <- the.test$p.value
  result <- data.table(effect2 = the.effect2, p = p)
  
  return(result)
}

#Testing the Function by calculating effect and p-value
analyze.experiment(the.dat = adoption.dat2, type = "t.test", iv.name = "Group",
                   dv.name = "Adoption", value.treatment = "Treatment", value.control = "Control",
                   alternative = "greater", alpha = 0.05)
##       effect2          p
##         <num>      <num>
## 1: 0.06719368 0.05311688
#Repetition
B <- 1000 
n <- 506
RNGversion(vstr = "3.6")
set.seed(1031)
Experiment <- 1:B
Group <- c(rep.int(x = "Treatment", times = n/2), rep.int(x = "Control", times = n/2))

sim.dat2 <- as.data.table(expand.grid(Experiment = Experiment, Group = Group))
setorderv(x = sim.dat2, cols = c("Experiment", "Group"), order = c(1,1))
sim.dat2[Group == "Control", Adoption := rbinom(n = .N, size = 1, prob = control)]
sim.dat2[Group == "Treatment", Adoption := rbinom(n = .N, size = 1, prob = treatment2)]
dim(sim.dat2)
## [1] 506000      3
Analysis
#Analyzing All of the Simulated Experiments
exp.results2 <- sim.dat2[, analyze.experiment(the.dat = .SD,
                                            type = "t.test", iv.name = "Group", dv.name = "Adoption",
                                            value.treatment = "Treatment", value.control = "Control",
                                            alternative = "greater", alpha = 0.05), keyby = "Experiment"]


#Further Analyses
##the power of this simulated experiment
true_positive_rate1 <- exp.results2[, mean(p < 0.05)] * 100
true_positive_rate1
## [1] 76.4
false_negative_rate1 <- exp.results2[, mean(p >= 0.05)] * 100
false_negative_rate1
## [1] 23.6
##Mean Effect in Simulated Data
mean.effect2 <- exp.results2[, mean(effect2)]
mean.effect2
## [1] 0.09876285
##95% confidence interval for the effect
alpha = 0.05
ci2 <- exp.results2[, quantile(x = effect2, probs = c(alpha/2, 1 - alpha/2))]
ci2
##       2.5%      97.5% 
## 0.01581028 0.17786561

Research Question 2:

Scenario 1: Social Media Exposure - No Effect

Authors (Names and Percentages): Bihan Wang 50%, Jiarong Guo 50%

Simulation

For social media exposure, the null scenario assumed no improvement in adoption rates with increased exposure: both the high-exposure and low-exposure groups were simulated with a 60% adoption rate. The analysis showed no meaningful difference between the groups, with the 95% interval for the simulated difference spanning zero. If social media exposure truly has no impact on adoptions, shelters should re-evaluate the quality or type of content used in their campaigns to better engage potential adopters.

#Social Exposure no effect
##Simulation 
n <- 128
library(data.table)
library(DT)
set.seed(1031)
control <- 0.6
treatment3 <- 0.6

adoption.dat3 <- data.table(Group = c(rep.int(x = "Treatment", times = n/2), rep.int(x = "Control", times = n/2)))

adoption.dat3[Group == "Control", Adoption := rbinom(n = .N, size = 1, prob = control)]
adoption.dat3[Group == "Treatment", Adoption := rbinom(n = .N, size = 1, prob = treatment3)]

analyze.experiment <- function(the.dat, type = "t.test",
                               iv.name, dv.name, value.treatment, value.control,
                               alternative = "two.sided", alpha = 0.05) {
  # Compares the adoption outcome between the treatment and control groups
  # using either a two-sample t-test or a two-sample proportion test.
  require(data.table)
  setDT(the.dat)
  
  if (type == "t.test") {
    the.test <- t.test(x = the.dat[get(iv.name) == value.treatment, get(dv.name)],
                       y = the.dat[get(iv.name) == value.control, get(dv.name)],
                       alternative = alternative, conf.level = 1 - alpha)
  }
  if (type == "prop.test") {
    x.treatment <- the.dat[get(iv.name) == value.treatment,
                           sum(get(dv.name), na.rm = TRUE)]
    x.control <- the.dat[get(iv.name) == value.control,
                         sum(get(dv.name), na.rm = TRUE)]
    n.treatment <- the.dat[get(iv.name) == value.treatment &
                             !is.na(get(dv.name)), .N]
    n.control <- the.dat[get(iv.name) == value.control &
                           !is.na(get(dv.name)), .N]
    the.test <- prop.test(x = c(x.treatment, x.control),
                          n = c(n.treatment, n.control),
                          alternative = alternative, conf.level = 1 - alpha)
  }
  
  the.effect3 <- the.test$estimate[1] - the.test$estimate[2]
  p <- the.test$p.value
  result <- data.table(effect3 = the.effect3, p = p)
  
  return(result)
}

#Testing the Function by calculating effect and p-value
analyze.experiment(the.dat = adoption.dat3, type = "t.test", iv.name = "Group",
                   dv.name = "Adoption", value.treatment = "Treatment", value.control = "Control",
                   alternative = "greater", alpha = 0.05)
##    effect3          p
##      <num>      <num>
## 1:   0.125 0.06936406
#Repetition
B <- 1000
n <- 128
RNGversion(vstr = "3.6")
set.seed(1031)
Experiment <- 1:B
Group <- c(rep.int(x = "Treatment", times = n/2), rep.int(x = "Control", times = n/2))

sim.dat3 <- as.data.table(expand.grid(Experiment = Experiment, Group = Group))
setorderv(x = sim.dat3, cols = c("Experiment", "Group"), order = c(1,1))
sim.dat3[Group == "Control", Adoption := rbinom(n = .N, size = 1, prob = control)]
sim.dat3[Group == "Treatment", Adoption := rbinom(n = .N, size = 1, prob = treatment3)]
dim(sim.dat3)
## [1] 128000      3
Analysis
#Analyzing All of the Simulated Experiments
exp.results3 <- sim.dat3[, analyze.experiment(the.dat = .SD,
                                             type = "t.test", iv.name = "Group", dv.name = "Adoption",
                                             value.treatment = "Treatment", value.control = "Control",
                                             alternative = "greater", alpha = 0.05), keyby = "Experiment"]


#Further Analyses
##False positive rate (Type I error rate) of this simulated experiment
false_positive_rate2 <- exp.results3[, mean(p < 0.05)] * 100
false_positive_rate2
## [1] 5
true_negative_rate2 <- exp.results3[, mean(p >= 0.05)] * 100
true_negative_rate2
## [1] 95
##Mean Effect in Simulated Data
mean.effect3 <- exp.results3[, mean(effect3)]
mean.effect3
## [1] -4.6875e-05
##95% confidence interval for the effect
alpha = 0.05
ci3 <- exp.results3[, quantile(x = effect3, probs = c(alpha/2, 1 - alpha/2))]
ci3
##       2.5%      97.5% 
## -0.1566406  0.1718750

Scenario 2: Social Media Exposure - Expected Effect

Authors (Names and Percentages): Bihan Wang 50%, Jiarong Guo 50%

Simulation

Under the expected effect scenario, increased social media exposure was hypothesized to raise adoption rates from 60% to 80%, a 20 percentage point increase. Data simulated under this assumption showed a substantially higher adoption rate in the high-exposure group, and the design detected the effect in most repetitions. If this effect holds in practice, shelters could leverage targeted, high-quality posts to enhance visibility and emotional engagement with potential adopters.

#Social Exposure expected effect
##Simulation 
n <- 128
library(data.table)
library(DT)
set.seed(1031)
control <- 0.6
treatment4 <- 0.8

adoption.dat4 <- data.table(Group = c(rep.int(x = "Treatment", times = n/2), rep.int(x = "Control", times = n/2)))

adoption.dat4[Group == "Control", Adoption := rbinom(n = .N, size = 1, prob = control)]
adoption.dat4[Group == "Treatment", Adoption := rbinom(n = .N, size = 1, prob = treatment4)]

analyze.experiment <- function(the.dat, type = "t.test",
                               iv.name, dv.name, value.treatment, value.control,
                               alternative = "two.sided", alpha = 0.05) {
  # Compares the adoption outcome between the treatment and control groups
  # using either a two-sample t-test or a two-sample proportion test.
  require(data.table)
  setDT(the.dat)
  
  if (type == "t.test") {
    the.test <- t.test(x = the.dat[get(iv.name) == value.treatment, get(dv.name)],
                       y = the.dat[get(iv.name) == value.control, get(dv.name)],
                       alternative = alternative, conf.level = 1 - alpha)
  }
  if (type == "prop.test") {
    x.treatment <- the.dat[get(iv.name) == value.treatment,
                           sum(get(dv.name), na.rm = TRUE)]
    x.control <- the.dat[get(iv.name) == value.control,
                         sum(get(dv.name), na.rm = TRUE)]
    n.treatment <- the.dat[get(iv.name) == value.treatment &
                             !is.na(get(dv.name)), .N]
    n.control <- the.dat[get(iv.name) == value.control &
                           !is.na(get(dv.name)), .N]
    the.test <- prop.test(x = c(x.treatment, x.control),
                          n = c(n.treatment, n.control),
                          alternative = alternative, conf.level = 1 - alpha)
  }
  
  the.effect4 <- the.test$estimate[1] - the.test$estimate[2]
  p <- the.test$p.value
  result <- data.table(effect4 = the.effect4, p = p)
  
  return(result)
}

#Testing the Function by calculating effect and p-value
analyze.experiment(the.dat = adoption.dat4, type = "t.test", iv.name = "Group",
                   dv.name = "Adoption", value.treatment = "Treatment", value.control = "Control",
                   alternative = "greater", alpha = 0.05)
##     effect4            p
##       <num>        <num>
## 1: 0.296875 4.878366e-05
#Repetition
B <- 1000
n <- 128
RNGversion(vstr = "3.6")
set.seed(1031)
Experiment <- 1:B
Group <- c(rep.int(x = "Treatment", times = n/2), rep.int(x = "Control", times = n/2))

sim.dat4 <- as.data.table(expand.grid(Experiment = Experiment, Group = Group))
setorderv(x = sim.dat4, cols = c("Experiment", "Group"), order = c(1,1))
sim.dat4[Group == "Control", Adoption := rbinom(n = .N, size = 1, prob = control)]
sim.dat4[Group == "Treatment", Adoption := rbinom(n = .N, size = 1, prob = treatment4)]
dim(sim.dat4)
## [1] 128000      3
Analysis
#Analyzing All of the Simulated Experiments
exp.results4 <- sim.dat4[, analyze.experiment(the.dat = .SD,
                                             type = "t.test", iv.name = "Group", dv.name = "Adoption",
                                             value.treatment = "Treatment", value.control = "Control",
                                             alternative = "greater", alpha = 0.05), keyby = "Experiment"]


#Further Analyses
##the power of this simulated experiment
true_positive_rate2 <- exp.results4[, mean(p < 0.05)] * 100
true_positive_rate2
## [1] 79.9
false_negative_rate2 <- exp.results4[, mean(p >= 0.05)] * 100
false_negative_rate2
## [1] 20.1
##Mean Effect in Simulated Data
mean.effect4 <- exp.results4[, mean(effect4)]
mean.effect4
## [1] 0.1994688
##95% confidence interval for the effect
alpha = 0.05
ci4 <- exp.results4[, quantile(x = effect4, probs = c(alpha/2, 1 - alpha/2))]
ci4
##     2.5%    97.5% 
## 0.046875 0.359375

Research Question 3:

Scenario 1: Vaccination Status - No Effect

Authors (Names and Percentages): Bihan Wang 50%, Jiarong Guo 50%

Simulation

In the absence of an effect from vaccination status, simulated adoption rates remained consistent at 60% for both vaccinated and non-vaccinated groups. The analysis showed no significant differences in adoption rates, indicating that if vaccination status does not influence adoptions, shelters might need to focus on better communicating the health and safety benefits of vaccinated animals to potential adopters or look for other factors affecting adoption decisions.

#Vaccine no effect
##Simulation 
n <- 226
library(data.table)
library(DT)
set.seed(1031)
control <- 0.6
treatment5 <- 0.6

adoption.dat5 <- data.table(Group = c(rep.int(x = "Treatment", times = n/2), rep.int(x = "Control", times = n/2)))

adoption.dat5[Group == "Control", Adoption := rbinom(n = .N, size = 1, prob = control)]
adoption.dat5[Group == "Treatment", Adoption := rbinom(n = .N, size = 1, prob = treatment5)]

analyze.experiment <- function(the.dat, type = "t.test",
                               iv.name, dv.name, value.treatment, value.control,
                               alternative = "two.sided", alpha = 0.05) {
  # Compares the adoption outcome between the treatment and control groups
  # using either a two-sample t-test or a two-sample proportion test.
  require(data.table)
  setDT(the.dat)
  
  if (type == "t.test") {
    the.test <- t.test(x = the.dat[get(iv.name) == value.treatment, get(dv.name)],
                       y = the.dat[get(iv.name) == value.control, get(dv.name)],
                       alternative = alternative, conf.level = 1 - alpha)
  }
  if (type == "prop.test") {
    x.treatment <- the.dat[get(iv.name) == value.treatment,
                           sum(get(dv.name), na.rm = TRUE)]
    x.control <- the.dat[get(iv.name) == value.control,
                         sum(get(dv.name), na.rm = TRUE)]
    n.treatment <- the.dat[get(iv.name) == value.treatment &
                             !is.na(get(dv.name)), .N]
    n.control <- the.dat[get(iv.name) == value.control &
                           !is.na(get(dv.name)), .N]
    the.test <- prop.test(x = c(x.treatment, x.control),
                          n = c(n.treatment, n.control),
                          alternative = alternative, conf.level = 1 - alpha)
  }
  
  the.effect5 <- the.test$estimate[1] - the.test$estimate[2]
  p <- the.test$p.value
  result <- data.table(effect5 = the.effect5, p = p)
  
  return(result)
}

#Testing the Function by calculating effect and p-value
analyze.experiment(the.dat = adoption.dat5, type = "t.test", iv.name = "Group",
                   dv.name = "Adoption", value.treatment = "Treatment", value.control = "Control",
                   alternative = "greater", alpha = 0.05)
##         effect5         p
##           <num>     <num>
## 1: -0.008849558 0.5555024
#Repetition
B <- 1000
n <- 226
RNGversion(vstr = "3.6")
set.seed(1031)
Experiment <- 1:B
Group <- c(rep.int(x = "Treatment", times = n/2), rep.int(x = "Control", times = n/2))

sim.dat5 <- as.data.table(expand.grid(Experiment = Experiment, Group = Group))
setorderv(x = sim.dat5, cols = c("Experiment", "Group"), order = c(1,1))
sim.dat5[Group == "Control", Adoption := rbinom(n = .N, size = 1, prob = control)]
sim.dat5[Group == "Treatment", Adoption := rbinom(n = .N, size = 1, prob = treatment5)]
dim(sim.dat5)
## [1] 226000      3
Analysis
#Analyzing All of the Simulated Experiments
exp.results5 <- sim.dat5[, analyze.experiment(the.dat = .SD,
                                              type = "t.test", iv.name = "Group", dv.name = "Adoption",
                                              value.treatment = "Treatment", value.control = "Control",
                                              alternative = "greater", alpha = 0.05), keyby = "Experiment"]

#Further Analyses
##False positive rate (Type I error rate) of this simulated experiment
false_positive_rate3 <- exp.results5[, mean(p < 0.05)] * 100
false_positive_rate3
## [1] 6.3
true_negative_rate3 <- exp.results5[, mean(p >= 0.05)] * 100
true_negative_rate3
## [1] 93.7
##Mean Effect in Simulated Data
mean.effect5 <- exp.results5[, mean(effect5)]
mean.effect5
## [1] -0.0007079646
##95% confidence interval for the effect
alpha = 0.05
ci5 <- exp.results5[, quantile(x = effect5, probs = c(alpha/2, 1 - alpha/2))]
ci5
##       2.5%      97.5% 
## -0.1238938  0.1327434

Scenario 2: Vaccination Status - Expected Effect

Authors (Names and Percentages): Bihan Wang 50%, Jiarong Guo 50%

Simulation

The expected effect scenario modeled a 15 percentage point increase in adoption rates for fully vaccinated animals, from 60% to 75%. Data simulated under this assumption showed a clear improvement for the vaccinated group, and the design detected the effect in most repetitions. If confirmed experimentally, pre-adoption vaccination could raise adoption rates by addressing adopter concerns about the health and immediate medical costs of shelter animals, and shelters could use such findings to advocate for policies that prioritize vaccinating all animals before adoption.

#Vaccination expected effect
##Simulation 
n <- 226
library(data.table)
library(DT)
set.seed(1031)
control <- 0.6
treatment6 <- 0.75

adoption.dat6 <- data.table(Group = c(rep.int(x = "Treatment", times = n/2), rep.int(x = "Control", times = n/2)))

adoption.dat6[Group == "Control", Adoption := rbinom(n = .N, size = 1, prob = control)]
adoption.dat6[Group == "Treatment", Adoption := rbinom(n = .N, size = 1, prob = treatment6)]

analyze.experiment <- function(the.dat, type = "t.test",
                               iv.name, dv.name, value.treatment, value.control,
                               alternative = "two.sided", alpha = 0.05) {
  # Compares the adoption outcome between the treatment and control groups
  # using either a two-sample t-test or a two-sample proportion test.
  require(data.table)
  setDT(the.dat)
  
  if (type == "t.test") {
    the.test <- t.test(x = the.dat[get(iv.name) == value.treatment, get(dv.name)],
                       y = the.dat[get(iv.name) == value.control, get(dv.name)],
                       alternative = alternative, conf.level = 1 - alpha)
  }
  if (type == "prop.test") {
    x.treatment <- the.dat[get(iv.name) == value.treatment,
                           sum(get(dv.name), na.rm = TRUE)]
    x.control <- the.dat[get(iv.name) == value.control,
                         sum(get(dv.name), na.rm = TRUE)]
    n.treatment <- the.dat[get(iv.name) == value.treatment &
                             !is.na(get(dv.name)), .N]
    n.control <- the.dat[get(iv.name) == value.control &
                           !is.na(get(dv.name)), .N]
    the.test <- prop.test(x = c(x.treatment, x.control),
                          n = c(n.treatment, n.control),
                          alternative = alternative, conf.level = 1 - alpha)
  }
  
  the.effect6 <- the.test$estimate[1] - the.test$estimate[2]
  p <- the.test$p.value
  result <- data.table(effect6 = the.effect6, p = p)
  
  return(result)
}

#Testing the Function by calculating effect and p-value
analyze.experiment(the.dat = adoption.dat6, type = "t.test", iv.name = "Group",
                   dv.name = "Adoption", value.treatment = "Treatment", value.control = "Control",
                   alternative = "greater", alpha = 0.05)
##      effect6         p
##        <num>     <num>
## 1: 0.1238938 0.0185126
#Repetition
B <- 1000
n <- 226
RNGversion(vstr = "3.6")
set.seed(1031)
Experiment <- 1:B
Group <- c(rep.int(x = "Treatment", times = n/2), rep.int(x = "Control", times = n/2))

sim.dat6 <- as.data.table(expand.grid(Experiment = Experiment, Group = Group))
setorderv(x = sim.dat6, cols = c("Experiment", "Group"), order = c(1,1))
sim.dat6[Group == "Control", Adoption := rbinom(n = .N, size = 1, prob = control)]
sim.dat6[Group == "Treatment", Adoption := rbinom(n = .N, size = 1, prob = treatment6)]
dim(sim.dat6)
## [1] 226000      3
Analysis
#Analyzing All of the Simulated Experiments
exp.results6 <- sim.dat6[, analyze.experiment(the.dat = .SD,
                                              type = "t.test", iv.name = "Group", dv.name = "Adoption",
                                              value.treatment = "Treatment", value.control = "Control",
                                              alternative = "greater", alpha = 0.05), keyby = "Experiment"]

#Further Analyses
##the power of this simulated experiment
true_positive_rate3 <- exp.results6[, mean(p < 0.05)] * 100
true_positive_rate3
## [1] 77.7
false_negative_rate3 <- exp.results6[, mean(p >= 0.05)] * 100
false_negative_rate3
## [1] 22.3
##Mean Effect in Simulated Data
mean.effect6 <- exp.results6[, mean(effect6)]
mean.effect6
## [1] 0.1503097
##95% confidence interval for the effect
alpha = 0.05
ci6 <- exp.results6[, quantile(x = effect6, probs = c(alpha/2, 1 - alpha/2))]
ci6
##       2.5%      97.5% 
## 0.02654867 0.27433628
Results
# kableExtra is used below for table styling; install it once if needed:
# install.packages("kableExtra")
# Create the combined summary table for Question 1, Question 2, and Question 3
results_table <- data.frame(
  Research_Question = c("Question 1", "Question 1", "Question 2", "Question 2", "Question 3", "Question 3"),
  Scenario = c(
    "No Effect", "Effect: 10 percentage point increase", 
    "No Effect", "Effect: 20 percentage point increase",
    "No Effect", "Effect: 15 percentage point increase"
  ),
  Mean_Effect = c(
    sprintf("%.3f", mean.effect1),
    sprintf("%.3f", mean.effect2),
    sprintf("%.3f", mean.effect3),
    sprintf("%.3f", mean.effect4),
    sprintf("%.3f", mean.effect5),
    sprintf("%.3f", mean.effect6)
  ),
  CI = c(
    sprintf("(%.3f, %.3f)", ci1[1], ci1[2]),
    sprintf("(%.3f, %.3f)", ci2[1], ci2[2]),
    sprintf("(%.3f, %.3f)", ci3[1], ci3[2]),
    sprintf("(%.3f, %.3f)", ci4[1], ci4[2]),
    sprintf("(%.3f, %.3f)", ci5[1], ci5[2]),
    sprintf("(%.3f, %.3f)", ci6[1], ci6[2])
  ),
  False_Positives = c(
    sprintf("%.1f%%", false_positive_rate1),
    "N/A",
    sprintf("%.1f%%", false_positive_rate2),
    "N/A",
    sprintf("%.1f%%", false_positive_rate3),
    "N/A"
  ),
  True_Negatives = c(
    sprintf("%.1f%%", true_negative_rate1),
    "N/A",
    sprintf("%.1f%%", true_negative_rate2),
    "N/A",
    sprintf("%.1f%%", true_negative_rate3),
    "N/A"
  ),
  False_Negatives = c(
    "N/A",
    sprintf("%.1f%%", false_negative_rate1),
    "N/A",
    sprintf("%.1f%%", false_negative_rate2),
    "N/A",
    sprintf("%.1f%%", false_negative_rate3)
  ),
  True_Positives = c(
    "N/A",
    sprintf("%.1f%%", true_positive_rate1),
    "N/A",
    sprintf("%.1f%%", true_positive_rate2),
    "N/A",
    sprintf("%.1f%%", true_positive_rate3)
  )
)

# Display the table using `knitr::kable` for formatting
library(knitr)
library(kableExtra)

kable(results_table, align = "c", col.names = c(
  "Research Question", "Scenario", "Mean Effect in Simulated Data", 
  "95% Confidence Interval", "Percentage of False Positives", 
  "Percentage of True Negatives", "Percentage of False Negatives", "Percentage of True Positives"
)) %>%
  kable_styling(bootstrap_options = c("striped", "hover"), full_width = FALSE)
| Research Question | Scenario | Mean Effect in Simulated Data | 95% Confidence Interval | Percentage of False Positives | Percentage of True Negatives | Percentage of False Negatives | Percentage of True Positives |
|---|---|---|---|---|---|---|---|
| Question 1 | No Effect | -0.003 | (-0.079, 0.083) | 4.1% | 95.9% | N/A | N/A |
| Question 1 | Effect: 10 percentage point increase | 0.099 | (0.016, 0.178) | N/A | N/A | 23.6% | 76.4% |
| Question 2 | No Effect | -0.000 | (-0.157, 0.172) | 5.0% | 95.0% | N/A | N/A |
| Question 2 | Effect: 20 percentage point increase | 0.199 | (0.047, 0.359) | N/A | N/A | 20.1% | 79.9% |
| Question 3 | No Effect | -0.001 | (-0.124, 0.133) | 6.3% | 93.7% | N/A | N/A |
| Question 3 | Effect: 15 percentage point increase | 0.150 | (0.027, 0.274) | N/A | N/A | 22.3% | 77.7% |
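The simulated power figures can be cross-checked analytically with base R's power.prop.test (a normal-approximation sketch, not part of the original analysis; it ignores the t-test's exact behavior, so small discrepancies are expected):

```r
# One-sided power at the per-group sample sizes actually simulated
# (n here is per group: 506/2, 128/2, and 226/2).
power.prop.test(n = 253, p1 = 0.6, p2 = 0.7,  alternative = "one.sided")$power  # ~0.76 vs 76.4% simulated
power.prop.test(n = 64,  p1 = 0.6, p2 = 0.8,  alternative = "one.sided")$power  # ~0.80 vs 79.9% simulated
power.prop.test(n = 113, p1 = 0.6, p2 = 0.75, alternative = "one.sided")$power  # ~0.78 vs 77.7% simulated

# Per-group sample size needed to reach 90% power for the fee-reduction design:
power.prop.test(p1 = 0.6, p2 = 0.7, power = 0.9, alternative = "one.sided")$n  # ~388 per group
```

These analytic values agree with the simulation and indicate the current designs deliver roughly 76-80% power rather than the 90% target; the fee-reduction question, for example, would need about 388 dogs per group to reach 90%.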
Verification of the Results
Null Scenarios:

For the scenarios where the treatments (fee reduction, social media exposure, or vaccination) were simulated to have no effect, the false positive rates across the 1,000 repetitions stayed close to the 5% significance level: 4.1%, 5.0%, and 6.3%, respectively. The mean simulated effects were essentially zero, and the 95% intervals for the effect spanned zero. This is the behavior expected under a true null: roughly 5% of experiments reject by chance alone, and no systematic difference in adoption rates appears.

Effect Scenarios:

In the scenarios where the treatments were simulated to have their expected effects, the designs detected the effect (p < 0.05) in 76.4%, 79.9%, and 77.7% of the 1,000 repetitions for fee reduction, social media exposure, and vaccination, respectively. The mean simulated effects (0.099, 0.199, and 0.150) closely matched the assumed increases of 10, 20, and 15 percentage points, and the 95% intervals for the effect excluded zero. A single experiment can still miss, however: in the illustrative fee-reduction dataset above, the one-sided p-value was 0.053, just above the 0.05 threshold, which is consistent with the roughly 20-24% false negative rates.

Conclusion

In conclusion, this project addresses the challenges faced by ASPCA shelters in New York City by designing and power-testing three strategies to increase the adoption rate and relieve overcrowding. Each simulated experiment was analyzed with a two-sample t-test on the binary adoption outcome at a 5% significance level. Under the assumed effects, a 10 percentage point increase from reduced fees, 20 from enhanced social media campaigns, and 15 from full vaccination, the proposed designs detected the effect in roughly 76% to 80% of 1,000 repetitions, somewhat below the 90% power target stated in the proposal, while keeping false positive rates near 5% under the null. If the field experiments confirm these effects, implementing the corresponding evidence-based strategies could shorten dogs' waiting time in shelters and increase adoptions. The proposal may also raise public awareness of animal adoption and thereby benefit the broader community.

References

Authors (Names and Percentages): Ruixin Cao 20%, Jiarong Guo 20%, Dae Hwan Kim 20%, Bihan Wang 20%, Miaomiao Xie 20%

  1. Shelter Animals Count. (2024, February 15). 2023 statistics. https://www.shelteranimalscount.org/stats

  2. Zadeh, A., Combs, K., Burkey, B., Dop, J., Duffy, K., & Nosoudi, N. (2022). Pet analytics: Predicting adoption speed of pets from their online profiles. Expert Systems with Applications, 204, 117596.

  3. Protopopova, A., & Gunter, L. (2017). Adoption and relinquishment interventions at the animal shelter: A review. Animal Welfare, 26(1), 35–48. https://doi.org/10.7120/09627286.26.1.035

  4. Videira, M., Nogueira, M., & Gomes, S. (2023). "To adopt or not to adopt, that is the question": Are social marketing strategies effective to stimulate animal adoption? Springer Proceedings in Business and Economics, 35–53. https://doi.org/10.1007/978-3-031-29020-6_3

  5. ASPCA. (2024). Shelter intake and surrender. https://www.aspca.org/helping-people-pets/shelter-intake-and-surrender

  6. ASPCA. (2023). Adoption promotions. https://www.aspcapro.org/resource/fee-waived-animal-adoptions

  7. Best Friends Animal Society. (2022). Making the leap to fee-waived adoptions. https://network.bestfriends.org/proven-strategies/program-spotlights/fee-waived-adoptions-spotlight

  8. Best Friends Animal Society. (2022). Free to a good home: How fee-waived adoptions are saving lives. https://bestfriends.org/stories/best-friends-magazine/free-good-home-how-fee-waived-adoptions-are-saving-lives

  9. ASPCA. (2018). New research points to social media as important tool for animal shelters and rescues. https://www.aspca.org/about-us/press-releases/new-research-points-social-media-important-tool-animal-shelters-and-rescues

  10. AAHA. (2022). Vaccination of shelter dogs and puppies. https://www.aaha.org/resources/2022-aaha-canine-vaccination-guidelines/vaccination-of-shelter-dogs-and-puppies/