
rtg

Project page for NSF grant "RTG: Understanding dynamic big data with complex structure"

Tim’s research primarily focuses on increasing the power and external validity of large-scale cluster randomized controlled trials (RCTs). While RCTs are considered the highest-quality evidence available for determining the efficacy of interventions, they often lack the power to detect a treatment effect even with large sample sizes, in part because the growing practice of pre-registering analysis plans requires researchers to commit to an analysis before seeing the outcome data. Furthermore, despite their strong internal validity, it is often difficult to generalize findings from RCTs to the population at large. To address the first issue, Tim is developing a method for generating synthetic clusters of treatment and control observations using only observations from the true control group. This allows the creation of pseudo-experiments that mimic the actual design of the RCT without using any outcome data. By replicating these pseudo-experiments, it is possible to compare candidate methods of covariate adjustment and select the one that provides the greatest power and precision when the true outcome analysis is conducted.
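The pseudo-experiment idea lends itself to simulation. The sketch below is a minimal illustration under assumed simplifications (cluster-level summaries only, a single hypothetical baseline covariate, and a plain regression adjustment as one of two candidate estimators); none of the variable names or modeling choices come from Tim's actual implementation. Because pseudo-treatment is assigned at random within control-only data, the true effect in every pseudo-experiment is zero, so the spread of the estimates across replications compares the precision of the candidate adjustment methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical control-group data: one row per cluster, with a cluster-level
# mean outcome and one baseline covariate (purely illustrative names and values).
n_clusters = 40
baseline = rng.normal(size=n_clusters)
outcome = 0.6 * baseline + rng.normal(scale=0.8, size=n_clusters)

def pseudo_effect(y, z, x, adjust):
    """Estimated 'treatment' effect in a pseudo-experiment: a difference in
    means, optionally with linear covariate adjustment (coefficient on z)."""
    if not adjust:
        return y[z == 1].mean() - y[z == 0].mean()
    X = np.column_stack([np.ones_like(y), z, x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# Replicate pseudo-experiments: randomly assign half of the control clusters to
# a pseudo-treatment arm, mimicking the RCT's assignment mechanism.  The true
# effect is zero by construction, so the spread of the estimates across
# replications measures each candidate adjustment method's precision.
n_reps = 2000
estimates = {"unadjusted": [], "covariate-adjusted": []}
for _ in range(n_reps):
    z = rng.permutation(np.repeat([0, 1], n_clusters // 2))
    estimates["unadjusted"].append(pseudo_effect(outcome, z, baseline, adjust=False))
    estimates["covariate-adjusted"].append(pseudo_effect(outcome, z, baseline, adjust=True))

for name, vals in estimates.items():
    print(f"{name}: SD of pseudo-experiment estimates = {np.std(vals):.3f}")
```

In this toy setup the covariate-adjusted estimator should show a smaller spread because the covariate is predictive of the outcome; in practice the same comparison would be run over the full set of pre-specified candidate adjustments before any real outcome analysis.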

To address the issue of generalization, Tim is working on a novel non-parametric hypothesis test of association that will inform deliberation about whether an impact estimate can be extrapolated directly to a target group. Typically, generalization to a population is only justified when the RCT sample resembles that population. An exception arises when the population average treatment effect is constant, as seen in the figure below. This hypothesis test of association gives researchers a tool to assess whether the treatment effect in the RCT is heterogeneous or constant. In particular, the research builds on Kendall’s tau hypothesis test within a randomization inference framework, which allows estimation of a parameter inside the Kendall’s tau statistic.

[Figure: RCT]
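The exact form of the test is not spelled out here, so the following is only a rough sketch of how a randomization test built around Kendall's tau might look under one strong simplifying assumption: a sharp null of a constant additive effect tau0, with Kendall's tau between treatment assignment and the null-adjusted outcomes as the test statistic. Everything in the code (the statistic, the function name, the toy data) is illustrative rather than a description of the actual method; inverting such a test over a grid of tau0 values is one way a parameter could be estimated within the statistic.

```python
import numpy as np
from scipy.stats import kendalltau

def constant_effect_randomization_test(y, z, tau0, n_reps=2000, seed=0):
    """Randomization p-value for the sharp null that every unit's treatment
    effect equals the constant tau0 (illustrative statistic: Kendall's tau
    between assignment and the null-adjusted outcomes)."""
    rng = np.random.default_rng(seed)
    y0 = y - tau0 * z                    # impute control potential outcomes under the null
    observed, _ = kendalltau(z, y0)      # association left over after removing tau0
    n, n_treated = len(z), int(z.sum())
    hits = 0
    for _ in range(n_reps):
        z_sim = np.zeros(n, dtype=int)   # re-randomize assignment, holding the design fixed
        z_sim[rng.choice(n, size=n_treated, replace=False)] = 1
        stat, _ = kendalltau(z_sim, y0)
        hits += abs(stat) >= abs(observed)
    return hits / n_reps

# Toy example: outcomes with a constant additive effect of 0.5.
rng = np.random.default_rng(1)
n = 100
z = np.zeros(n, dtype=int)
z[rng.choice(n, size=n // 2, replace=False)] = 1
y = rng.normal(size=n) + 0.5 * z

print(constant_effect_randomization_test(y, z, tau0=0.5))  # large p-value: null is compatible
print(constant_effect_randomization_test(y, z, tau0=0.0))  # small p-value: null is rejected
```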