Kylie L. Anglin

PhD Candidate

NAEd/Spencer Dissertation Fellow

EdPolicyWorks at the University of Virginia

I am an evaluation methodologist committed to helping researchers identify effective educational programs and policies.

My research develops data science methods for conducting implementation research in field settings and for improving the causal validity and replicability of impact estimates. In recent research, I developed efficient methods for collecting local policy data from school district websites and for measuring fidelity in standardized educational interventions. Currently, I am testing the validity of specification tests in repeated measures designs and training a classifier to identify features of quality collaboration in teacher coaching interventions.

I became interested in evaluation and implementation at my first job out of college, teaching middle schoolers in the Mississippi Delta. As a teacher, I wanted to know which programs would work for my students and my circumstances, not for the average student in average circumstances. Today, I pursue these questions by helping researchers account for variation in implementation, ensuring that average outcomes do not hide inequalities.

Interests

  • causal inference
  • data science
  • replication
  • implementation and effect heterogeneity
  • natural language processing

Education

  • PhD in Education Policy Evaluation, 2021 (expected)

    University of Virginia

  • Master of Public Policy, 2018

    University of Virginia

  • Post-Baccalaureate in Mathematics, 2015

    Northwestern University

  • BA in Political Science, 2013

    Southwestern University

Publications and Working Papers

October 2020 Wong, V. C., Steiner, P. M., & Anglin, K. L. (2020). Design-Based Approaches to Systematic Conceptual Replication Studies. EdPolicyWorks Working Paper Series No. 74.

Design-Based Approaches to Systematic Conceptual Replication Studies

Recent efforts to promote and support replication assume that there is well-established methodological guidance for designing and implementing these studies. However, no such consensus exists in the methodology literature. This article addresses this gap by describing design-based approaches for planning systematic replication studies. Our general approach is derived from the Causal Replication Framework (CRF), which formalizes the assumptions under which replication success can be expected. The assumptions may be understood broadly as replication design requirements and individual study design requirements. Replication failure occurs when one or more CRF assumptions are violated. In design-based approaches to replication, CRF assumptions are systematically tested to evaluate the replicability of effects, as well as to identify sources of effect variation when replication failure is observed. In direct replication designs, replication failure is evidence of bias or incorrect reporting in individual study estimates, while in conceptual replication designs, replication failure occurs because of effect variation due to differences in treatments, outcomes, settings, and participant characteristics. The paper demonstrates how multiple research designs may be combined in systematic replication studies, as well as how diagnostic measures may be used to assess the extent to which CRF assumptions are met in field settings.
October 2020 Anglin, K. L., & Wong, V. C. (2020). Using Semantic Similarity to Assess Adherence and Replicability of Intervention Delivery. EdPolicyWorks Working Paper Series No. 73. https://curry.virginia.edu/sites/default/files/uploads/epw/73_Semantic_Similarity_to_Assess_Adherence_and_Replicability_revised.pdf

Using Semantic Similarity to Assess Adherence and Replicability of Intervention Delivery

Researchers are rarely satisfied to learn only whether an intervention works; they also want to understand why and under what circumstances interventions produce their intended effects. These questions have led to increasing calls for implementation research to be included in evaluations. When an intervention protocol is highly standardized and delivered through verbal interactions with participants, a set of natural language processing techniques termed semantic similarity can be used to provide quantitative measures of how closely intervention sessions adhere to a standardized protocol, as well as how consistently the protocol is replicated across sessions. Given the intense methodological, budgetary, and logistical challenges of conducting implementation research, semantic similarity approaches have the benefit of being low-cost, scalable, and context agnostic. In this paper, we demonstrate the application of semantic similarity approaches in an experiment and discuss strengths and limitations, as well as the most appropriate contexts for applying this method.
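As a rough illustration of the semantic similarity idea (a minimal sketch, not the paper's actual pipeline), one could score each session transcript against the scripted protocol using TF-IDF vectors and cosine similarity. The protocol text and transcripts below are hypothetical placeholders.

```python
# Illustrative sketch: adherence scoring via TF-IDF cosine similarity.
# The protocol text and session transcripts are made-up placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

protocol = (
    "Welcome the student, review last week's goal, model the new reading "
    "strategy, and guide the student through two practice passages."
)
session_transcripts = [
    "Let's look back at the goal we set last week, then I'll model the new strategy.",
    "We skipped the review today and mostly talked about the weekend.",
]

# Fit one vocabulary over the protocol and all transcripts so every text
# is represented in the same vector space.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([protocol] + session_transcripts)

# Adherence score for each session = cosine similarity to the protocol text.
adherence = cosine_similarity(matrix[0], matrix[1:]).flatten()
for score, text in zip(adherence, session_transcripts):
    print(f"{score:.2f}  {text[:60]}")
```

Sentence or document embeddings could be swapped in for the TF-IDF vectors without changing the rest of the sketch; the cosine score plays the same role either way.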
December 2019 Anglin, K. L. (2019). Gather-Narrow-Extract: A Framework for Studying Local Policy Variation Using Web-Scraping and Natural Language Processing. Journal of Research on Educational Effectiveness, 12(4), 685–706. https://doi.org/10.1080/19345747.2019.1654576

Gather-Narrow-Extract: A Framework for Studying Local Policy Variation Using Web-Scraping and Natural Language Processing

Education researchers have traditionally faced severe data limitations in studying local policy variation; administrative data sets capture only a fraction of districts’ policy decisions, and it can be expensive to collect more nuanced implementation data from teachers and leaders. Natural language processing and web-scraping techniques can help address these challenges by assisting researchers in locating and processing policy documents posted online. School district policies and practices are commonly documented in student and staff manuals, school improvement plans, and meeting minutes that are posted for the public. This article introduces an end-to-end framework for collecting these sorts of policy documents and extracting structured policy data: the researcher gathers all potentially relevant documents from district websites, narrows the text corpus to spans of interest using a text classifier, and then extracts specific policy data using additional natural language processing techniques. Through this framework, a researcher can describe variation in policy implementation at the local level, aggregated across state- or nationwide populations, even as policies evolve over time.
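To make the gather-narrow-extract workflow concrete, here is a minimal, hedged sketch of the three steps in Python. The district URL, keywords, and regular expression are hypothetical, and the framework's narrow step relies on a trained text classifier rather than the simple keyword filter used as a stand-in here.

```python
# Illustrative sketch of gather-narrow-extract; not the paper's code.
import re
import requests
from bs4 import BeautifulSoup

def gather(url: str) -> list[str]:
    """Download a page and split its visible text into paragraphs."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    return [p.get_text(" ", strip=True) for p in soup.find_all("p")]

def narrow(paragraphs: list[str], keywords: tuple[str, ...]) -> list[str]:
    """Keep only spans that look policy-relevant (classifier stand-in)."""
    return [p for p in paragraphs if any(k in p.lower() for k in keywords)]

def extract(spans: list[str]) -> list[str]:
    """Pull structured fields, e.g. required minutes of instruction."""
    pattern = re.compile(r"(\d{2,3})\s+minutes", re.IGNORECASE)
    return [m.group(1) for s in spans for m in pattern.finditer(s)]

if __name__ == "__main__":
    # Hypothetical district handbook URL used only for illustration.
    paragraphs = gather("https://www.example-district.org/student-handbook")
    spans = narrow(paragraphs, keywords=("instructional time", "minutes"))
    print(extract(spans))
```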
January 2019 Anglin, K. L., Krishnamachari, A. & Wong, V. C. (2020). Methodological Approaches for Impact Evaluation in Educational Settings. In Oxford Bibliographies in Education. New York: Oxford University Press.

Methodological Approaches for Impact Evaluation in Educational Settings

Since the start of the War on Poverty in the 1960s, social scientists have developed and refined experimental and quasi-experimental methods for evaluating and understanding the ways in which public policies, programs, and interventions affect people’s lives. The overarching mission of many social scientists is to understand “what works” in education and social policy. These are causal questions about whether an intervention, practice, program, or policy affects some outcome of interest. Although causal questions are not the only relevant questions in program evaluation, they are assumed by many in the fields of public health, economics, social policy, and now education to be the scientific foundation for evidence-based decision making. Fortunately, over the last half-century, two methodological advances have improved the rigor of social science approaches for making causal inferences. The first was acknowledging the primacy of research designs over statistical adjustment procedures. Donald Campbell and colleagues showed how research designs could be used to address many plausible threats to validity. The second methodological advancement was the use of potential outcomes to specify exact causal quantities of interest. This allowed researchers to think systematically about research design assumptions and to develop diagnostic measures for assessing when these assumptions are met. This article reviews important statistical methods for estimating the impact of interventions on outcomes in education settings, particularly programs that are implemented in field, rather than laboratory, settings.
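For readers unfamiliar with the potential outcomes notation mentioned above, the standard quantities look like this (a schematic illustration, not notation drawn from the article itself):

```latex
% Neyman-Rubin potential outcomes: Y_i(1) and Y_i(0) denote unit i's outcome
% with and without treatment; only one of the two is ever observed.
\tau_i = Y_i(1) - Y_i(0)                          % unit-level causal effect
\mathrm{ATE} = \mathbb{E}\left[ Y_i(1) - Y_i(0) \right]  % average treatment effect
```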
October 2018 Wong, V. C., Steiner, P. M. & Anglin, K. L. (2018). What Can Be Learned from Empirical Evaluations of Nonexperimental Methods? Evaluation Review, 42(2), 147–175. https://doi.org/10.1177/0193841X18776870

What Can Be Learned From Empirical Evaluations of Nonexperimental Methods?

Given the widespread use of nonexperimental (NE) methods for assessing program impacts, there is a strong need to know whether NE approaches yield causally valid results in field settings. In within-study comparison (WSC) designs, the researcher compares treatment effects from an NE with those obtained from a randomized experiment that shares the same target population. The goal is to assess whether the stringent assumptions required for NE methods are likely to be met in practice. This essay provides an overview of recent efforts to empirically evaluate NE method performance in field settings. We discuss a brief history of the design, highlighting methodological innovations along the way. We also describe papers that are included in this two-volume special issue on WSC approaches and suggest future areas for consideration in the design, implementation, and analysis of WSCs.
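Schematically, the within-study comparison logic can be written as follows (my shorthand, not notation from the essay): the nonexperimental estimate is benchmarked against the experimental estimate obtained from the same target population.

```latex
% Within-study comparison: estimated bias of the nonexperimental (NE) method
% relative to the randomized experiment (RCT) benchmark on the same population.
\hat{B} = \hat{\tau}_{\mathrm{NE}} - \hat{\tau}_{\mathrm{RCT}}
% \hat{B} near zero (within sampling error) suggests the NE method's
% identifying assumptions were plausibly met in this setting.
```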
September 2018 Steiner, P. M., Wong, V. C., & Anglin, K. L. (2019). A Causal Replication Framework for Designing and Assessing Replication Efforts. Zeitschrift für Psychologie / Journal of Psychology, 227(4), 280–292. https://doi.org/10.1027/2151-2604/a000385

A Causal Replication Framework for Designing and Assessing Replication Efforts

Replication has long been a cornerstone for establishing trustworthy scientific results, but there remains considerable disagreement about what constitutes a replication, how results from these studies should be interpreted, and whether direct replication of results is even possible. This article addresses these concerns by presenting the methodological foundations for a replication science. It provides an introduction to the causal replication framework, which defines “replication” as a research design that tests whether two (or more) studies produce the same causal effect within the limits of sampling error. The framework formalizes the conditions under which replication success can be expected, and allows for the causal interpretation of replication failures. Through two applied examples, the article demonstrates how the causal replication framework may be utilized to plan prospective replication designs, as well as to interpret results from existing replication efforts.
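In shorthand (an illustration of the framework's definition, not notation from the article), replication is a test of whether the studies' causal effects coincide:

```latex
% Causal replication framework: studies 1 and 2 target the same causal estimand,
% so replication success means their effects are equal up to sampling error.
H_0 : \tau_1 = \tau_2
% Under the framework's assumptions, rejecting H_0 (replication failure) can be
% interpreted causally, e.g., as effect variation across treatments, settings, or samples.
```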

Contact

  • kal3nh@virginia.edu

