title: Five ways to get a grip on the shortcomings of logic models in program evaluation
authors: Onyura, Betty; Mullins, Hollie; Hamza, Deena M
date: 2021-12-29
journal: Can Med Educ J
DOI: 10.36834/cmej.71966

Logic models are perhaps the most widely used tools in program evaluation work. They provide reasonably straightforward, visual illustrations of plausible links between program activities and outcomes. Consequently, they are employed frequently in stakeholder engagement, communication, and evaluation project planning. However, their relative simplicity comes with multiple drawbacks that can compromise the integrity of evaluation studies. In this Black Ice article, we outline key considerations and provide practical strategies that can help those engaged in evaluation work to identify and mitigate some limitations of logic models.

For decades, logic models have been a quintessential tool for program evaluation. They provide relatively simple, diagrammatic representations of how and why specified interventions function.1,2 Often, they illustrate the strategic intentions behind interventions, highlighting the perceived linkages between planned activities and anticipated outcomes.3 Logic models are frequently used to guide communication with program stakeholders1,4 as well as to identify target areas for monitoring and evaluation.5 However, given several limitations, some question whether they are an appropriate tool for current evaluation initiatives.6,7

Black Ice

As commonly used, logic models focus on succinct, visual synthesis of program components and depict program functioning linearly. Correspondingly, they can lead stakeholders to:

1) Ignore the complexity that underlies social and educational interventions.6 This can manifest, for example, through inattention to variable contextual factors (e.g., resourcing, political pressures) as well as to cultural and value differences that may compound to influence intervention outcomes.

2) Assume that a given program is a legitimate solution to an identified problem; this can constrain thinking about external influences on the problem or limit exploration of adaptations to enhance contextual compatibility.2,8 For example, a serious oversight might involve an intervention to enhance youth self-esteem that does not acknowledge or attend to underlying systemic oppression of racialized groups.

3) Neglect to identify the undesirable outcomes that interventions can inadvertently precipitate.9 Even well-intended interventions can result in problematic effects.10 Though unintended, these effects are not necessarily unpredictable.11,12 Rather, adverse outcomes may be anticipated by purposeful attention to research evidence or experiential expertise.11,13 Examples of such effects include evidence that cultural competency training can in fact trigger expressions of racism toward cultural minority groups14 or that global health educational innovations can increase trainees' willingness to perform skills outside their scope of training.15

Overall, these limitations make some evaluators hesitant to use logic models.6 There are, however, useful strategies that can minimize these limitations. Here, we draw attention to five ways to get a grip on these drawbacks of logic models while reaping the benefits they offer.
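To ground the strategies that follow, consider a minimal sketch, in Python, of the structure a conventional logic model encodes: linear chains from inputs through activities and outputs to outcomes. The data structure and the resident-wellness example are ours and purely illustrative; the article prescribes no particular notation.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """A conventional logic model: linear chains linking
    inputs -> activities -> outputs -> outcomes."""
    name: str
    inputs: list[str] = field(default_factory=list)
    activities: list[str] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)
    outcomes: list[str] = field(default_factory=list)
    # Each link asserts "this activity is expected to produce this outcome."
    links: list[tuple[str, str]] = field(default_factory=list)

# Hypothetical example: a resident wellness curriculum.
wellness = LogicModel(
    name="Resident wellness curriculum",
    inputs=["faculty time", "protected curriculum hours"],
    activities=["monthly wellness workshops", "peer support groups"],
    outputs=["workshops delivered", "residents reached"],
    outcomes=["improved self-reported well-being"],
    links=[("monthly wellness workshops", "improved self-reported well-being"),
           ("peer support groups", "improved self-reported well-being")],
)
print(f"{wellness.name}: {len(wellness.links)} assumed activity-outcome links")
```

Note that nothing in this structure captures context, mechanism, or potential harm; the strategies below address exactly those gaps.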
Proactive work to identify how interventions may lead to undesirable outcomes or harm during logic modelling has been described as negative or dark logic modelling.9 Supplementing a conventional logic model with one that outlines the potential downsides of an innovation (e.g., loss of autonomy and identity; challenges to one's self-efficacy) has several advantages. It allows for harm-mitigation planning, while informing evaluation designs such that they proactively monitor undesirable and adverse outcomes. Further, it promotes sharing of balanced information about both the risks and benefits associated with innovations.

A logic analysis provides a deeper level of scrutiny of select claims that are presented in logic models.16,17 A direct logic analysis is a formative process that examines whether the program design and implementation are consistent with available evidence about the critical conditions required for achieving desired effects.16 A reverse logic analysis is a summative process that contrasts program characteristics with alternative models of realizing similar outcomes. Evidence for logic analyses can be found by synthesizing research evidence or soliciting experiential expertise.18 Whereas both forms of logic analysis offer valuable checks on the credibility and trustworthiness of the claims posited in logic models,19 they require substantial effort.16 Thus, we recommend judicious selection of specific sub-sets of the logic model for logic analysis, such as areas of recurrent implementation difficulties, key program principles, and/or values.16
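Extending the same hypothetical wellness example, the sketch below shows one way a supplementary dark logic model might be recorded, together with a rudimentary check in the spirit of direct logic analysis: flagging claimed activity-outcome links for which no evidence source has been registered. The structures, pathways, and evidence register here are our own illustrative assumptions, not a published scheme.

```python
from dataclasses import dataclass, field

@dataclass
class DarkLogicModel:
    """Supplementary 'dark logic' model: anticipated adverse outcomes,
    the pathways thought to produce them, and planned mitigations."""
    adverse_outcomes: list[str] = field(default_factory=list)
    # (activity, adverse outcome, anticipated mechanism)
    harm_pathways: list[tuple[str, str, str]] = field(default_factory=list)
    mitigations: dict[str, str] = field(default_factory=dict)

dark = DarkLogicModel(
    adverse_outcomes=["stigmatization of residents who disclose distress"],
    harm_pathways=[(
        "peer support groups",
        "stigmatization of residents who disclose distress",
        "disclosure within a small cohort may be identifiable",
    )],
    mitigations={
        "stigmatization of residents who disclose distress":
            "external facilitation and explicit confidentiality ground rules",
    },
)

# Rudimentary direct-logic-analysis style check: every claimed
# activity -> outcome link should cite at least one evidence source.
evidence: dict[tuple[str, str], list[str]] = {
    ("monthly wellness workshops", "improved self-reported well-being"):
        ["hypothetical systematic review"],
    ("peer support groups", "improved self-reported well-being"): [],
}
for link, sources in evidence.items():
    if not sources:
        print(f"Unverified claim, candidate for logic analysis: {link}")
```

Targeting only the unverified links mirrors the recommendation above to select judicious sub-sets of the model for the effortful work of logic analysis.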
There have been numerous evolutions in evaluators' approaches to visually illustrating the 'hows and whys' of intervention functioning. Thus, traditional logic models can be meaningfully complemented with more sophisticated approaches. For example, nested logic models incorporate multiple logic models that are linked through shared activities or outcomes and are thus 'nested' into one another.20 They compartmentalize the structural complexity of multi-program systems: the highest-level model of a nested logic model appears concise, but is constituted by multiple, aligned 'sub-logic models.'2,20

Action models are another contemporary option, more expansive than traditional logic models. In addition to articulating pathways via which a program may influence specified outcomes, action models outline the prescriptive assumptions about how those pathways may be activated.21 Specifically, they include details about implementation protocols, required characteristics of target participants and implementing organizations, essential capacities of front-line implementers, as well as critical norms or resources in the program's ecological context (i.e., the culture, norms, and resources of the greater system, such as an academic institution or health services unit).21 Action models can be optimal tools for guiding evaluations focused on implementation processes.

Recent scholarship outlines a typology of logic models that includes a relatively advanced Type 4 logic model. Type 4 logic models both 1) outline the change mechanisms of interventions (in lieu of listing discrete program activities and resources), and 2) illustrate the contextual factors upon which target outcomes depend.6 The result is a more flexible type of logic model that allows for variation in activities and outcomes across contexts.6

The rationale for using logic models should inform decisions about how to employ the technique.6 For example, if the primary goal is to engage stakeholders and build consensus about program activities, the simpler, conventional model may be optimal. The aim here is to capture stakeholder attention by being both aesthetically appealing and visually efficient1 (i.e., maximizing accurate interpretation while minimizing cognitive effort22). Where evaluation of implementation is being prioritized, action models may be optimal, owing to their expansive detail on the prescriptions for program functioning. More complex models (e.g., Type 4 models) are not ideal for stakeholder engagement or consensus building.6 Instead, they are instrumental for guiding reflection about how to improve the intervention's use in its organizational context.6 Notably, they can be a valuable toolkit for evaluation leads who wish to improve or optimize program monitoring and evaluation designs.

Typical logic models often represent perceived rather than verified relationships among program activities and outcomes. Further, illustrated relationships may change due to evolving contexts and outcomes as programs are implemented.7 As such, logic models should be treated as dynamic rather than static, with an expectation that they will evolve over time.7 We recommend cyclical efforts to revisit and revise logic models as contexts change (e.g., the COVID-19 pandemic) and as evaluation data become available.
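As a final illustrative sketch under the same hypothetical example, one simple way to treat a logic model as dynamic is to keep dated revisions alongside the rationale for each change; the revision structure below is our own assumption about how such bookkeeping might be done, not a prescribed method.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRevision:
    """One dated version of a logic model, kept so that changes in
    assumed activity-outcome links can be traced over time."""
    revised_on: date
    rationale: str                # e.g., new evaluation data, context shift
    links: list[tuple[str, str]]  # (activity, outcome) pairs assumed at this time

revisions: list[ModelRevision] = [
    ModelRevision(
        revised_on=date(2020, 1, 15),
        rationale="initial stakeholder consensus",
        links=[("in-person workshops", "improved well-being")],
    ),
    ModelRevision(
        revised_on=date(2020, 9, 1),
        rationale="COVID-19 pandemic forced delivery online",
        links=[("virtual workshops", "improved well-being")],
    ),
]

# The current model is simply the latest revision.
current = max(revisions, key=lambda r: r.revised_on)
print(current.rationale, current.links)
```

Recording the rationale with each revision keeps the model's evolution auditable when monitoring and evaluation designs are updated.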
Logic models offer a pragmatic approach to synthesizing information about interventions that could otherwise be challenging to describe. Clarity about why a logic model is needed is critical to deciding how to go about creating one. The five strategies outlined above can help educators and evaluators get a grip on the limitations of logic models and maximize their utility.

References

1. Enhancing the Effectiveness of Logic Models.
2. Enhancing Program Performance with Logic Models.
3. Developing and optimising the use of logic models in systematic reviews: exploring practice and good practice in the use of programme theory in reviews.
4. Creating and using logic models: four perspectives.
5. Towards a taxonomy of logic models in systematic reviews and health technology assessments: a priori, staged, and iterative approaches.
6. Advancing complexity science in healthcare research: the logic of logic models.
7. Using programme theory to evaluate complicated and complex aspects of interventions.
8. The Logic Model Guidebook: Better Strategies for Great Results.
9. "Dark logic": theorising the harmful consequences of public health interventions.
10. Adverse effects of public health interventions: a conceptual framework.
11. Do we really care about unintended outcomes? An analysis of evaluation theory and practice.
12. When 'unintended effects' reveal hidden intentions: implications of 'mutual benefit' discourses for evaluating development cooperation.
13. Why so many "rigorous" evaluations fail to identify unintended consequences of development programs: how mixed methods can contribute.
14. Cultural minority students' experiences with intercultural competency in medical education.
15. Do no harm: unintended consequences of skill-based training in global health.
16. Logic analysis: testing program theory to better evaluate complex interventions.
17. Defining, illustrating and reflecting on logic analysis with an example from a professional development program.
18. Evaluation: A Systematic Approach.
19. Using logic analysis to evaluate knowledge transfer initiatives: the case of the research collective on the organization of primary care services.
20. Building a framework for the evaluation of knowledge translation for the Canadian Network for Observational Drug Effect Studies.
21. Interfacing theories of program with theories of evaluation for advancing evaluation practice: reductionism, systems thinking, and pragmatic synthesis.
22. Measuring effectiveness of graph visualizations: a cognitive load perspective.

Conflicts of interest: We have no conflicts of interest to declare.
Funding: This project was not funded.