Effectiveness research: stable principles, dynamic methods
J. André Knottnerus, Peter Tugwell
J Clin Epidemiol 2020 (published online 2020-08-04). DOI: 10.1016/j.jclinepi.2020.06.031

About 70 years ago, we saw the groundbreaking publications on the first modern randomized controlled trial (RCT) in medicine and Bradford Hill's fundamental work outlining its methodological principles for evaluating interventions [1,2]. Today, compared with other research designs and applications, the RCT is widely regarded as the most established and crystallized clinical epidemiological methodology. However, while its principles, as outlined by Hill, remain valid and stable, its methods are far from fully developed and never will be. On the contrary, the RCT and RCT-based methods (including reviews and meta-analyses of RCTs) still represent one of the most dynamic fields of methodology development. This is both logical and necessary, given, on the one hand, almost inherent methodological limitations and challenges (from defining the research question to improving practice) that need continuous attention and, on the other hand, constantly emerging methodological and technical opportunities and clinical and societal needs that have implications for effectiveness research.

The dynamics of design development regarding evidence on effectiveness are illustrated in this journal issue, which includes a remarkable number of interesting examples covering various phases of effectiveness research.

Following the determination of the research hypothesis, the study protocol must be developed. This is especially challenging if the characteristics of the intervention to be evaluated, or the outcomes of primary interest, make it difficult or even impossible to realize a blinded comparison. Recognizing this, Jacobsen and Wood developed and piloted a simple standard framework to support the assessment of the risk of contamination in psychological therapy trials [3] at the protocol development stage. They categorized a sample of registered psychological therapy trial protocols as being at low or high risk via three sources of contamination: 1) participants in the control arm, 2) participants in the intervention arm, and 3) therapists in the intervention arm. The authors found that the risk of contamination across all three sources was low for most studies, but 14% of the studies had a potentially high risk of contamination, mostly arising from therapists in the intervention arm. They concluded that the piloted framework could help identify and manage the risk of contamination, and that the risk of contamination was relatively low in the studied trials and could be mitigated through design adjustments.
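To make the idea concrete, here is a minimal sketch, in Python, of how such a three-source risk screen could be encoded at the protocol stage. The data structure, the any-source-high aggregation rule, and the example trial identifiers are our illustrative assumptions, not the authors' published framework.

```python
# Hypothetical sketch of a three-source contamination-risk screen; the
# classes and the aggregation rule are illustrative assumptions, not the
# published framework of Jacobsen and Wood.
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    LOW = "low"
    HIGH = "high"

@dataclass
class ProtocolAssessment:
    trial_id: str
    control_participants: Risk       # source 1: participants in the control arm
    intervention_participants: Risk  # source 2: participants in the intervention arm
    intervention_therapists: Risk    # source 3: therapists in the intervention arm

    def overall(self) -> Risk:
        """Rate the protocol high risk if any single source is high risk."""
        sources = (self.control_participants,
                   self.intervention_participants,
                   self.intervention_therapists)
        return Risk.HIGH if Risk.HIGH in sources else Risk.LOW

# Invented example protocols; in the study, most flagged risk arose from
# therapists in the intervention arm.
assessments = [
    ProtocolAssessment("trial-0001", Risk.LOW, Risk.LOW, Risk.HIGH),
    ProtocolAssessment("trial-0002", Risk.LOW, Risk.LOW, Risk.LOW),
]
high = sum(a.overall() is Risk.HIGH for a in assessments)
print(f"{high}/{len(assessments)} protocols flagged as potentially high risk")
```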
Once the protocol is completed, one should stick to it. Sender et al. studied to what extent this is indeed the case by determining the proportion of trial reports referring to the protocol, the proportion of protocols accessible from trial registries, and the types of additional publications associated with trial reports, using a previously gathered sample of randomized trials of nonpharmacological interventions. The investigators concluded that trial protocols were often not linked to the main report or registry and were difficult to obtain, and that many trial-related publications were inconsistently linked. The potential of trial registries to facilitate the threading of articles related to a trial is underutilized.

As to the realization of trials, this journal has written about the increasingly untenable situation of funding and organizing trials for each separate piece of the knowledge mosaic of intervention research [4]. Tran and Ravaud characterize the current clinical research system as relying on a "'one-off,' project-by-project model involving a costly and time-wasting permanent construction and deconstruction of the research infrastructure." They propose a new model based on collaborative principles: the COllaborative Open Platform (COOP) e-cohort, which builds a large community of patients willing to participate in the generation of a large database of patient-reported data that can be individually linked with routinely collected care data. Investigators can benefit from already enrolled participants or collected data, or add new data collections to answer research questions. An example of this approach is the Community of Patients for Research (ComPaRe), a proof-of-concept COOP e-cohort in the field of chronic conditions, already being used for many research projects. According to the authors, COOP e-cohorts will accelerate the research process by avoiding redundancy and limiting research waste.

In conducting trials, achieving the required recruitment of participants can be supported by evidence-based prediction [5]. Gkioni et al. investigated current practice for recruitment prediction and monitoring within clinical trials. For this purpose, they surveyed chief investigators and statisticians, identified from clinical studies in Europe, about the data sources, methods, and adjustments used to predict and monitor recruitment. They found that multiple data sources were used to support recruitment rates, and that simple prediction methods were preferred to statistical models, which are rarely used because of lack of familiarity with statistical methods and time constraints. The authors expect that simple methods will remain the mainstay of prediction, but they recommend generating evidence on the benefits of more complex statistical models to promote their implementation. They also propose further work on the quality of the multiple data sources used to support recruitment prediction.
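As a rough illustration of the gap between the two approaches, the sketch below contrasts a simple constant-rate projection with a Poisson-based projection that adds an approximate prediction interval. All numbers, and the normal approximation used for the interval, are invented for illustration and are not taken from the survey.

```python
# Illustrative contrast between a simple constant-rate recruitment
# projection and a Poisson-based one; all figures are invented.
import math

observed_weeks = 12        # weeks of recruitment observed so far
recruited_so_far = 90      # participants enrolled in that period
target = 400               # total sample size required
rate = recruited_so_far / observed_weeks   # participants per week

# Simple method: extrapolate the observed rate linearly.
weeks_needed = target / rate
print(f"Constant-rate projection: {weeks_needed:.0f} weeks to reach {target}")

# Poisson-based method: treat weekly accrual as Poisson(rate) and give a
# rough 95% interval over a fixed horizon via a normal approximation
# (mean = variance = rate * horizon).
horizon = 26               # weeks to project forward
mean = rate * horizon
half_width = 1.96 * math.sqrt(mean)
print(f"Poisson projection over {horizon} weeks: {mean:.0f} participants "
      f"(approx. 95% PI {mean - half_width:.0f} to {mean + half_width:.0f})")
```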
In designing and implementing the recruitment process for RCTs, external validity is a key issue. Bradburn and coworkers assessed whether the 'Relative Effectiveness of Pumps over MDI [multiple daily injections] and Structured Education' trial in people with type 1 diabetes mellitus mirrored the wider population, and they studied the impact of any differences on the trial's findings. This was possible because the trial was nested within a large UK cohort of people with type 1 diabetes mellitus undergoing structured diabetes-specific education. Although the trial participants differed from the cohort on a number of demographic and clinical variables, the treatment effects were similar to those of the original RCT, that is, unaffected by sampling adjustments. While the authors encourage investigators to address criticisms of generalizability, they recognize that doing so is problematic, since external data, even if available, may contain limited information, and analyses can be susceptible to model misspecification.

A related issue is that trial findings may suffer from fragility. According to Walter et al., fragility has been defined as the number of changes in outcomes required to change statistical significance, which provides a potentially misleading perspective because statistical significance conveys no information about treatment effect size. They therefore incorporated the clinical importance of trial results and their quantitative stability into an enhanced framework for assessing fragility. The authors found that the small data changes required to affect statistical significance may actually be unlikely to occur, and they describe the interpretation of studies with various combinations of statistical significance, clinical importance, and quantitative stability. They conclude that the concept of fragility should indeed include the clinical importance of trial findings and their quantitative stability, as well as statistical significance. Stability of study results should imply that they are both statistically significant and quantitatively stable, although such results can still be either clinically important or unimportant.
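For readers unfamiliar with the underlying quantity, the sketch below computes the classic fragility index for a two-arm trial with a binary outcome, that is, the number of outcome flips needed to lose statistical significance; the enhanced framework of Walter et al. layers clinical importance and quantitative stability on top of this. The implementation and the example counts are ours, assuming SciPy's fisher_exact.

```python
# Minimal sketch of the classic fragility index; the example counts are
# invented and this is not the enhanced framework of Walter et al.
from scipy.stats import fisher_exact

def fragility_index(events_a, n_a, events_b, n_b, alpha=0.05):
    """Number of additional events in the arm with fewer events needed to
    lose statistical significance. Returns None if the result is not
    significant to begin with or significance never flips."""
    if events_a > events_b:  # work on the arm with fewer events
        events_a, n_a, events_b, n_b = events_b, n_b, events_a, n_a
    _, p = fisher_exact([[events_a, n_a - events_a],
                         [events_b, n_b - events_b]])
    if p >= alpha:
        return None
    flips = 0
    while p < alpha and events_a < n_a:
        events_a += 1        # convert one non-event to an event
        flips += 1
        _, p = fisher_exact([[events_a, n_a - events_a],
                             [events_b, n_b - events_b]])
    return flips if p >= alpha else None

# Hypothetical trial: 10/100 vs 25/100 events, significant at p < 0.05.
print(fragility_index(10, 100, 25, 100))  # flips needed to cross alpha
```

Note that the conventional alpha = 0.05 threshold is itself part of what makes the index a purely statistical measure, which is exactly the limitation the enhanced framework addresses.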
The assessment of the stability of effectiveness findings may require a longer time axis before final conclusions are drawn. Laporte et al. studied this for an early detected treatment effect over time, using the example of an unexpected promising effect of low molecular weight heparins (LMWHs) on survival in patients with cancer, observed in post hoc subgroup analyses of early trials but not found in more recent trials. The authors performed a cumulative meta-analysis of survival data from RCTs. For the earlier trials, this showed a significant improvement in overall survival with LMWHs, but the benefit then regressed and even disappeared over time. According to the authors, this finding suggests 'p-hacking' [6] and selective reporting of post hoc subgroup results in the early studies.

When a trial has been published, it should be easy to find in the literature. Since, as Taljaard et al. write, identifying pragmatic trials among all randomized trials is challenging because of inconsistent reporting, they developed and validated a search filter to identify reports of such trials in Ovid MEDLINE. For this purpose, they generated candidate terms for pragmatic trials, selected terms discriminating between clinical trials and explanatory trials, and used externally derived sets to validate the sensitivity and specificity of the filters. The performance of the developed and validated sensitivity-maximizing filter was superior to that of other ad hoc filters for pragmatic trials. The investigators conclude that a highly specific filter with moderate sensitivity is now available for identifying reports of trials that are more likely to be pragmatic.

Investigators are increasingly invited to make their original trial data available for individual participant data meta-analyses (IPDMAs). The extent to which eligible randomized controlled trials indeed contribute data to IPDMAs was investigated by Azar et al., who identified IPDMAs with at least 10 eligible RCTs from MEDLINE, EMBASE, CINAHL, and Cochrane. They found that among these IPDMAs, 67% of 774 eligible RCTs contributed data. Data contribution was positively associated with a higher impact factor of the journal that published the RCT, conduct of the RCT in the United Kingdom rather than the United States, and a more recent publication year of the RCT. As possible solutions to improve data contribution, the authors suggest centralized repositories for clinical trial data and recognition of data sharing as a criterion for promotion and career advancement.

Meta-analysis is becoming increasingly complex in order to serve real practice as precisely as possible, for example, by comparing the effectiveness of the most important candidate interventions for a specific health condition. Papakonstantinou et al. examined the relative contribution to estimates of treatment effects in network meta-analysis (NMA) of network paths of different lengths: length 1 (direct evidence), length 2 (indirect evidence with one intermediate comparator), length 3 (indirect evidence with two intermediate comparators), et cetera. In an analysis of 213 published NMAs, they found that 33% of the information came from paths of length 1, 47% from paths of length 2, and 20% from paths of length 3, and that the contribution of different paths depends on the size and structure of the network. The authors emphasize the importance of their findings for assessing the risk of bias in NMA results.

An urgent need for comprehensive, high-quality evidence synthesis on effectiveness has been brought about by the COVID-19 pandemic. The related methodological challenges are multiple and huge, as shown in our COVID-19 series. In a commentary, Ruano et al. discuss what evidence-based medicine researchers can do to help clinicians fighting COVID-19, given that the many thousands of articles on this topic make it difficult for clinicians to find appropriate answers. They summarize a number of relevant initiatives worldwide (e.g., WHO, the EPPI-Centre (UK), the LOVE platform, the Cochrane COVID-19 Study Register, and COVID-END (Canada)). The authors make a plea for methodological experts to support clinicians and present the COVID-evidence project to serve this important aim.

References
[1] Medical Research Council. Streptomycin treatment of pulmonary tuberculosis: a Medical Research Council investigation. BMJ 1948;2(4582):769-82.
[2] Hill AB. The clinical trial. N Engl J Med 1952;247:113-9.
[3] A scoping review of the problems and solutions associated with contamination in trials of complex interventions in mental health.
[4] Trials embedded in cohorts, registries, and health care databases are gaining ground.
[5] A systematic review describes models for recruitment prediction at the design stage of a clinical trial.
[6] Simonsohn U, Nelson LD, Simmons JP. P-curve: a key to the file-drawer. J Exp Psychol Gen 2014;143:534-47.