About the Author(s)


Emir Efendic
Department of Human Performance Management, University of Eindhoven, Eindhoven, Netherlands

Psychological Sciences Research Institute, University of Louvain, Louvain-la-Neuve, Belgium

Llewellyn E. van Zyl
Department of Human Performance Management, University of Eindhoven, Eindhoven, Netherlands

Optentia Research Programme, Faculty of Humanities, North-West University, Vaal Triangle Campus, Vanderbijlpark, South Africa

Citation


Efendic, E., & Van Zyl, L. E. (2019). On reproducibility and replicability: Arguing for open science practices and methodological improvements at the South African Journal of Industrial Psychology. SA Journal of Industrial Psychology/SA Tydskrif vir Bedryfsielkunde, 45(0), a1607. https://doi.org/10.4102/sajip.v45i0.1607

Opinion Paper

On reproducibility and replicability: Arguing for open science practices and methodological improvements at the South African Journal of Industrial Psychology

Emir Efendic, Llewellyn E. van Zyl

Received: 10 Nov. 2018; Accepted: 23 Apr. 2019; Published: 30 May 2019

Copyright: © 2019. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Problematisation: In recent years, psychology has been going through a crisis of sorts. Research methods and practices have come under increased scrutiny, with many issues identified as contributing to the low replicability and reproducibility of psychological research.

Implications: As a consequence, researchers are increasingly called upon to overhaul and improve their research processes. Various stakeholders within the scientific community are arguing for more openness and rigour within industrial and organisational (I-O) psychological research. A lack of transparency and openness further fuels criticism of the credibility and trustworthiness of I-O psychology, which in turn undermines the evidence-based practices it supports. Furthermore, traditional gatekeepers such as grant agencies, professional societies and journals are adapting their policies in an effort to curtail these trends.

Purpose: The purpose of this opinion paper is, therefore, to stimulate an open dialogue with the South African Journal of Industrial Psychology (SAJIP) contributing authors, its editorial board and readership about the challenges associated with the replication crisis in psychology. Furthermore, it attempts to discuss how the identified issues affect I-O psychology and how these could be managed through open science practices and other structural improvements within the SAJIP.

Recommendations: We enumerate several easily implementable open science practices, methodological improvements and editorial policy changes that can strengthen credibility and transparency within the SAJIP. Building on these, we recommend changes to current practice that researchers and the SAJIP can take up to improve reproducibility and replicability in I-O psychological science.

Keywords: Open science; replication; reproducibility; industrial psychology; organisational psychology; academic publishing.

Introduction

In recent years, psychology has been facing challenges to its scientific integrity, ranging from low replicability to methodological flaws and misapplied statistical practices (Camerer et al., 2018; Simmons, Nelson, & Simonsohn, 2011). The apex of this criticism came with the Open Science Collaboration’s (2015) attempt to replicate 100 empirical studies published in three high-ranking psychology journals. Only 36% of the replications produced statistically significant results, with the mean effect sizes in the replications being approximately half the magnitude of those in the original studies. Given that 97% of the original 100 studies had produced significant results, it was evident that something was amiss.

Arguably, the failed replications that received the most (media) attention stem from social and cognitive psychology. However, similar trends have been noted in other sub-disciplines. Findings from management psychology, consumer psychology, psychophysics, psycholinguistics and positive psychology are increasingly being questioned and/or refuted (Bergh, Sharp, Aguinis, & Li, 2017; Brown, Sokal, & Friedman, 2014a, 2014b; Dimitrov, 2014; Doyen, Klein, Pichon, & Cleeremans, 2012; Hubbard & Vetter, 1996; Lehrer, 2010; McDaniel & Whetzel, 2005; Ritchie, Wiseman, & French, 2012; Rouder & Morey, 2011). In many cases where studies in these sub-disciplines were subjected to replication, the findings did not support those of the original publications (Martin & Clarke, 2017). These trends have justifiably led to claims that the entire discipline of psychology is undergoing a ‘crisis of confidence’ or a ‘replication crisis’ (Pashler & Wagenmakers, 2012).

This crisis also casts doubt on applied psychological sub-disciplines such as industrial and organisational (I-O) psychology (Grand et al., 2018; Kepes & McDaniel, 2013). In the past 5 years, we have seen a significant increase in critiques of the research practices that I-O psychological researchers employ, as well as of the validity and trustworthiness of I-O psychological research (Banks & O’Boyle, 2013; Banks et al., 2016a, 2016b; Bosco, Aguinis, Field, Pierce, & Dalton, 2016; Grand et al., 2018; O’Boyle, Banks, & Gonzalez-Mule, 2017). There is a growing body of evidence showing that I-O psychologists employ questionable research practices (Banks et al., 2016a; Bedeian, Taylor, & Miller, 2010), are guilty of misconduct (Atwater, Mumford, Schriesheim, & Yammarino, 2014), and that the evidence base for their practices is questionable (Bosco et al., 2016; McDaniel, Kepes, Hartman, & List, 2017). For instance, Kepes, Banks, McDaniel, and Whetzel (2012) found evidence of sample suppression, in that only samples with significant effect sizes tend to get published. Furthermore, Bedeian et al.’s (2010) survey of management faculties (where I-O psychologists constitute a large proportion of personnel) examined knowledge of so-called methodological flexibility (cf. Simmons et al., 2011). They found that 60% of faculty knew of a colleague who ‘dropped observations or data points from analyses’ and 50% knew of a colleague who ‘withheld data that contradicted their previous research’. In a comparison between I-O student dissertations and the articles eventually published from them, McDaniel et al. (2017) found that sample sizes, covariates and hypotheses differed in 63% of cases. They also found that while 64% of the hypotheses in the academic papers were statistically significant, only 31.8% were reported as such in the student dissertations. This is a clear sign that considerable conscious effort was invested in reworking analyses, changing the aims of papers (based on the results of the statistical analyses) and altering the data to ensure that support was found for the hypotheses.

Similarly, Kepes and McDaniel (2013) argued that I-O’s structural problems,1 various issues in the conduct of research2 and the editorial processes of journals3 have led to a lack of credibility and trustworthiness in its findings. This led them to conclude that ‘the I-O research literature may likely contain an uncomfortably high rate of false-positive results and other misestimated effect sizes’ (p. 256).

Adding to this, it has also been established: (1) that exact replication studies tend to be discouraged by I-O psychology journals (Makel, Plucker, & Hegarty, 2012; Martin & Clarke, 2017; Simmons et al., 2011), (2) that null (and negative) findings are rarely published or are actively discouraged4 (Martin & Clarke, 2017), (3) that I-O psychology is prone to HARKing5 (Bosco et al., 2016), (4) that reanalysis of the same datasets by other I-O researchers produces results that differ from the original articles (Kepes, Banks, & Oh, 2014), (5) that I-O psychologists themselves question the credibility, validity and trustworthiness of their own discipline’s research (Kepes & McDaniel, 2013), (6) that researchers think the discipline has lost its ability to self-correct and to produce accurate cumulative knowledge (Giner-Sorolla, 2012; Ioannidis, 2005, 2012; Kepes & McDaniel, 2013) and (7) that practitioners, organisations and the general public fail to see the relevance of, or even question the validity of, I-O psychological theories or ‘tools’ (Earp & Trafimow, 2015; Pashler & Wagenmakers, 2012; Wong & Roy, 2017). Considering the above, the only reasonable conclusion is that I-O psychology is facing the same crisis of confidence/replication as other areas of psychology. Debates about the credibility and trustworthiness of I-O have thus moved from general critiques of how data are processed and interpreted, or of ‘relevance to practice’ (Field, Baker, Bosco, McDaniel, & Kepes, 2016; Mazzola & Deuling, 2013), to specific critiques of the analysis, interpretation, replication and application of findings (Grand et al., 2018; Roll, Van Zyl, & Griep, in press).

Although the causes of the replication crisis and the associated crisis in confidence vary, several overarching contributing factors have been identified. These are common to all areas of psychological research, including I-O, and it is imperative that both the South African Journal of Industrial Psychology (SAJIP) and its contributors take proactive action to address them. As such, the purpose of this opinion paper is to stimulate an open dialogue with prospective SAJIP authors, its editorial board and readership around the challenges associated with the replication/confidence crisis and how these could be managed through open science practices and other structural improvements within the SAJIP. Specifically, the focus is two-fold: (1) on providing a brief overview of the underlying challenges associated with the replication crisis and (2) on proposing recommendations and practical guidelines to authors and the SAJIP on how to address these through open science practices and other structural improvements. It is our hope that the scholarly commentaries from SAJIP stakeholders on this opinion paper will aid in developing a clear strategy on how these matters could be managed, what the role of the SAJIP is in this process, and how the SAJIP and its contributors could proactively engage to address the issues.

Issues contributing to the crisis

Grand et al. (2018) argued that the replication crisis within I-O psychology is fundamentally a function of questionable research practices, editorial policies and processes, and conscious misconduct by researchers. However, in contrast to reports in the media, only a small number of failures to replicate are due to outright fraud or conscious data manipulation/fabrication (Atwater et al., 2014; De Boeck & Jeon, 2018). Instead, the crisis is most likely the result of a series of systemic issues that have converged and are now considered ‘standard research practices’ (Nuijten, 2018). The main issues identified are (1) reliance on small sample sizes and low statistical power, (2) publication bias, (3) questionable research practices and publication pressure, (4) perverse incentives for publication and (5) lack of transparency (Simmons et al., 2011; Pashler & Wagenmakers, 2012).

Statistical power and small samples

A perennial issue is the lack of statistical power, defined as the probability of detecting an effect of interest when there is indeed an effect to be detected (Faul, Erdfelder, Lang, & Buchner, 2007). Historically, statistical power has been poor in psychological research (Fraley & Vazire, 2014). Compared with low-powered studies, highly powered studies are more likely to detect valid effects, buffer the literature against false positives, and produce findings that other researchers can replicate. Statistical power is related to sample size, and achieving acceptable power for detecting an effect (conventionally set at 80%) often requires much larger samples than are typically collected (Cohen, 1988). Justifying one’s sample size and performing a power analysis to arrive at this number is arguably one of the easiest changes that can be implemented.
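To make the relation between power, effect size and sample size concrete, the normal-approximation formula below (our own illustration, not taken from the sources cited here) gives the approximate per-group sample size needed to detect a standardised mean difference d in a two-group comparison with two-sided significance level α and power 1 − β:

```latex
% Approximate per-group sample size (normal approximation; an exact
% t-test calculation adds a handful of participants).
n \approx \frac{2\,(z_{1-\alpha/2} + z_{1-\beta})^{2}}{d^{2}},
\qquad \text{e.g. } d = 0.2,\ \alpha = 0.05,\ 1-\beta = 0.80
\;\Rightarrow\; n \approx \frac{2\,(1.96 + 0.84)^{2}}{0.04} \approx 392.
```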

Publication bias

Another fundamental issue is the reporting of only significant effects because of publication bias, the phenomenon whereby statistically significant (rather than non-significant) findings have a higher probability of being published (Kepes & McDaniel, 2013). There is ample evidence that psychology is affected by publication bias (e.g. Kühberger, Fritz, & Scherndl, 2014), although the reasons differ between sub-disciplines (Kepes et al., 2014). According to Banks and McDaniel (2011), within I-O psychology publication bias is largely a function of decisions that authors consciously make regarding the design and analysis of their studies, decisions of the organisation from which the data are obtained, the editorial policies of a given I-O journal or conference, and reviewer suggestions. At the individual (author) level, for example, McDaniel et al. (2017) found that more than 64% of researchers were likely to change hypotheses and the associated arguments between a student’s dissertation and the eventual academic publication that flowed from it, and this was more likely when the publication was in a high-impact journal. At the organisational level, companies are less likely to grant permission to publish studies if the results cast a negative light on their products, services or organisational functioning (Kepes et al., 2014). At the editorial level, processes may make it difficult to publish null findings, as journals are pressured to chase high impact factors and therefore accept only novel or innovative studies (McDaniel et al., 2017). Editors may also suggest the removal of hypotheses or changes to the original design or analytic strategy (Banks & McDaniel, 2011). Finally, at the reviewer level, reviewers may reject articles with null findings because these are perceived to hold little value (Banks & McDaniel, 2011). More generally, Kepes et al. (2012) found that, in I-O meta-analytic reviews, authors seem to pay little or no attention to the possibility of publication bias. Needless to say, publishing only significant findings can give a distorted view of the literature.

Questionable research practices and publication pressure

Researchers are under increasing pressure to publish in high-impact journals (Bosco et al., 2016). This pressure is largely the result of universities wanting to enhance their global rankings and to generate more funds through grants and subsidies (Feeney, 2018; Mudrak et al., 2018). At an individual level, the pressure to publish is driven by the need for promotion, tenure, academic awards and job mobility (Feeney, 2018; Kepes & McDaniel, 2013). Given this immense pressure, I-O researchers are inclined to strategically ‘optimise’ their data and to employ a variety of techniques or analytical practices to ensure that articles get published (Bosco et al., 2016). The flexibility in, and availability of, data-analytical processes, practices and tools affords researchers the opportunity to find creative means to proverbially ‘make the model fit’ in any circumstance.

The seminal paper by Simmons et al. (2011) enumerated many flexibilities in data analysis and showed how they can lead to an overrepresentation of false-positive findings. The implementation of these ‘questionable research practices’ can lead to the literature becoming populated by noise. John, Loewenstein, and Prelec (2012) tried to measure the prevalence of these practices and found that a substantial number of researchers admitted to applying them. The techniques identified in their paper include: failing to report all of a study’s dependent variables; deciding to collect more data after looking at the results; failing to report all conditions; stopping data collection early because a significant result was found; rounding off p-values; reporting only studies that worked; deciding whether to exclude data after looking at the results; reporting an unexpected finding as having been predicted from the start; claiming that results are unaffected by demographic variables when this is untrue; transforming data or removing cases/participants that skew results; and falsifying data. Furthermore, in a systematic review of articles published within the domain of I-O psychology, Banks et al. (2016a) found that 91% of the papers in their sample were guilty of a number of questionable research practices. They argued that:

Engagement in questionable research practices is occurring at rates that far surpass what should be considered acceptable. Thus, some type of action is clearly needed to improve the state of our science. (p. 328)

We must be careful and clear here: we are not implying malice on the side of I-O researchers. Rather, a lack of awareness of these issues, coupled with external, systemic pressures, can and has contributed to the widespread application of these behaviours.

Perversely incentivising research

Within the South African context, academic institutions and their affiliates are monetarily incentivised for each paper published in a journal on the Department of Higher Education and Training’s (DHET) list of accredited journals (Woodiwiss, 2012). This sort of incentivisation, while increasing productivity, can give rise to ‘perverse incentives’ (Tomaselli, 2018). When monetary rewards are tied to research outputs, the temptation to engage in ‘sloppy’ yet fast-producing work is naturally heightened. Researchers may be more inclined to ‘cut corners’ and to prioritise quantity over quality. Such a context can also lead to the proliferation of questionable research practices and a further devaluing of negative research results, as these are inherently less likely to be published. Furthermore, monetarily incentivising research provides an opportunity for fraudulent research practices, data manipulation (Tomaselli, 2018) and even submission to predatory journals as a means of enhancing one’s personal research output6 (Smillie, 2014; Thomas, 2018).

Lack of transparency

The credibility of research, and of the scientific process it subscribes to, rests on the opportunity to transparently verify the research products upon which it is based (Conte & Landy, 2018). Until very recently, I-O psychology journals did not require authors to provide access to their data or materials (Martin & Clarke, 2017). This has made it extremely difficult to independently probe and understand the quality of research outputs (Ioannidis, 2012). There are several benefits to a transparent workflow. Imagine reading about a statistical analysis that simply stated that an effect was found. Without details of the analysis method or any supporting information, one would be hard pressed to accept such a report at face value. And yet studies are often reported without much of the information that is crucial for establishing reproducibility and replicability. Because papers often lack even the most basic statistical and/or methodological information, it is difficult to establish so-called analytic reproducibility by re-running the reported statistical analyses (Hardwicke et al., 2018). It is often also not possible to examine the analytic robustness of findings; one cannot, for instance, examine how conclusions depend on specific choices in data analysis. Finally, when stimuli or materials are made available, it is easier for researchers to conduct replications and to build further research on the presented findings (Simons, 2014). Many other benefits of transparency are enumerated in Klein et al. (2018), which we urge the reader to consult. An open science is a verifiable science, and it fulfils our core mission of producing and identifying truths. By embracing open practices and increasing the transparency of their work, I-O researchers would be taking a concrete and positive step in the right direction for the discipline.

What can be done? – Recommendations for practice and policy

Although there is much debate around the confidence/replication crisis, there is agreement that it can be addressed by employing more ‘open science practices’ (Brandt et al., 2014; Frankenhuis & Nettle, 2018; Klein et al., 2018) and by improving our research methods (Simmons et al., 2011; Van’t Veer & Giner-Sorolla, 2016). The same has been proposed for I-O psychology specifically (Banks & McDaniel, 2011; Feeney, 2018; Grand et al., 2018; Kepes et al., 2012; Kepes & McDaniel, 2013). Below, we enumerate several recommendations for researchers and the SAJIP for managing this systemic issue. Although not an exhaustive list, the aim is to address the most imperative and prevalent issues that are directly relevant to the SAJIP7 and its authors.8

In order to highlight some of the problems and to support the recommendations proposed below, a random sample of seven quantitative articles from the two most recent volumes (44 and 45) of the SAJIP was drawn for illustrative purposes. At the time of writing, 15 quantitative articles had been published in these two volumes. We checked the seven selected articles against several of the identified issues. Table 1 provides an overview of these checks.

TABLE 1: Descriptive overview of seven randomly selected quantitative papers from SAJIP volumes 44 and 45.
Recommendations for authors

Authors can do much to promote transparent, replicable and ethical research practices. This means adapting one’s research process and investing more effort in the preparation phase of projects. Below, several recommendations are presented to aid active contributors to the SAJIP.

Tackling statistical issues

It is advised to properly power one’s studies and experiments. Statistical power, sample size and p-values are related, and many free tools can be used to calculate the sample size necessary for adequate power to detect effects of interest. Tools like G*Power (Faul et al., 2007) provide sample size calculations for most traditional types of analyses. Researchers’ intuitions about effect sizes are usually wrong: one study found that 89% of researchers overestimated the power of specific research designs with a small expected effect size, and 95% underestimated the sample size needed to obtain 80% power for detecting a small effect (Bakker, Hartgerink, Wicherts, & Van der Maas, 2016). We recommend that authors consult Perugini, Gallucci, and Costantini (2018) or Westland (2010) for instructions on conducting power analyses.
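As a minimal sketch of such an a-priori calculation (our own illustration using the freely available statsmodels library, assuming an independent-samples design and a small effect of d = 0.2, rather than output from G*Power):

```python
# A minimal a-priori power analysis sketch: solve for the per-group sample
# size needed to detect a small effect (Cohen's d = 0.2) with 80% power and
# a two-sided alpha of 0.05 in an independent-samples t-test.
import math
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.2, power=0.80, alpha=0.05)
print(math.ceil(n_per_group))  # about 394 participants per group
```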

In our sample of SAJIP articles, only one of the papers (P2) provided a justification of sample size and conducted a power analysis.9 Furthermore, based on the guidelines of Perugini et al. (2018) and Guo and Pandis (2015), as well as Westland’s (2010) formula for sample size estimation in structural equation modelling, five out of the seven sampled articles were insufficiently powered; their sample sizes were too small to accurately detect an effect.
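As an illustration of how such a check can be performed, the sketch below implements the rule-of-thumb lower bound on SEM sample size commonly attributed to Westland (2010); the formula and the example numbers (42 indicators, six latent variables) are our reading of footnote 9 rather than the sampled authors’ own code.

```python
# A sketch of the lower bound on SEM sample size commonly attributed to
# Westland (2010): n >= 50*(j/k)^2 - 450*(j/k) + 1100, where j is the number
# of observed indicators and k the number of latent variables.
def westland_lower_bound(indicators: int, latents: int) -> int:
    ratio = indicators / latents
    return round(50 * ratio ** 2 - 450 * ratio + 1100)

# The figures from footnote 9 (42 indicators, 6 latent variables) reproduce
# the minimum of roughly 400 cases cited there.
print(westland_lower_bound(indicators=42, latents=6))  # -> 400
```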

Many issues also relate to the use and reporting of statistics. A large number of papers contain statistical errors which make them hard to reproduce, and many of these appear to be typographical (Nuijten, Hartgerink, Van Assen, Epskamp, & Wicherts, 2016). However, new tools can be used as a sort of ‘spell checker’ for statistics. For instance, Statcheck (http://statcheck.io) can scan a paper, extract statistics reported in the American Psychological Association (APA) reporting style and check whether the reported p-values match the test statistics and degrees of freedom. It is recommended that researchers run their papers through Statcheck before submitting to the SAJIP. A note of caution: Statcheck is currently limited to basic statistical reports such as t-tests, ANOVAs and regressions. It is our hope that this incredibly useful program will continue to add checks for more statistical procedures (e.g. latent class analysis) and thus become useful to a wider audience.
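The core of such a consistency check is easy to illustrate. The sketch below is our own simplified example (it is not the Statcheck tool itself, and the reported values are hypothetical): it recomputes the two-sided p-value implied by a reported t statistic and its degrees of freedom and flags any mismatch with the reported p-value.

```python
# A simplified, Statcheck-style consistency check (illustration only):
# recompute the two-sided p-value from a reported t statistic and degrees of
# freedom, and compare it with the reported p-value.
from scipy import stats

def t_report_is_consistent(t_value: float, df: int, reported_p: float,
                           tolerance: float = 0.005) -> bool:
    recomputed_p = 2 * stats.t.sf(abs(t_value), df)  # two-sided p from t and df
    return abs(recomputed_p - reported_p) <= tolerance

# A hypothetical report of "t(48) = 2.10, p = .04" passes the check:
print(t_report_is_consistent(t_value=2.10, df=48, reported_p=0.04))
```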

Reducing analytical flexibility and systematic bias(es)

As Simmons et al. (2011) reported, ‘flexibilities’ in data reporting and analysis may lead to a staggering proliferation of false positives. One of the most effective ways to reduce this is preregistration, whereby researchers specify certain aspects of a study up front, for example the design, sampling plan, analysis technique and exclusion criteria (if any). Preregistration draws a clear distinction between two modes of research, exploratory and confirmatory, and its benefits have been known for quite some time (de Groot, 2014). A preregistration is sealed, so to speak, and becomes unchangeable once the final version is submitted. Researchers can even preregister their intended analysis plans or study protocols in order to enhance transparency and credibility. All of this should, of course, be done in line with local ethical guidelines and the legislation governing the use and distribution of data. One of the easiest tools for this is the Open Science Framework (OSF), where many preregistration formats are available as templates (Van’t Veer & Giner-Sorolla, 2016).

In our sample of SAJIP articles, none of the papers reported any form of preregistration.

Employing best practice guidelines for statistical analyses and reporting

In recent years, there has been a push within the psychological community to standardise both the analytical processes employed to analyse data and the associated reporting guidelines (Appelbaum et al., 2018). Various ‘best practice guidelines’ or ‘checklists’ have been published on everything from assessing bi-factor structures (cf. Rodriguez, Reise, & Haviland, 2016) to measurement invariance (cf. Van de Schoot, Lugtig, & Hox, 2012). It is suggested that authors refer to the appropriate best practice guidelines in their articles to guide them in processing their data in a systematic and standardised manner. Furthermore, authors should ensure that their manuscripts are in line with the APA’s new ‘Journal Article Reporting Standards for Quantitative Research in Psychology’ (Appelbaum et al., 2018).

In our sample of SAJIP articles, all of the articles only partially conformed to the requirements of the APA’s Quantitative Research reporting standards. Furthermore, large discrepancies in reporting methods exist between articles with similar methodologies and outcomes (P1, P2, P4 and P7).

Transparency and openness

It is recommended that authors make a study’s data and analysis scripts (syntaxes) openly available, so that the study’s analytic reproducibility can be established (Van Zyl, Efendic, Rothmann, & Shankland, in press). This facilitates the detection and correction of unintended errors before the manuscript goes to press (Hardwicke et al., 2018). Wicherts, Bakker, and Molenaar (2011), for example, found that reluctance to share data is associated with weak statistical evidence against the null hypothesis and with errors in the reporting of statistical results. Furthermore, providing access to data and syntaxes can help younger researchers learn from the expertise of others. The most comprehensive repository for sharing is the OSF (https://osf.io/). Other repositories include PsychData (http://psychdata.zpid.de/) and GESIS (https://datorium.gesis.org/xmlui/).10

In our sample of SAJIP articles, none of the papers supplied supplementary material in terms of SPSS/MPlus/AMOS/R syntaxes, nor were open data provided.

Multi-institutional-/research unit collaborations

Research suggests that collaboration between researchers from different institutions, research units or laboratories reduces systemic biases and enhances the reproducibility of findings (Stevens, 2017). Through collaboration with others outside one’s own institution or research unit, studies are more thoroughly planned and executed, and the chances of mistakes in analyses are reduced. Authors can also benefit from different viewpoints and access to larger resource pools. Indeed, some of the recommendations above require larger resources; increasing sample sizes, for instance, usually implies cost increases that many researchers cannot sustain on their own. Several initiatives focused on large-scale collaboration have been developed to address this. One such initiative is ‘StudySwap’ (https://osf.io/view/StudySwap/), a platform for collaboration between research units where researchers can find collaborators by posting descriptions of resources they have available or resources they need that others might have, or by coordinating projects across teams. Another is the ‘Psychological Science Accelerator’ (https://osf.io/93qpg/), which aims to increase the robustness and reliability of psychological science through massive multi-institutional and multi-laboratory collaborations. The accelerator currently has over 350 participating laboratories distributed over 45 countries worldwide.

In our sample of SAJIP articles, none of the authors collaborated with research partners from other institutions, research units or laboratories. Those collaborations which are present are either between students and supervisors (P1, P2 and P6) or between colleagues from the same institution (P5).

Recommendations for the South African Journal of Industrial Psychology

Although it is important to empower authors with practical guidelines on how to manage the replication crisis through open science initiatives, the SAJIP, as a custodian of I-O psychology within South Africa, can play a vital role in dealing with these issues. The SAJIP can develop and implement author guidelines and editorial policies which facilitate, acknowledge and reward transparent research processes. Below are several suggestions, based on international best practices, which could aid the SAJIP in enhancing its credibility.

Transparency and openness promotion

Nosek et al. (2015) developed guidelines for journal policies and practices to empower journals, authors and reviewers to facilitate transparent and open research practices. The transparency and openness promotion (TOP) guidelines currently have more than 5000 signatories and have been adopted by more than 1500 top-tier academic journals (including ‘Science’ and ‘Nature’), and they have become one of the indicators affecting admission into major journal indexes such as the Thomson Reuters Web of Knowledge (formerly ISI).

The TOP guidelines (see Table 2) comprise eight modular standards, each with three levels (tiers) of increasing stringency, which incrementally move scientific communication toward greater openness. They propose that transparency and open science are a function of: (1) citation standards (citing shared data to incentivise publication), (2) data transparency (disclosing, requesting, requiring or verifying shared data), (3) analytical methods transparency (disclosing shared analysis syntaxes or code), (4) research materials transparency (disclosure of research materials such as intervention protocols), (5) design and analysis transparency (setting standards for research design disclosure), (6) preregistration of studies (specifying study details and hypotheses before data collection), (7) preregistration of analysis plans (specifying analytical details before data collection) and (8) replication (encouraging the publication of replication studies). Each standard can be implemented at one of three tiers: Level 1 – Disclosure (e.g. authors must disclose whether or not materials are available), Level 2 – Requirement (e.g. authors are required to share materials during submission, within the review process or after publication) and Level 3 – Verification (e.g. materials are verified by a third party to ensure that standards are met). The SAJIP can modularly select the levels of transparency it requires in order to advance transparent research practices, while taking local laws, research ethics and associated author rights into consideration.

TABLE 2: Transparency and openness promotion guideline standards and levels (tiers) of adoption for transparent and open research.

It is suggested that the SAJIP leads by example and adopts at least a Level 1 stance on all eight principles, and then systematically moves towards Level 2 through the education, training and capacity development of its contributors. As one of the journal’s main drives is to be included in the main Thomson Reuters Web of Knowledge Index (Coetzee, 2018; Coetzee & Van Zyl, 2013, 2014), adopting the TOP guidelines would be a big step in the proverbial right direction.

It is also suggested that Principle 3 (analytical methods/code transparency) be positioned at Level 2, as modifications to models in articles published in the SAJIP are sometimes unreported or unclear. Such modifications can be seen by comparing the reported degrees of freedom across different models, but explanations of model modifications, and theoretical justifications for them, are rarely provided by authors. For example, in our sample, Article P7 employed structural equation modelling but did not report all the fit indices required to evaluate model fit. It also failed to mention the number of items on the engagement scale employed and the direction of the Likert scales. It is, therefore, difficult not only to understand the reasoning behind the choices the author made in the subsequent analyses, but also to estimate the trustworthiness of the findings. Looking at the reported degrees of freedom (Model 1: df = 40; alternative model: df = 39), and taking the sample size (n = 300), the item-to-latent-factor loadings and the relationships the author tested into consideration, there is a rather large discrepancy between the expected and reported degrees of freedom. Had the author provided the AMOS syntax, it would have been clear which modifications were made to the model.
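To illustrate the kind of arithmetic a reader can use here (a sketch using the indicator count from footnote 9, not a reconstruction of the author’s actual model), the degrees of freedom of a covariance-structure model equal the number of unique elements in the observed covariance matrix minus the number of freely estimated parameters:

```python
# Degrees of freedom for a covariance-structure (SEM/CFA) model without a
# mean structure: unique variances/covariances minus free parameters.
def sem_degrees_of_freedom(n_observed: int, n_free_parameters: int) -> int:
    unique_moments = n_observed * (n_observed + 1) // 2
    return unique_moments - n_free_parameters

# With 42 observed variables there are 42 * 43 / 2 = 903 unique moments, so a
# reported df of 40 would imply 863 freely estimated parameters; such a
# discrepancy is exactly what shared syntax would explain.
print(sem_degrees_of_freedom(n_observed=42, n_free_parameters=863))  # -> 40
```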

Submission process and incentives

Journals have the ability to set procedures that signal the values they espouse. Some have started to require authors, during submission, to justify their sample sizes and to state where they are sharing data and materials (or why they are not, as there can be legitimate reasons for being unable to share). Simple directives, like Psychological Science’s 2014 decision to start awarding badges to papers that share data, preregister studies and so forth, can incentivise researchers to engage in more open research practices. By the first half of 2015, the proportion of articles in that journal stating that data were available had increased to around 40%, while rates in four other psychology journals remained below 10% (Baker, 2016; Kidwell et al., 2016). It is recommended that the SAJIP implement similar procedures during submission and encourage authors to partake in the open and transparent sharing of materials and data. Correct and extensive reporting of methods and statistics is sometimes constrained in journals because of space and word counts. Many journals now support the submission of online supplementary materials where more detail can be provided. This could play an important role in minimising publication bias (Banks & McDaniel, 2011) and is recommended for the SAJIP.

Registered reports

Recently, a new publishing option has been proposed that may neutralise bad incentives, permit the publication of null results and encourage replication attempts (Chambers, Feredoes, Muthukumaraswamy, & Etchells, 2014). Registered reports entail that a study’s rationale, introduction, hypotheses, experimental procedures and statistical power analysis are reviewed before any data are collected. Upon review, the article can be rejected or accepted in principle for publication. If in-principle acceptance is received, the authors proceed with the study, adhering to the peer-reviewed procedures, and the results are published regardless of what they show. Currently, 168 journals use the registered reports format, either as a regular submission option or as part of a special issue (https://cos.io/rr/). It is recommended that the SAJIP explore this publishing option.

Developing best practice guidelines and publication standards for the most frequently occurring analytical methods

To ensure further transparency, it is suggested that the SAJIP determine the most frequently used analytical methods among its contributors and provide structured guidelines and reporting standards for these. The SAJIP could invite methodological experts to provide brief ‘best practice guidelines’ for these methods in order to aid authors in conducting their analyses in a structured and standardised manner. Furthermore, the SAJIP could incorporate the APA’s new reporting standards for quantitative research into its editorial policies and systematically educate authors and section editors as to their contents (Appelbaum et al., 2018).

Experimenting with an open, collaborative peer review process

A collaborative (open) peer review process involves an active dialogue between researchers, reviewers and editors throughout the review process. In contrast to the traditional peer review process, where an article is independently reviewed by two anonymous reviewers, collaborative review makes reviewers, authors and editors publicly known in order to facilitate a discussion between the different parties until the paper is publishable. Reviewers have a chance to build on each other’s views, and authors have an opportunity to discuss the reviews in a collaborative fashion. This type of review not only exposes reviewer bias (Dobele, 2015; Miller, 2006) but also provides opportunities for inter-professional learning and transparently negotiated feedback, and enhances the overall quality of the manuscript (Kriegeskorte, 2012). Various high-impact journals such as Frontiers in Psychology and Systems (a Nature publication) have successfully implemented open collaborative or open interactive review processes, which have led to improved transparency, improved citation ratios, higher impact factors and more international collaboration (Kriegeskorte, 2012).

Conclusion

The current confidence crisis plaguing psychological science is neither an isolated event nor contained within a specific discipline. I-O psychological researchers, journals and publishers are adapting to this crisis by implementing major changes to how research is conducted (Feeney, 2018). At both the individual and the policy level, I-O psychological researchers are leading the charge in trying to solve the issues that have been identified and are working on practical solutions and strategies directed at solving them (De Boeck & Jeon, 2018; Feeney, 2018; Grand et al., 2018; McDaniel et al., 2017). Individual researchers can increase their sample sizes, justify their analysis plans, preregister their studies, improve their statistical inferences and, through openness, allow access to their data and materials, thus facilitating the growth of a reproducible and replicable science.

Academic journals, such as the SAJIP, can also actively contribute to reforming academic publication practices by encouraging and rewarding open and transparent science. For nearly half a century, the SAJIP has been at the forefront of scientific and editorial advancement within Africa (e.g. it was the first psychology-related open access journal in Africa, the first to adopt a dual distribution channel, the first journal in Africa to build the capacity of junior reviewers and the first to incentivise reviewers), and it may play an even more vital role in the further professionalisation of I-O psychological research practices. The SAJIP may act as a thought leader, not only in the discipline of psychology but in academic publishing throughout Africa, by advancing open and transparent research practices. Many of the issues reported above require more systematic support from the SAJIP, its editors and editorial board, its contributors, and AOSIS (Pty) Ltd in order to be managed effectively in the years to come.

Finally, the SAJIP can set the benchmark for what academic publishing in Africa should be by incorporating the TOP guidelines and by providing best practice guidelines on conducting and reporting analyses. It has the potential to drive the development of shared standards for open science practice, to translate scientific norms and values into concrete actions and to become the custodian of transparency and academic integrity within Africa.

Acknowledgements

Competing interests

The authors declare that they have no financial or personal relationships which may have inappropriately influenced them in writing this article. The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of any affiliated agency of the authors.

Authors’ contributions

E.E. and L.E.v.Z. contributed equally to this project.

References

Appelbaum, M., Cooper, H., Kline, R. B., Mayo-Wilson, E., Nezu, A. M., & Rao, S. M. (2018). Journal article reporting standards for quantitative research in psychology: The APA Publications and Communications Board task force report. American Psychologist, 73(1), 3.

Atwater, L. E., Mumford, M. D., Schriesheim, C. A., & Yammarino, F. J. (2014). Retraction of leadership articles: Causes and prevention. Leadership Quarterly, 25, 1174–1180.

Baker, M. (2016). Digital badges motivate scientists to share data. Nature. https://doi.org/10.1038/nature.2016.19907

Bakker, M., Hartgerink, C. H., Wicherts, J. M., & Van der Maas, H. L. (2016). Researchers’ intuitions about power in psychological research. Psychological Science, 27(8), 1069–1077.

Banks, G. C., & McDaniel, M. A. (2011). The kryptonite of evidence-based I–O psychology. Industrial and Organizational Psychology, 4(1), 40–44.

Banks, G. C., & O’Boyle, E. H. Jr. (2013). Why we need industrial-organizational psychology to fix industrial-organizational psychology. Industrial and Organizational Psychology: Perspectives on Science and Practice, 6, 284–287.

Banks, G. C., O’Boyle, E. H. Jr., Pollack, J. M., White, C. D., Batchelor, J. H., Whelpley, C. E., … Adkins, C. L. (2016a). Questions about questionable research practices in the field of management: A guest commentary. Journal of Management, 42(1), 5–20.

Banks, G. C., Rogelberg, S. G., Woznyj, H. M., Landis, R. S., & Rupp, D. E. (2016b). Editorial: Evidence on questionable research practices: The good, the bad, and the ugly. Journal of Business and Psychology, 31(3), 323–338.

Bedeian, A. G., Taylor, S. G., & Miller, A. N. (2010). Management science on the credibility bubble: Cardinal sins and various misdemeanors. Academy of Management Learning & Education, 9, 715–725.

Bergh, D. D., Sharp, B. M., Aguinis, H., & Li, M. (2017). Is there a credibility crisis in strategic management research? Evidence on the reproducibility of study findings. Strategic Organization, 15(3), 423–436.

Bosco, F. A., Aguinis, H., Field, J. G., Pierce, C. A., & Dalton, D. R. (2016). HARKing’s threat to organizational research: Evidence from primary and meta-analytic sources. Personnel Psychology, 69(3), 709–750.

Brandt, M. J., Ijzerman, H., Dijksterhuis, A., Farach, F. J., Geller, J., Giner-Sorolla, R., … Van’t Veer, A. (2014). The replication recipe: What makes for a convincing replication? Journal of Experimental Social Psychology, 50, 217–224.

Brown, N. J. L., Sokal, A. D., & Friedman, H. L. (2014a). Positive psychology and romantic scientism. American Psychologist, 69(6), 636–637.

Brown, N. J. L., Sokal, A. D., & Friedman, H. L. (2014b). The persistence of wishful thinking. American Psychologist, 69(6), 629–632.

Camerer, C. F., Dreber, A., Holzmeister, F., Ho, T. H., Huber, J., Johannesson, M., … Altmejd, A. (2018). Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015. Nature Human Behaviour, 2(9), 637.

Chambers, C. D., Feredoes, E., Muthukumaraswamy, S. D., & Etchells, P. (2014). Instead of ‘playing the game’ it is time to change the rules: Registered Reports at AIMS Neuroscience and beyond. AIMS Neuroscience, 1(1), 4–17.

Coetzee, M. (2018). South African Journal of Industrial Psychology: Annual editorial overview 2018. SA Journal of Industrial Psychology, 44(1), 3.

Coetzee, M., & Van Zyl, L. E. (2013). Advancing research in industrial and organisational psychology: A brief overview of 2013. SA Journal of Industrial Psychology, 39(1), 1–4.

Coetzee, M., & Van Zyl, L. E. (2014). A review of a decade’s scholarly publications (2004–2013) in the South African Journal of Industrial Psychology. SA Journal of Industrial Psychology, 40(1), 1–16.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.

Conte, J. M., & Landy, F. J. (2018). Work in the 21st century: An introduction to industrial and organizational psychology. London: Wiley Publications.

De Boeck, P., & Jeon, M. (2018). Perceived crisis and reforms: Issues, explanations, and remedies. Psychological Bulletin, 144(7), 757–777.

de Groot, A. D. (2014). The meaning of ‘significance’ for different types of research [translated and annotated by Eric-Jan Wagenmakers, Denny Borsboom, Josine Verhagen, Rogier Kievit, Marjan Bakker, Angelique Cramer, Dora Matzke, Don Mellenbergh, and Han LJ van der Maas]. Acta Psychologica, 148, 188–194.

Dimitrov, K. (2014). Geert Hofstede et al.’s set of national cultural dimensions: Popularity and criticisms. Economic Alternatives, 1(2), 30–60.

Dobele, A. R. (2015). Assessing the quality of feedback in the peer-review process. Higher Education Research & Development, 34(5), 853–868.

Doyen, S., Klein, O., Pichon, C. L., & Cleeremans, A. (2012). Behavioral priming: It’s all in the mind, but whose mind?. PLoS One, 7(1), e29081.

Earp, B. D., & Trafimow, D. (2015). Replication, falsification, and the crisis of confidence in social psychology. Frontiers in Psychology, 6, 621.

Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191

Feeney, J. R. (2018). Robust science: A review of journal practices in Industrial-Organizational Psychology. Industrial and Organizational Psychology, 11(1), 48–54.

Field, J. G., Baker, C. A., Bosco, F. A., McDaniel, M. A., & Kepes, S. (2016, April). The extent of p-hacking in I-O psychology. Paper presented at the 31st Annual Conference of the Society for Industrial and Organizational Psychology, Anaheim, CA.

Fraley, R. C., & Vazire, S. (2014). The N-pact factor: Evaluating the quality of empirical journals with respect to sample size and statistical power. PLoS One, 9(10), e109019.

Frankenhuis, W. E., & Nettle, D. (2018). Open science is liberating and can foster creativity. Perspectives on Psychological Science, 13(4), 439–447.

Giner-Sorolla, R. (2012). Science or art? How aesthetic standards grease the way through the publication bottleneck but undermine science. Perspectives on Psychological Science, 7, 562–571. https://doi.org/10.1177/1745691612457576

Grand, J. A., Rogelberg, S. G., Allen, T. D., Landis, R. S., Reynolds, D. H., Scott, J. C., … Truxillo, D. M. (2018). A systems-based approach to fostering robust science in industrial-organizational psychology. Industrial and Organizational Psychology, 11(1), 4–42.

Guo, Y., & Pandis, N. (2015). Sample-size calculation for repeated-measures and longitudinal studies. American Journal of Orthodontics and Dentofacial Orthopedics, 147(1), 146–149.

Hardwicke, T. E., Mathur, M. B., MacDonald, K. E., Nilsonne, G., Banks, G. C., … Frank, M. C. (2018, March 19). Data availability, reusability, and analytic reproducibility: Evaluating the impact of a mandatory open data policy at the journal Cognition. Retrieved from https://osf.io/preprints/bitss/39cfb/

Hubbard, R., & Vetter, D. E. (1996). An empirical comparison of published replication research in accounting, economics, finance, management, and marketing. Journal of Business Research, 35(2), 153–164.

Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124. https://doi.org/10.1371/journal.pmed.0020124

Ioannidis, J. P. A. (2012). Why science is not necessarily self-correcting. Perspectives on Psychological Science, 7, 645–654. https://doi.org/10.1177/1745691612464056

John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23(5), 524–532.

Kepes, S., & McDaniel, M. A. (2013). How trustworthy is the scientific literature in industrial and organizational psychology? Industrial and Organizational Psychology, 6(3), 252–268. https://doi.org/10.1111/iops.12045

Kepes, S., Banks, G. C., & Oh, I. S. (2014). Avoiding bias in publication bias research: The value of ‘null’ findings. Journal of Business and Psychology, 29(2), 183–203.

Kepes, S., Banks, G. C., McDaniel, M., & Whetzel, D. L. (2012). Publication bias in the organizational sciences. Organizational Research Methods, 15(4), 624–662.

Kidwell, M. C., Lazarević, L. B., Baranski, E., Hardwicke, T. E., Piechowski, S., Falkenberg, L. S., … Errington, T. M. (2016). Badges to acknowledge open practices: A simple, low-cost, effective method for increasing transparency. PLoS Biology, 14(5), e1002456.

Klein, O., Hardwicke, T. E., Aust, F., Breuer, J., Danielsson, H., Mohr, A. H., … Frank, M. C. (2018). A practical guide for transparency in psychological science. Collabra: Psychology, 4(1), 20. https://doi.org/10.1525/collabra.158

Kriegeskorte, N. (2012). Open evaluation: A vision for entirely transparent post-publication peer review and rating for science. Frontiers in Computational Neuroscience, 6, 79.

Kühberger, A., Fritz, A., & Scherndl, T. (2014). Publication bias in psychology: A diagnosis based on the correlation between effect size and sample size. PLoS One, 9(9), e105825.

Lehrer, J. (2010). The truth wears off. The New Yorker, 86, 53–57.

Makel, M. C., Plucker, J. A., & Hegarty, B. (2012). Replications in psychology research: How often do they really occur? Perspectives on Psychological Science, 7(6), 537–542. https://doi.org/10.1177/1745691612460688

Martin, G. N., & Clarke, R. M. (2017). Are psychology journals anti-replication? A snapshot of editorial practices. Frontiers in Psychology, 8, 523.

Mazzola, J. J., & Deuling, J. K. (2013). Forgetting what we learned as graduate students: HARKing and selective outcome reporting in I-O journal articles. Industrial and Organizational Psychology: Perspectives on Science and Practice, 6(3), 279–284. https://doi.org/10.1111/iops.12049

McDaniel, M. A., & Whetzel, D. L. (2005). Situational judgment test research: Informing the debate on practical intelligence theory. Intelligence, 33(5), 515–525.

McDaniel, M. A., Kepes, S., Hartman, N. S., & List, S. K. (2017, April). Questionable research practices among researchers in top management programs. Paper presented at the 32nd Annual Conference of the Society for Industrial and Organizational Psychology. Orlando, FL.

Miller, C. C. (2006). Peer review in the organizational and management sciences: Prevalence and effects of reviewer hostility, bias, and dissensus. Academy Management Journal, 49(3), 425–431.

Mudrak, J., Zabrodska, K., Kveton, P., Jelinek, M., Blatny, M., Solcova, I., & Machovcova, K. (2018). Occupational well-being among university faculty: A job demands-resources model. Research in Higher Education, 59(3), 325–348.

Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., … Contestabile, M. (2015). Promoting an open research culture. Science, 348(6242), 1422–1425.

Nuijten, M. B., Hartgerink, C. H. J., Van Assen, M. A. L. M., Epskamp, S., & Wicherts, J. M. (2016). The prevalence of statistical reporting errors in psychology (1985–2013). Behavior Research Methods, 48(4), 1205–1226.

Nuijten, M. B. (2018). Practical tools and strategies for researchers to increase replicability. Developmental Medicine & Child Neurology, 61(5), 535–539. https://doi.org/10.1111/dmcn.14054

O’Boyle, E. H. Jr., Banks, G. C., & Gonzalez-Mule, E. (2017). The chrysalis effect: How ugly initial results metamorphosize into beautiful articles. Journal of Management, 43(2), 376–399.

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), 943–954.

Pashler, H., & Wagenmakers, E. J. (2012). Editors’ introduction to the special section on replicability in psychological science: A crisis of confidence? Perspectives on Psychological Science, 7(6), 528–530.

Perugini, M., Gallucci, M., & Costantini, G. (2018). A practical primer to power analysis for simple experimental designs. International Review of Social Psychology, 31(1), 157–180.

Ritchie, S. J., Wiseman, R., & French, C. C. (2012). Failing the future: Three unsuccessful attempts to replicate Bem’s ‘Retroactive Facilitation of Recall’ Effect. PLoS One, 7(3), e33423.

Rodriguez, A., Reise, S. P., & Haviland, M. G. (2016). Evaluating bifactor models: Calculating and interpreting statistical indices. Psychological Methods, 21(2), 137.

Roll, L., Van Zyl, L. E., & Griep, Y. (in press). Brief positive psychological interventions within multi-cultural organisational contexts: A systematic literature review. In L. E. Van Zyl & S. Rothmann (Eds.), Theoretical approaches to multi-cultural positive psychological interventions. Cham: Springer.

Rouder, J. N., & Morey, R. D. (2011). A Bayes factor meta-analysis of Bem’s ESP claim. Psychonomic Bulletin & Review, 18(4), 682–689.

Shrout, P. E., & Rodgers, J. L. (2018). Psychology, science, and knowledge construction: Broadening perspectives from the replication crisis. Annual Review of Psychology, 69, 487–510.

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366.

Simons, D. J. (2014). The value of direct replication. Perspectives on Psychological Science, 9(1), 76–80. https://doi.org/10.1177/1745691613514755

Smillie, S. (2014, October 13). Journal ‘fails the test’. Retrieved from https://www.timeslive.co.za/news/south-africa/2014-10-13-journal-fails-the-test/.

Stevens, J. R. (2017). Replicability and reproducibility in comparative psychology. Frontiers in Psychology, 8, 862.

Thomas, A. (2018, September 05). African academics are being caught in the predatory journal trap. Retrieved from http://theconversation.com/african-academics-are-being-caught-in-the-predatory-journal-trap-48473.

Tomaselli, K. G. (2018). Perverse incentives and the political economy of South African academic journal publishing. South African Journal of Science, 114(11–12), 1–6.

Van de Schoot, R., Lugtig, P., & Hox, J. (2012). A checklist for testing measurement invariance. European Journal of Developmental Psychology, 9(4), 486–492.

Van Zyl, L. E., Efendic, E., Rothmann, S., & Shankland, R. (in press). Best practice guidelines for positive psychological research designs. In L. E. Van Zyl & S. Rothmann (Eds.), Positive psychological intervention design and protocols for multi-cultural contexts. Cham: Springer.

Van’t Veer, A. E., & Giner-Sorolla, R. (2016). Pre-registration in social psychology – A discussion and suggested template. Journal of Experimental Social Psychology, 67, 2–12.

Westland, J. C. (2010). Lower bounds on sample size in structural equation modeling. Electronic Commerce Research and Applications, 9(6), 476–487.

Wicherts, J. M., Bakker, M., & Molenaar, D. (2011). Willingness to share research data is related to the strength of the evidence and the quality of reporting of statistical results. PLoS One, 6(11), e26828.

Wong, P. T. P., & Roy, S. (2017). Critique of positive psychology and positive interventions. In N. J. L. Brown, T. Lomas, & F. J. Eiroa-Orosa (Eds.), The Routledge international handbook of critical positive psychology. London: Routledge.

Woodiwiss, A. J. (2012). Publication subsidies: Challenges and dilemmas facing South African researchers. Cardiovascular Journal of Africa, 23(8), 421.

Footnotes

1. For instance, 97% of articles in I-O psychology journals rejected the null hypothesis; the ‘reward structure’ of journals leads to more value being placed on supported hypotheses; I-O researchers manipulate data to get model fit and to support their claims; in attempts to validate consulting products like psychometric tools, or interventions, I-O practitioners/consultants may also manipulate data to show that their products ‘worked well’.

2. For instance, if one analytical methodology does not provide the desired results, researchers are free to employ others that could provide support for their ideas; researchers stop data collection early or collect more data ex post facto; drop outliers; and abandon or change hypotheses to match the results.

3. For instance, editors and reviewers encourage HARKing, and changing hypotheses based on the data; editorial policies result in or encourage publication bias.

4. In their study, Martin and Clarke (2017) found that only one I-O psychology journal accepted null results and replications.

5. HARKing refers to the practice whereby researchers hypothesise after the results are known.

6. For example, in 2014, some academics at higher education institutions in South Africa published in predatory journals, such as the Mediterranean Journal of Social Sciences (Smillie, 2014; Thomas, 2018). Some of these academics have previously published manuscripts in SAJIP and its sister journal the SAHRM.

7. For an extensive list of recommendations, kindly consult Shrout and Rodgers (2018).

8. These recommendations are based on current trends noticed within the SAJIP.

9. For illustrative purposes, Article P7 (see Table 1) reported a sample size of 300. About 42 observed variables were reported, loading onto six first-order latent variables. The author reported a p-value of 0.01 for the structural models. Assuming that the author employed the minimum conventionally acceptable power level of 0.80 (i.e. an 80% probability of detecting a true effect), based on Westland’s (2010) formula they would need a minimum of 400 participants to accurately estimate model fit, and 2171 to determine a specific effect.

10. For a comprehensive list of repositories see the ‘How to Share?’ section reported in Klein et al. (2018).