key: cord-0937546-e0u5cvya
authors: Garcia-Garzon, Eduardo; Angulo-Brunet, Ariadna; Lecuona, Oscar; Barrada, Juan Ramón; Corradi, Guido
title: Exploring COVID-19 research credibility among Spanish scientists
date: 2022-02-28
journal: Curr Psychol
DOI: 10.1007/s12144-022-02797-6
sha: 7b17de3ec5d8836f3617d62a45cc35ac9a523527
doc_id: 937546
cord_uid: e0u5cvya

Amidst a worldwide vaccination campaign, trust in science plays a significant role when addressing the COVID-19 pandemic. Given current concerns regarding research standards, we were interested in how Spanish scholars perceived COVID-19 research and the extent to which questionable research practices (QRPs) and potentially problematic academic incentives are commonplace. We asked researchers to evaluate the expected quality of their own COVID-19 projects and of their peers' research, and compared these assessments with those from scholars not involved in COVID-19 research. We investigated self-admission and estimated rates of questionable research practices and attitudes towards the current research status. Responses from 131 researchers suggested that COVID-19 evaluations followed partisan lines, with scholars being more pessimistic about their colleagues' research than about their own. Additionally, researchers not involved in COVID-19 projects were more negative than their participating peers. These differences were particularly notable for areas such as the expected theoretical foundations or the overall quality of the research, among others. Most Spanish scholars expected questionable research practices and inadequate incentives to be widespread. In these two aspects, researchers tended to agree regardless of their involvement in COVID-19 research. We provide specific recommendations for improving future meta-science studies, such as redefining QRPs as inadequate research practices (IRPs). This change could help avoid key controversies regarding the definition of QRPs while highlighting their detrimental impact. Lastly, we join previous calls to improve transparency and academic career incentives as a cornerstone for generating trust in science.

SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s12144-022-02797-6.

The scientific community's response to the Coronavirus Disease 2019 (COVID-19) has been, to put it mildly, overwhelming. For example, over four million entries for "COVID-19 research" are available in Google Scholar, with more than 60,000 related preprints published in portals such as PsyArXiv or medRxiv (November 2021; Dimensions Database, 2021). However, as scientific production swiftly grew during the past two years, many academics issued calls for caution, stressing that faster publication timelines should never compromise research quality (London & Kimmelman, 2020; Nieto et al., 2020). A direct consequence of publishing unreliable, low-quality research is that trust in science could be severely undermined. Trust in research has been related to compliance with COVID-19 measures (Plohl & Musil, 2020) and to positive behaviors related to COVID-19 health education (Dohle et al., 2020; Sailer et al., 2021). Accordingly, trust in science plays a critical role when addressing not only the COVID-19 pandemic but also the ongoing worldwide vaccination campaign. Ultimately, the loss of trust in science could result in the loss of human lives and billions of euros (Ioannidis, 2020).
The inability of many scientific results to replicate has led many scholars to become highly skeptical of published results, resulting in the so-called "replicability" or "confidence crisis" in science (e.g., Open Science Collaboration, 2015; for a review, see Garcia-Garzon et al., 2018). Two factors have played a significant role in this crisis: questionable research practices (QRPs) and inadequate academic career incentives. Today, the term QRPs is used as an umbrella to cover all grey-area decisions that could potentially affect the credibility of a given set of results. However, a subtle but relevant distinction between different QRP definitions is whether they constitute intentional behaviors or not. For example, Gerrits et al. (2019) defined QRPs as "to report, either intentionally or unintentionally, conclusions or messages that may lead to incorrect inferences and that do not accurately reflect the objectives, the methodology, or the results of the study" (p. 2). In contrast, Banks et al. (2016) suggested that QRPs represent "design, analytic, or reporting practices that have been questioned because of the potential for the practice to be employed with the purpose of presenting biased evidence in favor of an assertion" (p. 7). From our point of view, the consideration of researchers' intentions hampers the study of QRPs for two reasons. First, it is time to acknowledge that most QRPs (e.g., excluding data after observing their effect on the analysis) are not questionable but faulty practices. By labeling these practices as 'questionable', we may implicitly accept that, under some circumstances, they could be acceptable (e.g., in the case of non-intentional errors). However, we maintain that such practices should generally be avoided, whether they reflect intentional behavior, carelessness, or sheer ignorance. Second, while different ethical considerations may arise from equivalent QRPs depending on the specific research context (Gerrits et al., 2019; Sacco & Bruton, 2018; Laraway et al., 2019), in most cases we can only observe whether QRPs are present or not, and not their underlying motivation. While most QRP-related research has focused on their prevalence, understanding researchers' reasons for engaging in QRPs could be relevant, for instance, to understand how researchers perceive these practices. For example, we already know that researchers expect colleagues to be more likely to engage in QRPs than themselves (Banks et al., 2016), which could be a barrier to consider when trying to reduce their presence in a research field. The extent to which QRPs or inadequate incentives could affect COVID-19 research is unknown. So far, QRPs seem to be present worldwide and to occur at every academic career level (Banks et al., 2016). Given the time pressure to publish COVID-19 results, some authors lacked the necessary time to ensure standard research quality. For example, when reviewing the literature available on the effect of COVID-19 measures on mental health, Nieto et al. (2020) reported that some worrisome practices were commonplace: the abuse of internet-based, unrepresentative samples, lack of evidence regarding the reliability and validity of the measurement instruments applied, and low compliance with open science practices.
Similar concerns regarding methodological quality, faster review processes, and shortened acceptance times have been expressed for highly disseminated articles (Khatter et al., 2021) and for COVID-19 related articles compared with non-COVID-19 articles published in the same journal (Jung et al., 2021). Therefore, it could be questioned whether previous concerns regarding research quality might also apply to COVID-19 research. We aimed to understand how scholars perceived COVID-19 research and to connect such perceptions with QRPs and problematic incentives. Our study is fundamentally exploratory, as we aim to provide initial evidence regarding the relationship between the perceived quality of COVID-19 research, QRPs, and attitudes towards the academic career. In the end, we aimed not to test any specific hypotheses but to answer six main research questions (RQs). RQ1: How have Spanish researchers evaluated COVID-19 projects in several key areas (e.g., expected quality)? RQ2: How could involvement in COVID-19 research (assessing one's own versus peers' research) influence such evaluations? RQ3: Do Spanish scholars expect QRPs to be present, and to what extent do they admit to engaging in them? RQ4: Do scholars involved in COVID-19 research differ from non-involved ones in their perception of QRP prevalence and in their QRP self-admission rates? RQ5: What are Spanish researchers' attitudes towards the current research status (i.e., the presence of inadequate research incentives)? RQ6: Do scholars involved in COVID-19 research differ in their attitudes towards the current research status?

In this study, we focus on the Spanish research community. Our interest in this country is threefold. Firstly, Spanish academia is a system transitioning towards an empirically oriented research culture (Fernández-Quijada & Masip, 2013; Rodríguez-Gómez & Goyanes, 2020), illustrative of several similar research contexts (e.g., Italy). In this sense, the Spanish and Italian research systems share many common characteristics (e.g., closed research job markets that favor local researchers) not found in Anglo-Saxon academic contexts (Seeber & Mampey, 2021). Secondly, Spanish researchers could be particularly susceptible to QRPs and problematic incentives due to the adverse effects of the 2008 economic crisis on Spanish academia: increased job instability, unemployment, and the exacerbation of a publish-or-perish culture (Rodríguez-Gómez & Goyanes, 2020). Thirdly, Spanish researchers might show a high sensitivity to academic rewards (as illustrated by Spain being the second most efficient country in research spending; Nature, 2020). As such, some controversial practices could be widespread, with certain research areas already presenting high self-citation rates (Fernández-Quijada & Masip, 2013) or methodological issues (Martínez-Nicolás & Saperas-Lapiedra, 2016). Ultimately, little information is available about the effect of QRPs or inadequate incentives in the Spanish case, a gap this study aims to address.

We recruited an online convenience sample in May 2020 using university mailing lists, professional research associations, social media, and personal contacts. We offered no compensation to participants. One hundred thirty-one participants (53% of all participants who initiated the survey) completed the survey; their responses were considered valid and analyzed. We only collected information from researchers working in some capacity in the Spanish research system. We excluded researchers working abroad to control for research system and research culture.
The sample was mostly male (57%) and composed of early-career researchers (age: M = 36.8, SD = 9.6, range = 22-69), mostly from public universities (66%). Researchers reported being experienced in research (number of published articles: M = 15.3, SD = 18.5), with over 75% having obtained a Ph.D., 85% having been involved in at least two publicly funded projects, and a majority having experience collaborating with JCR-indexed journals (61% acting as reviewers or editors; for the complete sample description, see Table S1A, supplementary material 1). While a large part of the sample was composed of social science researchers (65%), health science (16%), science (7%), arts and humanities (5%), and engineering (4%) were also represented; 3% of our participants did not report their research area.

The first part of the questionnaire included sociodemographic information. Afterward, the questionnaire was presented as follows: (a) if participants affirmed being involved in COVID-19 research, they evaluated COVID-19 research twice, firstly assessing their own COVID-19 projects and secondly evaluating peers' COVID-19 research; these participants also declared the nature of their collaboration in the project and whether it was related to their usual area of study; (b) if researchers were not involved in COVID-19 research, they only assessed others' COVID-19 related research. Next, all participants evaluated the expected prevalence of different QRPs and were asked whether they had engaged in any of them. Lastly, all participants responded to a measure of attitudes towards the current status of their scientific areas.

We included questions regarding participant status in academia: employing institution, academic position held, national research certifications obtained from the competent Spanish board of certifications (ANECA), research experience (number of publicly funded research projects in which they had participated), research field, previous collaborations with JCR-indexed journals, and number of JCR-indexed publications. We also collected age and gender.

We developed two versions of a survey to assess how participants perceived COVID-19 research (see Table S1B, supplementary material 1). We evaluated 14 relevant aspects of COVID-19 research identified by the research team. All items were scored on a 5-point Likert scale (1 = Strongly disagree, 5 = Strongly agree). Questions were worded to evaluate either a COVID-19 project in which the respondent participated (e.g., "I have felt supported by my institution to conduct this research") or peers' COVID-19 projects (e.g., "Research institutions should support COVID-19 related research"). Some questions (e.g., "This situation represents a unique research opportunity") were similar in both versions of the scale. Items regarding obtaining fast publications were reversed, so that higher scores reflect better expectations of the quality of COVID-19 research. Additional analyses on the scale (i.e., dimensionality assessment and factor analyses) are presented in supplementary material 2, with descriptions of the items in English and Spanish provided in supplementary material 3.

We based our QRP assessment on John et al. (2012), which evaluates 10 different QRPs (e.g., "in a paper, failing to report all of the study's dependent measures"). The first author and an independent researcher translated and back-translated this questionnaire into Spanish.
We modified some QRP descriptions to adapt the items to the most common use cases (i.e., obtaining significant results) or to emphasize their questionable nature (e.g., we presented the item "rounding off a p-value to show that the results are more significant than observed [e.g., rounding a p-value of 0.0451 to 0.04]" instead of "rounding off a p-value [e.g., reporting that a p-value of 0.054 is less than 0.05]"). We removed the QRP "falsifying data", as this act constitutes scientific fraud and not a questionable practice. We additionally measured two new potentially problematic behaviors related to the specific context of COVID-19 research: (a) "To conduct a research project because I believe it will count as a research merit (beyond its intrinsic scientific interest)". This behavior was added to assess the extent to which researchers saw COVID-19 mainly as an opportunity to boost their curricula; (b) "To modify the original analysis plan to find a significant result". We included this new item to assess p-hacking with an explicit reference to finding significant results. Participants first evaluated the expected prevalence of each QRP (i.e., the percentage of peers they expected to engage in such practices) and then reported whether they had ever engaged in each QRP or not.

We developed a six-item tool for assessing researchers' attitudes towards the current state of their respective academic fields. The research team developed these items based on current surveys and researchers' general expectations. Items covered six issues related to the impact of inadequate incentive structures, beliefs in research credibility, and research evaluation (e.g., "I believe that the academic career excessively rewards those researchers with a larger number of publications"). All items were measured using a 5-point Likert scale (1 = Strongly disagree, 5 = Strongly agree). Further analyses on the measure (i.e., dimensionality assessment and factor analyses) are introduced in supplementary material 2, with descriptions of the items in English and Spanish available in supplementary material 3.

Data were collected using Google Forms. Participants consented to participate on the first page of the questionnaire. All analyses were performed in R 4.0.3 (R Core Team, 2021). We conducted our main analyses using the lme4 package (Bates et al., 2015) and produced graphical displays using the ggplot2 package (Wickham, 2016). The remaining packages and versions used are reported in supplementary material 1. We conducted three primary analyses: (a) we compared COVID-19 research evaluations from researchers involved in COVID-19 research (divided by how they perceived their own project versus peers' projects) and researchers not involved in it; (b) we assessed the expected prevalence of QRPs and their self-admission rates, and compared these rates with those from alternative samples and between researchers involved in COVID-19 research or not; (c) we evaluated attitudes towards academic status, exploring differences between researchers based on their participation in COVID-19 research.

We firstly evaluated how scholars perceived COVID-19 research. We compared three main groups of evaluations: (a) evaluations by non-participating researchers of COVID-19 research, (b) evaluations by participating researchers of peers' COVID-19 research, and (c) evaluations by participating researchers of their own COVID-19 projects. We faced an incomplete crossed design, as researchers not involved in COVID-19 research could not evaluate their own (nonexistent) projects. As researchers participating in COVID-19 research responded to the measure twice, we employed mixed linear models to model within-researcher variance. For each area evaluated, we defined the research group as a predictor and the researcher as a random effect. We assessed the adequacy of using mixed models by inspecting the explained variance at the researcher level using the adjusted intraclass correlation coefficient (Nakagawa et al., 2017). We visually checked all models' main assumptions (linearity, normality of residuals, and homoscedasticity). Lastly, we employed post hoc tests to explore differences between the three groups. Post hoc tests were corrected for multiple comparisons, using the Kenward-Roger approach to compute degrees of freedom and p-values corrected with the multivariate t-distribution adjustment. This strategy is particularly suited to mixed models with few clusters (Luke, 2016).
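To make this strategy concrete, below is a minimal sketch in R of the modeling pipeline just described. It is not the authors' code (their actual scripts are available at https://osf.io/5mp67/), and the data frame and column names (dat, rating, group, researcher_id) are hypothetical placeholders:

```r
# Minimal sketch (not the authors' code): mixed linear model for one
# evaluated area, with evaluation group as fixed effect and researcher
# as random intercept. All object and column names are hypothetical.
library(lme4)        # mixed linear models (Bates et al., 2015)
library(performance) # adjusted ICC (Nakagawa et al., 2017)
library(emmeans)     # post hoc contrasts (Kenward-Roger df via pbkrtest)

# dat: one row per evaluation; COVID-19 researchers contribute two rows
# (own project, peers' projects), non-involved researchers one row.
#   rating        - 1-5 Likert score for the evaluated area
#   group         - "own", "peers_involved", or "peers_not_involved"
#   researcher_id - respondent identifier (random effect)

m_null  <- lmer(rating ~ 1 + (1 | researcher_id), data = dat)
m_group <- lmer(rating ~ group + (1 | researcher_id), data = dat)

# Model retention via fit indices, as described in the text
AIC(m_null, m_group)
BIC(m_null, m_group)

# Adjusted intraclass correlation coefficient: the share of variance
# located at the researcher level relative to the total variance
icc(m_group)

# Post hoc contrasts between the three evaluation groups:
# Kenward-Roger degrees of freedom and multivariate-t ("mvt") adjustment
emmeans(m_group, pairwise ~ group,
        lmer.df = "kenward-roger", adjust = "mvt")
```

Kenward-Roger degrees of freedom and the multivariate-t adjustment are the choices named in the text; in this sketch, emmeans delegates the former to the pbkrtest package.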
We explored whether sociodemographic variables influenced these results by repeating these analyses while controlling for all sociodemographic variables. We retained our final models by assessing explained variance and model fit indices (i.e., AIC and BIC). We secondly studied the expected prevalence of QRPs and their self-admission rates. We compared our self-admission rates with those for similar questions from the original US study by John et al. (2012) and from an Italian sample (Agnoli et al., 2017). We also compared the expected prevalence of QRPs. This strategy was intended to explore: (a) whether prevalence and self-admission rates have changed since QRPs started being studied; (b) how Spain fares when compared with a country with a similar cultural and academic environment. We further explored whether involvement in COVID-19 research was related to differences in expected QRP prevalence and QRP self-admission rates across participants. To do so, we employed a linear regression model including COVID-19 research participation (i.e., participated or not) as the predictor and each QRP as the dependent variable. In this case, we could not apply mixed models, as we compared two independent groups for each QRP separately. To study attitudes towards academic status, we first assessed which areas of concern were more common among researchers and then explored differences between researchers due to their participation in COVID-19 research. For the latter, we employed the same approach as in the QRP case.
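As a companion sketch under the same caveats (hypothetical names qrp_dat, prevalence, covid_involved; not the authors' code), each of these group comparisons reduces to a simple linear model per outcome:

```r
# Minimal sketch: group comparison for one QRP (or one attitude item).
# qrp_dat: one row per participant; column names are hypothetical.
#   prevalence     - expected % of peers engaging in this QRP
#   covid_involved - factor: involved vs. not involved in COVID-19 research
m_qrp <- lm(prevalence ~ covid_involved, data = qrp_dat)

summary(m_qrp)               # group coefficient and its t-test
confint(m_qrp)               # 95% CI for the group difference
summary(m_qrp)$adj.r.squared # explained variance, as reported in the Results
```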
Full descriptive statistics are reported in Table S1B-D, supplementary material 1, divided by research group, gender, and academic area, respectively. Overall, most researchers reported that COVID-19 research should be provided with more research resources (M = 4.1; SD = 0.9), that this crisis constitutes a unique research opportunity (M = 4.1; SD = 1.0), and that COVID-19 research will allow researchers to uncover unique aspects of other phenomena of interest (M = 4.1; SD = 1.0). On the other hand, most researchers agreed that COVID-19 projects could be seen as an opportunity for obtaining fast publications (for research that would not have been conducted otherwise; M = 2.0; SD = 1.0), that these projects will be easily published (M = 2.2; SD = 1.0), and that researchers might not have sufficient time to adequately prepare research materials (M = 2.7; SD = 1.1).

When comparing groups of evaluations, we retained as our final models those including only the evaluation type as a predictor (i.e., assessments by COVID-19 researchers of their own project or of peers' projects, or evaluations by non-participating researchers). To do so, we compared the AIC and BIC of these models against null models (i.e., including only the random participant term) and full models (i.e., additionally including all sociodemographic variables; Table 1). Noteworthy, full models controlling for sociodemographic variables did not significantly improve model fit or explained variance, nor did they change our main results (full results available in Table S1E, supplementary material 1). Thus, we retained the simpler versions including only the research group as a predictor. Our final models revealed that variance at the researcher level was sufficiently relevant to support the application of mixed linear models (i.e., the intraclass correlation coefficient [ICC] ranged between 13% for perceived institutional support and 70% for research rigor; Table 2). Results also indicated that scholars participating in COVID-19 projects judged other COVID-19 research more stringently than their own. For example, they expected their own studies to present stronger theoretical foundations (ΔM = 0.55, 95% CI [0.17, 0.93], t(72.08) = 3.48, p < .001).

In line with the previous results, we observed that QRP self-admission rates were always lower than prevalence estimations (Table 3). Regarding self-admission rates, half of the Spanish researchers (51.2%) admitted to "not reporting all dependent measures evaluated". In comparison, only 6.1% of researchers admitted engaging in optional stopping practices (i.e., "stopping collecting data earlier than planned because one found the significant results that one had been looking for"). When compared with the Italian and US samples, Spanish researchers tended to present equal or lower self-admission rates, except for "presenting an unexpected significant result as if it had been hypothesized from the start of the research" (i.e., HARKing; Spanish = 42.0%, Italian = 27.0%, US = 37.4%) and "claiming that results hold in conditions other than those tested (i.e., other demographic groups) when one is unsure (or one has not tested it)" (Spanish = 13.0%, Italian = 3.0%, US = 3.1%). For expected QRP prevalence, we observed that participants expected more than half of their colleagues to engage in at least six of the QRPs. The item with the highest overall prevalence was one of the newly studied potentially problematic behaviors ("conducting research for the sole purpose of publishing an article [beyond its scientific interest]", i.e., research only as academic merit; sample average = 60.6%), with the lowest again being optional stopping (sample average = 30.4%). Average prevalence estimates were similar to those of Agnoli et al. (2017), with differences lower than |10%| for all QRPs but two: "adding participants after looking to see whether results were significant" (sample average: Spanish = 33.8%, Italian = 63.2%) and "in a paper, selectively reporting studies that worked" (sample average: Spanish = 55.0%, Italian = 65.8%). Results suggested that expected prevalence and self-admission rates of QRPs were similar regardless of researcher involvement in COVID-19 research. Accordingly, no significant differences between groups were observed (Table 4). Explained variance revealed that models including research participation as a predictor did not explain any relevant variance for any considered QRP. Thus, we failed to find evidence that involvement in COVID-19 research modified either the expected occurrence of or the current engagement in QRPs.
Table 3. Expected prevalence and self-admission rates for QRPs and novel questionable behaviors, divided by COVID-19 project involvement, together with results from Agnoli et al. (2017). Involved = researchers involved in COVID-19 projects; Not involved = researchers not involved in COVID-19 projects. Prevalence = expected prevalence. QRPs with self-admission rates or prevalence estimates over 50% are bolded. Newly included potentially questionable behaviors are presented in italics.

Results suggested that most researchers were concerned about the status of their research area (Fig. 2; full descriptive statistics in Table S1G, supplementary material 1). In detail, most participants agreed that research evaluation should be based on quality rather than journal impact factor (M = 4.6; SD = 0.8), that inadequate incentives are present in the academic career (M = 4.5; SD = 0.8), and that research volume is excessively rewarded (M = 4.4; SD = 1.0). However, researchers were less convinced that research results present a credibility issue in their research area (M = 2.7; SD = 1.0). We compared researchers participating in COVID-19 projects against their peers not involved in these projects. The only significant difference was observed in the belief that COVID-19 projects are necessary in times of crisis, with involved researchers scoring higher than non-involved researchers (β = -0.62, 95% CI [-1.00, -0.24], t(127) = -3.24, p = .002; Table 5). However, explained variance revealed that the difference was of little relevance even in this particular case (adjusted R² = 0.069). Thus, overall, it is safe to conclude that we failed to observe evidence of researchers presenting different attitudes towards the current state of research based on their participation in COVID-19 projects.

The COVID-19 pandemic represents a historical challenge to our societies. The scientific community has responded to this situation with massive research production to understand this virus and the impact of the measures taken for its containment. In this context, trust in science has become a crucial factor for policymaking and compliance with adopted measures. However, this trust could be compromised if research is perceived to be unreliable or questionable. This study aimed to understand how Spanish scholars evaluated COVID-19 research contributions and how they assessed the presence of QRPs and inadequate incentives in their academic environments. Overall, Spanish researchers were dubious of COVID-19 research (answering our first research question), particularly in key areas such as theoretical foundations or expected research quality. This result aligns with recent cautionary notes on COVID-19 projects (Ioannidis, 2020; London & Kimmelman, 2020; Nieto et al., 2020; Plohl & Musil, 2020; Soltani & Patini, 2020). Beyond that, researchers' opinions mostly followed partisan lines. In other words, and resolving our second research question, we observed a pot-calling-the-kettle-black effect: researchers involved in COVID-19 projects seemed to be more concerned with other COVID-19 projects than with their own, and researchers not involved in COVID-19 projects at all were more skeptical of COVID-19 research than their involved peers (with expected rigor and ethical considerations being the only exceptions).
These results might be the product of several biases. A plausible possibility is a mixture of in-group/out-group biases (e.g., De Dreu et al., 2016), where individuals tend to underestimate out-group diversity (e.g., stereotyping all or most COVID-19 research as untrustworthy) while also overestimating in-group diversity (e.g., overestimating quality differences among COVID-19 projects). Another possibility is social desirability (Tourangeau & Yan, 2007; Yan, 2021). Other biases that could explain these effects include overplacement (i.e., the exaggerated belief that one is better than others; Moore & Schatz, 2017; Radzevick & Moore, 2011). In addition, there are other possible biases associated with researchers' overconfidence, such as groupthink or strategy calcification (Mumford & Maynard, 2020). It is debatable whether these effects could be more prominent in Spanish researchers due to economic and social struggles in the Spanish research system. However, further evidence is needed to map the impact of these potential cognitive biases and of cultural research environments on these matters.

To answer our third research question, we observed that our QRP rates were similar to those previously reported (Agnoli et al., 2017; Fiedler & Schwarz, 2016). This situation is nothing short of discouraging, as expected prevalence and self-admitted QRPs might not have changed since they were brought to the research community's attention a decade ago (John et al., 2012). Noteworthy, one of the new potentially problematic behaviors studied ("conducting research for the sole reason of obtaining academic merit [beyond its scientific interest]") obtained the highest expected prevalence of all questionable practices. This result implies that researchers expect more than half of research projects to be conducted not to answer a meaningful scientific question, or even out of innate curiosity, but solely as a means for career advancement. However, and as a reviewer highlighted, this conduct might not be problematic if the ensuing research is sound and transparent. In the specific context of COVID-19 research, we considered it relevant to assess whether individuals could be taking advantage of the situation to conduct research (and obtain academic merits) that would not have been conducted otherwise. In our case, and similar to previous exploratory QRP research (Gerrits et al., 2019), we aimed to present the broadest range of possible debatable behaviors. However, we acknowledge that this scientific conduct might not be worthy of study in all research contexts. In addition, we believe that QRPs should no longer be considered black-or-white behaviors but dependent on each study's specific context and conditions (Sacco & Bruton, 2018). Unfortunately, while the presence of a QRP tells us that we should be wary of a given set of results, it says nothing about its causes. In other words, we often lack information on whether a QRP represents a deliberate attempt to present significant results or is just the result of misinformation or honest error. As such, we deem the term "questionable" unfit to reflect the true nature of these behaviors, as it opens the door to considering researchers' intentions when assessing their detrimental effects. We propose instead that the definition of QRPs be 'blind' to reasons and motivations, and that they be renamed "inadequate research practices".
This new terminology would underscore the negative effect of these practices, even if the researcher engages in them unwillingly (i.e., honest mistakes). In other words, by moving from "questionable" to "inadequate", we hope that future research would focus on the message that such practices should be avoided at all costs and that they pose a threat to the robustness of any given research field. This proposal does not imply that knowing the underlying reasons for inadequate research practices is irrelevant. To avoid their presence in future research, we must know why they appear in the first place. It is only that the differentiation between occurrence and motivation should be kept clear in future research, as both provide different key information to study and prevent QRPs. As observed with the negative evaluations of others' COVID-19 projects, researchers consider others to be more likely to engage in QRPs (a result previously reported by Banks et al., 2016). However, participating in a COVID-19 project was not a factor in either expected prevalence or self-admission rates (resolving our fourth research question and the first part of our fifth). To answer our last two research questions, most researchers (regardless of their involvement in COVID-19 projects) shared negative attitudes towards the current research status, such as considering inadequate incentives and excessive rewards for publication volume over quality to be commonplace. We observed a shared feeling that advancement in scientific careers is today based on problematic aspects of the system, where some peers could potentially engage in QRPs as a means of ensuring publications and job security. Moreover, we are just starting to understand how this structure could impair mental health, particularly among young researchers (Sorrel et al., 2020). Our results stress the utility of fostering open-research policies at the national and international levels (e.g., the European Union's open science policy). However, the success of such policies can be expected to depend upon their adaptation to each specific academic structure. For example, Spanish, Italian, and US researchers share similar problems and concerns (at least regarding the presence of QRPs). However, crucial elements such as the academic job market structure, R&D investment levels, or the academic structure vary across countries (Seeber & Mampey, 2021), particularly when comparing the US and its European counterparts (UNESCO, 2021). Therefore, country-specific policies should be developed in each case to ensure that the issues mentioned above are ameliorated. For our part, we would like to join previous calls for Spanish national research agencies to change the incentive system to favor researchers engaging in robust, open, and transparent scientific efforts (Rodríguez-Gómez & Goyanes, 2020; Ruiz-Pérez & Delgado-López-Cózar, 2017). Lastly, it is essential to remark that our results do not diminish science's role in addressing the COVID-19 pandemic. However, as we can only build trust in science if sound research ensues, an attitude of safe skepticism should be embraced in all scientific endeavors. This paradigm would ensure that only reproducible, robust results are considered when designing policy actions to address this pandemic, as many have recommended before (e.g., Ruggeri et al., 2020). This research is not without limitations. Firstly, the sample size was limited.
While this data collection scheme allowed us to control for the effect of the research system on the results, we expect future research to explore other countries employing larger samples. Secondly, the results are subject to self-selection bias, as researchers concerned with the status of science might have been more likely to answer the survey. Future studies should reproduce our results employing improved sampling techniques (e.g., stratified sampling). Researchers should also be aware of the potential effect of non-response bias due to the sensitive nature of the questions. Such bias could have resulted in certain researchers (e.g., those concerned with disclosing whether they have engaged in QRPs) being less likely to be represented in our sample. Again, these results should be explored in future research with representative samples. Thirdly, we decided to slightly modify how QRPs were evaluated relative to previous studies (e.g., omitting the QRP "falsifying data") and to include two potentially problematic behaviors that, to our knowledge, had not been evaluated before. Future researchers should explore whether these additional QRPs are worthy of evaluation in alternative research contexts. Fourthly, we studied differences at the item level. However, future research should conduct proper validation studies of the scales concerning perceived quality and attitudes towards academic status. If valid and reliable, these scales could provide novel, insightful information to identify major concerns or potential improvements. In this sense, initial information regarding the measures (i.e., dimensionality and factor analyses) is provided in supplementary data 2. Lastly, given the high number of tests conducted, future research should replicate these results in individual, specific research projects.

This research highlighted how Spanish researchers perceived COVID-19 research in the face of a worldwide vaccination campaign. We observed that Spanish scholars were generally concerned about COVID-19 research, especially if they were not involved in such projects. We also found that Spanish scholars presented significant concerns about the quality of published scientific results in general (measured as expected prevalence and self-admission rates of QRPs) and about the presence of inadequate incentives. Accordingly, we cannot but stress the necessity of improving research integrity and transparency in all research.

The online version contains supplementary material available at https://doi.org/10.1007/s12144-022-02797-6.

Funding: All authors certify that they have no affiliations with or involvement in any organization or entity with any financial or non-financial interest in the subject matter or materials discussed in this manuscript.

Data availability: All data and materials are publicly available at https://osf.io/5mp67/.

Code availability: Code is publicly available at https://osf.io/5mp67/.

Ethics approval: This study was approved by the Ethics Committee at Universidad Camilo José Cela (CEI-UCJC). The procedures used in this study adhere to the tenets of the Declaration of Helsinki.

Consent to participate: Informed consent was obtained from all individual participants included in the study. Participants were informed following all legal prescriptions provided by the Ethics Committee at Universidad Camilo José Cela (CEI-UCJC). No compensation was awarded for their participation.

Conflict of interest: The authors declare no conflict of interest.
References

Agnoli et al. (2017). Questionable research practices among Italian research psychologists.
Bates et al. (2015). Fitting linear mixed-effects models using lme4.
Banks et al. (2016). Editorial: Evidence on questionable research practices: The good, the bad, and the ugly.
Social Distancing, Quarantine, and Isolation.
De Dreu et al. (2016). In-group defense, out-group aggression, and coordination failures in intergroup conflict. Proceedings of the National Academy of Sciences of the United States of America.
Dimensions Database (2021). COVID-19 report: Publications.
Dohle et al. (2020). Acceptance and adoption of protective measures during the COVID-19 pandemic: The role of trust in politics and trust in science.
Fernández-Quijada & Masip (2013). Tres décadas de investigación española en comunicación: Hacia la mayoría de edad [Three decades of Spanish communication research: Coming of age].
Fiedler & Schwarz (2016). Questionable research practices revisited.
Garcia-Garzon et al. (2018). Estudios de replicación, pre-registros y ciencia abierta en Psicología [Replication studies, preregistration, and open science in psychology].
Gerrits et al. (2019). Occurrence and nature of questionable research practices in the reporting of messages and conclusions in international scientific Health Services Research publications authored by researchers in the Netherlands.
Ioannidis (2020). Coronavirus disease 2019: The harms of exaggerated information and non-evidence-based measures.
John et al. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling.
Jung et al. (2021). Methodological quality of COVID-19 clinical research.
Khatter et al. (2021). Is rapid scientific publication also high quality? Bibliometric analysis of highly disseminated COVID-19 research papers.
Laraway et al. (2019). An overview of scientific reproducibility: Consideration of relevant issues for behavior science/analysis.
London & Kimmelman (2020). Against pandemic research exceptionalism.
Luke (2016). Evaluating significance in linear mixed-effects models in R.
Martínez-Nicolás & Saperas-Lapiedra (2016). Objetos de estudio y orientación metodológica de la reciente investigación sobre comunicación en España [Objects of study and methodological orientation of recent communication research in Spain].
Moore & Schatz (2017). The three facets of overconfidence.
Mumford & Maynard (2020). Mines in the end zone: Are there downsides to team performance?
Nakagawa et al. (2017). The coefficient of determination R² and intra-class correlation coefficient from generalized linear mixed-effects models revisited and expanded.
Nature Index (2020). German science on the world stage: Visualized.
Nieto et al. (2020). The quality of research on mental health effects of the COVID-19 pandemic: A note of caution after a systematic review.
Open Science Collaboration (2015). Estimating the reproducibility of psychological science.
Plohl & Musil (2020). Modeling compliance with COVID-19 prevention guidelines: The critical role of trust in science.
Radzevick & Moore (2011). Competing to be certain (but wrong): Market dynamics and excessive confidence in judgment.
Retraction Watch. Retracted coronavirus (COVID-19) papers.
Rodríguez-Gómez & Goyanes (2020). The commoditization of the publication culture in Spain: A cost- and time-effective model to systematize communication sciences.
Ruggeri et al. (2020). Standards for evidence in policy decision-making.
Ruiz-Pérez & Delgado-López-Cózar (2017). Spanish researchers' opinions, attitudes and practices towards open access publishing.
Sacco, Bruton, & Brown (2018). In defense of the questionable: Defining the basis of research scientists' engagement in questionable research practices.
Sailer et al. (2021). Science knowledge and trust in medicine affect individuals' behavior in pandemic crises.
Seeber & Mampey (2021). How do university systems' features affect academic inbreeding? Career rules and language requirements in France, Germany, Italy and Spain. Higher Education Quarterly.
Soltani & Patini (2020). Retracted COVID-19 articles: A side-effect of the hot race to publication.
Sorrel et al. (2020). It must have been burnout: Prevalence and related factors among Spanish PhD students.
Ethical problems in academic research.
Tourangeau & Yan (2007). Sensitive questions in surveys.
UNESCO (2021). How much does your country invest in R&D?
Wicherts et al. (2016). Degrees of freedom in planning, running, analyzing, and reporting psychological studies: A checklist to avoid p-hacking.
Wickham (2016). ggplot2: Elegant graphics for data analysis.
Yan (2021). Consequences of asking sensitive questions in surveys.