About the Author(s)


David J.F. Maree
Department of Psychology, Faculty of Humanities, University of Pretoria, Pretoria, South Africa

Citation


Maree, D.J.F. (2019). Burning the straw man: What exactly is psychological science? SA Journal of Industrial Psychology/SA Tydskrif vir Bedryfsielkunde, 45(0), a1731. https://doi.org/10.4102/sajip.v45i0.1731

Opinion Paper

Burning the straw man: What exactly is psychological science?

David J.F. Maree

Received: 22 Aug. 2019; Accepted: 02 Oct. 2019; Published: 06 Dec. 2019

Copyright: © 2019. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Problematisation: Efendic and Van Zyl (2019) argue for following open access-based principles in IO psychology in response to the recent crises in psychological research. These include, among other things, the failure to replicate empirical studies, which casts doubt on the trustworthiness of what we believe to be psychological knowledge. However, saving knowledge is not the issue at stake: focusing on transparency and compliance with standards might solve some problems, but not all.

Implications: The crisis focuses our attention on what science is, particularly in psychology and its related disciplines. Both the scientist–practitioner model of training psychologists and the quantitative–qualitative methods polarity reveal the influence of the received or positivistic view of science as characterised by quantification and measurement. Postmodern resistance to positivism feeds these polarities and conceals the true nature of psychological science.

Purpose: This article argues for a realist conception of science that sustains a variety of methods, from interpretative and constructionist approaches to measurement. In this view, however, measurement is not a defining characteristic of science but one way of finding things out, and it is finding things out that sustains science's critical process.

Recommendations: Revising our understanding of science, thus moving beyond the received view to a realist one, is crucial for managing misconceptions about what counts as knowledge and as appropriate measures when our discipline is in the crossfire. Efendic and Van Zyl's (2019) proposals therefore make sense and can be taken on board where measurement, as one of the ways of finding things out, is appropriate. However, realism supports a broader enterprise that can be called scientific because it involves a critical movement of claim and counter-claim while executing its taxonomical and explanatory tasks. Thus, the psychosocial researcher, when analysing discourse, for example, can also be regarded as a scientist.

Keywords: psychological science; realism; measurement; scientist–practitioner model; quantitative–qualitative.

Although Efendic and Van Zyl (2019) and others make a good case for focusing on open access, transparency and standards for the empirical research process – suggestions one can fully support (Shrout & Rodgers, 2018) – the replication/reproducibility crisis might be symptomatic of our understanding of what science is. For this argument, I will assume that replication and reproducibility refer to similar aspects of the process of doing science. Both support transparency, although I am aware that some distinguish between (a) replication as the expectation that others not involved in a project should be able to obtain similar findings, and (b) reproducibility as the ability to reproduce any aspect of the research process – for example, given the data, someone else should be able to recreate similar statistical results – although reproducibility is not restricted to analysis (Peng, 2015).
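To make the distinction concrete, the following is a minimal sketch of reproducibility in this narrower, analysis-focused sense. It is my own illustration rather than anything proposed by Peng (2015) or Efendic and Van Zyl (2019), and the file, column and function names are hypothetical:

import pandas as pd
from scipy import stats

def reproduce_analysis(path: str) -> float:
    """Re-run the documented analysis on the openly shared data set (hypothetical file)."""
    data = pd.read_csv(path)                            # the shared data file
    group_a = data.loc[data["group"] == "A", "score"]   # hypothetical grouping column
    group_b = data.loc[data["group"] == "B", "score"]
    result = stats.ttest_ind(group_a, group_b, equal_var=False)  # the documented test (Welch's t-test)
    return float(result.pvalue)

# Anyone re-running this script on the shared file obtains the same p-value: that is
# reproducibility of the analysis. Replication, by contrast, would require collecting
# new data under similar conditions and obtaining a similar finding.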

Replication, in this case supporting transparency, characterises a scientific endeavour. Transparency refers to the availability of sufficient information so that others can replicate and reproduce the study. It relates to the publicness of knowledge and information that can be checked and interrogated by others, and thus, it supports the ideals of an open scientific process. The publicness of science is one of its fundamental characteristics; without it, a process cannot be characterised as science.

A second implication of the replicative ability of science cuts to its heart, namely its ability to produce knowledge. Whether the knowledge is supposed to be universal or locally applicable is irrelevant. The question for science is whether others can reach the same conclusions about a phenomenon given a replication of conditions and processes. Replicability and its related concepts thus seem to be crucial for viewing a process as scientific. If one is not able to replicate a study, at least in the outcome of the process, what does this say about the scientific process? A logical conclusion would be that, if replication is a characteristic of science, then psychology and its cognate disciplines are not sciences, in the light of the absence of replication.

That psychology's status as a science is wobbly at best is a narrative fed to and supported by public opinion and the media (Fanelli, 2018; Ferguson, 2015; Jamieson, 2018; Pashler & Wagenmakers, 2012). However, Efendic and Van Zyl (2019) certainly think that psychology is a science; otherwise, they, along with others, would not have proposed a number of remedies to enable transparency and replication/reproducibility (Maxwell, Lau, & Howard, 2015). Over and above pointing out publication bias and the changing nature of the phenomenon under study (Greenfield, 2017; Schmidt & Oh, 2016), a number of these measures relate to compliance with quantitative and measurement requirements and rigour (Ferguson, 2015; Peng, 2015; Shrout & Rodgers, 2018; Świątkowski & Dompnier, 2017). This certainly creates the impression that, along with transparency and replicability, measurement and quantitative analysis characterise science (Sandelowski, Voils, & Knafl, 2009).

The replication crisis unfolded against a particular view of what we believe science is and should be. This is not to say that replication as such is a poor criterion of scientific character, a point I will discuss below. Among other things, the replication crisis requires us to examine yet again what the criteria for a science are, although these criteria are not currently at the centre of the debate and remain implicitly assumed: we think we know what a science is and should be. Psychologists are explicitly trained in research methods, usually divided into qualitative and quantitative approaches, and because of their training, they view science in a particular way. One consequence of our training, and of our views of what science is, is how we perceive the profession and science of psychology. Psychologists are usually trained within the scientist–practitioner (SP) model, with the result that they see themselves as either the one or the other. In South Africa, the existence of a research psychology category of registration is symptomatic of this particular divisive split caused and upheld by the SP training model.

The SP model was devised at a conference in Boulder, Colorado, in 1949 and became one of the leading models for trying to integrate practitioner training and research (Baker & Benjamin Jr, 2000; Chang, Lee, & Hargreaves, 2008; Jones & Mehr, 2007; Petersen, 2007; Vespia & Sauer, 2006). One should realise that the understanding of what science was at that stage was probably strongly coloured by behaviourist, empirical and positivist perceptions, but this understanding as such was not the focus of the strong motivation to integrate practitioner and scientific training and practices. In principle, integrating the two approaches is necessary for the quality and sustainability of psychology as a profession (Maddux & Riso, 2007; Spengler & Lee, 2017; Stricker, 2002). Many approaches were investigated, but no one particular strategy was found to successfully integrate science and practice (Lampropoulos, Spengler, Dixon, & Nicholas, 2002; Malott, 2018; Van Der Watt, 2016; Vespia, Sauer, & Lyddon, 2006; Wakefield & Kirk, 1996). The struggle for integration is not necessarily an indication of the lack of importance of the principle of integrating science and practice (Overholser, 2007; Spengler & Lee, 2017). As a principle, it serves as an ideal and encompasses what many believe should be true of the profession of psychology (Overholser, 2015). Along with some authors, I believe that the split is not desirable and that fundamental integration is required, and one of the main reasons integration fails us is the perception we have of what science is (Chwalisz, 2003; Lane & Corrie, 2007). Simply put, practitioners, whether of the clinical or counselling variety, do not think measurement and empirical experimentation provide adequate access, if at all, to an essentially hermeneutic and constructionist phenomenon (Corrie & Callahan, 2000; Skourteli & Apostolopoulou, 2015). Most of their training focuses on language-based methods of intervention and guidance, and as such does not resonate very well with the empiricist and quantification demands of scientific approaches prevalent in models such as evidence-based practice. Thus, the received view of science, namely one characterised by measurement and empirical justification, also incorrectly called a positivist view of science, is the major stumbling block to integration.

The same received view or positivist image of science underlies and maintains the major quantitative–qualitative methodological dichotomy in the social and psychological sciences (Johnson & Onwuegbuzie, 2004; Krantz, 1995; Lincoln, Lynham, & Guba, 2018). The received view was constructed with good intentions but also skewed what social and psychological scientists believe about their science (Morgan, 2007). Human reality is conceived of as fundamentally different from natural reality, and this conception is taken to justify qualitative methods as more appropriate than those of the received view (Prasad, 2017). On this account, measurement and empirical investigation contradict the nature of our psychosocial realities. Any association with measurement and empirical studies was labelled positivist and formed part of the received view of science that proponents of the qualitative paradigm utilised as justification for their interpretative and critical approaches (Petty, Thomson, & Stew, 2012; Ponterotto, 2005; Proctor & Capaldi, 2001). The received view or positivism became a straw man relentlessly attacked by relativists (Morgan, 2007; Persson, 2010; Shadish, 1995). This straw man, which we cannot even dare to genderise because of its pejorative nature, continues to be burned at the stake in service of postmodernist views. I have no interest in saving the poor straw person or honouring its ashes, because it does not exist.

What do we then do with measurement and quantification? Recently, Michell (1997, 2005) made a major effort, across a number of publications, to convey the message that (a) psychology suffers from a chronic pathological thought disorder with regard to measurement (Michell, 2008), and (b) measurement does not define science. Firstly, Michell (1999) provides a thorough overview of the history of measurement in psychology, which we do not have to repeat here, but it boils down to what I have claimed above: measurement defines our received view to such an extent that, even when it is clear that some concept or construct cannot be measured, we still try to operationalise, quantify, manipulate and measure it (Michell, 2012). The relativists know this: personality constructs as measured in trait personality theory do not exist. Psychologists constantly reify concepts and illegitimately try to measure them, instead of making the reality of so-called constructs and characteristics an empirical, investigative matter – attitude might not be measurable, but weight certainly is. This does not mean that some of the things we work with might not be real, contra the relativists' belief that nothing is real and that we can thus construct unboundedly.

Secondly, Michell (2003) points out the false necessity of measurement as an imperative defining what science is. For various reasons, this quantitative imperative has featured as a crucial characteristic of science throughout the history of thought and science. The ability to quantify, operationalise and measure is clearly crucial in a number of, if not all, natural sciences (Michell, 1999, 2003). The situation is not so clear in a discipline such as psychology: certain things are measurable while others are not, depending on the level of analysis.

Michell's (2005, 2013) views of measurement and its role in science are based on a realist metatheory. Firstly, a realist view of science holds reality to be independent of the mind, in contrast to constructionism and various forms of idealist and postmodern views (Bhaskar, 2008). Secondly, realism holds some non-observables to be real (Chakravartty, 2007). Does this mean that, even though constructionists maintain non-essentialism, namely that there are no real things except our constructions, some of those constructions might just be real, so that we may happily carry on measuring and categorising things like gender as male and female and personality as containing extraversion or introversion characteristics (assuming there is something like personality)? We have to understand what our metatheories say about reality, about science and about how we can know this reality, given that science is one way of accessing reality. Thus, constructionism and forms of linguistically grounded theories are correct about how we create and maintain most of our psychosocial realities (Bhaskar, 2005). They are wrong to claim that natural reality is mind-dependent, even though they are correct that we can know nothing except through our language and cognitive endeavours. However, realists avoid the epistemic fallacy by claiming that our reality is not determined by what we can know (Bhaskar, 2008).

Constructionists also underestimate the nature of psychosocial realities, and it is only realism that can sustain (a) the reality of both natural and psychosocial domains and (b) their (semi)durability (Bhaskar, 2005). The first statement claims ontological egalitarianism, namely that both the so-called domains of the real, whether natural or social, are part of the same reality (Baker, 1986; Mackay & Petocz, 2011). Yes, psychosocial realities somehow come from the same stuff responsible for gravity, and like the dinosaurs we will eventually leave traces of some of those realities. The second claim refers to our ability to study these realities scientifically. At some stage in their life cycles, natural, psychological and social realities can be studied mind-independently by someone (Mäki, 2012). Of course, the nature of the epistemic access is determined by the nature of the reality, and, as Michell said, finding things out and how they work, as the basis of science, does not dictate the method. The nature of science lies in our attempt to interrogate nature; the only proviso is – and this is what makes it science – that these attempts take place critically, that is, by making a claim about something and expecting someone, whether reality itself resisting certain probes but not others, or a scientist, to make counter-claims that we can test. Thus, science as criticism, that is, as a process of claim and counter-claim, requires whatever is appropriate to the phenomenon. If something can and should be measured, then by all means measure it, but if it should be talked to and talked about in a process of claim and counter-claim, then even dialogical, interpretative or reconstructive processes can be utilised to describe and explain realities.

Mäki (2012) proposed a relaxation of the realist requirements for the social sciences, that is, mind-independence and non-observability. We know that our psychosocial realities are mind- or concept-dependent, but as I have said above, at some stage they can be studied in what Mäki (2005, p. 246) calls a structure of theory- or science-independence. Mäki (2005, pp. 249–250) also proposes that we suspend making final decisions about psychosocial unobservables and allow them to function in our theories and empirical processes until they are shown to be real. Although these are helpful and tolerant suggestions, we need to emphasise one aspect of why realism holds unobservables to be real that cannot be relinquished: explanation is always deep, aiming at discovering why things work. Bhaskar (2008) calls this depth ontology, and the search for explanatory mechanisms distinguishes realist science from the received view. Both Bhaskar's (2008) critical realism and the received view are predicated upon the possibility of closure of natural systems. For the positivist, the construction of a closed system is required in order to capture the regularity of events in an experiment by means of precise measurement and statistical or mathematical analysis. The realist regards the regularity as indicative of a depth mechanism, the one that must be triggered in order to effect a pattern of events in the closed system. Both the critical realist, like Bhaskar (2005), and the social constructionist agree that social reality cannot be closed, and hence that patterning, measuring and experimenting cannot be applied to an essentially open reality. Both infer that psychosocial reality is language-mediated and language-accessed, but along with Mäki (2012, p. 15), I think they are wrong. The regularity conception of causality, as a characteristic of the positivist and received view, is similarly not applicable to both the natural and psychosocial domains. What is valid and applicable to reality as a whole, supported by an egalitarian ontology, is a view of science that supports depth explanation. The latter commences with the aim of science, namely to find things out, and commensurate with this aim, measurement is one way, or method, of finding things out about phenomena that can be measured (Michell, 2005, p. 287).

Efendic and Van Zyl (2019) are thus on the right track for that part of psychological science that deals with measurable phenomena. The criticism of the received view from constructionist and interpretivist psychology has impressed on us the inappropriateness of measuring psychosocial phenomena, but its umbrella stretched far too widely. The result is that we have cohorts of students who are illiterate with respect to empirical and quantitative investigation and measurement; in the end, researchers practise sloppy science to such an extent that we need to remind them of the parameters of our scientific practices, as Efendic and Van Zyl (2019) do. Although motivating the nature of science on the basis of a realist metatheory would require more space than we are allowed, suffice it to say for now, along with Michell (2004, p. 313; 2005, p. 287), that science as the endeavour to find things out does so in a critical manner appropriate to its phenomena. Thus, measure what may be measured, but engage other phenomena critically and in a fitting way. Epistemic access to our reality is provided by appropriate critical methods, but only because reality allows this critical engagement (Ferraris, 2014). Thus, the essential scientific movement is a critical one. We make claims and counter-claims about the things we want to describe and explain (Mackay & Petocz, 2011). Bhaskar (2008) calls these the taxonomical and explanatory tasks of science: finding things out involves both activities. Whenever the scientist claims something about reality, natural, psychosocial or otherwise, the ensuing debate between people and reality, and between people and people, constitutes science. Sometimes our measurements constitute a good argument, that is, they grant epistemic access to understanding something; at other times, it might be discourse analysis that enables understanding of relatively enduring psychosocial phenomena. In each instance, these phenomena might resist our interpretations, and then we have to, as scientists, start afresh.

Acknowledgements

Competing interests

The author declares that he has no financial or personal relationships that may have inappropriately influenced him in writing this article.

Author’s contributions

D.J.F.M. is the sole author of this research article.

Ethical considerations

The author confirms that ethical clearance was not required for the study.

Funding information

This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.

Data availability statement

Data sharing is not applicable to this article as no new data were created or analysed in this study.

Disclaimer

The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy or position of any affiliated agency of the author.

References

Baker, A.J. (1986). Australian realism: The systematic philosophy of John Anderson. Cambridge: Cambridge University Press.

Baker, D.B., & Benjamin, L.T., Jr. (2000). The affirmation of the scientist-practitioner: A look back at Boulder. American Psychologist, 55(2), 241–247. https://doi.org/10.1037//0003-066X.55.2.241

Bhaskar, R. (2005). The possibility of naturalism: A philosophical critique of the contemporary human sciences (3rd edn.). London: Routledge.

Bhaskar, R. (2008). A realist theory of science. London: Routledge.

Chakravartty, A. (2007). A metaphysics for scientific realism: Knowing the unobservable. Cambridge: Cambridge University Press.

Chang, K., Lee, I.-L., & Hargreaves, T.A. (2008). Scientist versus practitioner – An abridged meta-analysis of the changing role of psychologists. Counselling Psychology Quarterly, 21(3), 267–291. https://doi.org/10.1080/09515070802479859

Chwalisz, K. (2003). Evidence-based practice: A framework for twenty-first-century scientist-practitioner training. The Counseling Psychologist, 31(5), 497–528. https://doi.org/10.1177/0011000003256347

Corrie, S., & Callahan, M.M. (2000). A review of the scientist-practitioner model: Reflections on its potential contribution to counselling psychology within the context of current health care trends. British Journal of Medical Psychology, 73, 413–427. https://doi.org/10.1348/000711200160507

Efendic, E., & Van Zyl, L.E. (2019). On reproducibility and replicability: Arguing for open science practices and methodological improvements at the South African Journal of Industrial Psychology. SA Journal of Industrial Psychology, 45(0), a1607. https://doi.org/10.4102/sajip.v45i0.1607

Fanelli, D. (2018). Opinion: Is science really facing a reproducibility crisis, and do we need it to? Proceedings of the National Academy of Sciences, 115(11), 2628–2631. https://doi.org/10.1073/pnas.1708272114

Ferguson, C.J. (2015). ‘Everybody knows psychology is not a real science’: Public perceptions of psychology and how we can improve our relationship with policymakers, the scientific community, and the general public. American Psychologist, 70(6), 527–542. https://doi.org/10.1037/a0039405

Ferraris, M. (2014). Manifesto of new realism (S. De Sanctis, Trans.). Albany, NY: State University of New York Press.

Greenfield, P.M. (2017). Cultural change over time: Why replicability should not be the gold standard in psychological science. Perspectives on Psychological Science, 12(5), 762–771. https://doi.org/10.1177/1745691617707314

Jamieson, K.H. (2018). Crisis or self-correction: Rethinking media narratives about the well-being of science. Proceedings of the National Academy of Sciences, 115(11), 2620–2627. https://doi.org/10.1073/pnas.1708276114

Johnson, R.B., & Onwuegbuzie, A. (2004). Mixed methods research: A research paradigm whose time has come. Educational Researcher, 33(7), 14–26. https://doi.org/10.3102/0013189X033007014

Jones, J.L., & Mehr, S.L. (2007). Foundations and assumptions of the scientist-practitioner model. American Behavioral Scientist, 50(6), 766–771. https://doi.org/10.1177/0002764206296454

Krantz, D.L. (1995). Sustaining vs. resolving the quantitative-qualitative debate. Evaluation and Program Planning, 18(1), 89–96. https://doi.org/10.1016/0149-7189(94)00052-Y

Lampropoulos, G.K., Spengler, P.M., Dixon, D.N., & Nicholas, D.R. (2002). How psychotherapy integration can complement the scientist-practitioner model. Journal of Clinical Psychology, 58(10), 1227–1240. https://doi.org/10.1002/jclp.10108

Lane, D.A., & Corrie, S. (2007). The modern scientist-practitioner: A guide to practice in psychology. London: Routledge.

Lincoln, Y.S., Lynham, S.A., & Guba, E.G. (2018). Paradigmatic controversies, contradictions, and emerging confluences, revisited. In N.K. Denzin & Y.S. Lincoln (Eds.), The Sage handbook of qualitative research (5th edn., pp. 108–150). Thousand Oaks, CA: Sage.

Mackay, N., & Petocz, A. (2011). Realism and the state of theory in psychology. In N. Mackay & A. Petocz (Eds.), Realism and psychology: Collected essays (pp. 17–51). Boston, MA: Brill.

Maddux, R.E., & Riso, L.P. (2007). Promoting the scientist–practitioner mindset in clinical training. Journal of Contemporary Psychotherapy, 37(4), 213–220. https://doi.org/10.1007/s10879-007-9056-y

Mäki, U. (2005). Reglobalizing realism by going local, or (how) should our formulations of scientific realism be informed about the sciences? Erkenntnis, 63(2), 231–251. https://doi.org/10.1007/s10670-005-3227-6

Mäki, U. (2012). Realism and antirealism about economics. In U. Mäki (Ed.), Philosophy of economics (pp. 4–24). Oxford: Elsevier.

Malott, R.W. (2018). A model for training science-based practitioners in behavior analysis. Behavior Analysis in Practice, 11, 196–203. https://doi.org/10.1007/s40617-018-0230-3

Maxwell, S.E., Lau, M.Y., & Howard, G.S. (2015). Is psychology suffering from a replication crisis? What does ‘failure to replicate’ really mean? American Psychologist, 70(6), 487–498. https://doi.org/10.1037/a0039400

Michell, J. (1997). Quantitative science and the definition of measurement in psychology. British Journal of Psychology, 88, 355–385. https://doi.org/10.1111/j.2044-8295.1997.tb02641.x

Michell, J. (1999). Measurement in psychology: Critical history of a methodological concept. New York: Cambridge University Press.

Michell, J. (2003). The quantitative imperative: Positivism, naïve realism and the place of qualitative methods in psychology. Theory and Psychology, 13, 5–31. https://doi.org/10.1177/0959354303013001758

Michell, J. (2004). The place of qualitative research in psychology. Qualitative Research in Psychology, 1(4), 307–319. https://doi.org/10.1191/1478088704qp020oa

Michell, J. (2005). The logic of measurement: A realist overview. Measurement, 38(4), 285–294. https://doi.org/10.1016/j.measurement.2005.09.004

Michell, J. (2008). Is psychometrics pathological science? Measurement: Interdisciplinary Research and Perspectives, 6(1–2), 7–24. https://doi.org/10.1080/15366360802035489

Michell, J. (2012). ‘The constantly recurring argument’: Inferring quantity from order. Theory & Psychology, 22(3), 255–271. https://doi.org/10.1177/0959354311434656

Michell, J. (2013). Constructs, inferences, and mental measurement. New Ideas in Psychology, 31(1), 13–21. https://doi.org/10.1016/j.newideapsych.2011.02.004

Morgan, D.L. (2007). Paradigms lost and pragmatism regained. Journal of Mixed Methods Research, 1(1), 48–76. https://doi.org/10.1177/2345678906292462

Overholser, J.C. (2007). The Boulder model in academia: Struggling to integrate the science and practice of psychology. Journal of Contemporary Psychotherapy, 37(4), 205–211. https://doi.org/10.1007/s10879-007-9055-z

Overholser, J.C. (2015). Training the scientist–practitioner in the twenty-first century: A risk–benefit analysis. Counselling Psychology Quarterly, 28(3), 220–234. https://doi.org/10.1080/09515070.2015.1052779

Pashler, H., & Wagenmakers, E.-J. (2012). Editors’ introduction to the special section on replicability in Psychological Science: A crisis of confidence? Perspectives on Psychological Science, 7(6), 528–530. https://doi.org/10.1177/1745691612465253

Peng, R. (2015). The reproducibility crisis in science: A statistical counterattack. Significance, 12(3), 30–32. https://doi.org/10.1111/j.1740-9713.2015.00827.x

Persson, J. (2010). Misconceptions of positivism and five unnecessary science theoretic mistakes they bring in their train. International Journal of Nursing Studies, 47(5), 651–661. https://doi.org/10.1016/j.ijnurstu.2009.12.009

Petersen, C.A. (2007). A historical look at psychology and the scientist-practitioner model. American Behavioral Scientist, 50(6), 758–765. https://doi.org/10.1177/0002764206296453

Petty, N.J., Thomson, O.P., & Stew, G. (2012). Ready for a paradigm shift? Part 1: Introducing the philosophy of qualitative research. Manual Therapy, 17(4), 267–274. https://doi.org/10.1016/j.math.2012.03.006

Ponterotto, J.G. (2005). Qualitative research in counseling psychology: A primer on research paradigms and philosophy of science. Journal of Counseling Psychology, 52(2), 126–136. https://doi.org/10.1037/0022-0167.52.2.126

Prasad, P. (2017). Crafting qualitative research: Beyond positivist traditions (2nd edn.). New York: Taylor & Francis.

Proctor, R.W., & Capaldi, E.J. (2001). Empirical evaluation and justification of methodologies in psychological science. Psychological Bulletin, 127(6), 759–772. https://doi.org/10.1037/0033-2909.127.6.759

Sandelowski, M., Voils, C.I., & Knafl, G. (2009). On quantitizing. Journal of Mixed Methods Research, 3(3), 208–222. https://doi.org/10.1177/1558689809334210

Schmidt, F.L., & Oh, I. (2016). The crisis of confidence in research findings in psychology: Is lack of replication the real problem? Or is it something else? Archives of Scientific Psychology, 4, 32–37. https://doi.org/10.1037/arc0000029

Shadish, W.R. (1995). Philosophy of science and the quantitative-qualitative debates: Thirteen common errors. Evaluation and Program Planning, 18(1), 63–75. https://doi.org/10.1016/0149-7189(94)00050-8

Shrout, P.E., & Rodgers, J.L. (2018). Psychology, science, and knowledge construction: Broadening perspectives from the replication crisis. Annual Review of Psychology, 69(1), 487–510. https://doi.org/10.1146/annurev-psych-122216-011845

Skourteli, M.C., & Apostolopoulou, A. (2015). An interpretative phenomenological analysis into counselling psychologists’ relationship with research: Motives, facilitators and barriers – A contextual perspective. Counselling Psychology Review, 30(4), 16–33.

Spengler, P.M., & Lee, N.A. (2017). A funny thing happened when my scientist self and my practitioner self became an integrated scientist-practitioner: A tale of two couple therapists transformed. Counselling Psychology Quarterly, 30(3), 323–341. https://doi.org/10.1080/09515070.2017.1305948

Stricker, G. (2002). What is a scientist-practitioner anyway? Journal of Clinical Psychology, 58(10), 1277–1283. https://doi.org/10.1002/jclp.10111

Świątkowski, W., & Dompnier, B. (2017). Replicability crisis in social psychology: Looking at the past to find new pathways for the future. International Review of Social Psychology, 30(1), 111–124. https://doi.org/10.5334/irsp.66

Van Der Watt, R. (2016). Strengthening doctoral supervision in a Doctor of Psychology (DPsych) specialisation in child and adolescent psychology. Journal of Psychology in Africa, 26(1), 84–91. https://doi.org/10.1080/14330237.2016.1149332

Vespia, K.M., & Sauer, E.M. (2006). Defining characteristic or unrealistic ideal: Historical and contemporary perspectives on scientist-practitioner training in counselling psychology. Counselling Psychology Quarterly, 19(3), 229–251. https://doi.org/10.1080/09515070600960449

Vespia, K.M., Sauer, E.M., & Lyddon, W.J. (2006). Counselling psychologists as scientist-practitioners: Finding unity in diversity. Counselling Psychology Quarterly, 19, 223–227. https://doi.org/10.1080/09515070600960506

Wakefield, J.C., & Kirk, S.A. (1996). Unscientific thinking about scientific practice: Evaluating the scientist-practitioner model. Social Work Research, 20(2), 83–95. Retrieved from https://academic.oup.com/swr/article-abstract/20/2/83/1631932.