About the Author(s)


Leon T. de Beer
Department of Psychology, Norwegian University of Science and Technology – NTNU, Trondheim, Norway

WorkWell Research Unit, North-West University, Potchefstroom, South Africa

Unit of Occupational Medicine, Karolinska Institutet, Stockholm, Sweden

Crystal Hoole
Department of Industrial Psychology, Stellenbosch University, Stellenbosch, South Africa

Citation


De Beer, L.T., & Hoole, C. (2025). Artificial intelligence adoption: Industrial psychology and the future of work. SA Journal of Industrial Psychology/SA Tydskrif vir Bedryfsielkunde, 51(0), a2385. https://doi.org/10.4102/sajip.v51i0.2385

Editorial

Artificial intelligence adoption: Industrial psychology and the future of work

Leon T. de Beer, Crystal Hoole

Copyright: © 2025. The Author(s). Licensee: AOSIS.
This work is licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0) license (https://creativecommons.org/licenses/by/4.0/).

Introduction

The rapid development of artificial intelligence (AI) has left many organisations, employees and governments simultaneously excited and deeply uncertain. Only a few years ago, many wondered whether the much-heralded ‘Fourth Industrial Revolution’ would ever materialise. Today, with large language models (LLMs), automation technologies, and intelligent systems advancing at an accelerating pace, the transformation is beyond doubt. Each new generation of LLMs delivers step-changes in capability, with no clear ceiling in sight.

Internationally, government policy reflects striking contrasts. The European Union enacted decisive legislation in February 2025, banning high-risk AI systems such as untargeted facial recognition and workplace emotion recognition (European Parliament, 2025). In the United States, policymakers have adopted a deregulatory stance to stimulate innovation and protect their technological lead (White House, 2025). South Africa, in turn, has tabled its 2024 National AI Policy Framework (Department of Communications and Digital Technologies [DCDT], 2024), signalling intent, but not yet enacting legislation. This uneven global response illustrates the fundamental dilemma: balancing economic opportunity with social protection.

For South Africa, where unemployment reached 32.9% in early 2025 (46.1% among the youth), the dilemma is particularly acute (World Economic Forum, 2025). Technological disruption in such a fragile labour market is not only about efficiency gains; it is about everyday survival for millions. Artificial intelligence threatens to displace not only experienced workers but also the entry-level positions that are critical for graduates and new entrants. Research already points to strong potential displacement effects in low-skilled sectors (Giwa & Ngepah, 2024), although global evidence remains inconclusive (Santarelli et al., 2025). Yet to adopt only a dystopian narrative is also misleading. The very same technologies that threaten jobs may equally serve as catalysts for growth and new opportunities. This paradox, AI as both promise and peril, is precisely where industrial and organisational psychologists (IOPs) must engage as a community, and with employers and organisations.

Industrial psychology and the technology paradox

In 2021, the Society for Industrial and Organisational Psychology of South Africa (SIOPSA) published an extensive project on the future of the profession in the changing world of work (Veldsman et al., 2021). The report underlined the centrality of technology, but it framed the challenge less as a purely technical shift and more as a reconfiguration of perspectives, values, and governance. Artificial intelligence and related technologies represent both potential and risk. Whether they will support human flourishing or deepen inequality depends on the decisions we make now. The report identified themes requiring urgent engagement: sense-making in a turbulent environment, the defence of human values in technologically mediated work, professional identity and responsiveness, the redesign of education and capabilities, and the reimagination of research itself. However, a review of articles published in SA Journal of Industrial Psychology (SAJIP) from 2022 to 2025 reveals that few contributions aligned with these themes. Of 136 published articles, only 16 explicitly discussed the technological disruption of work. Despite repeated calls for urgency, the profession has largely continued with ‘business as usual’, a pattern to which we are all, to some extent, party.

This gap is concerning. While South African IOPs remain relatively silent, international scholarship has begun to shape a substantial agenda for understanding AI in the workplace. Bankins et al. (2024), for example, mapped five central pathways for future research: worker well-being, human–AI collaboration, AI-supported leadership, fairness in processes and outcomes, and multilevel theorising. These priorities reflect a shift away from purely technical debates towards human-centred questions that IOPs are uniquely positioned to answer.

Artificial intelligence and the world of work

International evidence suggests that AI will have uneven impacts across different sectors and job types. Oosthuizen (2022) explored the implications of smart technology, artificial intelligence, robotics, and algorithms (STARA), highlighting both the displacement of millions of jobs and the creation of new roles. The transition, however, will be mediated by structural challenges such as skills deficits, infrastructure, and inequality. Similarly, Afshani (2022), in a systematic review, found that AI-enabled processes can not only improve efficiency and create new insights but also heighten risks of stress, alienation, and job insecurity. For IOPs, this highlights a dual mandate: to anticipate and mitigate the negative consequences of disruption while guiding organisations to use AI ethically and constructively. Advisory roles become critical: ensuring data integrity, protecting employee dignity, and supporting trust in AI-driven systems. At the same time, IOPs must participate in designing reskilling programmes and contribute to organisational cultures that enable ‘human-in-the-loop’ collaboration between employees and AI agents (Mollick, 2024).

The human–technology interface introduces further complexities. Employees must learn not only to work with AI systems but also to lead and manage them as collaborators or even supervisors. Anthropomorphism, attributing human-like qualities to machines, adds further psychological dynamics (Airenti, 2015; Salles et al., 2020). Interactions with AI tools that may be described as colleagues or supervisors raise ethical and identity questions that the profession cannot ignore, especially as these tools become more embodied in robotic or humanoid forms and embedded in workplace infrastructure. These developments demand not only applied ethical frameworks but also deeper conceptual analysis of how we understand human agency, fairness, and responsibility in technologically mediated work.

Academia, assessment, and the industrial and organisational psychologist identity

Universities are also experiencing disruption. Students increasingly rely on AI to support learning, draft assignments, and even generate content. While some uses represent legitimate learning tools, others challenge academic integrity. Detection systems have mostly proven unreliable, especially for non-native English writers (Liang et al., 2023). However, newer research points to solutions to this problem as well (e.g. Emi & Spero, 2024). At the same time, research shows that generative AI can help reduce structural inequities in academic publishing by improving writing quality and visibility for scholars from underrepresented backgrounds (Kaniel et al., 2025).

This situation challenges academics to rethink how they frame assignments and how they assess students effectively. Some have suggested a return to sit-down examinations, which, although a potential solution, is perhaps regressive. Oral examinations are also workable at postgraduate level in many instances, but not for large student groups, unless, in future, such tasks are outsourced to AI voice agents.

These debates are not peripheral. They strike at the heart of how the profession educates, evaluates, and reproduces itself. Oosthuizen’s (2022) STARA competence model highlighted the importance of technological literacy, resilience, and intercultural communication for future-fit IOPs. Without embracing such competencies, the profession risks irrelevance.

A call to action

This editorial is intended as an opening, not an ending. SA Journal of Industrial Psychology is uniquely positioned to host this dialogue and invites, for peer review, conceptual contributions, empirical studies, and practice reflections that critically engage the intersections of AI, work, and psychology. The paradox of AI, as both threat and opportunity, will not resolve itself. It demands thoughtful, evidence-based, and ethically grounded scholarship. The future belongs not to those who resist change, nor to those who embrace it uncritically, but to those who navigate it pragmatically in the service of human flourishing. Industrial and organisational psychologists must lead the way.

The future of work has not waited for us. Industrial and organisational psychologists in South Africa must urgently move beyond ‘business as usual’. Bankins et al. (2024) identified five interconnected research pathways to advance organisational behaviour scholarship on AI that can be used as a general guideline:

  • Worker well-being: Investigate how AI can impede but also support employee health, satisfaction, and flourishing by reducing demands and enhancing meaning and work–life balance.
  • Human–AI collaboration: Clarify when humans and AI complement rather than replace each other, focusing on system design, perceived capabilities, and cultures of learning.
  • Artificial intelligence-supported leadership: Explore how AI augments decision-making, job design, and feedback, and how leaders and employees navigate authority, trust, and automation bias.
  • Fairness and justice: Examine AI’s impact on distributive, procedural, and interactional justice, particularly for vulnerable groups, and connect findings to responsible AI design. This includes potential bias in AI models.
  • Multilevel approaches: Apply cross-disciplinary and multilevel approaches to understand how individual, team and organisational dynamics interact to shape experiences with AI at work.

Other specific areas we identified include attitudes towards AI; job and career insecurity, particularly among youth and early-career employees; effective reskilling models for human–AI collaboration; and sustainability and cultural perspectives that ensure AI systems reflect local knowledge and values (Murphy & Largacha-Martínez, 2022). This field also presents rich opportunities for experimental research. These priorities align with recent South African employer evidence, which similarly calls for digitally dexterous IOPs who can partner strategically in technology-enabled workplaces (Coetzee & Veldsman, 2022).

Conclusion

The emergence of AI as collaborator represents not only a technological revolution but also a profound shift in how we engage with work. The stakes are particularly high for South Africa, where sober decisions could have a positive impact on society. If IOPs do not step forward, others will define the human–AI future of work without us. SA Journal of Industrial Psychology invites contributions that critically engage this challenge, so that we shape a future of work that is not only technologically advanced but also remains human-centred.

References

Afshani, A.M. (2022). The impact of artificial intelligence on industrial-organizational psychology: A systematic review. Journal of Behavioral Science, 17(3), 125–139.

Airenti, G. (2015). The cognitive basis of anthropomorphism: From relatedness to empathy. International Journal of Social Robotics, 7(1), 117–127. https://doi.org/10.1007/s12369-014-0263-x

Bankins, S., Ocampo, A.C., Marrone, M., Restubog, S.L., & Woo, S.E. (2024). A multilevel review of artificial intelligence in organizations: Implications for organizational behavior research and practice. Journal of Organizational Behavior, 45(2), 159–189. https://doi.org/10.1002/job.2735

Coetzee, M., & Veldsman, D. (2022). The digital-era industrial/organisational psychologist: Employers’ view of key service roles, skills and attributes. SA Journal of Industrial Psychology, 48(0), a1991. https://doi.org/10.4102/sajip.v48i0.1991

Department of Communications and Digital Technologies (DCDT). (2024). South Africa National Artificial Intelligence Policy Framework (Draft). Retrieved from https://www.dcdt.gov.za/sa-national-ai-policy-framework/file/338-sa-national-ai-policy-framework.html

Emi, B., & Spero, M. (2024). Technical report on the Pangram AI-generated text classifier [Preprint]. arXiv. Retrieved from https://arxiv.org/abs/2402.14873

European Parliament. (2025, February 19). EU AI Act: First regulation on artificial intelligence. Retrieved from https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

Giwa, F., & Ngepah, N. (2024). The relationship between artificial intelligence and low-skilled employment in South Africa. Heliyon, 10(23), e40640. https://doi.org/10.1016/j.heliyon.2024.e40640

Kaniel, R., Rui, H., Sun, S., & Wang, P. (2025). Writing matters: Generative AI as an academic impact equalizer. SSRN. Retrieved from https://ssrn.com/abstract=5361925

Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7), 100779. https://doi.org/10.1016/j.patter.2023.100779

Mollick, E. (2024). Co-intelligence: Living and working with AI. Penguin Publishing Group.

Murphy, J.W., & Largacha-Martínez, C. (2022). Decolonization of AI: A crucial blind spot. Philosophy & Technology, 35, 102. https://doi.org/10.1007/s13347-022-00588-2

Oosthuizen, R.M. (2022). The Fourth Industrial Revolution – Smart technology, artificial intelligence, robotics and algorithms: Industrial Psychologists in future workplaces. Frontiers in Artificial Intelligence, 5, 913168. https://doi.org/10.3389/frai.2022.913168

Salles, A., Evers, K., & Farisco, M. (2020). Anthropomorphism in AI. AJOB Neuroscience, 11(2), 88–95. https://doi.org/10.1080/21507740.2020.1740350

Santarelli, E., Carbonara, E., & Tripathi, I. (2025). Assessing the impact of AI on labor market outcomes: A meta-analysis. SSRN. Retrieved from https://ssrn.com/abstract=5126345

Veldsman, T., Hoole, C., Manyaka, S., Kock, R., Titus, S., & Winkler-Titus, N. (2021). Industrial-Organisational Psychologists engaging with the new world of work within the context of the Fourth Industrial Revolution: Emerging shifts and challenges with the required future-fit responses by and capabilities for Industrial-Organisational Psychologists. Knowledge Resources.

White House. (2025). America’s AI action plan: Winning the race. Retrieved from https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf

World Economic Forum. (2025). How AI is reshaping the career ladder, and other trends in jobs and skills on Labour Day. Retrieved from https://www.weforum.org/stories/2025/04/ai-jobs-international-workers-day/