Artificial Intelligence: Framework of driving triggers to past, present and future applications and influencers of industry sector adoption

Richard Fulton, Diane Fulton, Susan Kaplan

2022-03-30

To gain a sense of the development of Artificial Intelligence (AI), this research analyzes what has been done in the past, what is happening in the present decade, and what is predicted for the next several decades. The paper highlights the biggest changes in AI and gives examples of how these technologies are applied in several key industry sectors, along with influencers that can affect adoption speed. Lastly, the research examines the driving triggers such as cost, speed, accuracy, diversity/inclusion and interdisciplinary research/collaboration that propel AI into an essential transformative technology.

Artificial Intelligence (AI) is an evolving science and art. Developments come in flashes and spurts over time. The scientific community shifts its focus across different topics and applications. Technological developments can and will continue to expand the problem-solving and innovative capabilities of AI. Researchers build on what has been done in the past, implement in the present and dream about what can happen in the future. Together, these developments over time lead to the state of the art of a technology like AI. This paper presents a time-evolving Framework for AI (FAI) based on past and present adoptions and future expectations of technology uses. Triggers such as cost, speed, accuracy, customization, inclusivity/diversity and cross-discipline/collaboration are factors that push an organization to adopt and transform a new technology.
When there are dramatic changes in the environment, customer needs, competitiveness in the industry, and the resources available to implement a new technology, these become influencers in how rapidly the technology becomes transformational as well. In this framework, the state of the art of AI is impacted by triggers, influencers and time. Three distinct industrial sectors, agriculture, education and healthcare, illustrate the sector-dependent nature of AI application development over time, spanning the past, present and future. The authors conclude with an in-depth discussion of the six driving triggers of AI transformative technology adoption.

This section briefly reports the work most related to examining the triggers and influencers of AI technology adoption over time, based upon a variety of theories and research models. The first group of theories and models pertinent to the development of artificial intelligence includes those related to technology acceptance and adoption. The three most used acceptance/adoption models are the Technology Acceptance Model (TAM), Diffusion of Innovations Theory (DOI) and the Unified Theory of Acceptance and Use of Technology (UTAUT) [1]. TAM, the most widely tested empirical model, proposes three technology acceptance factors, 1) "perceived usefulness", 2) "perceived ease of use" and 3) "attitude towards use", and focuses on the individual [2]. In contrast, DOI focuses on both individuals and organizations and on four factors, time, channels of communication, social systems and innovation, which impact technology diffusion and adoption [3]. Differences in adopter characteristics in the DOI model categorize firms and individuals within firms as innovators, early adopters, early majority, late majority and laggards, lending credibility to the industry sector differences discussed in this paper [3].
Lastly, the UTAUT model is a compilation model built on eight models (including TAM and DOI), emphasizing effort expectancy, performance expectancy, social influence, and facilitating conditions [4]. "Facilitating conditions" means removal of barriers impeding technology adoption. For example, in a UTAUT study of new technology adoption in e-learning, "facilitating conditions" included providing financial resources, new infrastructure, additional human resources and innovative educational content [5]. These "facilitating conditions" are included in this study under the "influencer" construct subcategory of resources. Both the TAM and DOI models use the constructs of "perceived usefulness" and "relative advantage" [6]. A new construct of "perceived benefits of technology adoption", incorporated into the International Technology Adoption (ITA) model, combined the previous constructs of technology utility to the individual with benefits to the company's well-being; it corresponds closely to the "triggers" of speed, accuracy, cost, and customization and the "influencers" of competitive advantage and customer needs presented in this paper. Digital transformation in industry is a compelling topic and the focus of a framework called "The Digital Transformation Journey" [7]. In that framework, the construct of "mounting challenges and drivers" means finding ways to use technology to do business in new and better ways [7]. The coronavirus outbreak in late 2019, for example, is considered a pressure point or "driver" of technology transformation in a variety of industrial sectors [7]. An example of such a "driver" or "influencer" of AI technology in the healthcare sector is the coronavirus outbreak in Wuhan, China in 2019, where AI tools provided early detection of the virus and helped isolate affected areas [8].
It is likely the experience of a global pandemic will have a long-lasting and global impact on AI diffusion, spurring new methods of early detection that will help prevent future pandemics and influence health policies worldwide [8]. Increased competition is another "challenge" creating market pressure that, if not addressed, can lead to loss of market share and revenues [7]. The Framework of AI uses the construct of "influencers", changing environments and competitive advantage, which corresponds to these transformational "challenges and drivers". The authors build on previous models' "benefits" constructs: this research's "triggers" add value, usefulness and benefits to the individual (less time to do a task) and the organization (reduced costs and mistakes), and its "influencers" capture changing environments, resources and competitive advantage. In addition, the authors enhance existing theories and models by adding two new "triggers of AI technology adoption" to their framework: diversity/inclusion and cross-discipline/collaboration. Not addressing these essential issues could sabotage transformational adoption of AI. In fact, the more leaders understand the biases in technology [8] and the need for collaboration across disciplines/fields, the better they can improve its usefulness and therefore increase its transformational adoption [9]. Lastly, this research supports the idea that AI technology is a dynamic phenomenon which changes over time, and that a better understanding of technology changes through past, present and future developments can help increase individual and organizational ability to build AI maturity [10]. Successes in one industry spur interest in another sector. Some sectors are quick to adopt new technological applications such as AI, while others are more cautious.
Factors that can prompt or influence adoption include changing environments, such as climate change or a pandemic event like the 2019 coronavirus (COVID-19), and evolving customer needs for a product or service [11][12][13]. Often, organizations are searching for competitive advantage in cost, quality, or better satisfying a particular niche of consumers. For example, a recent McKinsey study showed advanced AI adopter firms were 52% more likely to increase their market share, and 27% reported growth in their marketplace, compared to those who were testing or moderately implementing AI [14]. Lastly, there are changing priorities in the allocation and budgeting of resources depending on societal expectations and organizational readiness [15]. Figure 1 gives a schematic of the Framework for AI (FAI), from development triggers to adoption influencers, based on past, present and future AI technology trends.

AI began in the 1940s, demonstrating that a new form of computing was possible, with an approach derived from known cognitive processes and neurobiology. The initial purpose of AI was to automate, through computers, non-analytical human knowledge, using symbolic computation processes, connectionist ones, or a combination of both. AI was initially considered a branch of computer science with limited application, restricted by the capabilities of the hardware of the time. Turing, a British mathematician, helped design an electromechanical code-breaking machine called the Bombe in the early 1940s that successfully broke the Enigma code used by the Germans during World War II, a task thought impossible by most human mathematicians at the time. He also devised the Turing Test, which holds that "if a human is interacting with another human and a machine and unable to distinguish the machine from the human, then the machine is said to be intelligent" [16].
In 1956, John McCarthy offered one of the first and most influential definitions of AI: "The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it" [17]. One of the most famous AI examples is IBM's Deep Blue chess-playing program, which beat the world chess champion Garry Kasparov in 1997. This expert system processed 200 million possible moves per second and determined the optimal next move looking 20 moves ahead [18]. The current definition of artificial intelligence (AI) has transformed into "computing systems that are able to engage in human-like processes such as learning, adapting, synthesizing, self-correction and use of data for complex processing tasks" [19]. AI has become a vital element in the development of many services and industrial sectors in the 21st century. This discipline of computer science studies algorithms to develop computer solutions that mimic the cognitive, physiological, or evolutionary phenomena of nature and human beings. Data, examples of solutions, and the relationships between them facilitate the resolution of diverse problems [20]. AI exhibits, in certain respects, "an intelligent behavior" that can be confused with that of a human expert in the performance of certain tasks [21]. The Deep Blue project inspired the development of Watson, a computer that was able to beat the two best Jeopardy! players in the world in 2011. Its software could process and reason using natural language, drawing from a massive supply of information poured into it in the months before the competition [22]. At present, AI has been redirected towards the construction of solutions to problems analyzing large volumes of data which change over time.
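Deep Blue's lookahead rests on game-tree search. The sketch below is a toy minimax over an invented "add 1 or 2" game, purely to illustrate the idea of evaluating positions several moves ahead; Deep Blue's real system combined massively parallel search with hand-tuned chess evaluation, none of which appears here.

```python
# Toy minimax: look `depth` moves ahead, alternating a maximizing and a
# minimizing player, and score leaf states with `evaluate`.

def minimax(state, depth, maximizing, evaluate, moves, apply_move):
    """Return the best achievable score looking `depth` moves ahead."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    if maximizing:
        return max(minimax(apply_move(state, m), depth - 1, False,
                           evaluate, moves, apply_move) for m in legal)
    return min(minimax(apply_move(state, m), depth - 1, True,
                       evaluate, moves, apply_move) for m in legal)

# A trivial game: the state is a number, each player adds 1 or 2, and the
# maximizer wants the final number to be even (+1) rather than odd (-1).
evaluate = lambda s: 1 if s % 2 == 0 else -1
moves = lambda s: [1, 2]
apply_move = lambda s, m: s + m

# Pick the maximizer's opening move by searching three further plies.
best = max(moves(0), key=lambda m: minimax(apply_move(0, m), 3, False,
                                           evaluate, moves, apply_move))
```

Because the minimizer moves last in this four-ply game and can always flip the parity, the search correctly concludes that every opening move leads to a losing position for the maximizer; real engines draw their strength from deeper search and much richer evaluation functions.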
Currently, iterative function-approximation techniques and interconnected neural network architectures make up most of the field's techniques, grouped under the terms "Machine Learning" and "Deep Learning". AI is a growing presence in our society. From the intelligent sensors that let a car drive autonomously to mobile assistants, we are already surrounded by AI in one way or another at all times [23]. Alexa, Siri, Cortana, security surveillance, fitness/dieting apps and online customer service are all examples of AI [24]. A large portion of the global population uses these products/services in their everyday lives, and the demand and popularity are ever growing [24]. AI is a game-changing technology and disruptor. Within 10 years, it is predicted that 375 million workers will need to change occupations as a result of widespread use of AI [24]. AI and machine learning are predicted to reshape most sectors, but particularly manufacturing, energy, transportation, agriculture, labor markets, and financial management [25]. AI will not only impact our personal lives but also fundamentally transform how organizations make decisions and interact with employees and customers. One of the most vital questions will be how AI systems and humans can coexist. Which decisions should be made by AI, which ones by humans, and which ones in collaboration are issues all companies will need to address in the future [22].

Agriculture is a sector that draws on science, engineering, and economics. The deductive techniques of AI expert systems have been used in agriculture to integrate crop management, encompassing irrigation, nutritional problems and fertilization, weed control/cultivation, herbicide application, and insect control/insecticide application. Additional subject areas were plant pathology, salinity management, crop breeding, animal pathology, and animal herd management [26].
Agricultural applications of expert systems and decision support systems have also benefited the simulation of processes and the management of supply operations [27][28]. In other studies, AI has been used in quality control processes, whether or not supported by artificial vision [29], and in processes for justifying food policy decisions, such as when AI is analyzed as a collaborative tool between the different actors that supply the agri-food chain, using distributed computing processes [30]. In the field of science, climate aspects are studied through modeling and solar radiation is predicted using neural networks [31][32]. Interest in the application of AI to the world of agriculture and its multiple facets has been growing in recent years, as it has proven to be a powerful tool for data analysis [33]. Current AI technology investigates the price behavior of agri-food products [34][35][36]. In these cases, artificial neural networks and machine learning techniques are applied to investigate the price variations of agricultural commodities. The expansion and intensification of industrial and technological agriculture have increased production, decreased the number of people suffering from poor nutrition and ensured richer and more resource-intense diets around the world. Industrial agricultural activities also generate employment, improve economic growth and boost the service sector in industrial regions [37]. Now agriculture 4.0 combines intelligent farms and the interconnection of machines and systems, and seeks to adapt production ecosystems by optimizing the use of resources such as water, fertilizers, and phytosanitary products. In addition, it uses big data and imaging technology to arrive at "precision agriculture" [42][43][44][45].
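Several of the applications above, such as price prediction and solar-radiation forecasting, reduce to fitting a learned model to a historical series. The sketch below caricatures this with the simplest possible "network": a one-layer linear model trained by gradient descent on an invented commodity-price series. The data, features and hyperparameters are all made up for illustration; the studies cited use deeper networks and far richer inputs.

```python
import numpy as np

# Synthetic 5-year monthly price series: a linear trend plus noise.
rng = np.random.default_rng(0)
months = np.arange(60, dtype=float)
price = 100 + 0.8 * months + rng.normal(0, 2.0, size=60)

# Features: scaled month index and a bias column.
X = np.column_stack([months / 60.0, np.ones_like(months)])

# Batch gradient descent on mean squared error.
w = np.zeros(2)
for _ in range(5000):
    grad = 2 * X.T @ (X @ w - price) / len(price)
    w -= 0.1 * grad

forecast = X @ w
rmse = float(np.sqrt(np.mean((forecast - price) ** 2)))
```

With this setup the fitted weights recover the generating trend (about 48 per scaled unit, i.e. 0.8 per month, with an intercept near 100), and the residual error settles near the noise level; a real forecasting study would validate on held-out future data rather than on the training series.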
Combined with genetic engineering and the use of data, it can solve an important part of agriculture's challenges by maximizing efficiency in the use of resources and adapting to climate change and other pressures [46]. To this end, the use of big data in decision-making is essential [47][48]. The technification of agriculture, decision support systems and the adoption of Industry 4.0 concepts by agri-food companies will continue to generate increased innovation in AI [49].

IBM's supercomputers were followed across school and university campuses, with many delighted when Deep Blue bested the world chess champion in 1997. In 2011, Watson's victory in the game show Jeopardy! against the two highest winners heralded the era of cognitive computing, with its potent natural language processing, knowledge representation and reasoning capabilities. Educational interest in AI was initially captured through computers playing games, but early versions of educational tutorials, learning management systems, simulations and iterative computer learning in the 1990s and early 2000s started the AI revolution in education [50][51][52]. Universities have been particularly impacted by the 2019 coronavirus pandemic due to the in-person nature of traditional education. They are responding to this threat by investing in digital technologies such as cloud, AI, analytics, immersive learning spaces, and digital curricula. In fact, more than 80% of institutions are allocating over 25% of their 2021 IT budgets toward digital initiatives [53]. "Customization of learning has been happening through rising numbers of adaptive education programs, gaming, and software. These systems are personalized by enabling repeated lessons that students haven't mastered, and generally helping students to work at their own pace, space and liberty" [23]. Individualized automated tutoring has been developed to help students learn easily and on their own schedules [54].
At Colorado State University, online students and tutors are using AI powered by Cognii, an EdTech company, to improve learning and assessment tools [55]. Another recent example of AI advancement is AlphaGo, a 'machine learning' program developed by DeepMind, the AI branch of Google, which was able to defeat the world's best player at Go, a very complex board game considered more difficult than chess [56]. The AlphaGo program proved that computers and deep learning can reach new heights and further advance human understanding of certain topics. 'Machine learning' is a subfield of artificial intelligence comprising software able to recognize patterns, make predictions, and apply newly discovered patterns to situations that were not included or covered by its initial design. AI has the potential to modify the quality, quantity, delivery, and nature of education. It also promises to change forever the roles of parents, students, teachers, and educational systems. Using artificial intelligence systems, software and support, students can learn from across the world at any time. These kinds of applications are taking the place of certain types of classroom instruction and may replace teachers in some cases [23]. AI can contribute to changing education via the automation of administrative teaching tasks, software programs that favor personalized education, the detection of topics that need reinforcement in class, the guidance and support of students outside the classroom, and the use of data in an intelligent way to teach and support students [57]. Three techniques of AI are particularly relevant for future educational developments: personalization systems (knowledge of and individualized adaptation to the student), software agents (intelligent programs and robots with autonomy and the ability to learn) [58], and ontologies and the semantic web [59].
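The "personalization system" idea above can be made concrete with a minimal sketch: track a per-topic mastery estimate for each student and always serve the topic the student currently knows least. The update rule (a simple exponential moving average) and the topic names are invented for illustration; real adaptive-learning products use far richer student models.

```python
class AdaptiveTutor:
    """Minimal adaptive-practice selector: serve the weakest topic next."""

    def __init__(self, topics, alpha=0.3):
        # Start every topic at 0.5, i.e. "mastery unknown".
        self.mastery = {t: 0.5 for t in topics}
        self.alpha = alpha

    def next_topic(self):
        # Serve the topic with the lowest current mastery estimate.
        return min(self.mastery, key=self.mastery.get)

    def record_answer(self, topic, correct):
        # Nudge the estimate toward 1.0 on a correct answer, 0.0 otherwise.
        target = 1.0 if correct else 0.0
        self.mastery[topic] += self.alpha * (target - self.mastery[topic])

tutor = AdaptiveTutor(["fractions", "decimals", "percentages"])
tutor.record_answer("fractions", True)   # mastery rises to 0.65
tutor.record_answer("decimals", False)   # mastery falls to 0.35
```

After these two answers the tutor would serve "decimals" next, since it now has the lowest estimate; the same loop, run over many students and sessions, is the skeleton of the adaptive programs described above.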
When developed and applied in education, these systems and techniques can be powerful resources for improving the teaching-learning process, since they are able to generate a kind of virtual teacher who is fully trained, has human characteristics, and is able to interact ubiquitously (that is, at any time and place) [54]. By harnessing the power of AI and deep learning, educators can gain insights from the vast quantities of data collected from their students, make better decisions and improve student retention. Teachers can access detailed feedback on how learners are processing information. Big data can help answer key online learning questions: what are the best ways to teach complex ideas, and which parts of a course are best taught in person instead of online? Big data helps students find the right courses, customize them to their needs and keep on the right track [55]. Most EdTech products will have an AI or deep learning component in the future. AI could help online learners self-assess, increase connectivity in global classrooms and create social simulation. Limitations include uncertainty about how humans learn and fears among faculty that they must be retrained or could be displaced completely [55]. "Remote learning will coexist with on-campus education. As institutions accelerate their focus on student diversity and address unique educational needs, it is critical for them to make necessary technological investments to support their teaching models" [53]. In the future, higher educational institutions should, first, expand outreach by using online courses and digitization of content to enable on-demand access by students across different geographies for remote learning, self-directed learning or specialized skill development. Secondly, they should increase funding to facilitate online learning, particularly by enhancing IT capabilities: cloud platforms, collaborative tools, data security measures, AI bots and assessments.
Lastly, educational organizations must learn to mine data assets and use AI's analytical solutions to develop personalized content, upskill faculty and enable remote proctoring, communications and virtual assistants [53].

A recent review of the history of clinical decision support describes the dramatic improvement in the medical sector due to the advent of cognitive aids to support diagnosis, treatment, care coordination, surveillance and prevention, and health maintenance or wellness [60][61]. Some studies highlighted the importance of AI in healthcare, especially in medical informatics, but there is still work to be done on examining the impacts and consequences of the technology [61][62]. In the medical profession, image recognition tools are already outperforming physicians in the detection of skin cancer [63]. Molecular imaging modalities have also been effective in diagnosing neurodegenerative diseases [64]. Digital medicine and wearable devices are presently used in healthcare by mining data for anomaly detection, prediction, and diagnosis/decision making. Wearable devices and sensors have been used to continuously track physiologic parameters, guiding patient care strategies that improved outcomes and lowered healthcare costs in cardiac patients with heart failure [65]. They have also been effective in improving diagnosis and management of neurological disorders such as Parkinson's disease [66]. Machine learning applications in healthcare have been helpful in earlier disease detection and prediction. For example, machine learning models were used in identifying stable subsets of predictive features for autism behavioral detection and blood biomarkers for autism [67][68]. Machine-learning algorithms were also used in the prediction of periventricular leukomalacia in neonates after cardiac surgery [69]. Sensor-based, quantitative, objective and easy-to-use systems for assessing many diseases have the potential to replace traditional qualitative and subjective ratings by human interpretation in the future [70].

Future AI in healthcare must be able to use machine learning to handle structured data such as images and genetic data, and natural language processing to mine unstructured texts. It must then be trained on healthcare data before it can assist physicians with disease diagnosis and treatment options [71]. Deep learning for automated and/or augmented biomedical image interpretation will continue to be used in radiology, pathology, dermatology, ophthalmology and cardiology, with strict protocols and benchmarks in place to ensure data integrity and fairness. AI in medicine will continue with informatics approaches, from deep learning information management to control of health management systems, including electronic health records, and active guidance of physicians in their treatment decisions. Also in the future, healthcare will increase its use of robots to assist elderly patients and of targeted nanorobots, a unique new drug delivery system [72].

Certain factors are accelerating the growth and use of AI throughout our society and will continue to be triggers for AI's transformative impact. AI can be used as a competitive strategy in all economic sectors, particularly in cost/pricing advantages, customizing or personalizing products and services, and research using data mined from present and potential customers. In addition, many AI advances have been accomplished by finding ways to increase the speed and accuracy of data resources and data research, which can accelerate innovation while increasing the level of quality for consumers. Lastly, in our pursuit of the positive contributions of AI, we must be mindful of creating products and services that appeal to an inclusive and diverse group of people. Another way to increase the potential of AI is to collaborate and reach across disciplines and sectors.
Please see Figure 2 for the critical triggers impacting AI. Artificial intelligence systems can take control of many tasks in an organization. For example, in an educational classroom, AI can handle time-consuming tasks like accounting processes, record keeping, filling out forms, producing documents and automatically grading assignments, freeing up time for teachers to improve the quality of learning, increase active learning and help students when needed [73]. In a survey about the benefits of AI in the workforce, 61% of respondents said it helped them have a more efficient and productive workday [74]. Almost half (49%) felt it improved their decision-making and accelerated time to insights, while 51% said they believed AI enabled them to achieve a better work/life balance [75]. The three highest-rated tasks to benefit from AI adoption were: 1) understanding trends and patterns; 2) moving data from one place to another; and 3) accessing data residing in different places across the organization [74]. It is also predicted that 70% or more of companies will use some type of artificial intelligence in their operations because AI builds efficiency and effectiveness [24]. In the healthcare field, for example, AI can use sophisticated algorithms to 'learn' features from healthcare data, which can yield insights for clinical practice; because it can be equipped with learning and self-correcting abilities, it will improve its accuracy based on feedback over time [71]. In a recent interview, Susan Kaplan, the VP of a high-tech firm called Modal Technology located in Minneapolis, cited that the "joint venture partnership between Modal Technology and medical researchers and scientists at McGill University Health Center Research in Montreal, Canada, using a new and mathematically proven non-statistical AI training model, ALIX, increased accuracy of finding patients who had cancer".
Also, a "byproduct of the training identified and rank ordered the biomarkers from the most relevant to irrelevant. The glass box solution was explainable and repeatable". Such abilities will help in "early detection of cancers, increase precision medicine solutions for patients and treatment outcomes in the future" [78]. Three cost-saving AI solutions include virtual assist (chatbots), human assist (which routes complex customer questions to a human), and screen assist (which provides common answers to humans) [79]. These AI technologies can save millions of dollars for financially stressed businesses in today's challenging times by enabling them to address issues that affect customer service, costs and revenues [79]. AI has already increased productivity and efficiency in healthcare delivery, which has helped improve care outcomes, patient experiences and access to medical services [80]. Customization is the name of the game: industries are using AI to humanize, personalize and customize products and services for their clients and to expand their outreach and engagement [81]. Hyper-personalization is the use of customer data to create and present customized contacts, information, or recommendations to customers. These customizations are created based on individual customer profiles, which rely on data from browsing patterns, purchase histories, geographic location, demographic data, and behavioral data [82]. For example, Thread, a UK-based fashion retailer, offers customers AI-based product recommendations as a "personal stylist", built from information collected in style quizzes and ongoing reactions to product recommendations, with minimal additional effort or staffing [82]. Hilton Hotels currently uses a robot concierge named Connie in its lobbies to greet guests, answer questions and provide concierge-like services, using natural language processing capabilities to interact with guests and develop meaningful profiles [82].
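The biomarker ranking described in the Modal Technology example above can be illustrated generically. ALIX itself is proprietary and non-statistical, so the sketch below substitutes a plain absolute-correlation score on invented data, purely to show the idea of ordering features from most to least relevant to an outcome.

```python
import numpy as np

# Invented cohort: 200 subjects, binary outcome (0 = healthy, 1 = cancer),
# and three hypothetical biomarkers with different degrees of relevance.
rng = np.random.default_rng(42)
n = 200
label = rng.integers(0, 2, size=n)
markers = {
    "marker_A": label * 2.0 + rng.normal(0, 0.5, n),  # strongly related
    "marker_B": label * 0.5 + rng.normal(0, 1.0, n),  # weakly related
    "marker_C": rng.normal(0, 1.0, n),                # unrelated noise
}

def relevance(values, outcome):
    # Absolute Pearson correlation with the outcome as a relevance score.
    return abs(np.corrcoef(values, outcome)[0, 1])

# Rank markers from most to least relevant.
ranking = sorted(markers, key=lambda m: relevance(markers[m], label),
                 reverse=True)
```

On data generated this way the strongly related marker dominates the ranking, which is the "most relevant to irrelevant" ordering the interview describes; explainable systems additionally document why each score came out as it did.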
Although Under Armour is known for clothing, the company reaches customers through the lifestyle activities of health and fitness, so it created the Record app, which collects user information on sleep, diet and physical activity. It then creates personalized health goals and workout plans, and after customers work out it provides feedback on the workout's effectiveness to help maximize future efforts [82]. While AI is quickly becoming a new tool in the CEO tool belt to drive revenues and profitability, it has also become clear that deploying AI requires careful management to prevent "unintentional but significant damage, not only to brand reputation but, more importantly, to workers, individuals, and society as a whole" [83, p. 1]. Recent research shows that AI bots and voice assistants have promoted unfair gender stereotypes by featuring gendered names, voices, or appearances. In the United States, Siri, Alexa, Cortana, and Google Assistant, which collectively account for an estimated 92.4% of U.S. market share for smartphone assistants, have traditionally featured female-sounding voices, reflecting designers' biases that female voices are more helpful, pleasant and accommodating than male ones [84]. In addition, racial and cultural biases also make it difficult for many people to interact easily with AI assistants around the world [85]. AI chatbots, recruitment software and risk assessment tools have in the past caused harm by being racist, gender-biased or selecting the wrong people to put in jail [76]. People may not care how Facebook identifies whom to tag in a given picture, but when AI systems are used to make diagnostic suggestions for skin cancer based on automatic picture analysis, understanding how such recommendations have been derived becomes a critical issue [63]. Experts say that AI is still "fragile, opaque, biased and not robust enough" to provide trustworthiness [87].
Leaders need to take the necessary steps to ensure that AI is being used in an ethical manner through consistent reliance on organizational values. Three ways to accomplish this are: 1) clarify how values translate into the selection of AI applications; 2) provide guidance on the definitions and metrics used to evaluate AI for bias and fairness; and 3) prioritize organizational values [83]. Expanding the concept of AI to 'Responsible AI' is essential to ensure fairness, ethics, security/safety, privacy, transparency and accountability issues are considered [88]. "Business leaders may claim that diversity and inclusivity are core goals, but they then need to follow through in the people they hire and the products their companies develop" [19]. Ensuring minorities are well represented among both users and evaluators of AI will make AI more accessible and inclusive [88]. The COVID-19 pandemic, first identified in 2019, has accelerated the need for the adoption of digital tools in education, particularly in the science, technology, engineering and mathematics (STEM) arena. The majority of software developers are still male, with only 25% women in the U.S., and minority racial groups are severely underrepresented in technology fields [89]. The goal is to create a stronger foundation for STEM literacy, inclusion, and diversity of STEM students, preparing the STEM workforce of the future. With the growing demand for advanced skill sets, educators can provide creative and more targeted learning rather than focusing on the repetitive tasks of creating problem sets. The net result is better learning outcomes for a wider group of students, and it requires collegial partnering, ongoing development, and thorough testing to implement [84]. "Creating differentiated experiences through personalization and immersive education will play a crucial role in the growth of remote learning," said Avasant's Research Leader, Pooja Chopra [53].
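One concrete way to "provide guidance on the definitions and metrics used to evaluate AI for bias and fairness", as recommended above, is to compute a fairness metric such as the demographic parity gap: the difference in positive-outcome rates between groups. The toy decisions and group labels below are invented for illustration; production audits use several complementary metrics, since no single number captures fairness.

```python
def demographic_parity_gap(decisions, groups):
    """Largest gap in positive-decision rate between any two groups."""
    rates = {}
    for g in set(groups):
        # Positive-outcome rate (e.g. approval rate) within group g.
        members = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    ordered = sorted(rates.values())
    return ordered[-1] - ordered[0]

# Hypothetical audit: 1 = approved, 0 = denied, across groups "a" and "b".
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(decisions, groups)
```

Here group "a" is approved 75% of the time and group "b" only 25%, a gap of 0.5; an audit would flag such a system for review, then investigate whether the disparity is justified by legitimate factors or reflects bias in the training data.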
"Educational institutions should collaborate with EdTech companies and progressive service providers to accelerate digital transformation" [53]. Bibliometric studies that connect different disciplines are of growing interest for analyzing the impact of AI synergies and their future within the research community. One example is a study [91] showing that, in the scientific production of researchers worldwide, quality, citations and synergies among authors all increase as collaboration across disciplines grows. Multi-disciplinary research is also vital to effective and natural human-robot interaction [92]. Interdisciplinary research in artificial intelligence is a way to garner synergistic outcomes across industries from the AI field. To that end, researchers [93] recommend three strategies: 1) collaborate on ways AI can impact other fields, and look for new ideas from other fields to apply to AI; 2) explain how decisions are made, be transparent about data biases, and use high-level evaluators and regulators to review processes; and 3) raise the AI education levels of scientific and educational experts. Human-robot interaction challenges AI in many regards: dynamic, partially unknown environments that were never designed to be robot-friendly; a broad variety of situations with rich semantics to understand and interpret; human interactions requiring fine yet socially acceptable control strategies; and natural, multi-modal communication requiring common-sense knowledge and an awareness of divergent mental models. Meeting these challenges demands that researchers and practitioners from across many fields collaborate to integrate and share their data, knowledge, understandings and experiences [58]. Cross-functional AI teams made up of diverse participants lead to greater innovation, more collaboration and better outcomes [94]. 
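Cross-disciplinary synergies of the kind discussed above are typically quantified bibliometrically from co-authorship data. The toy sketch below measures the share of co-author pairs that cross disciplinary lines; the papers, names and discipline labels are invented for illustration and are not data from [91].

```python
# Toy bibliometric measure: fraction of co-authorship ties whose two
# authors come from different home disciplines.
from itertools import combinations

def cross_discipline_share(papers):
    """papers: list of dicts mapping author name -> discipline, one per paper."""
    cross = total = 0
    for authors in papers:
        for a, b in combinations(sorted(authors), 2):  # each co-author pair
            total += 1
            if authors[a] != authors[b]:
                cross += 1
    return cross / total if total else 0.0

papers = [
    {"Lee": "CS", "Ng": "CS"},                                # within-field pair
    {"Lee": "CS", "Ortiz": "Medicine"},                       # cross-field pair
    {"Ng": "CS", "Ortiz": "Medicine", "Patel": "Education"},  # 3 pairs, all cross
]
print(cross_discipline_share(papers))  # prints 0.8
```

Tracking this share over time for a research group or field gives a crude but auditable indicator of whether interdisciplinary collaboration is actually growing.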
In this paper outlining a 'Framework for Artificial Intelligence' (FAI), the authors analyzed the triggers of AI development as well as the influencers of AI adoption. Current triggers such as speed, cost, accuracy, diversity/inclusion, competitiveness, personalization and the need for cross-disciplinary collaboration will no doubt continue into the foreseeable future. Present influencers such as the coronavirus pandemic, climate change, customer needs and resources may fluctuate or change, but there will always be forces that encourage wider AI adoption and others that discourage AI deployment in organizations. From this look at past, present and future applications in key industry sectors, a more comprehensive model of AI emerges. 
References 
- A review of technology acceptance and adoption models and theories 
- Technology Acceptance Model for empirically testing new end-user information systems: Theory and results 
- The state of empirical research on the adoption and diffusion of business-to-business e-commerce 
- User acceptance of information technology: Towards a unified view 
- Facilitating condition for e-learning adoption - Case of Ugandan universities 
- The utilization of e-government services: Citizen trust, innovation and acceptance factors 
- The DX journey in the enterprise and its leadership 
- Artificial Intelligence (AI) provided early detection of the coronavirus (COVID-19) in China and will influence future urban health policy internationally 
- How we can use tech to improve diversity in the workplace 
- Artificial intelligence maturity model: A systematic literature review 
- Agricultural technology adoption under climate change in the Sahel: Micro-evidence from Niger 
- Closure of universities due to coronavirus disease 2019 (COVID-19): Impact on education and mental health of students and academic staff 
- A literature review on impact of COVID-19 pandemic on teaching and learning 
- Five on-point reasons why businesses are adopting Artificial Intelligence 
- Ready or not, AI comes - An interview study of organizational AI readiness factors 
- Computing machinery and intelligence 
- Upper Saddle River 
- Deep Blue 
- Exploring the impact of Artificial Intelligence on teaching and learning in higher education 
- A look at the past, present and future research trends of Artificial Intelligence in agriculture 
- Disruptive technology: Economic consequences of Artificial Intelligence and the robotics revolution 
- A brief history of Artificial Intelligence: On the past, present, and future of Artificial Intelligence 
- 10 ways artificial intelligence will impact education sector 
- Artificial Intelligence and the impact on business curricula 
- Who will lead in the age of Artificial Intelligence? 
- Expert systems for agriculture 
- Interactive simulation modeling in farm decision-making 
- Comparison of artificial neural network and time series model for forecasting commodity prices 
- Artificial Intelligence applications in financial forecasting: A survey and some empirical results 
- Integrated autonomy: A modeling-based investigation of agrifood supply chain performance 
- Self-Organizing Maps: Applications to synoptic climatology 
- Artificial Intelligence technique for modelling and forecasting of solar radiation data: A review 
- Artificial Intelligence in agriculture 
- Data mining in agriculture on crop price prediction: Techniques and applications 
- Short-term price forecasting for agro-products using Artificial Neural Networks 
- Agricultural crop yield prediction using artificial neural network approach 
- Worldwide research trends on sustainable land use in agriculture 
- Supply chain management using multi-agent systems in the agri-food industry 
- Agricultural robotics for field operations 
- Economics of robots and automation in field crop production 
- Agricultural robotics research applicable to poultry production: A review 
- Potential of laboratory hyperspectral data for in-field detection of Phytophthora infestans on potato 
- Genetic algorithm-based Internet of Precision Agricultural Things (IopaT) for agriculture 4.0 
- Technology transfer and adoption for smallholder climate change adaptation: Opportunities and challenges 
- Agricultural big data analytics and the ethics of power 
- A cloud computing framework for analysis of agricultural big data based on Dempster-Shafer theory 
- Decision support systems for agriculture 4.0: Survey and challenges 
- Simulation games: One more tool on the pedagogical shelf 
- Intelligent Tutoring Systems: Lessons learned 
- RadarView™ report 
- Data-driven hint generation in vast solution spaces: A self-improving Python programming tutor 
- 5 EdTech trends shaping business education from Artificial Intelligence to virtual reality 
- Google secretly tested AI bot 
- AI and mathematical education 
- Trends in scientific research on precision farming in agriculture using science mapping method 
- Agriculture 4.0: Making Rose 
- Artificial cognition for social human-robot interaction: An implementation 
- Artificial Intelligence and big data in public health 
- Clinical decision support: A 25-year retrospective and a 25-year vision 
- Artificial Intelligence in health - The three big challenges 
- Thirty years of intelligence models in management and business: A bibliometric review 
- Man against machine: Diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists 
- Role of Artificial Intelligence techniques (Automatic Classifiers) in molecular imaging modalities in neuro-degenerative disease 
- Moving from digitalization to digitization in cardiovascular care: Why is it important, and what could it mean for patients and providers? 
- Machine learning for large-scale wearable sensor data in Parkinson's Disease: Concepts, promises, pitfalls, and features 
- Sparsifying machine learning models identify stable subsets of predictive features for behavioral detection of autism 
- Blood biomarker discovery for Autism Spectrum Disorder: A proteomic analysis 
- Prediction of periventricular leukomalacia in neonates after cardiac surgery using machine learning algorithms 
- Application of Artificial Intelligence in pediatrics: Past, present and future 
- Artificial Intelligence in healthcare: Past, present and future 
- Artificial Intelligence in medicine 
- Top 10 AI trends to watch out for in 2020 
- Employees want more AI in the workplace 
- Employees want more AI to boost productivity, study finds 
- Comparing fully automated state-of-the-art cerebellum parcellation from magnetic resonance images 
- Robust automatic breast cancer staging using a combination of functional genomics and imageomics 
- Interview on collaboration between Modal Technology and 
- Three ways AI can protect and bring costs down during challenging times 
- Transforming healthcare with AI: The impact on the workforce and organizations 
- Top five Artificial Intelligence trends affecting leadership & management 
- Hyper-personalization: Customizing service with AI 
- Leading your organization to responsible AI 
- How AI bots and voice assistants reinforce gender bias 
- Racial disparities in automated speech recognition 
- The accuracy, fairness, and limits of predicting recidivism 
- Artificial Intelligence is a work in progress, official says 
- Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI 
- Using Artificial Intelligence to promote diversity 
- The challenges and benefits of adopting AI in STEM education 
- Global knowledge management research: A bibliometric analysis 
- Effective and natural human-robot interaction requires multidisciplinary research 
- Interdisciplinary research in Artificial Intelligence: Challenges and opportunities 
- How cross-functional interactions can boost collaboration