Background: Management information systems (MIS) are pivotal to the efficient and effective running of Further Education and Training (FET) colleges. The evaluation of MIS success is therefore an essential spoke in the wheel of FET college success. An extensive literature review concluded that no MIS success evaluation model for FET colleges in South Africa exists.

Objectives: The main objective was to propose an MIS evaluation model and evaluation tool (questionnaire), and to verify the model empirically by evaluating the MIS at a selected FET college. The supporting objectives were: firstly, to identify the most appropriate MIS evaluation models in the literature; secondly, to propose an MIS evaluation model for FET colleges based on the literature; thirdly, to develop the evaluation tool (questionnaire) based on these models; fourthly, to capture and analyse data from one FET college in order to evaluate the performance of its MIS; and finally, to evaluate the proposed model by triangulating the findings from the survey with the findings from the interviews.

Method: The proposed MIS evaluation model is based on the integration of three existing MIS evaluation models. The evaluation tool was developed by combining four empirically tested questionnaires that capture the constructs in the underlying models. A survey and semi-structured interviews were used as data collection methods. Statistical tests for internal consistency and scale reliability (Cronbach’s alpha) and unidimensionality (Principal Component Analysis) were applied to explore the constructs in the model.

Results: Results from the empirical testing of the newly designed evaluation tool were used to refine the initial model. The qualitative data capturing and analysis added value by explaining and contextualising the quantitative findings.

Conclusion: The main contribution is the SA-FETMIS success model and evaluation tool, which managers can use to evaluate the MIS at an educational institution. The novelty of the research lies in using a mixed methods approach, where previous MIS success evaluation studies mainly used quantitative methods.
The South African National Department of Education has committed to the establishment of a standardised business management information system in all public FET colleges that will enable colleges to monitor and account for all their administrative business processes, which include student administration, academic administration, financial administration, human resource management and development, and asset management (Department of Education 2008; Department of Higher Education and Training 2011). The monitoring and evaluation of key success indicators is not only essential for the management of a specific FET college, but is also of critical importance for the Department of Higher Education and Training (DHET) in evaluating its own successes. The problem is that no documented evaluation model or tool to evaluate the success of MIS at public FET colleges in South Africa could be found. Therefore, there is a need to design and develop such an evaluation model and tool, which can be used by managers of FET colleges as well as by the DHET to ensure that the systems at all colleges adhere to the same principles of evaluation.

This study constructed a conceptual framework that informs the design of an IS evaluation tool by drawing on the knowledge and trends in the field of information systems evaluation and taking into account the requirements of South African policy with regard to the administration and functioning of public FET colleges.
Reviewing information systems evaluation models
Evaluation research applies social science procedures to assess the conceptualisation, design, implementation and utility of social intervention programmes (Rossi & Freeman in Babbie & Mouton 2001:335). Furthermore, evaluation studies have three main purposes, namely, (1) to judge merit or worth, (2) to improve programmes and (3) to generate knowledge (Lange & Luescher 2003).

Table 1 provides a synthesised overview of IS success evaluation theories and the models based on the theories. From the table it can be observed that the following theories have been used:
• the theory of reasoned action
• the theory of planned behaviour
• the theory of beliefs and attitudes
• the behavioural theory of the firm and the mathematical theory of communications.
TABLE 1: Synthesised overview of information systems success evaluation models and their underlying theoretical frameworks.
Based on these theories, the models proposed to evaluate the performance of IS are:
• the DeLone and McLean IS Success model (D&M IS Success Model)
• the Technology Acceptance Model (TAM)
• the Task-Technology Fit model (TTF)
• the End User Computing Satisfaction model (EUCS).

Many researchers in the field of IS evaluation have conducted empirical studies based on portions, combinations or extensions of these models (Chow 2004; Gable, Sedera & Chan 2008; Ifinedo, Rapp, Ifinedo & Sundberg 2010; Ong, Day & Hsu 2009; Palmius 2007; Petter & McLean 2009; Rai, Lang & Welker 2002; Seddon 1997). As illustrated in Table 1, IS evaluation models are based on either one theory or a combination of theories. This raised the question: which model, extension or combination would be suitable for this study? The following eight models were considered in more detail to make an informed decision in this regard:
• the Technology Acceptance Model (TAM) with its extensions (TAM2, UTAUT, TAM3)
• the Wixom and Todd model
• the Task-Technology Fit (TTF) model
• the Original DeLone and McLean (D&M) IS Success model
• the Updated DeLone and McLean (D&M) IS Success model
• the Model of User Satisfaction
• the Re-specified Model of IS success
• the End-user Computing Satisfaction model (EUCS).

Three models, namely, the Original D&M IS Success model, the Updated D&M IS Success model and the End-user Computing Satisfaction model, were selected as most appropriate and were integrated to develop the proposed conceptual model for this study. The selection was based on criteria for the evaluation of theories and models in the Information Systems discipline, namely, importance, level, novelty, parsimony and falsifiability (Weber 2012). The selected models are now discussed in more detail to show why they are deemed to meet these criteria.

The original D&M taxonomy was based on Richard Mason’s modification of Shannon and Weaver’s (1949) mathematical theory of communications, which identified three levels of information:
• the technical level (the accuracy and efficiency of the system that produces the information)
• the semantic level (the information’s ability to transfer the intended message)
• the effectiveness level (the information’s impact on the receiver) (Shannon & Weaver 1949).

Mason (1978) adapted this theory for IS and expanded the effectiveness level into three categories:
1. receipt of information
2. influence on the recipient
3. influence on the system.

The Original DeLone and McLean (D&M) IS Success model identified six variables of success, namely:
• system quality
• information quality
• use
• user satisfaction
• individual impact
• organisational impact.

‘System quality’ was equivalent to the technical level of communication, whilst ‘information quality’ was equivalent to the semantic level of communication. The other four variables were mapped to Mason’s subcategories of the effectiveness level: ‘use’ related to Mason’s receipt of information; ‘user satisfaction’ and ‘individual impact’ were associated with the information’s influence on the recipient; and ‘organisational impact’ was the influence of the information on the system. Figure 1 shows the Original D&M IS Success model.
Based on further research, the Original D&M IS Success model was updated to the model shown in Figure 2. In the updated model, quality comprises information quality, system quality and service quality; the inclusion of service quality was therefore a key addition (DeLone & McLean 2003). DeLone and McLean also recommended assigning different weights to system quality, information quality and service quality, depending on the context and application of the model (DeLone & McLean 2003).
Doll and Torkzadeh (1988) investigated end-user computing satisfaction by contrasting traditional and end-user computing environments, and reported on the development of an instrument which merges ease-of-use and information-product items to measure the satisfaction of users who interact directly with the computer for a specific application. Figure 3 illustrates the model, the list of questions used and the underlying factors or components of end-user computing satisfaction identified by factor analysis (content, accuracy, format, ease of use and timeliness).
FIGURE 3: A model for measuring end-user computing satisfaction.
Having considered the Original D&M IS Success model, the Updated D&M IS Success model and the End-user Computing Satisfaction model in more detail, the selection of these models against the criteria proposed by Weber (2012) is now discussed:
• Importance: All three models can be considered important based on the importance of their focal phenomenon (as depicted in Figure 1, Figure 2 and Figure 3) for MIS evaluation. Furthermore, the models have all been applied and cited.
• Level: All three models are based on macro-level theories that cover a broad range of phenomena with a high level of generality. However, the constructs are defined precisely enough to allow empirical testing.
• Falsifiability: The specification of the constructs and the relationships between them makes it possible to do empirical testing that may disconfirm the theory.
• Novelty: The Original D&M IS Success model was novel in that it proposed novel relationships between the constructs. The Updated D&M IS Success model was novel in proposing new constructs (i.e. service quality). The End-user Computing Satisfaction model was novel in focusing on the end-user.
• Parsimony: All three models have been subjected to quantitative analysis to ensure that the constructs satisfy internal validity without any redundancy.

Furthermore, Rai, Lang and Welker (2002) compared the original D&M model (1992) to the re-specified D&M IS Success model created by Seddon (1997) and found that the original model outperformed the Seddon model. Sedera, Gable and Chan (2004) tested several IS success models, including the D&M and Seddon models, against empirical data and determined that the D&M model provided the best fit for measuring enterprise systems success (Petter, DeLone & McLean 2008). This provides further support for the importance of the D&M-based models.

The End-user Computing Satisfaction model was selected for two reasons:
• Firstly, because the broad concept of ‘use’ (or intention to use) as a measure of IS success only makes sense for voluntary or discretionary users, as opposed to captive users, this construct (use) was omitted from the developed model.
• Secondly, the construct ‘user satisfaction’ as proposed in the initial D&M IS Success model was a concept without proposed effectiveness measures; the established End-user Computing Satisfaction model was therefore included to fill this void.

Having met the stated evaluation criteria, these models were selected for their fit to measuring MIS performance in an organisation, whilst models based on other theories, such as the Diffusion of Innovation theory (Rogers 2003; Wejnert 2002), are geared towards explaining how, why and at what rate new ideas and technology spread through cultures, and therefore include the user’s personal decision to adopt an innovation.

In summary, the proposed theoretical model for this study, as depicted in Figure 4, comprises a combination of three models:
• the Original D&M IS Success model
• the Updated D&M IS Success model
• the End-user Computing Satisfaction model.
FIGURE 4: Proposed management information systems success evaluation model.
The Original D&M IS Success model was adapted to include an additional construct, ‘service quality’, which is part of the Updated D&M IS Success model. It was decided to omit the construct ‘use’ and to extend the ‘user satisfaction’ construct in the Original D&M IS Success model by incorporating the End-user Computing Satisfaction model.
Research design

The proposed theoretical model was used to develop the evaluation tool (survey questionnaire) for evaluating the MIS of the selected public FET college. The quantitative data was gathered through a survey strategy using the newly developed questionnaire, and the qualitative data was gathered through semi-structured interviews with key stakeholders.
Population and sampling
Two sampling frames were involved in the study, namely the population of all public FET colleges (50 in total) and the population of MIS users at the selected public FET college. One public FET college (proposed to serve as a benchmark for the FET sector) was purposively sampled by applying the following criteria:
• The college should be one of the top performing public FET colleges.
• The college should be one of the public FET colleges in which the DHET had already implemented the new integrated MIS (there were three at the time). The DHET is currently extending the implementation of this MIS to all public FET colleges, and all staff members are obliged to use the system.

The selected college (referred to as FET College X in accordance with the confidentiality agreement) was selected on the basis of these criteria and also because this specific college was proposed by the head of the FET unit at the DHET (pers. comm., Interview 1, 03 March 2011). The entire population of the second sampling frame, namely all MIS users (N = 163) at the selected public FET college, participated in the survey; hence a 100% response rate was achieved.
Questionnaire design
The questionnaire used in this study (Visser 2011) consists of four sections that respectively cover questions on identification and consent, employment information, MIS evaluation, and personal information. The section of the questionnaire which investigates the evaluation of the MIS was developed by adapting and selecting questions from four standardised, empirically tested questionnaires. That section consists of 42 items presented in a frequency-of-use Likert rating scale format, in which participants had to rate each item on a scale of 1 to 5, where 1 equals almost never; 2 equals some of the time; 3 equals about half of the time; 4 equals most of the time; and 5 equals almost always. Each MIS evaluation construct was generated by calculating the mean of the underlying items for each participant. The proposed conceptual model should therefore be studied in conjunction with the effectiveness measures included in the evaluation tool (questionnaire).
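To illustrate the scoring rule just described, the following is a minimal sketch, not the study’s actual scripts, of how a construct score could be computed as the per-participant mean of its underlying Likert items. The item names and responses are hypothetical; the analysis in the study itself was done with MS Access, SPSS and MS Excel (see the next section).

```python
import pandas as pd

# Hypothetical responses: one row per participant, one column per item,
# each rated on the 5-point frequency scale (1 = almost never,
# 3 = about half of the time, 5 = almost always).
responses = pd.DataFrame({
    "serq1": [4, 5, 3], "serq2": [4, 4, 4], "serq3": [5, 5, 2],
    "serq4": [3, 4, 4], "serq5": [4, 5, 3],
})

# Map each construct to the items assumed to underlie it (here only
# 'service quality', which the paper computes from five items).
constructs = {"serq": ["serq1", "serq2", "serq3", "serq4", "serq5"]}

# A construct score per participant is the mean of its underlying items.
scores = pd.DataFrame({
    name: responses[items].mean(axis=1)
    for name, items in constructs.items()
})
print(scores.round(2))  # e.g. serq = 4.0, 4.6, 3.2
```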
Data management and analysis
The quantitative data capturing, preliminary data cleaning and some of the exploratory data analysis were done with MS Access 2007. Further in-depth exploratory and inferential data analysis, which entailed the application of statistical techniques and procedures, was conducted with SPSS version 19. Additional mathematical calculations and graphical representations of the data were done with MS Excel 2007. Statistical techniques and tests applied to the data included frequency tables, Principal Component Analysis (PCA), Pearson’s chi-square tests of statistical significance and Cronbach’s alpha.

Ethical clearance for the research was granted by the research ethical clearance committees of the Human Sciences Research Council and the University of South Africa.
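As a concrete illustration of the reliability step mentioned above, here is a minimal sketch of Cronbach’s alpha computed directly from an item-response matrix. The formula is the standard one; the data and the 0.7 rule of thumb in the comment are illustrative assumptions, not values from the study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) rating matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data only: five participants rating three items on the
# 5-point scale used in the questionnaire.
ratings = np.array([
    [4, 4, 5],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 4],
])

# Values of roughly 0.7 and above are conventionally taken to indicate
# acceptable internal consistency.
print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f}")
```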
Results

This section presents the results of the study by firstly giving a brief description of the biographical characteristics of the users of the MIS at the college; secondly, motivating changes to the initial conceptual model; thirdly, providing summary results on the measurements of the different IS evaluation constructs; and finally, providing summarised results on the triangulation of the qualitative and quantitative data analyses.
Profile of system users
The gender distribution of the respondents was almost equal, with 58% (94 participants) being women and 42% (69 participants) men. Fifty-two per cent of the participants were lecturing staff, 37% support staff and 11% management staff. The mean age of all participants was 35, with just over half of the participants (56%) being younger than 35 years. The average ages of support, lecturing and management staff were 31, 36 and 44 years respectively. More than half of the participants (57%) had a diploma or occupational certificate as their highest academic qualification. This is not surprising, because FET colleges focus primarily on offering vocational education.
Statistical analyses suggest changes to the initial conceptual model
The data analysis provided evidence for adaptations and extensions to the proposed theoretical model. Before each construct variable was calculated, tests for internal consistency and scale reliability (Cronbach’s alpha) and unidimensionality (Principal Component Analysis, PCA) were done. Based on the results of the Principal Component Analyses, which measure unidimensionality, and the reliability statistic (as presented in Table 2 and discussed in the next section), the following changes to the initial conceptual model were suggested (an illustrative sketch of such a component check follows the list):
• The construct ‘information quality’ has two underlying components, namely, data quality and output quality.
• The construct ‘system quality’ has two underlying components, namely, ease of access and ease of functioning.
• The tests revealed that the construct ‘user satisfaction’ consists of three instead of five underlying components, namely, ease of use, content and format.
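The sketch below illustrates, under stated assumptions, how such underlying components can be detected: a PCA is run on standardised items and the number of components with an eigenvalue above 1 (the Kaiser criterion) is counted. The simulated data and the retention criterion are illustrative choices; the paper does not specify the exact extraction settings used in SPSS.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Simulated item matrix: 100 participants x 6 items, constructed so the
# first three items load on one latent factor and the last three on
# another, i.e. the scale is deliberately NOT unidimensional.
f1 = rng.normal(size=(100, 1))
f2 = rng.normal(size=(100, 1))
items = np.hstack([
    f1 + 0.5 * rng.normal(size=(100, 3)),
    f2 + 0.5 * rng.normal(size=(100, 3)),
])

# PCA on standardised items: the explained variances are the eigenvalues
# of the item correlation matrix.
z = StandardScaler().fit_transform(items)
eigenvalues = PCA().fit(z).explained_variance_

# Kaiser criterion: retain components with eigenvalue > 1. Two retained
# components would suggest two sub-constructs, as reported above for
# 'information quality' and 'system quality'.
print("eigenvalues:", np.round(eigenvalues, 2))
print("components retained:", int((eigenvalues > 1).sum()))
```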
TABLE 2: Management information systems evaluation construct measurements and reliability statistic.
Each construct depicted in the conceptual model was evaluated by using the ratings of all the MIS users on a number of items and was calculated as follows (the total variance of the sample explained, as well as the number of items used in calculating the variable, is given):
• Individual impact (indi) explains 78.5% based on the mean of five items.
• Information quality (infq) explains 74.5% based on the mean of eleven items.
• System quality (sysq) explains 66.0% based on the mean of twelve items.
• Service quality (serq) explains 79.4% based on the mean of five items.
• Organisational impact (orgi) explains 72.3% based on the mean of eight items.
• End-user computing satisfaction (eucs) explains 73.7% based on the mean of thirteen items.
• Overall IS performance (bmseval) explains 80.8% and was created by calculating the mean of the 41 items that contributed to creating indi, infq, sysq, serq, orgi and eucs.

The adapted and extended SA-FETMIS success model is depicted in Figure 5.
FIGURE 5: Conceptual model for evaluation of management information systems performance at public Further Education and Training College X – The SA-FETMIS success model.
Service providers received highest scores
Table 2 provides the mean scores (evaluation measurements) calculated for each construct in the adapted conceptual model. The main MIS evaluation constructs are shaded in Table 2; the unshaded variables are components that underlie the main constructs. The main constructs are sorted in descending order according to their mean scores.

The overall mean of the performance of the MIS (bmseval) was calculated at 3.61, suggesting that the system users were satisfied with the system between about half of the time and most of the time. This indicates that there is room for improvement in the overall performance of the system. The scores of the other evaluation indicators provide more detail on the specific aspects of the MIS that need improvement. In summary, Figure 6 depicts the evaluation profile of FET College X on all evaluation constructs that were measured and shows that the quality of the services rendered is valued most highly.
FIGURE 6: Evaluation profile of the management information systems at public Further Education and Training College X.
As can be seen in Figure 6, all constructs (dimensions) of the MIS were rated between 3.76 (the highest value) and 3.44 (the lowest value), showing relatively similar average performance on all aspects of the system. This similarity indicates that further differentiated analysis with regard to different groups or staff characteristics (rank, demography, job description, etc.) needs to be done to investigate differences in ratings within and between groups. It also shows, in general, that attention should be given to all aspects in order to move the performance of the system to the next level, where the system performs well most of the time in all the dimensions tested.
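One way to carry out such a differentiated analysis is with the Pearson chi-square test already listed under data management and analysis. The sketch below is a hypothetical example, not an analysis from the study: it crosstabulates staff category against overall ratings banded into ‘low’ and ‘high’ and tests whether the two are independent.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Invented example data: staff category per user, and the user's overall
# MIS score banded into 'low' (< 3.5) and 'high' (>= 3.5).
df = pd.DataFrame({
    "category": ["lecturing"] * 6 + ["support"] * 6 + ["management"] * 4,
    "band": ["low", "high", "high", "low", "low", "high",
             "high", "high", "high", "low", "high", "high",
             "high", "high", "low", "high"],
})

# Observed counts per staff group and rating band.
table = pd.crosstab(df["category"], df["band"])

# Pearson chi-square test of independence between group and rating band.
chi2, p, dof, _ = chi2_contingency(table)
print(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# A small p-value (e.g. < 0.05) would suggest rating patterns differ
# between staff groups, warranting closer inspection.
```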
Triangulation results
As noted, the quantitative findings were triangulated with the findings from seven semi-structured interviews held with system users, who included lecturers, administrators, an IT manager, an MIS manager and two external stakeholders. The qualitative data was analysed using thematic analysis. The findings related to the constructs from the model, as explicated in Table 2 and Figure 5, are presented in Table 3.
TABLE 3: Triangulation of quantitative and qualitative data with regard to evaluation constructs.
Triangulation of the quantitative and qualitative findings, as in Table 3, can be summarised as follows:
• The MIS is usable, provided training is given before use.
• Although the construct ‘organisational impact’ received an average rating in the quantitative data analysis, interviewees unanimously applauded the system as a valuable asset to the college.
• The system adds much value to the management of the college in terms of the monitoring and evaluation of key indicators, suggesting satisfaction with information quality and with the quality of the format of output reports.
• The MIS is perceived to have a higher impact on the individual development and performance of staff who use the system more extensively (administrators) than of those who use it less extensively (lecturers).
• Data triangulation confirms the need for more systematic data quality control procedures at the college.
Conclusion

This study proposed an IS evaluation model, developed a tool based on extant models and tools, and empirically tested the proposed model and tool by evaluating the performance of the MIS at public FET College X. A mixed methods approach was used, which distinguishes this study from previous research in IS evaluation where only quantitative methods were applied. The quantitative data was used to test the proposed model, including the composition of the constructs. The qualitative findings were triangulated with the quantitative findings to explain and contextualise them and to make sense of them from a management perspective.

This paper makes a theoretical contribution by presenting the SA-FETMIS success model, supported by the survey tool, for evaluating MIS performance at a public FET college. The changes to the initial model clearly reflect the FET context. For example, the construct ‘information quality’ is decomposed into two underlying components, namely, data quality and output quality, which resonates with the focus on reporting evident from the qualitative findings. The construct ‘system quality’ has two underlying components, namely, ease of access and ease of functioning, which reflects infrastructural issues. Having user satisfaction represented by three instead of five underlying components adds to the parsimony of the model.

The practical contribution lies in the usefulness of the model and tool at organisational and managerial levels, where managers can apply the SA-FETMIS success model for MIS success evaluation. Further testing is needed to validate the SA-FETMIS success model and to verify its general applicability in measuring MIS performance at educational institutions.
Acknowledgements

We wish to thank the CEO and staff of public FET College X for their support and extensive contribution to the study, and the University of South Africa and the HSRC for financial support of the study.
Competing interests
The authors declare that they have no financial or personal relationship(s) which may have inappropriately influenced them in writing this paper.
Authors’ contributions
The study was conducted as part of the master’s degree requirements of M.V. (HSRC). J.v.B. (University of South Africa) acted as the supervisor and M.H. (CSIR) as the joint supervisor for the study.
References

Babbie, E. & Mouton, J., 2001, The practice of social research, Oxford University Press, Cape Town.
Bailey, J.E. & Pearson, S.W., 1983, ‘Development of a tool for measuring and analysing computer user satisfaction’, Management Science 29(5), 530–545. http://dx.doi.org/10.1287/mnsc.29.5.530
Chow, W.S., 2004, ‘An exploratory study of the success factors for extranet adoption in E-supply chain’, Journal of Global Information Management 12(4), 60. http://dx.doi.org/10.4018/jgim.2004010104
Davis, F.D., Bagozzi, R.P. & Warshaw, P.R., 1989, ‘User acceptance of computer technology: A comparison of two theoretical models’, Management Science 35, 982–1003. http://dx.doi.org/10.1287/mnsc.35.8.982
DeLone, W. & McLean, E., 1992, ‘Information Systems success: The quest for the dependent variable’, Information Systems Research 3(1), 60–95. http://dx.doi.org/10.1287/isre.3.1.60
DeLone, W.H. & McLean, E.R., 2003, ‘The DeLone and McLean model of Information Systems Success: A ten-year update’, Journal of Management Information Systems 19(4), 9–30.
Department of Education, 2008, National Plan for Further Education and Training Colleges in South Africa, Department of Education, Pretoria.
Department of Higher Education and Training, 2011, Revised Strategic Plan, 2010/11 – 2014/15, and operational plans for the 2011/12 financial year, Department of Higher Education and Training, Pretoria.
Dishaw, M.T., Strong, D.M. & Bandy, D.B., 2002, ‘Extending the Task-Technology Fit Model with self-efficacy constructs’, Human-Computer Interaction Studies in MIS, Eighth Americas Conference on Information Systems, viewed 21 May 2011, from http://sigs.aisnet.org/SIGHCI/amcis02_minitrack/RIP/Dishaw.pdf
Doll, W.J. & Torkzadeh, G., 1988, ‘The measurement of End-User Computing Satisfaction’, MIS Quarterly 12(2), 259–274. http://dx.doi.org/10.2307/248851
Fishbein, M. & Ajzen, I., 1975, Belief, attitude, intention, and behavior: An introduction to theory and research, Addison-Wesley, Reading, MA.
Gable, G.G., Sedera, D. & Chan, T., 2008, ‘Re-conceptualizing Information System Success: The IS-Impact Measurement Model’, Journal of the Association for Information Systems 9(7), 377–408.
Goodhue, D.L. & Thompson, R.L., 1995, ‘Task-Technology Fit and Individual Performance’, MIS Quarterly 19(2), 213–236. http://dx.doi.org/10.2307/249689
Ifinedo, P., Rapp, B., Ifinedo, A. & Sundberg, K., 2010, ‘Relationships among ERP post-implementation success constructs: An analysis at the organizational level’, Computers in Human Behavior 26, 1136–1148. http://dx.doi.org/10.1016/j.chb.2010.03.020
Lange, L. & Luescher, T.M., 2003, ‘A monitoring and evaluation system for South African higher education: Conceptual, methodological and practical concerns’, South African Journal of Higher Education 17(3), 82–89.
LaPiere, R.T., 1934, ‘Attitudes vs. actions’, Social Forces 13, 230–237. http://dx.doi.org/10.2307/2570339
Mason, R.O., 1978, ‘Measuring information output: A communication systems approach’, Information and Management 1(4), 219–234. http://dx.doi.org/10.1016/0378-7206(78)90028-9
Ong, C.S., Day, M.Y. & Hsu, W.L., 2009, ‘The measurement of user satisfaction with question answering systems’, Information & Management 46, 397–403. http://dx.doi.org/10.1016/j.im.2009.07.004
Palmius, J., 2007, ‘Criteria for measuring and comparing information systems’, Proceedings of the 30th Information Systems Research Seminar in Scandinavia (IRIS 2007), Murikka, Tampere, Finland, 11–14 August 2007, n.p.
Petter, S., DeLone, W. & McLean, E.R., 2008, ‘Measuring information systems success: Models, dimensions, measures, and interrelationships’, European Journal of Information Systems 17, 236–263. http://dx.doi.org/10.1057/ejis.2008.15
Petter, S. & McLean, E.R., 2009, ‘A meta-analytic assessment of the DeLone and McLean IS Success Model: An examination of IS success at the individual level’, Information & Management 46(3), 159–166. http://dx.doi.org/10.1016/j.im.2008.12.006
Rai, A., Lang, S.S. & Welker, R.B., 2002, ‘Assessing the validity of IS Success Models: An empirical test and theoretical analysis’, Information Systems Research 13(1), 50–69. http://dx.doi.org/10.1287/isre.13.1.50.96
Rogers, E.M., 2003, Diffusion of innovations, 5th edn., Free Press, New York, NY.
Seddon, P.B., 1997, ‘A respecification and extension of the DeLone and McLean Model of IS Success’, Information Systems Research 8(3), 240–253. http://dx.doi.org/10.1287/isre.8.3.240
Seddon, P.B. & Kiew, M.Y., 1996, ‘A partial test and development of DeLone and McLean’s Model of IS Success’, Australian Journal of Information Systems 4(1), 90–109.
Sedera, D., Gable, G.G. & Chan, T., 2004, ‘A factor and structural equation analysis of the Enterprise Systems Success Measurement Model’, Proceedings of the Americas Conference on Information Systems, New York, USA.
Shannon, C.E. & Weaver, W., 1949, The Mathematical Theory of Communication, University of Illinois Press, Urbana, IL.
Venkatesh, V. & Bala, H., 2008, ‘Technology Acceptance Model 3 and a research agenda on interventions’, Decision Sciences 39, 273–315. http://dx.doi.org/10.1111/j.1540-5915.2008.00192.x
Venkatesh, V. & Davis, F.D., 2000, ‘A theoretical extension of the Technology Acceptance Model: Four longitudinal field studies’, Management Science 46, 186–204. http://dx.doi.org/10.1287/mnsc.46.2.186.11926
Venkatesh, V., Morris, M.G., Davis, F.D. & Davis, G.B., 2003, ‘User acceptance of Information Technology: Toward a unified view’, MIS Quarterly 27, 425–478.
Visser, M.M., 2011, ‘Towards developing an evaluation tool for business management information systems’ success at public Further Education and Training (FET) colleges in South Africa’, MSc thesis, University of South Africa, South Africa.
Weber, R., 2012, ‘Evaluating and developing theories in the Information Systems discipline’, Journal of the Association for Information Systems 13(1), 1–30.
Wejnert, B., 2002, ‘Integrating models of diffusion of innovations: A conceptual framework’, Annual Review of Sociology 28, 297–306. http://dx.doi.org/10.1146/annurev.soc.28.110601.141051
Wixom, B.H. & Todd, P.A., 2005, ‘A theoretical integration of user satisfaction and technology acceptance’, Information Systems Research 16(1), 85–102. http://dx.doi.org/10.1287/isre.1050.0042