Open Access

Implications of e-learning systems and self-efficacy on students' outcomes: a model approach

Human-centric Computing and Information Sciences 2012, 2:6

DOI: 10.1186/2192-1962-2-6

Received: 12 November 2011

Accepted: 15 March 2012

Published: 15 March 2012

The Retraction Note to this article has been published in Human-centric Computing and Information Sciences 2013 3:11

Abstract

Background

This paper presents a model approach to examine the relationships among e-learning systems, self-efficacy, and students' perceived learning outcomes in university online courses.

Methods

Independent variables included in this study are e-learning system quality, information quality, computer self-efficacy, system use, self-regulated learning behavior, and user satisfaction as prospective determinants of online learning outcomes. A total of 674 responses from students who had completed at least one online course at Wawasan Open University (WOU), Malaysia, were used to fit the path analysis model.

Results

The results indicated that system quality, information quality, and computer self-efficacy all affected students' system use, user satisfaction, and self-regulated learning behavior.

Conclusion

The proposed path-analytic model suggests that the hypothesized variables are useful predictors of e-learning outcomes.

Keywords

E-learning systems; System quality; Information quality; User satisfaction; Self-regulated learning behavior

Background

An important goal of e-learning systems is to deliver instruction that produces outcomes equal to or better than those of face-to-face learning. To that end, an increasing number of empirical studies over the past decades have addressed which antecedent variables affect students' satisfaction and learning outcomes, and have examined potential predictors of e-learning outcomes [1, 2]. A primary theme of e-learning systems research has been the empirical study of the effects of information technology, instructional strategies, and the psychological processes of students and instructors on student satisfaction and e-learning outcomes in university online education.

The research model we developed is a blend of the management information systems (MIS) success model [3], the conceptual model of Piccoli et al. [4], and the e-learning success model of Holsapple and Lee-Post [5]. Based on a review of 180 empirical studies, DeLone and McLean presented an integrated view of the concept of information systems (IS) success and formulated a comprehensive model of IS success. Their model identified six interrelated and interdependent constructs: system quality, information quality, use, user satisfaction, individual impact, and organizational impact. DeLone and McLean's [6] model has since been extended and adapted to e-learning settings by many researchers. Holsapple and Lee-Post [5] adapted it to propose an e-learning success model (Figure 1), which consists of three antecedent constructs (system quality, information quality, and service quality), two intervening constructs (system use and user satisfaction), and an outcome construct measuring academic success and system efficiency and effectiveness (Figure 2).

The primary objective of this study is to investigate the determinants of students' perceived learning outcomes and satisfaction in university online education using e-learning systems. Drawing on the extant literature, we begin by introducing and discussing a research model of the variables affecting e-learning system outcomes and user satisfaction. We then describe the cross-sectional survey used to collect data and the results of a path analysis model. In the final section, we outline the implications of the results for higher education institutions.
https://static-content.springer.com/image/art%3A10.1186%2F2192-1962-2-6/MediaObjects/13673_2011_Article_10_Fig1_HTML.jpg
Figure 1

E-learning success model and sample matrix. Source: [5].

https://static-content.springer.com/image/art%3A10.1186%2F2192-1962-2-6/MediaObjects/13673_2011_Article_10_Fig2_HTML.jpg
Figure 2

Research model.

E-learning systems and outcomes

The e-learning systems field has accumulated a considerable body of literature over the past decade [1, 7, 8]. Nevertheless, little empirical research exists on the relationships among e-learning system quality, the quality of the information these systems produce, and e-learning outcomes. E-learning systems comprise a myriad of subsystems that interact with each other, including both human factors and design factors. Human factors include personality characteristics [9, 10], learning styles [11–14], and instructor attributes [15]. Design factors include a wide range of constructs that affect the effectiveness of e-learning systems, such as technology [5, 16–18], learner control, learning model [19, 20], course content and structure [21–23], and interaction [23–26].

In a study by Eom et al. [13], structural equation modeling was applied to examine the determinants of students' satisfaction and perceived learning outcomes in the context of university online courses. The independent variables were course structure, instructor feedback, self-motivation, learning style, interaction, and instructor facilitation, as potential determinants of online learning. A total of 397 valid, unduplicated responses from students who had completed at least one online course at a university in the Midwest were used to examine the structural model. The results indicated that all of the antecedent variables significantly affected students' satisfaction, in accordance with the findings and conclusions in the literature on student satisfaction. Of the six antecedent variables hypothesized to affect perceived learning outcomes, however, only instructor feedback and learning style were significant. The structural model results also revealed that user satisfaction is a significant predictor of learning outcomes. The findings suggest that online education can be a superior mode of instruction when it is targeted to learners with specific learning styles (visual and read/write) and offers timely, meaningful instructor feedback of various types.

Of the six factors hypothesized to affect perceived learning outcomes, only two (learning styles and instructor feedback) were supported. Contrary to previous research [27], Eom et al. [13, 28] found no support for a positive relationship between interaction and perceived learning outcomes. One possible explanation is that the study did not account for the quality or purpose of the interactions. Although a student's perception of interaction with instructors and other students matters for satisfaction with the overall online learning experience, when the purpose of online interaction is to create a sense of personalization and customization of learning and to help students overcome feelings of remoteness, it may have little effect on perceived learning outcomes. Furthermore, a well-designed online course delivery system is likely to reduce the need for interaction between instructors and students. The university under study has a very user-friendly online e-learning system and a strong technical support system, and every class Web site follows a similar design structure, which reduces the learning curve. Contrary to other research findings, no significant relationship was found between students' self-motivation and perceived learning outcomes. Theoretically, self-motivation can lead students to go beyond the scope and requirements of a course because they seek to learn about the subject, not just fulfill a limited set of requirements; it should also encourage learning even when there is little or no external reinforcement, and even in the face of obstacles and setbacks.

This research further extends the study of Eom et al. [13], which did not include several constructs on which this study focuses: the effects of system quality, information quality, self-regulated learning, and self-efficacy on e-learning system use, user satisfaction, and e-learning outcomes. An e-learning system typically consists of a learning management system (LMS) and authoring systems. The LMS stores and delivers the course content and tracks student access and progress; the authoring systems allow instructors to develop content for e-learners.

Related research and hypothesis development

System quality and information quality

The IS success model [3, 6] and the e-learning success model [5] posit that the success of IS and e-learning systems depends on the intervening variables (user satisfaction and system use), which in turn depend on the quality of information, system, and service. The technology acceptance model (TAM), developed in the IS field, has emerged as a useful model for explaining e-learning system usage and satisfaction [29]. The TAM relates system use (the dependent construct) to perceived usefulness and perceived ease of use (two independent constructs): it theorizes that perceived usefulness and perceived ease of use determine an individual's intention to use a system, which in turn determines actual system use. The TAM has been extended by many researchers; the unified theory of acceptance and use of technology (UTAUT) is one such extension. UTAUT posits that four key constructs (performance expectancy, effort expectancy, social influence, and facilitating conditions) directly determine usage intention and behavior [30], and that gender, age, experience, and voluntariness of use moderate the impact of these constructs on usage intention and behavior [30]. Arbaugh [31] found that perceived usefulness and ease of use of Blackboard significantly predicted student satisfaction with the Internet as an educational delivery medium. Thus, we hypothesize:

H1a: E-learning system quality will lead to a higher level of system use.

H1b: E-learning system quality will lead to a higher level of user satisfaction.

H2a: Information quality will lead to a higher level of system use.

H2b: Information quality will lead to a higher level of user satisfaction.

Computer self-efficacy

A goal of empirical e-learning research is the identification and effective management of factors that influence e-learning outcomes [32]. One such factor is the computer self-efficacy of e-learners. Numerous empirical studies have examined the relationships between e-learners' computer self-efficacy and other constructs such as student satisfaction and e-learning outcomes.

Self-efficacy and e-learning system use

Significant positive relationships have been found between self-efficacy and the intention to use e-learning systems: computer self-efficacy, attainment value, utility value, and intrinsic value were significant predictors of individuals' intentions to continue using Web-based learning [33].

Therefore, we hypothesize the following.

H3a: Computer self-efficacy will lead to a higher level of system use.

Self-efficacy and e-learner satisfaction

Johnson et al. [34] found that student self-efficacy and perceived usefulness of the system predicted perceived content value, satisfaction, and learning performance. Other system-related studies have examined attitudes and behaviors influencing course management system usage; significant positive correlations were found among three e-learning variables (self-efficacy, e-learner satisfaction, and perceived usefulness) [35].

Thus, we hypothesize:

H4a: Computer self-efficacy will be positively related to e-learner satisfaction.

Self-efficacy and e-learning outcome

Computer self-efficacy was positively linked to learning outcomes measured by the average test scores in e-learning [36] and in the training literature [37].

Thus, we hypothesize:

H5a: Computer self-efficacy will be positively related to online learning outcomes.

User satisfaction and e-learning outcomes

The study of Eom et al. [13] examined the determinants of students' satisfaction and their perceived learning outcomes in the context of university online courses. It found that all of the antecedent variables (course structure, instructor feedback, self-motivation, learning style, interaction, and instructor facilitation) significantly affected students' satisfaction. Its structural model results also revealed that user satisfaction is a significant predictor of learning outcomes.

Thus, we hypothesize:

H6a: User satisfaction will lead to higher levels of student agreement that the learning outcomes of online courses are equal to or better than those of face-to-face courses.

We further hypothesize:

H6b: Self-regulated learning behavior of e-learners will be positively related to online learning outcomes (measured as agreement that online learning equals the quality of traditional classroom learning).

Survey instrument

The survey instrument consisted of 35 questions answered on a seven-point Likert scale ranging from "strongly disagree" to "strongly agree." In addition, students were asked six demographic questions. The survey was administered online in the fall semester of 2009 at Wawasan Open University (WOU), Malaysia. A total of 2,156 online students were invited to respond; 809 did so, of which 674 surveys were complete and usable, for a response rate of 31.3%. Appendix A summarizes the characteristics of the student sample. For the path analysis, we used only the following seven questions to represent our variables.

◦ System Quality: The system is user-friendly.

◦ Information Quality: The system provides information that is exactly what you need.

◦ System Use: I frequently use the system.

◦ User Satisfaction: Overall, I am satisfied with the system.

◦ Learning Outcome: I feel that online learning is equal to the quality of traditional classroom learning.

◦ Self-regulated Learning Behavior: In my studies, I am self-disciplined and find it easy to set aside reading and homework time.

◦ E-learning System Self-efficacy: I feel confident using a web browser.
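As a minimal illustration of how such single-item responses are coded for analysis, the sketch below maps one respondent's labelled answers onto the 1-7 Likert scale and assembles a numeric record. The variable names and the five intermediate scale labels are our own assumptions; the survey specifies only the "strongly disagree"/"strongly agree" endpoints.

```python
# Hypothetical coding scheme: the paper gives only the scale endpoints,
# so the intermediate labels here are assumed for illustration.
LIKERT = {
    "strongly disagree": 1, "disagree": 2, "somewhat disagree": 3,
    "neutral": 4, "somewhat agree": 5, "agree": 6, "strongly agree": 7,
}

# One assumed variable name per single-item measure listed above.
ITEMS = [
    "system_quality", "information_quality", "system_use",
    "user_satisfaction", "learning_outcome",
    "self_regulated_learning", "self_efficacy",
]

def code_response(raw):
    """Map one respondent's labelled answers to a numeric record."""
    return {item: LIKERT[raw[item]] for item in ITEMS}

record = code_response({
    "system_quality": "agree",
    "information_quality": "somewhat agree",
    "system_use": "strongly agree",
    "user_satisfaction": "agree",
    "learning_outcome": "neutral",
    "self_regulated_learning": "somewhat agree",
    "self_efficacy": "strongly agree",
})
print(record["system_use"])  # → 7
```

Each coded record would supply one row of the data matrix to which the path model is fitted.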

Research model and data

The research model (Figure 2) was tested using path analysis, performed with LISREL 8.70. Path analysis is a technique for assessing the causal contribution of one directly observable variable to other directly observable variables. The model consists of three independent variables (system quality, information quality, and self-efficacy) and four dependent variables (system use, user satisfaction, self-regulated learning behavior, and e-learning outcomes). A total of 674 valid, unduplicated responses from students who had completed at least one online course at Wawasan Open University, Malaysia, were used to fit the path analysis model.
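Conceptually, fitting a path model over observed variables amounts to estimating a set of regression equations, one per endogenous variable. The sketch below estimates one structural equation of the model (system use regressed on system quality, information quality, and self-efficacy) by ordinary least squares on standardized variables. It is illustrative only: the data are synthetic with arbitrary assumed loadings, and this is not the LISREL estimation used in the study.

```python
import random

def standardize(xs):
    """Center and scale a variable to mean 0, SD 1."""
    n = len(xs)
    m = sum(xs) / n
    sd = (sum((x - m) ** 2 for x in xs) / (n - 1)) ** 0.5
    return [(x - m) / sd for x in xs]

def ols(y, predictors):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination.

    With standardized variables and no intercept, the solution b gives the
    standardized path coefficients for one equation of the path model.
    """
    k, n = len(predictors), len(y)
    A = [[sum(predictors[i][t] * predictors[j][t] for t in range(n))
          for j in range(k)] for i in range(k)]
    b = [sum(predictors[i][t] * y[t] for t in range(n)) for i in range(k)]
    for col in range(k):                      # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coefs = [0.0] * k                         # back-substitution
    for r in range(k - 1, -1, -1):
        coefs[r] = (b[r] - sum(A[r][c] * coefs[c]
                               for c in range(r + 1, k))) / A[r][r]
    return coefs

# Synthetic stand-in data: n matches the study's sample, but the 0.2/0.4/0.3
# loadings are arbitrary assumptions, not the paper's estimates.
random.seed(0)
n = 674
sq = standardize([random.gauss(0, 1) for _ in range(n)])
iq = standardize([random.gauss(0, 1) for _ in range(n)])
se = standardize([random.gauss(0, 1) for _ in range(n)])
use = standardize([0.2 * sq[t] + 0.4 * iq[t] + 0.3 * se[t]
                   + random.gauss(0, 1) for t in range(n)])

paths = ols(use, [sq, iq, se])
print([round(p, 2) for p in paths])  # standardized path coefficients
```

A full path analysis repeats this for every endogenous variable and then compares the implied and observed correlation matrices, which is what the fit indices in the next section summarize.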

Model testing and evaluation of goodness of fit statistics

Model testing assesses the fit of the correlation matrix of the sample data against the theoretical causal model built by the researchers from the extant literature [38]. Goodness-of-fit statistics include an extensive array of fit indices, which can be categorized into six subgroups; for a very good overview of LISREL goodness-of-fit statistics, readers are referred to [39]. SEM researchers generally agree that it is not necessary to report every goodness-of-fit statistic from the path analysis output. Although there are no golden rules, Table 1 includes a set of indices that have been frequently reported and recommended in the literature [39, 40], together with our model's fit statistics and the acceptable threshold level for each index. Considering all indices together, the specified model (Figure 2) appears to be supported by the sample data. Because our model is tested on a sample of 674, the chi-square statistic is not a good measure of fit: it nearly always rejects the model when large samples are used [41]. The RMSEA is the second fit statistic reported by the LISREL program; a value close to 0.06 indicates close fit, and values up to 0.08 represent a reasonable error of approximation.
Table 1 The results of the model

Fit Index                     | Our Model Fit Statistics       | Acceptable Threshold Levels
Absolute indices              |                                |
Chi-square (χ2)               | p values are all less than .05 | Low χ2 relative to degrees of freedom, with a non-significant p value (p > 0.05)
RMSEA                         | 0.060                          | less than 0.07
Goodness-of-fit index (GFI)   | 0.99                           | greater than 0.95
Adjusted GFI (AGFI)           | 0.96                           | greater than 0.95
Standardized RMR (SRMR)       | 0.032                          | less than 0.08
Incremental indices           |                                |
Normed fit index (NFI)        | 0.99                           | greater than 0.95
Non-normed fit index (NNFI)   | 0.98                           | greater than 0.95
Comparative fit index (CFI)   | 1.00                           | greater than 0.95
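The RMSEA reported in Table 1 illustrates why it is preferred over the raw chi-square at this sample size: it measures misfit per degree of freedom rather than total misfit. A small sketch of the standard Steiger-Lind formula follows; the chi-square and degrees of freedom used in the example are hypothetical, not the study's actual LISREL output.

```python
import math

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation (Steiger-Lind formula):
    sqrt(max(chi2/df - 1, 0) / (n - 1)). Perfect-or-better fit yields 0."""
    return math.sqrt(max(chi2 / df - 1.0, 0.0) / (n - 1))

# Hypothetical values: a modest misfit per degree of freedom still yields an
# acceptable RMSEA with n = 674, even though a sample that large makes the
# chi-square test itself significant.
print(round(rmsea(chi2=17.0, df=5, n=674), 3))  # → 0.06
```

Because the sample chi-square grows with n for a fixed population misfit, dividing by (n − 1) keeps the RMSEA roughly stable across sample sizes, unlike the chi-square test itself.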

Figure 3 summarizes the path analysis. The bold lines indicate the 10 supported hypotheses; the other lines indicate the 5 hypotheses that were not supported.
https://static-content.springer.com/image/art%3A10.1186%2F2192-1962-2-6/MediaObjects/13673_2011_Article_10_Fig3_HTML.jpg
Figure 3

Summary of the path analysis.

Discussion and analysis

According to recent industry statistics, "the e-learning market in Malaysia is growing approximately 43 percent a year and is expected to reach well beyond $27 billion within the next several years" [42]. Higher education institutions have invested heavily to keep their e-learning management systems up to date. The findings of the current study have significant implications for distance educators, students, and administrators. We have focused on the effect of e-learning management systems on user satisfaction, and on the relationship between user satisfaction and e-learning outcomes. E-learner satisfaction is an important predictor of e-learning outcomes; in turn, system quality, information quality, and self-regulated learning behavior have significant direct impacts on the perceived satisfaction of e-learners. Self-efficacy shows no direct effect on user satisfaction, but it shows an indirect effect via self-regulated learning behavior. It is conceivable that, through this type of research, online learning will be enhanced as critical success factors for e-learning management systems become better understood.

Learning is a complex process of acquiring knowledge or skills involving a learner's biological characteristics/senses (physiological dimension); personality characteristics such as attention, emotion, motivation, and curiosity (affective dimension); information processing styles such as logical analysis or gut feelings (cognitive dimension); and psychological/individual differences (psychological dimension) [43]. Moreover, e-learning outcomes are the result of dynamic interactions among e-learners, instructors, and e-learning systems. This study may therefore be useful as a pedagogical tool for all entities involved in the dynamic learning process. First, university administrators must invest continuously to upgrade their systems so that e-learning systems exhibit faster response times, better accessibility, higher reliability and flexibility, and ease of learning. By doing so, e-learning systems can provide e-learners with information that is accurate, precise, current, reliable, dependable, and useful. This study provides a basis for justifying such technological expenditures at the administrative level.

Second, e-learners must be able to self-manage the entire learning process including self-regulation of behavior, motivation, and cognition, proactively and deliberately. The core of self-regulated learning is self-motivation.

Students' motivation is a major factor affecting completion rates in Web-based courses, and a lack of motivation is linked to high dropout rates [44]. The instructor in e-learning courses should facilitate, stimulate, guide, and challenge students by empowering them with freedom and responsibility. Instructor feedback can improve learners' affective responses, increase cognitive skills and knowledge, and activate meta-cognition, which refers to the awareness and control of cognition through planning, monitoring, and regulating cognitive activities [12].

Third, for the e-learning system to be successful, it should provide e-learners with the information and knowledge they need. As this study indicates, information quality has positive effects on user satisfaction; it also has positive effects on system use, which in turn contributes positively to user satisfaction. However, information quality in e-learning does not depend only on the e-learning management system's software and hardware. It is the instructor who creates the e-learning content that is useful and essential for students' future success. In information systems, the instructor's role as a content creator is even more critical because the field changes so rapidly: assembling daily/weekly reading assignments each semester means selecting chapters, topics within chapters, and project assignments, and creating PowerPoint and supplementary files. Information systems educators continuously witness the emergence of disruptive technologies such as virtualization and cloud computing. In a ranking of the technologies Chief Information Officers (CIOs) selected as their top priorities in 2010, virtualization and cloud computing were the number one and number two priorities; cloud computing was not on the radar in 2007 and 2008 and was a distant 14th in 2009. Cloud computing can help firms do more with less, and the technologies CIOs prioritized in 2010 could be implemented quickly and without significant upfront expense [28]. Yet some introductory information systems textbooks do not mention these topics at all. For this reason, the instructor must play a pivotal role in creating and enhancing the quality of information for e-learners.

Conclusion

Abundant empirical e-learning research points out that superior e-learning outcomes are one of the critical objectives of e-learning research. Our path-analytic model suggests that the hypothesized variables are useful predictors of e-learning outcomes, with three exceptions. The paths from system quality and information quality to user satisfaction, from system use to user satisfaction, and from user satisfaction to e-learning outcomes were significant, as hypothesized by the DeLone-McLean (DM) model. On the other hand, the paths from system quality to system use, from system use to e-learning outcomes, and from self-efficacy to user satisfaction were not significant. This negative finding may be explained by the mandatory nature of the e-learning system, in accordance with the findings of Iivari [45], who tested the DM model in a mandatory city government information system context. System use is the pivot of the DM model: system use, either actual or perceived, is one of the most frequently reported and most objective measures of MIS success in empirical research [3], in voluntary IS use contexts. The DM model has been empirically tested using structural equation modeling in a quasi-voluntary IS use context [46] and in a mandatory information system context [45]. Nevertheless, as DeLone and McLean [3] repeatedly point out, the usage of information and systems is only relevant when such use is voluntary. E-learning systems are mandatory systems: regardless of the quality of the e-learning management system, e-learners must use it. We therefore suggest that future empirical e-learning studies exclude the "system use" construct from the model.

Notes

Declarations

Acknowledgements

This research was supported by a TWAS Research and Advanced Training Fellowship (2010 cycle) conducted at UTM, Malaysia. Special thanks to the Faculty of Computer Science and Information Systems, Universiti Teknologi Malaysia (UTM), for facilities, research guidance, and comments on the material and structure of this research.

Authors’ Affiliations

(1)
Faculty of Computer Science and Information Systems, Universiti Teknologi Malaysia

References

  1. Ahmed HMS: Hybrid e-learning acceptance model: learner perceptions. Decis Sci J Innov Educ 2010, 8(2):313–346. doi:10.1111/j.1540-4609.2010.00259.x
  2. Saba T, Rehman A, Sulong G: ITS: using A.I. to improve character recognition of students with intellectual disabilities. Int Conf Softw Eng Comput Syst, UMP Malaysia 2009, 1:6–9.
  3. DeLone WH, McLean ER: Information systems success: the quest for the dependent variable. Inf Syst Res 1992, 3(1):60–95. doi:10.1287/isre.3.1.60
  4. Piccoli G, Ahmad R, Ives B: Web-based virtual learning environments: a research framework and a preliminary assessment of effectiveness in basic IT skills training. MIS Q 2001, 25(4):401–426. doi:10.2307/3250989
  5. Holsapple CW, Lee-Post A: Defining, assessing, and promoting e-learning success: an information systems perspective. Decis Sci J Innov Educ 2006, 4(1):67–85. doi:10.1111/j.1540-4609.2006.00102.x
  6. DeLone WH, McLean ER: The DeLone and McLean model of information systems success: a ten-year update. J Manag Inf Syst 2003, 19(4):9–30.
  7. Elarbi-Boudihir M, Rehman A, Saba T: Video motion perception using operation gabor filter. Int J Phys Sci 2011, 6(12):2799–2806.
  8. Haron H, Rehman A, Adi DIS, Lim SP, Saba T: Parameterization method on B-spline curve. Math Probl Eng 2012. doi:10.1155/2012/640472
  9. Rahim MSM, Rehman A, Faizal-Ab-Jabal M, Saba T: Close spanning tree approach for error detection and correction for 2D CAD drawing. Int J Acad Res 2011, 3(4):525–535.
  10. Schniederjans MJ, Kim EB: Relationship of student undergraduate achievement and personality characteristics in a total web-based environment: an empirical study. Decis Sci J Innov Educ 2005, 3(2):205–221. doi:10.1111/j.1540-4609.2005.00067.x
  11. Bekhti S, Rehman A, Al-Harbi M, Saba T: AQuASys: an Arabic question-answering system based on extensive question analysis and answer relevance scoring. Int J Acad Res 2011, 3(4):45–54.
  12. Drago WA, Wagner RJ: VARK preferred learning styles and online education. Manage Res News 2004, 27(7):1–13. doi:10.1108/01409170410784211
  13. Eom SB, Ashill N, Wen HJ: The determinants of students' perceived learning outcome and satisfaction in university online education: an empirical investigation. Decis Sci J Innov Educ 2006, 4(2):215–236. doi:10.1111/j.1540-4609.2006.00114.x
  14. Haron H, Rehman A, Wulandhari LA, Saba T: Improved vertex chain code algorithm for curve length estimation. J Comput Sci 2011, 7(5):736–743. doi:10.3844/jcssp.2011.736.743
  15. Rehman A, Mohamad D: A simple segmentation approach for unconstrained cursive handwritten words in conjunction of neural network. Int J Image Process 2008, 2(3):29–35.
  16. Phetchanchai C, Selamat A, Rehman A, Saba T: Index financial time series based on zigzag-perceptually important points. J Comput Sci 2010, 6(12):1389–1395. doi:10.3844/jcssp.2010.1389.1395
  17. Rehman A, Saba T, Sulong G: An intelligent approach to image denoising. J Theor Appl Inf Technol 2010, 17(1):32–36.
  18. Saba T, Rehman A: Effects of artificially intelligent tools on pattern recognition. Int J Mach Learn Cybern 2012. doi:10.1007/s13042-012-0082-z
  19. Kurniawan F, Rahim MSM, Daman D, Rehman A, Mohamad D, Mariyam S: Region-based touched character segmentation in handwritten words. Int J Innov Comput Inf Control 2011, 7(6):3107–3120.
  20. Leidner DE, Jarvenpaa SL: The use of information technology to enhance management school education: a theoretical view. MIS Q 1995, 19(3):265–291. doi:10.2307/249596
  21. Moore MG: Editorial: distance education theory. Am J Distance Educ 1991, 5(3):1–6. doi:10.1080/08923649109526758
  22. Rahim MSM, Rehman A, Sholihah N, Kurniawan F, Saba T, Mohamad D: Region-based features extraction in ear biometrics. Int J Acad Res 2012, 4(1):37–42.
  23. Rehman A, Saba T: Performance analysis of segmentation approach for cursive handwritten word recognition on benchmark database. Digit Signal Process 2011, 21:486–490. doi:10.1016/j.dsp.2011.01.016
  24. Moore MG: Three types of interaction. Am J Distance Educ 1989, 3(2):1–6. doi:10.1080/08923648909526659
  25. Northrup PT: Online learners' preferences for interaction. Q Rev Distance Educ 2002, 3(2):219–226.
  26. Swan K: Virtual interaction: design factors affecting student satisfaction and perceived learning in asynchronous online courses. Distance Educ 2001, 22(2):306–331. doi:10.1080/0158791010220208
  27. LaPointe DK, Gunawardena CN: Developing, testing and refining of a model to understand the relationship between peer interaction and learning outcomes in computer-mediated conferencing. Distance Educ 2004, 25(1):83–106. doi:10.1080/0158791042000212477
  28. Saba T, Rehman A, Sulong G: Cursive script segmentation with neural confidence. Int J Innov Comput Inf Control 2011, 7(7):1–10.
  29. Landry BJL, Griffeth R, Hartman S: Measuring student perceptions of Blackboard using the technology acceptance model. Decis Sci J Innov Educ 2006, 4(1):87–99. doi:10.1111/j.1540-4609.2006.00103.x
  30. Venkatesh V, Morris MG, Davis GB, Davis FD: User acceptance of information technology: toward a unified view. MIS Q 2003, 27(3):425–478.
  31. Arbaugh JB: Is there an optimal design for on-line MBA courses? Acad Manag Learn Educ 2005, 4:135–149.
  32. Saba T, Rehman A, Danial H: Proficiency and approach of mathematics teachers in the application of computers as instructional tool in Pakistani schools. J Educ Res Islamia Univ Bahawalpur Pak 2010, 3(2):153–170.
  33. Chiu CM, Wang ETG: Understanding web-based learning continuance intention: the role of subjective task value. Inf Manag 2008, 45:194–201. doi:10.1016/j.im.2008.02.003
  34. Johnson RD, Hornik S, Salas E: An empirical examination of factors contributing to the creation of successful e-learning environments. Int J Hum Comput Stud 2008, 66:356–369. doi:10.1016/j.ijhcs.2007.11.003
  35. Womble J: E-learning: the relationship among learner satisfaction, self-efficacy, and usefulness. Bus Rev 2008, 10(1):182–188.
  36. Simmering MJ, Posey C, Piccoli G: Computer self-efficacy and motivation to learn in a self-directed online course. Decis Sci J Innov Educ 2009, 7(1):99–121. doi:10.1111/j.1540-4609.2008.00207.x
  37. Colquitt JA, LePine JA, Noe RA: Toward an integrative theory of training motivation: a meta-analytic path analysis of 20 years of research. J Appl Psychol 2000, 85(5):678–707.
  38. Saba T, Rehman A: Blinds children education and their perceptions towards first institute of blinds in Pakistan. Int J Mod Educ Comput Sci 2012, 4(1):50–60. doi:10.5815/ijmecs.2012.01.07
  39. Hooper D, Coughlan J, Mullen MR: Structural equation modelling: guidelines for determining model fit. Electron J Bus Res Methods 2008, 6(1):53–60.
  40. Hayduk L, Cummings GG, Boadu K, Pazderka-Robinson H, Boulianne S: Testing! Testing! One, two, three - testing the theory in structural equation models! Personal Individ Differ 2007, 42(2):841–850.
  41. Bentler PM, Bonett DG: Significance tests and goodness of fit in the analysis of covariance structures. Psychol Bull 1980, 88(3):588–606.
  42. Saba T, Rehman A, Elarbi-Boudihir M: Methods and strategies on off-line cursive touched characters segmentation: a directional review. Artif Intell Rev 2011. doi:10.1007/s10462-011-9271-5
  43. Dunn R, Beaudry J, Klavas A: Survey research on learning styles. Educ Leadersh 1989, 46:50–58.
  44. Iivari J: An empirical test of the DeLone-McLean model of information system success. DATA BASE Adv Inf Syst 2005, 36(2):8–27. doi:10.1145/1066149.1066152
  45. Rai A, Lang SS, Welker RB: Assessing the validity of IS success models: an empirical test and theoretical analysis. Inf Syst Res 2002, 13(1):50–69. doi:10.1287/isre.13.1.50.96

Copyright

© Saba; licensee Springer. 2012

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.