Joseph H. Schuessler Ph.D.
The data analysis and results of the study are presented in this chapter, which is organized into nine sections: 1) data collection and response rate, 2) analysis of non-response bias, 3) sample characteristics, 4) the threats instrument, 5) the countermeasures instrument, 6) the information systems security effectiveness instrument, 7) assessments of validity, 8) PLS analysis, and 9) hypothesis testing. The first section describes the data collection procedures for the survey and discusses the response rate. This is followed by an analysis of non-response bias and then by a summary of sample characteristics such as the industries from which surveys were received, the gender of respondents, and so on. Next is a discussion of the validity and reliability of the measures. The chapter concludes with the results of the hypothesis tests.
To contact IT professionals for the study, the AITP leadership forwarded an email to its 1,500 professional members. The email consisted of a brief description of the survey and the benefits of participating, along with a link to the survey itself, which contained additional information. Three weeks after the first email, a follow-up email was sent thanking those who had already participated and encouraging those who had not to do so. Finally, three weeks after the second email, a link was included in the AITP's monthly online newsletter. Of the 1,500 professional members, 73 completed surveys were received, representing a response rate of 4.9%. While this is a low response rate, the sensitive nature of the subject matter should be taken into account. Additionally, of the 332 members who actually clicked on the link to the survey, approximately 22% completed it. A cursory examination of the demographics of the respondents reveals that a range of organizational sizes and industries is represented.
The purpose of non-response analysis is to identify characteristics that may differ between respondents and non-respondents in order to detect any bias that may exist within the dataset. While directly asking non-respondents why they did not participate would be ideal, such non-participants would be unlikely to respond to further inquiries given their lack of participation in the initial one. An alternative method of assessing non-response bias is to compare early and late responders to the survey. Table 7 below displays a comparison of means between early and late responders. An independent samples t-test was performed on responses to eight demographic variables. The table illustrates that there are no significant differences between early and late respondents at the .05 level of significance.
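A minimal sketch of this comparison in Python, assuming the responses live in a CSV file with a hypothetical "wave" column marking early versus late responders and illustrative demographic column names (the actual instrument used eight demographic variables):

```python
import pandas as pd
from scipy import stats

# Hypothetical file and column names; the real instrument used eight
# demographic variables.
df = pd.read_csv("survey_responses.csv")
early = df[df["wave"] == "early"]
late = df[df["wave"] == "late"]

for var in ["num_employees", "annual_receipts", "tenure", "security_budget_pct"]:
    t, p = stats.ttest_ind(early[var].dropna(), late[var].dropna())
    flag = "significant" if p < 0.05 else "not significant"
    print(f"{var}: t = {t:.2f}, p = {p:.3f} ({flag} at alpha = .05)")
```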
Industry classification was gathered from each respondent using the industry categories established by the Small Business Administration (SBA). An examination of Table 8 reveals that Finance and Insurance as well as Manufacturing were the best represented industries. Several industries were not represented in the study, including Real Estate, Utilities, and Wholesale Trade.
In order to efficiently incorporate industry into the analysis, it was necessary to classify each industry as either service oriented or non-service oriented. The SBA uses the North American Industry Classification System (NAICS) to identify and classify industries; NAICS codes replaced Standard Industrial Classification (SIC) codes and standardize industry classifications across the United States, Canada, and Mexico. Based on a comparison of the NAICS sectors and SIC divisions listed on the NAICS web site, certain NAICS codes were identified as service oriented; all others were classified as non-service oriented.
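The mapping itself amounts to a lookup over two-digit NAICS sector codes, as in the sketch below. The sector assignments shown are common NAICS groupings of goods-producing sectors and are an illustrative assumption, not the study's exact coding:

```python
# Two-digit NAICS sector prefixes treated as non-service (goods-producing);
# these assignments are illustrative assumptions, not the study's coding.
NON_SERVICE_SECTORS = {
    "11",              # Agriculture, Forestry, Fishing and Hunting
    "21",              # Mining
    "23",              # Construction
    "31", "32", "33",  # Manufacturing
}

def classify_industry(naics_code: str) -> str:
    """Classify a NAICS code as 'Service' or 'Non-Service' by its sector."""
    return "Non-Service" if naics_code[:2] in NON_SERVICE_SECTORS else "Service"

print(classify_industry("52411"))  # Finance and Insurance -> Service
print(classify_industry("31181"))  # Manufacturing -> Non-Service
```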
Organizational size, defined by number of employees, was bimodal, with the majority of organizations having either 100 or fewer employees or more than 1,500. For purposes of this research, however, organizational size was defined using the SBA's definitions of small and large businesses. Applying the SBA classification scheme resulted in 31 businesses being identified as small and 36 as large. Six businesses fit neither classification, as the respondents classified themselves as Public Administration, which the SBA specifically does not classify as either large or small. The results are summarized in Table 9. Organizational size by annual receipts is also reported in the table and indicates that approximately half of the organizations had annual receipts of less than $32.5 million.
Respondents were also asked to identify the percentage of the IS budget spent on security. Table 10 below details the responses. Of particular interest is the large percentage of respondents who reported that security represented less than 1 percent of the overall IS budget, as well as the large percentage who indicated that they did not know how much of the IS budget was devoted to security. The 19 respondents who indicated that security receives less than 1 percent of the IS budget could be an indication that effective risk management procedures are in place, allowing cost controls that limit expenditures. Conversely, it could indicate that risk management procedures have not been undertaken and, as a result, that there is a fundamental lack of understanding of the exposure the organization may face. At the other extreme, the 20 respondents who indicated that they did not know security's percentage of the IS budget could likewise reflect contrasting situations. A fundamental lack of understanding of security issues and risk exposure could be present at these organizations as well. However, it is also possible that the risk management process has become so intertwined with other IS planning, analysis, development, implementation, and maintenance processes that delineating the security budget from the rest of the IS budget is difficult if not impossible (Young, 2008).
In order to gain further understanding of the relationship between organizational size and the percentage of the IS budget represented by security, a cross-tabulation and chi-square test of independence were conducted. Because one of the requirements for chi-square is that each expected cell count must be five or greater, some of the security budget categories were combined. Respondents who indicated that their security budget was unknown were excluded from this analysis. Table 11 below shows the results, which indicate that there is in fact a relationship between whether an organization is large or small and the percentage of its IS budget spent on security. Small organizations tend to spend either significantly less or significantly more of their IS budget on security than would statistically be expected; no small organizations were found to spend the more moderate 1-7% of their IS budget on security. Large organizations were more evenly distributed, though they tended toward the moderate categories more than would statistically be expected.
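For illustration, such a test can be run with scipy's chi2_contingency; the observed counts below are placeholders rather than the study's data, and the function also returns the expected counts needed to verify the five-per-cell rule:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Placeholder observed counts (not the study's data). Rows: small, large
# organizations; columns: combined security-budget categories.
observed = np.array([
    [12,  0, 10],
    [ 9, 14,  8],
])
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")

# Verify the rule of thumb that every expected cell count is at least five.
print(expected.round(1))
```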
Additional insight was sought into the relationship between organizational size and industry in order to ensure that industries were statistically well represented as either service or non-service. A cross-tabulation was constructed to test the relationship between these two variables and can be seen in Table 12 below. The results indicate that there was no relationship between organizational size and industry classification, which strengthens any conclusions drawn with respect to either of these constructs.
The organizational roles of the respondents were fairly well distributed. As can be seen in Table 12, the majority of respondents, 48 out of 73, were responsible for various IS activities at their organizations. Only six identified themselves as Top Management, while 19 did not fit neatly into one of the predefined categories. The average organizational tenure of respondents was 17.37 years, suggesting considerable organizational experience. Table 12 also illustrates that most of those sampled had been with their organization longer than four years.
The average response to the question 'to what degree does each threat represent a threat to your organization' was 4.28. Measured on a 7-point Likert scale, this indicates a slightly greater than neutral response. Quality of service deviation from service providers represented the greatest threat to those who responded to the survey. Interestingly, this item also had the lowest standard deviation, indicating greater agreement among respondents.
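A brief sketch of how such a ranking can be produced, assuming each threat item is a column of 1-7 Likert responses with a hypothetical "threat_" column prefix:

```python
import pandas as pd

# Hypothetical file name and 'threat_' column prefix.
df = pd.read_csv("survey_responses.csv")
threat_items = [c for c in df.columns if c.startswith("threat_")]

# Rank items by mean; the standard deviation indicates respondent agreement.
summary = df[threat_items].agg(["mean", "std"]).T.sort_values("mean", ascending=False)
print(summary.round(2))
```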
The average response to the question 'to what degree does your organization use each countermeasure' was 4.91. Measured on a 7-point Likert scale, this indicates a slightly greater than neutral response. While the rankings of some countermeasures, such as the use of anti-virus software, should come as no surprise, others, such as the low ranking of user training and education, are somewhat troubling given the numerous studies discussing not only the effectiveness of user education and training but also its relatively low cost (Schultz, 2004).
The final construct examined is the ISS Effectiveness construct. The means for each item in the construct averaged well above a neutral response. One interesting observation from Table 15 is that the asset categories defined by Straub (1990) were consistently scored higher than the dimensions developed from General Deterrence Theory.
Construct validity was assessed by performing a factor analysis on each item in the survey and calculating the reliability of the resulting factors. Principal component factor analysis using Direct Oblimin rotation with Kaiser normalization was conducted on the three measurement instruments. Direct Oblimin rotation was used because, according to Hair et al. (1998), oblique rotational methods are preferable when the goal of the factor analysis is to produce meaningful factors as opposed to item reduction. Table 17 shows the results of the factor analysis for the countermeasures construct as conceptualized using the four dimensions of General Deterrence Theory: deterrence, detection, prevention, and remedy. Four factors were identified, all with eigenvalues greater than 1. According to Hair et al. (1998), loadings of .5 or greater represent items of practical significance. After examining the factor loadings, six items were removed because they failed to reach that level on any factor. The remaining items did not cross-load on any other factor using .5 as the criterion; as a result, the countermeasures construct exhibits a high degree of discriminant validity. Using an exploratory factor analysis approach to examine the item associations, each factor was assigned to the appropriate dimension of General Deterrence Theory. Evidence of convergent validity is demonstrated by factor loadings greater than 0.5, which are highlighted in Table 17.
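The extraction and rotation can be approximated in Python with the factor_analyzer package; the sketch below assumes hypothetical countermeasure item columns and approximates, rather than exactly reproduces, the SPSS Direct Oblimin procedure with Kaiser normalization:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical file name and 'cm_' prefix for countermeasure items.
df = pd.read_csv("survey_responses.csv")
items = [c for c in df.columns if c.startswith("cm_")]

fa = FactorAnalyzer(n_factors=4, method="principal", rotation="oblimin")
fa.fit(df[items].dropna())

loadings = pd.DataFrame(fa.loadings_, index=items,
                        columns=[f"Factor {i + 1}" for i in range(4)])
# Screen on Hair et al.'s (1998) criterion: retain items loading at .5 or
# greater on exactly one factor; items failing the screen would be dropped.
print(loadings.round(2))
print("Eigenvalues:", fa.get_eigenvalues()[0].round(2))
```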
After the factor analysis, the Cronbach's alpha of each factor was calculated to assess the reliability of each dimension of the countermeasures construct. Cronbach's alpha measures the internal consistency of the items in a factor. The lower limit for an acceptable Cronbach's alpha is 0.7 (Hair et al., 1998), though 0.6 may be acceptable for newly defined scales. The results are displayed at the bottom of Table 17. All are well above the 0.7 threshold, indicating a high level of internal consistency in each measure. The total variance explained is 75.43%, indicating that these four dimensions account for a substantial amount of the variance.
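Cronbach's alpha is straightforward to compute directly from its definition, alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale). A small sketch with placeholder data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = items in one factor."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Placeholder data: 73 respondents answering a four-item factor on a 1-7 scale.
rng = np.random.default_rng(0)
responses = rng.integers(1, 8, size=(73, 4)).astype(float)
print(f"alpha = {cronbach_alpha(responses):.3f}")
```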
A principal component factor analysis using Direct Oblimin rotation with Kaiser normalization was also conducted on the threats construct; Table 18 shows the results. The current research conceptualizes threats as uni-dimensional for ease of analysis; however, it should be noted that the factor analysis identified four distinct factors, all with eigenvalues greater than 1. Again adhering to Hair's recommendation that loadings of .5 or greater represent items of practical significance, all but a single item loaded on a single factor. Social Engineering cross-loaded on factors one and three and as a result was removed from the analysis. The remaining items did not cross-load on any other factor using .5 as the criterion; as a result, the threats construct exhibits a high degree of discriminant validity. Because threats are treated uni-dimensionally in the current research, no attempt was made to identify the unique characteristics of each dimension of the threats construct. Evidence of convergent validity is demonstrated by factor loadings greater than 0.5, which are highlighted in Table 18.
Due to the theoretical development of the research model, an additional factor analysis was conducted, this time constraining the number of factors to one. This was done to ease the analysis in the PLS model. Again using Hair's .5 criterion, accidental entry of bad data, forces of nature, social engineering, and quality of service deviation were eliminated from the list of threats. As a result, each highlighted threat in the "Constrained" column of Table 18 was used in the PLS analysis.
After the factor analysis was completed, the Cronbach's alpha of each factor was calculated to assess the reliability of each dimension of the threats construct. The results are displayed at the bottom of Table 18. All are well above the 0.7 threshold, indicating a high level of internal consistency in each measure. The constrained items had a Cronbach's alpha of .889. The total variance explained is 40.149%, indicating that the constrained analysis still accounts for a substantial amount of the variance in the model.
Lastly, a principal component factor analysis using Direct Oblimin rotation with Kaiser normalization was conducted on the Information Systems Security Effectiveness (ISSE) construct, as displayed in Table 19. As with the threats construct, ISSE is implemented as a uni-dimensional construct for ease of analysis. However, it should be noted that the factor analysis identified two distinct factors, both with eigenvalues greater than 1. Though the factors do not break neatly across the theoretical dimensions developed by Kankanhalli et al. (2003), the correspondence is close, with only a single item cross-loading.
As discussed above, due to the theoretical development of the research model, an additional factor analysis was conducted, this time constraining the number of factors to one. This was done to ease the analysis in the PLS model. Again using Hair's .5 criterion, all items loaded on the single factor. The item loadings are detailed in Table 19.
After the factor analysis for the ISSE construct, Cronbach's alpha was calculated to assess reliability. The result is displayed at the bottom of Table 19. At .888, the ISSE reliability is well above the 0.7 threshold, indicating a high level of internal consistency in the measure. The total variance explained is 59.94%, indicating that a single factor still accounts for a large amount of the variance.
PLS was used to analyze the proposed research model and test the hypotheses set forth earlier. PLS has several advantages over traditional statistical techniques. Like other structural equation modeling techniques, PLS is able to concurrently test the measurement and structural models. Additionally, PLS is not constrained to data sets that meet homogeneity and normality requirements (Chin et al., 2003), and it can handle smaller sample sizes than other structural techniques. However, PLS is limited in its ability to measure non-recursive relationships. As recommended by Chin (2008), a recursive version of the model was therefore run; the recursive model lacked the proposed relationships from each countermeasure back to the threats construct.

Using SmartPLS version 2.0 (Ringle, Wende & Will, 2005), the modified model was analyzed to assess the measurement model and the structural paths between the constructs. In order to obtain reliable results and t-values, 200 random samples of 100 were generated using a bootstrap procedure. Then, again following Chin's suggestion, the non-recursive aspects of the model were assessed by taking the construct scores obtained from the SmartPLS output and importing them into SPSS for analysis. A two-stage least squares analysis was conducted to determine the R2 value for the threats construct as well as the path coefficients between each countermeasure and the threats construct. Finally, the hypotheses were evaluated by assessing the sign and significance of the structural path coefficients using one-tailed and two-tailed t-tests where appropriate. SmartPLS does not calculate goodness-of-fit values; rather, R2 values were evaluated to assess the explanatory power of the proposed relationships for each construct, and t-values were assessed to determine the strength of the various paths. Figure 3 below illustrates the results discussed above, and Table 20 summarizes the results for each hypothesis.
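As an illustration of the non-recursive step, the two stages of the least squares analysis can be written out explicitly. The construct score file, column names, and instrument set below are assumptions for the sake of the sketch, and a dedicated IV estimator would be preferred in practice:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical export of construct scores from SmartPLS; column names and
# the instrument set are illustrative assumptions, not the study's setup.
scores = pd.read_csv("pls_construct_scores.csv")
y = scores["threats"]
countermeasures = scores[["deterrence", "detection", "prevention", "remedy"]]
instruments = sm.add_constant(scores[["isse"]])  # assumed instrument set

# Stage 1: regress each endogenous countermeasure score on the instruments
# and keep the fitted values.
fitted = pd.DataFrame({
    col: sm.OLS(countermeasures[col], instruments).fit().fittedvalues
    for col in countermeasures.columns
})

# Stage 2: regress the threats score on the stage-1 fitted values to obtain
# path coefficients and the R-squared for the threats construct. A dedicated
# IV routine (e.g., linearmodels' IV2SLS) would be preferred because this
# manual version does not compute the second-stage standard errors correctly.
stage2 = sm.OLS(y, sm.add_constant(fitted)).fit()
print(stage2.summary())
```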