
ON APPLYING ORGANIZATION THEORY MEASURES IN A SMALL BUSINESS CONTEXT: THE CASE OF HAGE'S FORMALIZATION, COMPLEXITY, AND CENTRALIZATION SCALES

by Mohamed El Louadi (1)

ABSTRACT

This paper assesses the reliability and construct validity of three structural dimensions: formalization, complexity, and centralization. We assessed the reliability and factorial validity of the scales developed by Hage (1965), partially validated by Dewar et al. (1980), and used by Jennings and Seaman (1990). The scales were applied in a study of 190 small U.S. national and state commercial banks. The results were disappointing: internal reliability assessments suggest that the instrument is unreliable, and further testing suggested that the measure lacks construct validity.

INTRODUCTION

The observation is not new that, as we enter the information era, environmental uncertainty is continuously increasing, owing to constant change and intensifying global competition. Consequently, one would expect commercial enterprises to try to improve their competitive position by attempting to counterbalance the uncertainty in their environment and satisfy their information needs (King et al., 1978). Often, depending on the level of uncertainty present in that environment (Galbraith, 1977), organizations try to match their internal structure to that of their environments (Meyer and Scott, 1992). As competitive environments become more complex and uncertain, most organizations seeking greater efficiency and productivity are rethinking the way they organize their functions, tasks, and operations (Byrne, 1993). Companies are beginning to organize around processes, product lines, or discrete businesses instead of specific tasks. Decision-making is being decentralized and pushed down to the level of self-managed, cross-functional, multi-disciplinary teams (Katzenbach and Smith, 1993). In industries as diverse as computers, hotels, pharmaceuticals, and retailing, companies are also downsizing to smaller units, and a number of books on topics such as "process reengineering" (Davenport, 1993; Hammer and Champy, 1993) and "teamwork" (Katzenbach and Smith, 1993) have been published describing how companies are moving away from the "bureaucratic paradigm" toward more flexible structures (Barzelay, 1992; Galagan, 1992).

Organization structure is a concept that is receiving increased attention from both academicians and practicing managers, and it is one of the most investigated organizational characteristics in management. Surprisingly, it appears that a sufficiently valid measure does not yet exist on which researchers, in the context of small businesses, can rely to conduct relevant research on structural mechanisms. Measures have been suggested which lack proper psychometric assessment, and several studies still use these ad hoc measures. While psychometrically "weak" measures may be tolerated in exploratory research, the trend toward their use should not be extended to studies for which practicing managers and entrepreneurs are the final users: recommendations made on the basis of invalid and unreliable measures are no more valid and reliable than the measures used to derive them.

Since organization structure is such a broad concept, it is not surprising that it has been conceptualized in many different ways (Walton, 1981) and that several measures have been proposed. In this paper, we report on the evaluation of one such measure. Specifically, we assessed both the reliability and the factorial validity of three structural scales developed by Hage (1965), which were partially validated by Dewar et al. (1980) and later used by Jennings and Seaman (1990). We did this by conducting a study of 190 small U.S. national and state commercial banks. The results of the study were disappointing; internal reliability assessments suggest that the instrument is unreliable, and further testing revealed that the measure appears to lack construct validity, i.e., we are not sure what the instrument measures exactly.

In the first part of this paper, we provide a brief overview of the Jennings and Seaman study, in which the structure scales exhibited impressive psychometric properties when applied to the savings and loans industry. In the second section, we report on the procedures in which the same scales produced disappointing and surprising results. We conclude the paper by summarizing the results and offering recommendations concerning unreliable measures to both the research and practicing communities.

THE SAVINGS AND LOANS (S&L) STUDY

Jennings and Seaman (1990) examined the relationships between the creation of business ventures and organizational strategy and structure. One aspect of their study was to compare new business venture formations on the basis of Miles and Snow's (1978) business-level strategies and Hage's (1965) structural dimensions. They tested four hypotheses, all involving the mechanistic/organic structural arrangements proposed by Burns and Stalker (1961). Structure was therefore an important construct in the Jennings and Seaman study at the level of both research design and hypothesis testing.

At the measurement level, Jennings and Seaman utilized the items developed by Hage: eight questions forming four structural scales (formalization, complexity, centralization, and stratification). Although Hage's questions were not specifically oriented toward the service industry, Jennings and Seaman modified and adapted them to the savings and loans (S&L) institutions in their study (see Table I, left-hand column). Jennings and Seaman pilot-tested their questionnaire, which included questions related to other variables, using a sample of 99 S&Ls. A combination of questionnaires and telephone interviews was used to collect data from both the CEO and the executive vice-president of each S&L in the study. The inter-rater reliability coefficients estimated from the sample ranged from 0.88 to 0.95.

In the present study, the Jennings and Seaman structure scales were used to obtain data for a larger-scale study in which organization structure was a key variable (see El Louadi, 1992). Since this study was directed at the banking industry, and because of the obvious similarities between the S&L and banking industries, the Jennings and Seaman measure was assumed to be appropriate, although its impressive psychometric properties were the main reason for its use. The details and results of our study are presented in the next section.

Table I. Jennings and Seaman's (1990) Original Scales and the Wording Used in the Pilot Study.

Jennings and Seaman (n=80)

Formalization
1. codified job descriptions are used by our association
2*. ranges of variation are allowed within jobs in our association

Complexity
1. specialists (lawyers, information systems experts, and CPAs) are employed by your association to either make (or assist) decisions
2. the level of training required for your lowest level manager and each succeeding level varies considerably

Centralization
1*. a proportion of jobs are used to participate in making decisions
2. decision-makers are involved in making decisions at most levels of our association

The Pilot Study (n=35)

Formalization
1. codified job descriptions are used by your bank
2*. ranges of variation are allowed within jobs in your bank

Complexity
1. specialists (lawyers, information systems experts, and CPAs) are employed by your bank to either make (or assist) decisions
2. the level of training required for your lowest level manager and each succeeding level varies considerably

Centralization
1*. a proportion of jobs are used to participate in making decisions
2*. decision-makers are involved in making decisions at most levels of your bank

*. Reverse-scored

THE BANKING INDUSTRY STUDY

The objective of the initial study was to investigate the implications of organization structure for the information-processing capacity of information-intensive organizations. We proceeded in two steps. First, we pilot-tested our instrument, which also included other variables. Next, we conducted the main study using 286 banks. Both studies were conducted in the national and state commercial banking industry (SIC codes 6021 and 6022) on banks randomly selected from the Sheshunoff Information Services Inc. BankSearch database. Details of both the pilot and main studies are provided in the following paragraphs.

The Pilot Study

The pilot study was conducted to generate a basis for the internal and field testing of the instrument. A copy of the questionnaire, which included a slightly modified version of Jennings and Seaman's items for three scales (Table I) with a five-point Likert response format ranging from "strongly disagree" to "strongly agree," was sent to one hundred randomly selected banks. Only the centralization, complexity, and formalization scales were used, because Hage and Aiken (1967) consider them the most basic structural dimensions of all organizations and because stratification was not part of the research question of the larger study (El Louadi, 1992). A cursory examination of the questions published by Jennings and Seaman reveals the simultaneous use of "our," "your," "our association," "your association," etc. (2). Some of the items were therefore reworded to eliminate inconsistent referents, and the word "bank" was substituted for the word "association" where appropriate (the changes appear in the right-hand column of Table I). Thirty-five CEOs and executive vice-presidents participated in the survey, a response rate of 35%. After receiving the completed questionnaires but before proceeding to the main study, further changes were made on the basis of the pilot respondents' spontaneous hand-written comments. Several respondents had experienced difficulty answering some of the questions because of their wording (some examples are provided in the Appendix). These questions, and most others, were reformulated based on Hage's (1965) own definitions and were used in the main study (Table II).

The Main Study

In the main study, two successive mailings produced responses from 286 of 868 banks, a 33% response rate. Of the responding banks, 190 had fewer than 300 employees, which requires that attention be paid to the caveat that organization structure in small enterprises may differ significantly from that of larger enterprises.

Table II. The Wording in the Pilot Study Compared to that Used in the Main Study.

The Pilot Study (n=35)

Formalization
1. codified job descriptions are used by your bank
2*. ranges of variation are allowed within jobs in your bank

Complexity
1. specialists (lawyers, information systems experts, and CPAs) are employed by your bank to either make (or assist) decisions
2. the level of training required for your lowest level manager and each succeeding level varies considerably

Centralization
1*. a proportion of jobs are used to participate in making decisions
2*. decision-makers are involved in making decisions at most levels of your bank

The Main Study (n=286)

Formalization
1. the proportion of codified job descriptions used by your bank is high
2*. great ranges of variation are allowed within jobs in your bank

Complexity
1. the number of specialized occupations (lawyers, economists, information systems experts, and CPAs) in your bank is great
2. the period of training required for most of your personnel, including managers, is long

Centralization
1*. the proportion of occupations or jobs whose occupants participate in decision-making in your bank is high
2*. the number of areas in which each decision-maker participates is high

*. Reverse-scored

The data were evaluated using correlation, reliability, and factor analyses. A correlation matrix involving all six items was computed. As shown in Table III, the formalization scale's items did not correlate with each other (r=0.10, n.s.). The correlations between the complexity items and between the centralization items are positive and significant (r=0.21, p<0.05 and r=0.25, p<0.001, respectively). However, the magnitudes of these coefficients are disappointing, especially when compared with Jennings and Seaman's reported correlations (0.78, 0.89, and 0.87, p<0.0001, for the three scales respectively). It should be noted that Hage and Aiken themselves reported a correlation as low as 0.07 between the complexity scales (Hage and Aiken, 1967, Table 2). Another interesting difference between our results and those of Jennings and Seaman concerns the correlations between items of different scales. The first formalization item, proportion of codified jobs, correlates with neither of the complexity items (number of specialized occupations: r=0.07; period of training: r=0.08), whereas it correlated negatively with the same items in Jennings and Seaman's analysis (r=-0.757 and r=-0.784).
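The inter-item analysis summarized in Table III can be reproduced with standard statistical software. The sketch below is illustrative only: it uses hypothetical five-point responses (the item names, the generated data, and the choice of Python/pandas/SciPy are assumptions for the example, not the study's actual data or SPSS procedure).

```python
# Sketch: inter-item Pearson correlations for six structural items.
# The responses below are hypothetical placeholders, not the survey data.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
items = ["form1", "form2", "comp1", "comp2", "cent1", "cent2"]

# 190 hypothetical respondents answering six 5-point Likert items
responses = pd.DataFrame(
    rng.integers(1, 6, size=(190, len(items))), columns=items
)

# Correlation matrix (the lower triangle is what Table III reports)
print(responses.corr(method="pearson").round(2))

# Two-tailed significance for each pair of items
for i, a in enumerate(items):
    for b in items[i + 1:]:
        r, p = pearsonr(responses[a], responses[b])
        print(f"{a} vs {b}: r = {r:+.2f}, p = {p:.3f}")
```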

Table III. Correlation Matrix of the Various Items Composing the Structural Measure (n = 184).

                                Formalization      Complexity       Centralization
                                  1       2        3       4          5       6
Formalization     1               -
                  2              0.10     -
Complexity        3              0.07   -0.08      -
                  4              0.08   -0.08    0.21*     -
Centralization    5             -0.25** -0.01   -0.03    0.04         -
                  6             -0.06   -0.16*  -0.01   -0.10       0.25**    -

*   p < 0.05
**  p < 0.001
*** p < 0.0005

To be minimally acceptable, any measure should pass reliability and validity tests. Nunnally (1967) defined reliability as "the extent to which [measurements] are repeatable and that any random influence that tends to make measurements different from occasion to occasion is a source of measurement error" (p. 206). Reliability is traditionally assessed by computing Cronbach's (1951) α coefficient, which ranges in value from -1.00 to +1.00. In applied settings, where important decisions are made on the basis of test scores, an α of 0.90 is the minimum, and an α of 0.95 is considered by Nunnally (1967) to be the desirable standard. For basic research, however, Nunnally also indicates that reliabilities of 0.50 to 0.60 will suffice and that increasing reliabilities beyond 0.80 is probably wasteful. Using the Cronbach formula, the internal reliability of the three structural scales was assessed. The reliability coefficients obtained for the formalization, complexity, and centralization scales are 0.18, 0.34, and 0.40 respectively. These values do not meet any of the Nunnally criteria. Jennings and Seaman, however, reported coefficients of 0.90, 0.95, and 0.92. (3) A comparison of the Jennings and Seaman scales and those of the present study is provided in Table IV (4).

Factor analysis is often used to assess construct validity. The reliance on factor analysis in management research is broad; its dominant usage is to verify the presumed multidimensionality of measurement instruments, and it is considered one of the most powerful methods of construct validation (Kerlinger, 1964). However, factor analysis often yields uninterpretable or highly imaginary results because of the inherent complexity of the relationships that exist among variables. Harris (1967) recommends using several extraction techniques and retaining only those factorial compositions that are commonly identified by more than one extraction technique. According to Harris, "This is a rather conservative approach, but no more so than is required for careful scientific work" (p. 369). The data for the sample of 190 observations were analyzed using all of the extraction techniques available in the Statistical Package for the Social Sciences (SPSS(X)), with varimax as the method of rotation. Because a factorial composition of three dimensions was expected, the number of factors to be extracted was set to three.
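For illustration, the reliability computation described above can be sketched as follows. This is a generic implementation of Cronbach's α applied to hypothetical five-point responses; the item names, the reverse-coded items (taken from the asterisks in Table II), and the data are assumptions for the example, not the study's SPSS procedure. For two-item scales with similar item variances, this reduces to the standardized formula given in footnote (3).

```python
# Sketch: Cronbach's alpha for each two-item scale, using the standard formula
# alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale).
# The response data are hypothetical; item names follow Table II.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of items (columns = items, rows = respondents)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
data = pd.DataFrame(
    rng.integers(1, 6, size=(190, 6)),
    columns=["form1", "form2", "comp1", "comp2", "cent1", "cent2"],
)

# Reverse-code the items marked with an asterisk in Table II (5-point scale).
for reversed_item in ["form2", "cent1", "cent2"]:
    data[reversed_item] = 6 - data[reversed_item]

scales = {
    "formalization": ["form1", "form2"],
    "complexity": ["comp1", "comp2"],
    "centralization": ["cent1", "cent2"],
}
for name, cols in scales.items():
    print(name, round(cronbach_alpha(data[cols]), 2))
```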

Table IV. Comparison Between Jennings and Seaman's (1990) Results and Those of the Present Study.

Jennings and Seaman (1990)

                                       Group 1 (1)        Group 2 (n=52) (1)
                                       Mean     SD        Mean     SD        α
Formalization                                                                0.90
  1. Codified jobs                     2.07     0.90      4.01     0.91
  2. Variation within jobs             2.17     0.93      4.01     0.88
Complexity                                                                   0.95
  1. Number of specialties             3.81     0.93      2.12     0.81
  2. Required level of training        3.84     0.92      2.00     0.86
Centralization                                                               0.92
  1. Number of decision-making jobs    3.75     0.93      1.83     0.88
  2. Number of areas                   3.93     0.89      1.80     0.84

Our Study (n = 186)

                                       Mean     SD        α
Formalization                                             0.18
  1. Codified jobs                     3.44     1.10
  2. Variation within jobs             2.69     1.10
Complexity                                                0.34
  1. Number of specialties             1.98     0.96
  2. Required level of training        2.70     0.85
Centralization                                            0.40
  1. Number of decision-making jobs    3.04     0.96
  2. Number of areas                   2.63     0.90

(1) Group 1 is composed of high-level venturing and prospector strategy, group 2 of low-level venturing and defender strategy (see Jennings and Seaman, 1990).

Table V shows that all structural items loaded on the same factors except with the GLS, IMAGE, ML, and ULS extraction techniques. This suggests a degree of stability in the homogeneity within each factor as well as in the heterogeneity between factors.

Table V. Results of Different Extraction Methods on the Main Study Data Set: Rotated Factor Loadings (1, 2)

                              PC                      Alpha (6 iterations)
Scales           Items     1      2      3          1      2      3
Centralization    (5)     .81    .05    .16        .72   -.15   -.30
                  (6)     .49   -.05    .05        .61   -.10   -.27
Complexity        (3)     .15    .79    .05       -.21    .68   -.05
                  (4)     .02    .43    .05       -.17    .45    .01
Formalization     (2)     .15   -.32    .70        .07   -.21    .22
                  (1)    -.21    .26    .72       -.13    .11    .54
Eigenvalues (3)          1.5    1.2    1.0         .7     .5     .2
% of variance (cumulative) 25.3  44.7  62.0       12.4   20.4   25.1

                              GLS (5 iterations)      IMAGE (6 iterations)
Scales           Items     1      2      3          1      2      3
Centralization    (5)     .47   -.05    .05        .22    .05   -.07
                  (6)     .58   -.10   -.27        .30    .05   -.07
Complexity        (3)     .02    .43    .06       -.07   -.12    .11
                  (4)    -.17    .46    .03       -.15   -.14    .10
Formalization     (2)     .06   -.21    .20        .00    .13   -.06
                  (1)    -.13    .10    .58       -.20    .02    .10
Eigenvalues (3)           .8     .4     .3         .3     .1     .1
% of variance (cumulative) 14.0  20.3  25.3        4.4    5.4    5.5

                              PAF (5 iterations)
Scales           Items     1      2      3
Centralization    (5)     .47   -.05    .05
                  (6)     .58   -.10   -.28
Complexity        (3)     .02    .43    .05
                  (4)    -.17    .45    .02
Formalization     (2)     .07   -.21    .21
                  (1)    -.13    .11    .54
Eigenvalues (3)           .8     .4     .3
% of variance (cumulative) 14.0  20.2  25.0

(1) PC: Principal Components; Alpha: Alpha Factoring; GLS: Generalized Least Squares; IMAGE: Image Factoring; PAF: Principal Axis Factoring.

(2) The results of the ML (Maximum Likelihood) and ULS (Unweighted Least Squares) extraction techniques are not reported because they yielded the same factorial compositions as the GLS technique.

(3) Eigenvalues for the Alpha, GLS, and IMAGE extractions are not available from the SPSS(X) routine; sums of squares are reported instead.
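Readers who wish to reproduce this kind of analysis outside SPSS(X) could start from a sketch like the following. It applies a single extraction technique (scikit-learn's FactorAnalysis) with varimax rotation and three factors to hypothetical responses; the item names, the data, the library choice, and the availability of the rotation="varimax" option (present in recent scikit-learn releases) are assumptions for illustration, not the study's actual procedure. In keeping with Harris's (1967) advice, the resulting loadings would then be compared across several extraction methods before interpreting the factors.

```python
# Sketch: a three-factor solution with varimax rotation, in the spirit of the
# Table V analysis. This is NOT the SPSS(X) procedure used in the study; it runs
# one extraction technique on hypothetical responses.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
items = ["form1", "form2", "comp1", "comp2", "cent1", "cent2"]
responses = rng.integers(1, 6, size=(190, len(items))).astype(float)

fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
fa.fit(responses)

# Rows = items, columns = rotated factors (compare with one panel of Table V).
loadings = pd.DataFrame(fa.components_.T, index=items,
                        columns=["factor1", "factor2", "factor3"])
print(loadings.round(2))

# Sum of squared loadings per factor, analogous to the table's
# eigenvalue / sum-of-squares row.
print((loadings ** 2).sum(axis=0).round(2))
```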

Only when interpreting the PC and Alpha extraction results does it seem that the factors obtained from the analysis can be interpreted in a manner similar to that of Jennings and Seaman. Moreover, several factor loadings are smaller than the 0.30 threshold suggested by Kim and Mueller (1978); this can be seen in all extractions except PC. It is doubtful that loadings smaller than 0.30 can be taken seriously, because they account for less than 10% of the variance of the corresponding item (Nunnally, 1978). Furthermore, only in the PC extraction did all factors have eigenvalues greater than 1. The loadings corresponding to several formalization and complexity items do not seem to differ significantly in magnitude. This directly violates one of Thurstone's (1947, p. 335) five formal rules, namely that a large loading on one factor should be paired with a small loading on another factor. Given the relatively large number of observations in our data set, this is problematic because it may be indicative of non-heterogeneity between and non-homogeneity within the factors (Aleamoni, 1973). An additional reason to doubt the validity of a three-factor structure is that three items seem to switch factors when the IMAGE or GLS extractions are used: item 4 (complexity), item 2 (formalization), and item 1 (formalization). This may result from the very weak inter-item correlation coefficients shown in Table III and may have a bearing on the weak reliability coefficients shown in Table IV.

CONCLUSION

Hage and Aiken proposed a measure of organization structure that seemed appropriate for questionnaire-based studies in the context of large companies. While this measure has been partially validated by Dewar et al. (1980), its reliability in a small business environment, or in service industries such as the S&L and banking industries, is not certain. Using data collected from 190 small banks and subjecting them to the usual statistical tests, the present study suggests that this measure fails to meet minimum reliability and validity standards.

Perhaps the most serious shortcoming revealed by the present study is that, when the measures are used in applied settings, or when conclusions drawn from their use are addressed as recommendations to the practicing community, the scales' reliabilities fall far below the 0.90 minimum threshold that Nunnally (1967) suggests for such settings. The measures also did not achieve acceptable construct validity, despite efforts to improve the wording, as can be seen from the results of the correlation and factor analyses reported in this study. These results pose serious questions and difficulties for the research community because, as Peter (1979) has observed, "valid measurement is the sine qua non of science. If the measures used in a discipline have not been demonstrated to have a high degree of validity, that discipline is not a science" (p. 6).

Overall, the instrument did not perform as expected on the basis of the Hage (1965) and Dewar et al. (1980) studies. This may be due in part to the fact that it was designed to be used in a large business context. Another possible explanation is that it was initially developed for companies operating in the manufacturing sector. Given the results of the present study, it would appear that researchers who use measures devised for other contexts (manufacturing vs. service sectors, or small vs. large businesses) should do so with caution.

REFERENCES

Aiken, M. and Hage, J., Organizational Interdependence and Intra-organizational Structure, American Sociological Review, Vol. 33, 1968, pages 912-930.
Aleamoni, L.M., Effects of Size of Sample on Eigenvalues, Observed Communalities, and Factor Loadings, Journal of Applied Psychology, Vol. 58, No. 2, October 1973, pages 266-269.
Barzelay, M., Breaking Through Bureaucracy: A New Vision for Managing in Government, Berkeley: University of California Press, 1992.
Burns, T. and Stalker, G.M., The Management of Innovation, London: Tavistock Publications, 1961.
Byrne, J.A., The Horizontal Corporation, Business Week, December 20, 1993, pages 76-81.
Cronbach, L.J., Coefficient Alpha and the Internal Structure of Tests, Psychometrika, Vol. 16, 1951, pages 297-334.
Davenport, T.H., Process Innovation: Reengineering Work through Information Technology, Boston, MA: Harvard Business School Press, 1993.
Dewar, R.D., Whetten, D.A., and Boje, D., An Examination of the Reliability and Validity of the Aiken and Hage Scales of Centralization, Formalization, and Task Routineness, Administrative Science Quarterly, Vol. 25, No. 1, 1980, pages 120-128.
El Louadi, M., Organizational Responses to Perceived Environmental Uncertainty in the Banking Industry: An Information Processing View, Unpublished Ph.D. dissertation, University of Pittsburgh, 1992.
Galagan, P.A., Beyond Hierarchy: The Search for High Performance, Training & Development, Vol. 46, No. 8, 1992, pages 21-25.
Galbraith, J., Organization Design, Reading, MA: Addison-Wesley, 1977.
Hage, J., An Axiomatic Theory of Organizations, Administrative Science Quarterly, Vol. 10, 1965, pages 289-320.
Hage, J. and Aiken, M., Relationship of Centralization to Other Structural Properties, Administrative Science Quarterly, Vol. 12, No. 1, 1967, pages 72-91.
Hage, J. and Aiken, M., Routine Technology, Social Structure, and Organization Goals, Administrative Science Quarterly, Vol. 14, 1969, pages 366-376.
Hammer, M. and Champy, J., Reengineering the Corporation: A Manifesto for Business Revolution, NY: Wiley & Son, 1993.
Harris, C.W., On Factors and Factor Scores, Psychometrika, Vol. 32, 1967, pages 363-379.
Jennings, D.F. and Seaman, S.L., Aggressiveness of Response to New Business Opportunities Following Deregulation: An Empirical Study of Established Financial Firms, Journal of Business Venturing, Vol. 5, No. 3, 1990, pages 177-189.
Katzenbach, J.R. and Smith, D.K., The Wisdom of Teams: Creating the High-Performance Organization, Boston, MA: Harvard Business School Press, 1993.
Kerlinger, F.N., Foundations of Behavioral Research, NY: Holt, Rinehart and Winston, Inc., 1964.
Kim, J-O. and Mueller, C.W., Factor Analysis: Statistical Methods and Practical Issues, Beverly Hills, CA: Sage Publications, 1978.
King, W.R., Dutta, B.K., and Rodriguez, J.T., Strategic Competitive Information Systems, Omega: The International Journal of Management Science, 1978, pages 123-132.
Meyer, J.W. and Scott, W.R., Organizational Environments: Ritual and Rationality, Newbury Park, CA: Sage Publications, 1992.
Miles, R. and Snow, C., Organizational Strategy, Structure, and Process, NY: McGraw-Hill, 1978.
Nunnally, J.C., Introduction to Psychological Measurement, NY: McGraw-Hill, 1967.
Nunnally, J.C., Psychometric Theory, 2nd Edition, NY: McGraw-Hill, 1978.
Peter, J., Reliability: A Review of Psychometric Basics and Recent Marketing Practices, Journal of Marketing Research, Vol. 16, 1979, pages 6-17.
Thurstone, L.L., Multiple-Factor Analysis, Chicago: University of Chicago Press, 1947.
Walton, E.J., The Comparison of Measures of Organization Structure, Academy of Management Review, Vol. 6, No. 1, 1981, pages 155-160.

(1) Department of Decision Sciences & MIS, Concordia University, Quebec, Canada.

(2) Walton (1981, p. 157) has already noted the extent to which items with inconsistent referents could compromise the assessment of organizational concepts such as structure.

(3) Given the following formula for Cronbach's α:

    α = k · r(ij) / [1 + (k - 1) · r(ij)]

where k is the number of items composing the scale and r(ij) is the average inter-item correlation coefficient, the minimum r(ij) would have to equal 0.82 to obtain an α of 0.90. Table III shows that we were unable even to approach that correlation level.
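To make the arithmetic behind the 0.82 figure explicit, the formula in note (3) can be solved for the average inter-item correlation; the worked case below is for the two-item scales used here (a standard rearrangement of the standardized-α formula, not additional material from the study).

```latex
% Solving alpha = k r / (1 + (k-1) r) for the average inter-item correlation r:
%   alpha (1 + (k-1) r) = k r   =>   alpha = r (k - (k-1) alpha)
\[
  \bar{r} \;=\; \frac{\alpha}{k - (k-1)\,\alpha}
  \qquad\text{so, for } k = 2,\ \alpha = 0.90:\qquad
  \bar{r} \;=\; \frac{0.90}{2 - 0.90} \;\approx\; 0.82 .
\]
```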

(4) Unfortunately, we were unable to secure Jennings and Seaman's original data to conduct more thorough comparative analyses.

APPENDIX

With respect to the item "the level of training required for your lowest level manager and each succeeding level varies considerably," the direction of the variation in the level of training among managers, used as a measure of complexity by Hage (1965) and Jennings and Seaman (1990), is not fully explicit. It is not clear how this variable affects complexity, other than that complexity increases, on average, when the period of training is longer (Hage, 1965, p. 294).

Three respondents experienced difficulty answering two questions: "a proportion of jobs are used to participate in making decisions" (Centralization, item 1) and "decision-makers are involved in making decisions at most levels of your bank" (Centralization, item 2). One respondent failed to answer the first question and wrote "what??" next to it. Another put a question mark next to both questions and added "by definition!" under the second. Still another respondent put question marks next to both questions and answered neither.

