International Council for Small Business 47th World Conference San Juan, Puerto Rico June 16-19, 2002

icsb 2002-024

Small Business Survey Methodology: Response Rates, Data Quality and Cost Effectiveness

Rick Newby*, John Watson and David Woodliff

Abstract

An important methodological issue for small business researchers is cost-effective data collection, particularly for mail surveys. While many studies have examined the effect of various methodological strategies on response rates, the impact these strategies have on data quality is less well understood. This study examined the effectiveness of various recommended strategies in terms of cost, response rate, and data quality, using a 16-page mail questionnaire entitled 'Attitudes and Expectations of the Self-employed'. Responses were received from 492 owner-operators with businesses in the greater metropolitan region of Perth, Australia. Results indicated that pre-notification by phone and offering incentive payments increased response rates. The response rates for the major groups were 27.1% for pre-notification with an offer of payment, 22.4% for an offer of payment without pre-notification, 19.9% for pre-notification without an offer of payment, and 12.5% for no pre-notification and no offer of payment. Cost effectiveness displayed a different pattern. Based on the number of questionnaires received, the least costly group was the one with the lowest response rate (no pre-notification and no offer of payment), and the most costly was the group with the best response rate (pre-notification combined with the offer of payment). However, when the quality of respondents' replies was taken into account, neither of these groups proved the most cost effective. The cost per 'quality' questionnaire was approximately $60 both for owner-operators offered payment without pre-notification and for those pre-notified without an offer of payment, compared with $76 for businesses pre-notified with an offer of payment and $68 per usable questionnaire for businesses given neither pre-notification nor an offer of payment. Small business researchers concerned with the quality of the data received should therefore consider either payment to respondents or pre-notification, as these were found to be the most cost-effective data collection strategies.

Introduction

Academic researchers of small business are regularly confronted with the need to collect substantial quantities of empirical data, often from a reluctant population. Mail surveys are often used for this purpose [1] because of a commonly held view that they are the most efficient means of collecting large amounts of data (Wu and Vosika 1983), despite potential difficulties with non-response bias (Lambert and Harrington 1990; Brennan and Hoek 1992; Brown 1994; Childers and Skinner 1996). Researchers have therefore concentrated on trying to reduce non-response bias by increasing response rates. This, in turn, has led to a myriad of techniques being suggested to stimulate mail questionnaire returns (Yammarino, Skinner and Childers 1991; Church 1993; Peacock 1996). However, the impact of these strategies on the cost effectiveness and quality of the data received has rarely been examined (Fox, Robinson and Boardley 1998). This paper assesses the impact of a variety of mail survey strategies on the cost, response rate, and quality of data received from participants in an extensive study of the attitudes and expectations of the self-employed located within greater metropolitan Perth, Australia.

Inducing Responses

Over the past 50 years many experiments have been conducted that attempt to identify survey design strategies that improve response rates (Turley 1999). These strategies 'have variously been classified by timing … and by technique' (Kanuk and Berenson 1975, p. 12) and arise from a number of issues such as those listed in Table 1.

[1] A review of 160 studies related to the topic addressed in the questionnaire found that the most common method of data collection was mail surveys (115, or 72%), followed by interviews (24, 15%), combined methods (9, 6%), handouts (6, 4%), telephone surveys (4, 2%), and unknown (2, 1%).

TABLE 1: Response rate issues of concern*

1. Respondent contact
   Pre-notification: Yes or no?
   Type of pre-notification: By telephone, postcard, or letter?
   Timing of mailing: Early or late in week/month?
   Use of follow-ups: Yes or no?
   Number of follow-ups: 1, 2, 3 or 4?
   Timing of follow-ups: Before or after deadline?
   Form of follow-ups: Postcard, letter, replacement mailing, or telephone?

2. Covering letter
   Personalisation: By name or Dear Sir/Madam?; Individually signed?
   Type of appeal: Altruistic, egotistic, or social utilitarian?; Included or not?
   Notification of deadline: Yes or no?
   Statement of confidentiality: Yes or no?
   Sponsoring organization: University or commercial?
   Complexity and tone of letter

3. Incentives
   Use of incentives: Yes or no?
   Nature of incentives: Monetary or non-monetary?
   Timing of incentives: With mailing or for returns only?
   Recipient of incentives: Respondent or charity?

4. Questionnaire
   Saliency of topic
   Number of questions/questionnaire length
   Color of paper: White or colored?
   Type of question: Closed or open?; Opinion or fact?; Personal or other?
   Format and design: Printed or photocopied?; Size?; Booklet or loose/stapled?
   Notification of deadline: Front or back of questionnaire?
   Anonymity: Respondent identifiable or not?
   Statement of confidentiality: Number of times stated?

5. Postage/mailing
   Type of outgoing envelope: White or colored?; With or without logo?
   Type of postage: Stamps or franked?
   Pre-paid return envelope: Yes or no?

* Based on Fox et al. (1998) and Turley (1999)

Researchers of the small business population have both espoused and followed the strategies indicated in Table 1. For example, Forsgren (1989) recommended the inclusion of prepaid monetary incentives in cash, follow-up with letters or postcards, and the use of egotistical or social utilitarian appeals to participate rather than altruistic calls for help. Anonymity (Chay 1993; Fagenson 1993), mail pre-notification (Igbaria, Zinatelli, Cragg and Cavaye 1997; McLaren and Shelley 2000) and telephone pre-notification (Ostgaard and Birley 1996; Coleman 2000; Morris and Zahra 2000) have all been used, as well as personalization strategies (Cragg and King 1988; Thong and Yap 1995; Zahra and Bogner 2000). Many studies have incorporated Dillman's Total Design Method (e.g., Gaskill, Van Auken and Manning 1993; Busenitz 1996; Hite 1998; McGee and Peterson 2000), which aims to 'minimise the costs of responding, maximise the rewards for doing so, and establish trust that those rewards will be delivered' (Dillman 1978, p. 12). The use of incentives is largely unreported in small business research. Pre-paid monetary incentives have been employed by Everett, Price, Bedell and Telljohann (1997) and James and Bolstein (1992). Fiorito and LaForge (1986) used a non-monetary incentive for involvement, while James and Bolstein (1992) offered monetary payment for participation. Monetary (Newby and Smith 1999) and non-monetary (McLaren and Shelley 2000) prizes have also been offered. Attempts by authors to draw general conclusions on the impact of individual response inducement strategies have proved problematic (Schlegelmilch and Diamantopoulos 1991), except perhaps for the direction of the effect (Yammarino et al. 1991). Turley suggested that this most likely arose from differences between consumer, industrial, general, and special interest populations, such that 'no single determinant or winning formula of response-inducing strategies ... can be employed to maximise (sic) response to postal surveys' (Turley 1999, p. 302). As such, the mail survey strategies used in this study were limited to those found in industrial mail surveys as reported by Greer, Chuchinprakam and Seshadri (2000).

Cost Effectiveness

Consistent with the efforts to find response rate stratagems, a number of studies have looked at the relative cost effectiveness of particular methods. After classifying these strategies into procedural and incentive groupings (Kanuk and Berenson 1975), we found that the results for procedural experiments were fairly consistent across all populations (Table 2), while the efficacy of incentive strategies was less predictable (see Table 3). From Table 2 it can be seen that pre-notification and follow-up reminders seem to be the most cost effective (Martin, Duncan, Powers and Sawyer 1989), with the only apparent dispute being the type of follow-up to be used (Erdogan and Baker 2002).


TABLE 2: Cost effectiveness of response inducing strategies – Procedural

Follow-up
  Erdogan and Baker (2002): Original questionnaire v copied questionnaire v postcard reminder v letter reminder. Result: Postcard, then copied, letter and original (no control).
  Fox et al. (1998): 2nd questionnaire v postcard reminder v phone reminder v 2nd questionnaire and phone. Result: Postcard, then 2nd, 2nd and phone, and phone only (no control).
  Martin et al. (1989): Letter v control. Result: Letter cost effective.

Personalization
  Martin et al. (1989): By name v Dear Sir/Madam. Result: Personalization not cost effective.

Pre-notification
  McLaren and Shelley (2000): Telephone v postcards. Result: Postcard cost effective (no control).
  Murphy et al. (1991): Postcard v control. Result: Postcard cost effective.
  Duhan and Wilson (1990): Letter v commitment control. Result: Letter cost effective.
  Martin et al. (1989): Postcard v control. Result: Postcard cost effective.

Total Design Method (TDM)
  Paxson (1992): TDM v control. Result: TDM cost effective.

In Table 3 it can be seen that results from consumer and general survey experiments indicate that monetary incentives are more likely to be cost ineffective (Dommeyer 1988; James and Bolstein 1990) than cost effective (Brennan, Hoek and Astridge 1991). This ineffectiveness is especially prevalent where the incentive is promised rather than supplied, with this apparent lack of trust not even mitigated by offers of charitable donations (Hubbard and Little 1988; Warriner et al. 1996). It may also be that where non-monetary gifts are supplied they need to be of non-trivial value, given the experiences of Gajraj, Faria and Dickinson (1990) [2]. Comparative industrial survey experiments suggested a potentially different conclusion. Small prepaid monetary incentives appear to be cost effective (London and Dommeyer 1990; James and Bolstein 1992), consistent with the arguments of Forsgren (1989). Promised monetary incentives are seemingly cost ineffective (James and Bolstein 1992), but substantial non-monetary incentives have generated conflicting results (London and Dommeyer 1990; McLaren and Shelley 2000). In the absence of evidence to the contrary, we can only assume that the regularly repeated testing of the effectiveness of incentives in industrial and professional surveys implies that many researchers believe their use to be merited (Paolillo and Lorenzi 1984; Jobber, Birro and Sanderson 1988; Diamantopoulos, Schlegelmilch and Webb 1991; Armstrong and Yokum 1994; Angur and Nataraajan 1995; Schneider and Johnson 1995; Everett et al. 1997; Hare, Price, Flynn and King 1998; Greer et al. 2000; Woodruff, Conway and Edwards 2000).

[2] Where the chance of winning a substantial amount of cash outperformed the guaranteed gift.

TABLE 3: Cost effectiveness of response inducing strategies – Incentives

Monetary incentives
  Warriner et al. (1996): Inclusion of $2, $5 or $10 v control. Result: Incentives not cost effective.
  Warriner et al. (1996): Promise to charity of $2, $5 or $10 v control. Result: Incentives not cost effective.
  Warriner et al. (1996): Prize draw of $200 v control. Result: Incentive not cost effective.
  Brennan (1992): Inclusion of 50c v control. Result: Incentive not cost effective.
  James and Bolstein (1992): Inclusion of $1 or $5 cash v control. Result: $1 incentive cost effective.
  James and Bolstein (1992): Inclusion of $5, $10, $20 or $40 check v control. Result: Incentives not cost effective.
  James and Bolstein (1992): Promised check of $50 v control. Result: Incentive not cost effective.
  Faria and Dickinson (1992): Promise to charity of $1 v control. Result: Incentives not cost effective.
  Brennan et al. (1991): Inclusion of 20c, 50c or $1 in 1st mail v control. Result: 50c incentive cost effective.
  Brennan et al. (1991): Inclusion of 20c, 50c or $1 in remail v control. Result: 20c incentive cost effective.
  Brennan et al. (1991): Prize draw of $200 v control. Result: Incentive not cost effective.
  Gajraj et al. (1990): Inclusion of 50c v control. Result: 50c incentive cost effective.
  Gajraj et al. (1990): Promise of 50c v control. Result: Incentive not cost effective.
  Gajraj et al. (1990): Share of lottery prize v control. Result: Share of prize cost effective.
  Gajraj et al. (1990): Promise of share of lottery prize v control. Result: Incentive not cost effective.
  James and Bolstein (1990): Inclusion of 25c, 50c, $1 or $2 v control. Result: Incentives not cost effective.
  London and Dommeyer (1990): Inclusion of $1 v control. Result: $1 incentive cost effective.
  Dommeyer (1988): Inclusion of 25c v control. Result: Incentive not cost effective.
  Dommeyer (1988): Share of $25 v control. Result: Incentive not cost effective.
  Dommeyer (1988): Prize draw of $25 v control. Result: Incentive not cost effective.

Non-monetary incentives
  McLaren and Shelley (2000): Prize of $200 v control. Result: Incentive not cost effective.
  Brennan et al. (1991): Prize of $200 v control. Result: Incentive not cost effective.
  Gajraj et al. (1990): Supplied gift valued 49c v control. Result: Incentive not cost effective.
  Gajraj et al. (1990): Promised gift v control. Result: Incentive not cost effective.

Data Quality

As well as the cost effectiveness of response inducing strategies, many researchers have focussed on the impact these techniques have on the quality of data collected. While this quality can be assessed in different ways (Berdie and Anderson 1976), the most commonly used and reviewed measure is the degree of item completion (e.g., Downs and Kerr 1986; Albaum 1987; Haggett and Mitchell 1994). An examination of industrial and consumer survey experiments reveals that item completion appears to be unaffected by procedural strategies but may be affected by the offer of monetary incentives (Table 4).

TABLE 4: Data quality and response inducing strategies

Anonymity
  Albaum (1987): University v commercial research firm. Result: Not significant.

Humor
  Skinner et al. (1983): Joke v cartoon v control. Result: Not significant.

Monetary incentives
  Shaw et al. (2001): Inclusion of $2 v inclusion of $5. Result: Not significant.
  James and Bolstein (1990): Inclusion of 25c, 50c, $1 or $2 v control. Result: Not significant for item omissions; significant for length of text answers.
  London and Dommeyer (1990): Inclusion of $1 v control. Result: Significant.
  Dommeyer (1988): Inclusion of 25c v control. Result: Not significant.
  Dommeyer (1988): Share of $25 v control. Result: Not significant.
  Dommeyer (1988): Prize draw of $25 v control. Result: Not significant.
  Goetz, Tyler and Cook (1984): Promise of $10 v control. Result: Significant for item omissions; significant for length of text.

Non-monetary incentives
  London and Dommeyer (1990): Prize draw v control. Result: Not significant.
  Dommeyer (1985): Survey results v control. Result: Not significant.

Personalization
  Wunder and Wynn (1988): Handwritten v computer addressed envelopes. Result: Not significant.

Pre-notification
  Murphy et al. (1991): Prior postcard v control. Result: Not significant.
  Duhan and Wilson (1990): Prior letter v control. Result: Not significant.
  Jobber, Allen and Oakland (1985): Telephone v control. Result: Not significant.

Sponsorship
  Schneider and Johnson (1995): University v commercial. Result: Not significant.
  Albaum (1987): University v commercial research firm. Result: Not significant.

Style of questionnaire
  Jobber et al. (1988): Booklet form v stapled sheets. Result: Not significant.

Time cue
  Hornik (1981): 20m v 40m v control. Result: Not significant.

A further issue with response inducing strategies is their potential to introduce systematic sampling bias (Warriner et al. 1996). This bias can be manifested by differences between groups under study ('internal' response bias), or by participants being non-representative of their population ('external' non-response bias). Most often the concern is over some form of external socio-demographic bias (Chebat and Cohen 1993; Green 1996); this is normally checked against known parameters of the sample or of the population (Downs and Kerr 1986; Dickinson and Faria 1995; Collins, Ellickson, Hays and McCaffrey 2000). In most cases, however, the questions of main interest to the researcher have unknown population or sample parameters,

implying that the best available test is to check for unexpectedly recurring distinctions between the groups under study (Haggett and Mitchell 1994; Angur and Nataraajan 1995; Lau and May 1998). We therefore tested the quality of our data not only by looking at non-responses to our most critical item, firm profitability, but also by examining whether our participants exhibited any form of systematic difference, questionnaire item by questionnaire item, or compared to the Western Australian small business population. We now turn to the details of our study.

Methodology

This study tested the effectiveness of personalization/telephone pre-notification and/or an offered monetary incentive against a control condition with no personalization/pre-notification and no incentive. The survey instrument was also printed on both white and colored paper, given the lack of coherence over the effect of color on industrial surveys (Pressley and Tullar 1977; Greer and Lohtia 1994; LaGarce and Kuhn 1995). The data were collected in late 2000 and early 2001. The promised monetary incentive was set at $25 in recognition of the amount of effort required to complete the questionnaire (Diamantopoulos et al. 1991) rather than as part of a 'social contract' (see discussion in Armstrong and Yokum 1994). Those offered the monetary incentive could also choose to maintain their anonymity (Tyagi 1989; Singer, Von and Miller 1995; Faria and Dickinson 1996); by being asked to identify the payee, respondents could nominate third parties (for example, a charity) to receive the payment [3]. The sampling frame for this study was drawn from businesses listed in the Perth, Western Australia, telephone directory and purchased from a market research firm. While the use of telephone directories has historically led to significant sampling bias in the general population (Grossman and Weiland 1978), we considered such a bias to be unlikely for our survey, as commercial needs would ensure that few, if any, businesses would lack a telephone number. To ensure that we sampled only small and medium enterprises, the business names listed in the sampling frame were reviewed, and any listed government-based enterprises or publicly owned firms (and their subsidiaries) were removed. Pre-notification was conducted by telephone (Hansen, Tinney and Rudelius 1983; Jobber et al. 1985). As the sampling frame was of businesses and not individuals, the purpose of this

[3] Assurances of anonymity were placed on the survey's cover and before the questions on owner-operator demographics. This second assurance was provided given Faria and Dickinson's (1996) conclusion that this may increase the response rate.

pre-notification call was two-fold: firstly, to obtain the name of the principal owner-operator of the business and, secondly, to confirm the business's mailing address. We requested the name of the principal owner-operator to allow personalised addressing to that owner [4] in an attempt to avoid the 'gatekeeper' (London and Dommeyer 1990). We also hoped that personalization would get our message through the owner-operator's 'attention filter' (Diamantopoulos et al. 1991). Piazza suggested that reaching 'an answering machine initially can be considered a promising outcome' (Piazza 1993, p. 231) [5]. Our practice on encountering an answering machine was to leave a short message giving details of why we called and requesting a call-back. If no reply was received within 48 hours we called again, but did not leave a second message if we again encountered the answering machine, as two messages could 'reduce (respondents) willingness to participate in the survey' (Tuckel and Shukers 1997, p. 7). Industrial studies that vary the sponsoring organization have generally found that university sponsorship enhances response rates when compared to commercial sponsorship (Faria and Dickinson 1992; Schneider and Johnson 1995; Diamantopoulos and Schlegelmilch 1996). Our survey documentation therefore highlighted the sponsorship of both supporting bodies [6] and included the Universities' crests on the questionnaire. Special color-printed letterhead was also used for the covering letters, given the evidence of Erdogan and Baker (2002). The survey instrument was entitled 'Attitudes and Expectations of the Self-employed' and was based on Dillman's (1978) Total Design Method. Our principal concern with the instrument was its length (Greer et al. 2000), despite inconclusive findings on the impact questionnaire length has on response rates (Diamantopoulos and Schlegelmilch 1996). Comprising 240 items in eight sections, the questionnaire followed Dillman's (1978) procedure by being printed in booklet format and without questions on the front and back covers [7]. An invitation to comment and plenty of 'white' space were included at the rear of the instrument. The questionnaire sought many potentially sensitive details from owner-operators about themselves and their firms. The first question was easy to answer, neutral and interesting (Fahy 1998), followed by queries on the individual's psychological makeup (e.g., personality

[4] For the no pre-notification group we addressed the material to The Principal and headed our letter Dear Sir/Madam.
[5] Where we did not encounter an answering machine but the number rang 'busy' or 'engaged' we re-called twice more in the following 48 hours. Firms that could not be contacted during this period were then replaced.
[6] The University of Western Australia and Edith Cowan University.
[7] The instrument contained 16 pages on 4 A3 sheets, folded and stapled.

inventory); their satisfaction with the job and business goals (e.g., Minnesota Satisfaction Questionnaire); personal demographics (e.g., age); and non-financial firm-related demographics (e.g., location of premises). The last items in the questionnaire were those deemed to be the most 'objectionable' (Dillman 1978); in this case, the questions on the firm's financial standing (Gronhaug, Gilly and Enis 1988). Almost all questionnaire items were closed-ended; those which were open-ended (e.g., industry) could be answered in very few words. Questionnaires were printed on white, green and blue paper for firms to whom the monetary incentive was offered, and on these colors plus sand, yellow, and pink paper for the firms without the monetary incentive. Hornik (1981) indicated that providing a time cue should increase mail survey response rates compared to not providing a cue [8]. An expected completion time of 30 minutes, based on the experience of pre-testing, was prominently noted on the front of the questionnaire and in the covering letter. The deadline for survey return was given in the letter and on the questionnaire. The remainder of the survey methodology used a modified version of Dillman's (1978) Total Design Method [9]. The covering letter was one page, dated, and signed by the chief researcher. Pre-printed reply-paid envelopes were included in the survey packet, which was placed in an envelope of sufficient size that neither questionnaire nor letter needed to be folded. A postcard reminder was not used, but a letter and replacement questionnaire, with a revised date for return, were sent in the week following the initial deadline.

Results

Printing on colored paper was not effective in increasing response rates, as can be seen in Table 5, which shows the response rates for each of the colors used and the probabilities that these rates differed from those for white paper. The individual strategy analyses in Panel A and the combined strategy analyses in Panel B generated only one p-value less than 0.05 (Panel B, no monetary incentive, White v Pink). This was not statistically significant after making a Bonferroni (School of Psychology - University of New England 2000) correction [10].

[8] He even suggested that a slightly unrealistic 'short' cue might be better than an accurate estimation.
[9] Not dissimilar to that used by Paxson (1992) and Fahy (1998).
[10] The adjusted critical alpha for 30 tests is 0.00171. If no Bonferroni (School of Psychology - University of New England 2000) correction were applied, the probability of finding one or more significant differences in repeated 'non-independent' tests using a critical alpha of 0.05 would be 78.5%.
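The two figures in footnote 10 follow from standard multiple-comparison arithmetic. A minimal sketch (note that the reported 0.00171 matches the Sidak variant of the correction, 1 - (1 - alpha)^(1/m), rather than the simpler alpha/m):

```python
# Multiple-comparison arithmetic behind footnote 10 (m = 30 color tests).
# Sketch only: the paper cites a Bonferroni calculator, and its reported
# 0.00171 corresponds to the Sidak form of the adjustment.
alpha, m = 0.05, 30

# Family-wise chance of at least one spuriously "significant" result
familywise = 1 - (1 - alpha) ** m          # the paper's 78.5%

# Adjusted per-test critical alpha
sidak = 1 - (1 - alpha) ** (1 / m)         # 0.00171, as reported
bonferroni = alpha / m                     # 0.00167, the simpler form

print(round(familywise, 3), round(sidak, 5), round(bonferroni, 5))
```

Under either adjusted alpha, the single p-value of 0.035 in Table 5 falls well short of significance.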

Table 5: Response rates for questionnaires received – Color effects

Panel A: Individual strategies (p-values compare each color against white within the same column)

          Monetary incentive*    No monetary incentive   Pre-notified            Not pre-notified
          Rate      p-value      Rate      p-value       Rate      p-value       Rate      p-value
White     26.7%     -            15.1%     -             28.5%     -             18.6%     -
Green     25.4%     0.760        17.6%     0.511         29.5%     0.838         16.7%     0.575
Blue      25.8%     0.823        12.4%     0.387         26.4%     0.663         19.2%     0.862
Colored   25.6%     0.745        17.9%     0.332         26.2%     0.506         15.8%     0.252

Panel B: Combined strategies

Monetary incentive*
          Pre-notified           Not pre-notified
          Rate      p-value      Rate      p-value
White     31.1%     -            23.1%     -
Green     29.2%     0.867        22.9%     0.607
Blue      32.3%     0.772        21.2%     0.721
Colored   30.7%     0.942        22.0%     0.797

No monetary incentive
          Pre-notified           Not pre-notified
          Rate      p-value      Rate      p-value
White     23.0%     -            8.8%      -
Sand      29.0%     0.405        10.8%     0.693
Green     20.6%     0.721        15.6%     0.116
Yellow    23.2%     0.973        14.7%     0.167
Blue      20.6%     0.956        6.2%      0.449
Pink      28.3%     0.785        18.5%     0.035
Colored   24.4%     0.462        13.2%     0.198

* Monetary incentive response rates are based on the raw sample total, that is, the number of survey packets posted less the number of packets undeliverable.

Panel A of Table 6 reveals that the personalization/pre-notification strategy and the offered monetary incentive were effective in increasing response rates. All individual strategy analyses were statistically significant at 5% alpha [11], whether by initial/overall mailing or by data quality [12]. Panel B of Table 6 shows that the combination of strategies affected response rates in the expected direction: the condition combining both strategies generated the highest response rate, and the condition with neither strategy the lowest [13]. The statistical analyses in Panel B show that pre-notification was especially effective where no monetary incentive was offered, implying

[11] The Bonferroni (School of Psychology - University of New England 2000) correction was not required as all tests in Table 6 are independent.
[12] Questionnaires received are defined as those at least 90% complete; quality questionnaires had to include the most 'objectionable' item (firm profitability) with no significant omissions elsewhere; and complete questionnaires were those without omissions.
[13] We had no prior views on the effectiveness of one strategy compared to the other, but the monetary incentive without pre-notification and pre-notification without monetary incentive conditions seemed to generate similar response rates.

this might be a useful technique if cash resources are limited. No statistically significant differences were found for pre-notification when the monetary incentive was offered.

Table 6: Cumulative response rates by quality of questionnaire

Panel A: Individual strategies

                    Monetary     No monetary   p-value   Pre-notified   Not            p-value
                    incentive#   incentive                              pre-notified
Received   Initial  15.8%        11.0%         0.001     15.9%          10.4%          0.000
           Overall  24.6%        16.0%         0.000     22.8%          16.4%          0.000
Quality    Initial  11.3%        7.0%          0.000     10.8%          6.9%           0.000
           Overall  18.3%        11.1%         0.000     16.6%          11.7%          0.001
Complete   Initial  8.1%         4.4%          0.000     7.7%           4.3%           0.000
           Overall  11.4%        6.4%          0.000     10.5%          6.5%           0.000

Panel B: Combined strategies

                    Monetary incentive#                  No monetary incentive
                    Pre-         Not           p-value   Pre-           Not            p-value
                    notified     pre-notified            notified       pre-notified
Received   Initial  18.3%        13.8%         0.072     14.3%          8.1%           0.000
           Overall  27.1%        22.4%         0.132     20.1%          12.4%          0.000
Quality    Initial  12.9%        9.9%          0.158     9.5%           4.8%           0.001
           Overall  19.4%        17.4%         0.476     14.8%          7.8%           0.000
Complete   Initial  9.2%         7.2%          0.245     6.7%           2.4%           0.000
           Overall  12.0%        10.8%         0.568     9.5%           3.6%           0.000

# Monetary incentive response rates are calculated using the adjusted sample total, that is, the number of survey packets posted plus the number of uninterested or inapplicable pre-contacts less the number of packets undeliverable.
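The p-values in Tables 5 and 6 compare response rates between two groups, which is a test of a difference between two proportions. A minimal sketch of such a test (the counts below are illustrative round numbers, not the paper's exact cell sizes):

```python
import math

def two_proportion_p(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two response rates."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the normal tail
    return math.erfc(abs(z) / math.sqrt(2))

# Illustrative only: a 20% v 12% gap on samples of 600 and 900 is
# highly significant, while a 27% v 25% gap on two samples of 400 is not.
print(two_proportion_p(120, 600, 108, 900))
print(two_proportion_p(108, 400, 100, 400))
```

This pattern matches Table 6, where the pre-notification effect is significant only in the larger no-incentive cells, while the smaller gaps between the two incentive conditions are not.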

Table 7 reports the cost effectiveness of our two main strategies. Panel A shows the relative costs per questionnaire from the original mailing (10 working days after the initial deadline), with Panel B revealing the comparative expenses at the end of the data collection. These relative costs have been calculated both with and without the cost of labor; for many studies labor is not a relevant cost [14] and hence could be excluded from the cost/benefit analysis.

[14] Except, perhaps, for the opportunity cost of the researcher's time.

In a time-sensitive survey for which only one mailing is possible and where labor costs are relevant [15], our results suggest that no response-inducing strategies be used. The no monetary incentive and not pre-notified condition had the lowest cost per questionnaire received ($35.79 v $44.88 and above) and the lowest cost per quality questionnaire ($59.64 v $62.34 and above). However, if a fully complete response is required to an instrument of this length, the exclusion of inducements may not be cost effective, as this condition generated the highest cost per complete questionnaire ($122.43 v $116.46 and below). When the second mailing was conducted the pattern changed [16]. While the no inducement condition continued to be the most cost effective for questionnaires received ($42.88 v $44.64 and higher), this was not the case for quality ($68.06 ranks second highest) or complete questionnaires ($147.86 ranks highest). In these cases, pre-notification without monetary incentive was the most cost effective, although the difference between this condition and the monetary incentive without pre-notification was negligible for quality questionnaires ($60.94 v $60.64) and very similar for complete questionnaires ($98.12 v $94.53). Removal of labor costs from the calculation radically changes the results. Under this circumstance, the pre-notification without monetary incentive condition reigns supreme. For the single mailing, the costs per received, quality, or complete questionnaire were about one-half of the 'next best' condition ($13.56 v $26.18 and above; $20.54 v $43.63 and above; and $29.10 v $63.25 and above, respectively). The efficacy of the no incentive but pre-notified condition was equally pronounced when the second mailing was added ($16.61 v $31.95 and above for received; $22.57 v $47.31 and above for quality; and $35.18 v $76.03 and above for complete). Checks for sampling bias were conducted in three ways. Firstly, we tested whether our respondents were representative of the small business population [17]. Secondly, we examined our data to see if any bias had been introduced by our experimental conditions [18]. Finally, we checked for potential non-response bias by the commonly used method of considering 'late' respondents as representative of those who did not respond [19].

[15] Panel A of Table 7.
[16] Panel B of Table 7.
[17] An 'external' socio-demographic check.
[18] An 'internal' systematic difference check.
[19] An 'internal' systematic difference investigation as a proxy for 'external' non-response.
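The per-questionnaire cost figures quoted above are simple ratios of each condition's total cost to its questionnaire count. A minimal sketch using the overall (Panel B) totals with labor and the 'quality' questionnaire counts, as reconstructed from the paper's Table 7:

```python
# Cost per 'quality' questionnaire, overall data collection (with labor).
# Totals and counts are the Table 7, Panel B figures by condition.
total_cost = {
    "incentive, pre-notified": 6857.82,
    "incentive, not pre-notified": 5789.14,
    "no incentive, pre-notified": 6428.20,
    "no incentive, not pre-notified": 4287.84,
}
quality_count = {
    "incentive, pre-notified": 90,
    "incentive, not pre-notified": 95,
    "no incentive, pre-notified": 106,
    "no incentive, not pre-notified": 63,
}

cost_per_quality = {k: round(total_cost[k] / quality_count[k], 2)
                    for k in total_cost}
for condition, cost in cost_per_quality.items():
    print(f"{condition}: ${cost:.2f}")
```

The two single-strategy conditions come out at roughly $61 each, matching the abstract's observation that payment-only and pre-notification-only were the most cost effective per quality questionnaire.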

Table 7: Cost effectiveness by condition

Panel A: Initial mailing only

                                      Monetary incentive            No monetary incentive
                                  Pre-notified  Not pre-notified  Pre-notified  Not pre-notified
Labor                                $2 287.98           $499.52     $3 724.10           $624.40
Telephone                              $127.50                 -       $204.00                 -
Postage                                $491.96           $648.76       $688.94           $945.70
Printing                               $350.28           $493.08       $504.00           $756.00
Incentive [20]                       $1 750.00         $1 725.00             -                 -
Total costs                          $5 007.72         $3 366.36     $5 121.04         $2 326.10

No. of initial survey packets              417               587           600               900
Cost per initial survey packet
  with labor                            $12.01             $5.73         $8.54             $2.58
  without labor                          $6.52             $4.88         $2.33             $1.89

No. of questionnaires received              85                75           103                65
Cost per questionnaire received
  with labor                            $58.91            $44.88        $49.72            $35.79
  without labor                         $32.00            $38.22        $13.56            $26.18

No. of quality questionnaires               60                54            68                39
Cost per quality questionnaire
  with labor                            $83.46            $62.34        $75.31            $59.64
  without labor                         $45.33            $53.09        $20.54            $43.63

No. of complete questionnaires              43                39            48                19
Cost per complete questionnaire
  with labor                           $116.46            $86.32       $106.69           $122.43
  without labor                         $63.25            $73.51        $29.10            $89.56

Panel B: Overall

                                      Monetary incentive            No monetary incentive
                                  Pre-notified  Not pre-notified  Pre-notified  Not pre-notified
Labor                                $2 600.18           $923.22     $4 036.30         $1 092.70
Telephone                              $127.50                 -       $204.00                 -
Postage                                $840.70         $1 155.16     $1 212.66         $1 744.46
Printing                               $664.44           $935.76       $975.24         $1 450.68
Incentive [21]                       $2 625.00         $2 775.00             -                 -
Total costs                          $6 857.82         $5 789.14     $6 428.20         $4 287.84

No. of initial survey packets              417               587           600               900
Cost per initial survey packet
  with labor                            $16.45             $9.86        $10.71             $4.76
  without labor                         $10.21             $8.29         $3.99             $3.55

No. of questionnaires received             126               122           144               100
Cost per questionnaire received
  with labor                            $54.43            $47.45        $44.64            $42.88
  without labor                         $33.79            $39.88        $16.61            $31.95

No. of quality questionnaires               90                95           106                63
Cost per quality questionnaire
  with labor                            $76.20            $60.94        $60.64            $68.06
  without labor                         $47.31            $51.22        $22.57            $50.72

No. of complete questionnaires              56                59            68                29
Cost per complete questionnaire
  with labor                           $122.46            $98.12        $94.53           $147.86
  without labor                         $76.03            $82.47        $35.18           $110.18

[20] 70 payments were made to the pre-notified group (55 to individuals, 15 to a charity; a further 15 declined payment) and 69 to the not pre-notified group (56 to individuals, 13 to a charity; 6 declined payment). The distributions were not significantly different at the 5% level.
[21] 105 payments were made to the pre-notified group (82 to individuals, 23 to a charity; 21 declined payment) and 111 to the not pre-notified group (89 to individuals, 22 to a charity; 11 declined payment). The distributions were not significantly different at the 5% level.

Tests for 'external' socio-demographic bias were completed by comparing our received sample data against estimates for self-employed businesses in Western Australia (Australian Bureau of Statistics 2000a, 2000b). We found that our sample parameters were generally similar to the Bureau's estimates and concluded that ours is an unbiased sample.

Using the parametric Student's t and the non-parametric Kolmogorov-Smirnov Z, Mann-Whitney U, and Pearson chi-square tests, we tested for systematic bias from the use of colored paper, the offer of monetary incentives, and personalization/pre-notification. We found that our experiment was bias-free, as no statistically significant differences were found for any of the response-inducing strategies on any of the questionnaire items. We also checked whether the probabilities generated by these tests were skewed, using the Kolmogorov-Smirnov Z, and could find no statistical evidence that these probabilities were anything other than uniformly distributed. Finally, we found no evidence of non-response bias when testing for differences between respondents to our initial and follow-up mailings, both with and without allowance for experimental conditions. The statistical tests used were those noted above.

Discussion

Our experimental results on the use of colored paper suggest that small business researchers can remove this from their list of potential response-inducing strategies. While colored paper is not the absolute deterrent to response suggested by Pressley and Tullar (1977), our results were consistent with the conclusions of Greer and Lohtia (1994), Jobber and O'Reilly (1998),

and Greer et al. (2000) that the use of colored paper does not increase response rates. However, the prior findings of LaGarce and Kuhn (1995) may still be valid, given that they used colored inks.

The promise of a $25 payment did result in a significantly higher response rate and, therefore, our results conflict with the 'no social contract' conclusion of James and Bolstein (1992). We suspect that this inconsistency may be a function of questionnaire size relative to the payment offered [22]. It could be that the need for participants to identify themselves in order to receive the offered $50 generated suspicion among James and Bolstein's (1992) participants, hence the low response rate. Dillman also suggested that 'The closer the monetary incentive comes to the value of the service performed, the more the transaction tends to move into the realm of economic exchange and the easier it becomes for many people to refuse it' (Dillman 1978, p. 16). This does not seem to have been an issue for our survey.

The pre-notification and personalization strategy also cost-effectively increased response rates, a result at odds with Greer et al. (2000) but consistent with Martin et al. (1989), Duhan and Wilson (1990) and Murphy et al. (1991). We did not follow the research design of McLaren and Shelley (2000), and hence do not know whether written rather than verbal pre-notification would have been more cost effective.

Possibly our most striking finding, however, is the impact that labor costs can have on the efficacy of survey methodology. If labor resources are available at no extra cost, we suggest that the most efficient response-inducing strategies are labor intensive, even though these may not be the most effective at raising response rates. Where labor resources are available but must be purchased, our results are not as clear. Nevertheless, we would still recommend the inclusion of some response-inducing practices, such as university sponsorship.
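The Pearson chi-square checks reported above (for example, comparing how incentive payments were taken up — paid to the individual, directed to a charity, or declined — between the pre-notified and not pre-notified groups) can be reproduced with the standard library alone. The helper below is our own illustrative sketch, not the authors' code; 5.991 is the 5% critical value for two degrees of freedom:

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for a two-way contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat

# Payment take-up by condition: paid to individual / to charity / declined
payments = [[55, 15, 15],   # pre-notified
            [56, 13, 6]]    # not pre-notified
stat = chi_square_stat(payments)
# df = (2-1) * (3-1) = 2, so the 5% critical value is 5.991
print(round(stat, 2), stat < 5.991)  # 3.4 True -> not significant at 5%
```

The statistic of about 3.4 falls below the critical value, consistent with the paper's finding that the two distributions do not differ significantly.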
Conclusion

Historically, low response rates have been a major concern for academic researchers of small and medium enterprises. A number of response-inducing strategies have therefore been suggested, and we tested three of these in our study (promised monetary incentives, telephone pre-notification, and the use of colored paper for the survey instrument). We find that monetary incentives and pre-notification increase response rates, but that their combined use is not cost effective. Our results therefore show that small business researchers should evaluate the

[22] Their two-page questionnaire comprised 14 questions, compared with our 16-page questionnaire of 240 items.


trade-off between increasing response rates and the costs this brings, especially if the quality of the data received is important. We trust that this study provides assistance to our fellow researchers.

REFERENCES

Albaum, G. (1987). "Do source and anonymity affect mail survey results?" Journal of the Academy of Marketing Science 15(3): 74.
Angur, M. G. and R. Nataraajan (1995). "Do source of mailing and monetary incentives matter in international industrial mail surveys?" Industrial Marketing Management 24(5): 351-357.
Armstrong, J. S. and J. T. Yokum (1994). "Effectiveness of monetary incentives: Mail surveys to members of multinational professional groups." Industrial Marketing Management 23(2): 133-136.
Australian Bureau of Statistics (2000a). Characteristics of small business. Canberra, ACT: Australian Bureau of Statistics.
Australian Bureau of Statistics (2000b). West Australian statistical indicators. Canberra, ACT: Australian Bureau of Statistics.
Berdie, D. R. and J. F. Anderson (1976). "Mail questionnaire response rates - updating outmoded thinking." Journal of Marketing 40(1): 71.
Brennan, M. (1992). "The effect of a monetary incentive on mail survey response rates: New data." Journal of the Market Research Society 34(2): 173-177.
Brennan, M. and J. Hoek (1992). "The behavior of respondents, nonrespondents, and refusers across mail surveys." Public Opinion Quarterly 56(4): 530.
Brennan, M., J. Hoek and C. Astridge (1991). "The effects of monetary incentives on the response rate and cost-effectiveness of a mail survey." Journal of the Market Research Society 33(3): 229-241.
Brown, M. (1994). "What price response?" Journal of the Market Research Society 36(3): 227-244.
Busenitz, L. W. (1996). "Research on entrepreneurial alertness." Journal of Small Business Management 34(4): 35-44.
Chay, Y. W. (1993). "Social support, individual differences and well-being: A study of small business entrepreneurs and employees." Journal of Occupational & Organizational Psychology 66(Part 4): 285-302.
Chebat, J.-C. and A. Cohen (1993). "Response speed in mail surveys: Beware of shortcuts." Marketing Research 5(2): 20.
Childers, T. L. and S. J. Skinner (1996). "Toward a conceptualization of mail survey response behavior." Psychology & Marketing 13(2): 185-209.
Church, A. H. (1993). "Estimating the effect of incentives on mail survey response rates: A meta-analysis." Public Opinion Quarterly 57(1): 62-79.
Coleman, S. (2000). "Access to capital and terms of credit: A comparison of men- and women-owned small businesses." Journal of Small Business Management 38(3): 37-52.
Collins, R. L., P. L. Ellickson, R. D. Hays and D. F. McCaffrey (2000). "Effects of incentive size and timing on response rates to a follow-up wave of a longitudinal mailed survey." Evaluation Review 24(4): 347-363.
Cragg, P. B. and M. King (1988). "Organizational characteristics and small firms' performance revisited." Entrepreneurship Theory and Practice 13(2): 49-64.


Diamantopoulos, A. and B. B. Schlegelmilch (1996). "Determinants of industrial mail survey response: A survey-on-surveys analysis of researchers' and managers' views." Journal of Marketing Management 12: 505-531.
Diamantopoulos, A., B. B. Schlegelmilch and L. Webb (1991). "Factors affecting industrial mail response rates." Industrial Marketing Management 20(4): 327.
Dickinson, J. R. and A. J. Faria (1995). "Refinements of charitable contribution incentives for mail surveys." Journal of the Market Research Society 37(4): 447-453.
Dillman, D. A. (1978). Mail and telephone surveys: The total design method. New York, New York: John Wiley.
Dommeyer, C. J. (1985). "Does response to an offer of mail survey results interact with questionnaire interest?" Journal of the Market Research Society 27(1): 27.
Dommeyer, C. J. (1988). "How form of the monetary incentive affects mail survey response." Journal of the Market Research Society 30(3): 379.
Downs, P. E. and J. R. Kerr (1986). "Recent evidence on the relationship between anonymity and response variables for mail surveys." Journal of the Academy of Marketing Science 14(1): 72.
Duhan, D. F. and R. D. Wilson (1990). "Pre-notification and industrial survey responses." Industrial Marketing Management 19: 95-105.
Erdogan, B. Z. and M. J. Baker (2002). "Increasing mail survey response rates from an industrial population: A cost-effectiveness analysis of four follow-up techniques." Industrial Marketing Management 31(1): 65-73.
Everett, S. A., J. H. Price, A. W. Bedell and S. K. Telljohann (1997). "The effect of a monetary incentive in increasing the return rate of a survey to family physicians." Evaluation & the Health Professions 20(2): 207-214.
Fagenson, E. A. (1993). "Personal value systems of men and women entrepreneurs versus managers." Journal of Business Venturing 8(5): 409-430.
Fahy, J. (1998). "Improving response rates in cross-cultural mail surveys." Industrial Marketing Management 27(6): 459-467.
Faria, A. J. and J. R. Dickinson (1992). "Mail survey response, speed, and cost." Industrial Marketing Management 21(1): 51.
Faria, A. J. and J. R. Dickinson (1996). "The effect of reassured anonymity and sponsor on mail survey response rate and speed with a business population." Journal of Business and Industrial Marketing 11(1): 66-76.
Fiorito, S. S. and R. W. LaForge (1986). "A marketing strategy analysis of small retailers." Entrepreneurship Theory and Practice 10(4): 7-17.
Forsgren, R. A. (1989). "Increasing mail survey response rates: Methods for small business researchers." Journal of Small Business Management 27(4): 61-66.
Fox, C. M., K. L. Robinson and D. Boardley (1998). "Cost-effectiveness of follow-up strategies in improving the response rate of mail surveys." Industrial Marketing Management 27(2): 127-133.
Gajraj, A. M., A. J. Faria and J. R. Dickinson (1990). "A comparison of the effect of promised and provided lotteries, monetary and gift incentives on mail survey response rate, speed and cost." Journal of the Market Research Society 32(1): 141-162.
Gaskill, L. R., H. E. Van Auken and R. A. Manning (1993). "A factor analytic study of the perceived causes of small business failure." Journal of Small Business Management 31(4): 18-31.

Goetz, E. G., T. R. Tyler and F. L. Cook (1984). "Promised incentives in media research: A look at data quality, sample representativeness, and response rate." Journal of Marketing Research 21(2): 148.
Green, K. E. (1996). "Sociodemographic factors and mail survey response." Psychology & Marketing 13(2): 171.
Greer, T. V., N. Chuchinprakam and S. Seshadri (2000). "Likelihood of participating in mail survey research: Business respondents' perspective." Industrial Marketing Management 29(2): 97-119.
Greer, T. V. and R. Lohtia (1994). "Effects of source and paper color on response rates in mail surveys." Industrial Marketing Management 23(1): 47.
Gronhaug, K., M. C. Gilly and B. M. Enis (1988). "Exploring income non-response: A logit model analysis." Journal of the Market Research Society 30(3): 371.
Grossman, R. and D. Weiland (1978). "The use of telephone directories as a sample frame: Patterns of bias revisited." Journal of Advertising 7(3): 31.
Haggett, S. and V.-W. Mitchell (1994). "Effect of industrial prenotification on response rate, speed, quality, bias, and cost." Industrial Marketing Management 23(2): 101-110.
Hansen, R. A., C. H. Tinney and W. Rudelius (1983). "Increase response to industrial surveys." Industrial Marketing Management 12(3): 165.
Hare, S., J. H. Price, M. G. Flynn and K. A. King (1998). "Increasing return rates of a mail survey to exercise professionals using a modest monetary incentive." Perceptual and Motor Skills 86(1): 217-218.
Hite, P. A. (1998). "An examination of factors influencing financial reporting decisions of small business owner-managers." Behavioral Research in Accounting 10: 159-178.
Hornik, J. (1981). "Time cue and time perception effect on response to mail surveys." Journal of Marketing Research 18(2): 243.
Hubbard, R. and E. L. Little (1988). "Promised contributions to charity and mail survey responses: Replication with extension." Public Opinion Quarterly 52(2): 223.
Igbaria, M., N. Zinatelli, P. Cragg and A. L. M. Cavaye (1997). "Personal computing acceptance factors in small firms: A structural equation model." MIS Quarterly 21(3): 279-302.
James, J. M. and R. Bolstein (1990). "The effect of monetary incentives and follow-up mailings on the response rate and response quality in mail surveys." Public Opinion Quarterly 54(3): 346.
James, J. M. and R. Bolstein (1992). "Large monetary incentives and their effect on mail survey response rates." Public Opinion Quarterly 56(4): 442.
Jobber, D., N. Allen and J. Oakland (1985). "The impact of telephone notification strategies on response to an industrial mail survey." International Journal of Research in Marketing 2(4): 291.
Jobber, D., K. Birro and S. M. Sanderson (1988). "A factorial investigation of methods of stimulating response to a mail survey." European Journal of Operational Research 37(2): 158.
Jobber, D. and D. O'Reilly (1998). "Industrial mail surveys: A methodological update." Industrial Marketing Management 27(2): 95-107.
Kanuk, L. and C. Berenson (1975). "Mail surveys and response rates - a literature review." Journal of Marketing Research 12(4): 440-453.
LaGarce, R. and L. D. Kuhn (1995). "The effect of visual stimuli on mail survey response rates." Industrial Marketing Management 24(1): 11.
Lambert, D. M. and T. C. Harrington (1990). "Measuring nonresponse bias in customer service mail surveys." Journal of Business Logistics 11(2): 5.

Lau, R. S. M. and B. E. May (1998). "Response bias issues in survey assessment of work life and work relationships." South Dakota Business Review 57(2): 1-4.
London, S. J. and C. J. Dommeyer (1990). "Increasing response to industrial mail surveys." Industrial Marketing Management 19(3): 235.
Martin, W. S., W. J. Duncan, T. L. Powers and J. C. Sawyer (1989). "Costs and benefits of selected response inducement techniques in mail survey research." Journal of Business Research 19(1): 67.
McGee, J. E. and M. Peterson (2000). "Toward the development of measures of distinctive competencies among small independent retailers." Journal of Small Business Management 38(2): 19-33.
McLaren, B. and J. Shelley (2000). "Response rates of Victorian general practitioners to a mailed survey on miscarriage: Randomised trial of a prize and two forms of introduction to the research." Australian and New Zealand Journal of Public Health 24(4): 360-364.
Morris, M. H. and S. Zahra (2000). "Adaptation of the business concept over time: The case of historically disadvantaged South African owner/managers." Journal of Small Business Management 38(1): 92-100.
Murphy, P. R., J. M. Daley and D. R. Dalenberg (1991). "Exploring the effects of postcard prenotification on industrial firms' response to mail surveys." Journal of the Market Research Society 33(4): 335-341.
Newby, R. R. and M. Smith (1999). "A comparison of the impact of franchising on return and risk for two Australian industries." Accounting Forum 23(2): 193-205.
Ostgaard, T. A. and S. Birley (1996). "New venture growth and personal networks." Journal of Business Research 36(1): 37-50.
Paolillo, J. G. P. and P. Lorenzi (1984). "Monetary incentives and mail questionnaire response rates." Journal of Advertising 13(1): 46.
Paxson, M. C. (1992). "Follow-up mail surveys." Industrial Marketing Management 21(3): 195.
Peacock, J. D. (1996). "Yes, you can raise response rates." Journal of Advertising Research 36(1): RC7.
Piazza, T. (1993). "Meeting the challenge of answering machines." Public Opinion Quarterly 57(2): 219.
Pressley, M. M. and W. Tullar (1977). "A factor interactive investigation of mail survey response rates from a commercial population." Journal of Marketing Research 41: 108-112.
Schlegelmilch, B. B. and A. Diamantopoulos (1991). "Prenotification and mail survey response rates: A quantitative integration of the literature." Journal of the Market Research Society 33(3): 243-255.
Schneider, K. C. and J. C. Johnson (1995). "Stimulating response to market surveys of business professionals." Industrial Marketing Management 24(4): 265-276.
School of Psychology, University of New England (2000). Bonferroni. Accessed 2002.
Shaw, M. J., T. J. Beebe, H. L. Jensen and S. A. Adlis (2001). "The use of monetary incentives in a community survey: Impact on response rates, data quality, and cost." Health Services Research 35(6): 1339-1346.
Singer, E., T. D. R. Von and E. R. Miller (1995). "Confidentiality assurances and response - a quantitative review of the experimental literature." Public Opinion Quarterly 59(1): 66.
Skinner, S. J., A. J. Dubinsky and T. N. Ingram (1983). "Impact of humor on survey responses." Industrial Marketing Management 12(2): 139.

Thong, J. and C. S. Yap (1995). "CEO characteristics, organizational characteristics and information technology adoption in small businesses." Omega 23(4): 429-442.
Tuckel, P. and T. Shukers (1997). "The answering machine dilemma." Marketing Research 9(3): 4-9.
Turley, S. K. (1999). "A case of response rate success." Journal of the Market Research Society 41(3): 301-309.
Tyagi, P. K. (1989). "The effects of appeals, anonymity, and feedback on mail survey response patterns from salespeople." Journal of the Academy of Marketing Science 17(3): 235.
Warriner, K., J. Goyder, H. Gjertsen, P. Hohner and K. McSpurren (1996). "Charities, no; lotteries, no; cash, yes: Main effects and interactions in a Canadian incentives experiment." Public Opinion Quarterly 60(4): 542-562.
Woodruff, S. I., T. L. Conway and C. C. Edwards (2000). "Increasing response rates to a smoking survey for U.S. Navy enlisted women." Evaluation & the Health Professions 23(2): 172-181.
Wu, B. T. W. and J. Vosika (1983). "Improving primary research: An experimental study of mail survey response." Journal of Small Business Management 21(2): 30.
Wunder, G. C. and G. W. Wynn (1988). "The effects of address personalisation on mailed questionnaires response rate, time and quality." Journal of the Market Research Society 30(1): 95.
Yammarino, F. J., S. J. Skinner and T. L. Childers (1991). "Understanding mail survey response behavior: A meta-analysis." Public Opinion Quarterly 55(4): 613-639.
Zahra, S. A. and W. C. Bogner (2000). "Technology strategy and software new ventures' performance: Exploring the moderating effect of the competitive environment." Journal of Business Venturing 15(2): 135-173.

About the Authors
Rick Newby*, John Watson and David Woodliff
Email: [email protected]
Institution: The University of Western Australia, Department of Accounting and Finance
Country: Australia

