The Role of Cognitive Ability in Self-Efficacy and Self-Assessed Test Performance1

Donald M. Truxillo2 and Rainer Seitz
Department of Psychology
Portland State University

Talya N. Bauer
School of Business Administration
Portland State University

Research has shown that test takers are often unable to assess their own test performance accurately. However, the role of cognitive ability in assessing one’s test performance has not been explored. We examined whether high cognitive ability participants were better than low cognitive ability participants in assessing their performance on a video-based situational judgment test (SJT) of customer-service skills. Results indicated a strong relationship between actual and perceived SJT performance for high cognitive ability participants, but no relationship for those low in cognitive ability. The discussion focuses on implications for metacognitive theory, test perceptions, and providing feedback to applicants.

Many personnel selection researchers have moved beyond the traditional focus on test validity and now also consider test-taker perceptions, such as test-taking motivation (Arvey, Strickland, Drauden, & Martin, 1990; Clause, Delbridge, Schmitt, Chan, & Jennings, 2001; Sanchez, Truxillo, & Bauer, 2000), test-taking self-efficacy (e.g., Bauer, Maertz, Dolen, & Campion, 1998; Ellis & Ryan, 2003), and the perceived fairness of selection procedures (e.g., Maertz, Bauer, Mosley, Posthuma, & Campion, 2004; Ployhart & Ryan, 1997; Truxillo, Bauer, Campion, & Paronto, 2002). This interest has grown because these perceptions may affect important outcomes, such as test validity (e.g., Schmit & Ryan, 1992), organizational attractiveness (e.g., Bauer et al., 1998), and group differences in test performance (e.g., Ryan, 2001). One key finding of this line of research is that the favorability of the selection outcome, in terms of actual or perceived performance in the selection process, is a primary determinant of these applicant perceptions (e.g., Ryan & Ployhart, 2000). However, most research has found only a weak or inconsistent relationship between perceived and actual test performance.

1 An earlier version of this paper was presented at the annual conference of the Society for Industrial and Organizational Psychology, Toronto, Ontario, Canada, April 2002.
2 Correspondence concerning this article should be addressed to Donald M. Truxillo, Department of Psychology, Portland State University, P.O. Box 751, Portland, OR 97207. E-mail: [email protected]

Journal of Applied Social Psychology, 2008, 38, 4, pp. 903–918. © 2008 Copyright the Authors. Journal compilation © 2008 Blackwell Publishing, Inc.

That is, test takers are often unable to assess their own performance on selection tests accurately (e.g., Macan, Avedon, Paese, & Smith, 1994; Ryan & Ployhart, 2000). However, there is evidence that some people are better at judging their performance on tests than are others. For example, knowledge of a content domain appears to affect the accuracy of people's assessments of their own test performance (e.g., Hacker, Bol, Horgan, & Rakow, 2000; Kruger & Dunning, 1999).

Although few studies have explored the antecedents of self-assessed test performance, there are at least three reasons to look at these antecedents. First, because perceived test performance is itself a key determinant of applicant perceptions (e.g., Ryan & Ployhart, 2000), understanding the antecedents of perceived performance should enrich the theoretical understanding of applicant perceptions. Second, if individual differences (e.g., cognitive ability) systematically moderate the relationship between actual and perceived test performance, this could lead to errors in estimating the relationship between test performance and key outcomes. Third, greater understanding of individual differences and their effects on perceived performance could help in the improved presentation of performance feedback so that it is understood clearly by more people. For example, this could have important implications for providing training feedback, or it could lead to more people seeking remediation through test-preparation programs.

In the present study, we explore cognitive ability as a moderator of the relationship between actual and perceived test performance. The existence of such a moderator could explain the weak relationship observed between perceived and actual test performance (e.g., Macan et al., 1994). Specifically, we reason that because judging one's own performance on a test is an essentially cognitive task, those higher in cognitive ability should be better able to assess their own performance on a selection test. We used a video-based situational judgment test for a retail job to examine this issue. Finally, we examined whether cognitive ability would moderate the relationship between actual test performance and a related key outcome: test-taking self-efficacy.

Importance of Perceived Test Performance

One of the most consistent findings in research on applicant reactions is that the outcome that applicants receive (e.g., pass/fail) or their perceptions of how well they did on a selection hurdle affects applicant perceptions (Ryan & Ployhart, 2000). In fact, Ryan and Ployhart noted that outcome favorability is the most studied antecedent of applicant perceptions. For example,
Gilliland (1994) found that the actual hiring outcome for applicants (i.e., selected/rejected) was related to perceptions of process and outcome fairness. Similarly, Bauer et al. (1998) found that whether applicants actually passed or failed a selection procedure accounted for significant variance in such outcomes as the perceived fairness of testing and organizational attractiveness. Perceived test performance has also been shown to relate to fairness perceptions (e.g., Chan, Schmitt, Jennings, Clause, & Delbridge, 1998) and test-taking motivation (e.g., Sanchez et al., 2000).

In addition, perceived performance may be crucial to a number of other relevant personnel selection issues. For example, perceptions of ability and performance may be related to applicants' test-preparation activities (cf. Clause et al., 2001). Similarly, test-preparation programs are provided for applicants by many organizations (Ryan, Ployhart, Greguras, & Schmit, 1998). To the extent that applicants' perceptions of their abilities determine their attendance in such programs, the factors that affect such perceptions are worthy of further study. Finally, employees' perceptions of their performance or ability may be important to the success of training programs, affecting training motivation.

In actual selection settings, applicants must often rely on their own perceptions of how well they did on a selection procedure, since they commonly receive no specific feedback about their performance. Unfortunately, research has shown that many applicants may have difficulty estimating their own test performance (Ryan & Ployhart, 2000). For example, while some research has found that actual and perceived performance are related (e.g., Chan et al., 1998), other research has found only modest relationships between these variables (Macan et al., 1994).

Given the importance of perceived test performance, it is unfortunate that many applicants appear to judge inaccurately how well they performed on a selection test. Moreover, prior research has focused little attention on the determinants of applicants' perceptions of selection test performance. The present study explores cognitive ability as an explanation for the relationship between perceived and actual test performance.

Cognitive Ability and Perceived Test Performance

The relationship between actual and perceived performance on a selection device is largely a function of people's ability to judge their own performance or skill. This ability is part of a larger group of skills, referred to as metacognitive skills (e.g., Kraiger, Ford, & Salas, 1993). The relationship between expertise and metacognitive skills has been demonstrated in the literature. For example, those with greater expertise are better able to judge the number
of attempts they will require to accomplish a task (Chi, 1987), and better readers are better able to judge their own reading comprehension (Pressley, Snyder, Levin, Murray, & Ghatala, 1987).

Although the personnel psychology literature has not focused on the relationship between expertise and self-assessed test performance, studies from other areas of psychology suggest that those with lower ability or expertise have lower metacognitive skills and are thus less accurate in judging their own test performance. Kruger and Dunning (1999) found that those who performed in the bottom quartile on a range of tests (e.g., humor, grammar, logic) were also the least accurate in assessing their actual test performance. However, improving participants' abilities in a given domain increased their metacognitive skills and thus the accuracy of their self-assessments. Hacker et al. (2000) explored the relationship between students' actual and perceived test performance over a semester-length undergraduate course. They found that low-performing undergraduates grossly overestimated their performance both before (prediction) and after (postdiction) taking a test. These low-performing undergraduates were also less accurate in their estimates than were their higher performing counterparts. As explanations for this phenomenon, Hacker et al. suggested that those low in ability over-rely on their past predictions of how well they will do, rather than on actual performance, in making assessments, and tend to attribute poor test performance to external factors.

Because judging one's own performance on a selection test is an essentially cognitive task, we suggest that cognitive ability could affect metacognition and self-assessments. That is, just as ability levels within a content domain are related to metacognitive skills (e.g., Hacker et al., 2000; Kruger & Dunning, 1999), cognitive ability may enhance the ability to assess one's own performance. This is suggested by the fact that cognitive ability is related to the acquisition of job knowledge (e.g., Colquitt, LePine, & Noe, 2000; Ree, Carretta, & Teachout, 1995) and, as noted, knowledge or expertise is a key determinant of metacognition and the ability to judge one's test performance (e.g., Hacker et al., 2000; Kruger & Dunning, 1999). Cognitive ability has also been defined as the ability to process complex information (e.g., Gottfredson, 2002), an important factor in assessing one's performance. Moreover, research suggests that those higher in cognitive ability are more accurate at judging the performance of others (Hauenstein & Alexander, 1991). Therefore, to the extent that cognitive ability affects people's knowledge acquisition and expertise, it may likewise affect their ability to judge their own performance on a selection test.

In the present study, therefore, we extend the research in three ways. First, we study the role of cognitive ability in metacognition, in terms of accurately assessing test performance. Second, in doing so, we explore the moderating
effect of cognitive ability on the relationship between actual and perceived test performance, expecting that actual and perceived test performance will be more strongly related for those high in cognitive ability than for those low in cognitive ability. Third, we extend the work on self-assessed test performance (e.g., Hacker et al., 2000; Kruger & Dunning, 1999) by using a video-based situational judgment test for a retail job. Specifically, we hypothesize the following:

Hypothesis 1. Cognitive ability will moderate the relationship between actual and perceived performance on a situational judgment test. Specifically, there will be a positive relationship between actual and perceived test performance for those high in cognitive ability, while there will be no relationship for those low in cognitive ability.

Test-Taking Self-Efficacy

A key outcome of perceived test performance is test-taking self-efficacy. Whereas perceived performance is focused on perceptions of performance on a specific test the applicant has taken, test-taking self-efficacy captures a person's confidence in his or her ability to do well on tests in general. Test-taking self-efficacy has received some attention in the literature, both as an antecedent and as an outcome variable. Gilliland (1993) suggested the importance of self-efficacy in terms of applicant motivation in the selection process and future job-search activities. That is, self-efficacy could be important to rejected applicants' pursuit of future employment. Moreover, Gilliland suggested that procedural justice and distributive justice (outcomes) will interact to affect applicants' self-efficacy (cf. Brockner & Wiesenfeld, 1996). Gilliland (1994) found support for this general proposition in that procedural justice, in terms of job relatedness, had a negative effect on self-efficacy for rejected applicants, but a positive effect for selected applicants. Bauer et al. (1998) extended this research by exploring the effects of process-fairness dimensions and applicant outcomes (i.e., test performance) on test-taking self-efficacy, finding that process fairness interacted with whether applicants passed the test to affect general test-taking self-efficacy. That is, there was a positive relationship between process fairness and test-taking self-efficacy for applicants who passed the test, but a negative relationship for those who failed the test.

In the present study, we conceptualize perceived test performance and test-taking self-efficacy as related, but distinct, constructs. That is, perceived performance is focused largely on a given testing situation. In contrast,

test-taking self-efficacy is more stable and is based on performance in a given testing situation, past performance, and other self-perceptions (e.g., self-esteem). For this reason, we hypothesize that actual test performance will be related to test-taking self-efficacy for those high in cognitive ability, but not for those low in cognitive ability.

Hypothesis 2. Cognitive ability will moderate the relationship between situational judgment test performance and test-taking self-efficacy. Specifically, there will be a positive relationship between actual test performance and test-taking self-efficacy for those high in cognitive ability, while there will be no relationship for those low in cognitive ability.

Method

Participants

Study participants were 108 undergraduate business administration students (61 females, 47 males) from a western university. Participants' mean age was 25 years, and they had an average of 3.5 years of full-time and 3.6 years of part-time work experience. Participants' ethnicity was reported as follows: 58% White, 25% Asian, and 17% other. The participants received extra credit for their participation. In addition, to increase motivation on the situational judgment test, students were told that those who received the top four scores would each receive a prize of $25.

Procedure

Students were asked to participate in a study on reactions to personnel-selection tests and were given a cover letter explaining the study. Participation was voluntary and anonymous, and responses were matched by a code. Those who agreed to participate were given a survey that included a demographic sheet and pre-test measures. Once these were completed, participants were shown the video-based situational judgment test (SJT), which lasted about 15 min. After the test was over, participants were asked to wait while the tests were scored. Scoring took approximately 10 min. Participants were then given feedback about their test performance in terms of their test scores (out of 20), and they completed the post-test survey. Finally, participants completed the test of cognitive ability.
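As a concrete illustration, the scoring rule described in the Measures section (four response options per vignette, each empirically keyed -1, 0, or +1, summed over 10 items, with 10 points added to avoid negative totals) can be sketched as follows. The option letters and key values here are hypothetical; the actual empirical key is proprietary.

```python
# Hypothetical empirical key: item -> {option: keyed value}.
# In the real SJT, each option's value (-1, 0, or +1) was derived empirically.
key = {item: {"a": -1, "b": 0, "c": 1, "d": 0} for item in range(10)}

def score_sjt(answers):
    """Sum the keyed values for the 10 chosen options, then add 10
    so possible scores span 0 to 20 (no negative totals)."""
    raw = sum(key[item][choice] for item, choice in enumerate(answers))
    return raw + 10

# A respondent choosing the +1-keyed option on every item scores the maximum.
print(score_sjt(["c"] * 10))  # -> 20
print(score_sjt(["a"] * 10))  # -> 0
```

The +10 offset changes only the scale's origin, not its variance or its correlations with other measures.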

Measures

Demographic measures. The demographic component of the survey included measures of age, gender, and years of full-time and part-time work experience.

Cognitive ability. Cognitive ability was measured with the Wonderlic (2000) Personnel Test, a 12-min, 50-item test of cognitive ability. Test items are a combination of multiple-choice and fill-in-the-blank formats and include problems such as vocabulary, mathematical reasoning, perceptual relations, and clerical ability. Test questions are arranged in order of increasing difficulty, with average item difficulty such that 60% of respondents answer an item correctly. Research suggests that the Wonderlic test measures constructs similar to other cognitive ability tests, correlating, for example, .93 with the Wechsler Adult Intelligence Scale (Dodrill, 1983). The Wonderlic test has an established split-half reliability of .88 to .94. The overall mean (N > 100,000) for the measure is 21.06 (SD = 7.12; Wonderlic, 2000). The mean score for participants in the present study was 21.45 (SD = 5.20).

Video-based situational judgment test (SJT). We used 10 items from a video-based SJT developed for a large retailer in the western U.S. and described by Weekley and Jones (1997). The test was developed based on critical incidents to measure dimensions such as friendliness, diplomacy, team orientation, and values. It consists of videotaped vignettes of work situations that would be faced by persons in a retail job, including interactions with peers, supervisors, and customers. Immediately after each vignette, four possible responses are presented on the screen. Each response has been empirically keyed to have a value of -1, 0, or +1. Participants indicate their responses with paper and pencil on an answer sheet. The sum of the 10 items forms a person's test score. However, to eliminate the possibility of a person receiving a negative score, we added 10 points to the total, resulting in possible scores ranging from 0 to 20. Weekley and Jones (1997) found that this test correlated between .29 and .33 with their measure of cognitive ability, but accounted for significant incremental variance in job performance beyond that explained by cognitive ability and work experience.

Perceived test performance. We measured perceptions of test performance using a three-item scale adapted from Sanchez et al. (2000). A sample item is "I believe that I got a good score on the video test I took today." Responses were rated on a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). Coefficient alpha for post-test perceived performance was .89. Because research has indicated that post-test perceived performance is strongly correlated with pre-test predictions (e.g., Hacker
et al., 2000), we also assessed participants' pre-test predictions of test performance on the SJT (α = .90) for use as a control variable. Perceived test performance items (pre-test and post-test) are included in the Appendix.

Test-taking self-efficacy. We measured test-taking self-efficacy using a three-item scale adapted from Bauer et al. (1998). A sample item is "I am confident in my ability to do well on this sort of test." Responses were rated on a 5-point scale ranging from 1 (strongly disagree) to 5 (strongly agree). Coefficient alpha for post-test self-efficacy was .77. We also assessed participants' pre-test self-efficacy (α = .82) for use as a control variable. Test-taking self-efficacy items (pre-test and post-test) are included in the Appendix.
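The coefficient alpha reliabilities reported above can be computed from the item variances and the variance of the summed scale. A minimal sketch for a three-item Likert scale follows, using synthetic ratings rather than the study data:

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha: (k / (k - 1)) * (1 - sum of item variances /
    variance of the total score), for an (n_respondents, k_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Synthetic 5-point ratings for a three-item scale (illustrative only).
ratings = np.array([[4, 5, 4], [2, 2, 3], [5, 4, 5], [3, 3, 3], [1, 2, 1]])
print(round(cronbach_alpha(ratings), 2))
```

Alpha rises as the items covary strongly relative to their individual variances, which is why three highly parallel items can reach the .77 to .90 range reported here.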

Results

Table 1 presents the means, standard deviations, intercorrelations, and reliabilities for the measures. The demographic variables of age and gender were not correlated with perceived performance. Consistent with Weekley and Jones (1997), cognitive ability was weakly correlated with video test score (r = .22, p < .05). In addition, SJT score was not correlated with pre-test predicted performance (r = -.02, ns), but it was correlated with post-test perceived performance (r = .33, p < .01). Cognitive ability was positively correlated with pre-test predicted performance (r = .32, p < .01).

Because of the high correlation between test-taking self-efficacy and perceived test performance, we conducted a factor analysis (with principal axis factoring and varimax rotation) of the six post-test perceived performance and self-efficacy items. Two factors emerged (as indicated by eigenvalues > 1). Using factor loadings greater than .40, the items loaded onto the two expected scales with no cross-loadings. Therefore, we retained perceived performance and test-taking self-efficacy as separate scales.

To test Hypothesis 1 (i.e., that cognitive ability would moderate the relationship between actual and perceived test performance), we calculated a hierarchical regression equation with post-test perceived performance as the dependent variable. Pre-test predicted performance, cognitive ability test score, and SJT score were entered on Step 1. The interaction term (the centered product of cognitive ability score and SJT score) was entered on Step 2. The results are presented in Table 2. Hypothesis 1 was supported, as indicated by the significant increase in R² with the addition of the interaction term on Step 2 (ΔR² = .07), F(1, 102) = 9.04, p < .01. This interaction is shown graphically in Figure 1. As hypothesized, there was a strong relationship between actual and perceived performance for those high in cognitive ability, but there was no relationship for those low in cognitive ability.

[Figure 1. Moderating effect of cognitive ability on the relationship between test performance and perceived test performance. SJT = situational judgment test. SJT score and cognitive ability were centered by standardizing. The plot shows perceived test performance rising with SJT score (from -1 SD to +1 SD) for high cognitive-ability participants, but remaining flat for low cognitive-ability participants.]

As a further analysis of this moderator effect, we performed a median split on cognitive ability to compare the correlation between perceived performance and test score for the high and low cognitive-ability groups. This analysis indicated a strong correlation between actual test performance and perceived performance for the high cognitive-ability group (r = .55), but not for the low cognitive-ability group (r = .03). The difference between the correlations for the two groups was statistically significant (z = 2.98, p < .01).

Similarly, we used hierarchical regression to test Hypothesis 2 (i.e., that cognitive ability would moderate the relationship between test performance and test-taking self-efficacy). Pre-test self-efficacy, cognitive-ability test score, and SJT score were entered on Step 1. The interaction term (Cognitive Ability Score × SJT Score) was entered on Step 2. The results are presented in Table 2. Only marginal support was found for Hypothesis 2, as indicated by the increase in R² with the addition of the interaction term on Step 2 (ΔR² = .02), F(1, 103) = 2.77, p = .10.
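The moderated hierarchical regression and the median-split follow-up can be sketched with simulated data. Everything below is synthetic and illustrative, not the study data: the effect sizes are invented, and the pre-test control variable is omitted for brevity. The sketch fits the Step 1 and Step 2 models by ordinary least squares, computes the R² increment for the interaction term, and then applies Fisher's r-to-z test to the median-split correlations:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 108  # sample size matching the study

# Synthetic standardized scores (illustrative only; not the study data).
ability = rng.normal(size=n)  # cognitive ability, standardized
sjt = rng.normal(size=n)      # SJT score, standardized
# Stylized moderation: perceived performance tracks actual SJT score
# only in the high-ability half of the sample.
perceived = np.where(ability > np.median(ability), 0.7 * sjt, 0.0) \
    + rng.normal(scale=0.5, size=n)

def r_squared(y, predictors):
    """R^2 from an OLS fit with an intercept."""
    X = np.column_stack([np.ones(n)] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Step 1: main effects only; Step 2: add the centered product term.
r2_step1 = r_squared(perceived, [sjt, ability])
r2_step2 = r_squared(perceived, [sjt, ability, sjt * ability])
delta_r2 = r2_step2 - r2_step1
df_resid = n - 4  # intercept plus three predictors in the Step 2 model
f_change = delta_r2 / ((1 - r2_step2) / df_resid)
print(f"Delta R^2 = {delta_r2:.3f}, F(1, {df_resid}) = {f_change:.2f}")

# Median-split follow-up: Fisher r-to-z test for two independent correlations.
high = ability > np.median(ability)
r_hi = np.corrcoef(sjt[high], perceived[high])[0, 1]
r_lo = np.corrcoef(sjt[~high], perceived[~high])[0, 1]
z = (np.arctanh(r_hi) - np.arctanh(r_lo)) / np.sqrt(
    1 / (high.sum() - 3) + 1 / ((~high).sum() - 3))
print(f"r(high) = {r_hi:.2f}, r(low) = {r_lo:.2f}, z = {z:.2f}")
```

Plugging the study's reported median-split values into the same Fisher test (r = .55 vs. r = .03, assuming an even 54/54 split) gives z of approximately 2.97, consistent with the z = 2.98 reported above.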

Discussion

One purpose of the present study was to explore the role of cognitive ability in self-assessed test performance. We found that cognitive ability moderated the relationship between perceived and actual test performance, such that those with lower cognitive ability appeared to be less able to

Table 1

Descriptive Statistics and Intercorrelations

Variable                               M      SD      1      2      3      4      5      6      7      8
1. Age                               24.95   5.61     —
2. Gender                             0.56   0.50    .17     —
3. Cognitive ability                 21.45   5.20    .06   -.15     —
4. SJT score                         10.78   2.67   -.02    .08    .22*    —
5. Pre-test predicted performance     4.00   0.66    .05   -.14    .32**  -.02   (.90)
6. Post-test perceived performance    3.02   0.96    .03   -.16    .08     .33*   .08   (.89)
7. Pre-test self-efficacy             3.81   0.65    .08   -.13    .35**   .01    .69**  .21*   (.82)
8. Post-test self-efficacy            3.39   0.70    .06   -.12    .14     .09    .30**  .54**   .52**  (.77)

Note. N = 108. SJT = situational judgment test. Gender: 0 = male; 1 = female. Alpha reliabilities appear on the diagonal.
*p < .05. **p < .01.

Table 2

Hierarchical Regressions for SJT Score, Cognitive Ability, and Their Interaction on Pre- and Post-Test Perceived Performance

                                   Post-test perceived performance    Post-test test-taking self-efficacy
                                    β        R²       ΔR²              β        R²       ΔR²
Step 1                                      .11*                               .28*
  Pre-test predicted performance   .12                                 —
  Pre-test self-efficacy           —                                  .56*
  SJT score                        .32*                               .12
  Cognitive ability                .00                               -.08
Step 2                                      .18*     .07*                      .30*     .02
  SJT × Cognitive Ability          .27*                               .14
F total                                    5.55*                              11.01*

Note. N = 108. βs are for the final equation. SJT = situational judgment test. SJT and cognitive ability scores were centered by standardizing.
*p < .01.

estimate their performance on a selection test. This effect was found even though participants were told their actual score out of a total possible score, providing them with a frame of reference. This suggests that simply giving test takers their test scores out of a total possible score may not be sufficient for some applicants to understand their performance level relative to others. Kruger and Dunning (1999) noted that certain individuals may misjudge their own performance because they do not pick up on social comparisons, and thus do not gain insight into their own abilities.

However, this effect was not found for test-taking self-efficacy, where the statistical test for moderation reached only marginal significance. We suggest that test-taking self-efficacy is more stable than is perceived performance, since it may be affected by variables other than the outcome favorability for a particular event (e.g., self-esteem).

The present study has several implications for research. First, the moderator effect found in this study may explain the weak relationship sometimes found between actual and perceived test performance in the literature (e.g., Macan et al., 1994; Ryan & Ployhart, 2000). Second, if the relationship between actual and perceived test performance is partly a function of cognitive ability, there may be a stronger relationship between actual and perceived test performance in situations in which the cognitive ability of the applicant pool is higher, such as for promotions (vs. entry-level selection) or for certain types of jobs (e.g., professional jobs). Third, these results may imply that the effects of outcome favorability on variables such as fairness perceptions are underestimated. Because some applicants (i.e., those lowest in cognitive ability) cannot accurately assess their test performance, error is introduced into the measurement of outcome favorability. In other words, the relationship between outcomes such as pass/fail and variables such as fairness may be even stronger among high cognitive-ability groups. Fourth, this relationship may account for the finding that those who need help most may not be those who seek it through test preparation and other remediation programs (e.g., Hacker et al., 2000). If low cognitive-ability applicants do not internalize their test performance feedback (i.e., if they do not believe that their test performance is poor or that they have a problem with low test-taking ability), they may not seek help in preparation or remediation. Fifth, to our knowledge, the present study is the first to suggest a link between cognitive ability and metacognitive skills. Participants lower in cognitive ability were less able to assess their performance accurately, suggesting that cognitive ability may be related to certain metacognitive skills. Moreover, these findings are consistent with past research showing that cognitive ability leads to increased knowledge acquisition (Ree et al., 1995) and accuracy in estimating others' performance (Hauenstein & Alexander, 1991), and that increased knowledge improves metacognitive skills (e.g., Hacker et al., 2000).


Finally, this study contributes to understanding of the antecedents of perceived test performance.

This study also has implications for practice. First, it suggests that when it is critical for applicants to have a clear understanding of their test performance, they should be told how they performed on selection procedures in such a way that their performance levels are made salient to them, such as by telling them their performance relative to others. Future research should explore different methodologies for delivering feedback about test performance to applicants to improve the accuracy of applicants' perceptions of their test performance. Second, if those lower in cognitive ability are not seeking help, this research suggests that test-preparation programs may actually be helping those who are least in need of assistance. For this reason, we propose that if organizations offer test-preparation programs (e.g., Ryan et al., 1998) to applicants, they should make such programs mandatory so that all applicants participate. Third, these results may have implications for other human resource functions (e.g., training). To the extent that training success relies on participants' ability to receive and understand feedback about their abilities and performance levels, decreased learning among those low in cognitive ability may be partly a result of the decreased effect of training feedback.

As an initial exploration of this issue, this study is not without its limitations. First, we did not measure perceived performance prior to feedback. However, it is the post-feedback effects of outcome favorability that are of the greatest long-term interest for applicant reactions research (Ryan & Ployhart, 2000) and for organizations. Second, we explored these relationships for an SJT. Future research should study this issue for other types of tests and in additional contexts (e.g., in actual selection settings). We also note that stronger effects may be found in other (e.g., non-college) contexts in which there is likely to be greater variance in cognitive ability. Finally, we did not consider additional measures of perceived performance (e.g., estimates of percentage correct, estimates of performance relative to others), which is an area for future research.

We encourage further research on the causes and antecedents of perceived test performance using additional measures, samples, and tests. Research that extends and generalizes our findings is needed. However, the present study suggests that cognitive ability may play a role in perceptions of test performance.

References

Arvey, R. D., Strickland, W., Drauden, G., & Martin, C. (1990). Motivational components of test-taking. Personnel Psychology, 43, 695–716.


Bauer, T. N., Maertz, C. P., Dolen, M. R., & Campion, M. A. (1998). A longitudinal assessment of applicant reactions to an employment test. Journal of Applied Psychology, 83, 892–903.
Brockner, J., & Wiesenfeld, B. M. (1996). An integrative framework for explaining reactions to decisions: Interactive effects of outcomes and procedures. Psychological Bulletin, 120, 189–208.
Chan, D., Schmitt, N., Jennings, D., Clause, C. S., & Delbridge, K. (1998). Applicant perceptions of test fairness: Integrating justice and self-serving bias perspectives. International Journal of Selection and Assessment, 6, 232–239.
Chi, M. T. H. (1987). Representing knowledge and metaknowledge: Implications for interpreting metamemory research. In F. E. Weinert & R. H. Kluwe (Eds.), Metacognition, motivation, and understanding (pp. 239–266). Hillsdale, NJ: Lawrence Erlbaum.
Clause, C. S., Delbridge, K., Schmitt, N., Chan, D., & Jennings, D. (2001). Test preparation activities and employment test performance. Human Performance, 14, 149–167.
Colquitt, J. A., LePine, J. A., & Noe, R. A. (2000). Toward an integrative theory of training motivation: A meta-analytic path analysis of 20 years of research. Journal of Applied Psychology, 85, 678–707.
Dodrill, C. B. (1983). Long-term reliability of the Wonderlic Personnel Test. Journal of Consulting and Clinical Psychology, 51, 316–317.
Ellis, A. P., & Ryan, A. M. (2003). Race and cognitive ability test performance: The mediating effects of test preparation, test-taking strategy use, and self-efficacy. Journal of Applied Social Psychology, 33, 2607–2629.
Gilliland, S. W. (1993). The perceived fairness of selection systems: An organizational justice perspective. Academy of Management Review, 18, 694–734.
Gilliland, S. W. (1994). Effects of procedural and distributive justice on reactions to a selection system. Journal of Applied Psychology, 79, 691–701.
Gottfredson, L. S. (2002). Where and why g matters: Not a mystery. Human Performance, 15, 25–46.
Hacker, D. J., Bol, L., Horgan, D. D., & Rakow, E. A. (2000). Test prediction and performance in a classroom context. Journal of Educational Psychology, 92, 160–170.
Hauenstein, N. M. A., & Alexander, R. A. (1991). Rating ability and performance judgments: The joint influence of implicit theories and intelligence. Organizational Behavior and Human Decision Processes, 50, 300–323.
Kraiger, K., Ford, J. K., & Salas, E. (1993). Application of cognitive, skill-based, and affective theories of learning outcomes to new methods of training evaluation. Journal of Applied Psychology, 78, 311–328.
Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77, 1121–1134.

Macan, T. H., Avedon, M. J., Paese, M., & Smith, D. E. (1994). The effects of applicants' reactions to cognitive ability tests and an assessment center. Personnel Psychology, 47, 715–738.

Maertz, C. P., Jr., Bauer, T. N., Mosley, D. C., Jr., Posthuma, R. A., & Campion, M. A. (2004). Do procedural justice perceptions in a selection testing context predict applicant attraction and intention toward the organization? Journal of Applied Social Psychology, 34, 125–145.

Ployhart, R. E., & Ryan, A. M. (1997). Toward an explanation of applicant reactions: An examination of organizational justice and attribution frameworks. Organizational Behavior and Human Decision Processes, 72, 308–335.

Pressley, M., Snyder, B. S., Levin, J. R., Murray, H. G., & Ghatala, E. S. (1987). Perceived readiness for examination performance (PREP) produced by initial reading of text and text containing adjunct questions. Reading Research Quarterly, 22, 219–236.

Ree, M. J., Carretta, T. R., & Teachout, M. S. (1995). Role of ability and prior knowledge in complex training performance. Journal of Applied Psychology, 80, 721–730.

Ryan, A. M. (2001). Explaining the Black–White test score gap: The role of test perceptions. Human Performance, 14, 45–75.

Ryan, A. M., & Ployhart, R. E. (2000). Applicants' perceptions of selection procedures and decisions: A critical review and agenda for the future. Journal of Management, 26, 565–606.

Ryan, A. M., Ployhart, R. E., Greguras, G. J., & Schmit, M. J. (1998). Test preparation programs in selection contexts: Self-selection and program effectiveness. Personnel Psychology, 51, 599–621.

Sanchez, R. J., Truxillo, D. M., & Bauer, T. N. (2000). Development and examination of an expectancy-based measure of test-taking motivation. Journal of Applied Psychology, 85, 739–750.

Schmit, M. J., & Ryan, A. M. (1992). Test-taking dispositions: A missing link? Journal of Applied Psychology, 77, 629–637.

Truxillo, D. M., Bauer, T. N., Campion, M. A., & Paronto, M. E. (2002). Selection fairness information and applicant reactions: A longitudinal field study. Journal of Applied Psychology, 87, 1020–1031.

Weekley, J. A., & Jones, C. (1997). Video-based situational testing. Personnel Psychology, 50, 25–49.

Wonderlic, Inc. (2000). Wonderlic Personnel Test and Scholastic Level Exam user's manual. Libertyville, IL: Author.


Appendix

Pre-Test Items

Self-efficacy (α = .82)

1. I am confident in my ability to do well on this sort of test.
2. When it comes to taking this sort of test, I generally do well.
3. I tend to do better on this sort of test than most people.

Perceived test performance (α = .90)

1. I believe I will do well on the video test that I take today.
2. I believe that I will get a good score on the video test I take today.
3. I believe that I will pass the video test I take today.

Post-Test Items

Self-efficacy (α = .77)

1. I am confident in my ability to do well on this sort of test.
2. When it comes to taking this sort of test, I generally do well.
3. I tend to do better on this sort of test than most people.

Perceived test performance (α = .89)

1. I believe I did well on the video test that I took today.
2. I believe that I got a good score on the video test I took today.
3. I believe that I passed the video test I took today.
