Journal of Social Research & Policy, Vol. 6, Issue 1, July 2015

Elementary and Secondary Education in America: Using Induction and Correlation to Evaluate Public Policies and Student Outcomes

KERN CRAIG 1
Troy University, USA

Abstract

This is a study of elementary and secondary education in America. The current debate with respect to public policies is addressed using the general consensus with respect to student outcomes. Five policies are examined: spending, organization, size, race, and sex. And five outcomes are examined: standardized tests, academic achievement, economic success, serious misbehavior, and overall wellbeing. The five policy variables are operationalized using twelve policy measures. And the five outcome variables are operationalized using fifteen outcome measures. An inductive approach is employed with bivariate correlation used for both data-mining and hypothesis-testing. The twelve policy measures are transformed into twelve null hypotheses which are rejected (or NOT) based on the strength, significance, and number of correlations between each policy measure and the fifteen outcome measures. And the rejected null hypotheses are viewed as specific conclusions with respect to the direction of correlation, positive or negative. General conclusions are then drawn with respect to the five policy variables.

Keywords: Education; K-12; Elementary; Secondary; Policies; Outcomes.

Introduction

This study asks an important question: What matters with respect to elementary and secondary education in America? It attempts to answer that question using induction and correlation to evaluate public policies in terms of student outcomes. The research design is inductive as opposed to deductive, bottom-up versus top-down, factual first and theoretical second. Moving from observation to abstraction, facts are narrowly examined before ideas are broadly expressed. And bivariate correlation is the method used for both data-mining and hypothesis-testing. The present-day debate with respect to public policies is addressed in terms of the longstanding consensus with respect to student outcomes.

Five policies involving spending, organization, size, race, and sex are operationalized as independent variables or causes. They are controversial and thus important if for no other reason. And five outcomes involving standardized tests, academic achievement, economic success, serious misbehavior, and overall wellbeing are operationalized as dependent variables or effects. They are conventional and thus important if for no other reason. Data from each of the fifty states are collected and analyzed (N=50). Public policies are measured in terms of state averages per capita, per pupil, per teacher, per principal, and per school. Student outcomes are measured in terms of state averages per student and former student. So the level of analysis is fairly consistent. Uncorrelated data-sets are ignored. And the results of those comparisons are not included in this article since it relies on 27 measures that are strongly and significantly correlated.

1 Postal address: 81 Beal Parkway S.E., Fort Walton Beach, FL 32548, USA. E-mail address: [email protected]


The five general policy variables are operationalized using twelve specific policy measures. The five general outcome variables are operationalized using fifteen specific outcome measures. This results in a 12 by 15 correlation matrix with 180 cells. The twelve policy measures are transformed into 12 null hypotheses which are rejected (or NOT) based on the strength, significance, and number of correlations between each policy measure and the 15 outcome measures. The rejected null hypotheses are viewed as specific conclusions with respect to the direction of correlation, positive or negative. General conclusions are then drawn with regard to the five general policy variables. This is done in conjunction with a review of recent literature. The authors’ own words, if expressed succinctly, are quoted verbatim. In other sections, limitations are addressed with regard to some measures and with regard to some methods. The conclusion deals with national policy including both No Child Left Behind and Race to the Top.

This research is exploratory rather than experimental. On the one hand, it is complex in terms of the number of variables (10) and the number of measures (27). On the other hand, it is simple in terms of the analysis (bivariate correlation). Complexity in terms of content is thus balanced by simplicity in terms of method. Although this research is secondary rather than primary, the variables of interest are operationalized using measures of data taken from reliable sources.

The twelve policy measures

Spending
1. Expenditure per capita (exppcap) for elementary and secondary education is taken from the U.S. Department of Education (2013), Digest of Education Statistics 2012, Chapter 1, p. 59, Table 32, Column 8 for 2009-10.
2. Expenditure per pupil (expppup) for elementary and secondary education is taken from the U.S. Department of Education (2013), Digest of Education Statistics 2012, Chapter 2, p. 300, Table 215, Column 2 for 2009-10.
3. Average pay for teachers (payte) is taken from the U.S. Department of Education (2013), Digest of Education Statistics 2012, Chapter 2, p. 137, Table 91, Column 15 for 2011-12.
4. Average pay for principals (paypr) is taken from the U.S. Department of Education (2014), Schools and Staffing Survey, 2011-12 SASS Tables, Public School Principal Data File, Table 4.

Organization
5. Percent of instruction expenditure (instexp) is derived from the U.S. Department of Education (2013), Digest of Education Statistics 2012, Chapter 2, pp. 289-90, Table 208, Column 4 divided by Column 2.
6. Percent of union teachers (unionte) is extracted from the Fordham Institute (2012), How Strong are U.S. Teacher Unions? A State-by-State Comparison, pp. 61-361. The data here are almost perfectly correlated (.99) with 2004-2008 average data for “percent unionized” in Special Interest: Teachers Unions and America’s Public Schools, pp. 54-55 (Moe, 2011).

Size
7. Number of pupils per elementary school (pupesch) is taken from the U.S. Department of Education (2013), Digest of Education Statistics 2012, Chapter 2, p. 181, Table 114, Column 10 for 2010-11.
8. Number of pupils per secondary school (pupssch) is taken from the U.S. Department of Education (2013), Digest of Education Statistics 2012, Chapter 2, p. 182, Table 115, Column 12 for 2010-11.

Race
9. Percent of white teachers (whitete) is taken from the Center for American Progress (2011), Teacher Diversity Matters: A State-by-State Analysis of Teachers of Color, pp. 12-15, Appendix A for 2008.
10. Percent of white principals (whitepr) is taken from the U.S. Department of Education (2014), Schools and Staffing Survey, 2011-12 SASS Tables, Public School Principal Data File, Table 1.

Sex
11. Percent of male teachers (malete) is taken from the U.S. Department of Education (2014), Schools and Staffing Survey, 2011-12 SASS Tables, Public School Teacher Data File, Table 2. The missing data for Florida, Hawaii, Maryland, and Rhode Island are replaced with the average of the other 46 states.
12. Percent of male principals (malepr) is taken from the U.S. Department of Education (2014), Schools and Staffing Survey, 2011-12 SASS Tables, Public School Principal Data File, Table 2. The missing datum for Maryland is replaced with the average of the other 49 states.

The fifteen outcome measures

Standardized Tests
1. Grade 4 math score (g4math) at or above proficient is taken from the National Center for Education Statistics (2010), National Assessment of Educational Progress (NAEP), 2009 Mathematics and Reading Assessments, p. 174, Table 269, Proficiency Levels on Selected NAEP Tests for Students in Public Schools by State: 2009.
2. Grade 4 reading score (g4read) at or above proficient is taken from the same source.
3. Grade 8 math score (g8math) at or above proficient is taken from the same source.
4. Grade 8 reading score (g8read) at or above proficient is taken from the same source.
5. ACT average composite score (testact) is taken from ACT (2013), Average Scores by State.

Academic Achievement
6. Illiteracy rate (illiter), percent of population lacking basic prose literacy skills, is taken from the National Center for Education Statistics (2003), State & County Estimates of Low Literacy.
7. High school graduation rate (gradhs) for persons age 25 and over is taken from the U.S. Department of Education (2013), Digest of Education Statistics 2012, Chapter 1, p. 40, Table 16, Column 5.
8. College graduation rate (gradcol) for persons age 25 and over is taken from the U.S. Department of Education (2013), Digest of Education Statistics 2012, Chapter 1, p. 40, Table 16, Column 8.

Economic Success
9. Poverty rate (poverty) in 2012 is taken from the U.S. Census Bureau (2013), Poverty: 2000-2012, p. 3, Table 1.
10. Employment rate (employ) is taken from the U.S. Census Bureau (2012b), 2012 Statistical Abstract, Table 594, Characteristics of the Civilian Labor Force by State: 2010.
11. Household median income (income) is taken from the U.S. Department of Education (2013), Digest of Education Statistics 2012, Chapter 1, p. 49, Table 24, Column 10 for 2011.

Serious Misbehavior
12. Percent expelled (expel) is taken from the U.S. Department of Education (2013), Digest of Education Statistics 2012, Chapter 2, p. 272, Table 193, Column 2 for 2006.
13. Percent imprisoned (prison) is derived from the U.S. Department of Justice (2013), Prisoners in 2012, Appendix Table 6 and the U.S. Census Bureau (2012c), 2012 Statistical Abstract, Table 19, Resident Population by Race and State: 2010, total state and federal prisoners divided by total population.

Overall Wellbeing
14. Life expectancy at birth (lifeexp) is taken from the Measure of America 2013-2014, American Human Development Report, pp. 17-18, Table 2, American Human Development Index by State.
15. Human Development Index (hdi) is taken from the same source.
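The measures above imply a simple rectangular data set: one row per state (N = 50) and one column per measure, identified by the short variable names in parentheses. The sketch below shows one way such a data set might be assembled in Python; the CSV file name and loading step are hypothetical illustrations, while the column names and the 50-state structure come from the article.

```python
import pandas as pd

# Short variable names defined in the article, grouped by policy and outcome.
POLICY_MEASURES = [
    "exppcap", "expppup", "payte", "paypr",   # spending
    "instexp", "unionte",                     # organization
    "pupesch", "pupssch",                     # size
    "whitete", "whitepr",                     # race
    "malete", "malepr",                       # sex
]
OUTCOME_MEASURES = [
    "g4math", "g4read", "g8math", "g8read", "testact",  # standardized tests
    "illiter", "gradhs", "gradcol",                     # academic achievement
    "poverty", "employ", "income",                      # economic success
    "expel", "prison",                                  # serious misbehavior
    "lifeexp", "hdi",                                   # overall wellbeing
]

# Hypothetical file assembled from the sources listed above: one row per state.
df = pd.read_csv("state_education_measures.csv", index_col="state")
assert len(df) == 50
assert set(POLICY_MEASURES + OUTCOME_MEASURES) <= set(df.columns)
```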


Correlation results

The three tables in this article show Spearman correlation coefficients and probability values: first, for the policy measures alone; second, for the outcome measures alone; and, third, for the policy and the outcome measures together (SAS OnDemand for Academics, 2010).

Table 1 is a 12 by 12 matrix of policy measures. There are 144 cells with 12 identities, 66 correlations, and 66 duplicates. 32 of the 66 correlations (48 percent) are strong (equal to or more than absolute 0.3). And all of the 32 probability values are significant (equal to or less than 0.05). The 12 specific policy measures are grouped according to the 5 general policy variables: spending, organization, size, race, and sex. The logic of categorization is supported by the results of correlation. Correlations between measures within groups are strong, significant, and consistent.

Table 2 is a 15 by 15 matrix of outcome measures. There are 225 cells with 15 identities, 105 correlations, and 105 duplicates. 98 of the 105 correlations (93 percent) are strong (equal to or more than absolute 0.3). And all of the 98 probability values are significant (equal to or less than 0.05). The 15 specific outcome measures are grouped according to the 5 general outcome variables: scores on standardized tests, levels of academic achievement, indicators of economic success, results of serious misbehavior, and measures of overall wellbeing. The logic of categorization is supported by the results of correlation. Correlations between measures within groups are strong, significant, and consistent, except one in the group for levels of academic achievement. The correlation between illiteracy rate and college graduation rate is neither strong nor significant (although it is consistent).

Table 3 is a 12 by 15 matrix of policy and outcome measures. There are 180 cells with no identities and no duplicates. 115 of the 180 correlation coefficients (64 percent) are strong (equal to or more than absolute 0.3). All of the 115 probability values are significant (equal to or less than 0.05). These are the results used for hypothesis-testing. And, for that purpose, 11 of the 15 outcomes are classified as superior (g4math, g4read, g8math, g8read, testact, gradhs, gradcol, employ, income, lifeexp, and hdi) and 4 are classified as inferior (illiter, poverty, expel, and prison).
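The article reports these results from SAS OnDemand for Academics. As an illustration only, the following sketch shows how the 12 by 15 policy-by-outcome Spearman matrix of Table 3 and the count of strong, significant cells could be approximated in Python; it assumes the hypothetical `df`, `POLICY_MEASURES`, and `OUTCOME_MEASURES` from the earlier sketch, and the thresholds of absolute 0.3 and 0.05 are the ones stated above.

```python
from scipy.stats import spearmanr

# Spearman rho and p-value for every policy-outcome pair (12 x 15 = 180 cells).
results = {}
for p in POLICY_MEASURES:
    for o in OUTCOME_MEASURES:
        rho, pval = spearmanr(df[p], df[o])
        results[(p, o)] = (rho, pval)

# Count cells meeting the article's thresholds: |rho| >= 0.3 and p <= 0.05.
strong_and_significant = [
    pair for pair, (rho, pval) in results.items()
    if abs(rho) >= 0.3 and pval <= 0.05
]
print(len(strong_and_significant), "of", len(results),
      "policy-outcome correlations are strong and significant")
```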

[Tables 1, 2, and 3 appear here: Spearman correlation coefficients and probability values for the policy measures, for the outcome measures, and for the policy and outcome measures together.]

The 12 null hypotheses

There are 12 null hypotheses corresponding to the 12 specific policy measures. They are listed in 5 groups corresponding to the 5 general policy variables. In each instance, the null hypothesis posits no relationship between the specific policy measure and the 15 outcome measures. Testing is straightforward. The first two decision rules are arbitrary, but customary (Fernandez, 2011, p. 359), the third decision rule is simply reasonable, and the fourth decision rule is arbitrary, but reasonable.

1. Association strength: The correlation coefficient is equal to or more than absolute 0.3.
2. Statistical significance: The probability value is equal to or less than 0.05.
3. Directional consistency: There are positive correlations with superior outcomes and negative correlations with inferior outcomes (and vice versa with respect to H7 and H8).
4. Hypothesis rejection: The null hypothesis is rejected if 7 or more of the 15 tests are strong, significant, and consistent.

Spending

H1: There is no relationship between expenditure per capita (exppcap) and education outcomes. Hypothesis 1 is rejected. 12 of the 15 correlations are both strong and significant, and they are consistent. Expenditure per capita is positively correlated with 11 superior outcomes and negatively with 1 inferior outcome.

H2: There is no relationship between expenditure per pupil (expppup) and education outcomes. Hypothesis 2 is rejected. 13 of the 15 correlations are both strong and significant, and they are consistent. Expenditure per pupil is positively correlated with 11 superior outcomes and negatively with 2 inferior outcomes.

H3: There is no relationship between average pay for teachers (payte) and education outcomes. Hypothesis 3 is rejected. 8 of the 15 correlations are both strong and significant, and they are consistent. Average pay for teachers is positively correlated with 7 superior outcomes and negatively with 1 inferior outcome.

H4: There is no relationship between average pay for principals (paypr) and education outcomes. Hypothesis 4 is rejected. 7 of the 15 correlations are both strong and significant, and they are consistent. Average pay for principals is positively correlated with 6 superior outcomes and negatively with 1 inferior outcome.

Organization

H5: There is no relationship between percent of instruction expenditure (instexp) and education outcomes. Hypothesis 5 is rejected. 9 of the 15 correlations are both strong and significant, and they are consistent. Percent of instruction expenditure is positively correlated with 7 superior outcomes and negatively with 2 inferior outcomes.

H6: There is no relationship between percent of union teachers (unionte) and education outcomes. Hypothesis 6 is rejected. 12 of the 15 correlations are both strong and significant, and they are consistent. Percent of union teachers is positively correlated with 10 superior outcomes and negatively with 2 inferior outcomes.

Size

H7: There is no relationship between pupils per elementary school (pupesch) and education outcomes. Hypothesis 7 is rejected. 9 of the 15 correlations are both strong and significant, and they are consistent. Pupils per elementary school is positively correlated with 4 inferior outcomes and negatively with 5 superior outcomes.

H8: There is no relationship between pupils per secondary school (pupssch) and education outcomes. Hypothesis 8 is NOT rejected. 4 of the 15 correlations are both strong and significant, and they are consistent. Pupils per secondary school is positively correlated with 2 inferior outcomes and negatively with 2 superior outcomes. But the null hypothesis is NOT rejected because fewer than 7 of the 15 correlations are strong, significant, and consistent.
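Before the Race and Sex hypotheses that follow, here is a minimal sketch of how the four decision rules might be applied to a single policy measure. It builds on the hypothetical `df`, measure lists, and `results` dictionary from the earlier sketches; the split of outcomes into superior and inferior follows the classification given with Table 3, the helper name `reject_null` is illustrative, and the direction flip mirrors the "vice versa" note for H7 and H8.

```python
SUPERIOR = {"g4math", "g4read", "g8math", "g8read", "testact",
            "gradhs", "gradcol", "employ", "income", "lifeexp", "hdi"}
INFERIOR = {"illiter", "poverty", "expel", "prison"}

def reject_null(policy, flip_direction=False):
    """Apply the article's four decision rules to one policy measure.

    flip_direction=True mirrors the note for H7 and H8, where a favorable
    correlation is negative with superior outcomes and positive with inferior ones.
    """
    passing = 0
    for outcome in OUTCOME_MEASURES:
        rho, pval = results[(policy, outcome)]
        strong = abs(rho) >= 0.3                       # rule 1: strength
        significant = pval <= 0.05                     # rule 2: significance
        expected_sign = 1 if outcome in SUPERIOR else -1
        if flip_direction:
            expected_sign = -expected_sign
        consistent = (rho > 0) == (expected_sign > 0)  # rule 3: consistency
        if strong and significant and consistent:
            passing += 1
    return passing >= 7                                # rule 4: 7 or more of 15

print("H1 rejected:", reject_null("exppcap"))
print("H7 rejected:", reject_null("pupesch", flip_direction=True))
```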


Race

H9: There is no relationship between the percent of white teachers (whitete) and education outcomes. Hypothesis 9 is rejected. 11 of the 15 correlations are both strong and significant, and they are consistent. The percent of white teachers is positively correlated with 7 superior outcomes and negatively with 4 inferior outcomes.

H10: There is no relationship between the percent of white principals (whitepr) and education outcomes. Hypothesis 10 is rejected. 11 of the 15 correlations are both strong and significant, and they are consistent. The percent of white principals is positively correlated with 7 superior outcomes and negatively with 4 inferior outcomes.

Sex

H11: There is no relationship between the percent of male teachers (malete) and education outcomes. Hypothesis 11 is rejected. 11 of the 15 correlations are both strong and significant, and they are consistent. The percent of male teachers is positively correlated with 7 superior outcomes and negatively with 4 inferior outcomes.

H12: There is no relationship between the percent of male principals (malepr) and education outcomes. Hypothesis 12 is rejected. 8 of the 15 correlations are both strong and significant, and they are consistent. The percent of male principals is positively correlated with 5 superior outcomes and negatively with 3 inferior outcomes.

Implications with regard to recent literature

“Schools clearly make a big difference. Research has established that the students who are most likely to lag behind academically, are the ones who attend schools with less-qualified teachers and poorer resources” (Beatty, 2013, p. 69). But “many factors determine the quality of education” (Belfield & Levin, 2002, p. 297). This cross-sectional study highlights a variety of policy measures that are clearly associated with a variety of outcome measures.

Spending

Education expenditure per capita is strongly, significantly, and positively correlated with superior outcomes as is education expenditure per pupil. And average pay for teachers is strongly, significantly, and positively correlated with superior outcomes as is average pay for principals. And all 4 policies are strongly, significantly, and negatively correlated with inferior outcomes.

But Hanushek finds “no strong or systematic relationship between school expenditures and student performance” (1986, p. 1162) and “no strong or consistent relationship between variations in school resources and student performance” (1997, p. 141). Goldhaber and Brewer report that “the effects of educational inputs such as per pupil spending … have been shown to be relatively unimportant predictors of outcomes” (1996, p. 1). According to Burtless, “the case for additional school resources is far from overwhelming” (1996, p. 40). Okpala, Okpala, and Smith find “there is a lack of conclusive empirical evidence regarding the effect of expenditures on academic achievement” (2001, p. 5). Manna’s study of fiscal centralization suggests “essentially no relationship between the percent of nonlocal revenues for education and student success” (2013, p. 698).

“The relationship between educational expenditure, and performance in the labor market has been the subject of much debate” (Heckman, Layne-Farrar, & Todd, 1996, p. 192). But the results presented in this article show strong, significant, and positive correlations between policies involving expenditure per capita and expenditure per pupil, and outcomes involving employment and income.
Hedges and Greenwald report “that school resources are systematically related to student achievement and that these relations are large enough to be educationally important” (1996, p. 90). Card and Krueger find “that a 10 percent increase in school spending is associated with a 1 to 2 % increase in annual earnings for students later in their lives” (1996, p. 133).


Loeb and Page insist however that “only a regression analysis that controls for other factors … will produce policy-relevant elasticity estimates of the effect of teacher wages on student outcomes” (2000, p. 406). In this correlation analysis there is no experimental control. Nor is the pay for teachers and principals adjusted for factors such as “cost of living, inflation, per capita personal earnings, and private sector earnings” (Lehnen & Minick, 1994, p. 289). The findings presented here nevertheless support the contention that spending does in fact matter in terms of expenditure per capita, expenditure per pupil, pay for teachers, and pay for principals. All 4 policy measures are favorably correlated with a variety of outcome measures (12 of 15, 13 of 15, 8 of 15, and 7 of 15 strongly and significantly so).

Organization

The percent of instruction expenditure is strongly, significantly, and positively correlated with superior outcomes as is the percent of union teachers. And both policies are strongly, significantly, and negatively correlated with inferior outcomes.

“The war on poverty … accelerated the bureaucratization of the education system … (and) the ratio of teachers to non-teachers” went from 4 to 1 in the 1970s to almost 1 to 1 by the 1990s (Severin, 2013, p. 738). “The decade of the 1970s was particularly important in yielding a significant, and apparently permanent, increase in the share of expenditure going to other areas” besides instructional staff (Hanushek, 1996, p. 46). From 1950 to 2009, students in grades K-12 increased by 96 percent while teaching staff increased by 252 percent and non-teaching staff increased by 702 percent (Scafidi, 2013, p. 1). The dramatic increase in non-teaching expenditure is clearly problematic given the findings of this study. Student outcome measures are favorably correlated with instruction expenditure rather than non-instruction expenditure (9 of 15 strongly and significantly so).

The effects of bureaucratization on the part of staff were perhaps offset to some extent by the effects of unionization on the part of teachers. “The percentage of American teachers covered by collective bargaining increased … from zero in 1960 to … 64 percent in 2000” (Moe, 2011, p. 48). “Theory suggests that teachers’ unions will affect important … student outcomes” (Strunk and Reardon, 2010, p. 664). “Groups like the National Education Association (NEA) … are major forces in American politics … (yet) are barely on political scientists’ radar screens” (Moe, 2009, p. 172). And “collective bargaining has rarely been studied by education researchers” (Anzia and Moe, 2014, p. 84) even though “teacher collective bargaining agreements shape nearly everything public schools do” (DeMitchell, 2010, p. 13). “The majority of public schools deliver education in a unionized environment” (p. 142) but even “in many southern right-to-work states where public sector bargaining laws are weak or nonexistent, teacher unions are able to gain leverage in the policy-making process by ratcheting up their group’s political activism in the form of campaign contributions to candidates for state office” (Hartney and Flavin, 2011, p. 259). A study conducted in California shows that “contracts as a whole are more restrictive in districts with stronger unions … and there are … increased costs associated with CBA (collective bargaining agreement) provisions without clear benefits for students or student learning” (Strunk and Grissom, 2010, p. 403).
Other studies show “that unionization is associated with both more generous school inputs and worse student achievement” (Hoxby, 1996, p. 709). Moe contends that “unions are an enormous problem for public education” (2006, p. 252). He finds “that collective bargaining does have negative consequences for student achievement” but he says “that more research is needed … to be confident about these findings” (2009, p. 173). The results of an earlier study by Steelman, Powell and Carini “challenge the position that teacher unions are negatively related to state test scores” and these researchers suggest the “mixed set of findings is due at least in part to … choices of dependent variables studied” (2000). That problem is however addressed in this study by the inclusion of a variety of outcome measures all of which are favorably correlated with the percent of instruction expenditure (9 of 15 strongly and significantly so) and with the percent of union teachers (12 of 15 strongly and significantly so).


Size

The number of pupils per elementary school is strongly, significantly, and positively correlated with inferior outcomes and it is strongly, significantly, and negatively associated with superior outcomes. The evidence with respect to the number of pupils per secondary school is similar but less convincing.

In the U.S., larger schools are related not only to school consolidation, but also to student busing. Initially driven by urbanization and subsequently by desegregation, “school busing systems have grown exponentially” (Zars, 1998, p. 7). It is logical to assume there is an opportunity cost for students associated with “empty time” spent on a bus similar to the opportunity cost for the public associated with lost time stuck in traffic whenever a bus stops.

But “research shows little consistent evidence that altering … class size … (has) consistently enhanced the education of students in failing schools” although some studies show “an inverse relationship between school size and student outcomes” (Fowler & Walberg, 1991, p. 200). “Empirical work, on the relationship between school size and outcomes, has suggested advantages of small schools for student learning” (Schwartz, Stiefel, & Kim, 2004, p. 502). Class size “is largely under the control of the school system” and there is “evidence that smaller classes help, at least in early grades” (Mosteller, 1995, pp. 114-125). An analysis of data from Project STAR, Student Teacher Achievement Ratio, in Tennessee shows that “small classes in Grades K-3 lead to higher academic achievement … (and) could help reduce … racial … inequality in reading … and gender inequality in mathematics” (Nye & Hedges, 2004, p. 99).

It is of course possible “that student enrollment is not linearly related to student outcomes and that the direction of the effect might change after a certain school size threshold is reached” (Gottfredson & DiPietro, 2011, p. 84). Opdenakker and Van Damme find “a positive connection between school size … and school outcomes” albeit in a sample of small to medium sized schools, and schools sized by default rather than by design (2007, p. 195). They also find “the overall scientific literature with respect to the effectiveness of school size on outcomes is rather inconclusive” (p. 182). “Theoretical arguments underpinning the historical trend toward larger school units have not held up well to empirical scrutiny … (and) the lesson learned is to let school size policies be driven by empirical evidence” (Leithwood & Jantzi, 2009, p. 465). The evidence presented here shows the number of pupils per elementary school is unfavorably correlated with a variety of outcome measures (9 of 15 strongly and significantly so).

Race

The percent of white teachers is strongly, significantly, and positively correlated with superior outcomes as is the percent of white principals. And both policies are strongly, significantly, and negatively correlated with inferior outcomes. These correlations could account for the overrepresentation of white teachers and white principals. In 2011, the Center for American Progress reported that “students of color made up more than 40 percent of the school-age population … (whereas) teachers of color were only 17 percent of the teaching force” (Boser, 2014, p. 1). The U.S. Department of Education (2013b) reported in 2011-12 SASS Tables, Public School Principal Data File, Table 1 that 80.3 percent of public school principals were white.
Some have studied ethnic representation with regard to expulsions in a single state (Roch, Pitts & Navarro, 2010, p. 58). They find that schools with balanced racial and ethnic representation are less likely to adopt sanction-oriented policies. Others have studied “Race of Teacher” with respect to “Challenging Behavior” using surveys in five districts of one state (Alter, Walker & Landers, 2013, p. 59). They find that African American teachers view “off task” behavior as less problematic than do other non-Caucasian teachers. But they find no such difference between African American and Caucasian teachers. A study in Tennessee involving the random assignment of teachers indicates “that there are rather large educational benefits for both black and white students from assignment to an own-race teacher” (Dee, 2004, p. 209). A study in Georgia indicates that “schools in which teachers are more representative of their target populations by race and ethnicity, there is less frequent use of punitive disciplinary practices such as out-of-school suspensions … (and better) performance … on standardized math and reading exams” (Roch & Pitts, 2012, p. 298).
A national study suggests that “both white and minority (i.e., black and Hispanic) students are likely to be perceived as disruptive by a teacher who does not share their racial/ethnic designation” (Dee, 2005, p. 162). Most studies provide “support for the conventional assumption that recruiting minority teachers can generate important achievement gains among minority students … (but they) also suggest that one of the real and overlooked costs of such efforts may be a substantial reduction in the educational achievement of non-minority students” (Dee, 2004, p. 209). If “black teachers are important … because it is important for black students to see them as successful role models” (Evans & Leonard, 2013, p. 1) then white teachers are no doubt important for white students. The results presented in this article show that both white teachers and white principals are favorably correlated with a variety of outcome measures (11 of 15 and 11 of 15 strongly and significantly so).

Sex

The percent of male teachers is strongly, significantly, and positively correlated with superior outcomes as is the percent of male principals. Both policies are strongly, significantly, and negatively correlated with inferior outcomes.

“The evidence that the demographic interactions between students and teachers matter is surprisingly thin, sometimes contradictory and usually based on small, localized samples” (Dee, 2005, p. 158). One such study indicated that female teachers found “student verbal disruptions … more problematic than male teachers” (Alter, Walker, & Landers, 2013, p. 59). It also indicated that female teachers found verbal disruptions more prevalent and off-task behavior more problematic. A national study showed that “both male and female students are more likely to be seen as disruptive” by a teacher of the opposite sex (Dee, 2005, p. 162). “Adverse gender effects have an impact on both boys and girls, but that effect falls more heavily on the male half of the population in middle school, simply because most middle school teachers are female” (Dee, 2006, p. 75).

“Researchers have shown that science and engineering professions have ‘chilly climates’ for women” (Cech and Blair-Loy, 2010, p. 374). Perhaps the teaching profession has a chilly climate for men. The percent of male public school principals is 48.4 according to the 2011-12 SASS Tables, Public School Principal Data File, Table 2 (U.S. Department of Education, 2013b). But “every school district shares the pervasive issue of having males under-represented in the teaching profession” and evidently “students need to see teachers that look like them” (Bryan & Ford, 2014, pp. 156-157). “The most widely recommended policy responses to these sorts of effects are arguably the ones that involve recruiting underrepresented teachers” (Dee, 2005, p. 164). “Men make up just 9 percent of elementary school teachers – a forty year low … and reformers … assert that … students, boys in particular, need more positive male role models in schools” (Sevier & Ashcraft, 2009, pp. 533-534). But this “leaves unchallenged the assumption that increasing the number of male teachers will … have a positive impact on children” (idem, p. 534). Sokal and Katz find “that neither male reading teachers nor computer based reading had a significant effect on boys’ reading performance” (2008, p. 88).
Research has shown that information and communication technology (ICT) “infrastructure alone does not bring forth profound and significant changes in the practice of teaching and its efficiency” (Buda, 2010, p. 134). But the results presented in this article show that both male teachers and male principals are favorably correlated with a variety of outcome measures (11 of 15 and 8 of 15 strongly and significantly so).

Limitations with respect to some measures

More about Race

The variables and measures in this study are employed in a logical and factual manner with one exception. Race is difficult if not impossible to operationalize in any objective fashion, e.g., people who are part white and part black are routinely categorized as black, including President Obama. Hypodescent is the process that assigns persons of mixed-race to “the minority or socially subordinate group” (Halberstadt, Sherman, & Sherman, 2011, p. 29). Racial stereotyping is problematic in other systematic ways.


Most data on race is self-reported (U.S. Census Bureau, 2012a). It is thus measured in a subjective fashion. The same might be said about data that is administratively-reported. One study revealed a 16 percent discrepancy between self-reported and administratively-reported data (Hamilton et al., 2009, p. 297). Matters are made even worse since “the use of broad racial/ethnic categories … ignores cultural variation within each race/ethnicity status” (McGrady and Reynolds, 2013, p. 15).

Classification by sex is straightforward, but classification by race is not. Miscegenation is common. “Modern humans haven't been around long enough to evolve into different subspecies and we've always moved, mated, and mixed our genes ... Beneath the skin, we are one of the most genetically similar of all species” (Cheng and Shim, 2003). Yet “minority status continues to be defined, often for sociopolitical purposes … as the possession of an arbitrarily small proportion of minority blood” (Halberstadt, Sherman, & Sherman, 2011, p. 29).

“Throughout the study of schools and achievement, considerable attention has gone to the distribution of outcomes, and especially racial aspects of schooling” (Hanushek & Raymond, 2005, p. 299). Like many others, Hanushek and Raymond “disaggregate … state results for Whites, Blacks, and Hispanics” (p. 298) and find “effect varies by subgroup” (p. 321). Racial stereotyping is common in educational research. But typically it focuses on students and parents rather than teachers and principals (Wayne & Youngs, 2003, p. 107). Meier and Stewart report that black teachers are associated with higher scores for black students on standardized tests but that “black principals have no effect on the performance of black students on standardized tests” (1992). Recent but “limited research … suggests that minority teachers are less prepared and could have a negative impact on student performance” and Pitts finds “that proportional representation can lead to positive consequences, but … negative reaction among White students” (2007, pp. 503, 521).

In the less limited research presented here, the percentages of white teachers and white principals are shown to be favorably correlated with a variety of outcome measures. But no attempts are made to further disaggregate teacher or principal characteristics. For example, there may be different “types of principals … integrating, controlling, or balkanizing” (Urick & Bowers, 2014, p. 112) and different “dimensions of leadership … transformational leadership ... and shared instructional leadership” (p. 117). But “in general, effect sizes are small … that is, correlations between leadership and student achievement” (Witziers, Bosker, & Kruger, 2003, p. 415).

“Many researchers have concluded that socioeconomic status (SES) factors, such as parent education and income, are related strongly to student outcomes” (Toutkoushian, 2005, p. 260). But here no attempt is made to control for exogenous factors including the socioeconomic and racial characteristics of students and parents. The racial characteristics of teachers and principals are included as endogenous measures. No other measures in this study involve racial profiling. This is consistent with recent reforms to increase “the performance of individual children … and is not related to the former emphasis on group rights, an emphasis that was prevalent” after Brown v. Board of Education in 1954 (Miller, Kerr, and Ritter, 2008, p. 100).
More about Standardized Tests

“Standardized tests … do not clearly differentiate between what the teacher has imparted and what the student has acquired otherwise” (Wilson, 1989, p. 168). Some claim “the emphasis on standardized testing … has served to narrow the curriculum and … the over-use of high stakes tests … has not succeeded in achieving educational excellence” (Gomez-Velez, 2013). But “standardized tests have long been used in the United States (since the 19th century) to monitor the performance of educators and individual student achievement” (Heinrich, 2007, p. 266). And “test-based accountability is the linchpin of current education reform initiatives at the federal and state levels” (Koretz, 2002, p. 752).

“State tests may not be an accurate measure of education gains, since schools may substitute for more durable student learning by using strategies to increase performance on the particular testing instrument” (Carnoy & Loeb, 2002, p. 305). In high school, “achievement is typically
measured by … end-of-course exams (that) students take … at different points in their schooling careers” (Parsons et al., 2015, p. 130). But National Assessment of Educational Progress “NAEP tests … every four years in mathematics and reading at the 4th and 8th grades … (are) considered a reasonable assessment of student knowledge in these subjects” (p. 308). And “Race to the Top goes much further in tying nationalized high-stakes testing to teacher accountability” than No Child Left Behind (Tanner, 2013, p. 5).

Some advocate “transformation from quantity to quality” where students “learn basic research skills … improve their ability to communicate … improve social skills … (and) learn to take charge” (Ng, 2008, pp. 7-8). Others likewise contend “there is ample reason to be skeptical about the potential of many current test-based accountability systems” and that “some valued outcomes of education will remain poorly tested or untested entirely” (Koretz, 2002, pp. 770-771). But this study reveals numerous strong and significant correlations between policy measures and test scores as well as numerous strong and significant correlations between test scores and other outcome measures.

Some maintain that “traditional school accountability systems … punish socioeconomically disadvantaged schools … (whereas) value-added approaches have the potential to highlight the learning that occurs in otherwise low-performing schools” (Ready, 2013, pp. 111-114). But traditional systems can (and should) compare one school with another using not only the level of student achievement but also the degree of student progress. Other quantitative methods can be used to complement more traditional ones, a “coding scheme-correspondence index” for example (Tolboom & Kuiper, 2014, p. 167).

“Many stakeholders are hoping to use student test scores as the single outcome to measure performance not only of K-12 teachers, schools, and districts but also as a measure of the performance of higher education institutions’ TPP(s) (teacher preparation programs)” (Kukla-Acevedo, Streams & Toma, 2012, p. 316). But, as shown in this study, other outcome measures besides test scores are important and other policy measures besides TPPs are important.

“The current trend in education is to recognize that assessment must perform a double duty … to achieve both summative purposes (measure learning) and formative purposes (enhance learning)” (Tan, 2013, p. 38). Standardized tests are typically designed to accomplish the former function as opposed to the latter. “Teaching to the test” is generally criticized (Popham, 2001). But “integrating assessment with instruction may … increase student engagement and … improve learning outcomes” (Wiliam, 2011, p. 13). Developing a common measurement system for K-12 classes in science, technology, engineering, and mathematics (STEM) is a possibility (Saxton et al., 2014). But expecting any single test to be all things to all people is expecting too much.

SAT scores are not for instance included in this study since they are not sufficiently correlated with policy measures and since they are not positively correlated with ACT scores. Although not shown in the tables presented here, the Pearson correlation of SAT combined scores and ACT composite scores is negative .28 (Barnett, 2013; ACT, 2013). “The ACT and SAT are different tests … The ACT measures achievement related to high school curricula, while the SAT measures general verbal and quantitative reasoning” (ACT, 2008).
Distinctions between the 2 tests can also be made based upon the type of correspondence, e.g., equating, scaling, or prediction (Dorans, 1999, p. 1). “Public expenditures are positively related to state SAT and ACT performance … (but) more than 80 percent of the variation in average state SAT scores could be attributed to the percentage of students taking the test … (while) state ACT performance unfolds even before selection factors” (Powell & Steelman, 1996). Although not shown in the tables here, the Pearson correlation of participation rates between the SAT and ACT is negative .84 (Barnett, 2013; ACT, 2013).

Limitations with respect to some methods

This study follows an inductive approach that begins with data-mining and ends with hypothesis-testing. It is empirical first and theoretical second, the opposite of a deductive approach. Facts are examined specifically before ideas are expressed generally. Since observation precedes
abstraction, this approach is often termed “bottom-up” as opposed to “top-down.” It is, more crudely, a “fishing” expedition. But, more importantly, it is one that catches “fish” in terms of the 27 correlated measures.

A cross-sectional design is employed. It is narrowly descriptive as opposed to broadly inferential on two counts: first, spatial, predicting what will happen elsewhere; and, second, temporal, predicting what will happen in the future. But, even though it is national rather than global, it does involve fifty states. Even though it is synchronic rather than diachronic, it does involve matters of current interest. This study does not examine any Tiebout sorting that might occur (Lovenheim, 2009, p. 555) where “exit becomes a signal of preference” (McCabe & Vinzant, 1999, p. 368), e.g., with students or teachers moving from state-to-state due to education policies or outcomes. The purpose is not to compare one state with another. It is instead to compare one measure with another, especially one policy measure with one outcome measure.

And bivariate correlation is the method employed to determine the strength, significance, and direction of association between pairs. But “making descriptive inferences that show associations between variables is different than showing that a causal relationship between those variables exists” (Manna, 2013, p. 699). Correlation deals with association rather than sequence and with empirical evidence rather than theoretical arguments. It is therefore a necessary but insufficient determinant of cause and effect. Its results are nevertheless used for hypothesis-testing. The factual basis of correlation lends considerable credence to that process.

The results of this study show that policies involving spending, organization, size, race, and sex are important with respect to outcomes involving standardized tests, academic achievement, economic success, serious misbehavior, and overall wellbeing. But some outcomes could conceivably “influence student achievement” (Dee & Jacob, 2011, p. 428). Some policies could reflect pre-existing conditions. In other words, there could be some confusion with regard to dependent and independent variables. The differentiation between policies and outcomes is nonetheless conventional and logical. Good policies by definition produce good outcomes. One can always ask if the chicken or the egg came first. But that question will go unanswered since life cycles and feedback loops are beyond the scope of this research. This study focuses on the direction of association as opposed to the direction of causation. And, although it is objectively assumed that no relationship exists between measurement pairs, the results of correlation are simply too strong, too significant, and too consistent to ignore.

“Research has consistently shown that students’ academic performance is influenced by a variety of factors, such as school inputs … (and) teacher characteristics … (so) making relationships between student, teacher, and school more transparent assists policy decisions” (Hogrebe, Kyei-Blankson, & Zou, 2008, p. 570). But in this analysis there is no “education production function to describe the relation between school inputs and student outcomes” (Greenwald, Hedges, & Laine, 1996). A “comprehensive indicator system that includes measures of inputs, processes, and outcomes is relatively rare” (Hamilton et al., 2013, p. 455). This study like most is a “black box” with regard to process measures (Goldhaber, 2006, p. 152; Lee & Reeves, 2012, p. 212). Although a variety of policy and outcome measures are examined, the focus is on the association rather than the mechanism between measures.

Context no doubt affects policy implementation. But it is not the subject of this article. Some scholars have examined contingency theories, “the circumstances under which particular implementation strategies will be more or less effective,” while others have examined implementation context, “not only the particular set of resources and problems … but also the profession … and the ways in which they understand their work” (McDermott, 2004, pp. 45-48). “A variety of non-school factors could be playing a role” (Murnane, Sawhill, and Snow, 2012, p. 11). It is always possible for spurious relationships to appear in the presence of untested but confounding factors. It is possible that policies other than those tested “might be important in determining student performance” (Hanushek & Raymond, 2005, p. 301) and that outcomes other than those tested
might be important as well. In this study, some variables are operationalized less fully than others: 6 with 2 measures, 2 with 3 measures, 1 with 4 measures, and 1 with 5 measures. But this is a data-mining exercise. Recent data from the 50 states are collected and analyzed and insufficiently correlated measures are ignored (Afonso and Aubyn, 2005, p. 229). The number of measures is thus limited by the availability of data (Anderson, 2011, p. 122). There are, of course, constraints with regard to the time, energy, and money available for research.

Although the analysis conducted here is “observational rather than experimental” (Hamilton et al., 2013, p. 456), data-mining produces 27 related measures. Spearman correlation ranks the data for each measure to compensate for any non-normal distributions. The 12 policy measures are transformed into 12 null hypotheses which are rejected (or NOT) based on the strength, significance, and number of correlations between each policy measure and the 15 outcome measures. And no attempts are made to explain away the results of this study. Some contend that “in any analysis of factors influencing public school performance, it is necessary to control for school, teacher, and student characteristics that … influence school performance and may be beyond the control of school leaders” (Sun & van Ryzin, 2014, p. 331). But, just as it is possible to reject a null hypothesis when the p-value is equal to or less than the alpha-value, it is also possible to “reject the notion that conclusions cannot be drawn in the absence of tight experimental control” (Bigler & Signorella, 2011, p. 664).

This analysis is exploratory rather than comprehensive. Only 12 policy measures and 15 outcome measures are evaluated. Other data should therefore be explored. This is a simple bivariate analysis. No attempt is made to control or manipulate either policy or outcome measures. More sophisticated statistical techniques should therefore be employed. But, in the meantime, it is possible to view simplicity as virtue rather than vice. If all bets are hedged, then nothing is gained. This is the caveat of caveats.

It is of course important to avoid an ecological fallacy, “inferring individual outcomes when student level data are not used” (Sullivan, Klingbeil, & Van Norman, 2013, p. 110). In this study, the 12 policies are measured in terms of state averages per capita, per pupil, per teacher, per principal and per school. The 15 outcomes are measured in terms of state averages per student and former student. So the level of analysis is at least roughly consistent.

Most of the assumptions for Pearson correlation are met: measures use an interval or ratio scale; outliers are minimal; variances are similar; and, there appear to be linear relationships between measures. But the normal distribution of data for some measures is questionable. Data for each measure are evaluated using a Shapiro-Wilk test with an alpha-value of .05. The results, although not shown in this article, indicate that data for 15 of the 27 measures appear to be normally distributed while data for 12 measures do NOT appear to be normally distributed. The non-normal distributions include 7 policy measures (expenditure per capita, expenditure per pupil, average pay for teachers, average pay for principals, percent of union teachers, percent of white teachers, and percent of white principals) and 5 outcome measures (illiteracy rate, high school graduation rate, household income, percent imprisoned, and life expectancy).
Spearman rank-order correlation is therefore employed instead of Pearson product-moment correlation. Each of the 180 correlation coefficients is based on a monotonic rather than a linear relationship between one policy measure and one outcome measure, i.e. as one goes up the other goes up or as one goes up the other goes down. Spearman correlation is simply a nonparametric version of Pearson correlation. As such, it tests the strength, significance, and direction of association using ranked data. The results of Pearson correlation, although not included in this article, are very similar to the results of Spearman correlation in terms of strength, significance, and consistency. Using the same decision rules, 4 correlations are added to the list drawn from Table 3: 1 regarding average pay for teachers, 1 regarding average pay for principals, and 2 regarding percent of male teachers. And 6 correlations are subtracted from the list: 3 regarding expenditure per capita, 1 regarding expenditure per pupil, 1 regarding percent of instruction expenditure, and 1 regarding percent of male principals. This results in a total of 113 instead of 115. But the same 11 hypotheses are rejected and the same single hypothesis is NOT.
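As a hedged illustration of this screening and robustness step, not the article's SAS code, the sketch below applies a Shapiro-Wilk test to each measure in the hypothetical `df` from the earlier sketches and then compares Spearman and Pearson coefficients for one illustrative policy-outcome pair; the .05 alpha level and the preference for Spearman follow the description above, while the specific pair chosen is an arbitrary example.

```python
from scipy.stats import shapiro, spearmanr, pearsonr

ALPHA = 0.05

# Screen every measure for non-normality with a Shapiro-Wilk test.
non_normal = []
for col in POLICY_MEASURES + OUTCOME_MEASURES:
    stat, pval = shapiro(df[col])
    if pval <= ALPHA:          # reject normality at the .05 level
        non_normal.append(col)
print(len(non_normal), "of", len(POLICY_MEASURES + OUTCOME_MEASURES),
      "measures look non-normal")

# Because some measures are non-normal, rank-based Spearman correlation is
# preferred; Pearson is shown alongside it as a robustness check.
rho_s, p_s = spearmanr(df["expppup"], df["g8read"])
rho_p, p_p = pearsonr(df["expppup"], df["g8read"])
print(f"Spearman rho={rho_s:.2f} (p={p_s:.3f}); Pearson r={rho_p:.2f} (p={p_p:.3f})")
```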


Conclusion

“The NCLB (No Child Left Behind) Act of 2001 reauthorized and substantially altered the federal Elementary and Secondary Education Act (ESEA) … passed in 1965 to provide extra educational assistance to children in need” (Page, 2006, p. 179). The goals of NCLB include “targeting federal funds on effective (evidence-based) practices … reducing bureaucracy and increasing administrative funding and funding flexibility” for state educational agencies (SEAs) and local educational agencies (LEAs) including choice of supplemental educational services (SESs) (Heinrich, 2010, p. i60). But “NCLB requires each state to adopt academic standards, an accountability system to insure schools reach 100 percent student proficiency by the 2013-14 school year … (and) tests determine whether schools are progressing toward NCLB’s goal of 100 percent student proficiency” (Pilotin, 2010, pp. 550, 546). Although a goal of 100 percent is admirable, it is certainly not achievable unless there is a Race to the Bottom (RTTB) in terms of academic standards.

“NCLB led to district-level increases in school spending of nearly $600 per pupil, which were funded by increases in state and local (as opposed to federal) revenue” (Dee, Jacob, & Schwartz, 2013, p. 274). NCLB promised that “states and school districts would not be required ‘to spend any funds or incur any costs not paid for under this act’ … (but) many states and school districts have argued that the legislation did, in practice, constitute an unfunded mandate” (p. 263).

The Race to the Top (RTTT), part of the American Recovery and Reinvestment Act (ARRA) in 2009, is a “competitive grant program” that continues “to tie federal funding decisions to outcomes” but it is “entirely voluntary” and “allows states to develop their own educational reform plans” (Johnston, 2011, p. 206). The emphasis is on the assessment of a wide range of student outcomes including achievement beyond K-12 in college and at work (Linn, 2010, p. 145). The program is however another form of federal bribery. There was “an infusion of $280 billion in economic stimulus to programs administered by state and local governments under the 2009 American Recovery and Reinvestment Act” (Nelson and Balu, 2014, p. 1). The entire portion devoted to education, $97.4 billion, was awarded in 2010 (U.S. Department of Education, 2010). “The lack of constitutional authority to directly impose school reform on the states … forced the federal government to pursue its goals for school reform indirectly through the grant-in-aid system … changes that are politically unpopular with middle- and upper-class parents as well as with teachers unions and local school leaders” (McGuinn, 2012, p. 152).

“Public education … (is) the nation’s largest social welfare program devoted to promoting equal opportunity through social mobility” (Hartney & Flavin, 2014, p. 4). It accounts for at least “one-quarter of state and local governmental spending … (and) one-third of total government employment in the nation … (yet) the field of public administration has virtually ignored public education” (Raffel, 2007, p. 135). A “federal bias … has reinforced the neglect” since “historically, education has been a state rather than a federal function” (p. 142). With the introduction of NCLB and RTTT, that is no longer the case. But the case is not airtight. Federal “legislation creates a framework, or bare bones outline” (Furgol & Helms, 2012, p. 806).
States are still able to influence rulemaking and they are still able to interpret and implement rules so that national policy fits local practice. “Education is the primary mechanism driving upward mobility” (Murnane, Sawhill, and Snow, 2012, p. 6). This study shows that policies involving spending, organization, size, race, and sex matter. Yet “even students know that how much they learn depends, in the end, on how hard they work” (Derthick & Dunn, 2009, p. 1025). At present, “polarization … characterizes the debate of these vital issues” (Hannaway & Rotherham, 2006, p. 1). The logical and practical “next steps for systematic reform lie in finding the right balance between sustaining the best current practices and exploring new ideas that may lead to greater student achievement” (Anderson, Brown, and Lopez-Ferrao, 2003, p. 624) especially best practices and new ideas based on evidence as opposed to ideology.

Based on the findings of this study, stakeholders should consider: (1) spending more on education; (2) replacing non-instructional staff with teachers; (3) negotiating with teachers’ unions; (4) replacing large schools, especially elementary schools, with small ones; and, (5) hiring more qualified teachers and principals even if they are white and especially if they are male elementary school teachers.

References
1. ACT. (2008). Compare ACT & SAT Scores. Retrieved February 28, 2015, from http://www.act.org/solutions/college-career-readiness/compare-act-sat/
2. ACT. (2013). Average Scores by State. Retrieved February 28, 2015, from http://www.act.org/newsroom/data/2013/states.html
3. Alter, P., Walker, J., & Landers, E. (2013). Teachers’ perceptions of students’ challenging behavior and the impact of teacher demographics. Education and Treatment of Children, 36(4), pp. 51-69. http://dx.doi.org/10.1353/etc.2013.0040
4. Anderson, B.T., Brown, C.L., & Lopez-Ferrao, J. (2003). Systemic Reform: Good educational practice with positive impacts and unresolved problems and issues. Review of Policy Research, 20(4), pp. 617-627. http://dx.doi.org/10.1046/j.1541-1338.2003.00042.x
5. Anderson, K.J.B. (2011). Science education and test-based accountability: Reviewing their relationship and exploring implications for future policy. Science Education, 96(1), pp. 104-129. http://dx.doi.org/10.1002/sce.20464
6. Afonso, A., & St. Aubyn, M. (2005). Non-parametric approaches to education and health efficiency in OECD countries. Journal of Applied Economics, 8(2), pp. 227-246.
7. Anzia, S.F., & Moe, T.M. (2014, March). Collective bargaining, transfer rights, and disadvantaged schools. Educational Evaluation and Policy Analysis, 36(1), pp. 83-111. http://dx.doi.org/10.3102/0162373713500524
8. Barnett, J. (2013). SAT Scores by State 2013. Commonwealth Foundation. Retrieved February 28, 2015, from http://www.commonwealthfoundation.org/policyblog/detail/satscores-by-state-2013
9. Beatty, A.S. (2013). Schools alone cannot close achievement gap. Issues in Science and Technology, 29(3), pp. 69-75.

10. Belfield, C.R., & Levin, H.M. (2002). The effects of competition between schools on educational outcomes: A review for the United States. Review of Educational Research, Summer, 72(2), pp. 279-341. http://dx.doi.org/10.3102/00346543072002279 11. Bigler, R.S., & Signorella, M.L. (2011). Single-sex education: New perspectives and evidence on a continuing controversy. Sex Roles, 65(9), pp. 659-669. http://dx.doi.org/10.1007/s11199-011-0046-x 12. Boser, U. (2014). Teacher Diversity Revisited: A New State-by-State Analysis. Center for American Progress. Retrieved February 28, 2015, from https://www.americanprogress.org/issues/race/report/2014/05/04/88962/teacherdiversity-revisited/ 13. Bryan, N. & Ford, D.Y. (2014). Recruiting black male teachers in gifted education. Gifted Child Today, 37(3), pp. 156-161. http://dx.doi.org/10.1177/1076217514530116 14. Buda, A. (2010). Attitudes of teachers concerning the use of ICT equipment in education. Journal of Social Research & Policy, 2, pp. 131-150. 15. Burtless, G. (Ed.). (1996). Does Money Matter? The Effect of School Resources on Student Achievement and Adult Success. Washington, DC: Brookings Institution Press.

16. Card, D., & Krueger, A.B. (1996). Labor market effects of school quality: Theory and evidence. In G. Burtless (Ed.), Does Money Matter? The Effect of School Resources on Student Achievement and Adult Success (pp. 97-140). Washington, DC: Brookings Institution Press. 17. Carnoy, M., & Loeb, S. (2002). Does external accountability affect student outcomes? A cross-state analysis. Educational Evaluation and Policy Analysis, 24(4), pp. 305-331. http://dx.doi.org/10.3102/01623737024004305 18. Cech, E., & Blair-Loy, M. (2010). Perceiving glass ceilings: Meritocratic versus structural explanations of gender inequality among women in science and technology. Social Problems, 57(3), pp. 371-397. http://dx.doi.org/10.1525/sp.2010.57.3.371 19. Center for American Progress. (2011). Teacher Diversity Matters: A State-by-State Analysis of Teachers of Color. Retrieved February 28, 2015, from http://www.americanprogress.org/issues/education/report/2011/11/09/10657/teacherdiversity-matters/ 20. Cheng, J. & Shim, I. (2003). Race: The Power of an Illusion. California Newsreel. Retrieved February 28, 2015, from http://www.pbs.org/race/000_About/002_04background-01-11.htm 21. Dee, T.S. (2004). Teachers, race and student achievement in a randomized experiment. Review of Economics and Statistics, 86(1), pp. 195-210. http://dx.doi.org/10.1162/003465304323023750 22. Dee, T.S. (2005). A teacher like me: Does race, ethnicity or gender matter? American Economic Review, 95(2), pp. 158-165. http://dx.doi.org/10.1257/000282805774670446 23. Dee, T.S. (2006). How a teacher’s gender affects boys and girls. Education Next, Fall. Retrieved February 28, 2015, from http://educationnext.org/the-why-chromosome/ 24. Dee, T.S., & Jacob, B. (2011). The impact of No Child Left Behind on student achievement. Journal of Policy Analysis and Management, 30(3), pp. 418-446. http://dx.doi.org/10.1002/pam.20586 25. Dee, T.S., Jacob, B., & Schwartz, N.L. (2013). The effects of NCLB on school resources and practices. Educational Evaluation and Policy Analysis, 35(2), pp. 252-279. http://dx.doi.org/10.3102/0162373712467080 26. DeMitchell, T.A. (2010). Labor Relations in Education: Policies, Politics, and Practices. New York: Rowman & Littlefield. 27. Derthick, M., & Dunn, J.M. (2009). False premises: The accountability fetish in education. Harvard Journal of Law and Public Policy, 32(3), pp. 1015-1034. 28. Dorans, N.J. (1999). Correspondence between ACT and SAT I Scores. New York: College Entrance Examination Board. 29. Evans, B.R. & Leonard, J. (2013). Recruiting and retaining black teachers to work in urban schools. SAGE Open, July-September, pp. 1-12. http://dx.doi.org/10.1177/2158244013502989 30. Fernandez, K.E. (2011). Evaluating school improvement plans and their affect on academic performance (sic). Educational Policy, 25(2), pp. 338-367. http://dx.doi.org/10.1177/0895904809351693

31. Fordham Institute. (2012). How Strong are U.S. Teacher Unions? A State-by-State Comparison. Retrieved February 28, 2015, from http://edexcellence.net/publications/how-strong-are-us-teacher-unions.html 32. Fowler, W.J., Jr., & Walberg, H.J. (1991). School size, characteristics, and outcomes. Educational Evaluation and Policy Analysis, 13(2), pp. 189-202. http://dx.doi.org/10.3102/01623737013002189 33. Furgol, K.E., & Helms, L.B. (2012). Lessons in leveraging implementation: Rulemaking, growth models, and policy dynamics under NCLB. Educational Policy, 26(6), pp. 777-812. http://dx.doi.org/10.1177/0895904811417588 34. Goldhaber, D.D., & Brewer, D.J. (1996). Evaluating the Effect of Teacher Degree Level on Educational Performance. Rockville, MD: Westat. 35. Goldhaber, D. (2006). Are teachers unions good for students? In J. Hannaway & A.J. Rotherham (Eds.), Collective Bargaining in Education: Negotiating Change in Today’s Schools (pp. 141-157). Cambridge, MA: Harvard Education Press. 36. Gomez-Velez, N. (2013). Urban education reform: Governance, accountability, outsourcing. Urban Lawyer, 45(1), pp. 51-104. 37. Gottfredson, D.C., & DiPietro, S.M. (2011). School size, social capital, and student victimization. Sociology of Education, 84(1), pp. 69-89. http://dx.doi.org/10.1177/0038040710392718 38. Greenwald, R., Hedges, L.V., & Laine, R.D. (1996). The effect of school resources on student achievement. Review of Educational Research, 66(3), pp. 361-396. http://dx.doi.org/10.3102/00346543066003361 39. Halberstadt, J., Sherman, S.J., & Sherman, J.W. (2011). Why Barack Obama is black: A cognitive account of hypodescent. Psychological Science, 22(1), pp. 29-33. http://dx.doi.org/10.1177/0956797610390383 40. Hamilton, N.S., Edelman, D., Weinberger, M., & Jackson, G.L. (2009). Concordance between self-reported race/ethnicity and that recorded in a Veterans Affairs medical record. North Carolina Medical Journal, 70(4), pp. 296-300. 41. Hamilton, L.S., Schwartz, H.L., Stecher, B.M., & Steele, J.L. (2013). Improving accountability through expanded measures of performance. Journal of Educational Administration, 51(4), pp. 453-475. http://dx.doi.org/10.1108/09578231311325659 42. Hannaway, J., & Rotherham, A.J. (Eds.) (2006). Collective Bargaining in Education: Negotiating Change in Today’s Schools. Cambridge, MA: Harvard Education Press. 43. Hanushek, E.A. (1986). The economics of schooling: Production and efficiency in the public schools. Journal of Economic Literature, 24(3), pp. 1141-1178. 44. Hanushek, E.A. (1996). School resources and student performance. In G. Burtless (Ed.), Does Money Matter? The Effect of School Resources on Student Achievement and Adult Success (pp. 43-73). Washington, DC: Brookings Institution Press. 45. Hanushek, E.A. (1997). Assessing the effects of school resources on student performance: An update. Educational Evaluation and Policy Analysis, 19(2), pp. 141-164. http://dx.doi.org/10.3102/01623737019002141

46. Hanushek, E.A., & Raymond, M.E. (2005). Does school accountability lead to improved student performance? Journal of Policy Analysis and Management, 24(2), pp. 297-327. http://dx.doi.org/10.1002/pam.20091 47. Hartney, M., & Flavin, P. (2011). From the schoolhouse to the statehouse: Teacher union political activism and U.S. state education reform policy. State Politics & Policy Quarterly, 11(3), pp. 251-268. http://dx.doi.org/10.1177/1532440011413079 48. Hartney, M., & Flavin, P. (2014). The political foundations of the black-white education achievement gap. American Politics Research, 42(1), pp. 3-33. http://dx.doi.org/10.1177/1532673X13482967 49. Heckman, J., Layne-Farrar, A., & Todd, P. (1996). Does measured school quality really matter? An examination of the earnings-quality relationship. In G. Burtless (Ed.), Does Money Matter? The Effect of School Resources on Student Achievement and Adult Success (pp. 192-289). Washington, DC: Brookings Institution Press. 50. Hedges, L.V., & Greenwald, R. (1996). Have times changed? The relation between school resources and student performance. In G. Burtless (Ed.), Does Money Matter? The Effect of School Resources on Student Achievement and Adult Success (pp. 74-92). Washington, DC: Brookings Institution Press. 51. Heinrich, C.J. (2007). Evidence-based policy and performance management: Challenges and prospects in two parallel movements. American Review of Public Administration, 37(3), pp. 255-277. http://dx.doi.org/10.1177/0275074007301957 52. Heinrich, C.J. (2010). Third-party governance under No Child Left Behind: Accountability and performance management challenges. Journal of Public Administration Research and Theory, 20(Suppl.), pp. i59-i80. http://dx.doi.org/10.1093/jopart/mup035 53. Hogrebe, M.C., Kyei-Blankson, L., & Zou, L. (2008). Examining regional science attainment and school-teacher resources using GIS. Education and Urban Society, 40(5), pp. 570-589. http://dx.doi.org/10.1177/0013124508316045 54. Hoxby, C.M. (1996). How teachers’ unions affect education production. Quarterly Journal of Economics, 111(3), pp. 671-718. http://dx.doi.org/10.2307/2946669 55. Johnston, M. (2011). From regulation to results: Shifting American education from inputs to outcomes. Yale Law & Policy Review, 30(1), pp. 195-209. 56. Koretz, D.M. (2002). Limitations in the use of achievement tests as measures of educators’ productivity. Journal of Human Resources, 37(4), pp. 752-777. http://dx.doi.org/10.2307/3069616 57. Kukla-Acevedo, S., Streams, M.E., & Toma, E. (2012). Can a single performance metric do it all? A case study in education accountability. American Review of Public Administration, 42(3), pp. 303-319. http://dx.doi.org/10.1177/0275074011399120 58. Lee, J., & Reeves, T. (2012). Revisiting the impact of NCLB high-stakes school accountability, capacity, and resources: State NAEP 1990-2009 reading and math achievement gaps and trends. Education Evaluation and Policy Analysis, 34(2), pp. 209-231. http://dx.doi.org/10.3102/0162373711431604 59. Lehnen, R.G., & Minick, M.B. (1994). Differing perspectives on teacher pay in Indiana: Toward social equity for women and children. Public Administration Quarterly, 18(3), pp. 279-297.

60. Leithwood, K., & Jantzi, D. (2009). A review of empirical evidence about school size effects: A policy perspective. Review of Educational Research, 79(1), pp. 464-490. http://dx.doi.org/10.3102/0034654308326158 61. Linn, R.L. (2010). A new era of test-based educational accountability. Measurement: Interdisciplinary Research and Perspectives, 8(2-3), pp. 145-149. http://dx.doi.org/10.1080/15366367.2010.508692 62. Loeb, S., & Page, M.E. (2000). Examining the link between teacher wages and student outcomes: The importance of alternative labor market opportunities and non-pecuniary variation. Review of Economics and Statistics, 82(3), pp. 393-408. http://dx.doi.org/10.1162/003465300558894 63. Lovenheim, M.F. (2009). The effect of teachers’ unions on education production: Evidence from union certifications in three Midwestern states. Journal of Labor Economics, 27(4), pp. 525-587. http://dx.doi.org/10.1086/605653 64. Manna, P. (2013). Centralized governance and student outcomes: Excellence, equity, and academic achievement in the U.S. states. Policy Studies Journal, 41(4), pp. 683-706. http://dx.doi.org/10.1111/psj.12037 65. McCabe, B.C., & Vinzant, J.C. (1999). Governance lessons: The case of charter schools. Administration & Society, 31(3), pp. 361-377. http://dx.doi.org/10.1177/00953999922019175 66. McDermott, K.A. (2004). Incentives, capacity, and implementation: Evidence from Massachusetts education reform. Journal of Public Administration Research and Theory, 16(1), pp. 45-65. http://dx.doi.org/10.1093/jopart/mui024 67. McGrady, P.B., & Reynolds, J.R. (2013). Racial mismatch in the classroom: Beyond black-white differences. Sociology of Education, 86(1), pp. 3-17. http://dx.doi.org/10.1177/0038040712444857 68. McGuinn, P. (2012). Stimulating reform: Race to the Top, competitive grants and the Obama education agenda. Education Policy, 26(1), pp. 136-159. http://dx.doi.org/10.1177/0895904811425911 69. Measure of America. (2013-2014). American Human Development Report. Retrieved February 28, 2015, from http://ssrc-static.s3.amazonaws.com/moa/MOA-III-June-18FINAL.pdf 70. Meier, K.J., & Stewart, J., Jr. (1992). The impact of representative bureaucracies: Educational systems and public policies. American Review of Public Administration, 22(3), pp. 157-171. http://dx.doi.org/10.1177/027507409202200301 71. Miller, W.H., Kerr, B., & Ritter, G. (2008). School performance measurement: Politics and equity. American Review of Public Administration, 38(1), pp. 100-117. http://dx.doi.org/10.1177/0275074007304387 72. Moe, T.M. (2006). Union power and the education of children. In J. Hannaway & A.J. Rotherham (Eds.), Collective Bargaining in Education: Negotiating Change in Today’s Schools (pp. 229-255). Cambridge, MA: Harvard Education Press. 73. Moe, T.M. (2009). Collective bargaining and the performance of the public schools. American Journal of Political Science, 53(1), pp. 156-174. http://dx.doi.org/10.1111/j.1540-5907.2008.00363.x

74. Moe, T.M. (2011). Special Interest: Teachers Unions and America’s Public Schools. Washington, DC: Brookings Institution Press. 75. Mosteller, F. (1995). The Tennessee study of class size in the early school grades. The Future of Children, 5(2), pp. 113-127. http://dx.doi.org/10.2307/1602360 76. Murnane, R., Sawhill, I., & Snow, C. (2012). Literacy challenges for the twenty-first century: Introducing the issue. The Future of Children, 22(2), pp. 3-35. http://dx.doi.org/10.1353/foc.2012.0013 77. National Center for Education Statistics. (2003). State & County Estimates of Low Literacy. Retrieved February 28, 2015, from http://nces.ed.gov/naal/estimates/StateEstimates.aspx 78. National Center for Education Statistics. (2010). Proficiency Levels on Selected NAEP Tests for Students in Public Schools by State: 2009. Retrieved February 28, 2015, from http://www.census.gov/compendia/statab/2012/tables/12s0269.pdf 79. Nelson, A.A., & Balu, R. (2014). Local government responses to fiscal stress: Evidence from the public education sector. Public Administration Review, 74(5), pp. 553–681. http://dx.doi.org/10.1111/puar.12211 80. Ng, P.T. (2008). Educational reform in Singapore: From quantity to quality. Educational Research for Policy and Practice, 7, pp. 5-15. http://dx.doi.org/10.1007/s10671-007-9042-x 81. Nye, B., & Hedges, L.V. (2004). Do minorities experience larger lasting benefits from small classes? Journal of Educational Research, 98(2), pp. 94-100. http://dx.doi.org/10.3200/JOER.98.2.94-114 82. Okpala, C.O., Okpala, A.O., Smith, F.E. (2001). Parental involvement, instructional expenditures, family socioeconomic attributes, and student achievement. Journal of Educational Research, 95(2), pp. 100-115. http://dx.doi.org/10.1080/00220670109596579 83. Opdenakker, M., & Van Damme, J. (2007). Do school context, student composition and school leadership affect school practice and outcomes in secondary education? British Educational Research Journal, 33(2), pp. 179-206. http://dx.doi.org/10.1080/01411920701208233 84. Page, S. (2006). The web of managerial accountability: The impact of reinventing government. Administration & Society, 38(2), pp. 166-197. http://dx.doi.org/10.1177/0095399705285990 85. Parsons, E., Koedel, C., Podgursky, M., Ehlert, M., & Xiang, B. (2015). Incorporating end-of-course exam timing into educational performance evaluations. Journal of Research on Educational Effectiveness, 8(1), pp. 130-147. http://dx.doi.org/10.1080/19345747.2014.974790 86. Pilotin, M. (2010). Finding a common yardstick: Implementing a national student assessment and school accountability plan through state-federal collaboration. California Law Review, 98(2), pp. 545-574. 87. Pitts, D.W. (2007). Representative bureaucracy, ethnicity, and public schools: Examining the link between representation and performance. Administration & Society, 39(4), pp. 497-526. http://dx.doi.org/10.1177/0095399707303129 88. Popham, W.J. (2001). Teaching to the test? Helping All Students Achieve, 58(6), pp. 16-20.

89. Powell, B., & Steelman, L.C. (1996). Bewitched, bothered, and bewildering: The use and misuse of state SAT and ACT scores. Harvard Educational Review, 66(1), pp. 27-60. http://dx.doi.org/10.17763/haer.66.1.h6l5048817g24m05 90. Raffel, J.A. (2007). Why has public administration ignored public education, and does it matter? Public Administration Review, 67(1), pp. 135-151. http://dx.doi.org/10.1111/j.1540-6210.2006.00703.x 91. Ready, D.D. (2013). Association between student achievement and student learning: Implications for value-added school accountability models. Educational Policy, 27(1), pp. 92-120. http://dx.doi.org/10.1177/0895904811429289 92. Roch, C.H., Pitts, D.W., & Navarro, I. (2010). Representative bureaucracy and policy tools: Ethnicity, student discipline, and representation in public schools. Administration & Society, 42(1), pp. 38-65. http://dx.doi.org/10.1177/0095399709349695 93. Roch, C.H., & Pitts, D.W. (2012). Differing effects of representative bureaucracy in charter schools and traditional public schools. American Review of Public Administration, 42(3), pp. 282-302. http://dx.doi.org/10.1177/0275074011400404 94. SAS OnDemand for Academics (2010). SAS Enterprise Guide 4.3. Retrieved February 28, 2015, from http://www.sas.com/govedu/edu/programs/od_academics.html 95. Saxton, E., Burns, R., Holveck, S., Kelley, S., Prince, D., Rigelman, N., & Skinner, E.A. (2014). A common measurement system for K-12 STEM education: Adopting an educational evaluation methodology that elevates theoretical foundations and systems thinking. Studies in Educational Evaluation, 40, pp. 18-35. http://dx.doi.org/10.1016/j.stueduc.2013.11.005 96. Scafidi, B. (2013). The School Staffing Surge: Decades of Employment Growth in America’s Public Schools (Part II). The Friedman Foundation. Retrieved February 28, 2015, from http://www.edchoice.org/research/the-school-staffing-surge/ 97. Schwartz, A.E., Stiefel, L., & Kim, D.Y. (2004). The impact of school reform on student performance: Evidence from the New York Network for School Renewal Project. Journal of Human Resources, 39(2), pp. 500-522. http://dx.doi.org/10.2307/3559024 98. Severin, J.R. (2013). The state of American public education. International Journal of Arts & Sciences, 6(2), pp. 731-742. 99. Sevier, B. & Ashcraft, C. (2009). Be careful what you ask for: Exploring the confusion around and usefulness of the male teacher as male role model discourse. Men and Masculinities, 11(5), pp. 533-557. http://dx.doi.org/10.1177/1097184X07302290 100. Sokal, L. & Katz, H. (2008). Effects of technology and male teachers on boys’ reading. Australian Journal of Education, 52(1), pp. 81-94. http://dx.doi.org/10.1177/000494410805200106 101. Steelman, L.C., Powell, B., & Carini, R.M. (2000). Do teacher unions hinder educational performance? Lessons learned from state SAT and ACT scores. Harvard Educational Review, 70(4), pp. 437-466. http://dx.doi.org/10.17763/haer.70.4.w17t1201442683k6 102. Strunk, K.O., & Grissom, J.A. (2010). Do strong unions shape district policies? Collective bargaining, teacher contract restrictiveness, and the political power of teachers’ unions. Educational Evaluation and Policy Analysis, 32(3), pp. 389-406. http://dx.doi.org/10.3102/0162373710376665

103. Strunk, K.O., & Reardon, S.F. (2010). Measuring the strength of teachers’ unions: An empirical application of the partial independence item response approach. Journal of Educational and Behavioral Statistics, 35(6), pp. 629-670. http://dx.doi.org/10.3102/1076998609359790 104. Sullivan, A.L., Klingbeil, D.A. & Van Norman, E.R. (2013). Children, research, and public policy. School Psychology Review, 42(1), pp. 99-114. 105. Sun, R., & van Ryzin, G.G. (2014). Are performance management practices associated with better outcomes? Empirical evidence from New York public schools. American Review of Public Administration, 44(3), pp. 324-338. http://dx.doi.org/10.1177/0275074012468058 106. Tan, K.H.K. (2013). Variation in teachers’ conceptions of alternative assessment in Singapore primary schools. Educational Research for Policy and Practice, 12(1), pp. 21-41. http://dx.doi.org/10.1007/s10671-012-9130-4 107. Tanner, D. (2013). Race to the top and leave the children behind. Journal of Curriculum Studies, 45(1), pp. 4-15. http://dx.doi.org/10.1080/00220272.2012.754946 108. Tolboom, J., & Kuiper, W. (2014). Quantifying correspondence between the intended and the implemented intervention in educational design research. Studies in Educational Evaluation, 43, pp. 160-168. http://dx.doi.org/10.1016/j.stueduc.2014.09.001 109. Toutkoushian, R.K. (2005). Effects of socioeconomic factors on public high school outcomes and rankings. Journal of Educational Research, 98(5), pp. 259-271. http://dx.doi.org/10.3200/JOER.98.5.259-271 110. Urick, A. & Bowers, A.J. (2014). What are the different types of principals across the United States? A latent class analysis of principal perception of leadership. Educational Administration Quarterly, 50(1), pp. 96-134. http://dx.doi.org/10.1177/0013161X13489019 111. U.S. Census Bureau (2012a). The Two or More Races Population: 2010. Retrieved February 28, 2015, from http://www.census.gov/prod/cen2010/briefs/c2010br-13.pdf 112. U.S. Census Bureau (2012b). 2012 Statistical Abstract. Characteristics of the civilian labor force by state: 2010. Retrieved February 28, 2015, from http://www.census.gov/compendia/statab/2012/tables/12s0594.pdf 113. U.S. Census Bureau (2012c). 2012 Statistical Abstract. Resident population by race and state: 2010. Retrieved February 28, 2015, from http://www.census.gov/compendia/statab/2012/tables/12s0019.pdf 114. U.S. Census Bureau. (2013). Poverty: 2000-2012. Retrieved February 28, 2015, from http://www.census.gov/prod/2013pubs/acsbr12-01.pdf 115. U.S. Department of Education (2010). Recovery Act. Retrieved February 28, 2015, from http://www.ed.gov/recovery 116. U.S. Department of Education. (2013). Digest of Education Statistics 2012. Retrieved February 28, 2015, from http://nces.ed.gov/pubs2014/2014015.pdf 117. U.S. Department of Education. (2013b). Schools and Staffing Survey. Retrieved February 28, 2015, from http://nces.ed.gov/surveys/sass/tables_list.asp#2012

118. U.S. Department of Justice (2013). Prisoners in 2012. Retrieved February 28, 2015, from http://www.bjs.gov/content/pub/pdf/p12tar9112.pdf 119. Wayne, A.J., & Youngs, P. (2003). Teacher characteristics and student achievement gains: A review. Review of Educational Research, 73(1), pp. 89-122. http://dx.doi.org/10.3102/00346543073001089 120. Wiliam, D. (2011). What is assessment for learning? Studies in Educational Evaluation, 37(1), pp. 3-14. http://dx.doi.org/10.1016/j.stueduc.2011.03.001 121. Wilson, J.Q. (1989). Bureaucracy: What Government Agencies Do and Why They Do It. New York: Basic Books. 122. Witziers, B., Bosker, R.J., & Kruger, M.L. (2003, August). Educational leadership and student achievement: The elusive search for an association. Educational Administration Quarterly, 39(3), pp. 398-425. http://dx.doi.org/10.1177/0013161X03253411 123. Zars, B. (1998). Long Rides, Tough Hides: Enduring Long School Bus Rides. Randolph, VT: Rural Challenge Policy Program.
