The Journal of Psychology, 1987, 121(1), 95-100

Some Reflections on Harris and Rosenthal's 31 Meta-Analyses*

SIU L. CHOW

Department of Psychology

University of Wollongong, Australia

ABSTRACT. A critique of Harris and Rosenthal's (1985) 31 meta-analyses of the mediation of interpersonal expectancy effects raises two issues. First, the study of interpersonal expectancy effects requires an examination of the expectancy-mediator-changes (B-C-D) chain in toto. Harris and Rosenthal's meta-analytic exercise fails to substantiate such a chain, and also fails to enhance understanding of the subtle and unwitting nature of the expectancy-mediator (B-C) and the mediator-changes (C-D) links. Second, mixing studies of uneven quality in a meta-analysis still appears to be a cause for concern.

HARRIS AND ROSENTHAL (1985) conducted 31 meta-analyses to evaluate the tenability and utility of Rosenthal's (1981) 10-arrow model of interpersonal expectancy effects in the context of Rosenthal's (1973) four-factor theory. Careful examination of the procedures followed indicates that the meta-analytic exercise did not succeed in substantiating the 10-arrow model.

Rosenthal's (1981) 10-arrow model of interpersonal expectancy effects consists of three types of variables: predictor variables, which are subdivided into (A) moderator variables (age, sex, etc.) and (B) the expectancy itself; the mediating variable (C), which is the expecter's behavior; and the outcome variable, which is subdivided into (D) immediate and (E) long-term changes in the expectee.

Harris and Rosenthal (1985) were concerned with the links between expectancy (B), the expecter's mediating behavior (C), and the immediate changes in the expectee (D). They selected 180 studies conducted from 1960 to 1983 and classified them in terms of whether or not the B-C or the C-D link was investigated. Findings from these 180 studies were categorized into 31 response categories, such as praise, positive climate, negative climate, smiles, and the like (see p. 367).

To study the B-C link, Harris and Rosenthal (1985) further classified the 10 most frequently investigated response categories in terms of Rosenthal's (1973) four-factor theory. For example, the factor climate was represented by the response categories positive climate, negative climate, and eye contact; the feedback factor by praise, criticism, accept student's ideas, and ignore student; the input factor by input, and the output factor by ask questions and frequency of interaction (see Table 2, Harris & Rosenthal, 1985, p. 370). Following this classification, Harris and Rosenthal integrated the standard normal deviates and the correlation coefficients of individual studies into a combined standard normal deviate and a combined correlation coefficient, respectively, for each of the four factors (see Rosenthal, 1978, and Rosenthal & Rubin, 1982, for the procedures used). Tests of significance on the combined measures showed that the effects of expectancy on all four factors (climate, feedback, input, and output) were highly significant.
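The combination procedures themselves are not reproduced in Harris and Rosenthal (1985). The following is a minimal sketch of the kind of computation involved, assuming the unweighted Stouffer method for combining standard normal deviates and simple Fisher z averaging for combining correlations; Harris and Rosenthal's exact weighting choices may have differed:

\[
Z_{\text{combined}} = \frac{\sum_{j=1}^{k} Z_j}{\sqrt{k}},
\qquad
\bar{r} = \tanh\!\left(\frac{1}{k}\sum_{j=1}^{k}\tanh^{-1} r_j\right),
\]

where k is the number of studies entering a given factor, and Z_j and r_j are the standard normal deviate and the correlation coefficient obtained from the jth study. The point to note is simply that each combined measure pools whatever studies happen to fall into a factor.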

To investigate the C-D link, Harris and Rosenthal (1985) chose students' achievement on some academic tasks, the students' attitudes, and observers' ratings of the students' behavior as the immediate outcome measures. They reported that 15 of the 24 response categories yielded a statistically significant combined standard normal deviate (Table 6, p. 374). The 10 most frequently investigated response categories were grouped into the four factors (climate, feedback, input, and output) and subjected to meta-analytic computations. All four factors showed significant effects on the students' behavior, although the feedback factor had a relatively small effect. They concluded that the meta-analytic exercise supported the belief that expectancies were mediated by 12 identifiable behaviors.

Although Harris and Rosenthal (1985) first argued that the B-C and the C-D links were concerned with different questions about the interpersonal expectancy effects, they subsequently demonstrated that separation of the B-C and the C-D links was inappropriate for the following reason. Among the 24 behavior categories under consideration, the effects of the C-D link were larger than those of the B-C link in 18 categories; 8 categories showed some B-C effects but a near-zero effect of the C-D link; 3 behaviors showed a near-zero effect for the B-C link but substantial effects for the C-D link; and 3 behaviors showed effect sizes that were opposite in direction for the B-C and C-D links. In other words, "(the) presence of such conflicting results warns against unequivocally accepting the results of B-C analyses as evidence for mediating factors without determining whether the given mediating variable has appropriate effects on subsequent behavior" (p. 375).

Harris and Rosenthal (1985) suggested that, to be considered a viable mediating variable, the expecter's behavior (C) must show a significant relation in both the B-C and the C-D links (in other words, it must meet a viability criterion). They listed 12 viable mediators (creating a less negative climate, maintaining closer physical contact, etc.) in order of the magnitude of the geometric means of the B-C and the C-D effect sizes. Presumably for this reason they said, "The utility of these analytic frameworks is thus enhanced, because we can now predict more precisely how such mediation might occur" (p. 379).

Their evidence may not satisfy their viability criterion, however. Consider the response category "creating a less negative climate" as an example. A geometric mean of 0.32 was reported (Harris & Rosenthal, 1985, p. 376). This number is the square root of the product of 0.285 (the combined effect size of expectancy on the negative climate factor; i.e., a B-C link in Table 1, p. 368) and 0.357 (the combined effect of negative climate on the expectee's changes; i.e., a C-D link in Table 6, p. 374).
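The reported value can be reconstructed from the two combined effect sizes (written here as r_BC and r_CD, a notation of my own):

\[
\sqrt{r_{BC} \times r_{CD}} = \sqrt{0.285 \times 0.357} = \sqrt{0.1017} \approx 0.32 .
\]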

These two combined effect sizes (0.285 and 0.357) were derived from different numbers of studies. Specifically, the 0.285 value was derived from 16 studies, whereas the 0.357 value was derived from 3 studies. Assume that the latter 3 studies were among the former 16 studies. The obvious question is whether the combined effect size of expectancy on the negative climate factor would be as large as 0.285 if only the three studies were considered. If the integrated effect size based on three studies were not as large as 0.285, would the geometric mean be as large as 0.32? Would it be significantly different from zero?
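The concern about significance can be made concrete with the unweighted Stouffer formula sketched earlier. The following is an illustrative calculation under the assumption of a constant average per-study deviate, not a re-analysis of Harris and Rosenthal's data:

\[
Z_{\text{combined}} = \bar{Z}\sqrt{k},
\qquad
\frac{\bar{Z}\sqrt{3}}{\bar{Z}\sqrt{16}} = \sqrt{\tfrac{3}{16}} \approx 0.43 ,
\]

so a combined standard normal deviate based on only the 3 overlapping studies would be less than half the size of one based on all 16, and correspondingly less likely to differ significantly from zero.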

Harris and Rosenthal (1985) defined the viability criterion solely in terms of the numerical magnitude of a derived measure. A question that has been neglected is whether the 3 studies of the C-D link were among the 16 studies of the B-C link. This question is important for the following reason. As an instance of C, consider a teacher praising a student. Further assume that the teacher has genuine warm regard for the student as a result of receiving some positive information about him or her (praise of this kind is the C term of a B-C link; call this "Praise I"). In contrast, consider another teacher who is induced to praise a student in order not to antagonize him or her (praise of this kind is the C term of a C-D link; call this "Praise II"). Although the behavior "praise" is observed in both instances, there are good reasons to doubt whether Praise I and Praise II are experientially the same to the student. The facts that B causes Praise I and that Praise II causes D do not lead to the conclusion that B causes D if there are questions about the equivalence of Praise I and Praise II.

This issue of equivalence becomes more serious when it is recalled that the meta-analyses were conducted in terms of the variable, feedback, which included praise, accept student's ideas, criticism, and ignore student. Can an instance of C in one experiment (e.g., praise) be treated as though it is effectively and qualitatively the same mediating mechanism as another instance of C (e.g., ignore student) in another experiment when the two behaviors are radically different? To ensure that the C terms of the two kinds of links are equivalent, all three components of the B-C-D chain should be found in the same experiment. That is, the claim that B is the cause of D in the context of B-C-D cannot be unambiguously substantiated unless the B-C-D chain is studied in toto. For this reason, Harris and Rosenthal's (1985) meta-analytic exercise may not have successfully substantiated the theoretically important B-C-D chain.

According to some advocates of the use of meta-analysis (e.g., Cooper, 1979; Cooper & Rosenthal, 1980; Glass, McGaw, & Smith, 1981; Light & Smith, 1971), new insights may be gained when information from diverse studies of the same phenomenon is integrated numerically in accordance with some standard and readily replicable procedure. One important feature of interpersonal expectancy effects seems especially well suited to being revealed by meta-analysis. Central to the notion of the interpersonal expectancy effects is the assumption that the mediation processes involved in the B-C-D chain are unwitting ones. For example, it has been rhetorically asked, ". . . what subtle forces are going on in the exchange between teacher and learner . . ." and "how does A communicate his or her expectations to B, especially when both A and B probably are unaware of the processes?" (Rosenthal, 1973, p. 60, my emphasis). This important assumption may be the feature that makes the notion of interpersonal expectancy effects intriguing and fascinating. It is not explicitly incorporated in Rosenthal's (1981) 10-arrow model, however, nor is it given any recognition in Rosenthal's (1973) four-factor theory.

By its very nature, the unwitting character of the mediation processes cannot be explicitly studied. At the same time, it might be pervasive enough to have been revealed had the 180 studies examined by Harris and Rosenthal (1985) been coded differently. This is not a criticism of Harris and Rosenthal's (1985) study. It is a suggestion that this assumption (of the unwitting character of the mediation processes) should be given due attention in future meta-analytic exercises.

One objection to the use of meta-analysis is that the numerical integration may be conducted with no regard to the quality of the studies included in the exercise. That is, some of the studies included may be of questionable methodological value. Anticipating this objection, Harris and Rosenthal (1985) compared the integrated results based on experimental studies with the integrated results based on nonexperimental studies. They found that, although the experimental studies gave larger effect sizes in terms of some response categories than did nonexperimental studies, these differences were not consistent. They also found that the estimated effect sizes were not a function of whether the studies chosen were published journal articles or unpublished dissertations. They concluded that "(these) findings help to alleviate traditional doubts about meta-analyses with respect to publication bias and combining good with bad studies" (p. 378).

Implicit in Harris and Rosenthal's (1985) comparisons are the assumptions that (a) experimental studies are necessarily better than nonexperimental ones, and (b) studies published in journals are necessarily better than unpublished dissertations. Both of these assumptions, however, may be questioned. Every empirical study should be judged in terms of its internal validity, statistical conclusion validity, external validity, and construct validity, regardless of whether it is experimental or nonexperimental and regardless of where it appears (Campbell & Stanley, 1966; Cook & Campbell, 1979). Even an experimental study published in a reputable journal may be found wanting in some respects. A logical possibility for the outcomes of Harris and Rosenthal's (1985) comparisons may be that the experimental studies under consideration were as unsatisfactory as their nonexperimental counterparts in terms of Cook and Campbell's (1979) notion of internal validity or statistical conclusion validity. Seen in this light, Harris and Rosenthal have not unambiguously answered the probable criticism that some bad studies may have been included in a meta-analytic exercise.

Harris and Rosenthal's (1985) aim was to substantiate, with meta-analysis, the claim that an individual's expectancy, mediated by the individual's behavior, causes changes in an expectee's behavior. An issue of concern, however, is that their viability criterion (of when a behavior can be characterized as a mediator of an expectancy) is not satisfactory because the C terms in the B-C and the C-D links may not have been equivalent.

REFERENCES

Campbell, D. T., & Stanley, J. C. (1966). Experimental and quasi-experimental designs for research. Chicago: Rand McNally.

Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field studies. Chicago: Rand McNally.

Cooper, H. M. (1979). Statistically combining independent studies: A meta-analysis of sex differences in conformity research. Journal of Personality and Social Psychology, 37, 131-146.

Cooper, H. M., & Rosenthal, R. (1980). Statistical versus traditional procedures for summarizing research findings. Psychological Bulletin, 87, 442-449.

Glass, G. V., McGaw, B., & Smith, M. L. (1981). Meta-analysis in social research. Beverly Hills, CA: Sage.

Harris, M. J., & Rosenthal, R. (1985). Mediation of interpersonal expectancy effects: 31 meta-analyses. Psychological Bulletin, 97, 363-386.

Light, R. J., & Smith, P. V. (1971). Accumulating evidence: Procedures for resolving contradictions among different research studies. Harvard Educational Review, 41, 429-471.

Rosenthal, R. (1973, September). The Pygmalion effect lives. Psychology Today, 56-63.

Rosenthal, R. (1978). Combining results of independent studies. Psychological Bulletin, 85, 185-193.

Rosenthal, R. (1981). Pavlov's mice, Pfungst's horse, and Pygmalion's PONS: Some models for the study of interpersonal expectancy effects. In T. A. Sebeok & R. Rosenthal (Eds.), The Clever Hans phenomenon: Communication with horses, whales, apes, and people. Annals of the New York Academy of Sciences, 364, 182-198.

Rosenthal, R., & Rubin, D. B. (1982). A simple, general purpose display of magnitude of experimental effect. Journal of Educational Psychology, 74, 166-169.

  • I thank Dennis Hunt, Philip de Lacey, and Don Mixon for their comments on an early draft of this manuscript. The revised manuscript was prepared when I spent my sabbatical at the University of Alberta; I thank Vincent Di Lollo and the Department of Psychology of the University of Alberta for their hospitality.
  • Requests for reprints should be sent to Siu L. Chow, Department of Psychology, University of Regina, Regina, Saskatchewan, Canada S4S 0A2.