Journal of Applied Psychology 2003, Vol. 88, No. 5, 852–865

Copyright 2003 by the American Psychological Association, Inc. 0021-9010/03/$12.00 DOI: 10.1037/0021-9010.88.5.852

An Investigation of Race and Sex Similarity Effects in Interviews: A Multilevel Approach to Relational Demography
Joshua M. Sacco
Aon Consulting

Christine R. Scheu, Ann Marie Ryan, and Neal Schmitt
Michigan State University

This research studied the effects of race and sex similarity on ratings in one-on-one highly structured college recruiting interviews (N = 708 interviewers and 12,203 applicants for 7 different job families). A series of hierarchical linear models provided no evidence for similarity effects, although the commonly used D-score and analysis-of-variance–based interaction approaches conducted at the individual level of analysis yielded different results. The disparate results demonstrate the importance of attending to nested data structures and levels of analysis issues more broadly. Practically, the results suggest that organizations using carefully administered highly structured interviews may not need to be concerned about bias due to the mismatch between interviewer and applicant race or sex.

There is a large body of literature supporting the notion that demographic similarity affects important outcomes at work (see Riordan, 2000; Williams & O’Reilly, 1998, for a review). For instance, researchers have reported that demographic similarity is positively related to communication, the probability of remaining on the job, and job satisfaction (Tsui & O’Reilly, 1989; Vecchio & Bullis, 2001; Wagner, Pfeffer, & O’Reilly, 1984; Wesolowski & Mossholder, 1997). One key limitation of this research, however, is that demographic similarity is often measured at the individual level of analysis even though it occurs between pairs of individuals or within a group. This not only implies a lack of clarity about how similarity is conceptualized, but it is also associated with data analytic techniques that can potentially yield misleading results. In evaluation contexts, demographic similarity is frequently hypothesized to yield artificially higher ratings for ratees who are demographically similar to the rater. Research on performance ratings by supervisors, 360° feedback ratings, and panel interview ratings supports this notion, although studies in performance appraisal contexts tend to report less consistent or smaller demographic similarity effects than in selection contexts (e.g., Mount, Judge, Scullen, Sytsma, & Hezlett, 1998; Tuzinski & Ones, 1999). If, as the literature suggests, demographic similarity is indeed associated with higher ratings, it would have important practical implications; however, only limited conclusions can be drawn about the effects of demographic similarity on evaluations in work contexts because studies have rarely considered effects beyond the individual level of analysis. In accordance, the research reported here has two main goals. First, we sought to clarify how demographic similarity is conceptualized from a levels of analysis perspective. In this vein, we discuss hierarchical linear modeling (HLM) as an approach that is well suited to studying data structures frequently occurring in studies of demographic similarity. Second, we examined whether demographic similarity is indeed associated with higher ratings in one-on-one interviews. We first provide background on similarity research. We then briefly summarize research on demographic similarity in work contexts and provide more detail regarding the evidence for similarity effects in evaluations because this is the focus of the present study. Finally, in the Method section, we discuss several key issues involved in operationalizing and analyzing demographic similarity effects and subsequently present the results of a study that applies a levels of analysis perspective to assessing these effects in employment interviews.

Joshua M. Sacco, Aon Consulting, Southfield, Michigan; Christine R. Scheu, Ann Marie Ryan, and Neal Schmitt, Department of Psychology, Michigan State University. An earlier version of this study was presented at the 15th Annual Conference of the Society for Industrial and Organizational Psychology, New Orleans, Louisiana. We thank K. D. Delbridge, Brian H. Kim, and Darin Wiechmann for their helpful comments on earlier versions of this article. We are especially grateful to Katherine Klein, the associate editor, and to two anonymous reviewers for their valuable comments that helped us vastly improve this article. We also thank our source in the client organization who allowed us to conduct this study and who provided permission to present the materials in the Appendix. Correspondence concerning this article should be addressed to Joshua M. Sacco, who is now at Aon Consulting, 707 Wilshire Boulevard, Suite 5700, Los Angeles, California 90017. E-mail: joshua_m_sacco@aoncons.com

Similarity as a Construct in Psychological Research
The concept of similarity may be one of the most ubiquitous concepts in psychology (e.g., Medin, Goldstone, & Gentner, 1993; Tversky, 1977). Similarity judgments allow us to simplify our world by organizing information, classifying people and objects, and more quickly making generalizations when we encounter something new and previously uncategorized (Medin et al., 1993; Tversky, 1977). As a cognitive heuristic, the concept of similarity can be viewed as relatively flexible; that is, it can be changed or manipulated through knowledge, by altering a context, or by drawing people’s focus to different stimulus cues or features (Medin et al., 1993). Research has suggested that the basis of our similarity judgments changes as we age and gain expertise. Specifically, the more we mature or the more we know about a person, object, or subject, the more complex our judgments of similarity become (e.g., Chi, Feltovich, & Glaser, 1981). That is, the basis of our judgments shifts from superficial or surface-level features such as color, size, shape, race, or sex to deeper structural similarities such as analogies, mental models, and personal interests. Our similarity judgments also change as the context changes (e.g., Barsalou, 1982; Roth & Shoben, 1983) such that a person who may seem similar on the baseball field or in an athletic context suddenly appears very different in an academic context, for instance. This occurs due to a shift in the key features used to make the similarity judgment from one context to another.

Borrowing from cognitive psychology’s research and theories of similarity, more applied research has also studied the role of similarity in the workplace. Many of these studies are on supplementary person–organization (PO) fit, which examines the extent to which similarities among workers or between an employee and the broader organizational context impact various work and adjustment outcomes (Muchinsky & Monahan, 1987). For instance, Kristof’s (1996) review indicated that supplemental fit was related to job choice, work attitudes, and the intention to remain on the job. Similarly, Schneider’s (1987) influential attraction-selection-attrition framework emphasizes that similarity among employees has important consequences in the three stages of the employment relationship from which the name of the theory derives. Indeed, the range of impacts that similarity has on important outcomes at work is well established in the empirical literature.
One exception to this general finding regarding the importance of similarity is a recent meta-analysis by Webber and Donahue (2001), who found that the diversity of work groups was unrelated to workgroup cohesion or performance. The difference in these findings may be due to any one of the several reasons cited by Webber and Donahue, as well as the fact that their study examined workgroup-level outcomes, whereas the other studies cited above, as well as ours, focus on attitudes or judgments that result from perceived or actual similarity.

Relational demography theory examines the importance of similarity more narrowly by focusing on how people use demographic variables such as race, sex, educational level, or socioeconomic status to assess how similar one individual is to another. Like the PO-fit literature mentioned above, this demographic similarity, in turn, is thought to be related to important work outcomes. Relational demography theory developed as an extension of the social psychological literature on similarity, specifically, the similarity–attraction paradigm (Berscheid & Walster, 1969; Byrne, 1971; Newcomb, 1956), which states that similarity results in interpersonal attraction. This notion has received a great deal of empirical support across a wide variety of applied and laboratory settings. A similar notion, homophily, has been discussed extensively in the sociological literature (e.g., McPherson, Smith-Lovin, & Cook, 2001). The similarity–attraction paradigm is complemented by social identity (Tajfel & Turner, 1986) and self-categorization theory (Turner, 1987), which propose that our self-concepts are in part formed by the groups to which we think we belong. To the extent that this is true, people are expected to evaluate members of their own group more positively than those of other groups to maintain a positive self-regard. That is, because it is necessary to maintain a positive self-regard, seeing others that are similar to oneself in a positive light is psychologically beneficial. Although different factors may influence how people categorize themselves and others into groups in different situations, research indicates that demographic variables, such as those mentioned above, are important in this regard.

Several recent literature reviews have concluded, as theory suggests, that demographic similarity leads to more positive interactions at work (Reskin, McBrier, & Kmec, 1999; Riordan, 2000; Williams & O’Reilly, 1998). For instance, demographic similarity has been related to more positive superior–subordinate and mentoring relationships, communication, and job satisfaction (Ensher & Murphy, 1997; Green, Anderson, & Shivers, 1996; Tsui & O’Reilly, 1989; Turban & Jones, 1988; Vecchio & Bullis, 2001; Wesolowski & Mossholder, 1997; Wharton, Rotolo, & Bird, 2000; Zenger & Lawrence, 1989). These effects on relationships and attitudes, in turn, are thought to lead to a number of important behavioral outcomes, such as reductions in work-team and individual turnover and enhanced team performance (Jackson et al., 1991; Timmerman, 2000; Wagner et al., 1984; cf. Webber & Donahue, 2001). Taken together, these findings provide at least moderate support for the notion that demographic similarity is related to positive outcomes at work. The following section discusses research on demographic similarity in situations most similar to the one studied here—ratings contexts.

Research on Demographic Similarity and Evaluations
The notion underlying research on demographic similarity and evaluations is that demographic similarity and the concomitant interpersonal attraction might lead to biases favoring ratees that are demographically similar to the rater. Overall, the results of studies on racial similarity and supervisor ratings of subordinate job performance have been mixed. Kraiger and Ford’s (1985) meta-analysis indicated that the size of these effects was relatively large; however, more recent large sample studies suggest that these effects are relatively small in absolute terms (e.g., Pulakos, White, Oppler, & Borman, 1989), may in fact be smaller and less consistent than what Kraiger and Ford’s study indicated (Sackett & DuBois, 1991), or may not exist at all (Waldman & Avolio, 1991). Other research in the context of 360° developmental feedback supports the notion that higher ratings are assigned by raters to ratees of the same race, although these results in part depend on the specific race combinations examined and the perspective of the rater (i.e., peer, supervisor, or subordinate; Mount et al., 1998; Tuzinski & Ones, 1999).

Rather than focusing on performance appraisal or developmental feedback ratings, however, the present research focused on ratings made during recruiting interviews. Recruiting interviews are a logical place to expect demographic similarity effects because they are relatively brief; although the focus is on gathering employment-related information, the relative amount of information gleaned in these limited interactions is likely to be low as compared with information gathered from extended interactions at work. One potential reason for the inconsistent findings is the idea, presented earlier, that as we gain knowledge about others, the bases of our evaluations shift from surface-level features, such as demographics, to deeper similarities. This possibility derives from the cognitive perspective on similarity, and conceptual and empirical work in relational demography is consistent with this notion. For instance, Riordan (2000) suggested that demographic diversity effects will be initially important but will fade over time as people have an opportunity to see past surface-level features. Shaw and Barrett-Power (1998) also alluded to this idea; their model of demographic and cognitive diversity effects suggests that the former is more important at the outset of group development, whereas the latter is more influential in later stages. Several studies also provide evidence that demographic similarity effects fade over time or as people get to know each other. Indirect evidence of this effect is provided by several researchers who have reported stronger demographic similarity effects at the outset of their studies as compared with the end (Ancona & Caldwell, 1992; Watson, Kumar, & Michaelsen, 1993). Chatman and Flynn (2001) modeled the effects of time and reported that the negative effects of diversity on perceptions of cooperative group norms were mainly present early in the group’s lifespan. Similarly, Harrison, Price, and Bell (1998) found that gender (though neither race nor age) and affective diversity interacted with time such that the former was more strongly related to group cohesion in groups that had not been together very long. Conversely, underlying attitudes became a stronger predictor of group cohesion as time spent together increased. In a very large sample, Sacco and Schmitt (2003) found that the misfit between employees’ race, sex, and age, and that of the other employees in the restaurant in which they worked was associated with a higher turnover probability but only in the initial weeks and months of the employees’ tenure.
That is, as time went on, these effects diminished and in some cases reversed such that demographic misfit was associated with a lower turnover risk. Collectively, this research and theory suggests that the interpersonal interactions that naturally occur over time mitigate the effects of demographic dissimilarity. This is consistent with social categorization theory, which states that individuals form judgments on the basis of the available information at a given time (Tajfel, 1981; Turner, 1987); however, it is also important to note that a large sample study of supervisor–subordinate race and sex similarity in the military found that its effects did not fade over time (Vecchio & Bullis, 2001).

Effects of Race Similarity on Interview Ratings

All of the research we could locate concerning racial similarity effects in interview ratings pertains to panel interviews rather than the more ubiquitous one-on-one interview. Prewett-Livingston, Feild, Veres, and Lewis (1996) reported a Black–White racial similarity effect that yielded higher ratings only when the interview panel was racially balanced. On racially unbalanced panels, applicants who were the same race as the majority of the panel members were evaluated more positively across all the interviewers. McFarland, Sacco, Ryan, and Kriska (2000) also reported Black–White similarity effects in their study of structured interviews, although only when the panel consisted of one White and two Black interviewers. Lin, Dobbins, and Farh (1992) found panel-level similarity effects for Hispanic and Black applicants but not White applicants when a conventional structured interview was used, and the same effects were reported for Black interviewees for a situational interview in a separate sample. In both samples, the only significant difference between the panels occurred when comparing two-member panels of the same race to same-race–other-race and other-race–other-race panels. These researchers noted that the null results for Whites may have been due to the low statistical power associated with an underrepresentation of Whites in their applicant pool.

Although these studies suggest that racial similarity effects might also occur for single-interviewer interviews, we are aware of no research that has examined this possibility. We think that one-on-one interviews would be even more likely to show similarity effects for several reasons. First, there are no other interviewers present to whom the interviewer has to justify ratings (McFarland et al., 2000; Prewett-Livingston et al., 1996). Second, there are no other interviewers against whom the interviewer can readily calibrate his or her ratings. Third, the cognitive demands placed on interviewers who conduct multiple interviews back-to-back on their own may lead to diminished attentional capacity for less obvious cues such as job-related experience as compared with more obvious characteristics such as race. Thus, based on these panel interview studies and the broader literature on similarity effects, we propose the following hypothesis:

Hypothesis 1: Interviewers will assign higher interview ratings to applicants of the same race as compared with applicants of different races.

Recent research has emphasized the need to look at race effects in racial groups other than Blacks and Whites. This is important for a number of reasons. First, relatively little research examines other groups such as Hispanics or Asians. On the basis of the relational demography paradigm, one would expect similarity effects to vary across different races to the extent that different races perceive themselves to be more or less similar to other races. Given that different racial groups have vastly different experiences in this country, we think it is important to consider these differences. Second, researchers have found that race effects are not necessarily uniform across all minority groups (e.g., Tuzinski & Ones, 1999) and that they may differ in magnitude (e.g., Lin et al., 1992). Third, the labor market is becoming increasingly diverse (e.g., National Research Council, 1999), and these minority groups represent an increasing proportion of the workforce in the United States. Thus, it is important to determine whether race effects reported in Black–White comparisons generalize to other races (Williams & O’Reilly, 1998). In accordance, we studied three different minority groups, Asians, Blacks, and Hispanics, to examine the extent to which racial similarity effects might vary across them.

Effects of Sex Similarity on Interview Ratings
There are very few studies on sex similarity effects in the employment interview. Graves and Powell (1995) sampled a single applicant from each interviewer who conducted college recruiting interviews with multiple applicants. The results indicated that sex similarity had a weak negative indirect effect on interview outcomes. Further analyses revealed this effect only for female recruiters. According to these researchers, one possible explanation consistent with social identity theory was that female recruiters sought to distance themselves from their own lower status group and to take on the psychological characteristics of the higher status group (i.e., men). In contrast, Graves and Powell (1996) found that sex similarity had a small positive effect on recruiting outcomes. Although there is little reason to take the results of one study over the other, on the basis of the relational demography literature as a whole, we expected to find sex similarity effects. Hence, we propose the following hypothesis:

Hypothesis 2: Interviewers will assign higher interview ratings to same-sex applicants as compared with opposite-sex applicants.

Method

Sample
The initial sample consisted of 708 college recruiters who interviewed 12,203 applicants applying for exempt jobs with a large manufacturing firm over the course of three college recruiting seasons. These were the interviewers and applicants for whom we had either race or sex data; however, because of the nature of the HLM analyses, smaller samples were actually used (see the Data Analysis section). Each interviewer interviewed an average of 17.03 applicants (SD = 11.64). Applicants were applying for jobs in one or more of seven job families (fewer than 5% applied to more than one job family). The applicant sample was 30.9% female, and 0.1% did not indicate their gender. The sample was 61.8% White, 13.7% Black, 15.1% Asian, 3.9% Hispanic, 0.2% American Indian, and 5.4% did not respond to the race item. The interviewers were 25.4% female, 75.8% White, 14.1% Black, 4.1% Asian, 1.7% Hispanic, 0.7% American Indian, and for 3.8% there were no race data. American Indians were dropped from all analyses because they represented such a small number of interviewers and applicants. There was a mix of full-time college recruiters and employees who volunteered to serve as interviewers for at least one recruiting season.

Measures

Development of the interview. The interviews used in the current study were developed by industrial–organizational psychologists experienced in the design and administration of selection systems (none of whom are authors of this article). The interview questions were based on a leadership competency model specifically developed to meet the needs of the organization. The competency model was composed of four broad dimensions (skills, personal characteristics, knowledge and experience, and values), and each dimension included up to three competencies. The same four dimensions were examined for all seven job families. These dimensions and their associated competencies were supported by thorough separate job analyses for each job family, providing support for the content validity of the interviews. The job analyses were designed and conducted by psychologists experienced in job analysis design and administration. The job analysis results were subsequently used as a basis for the experientially based interview questions and the development of behavioral anchors.

Interview process. The interviews were conducted by a single interviewer during the first phase of a two-stage process, which included a situational judgment test (also at Stage 1) and an assessment center (at Stage 2). The interviews were conducted at the college or university from which the applicant was about to graduate. The interview was highly structured and behaviorally based, with one behaviorally anchored 9-point rating scale for each of the four competencies. Ineffective responses corresponded to a rating of 1, 2, or 3; effective responses to 4, 5, or 6; and highly effective responses to 7, 8, or 9 (see the Appendix for sample questions and a partial list of probes and behaviorally anchored rating scales). The specific behavioral anchors associated with ineffective, effective, and highly effective responses were based on the competencies linked to the dimension. Approximately eight behavioral anchors were listed under each grouping (i.e., ineffective, effective, and highly effective), thus providing nearly 24 behaviors for each of the four dimensions. Before conducting any interviews, all interviewers completed a formal training program to learn how to conduct the interviews, use the rating system, and determine when an applicant’s response allowed the competency to be comprehensively evaluated. For each competency, the interviewers chose an initial question from a list, and a series of predetermined probes were asked as appropriate. The interviewers had discretion over which items were selected from the list and were instructed to ask additional questions until the competency could be comprehensively evaluated. The interviewers were required to take notes during the interview regarding the candidate’s reported behaviors for each question. These notes were made in the space provided on the interview form. After the interview was complete, the interviewers reviewed their notes and recorded their ratings for the dimensions. The average rating across the four dimensions was used as the dependent variable because the ratings were highly correlated (r = .58; α = .84).

Operationalizing Demographic Similarity Effects

Several different approaches have been used to operationalize demographic dissimilarity. Most of the studies cited in the introduction that examined racial similarity effects in evaluation contexts operationalized dissimilarity as an interaction in the traditional analysis of variance (ANOVA) framework. That is, the relationship between the race of the ratee and ratings was modeled as a function of a third variable (i.e., the race of the rater). This approach is clearly consistent with the theory underlying the study of similarity effects. Unfortunately, it only works where there are equal numbers of observations for each rater. This has led some researchers to randomly discard or average data (Graves & Powell, 1995; Mount, Sytsma, Hazucha, & Holt, 1997), which is a waste of valuable information (DeShon, Ployhart, & Sacco, 1998). Although this might not change the results of some studies, in the sample reported here the number of observations was severely unbalanced within each interviewer such that discarding or averaging data would dramatically reduce the sample size. Riordan (2000) identified three other major operationalizations of similarity: perceived similarity, Euclidean distance (i.e., difference, D) scores, and the interaction approach. Although we agree that perceived similarity is an important construct worthy of study, we do not have such data available, and thus our focus is on actual similarity. The second approach involves calculating D scores, which are said to index dissimilarity between an individual and another individual (or the average dissimilarity between an individual and a group of individuals). D scores have been widely criticized on a host of conceptual and methodological grounds (Edwards, 1994, 2002; Johns, 1981; Riordan, 2000; Riordan & Shore, 1997).

The third approach, forming interaction terms in moderated regression, also suffers from a number of serious limitations when there are repeated measurements (here we use repeated measures to refer not only to cases involving ratings in which there literally are repeated measures, but also when dissimilarity is examined between all the individuals within a group and another individual in that group). In the case of both D scores and interaction terms, these repeated measurements introduce dependencies that pose serious problems when they are not modeled (Kenny & Judd, 1986).1 One implication of these approaches is that the standard errors of Level 2 coefficients (e.g., interviewer effects in the present research) are downwardly biased due to the "miraculous multiplication of the number of units" (Snijders & Bosker, 1999, p. 15; also see Raudenbush & Bryk, 2002) when D scores or interaction terms are used. That is, the chance of committing a Type I error is increased when conducting statistical tests on the nesting variables. Further, nested data structures introduce correlated prediction errors that are not modeled in ordinary least squares (OLS) regression (e.g., Raudenbush & Bryk, 2002; Hannan, 1990; Hofmann, 1997; Kennedy, 1998; Kenny & Judd, 1986). This increases the chances of making a Type II error for analyses of variables at the individual level (e.g., the effects of applicant-level characteristics on ratings in the present study; Bliese, 2002). Raudenbush and Bryk (2002) also noted that ignoring dependencies likely violates the constant variance assumption; indeed, it has been well documented that disaggregation is associated with a host of problems, both conceptual and analytical (W. H. Glick & Roberts, 1984; Hannan, 1971; Langbein & Lichtman, 1978; Nezlek & Zyzniewski, 1998; Rousseau, 1985; cf. Roberts, Hulin, & Rousseau, 1978). Thus, in research studying similarity effects using either the D score or interaction term approach (which includes almost all of the studies in this domain; e.g., Tsui, Egan, & O’Reilly, 1992), it is difficult to determine the extent to which the reported results accurately describe the true nature of the relationships of interest. Levels of analysis theorists call these misspecification errors because they do not correctly identify the level at which the hypothesized processes occur (Rousseau, 1985). Rather than using the approaches critiqued above, we conceptualize demographic similarity in a way that is analogous to a cross-level effect or a cross-level interaction (Rousseau, 1985).

1 Edwards (1994) offers a seminal comprehensive critique of D scores. Because of the complexity of Edwards’s criticisms of D scores, a detailed discussion of how they apply to categorical demographic variables is beyond the scope of this article.
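To make the D-score critique concrete, the sketch below (our illustration, not code from the study) shows that for a single categorical attribute such as race, the Euclidean distance between an interviewer and an applicant collapses to a 0/1 mismatch indicator, and that every score computed for one interviewer reuses the same interviewer-level value; the scores are therefore not independent observations. All variable names are hypothetical.

```python
# Hypothetical sketch: dyadic D scores on one categorical attribute.
# For an interviewer-applicant dyad, sqrt((x - y)^2) on dummy-coded
# race reduces to a simple mismatch indicator.

def d_score(interviewer_attr, applicant_attr):
    """Return 0.0 if the two categories match, 1.0 otherwise."""
    return 0.0 if interviewer_attr == applicant_attr else 1.0

# One interviewer, several applicants: every D score depends on the
# same interviewer-level value, which is the repeated-measures
# dependency discussed above.
interviewer_race = "White"
applicants = ["White", "Black", "Asian", "White", "Hispanic"]
d_scores = [d_score(interviewer_race, a) for a in applicants]
print(d_scores)  # [0.0, 1.0, 1.0, 0.0, 1.0]
```

Treating these five scores as independent data points in an OLS regression is exactly the disaggregation problem the text describes: the interviewer-level information is "multiplied" across all of that interviewer's applicants.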
Although interviewers are not a level of analysis in organizational research in the same sense that teams or work units are, in the present study, as well as in most rating contexts, applicants are nested within raters because each interviewer conducts multiple interviews. In accordance, we examine whether the race (or sex) of the interviewer moderates the relationship between race (or sex) and interview ratings at the applicant level, and we expected the form of this interaction to be consistent with the theory and research described in the introduction. We analyzed our data using HLM (Raudenbush & Bryk, 2002), which is specifically designed to accommodate nested or multilevel data structures. HLM, a specific type of random coefficient model (e.g., Bliese, 2002), has been used to study a number of organizational phenomena, such as the effects of group cohesiveness on the relationship between job satisfaction and courtesy, human resource practices on perceived organizational support and trust in management, and goal congruence between teachers and principals (Kidwell, Mossholder, & Bennett, 1997; Vancouver, Millsap, & Peters, 1994; Whitener, 2001). A number of authors have described the benefits of HLM over OLS regression when examining nested or multilevel data structures (Bliese, 2000, 2002; Gilbert & Shultz, 1998; Hofmann, 1997; Nezlek & Zyzniewski, 1998; Pollack, 1998). In fact, Hox (1994) provided an extended discussion of the benefits of HLM over OLS regression in interview studies, making many points that are similar to those discussed here.
With regard to studies of demographic similarity in evaluation contexts, this approach avoids the need to randomly discard, average, or sample data where unbalanced designs exist (Mount et al., 1997; Graves & Powell, 1995; Pulakos et al., 1989) or to disaggregate and use D scores or moderated regression, which we earlier described as problematic.2 Thus, an additional goal of this research was to contrast the results obtained using HLM to those using the D score and ANOVA-based interaction approaches. On the basis of the literature described above, we expected the latter two approaches to yield stronger evidence of similarity effects as compared with HLM. An additional benefit of HLM is that it provides more weight to units (i.e., interviewers) with more cases (i.e., applicants), resulting in more accurate parameter estimates (Raudenbush & Bryk, 2002). This is especially beneficial in datasets like those studied here in which there is substantial variability in the number of applicants nested within each interviewer.
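The cross-level test described above can be sketched with a random-intercept mixed model. The example below is our illustration on simulated data (not the authors' analysis, which used the HLM software): applicants are nested within interviewers, and the question is whether an interviewer-level variable (here, interviewer sex) moderates the applicant-level sex–rating slope. The `statsmodels` mixed-model formula interface is used; all variable names are hypothetical.

```python
# A minimal sketch of the cross-level interaction test on simulated
# data, using a random intercept for each interviewer.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for j in range(50):                        # 50 interviewers (Level 2)
    ivr_sex = int(rng.integers(0, 2))      # interviewer sex (L2 variable)
    u_j = rng.normal(0, 0.5)               # interviewer random intercept
    for _ in range(20):                    # 20 applicants each (Level 1)
        app_sex = int(rng.integers(0, 2))  # applicant sex (L1 variable)
        # No similarity effect is built into the simulated ratings.
        rating = 5 + u_j + rng.normal(0, 1)
        rows.append((j, ivr_sex, app_sex, rating))
df = pd.DataFrame(rows, columns=["interviewer", "ivr_sex", "app_sex", "rating"])

# Cross-level interaction: does ivr_sex (L2) moderate the app_sex (L1) slope?
model = smf.mixedlm("rating ~ app_sex * ivr_sex", df, groups=df["interviewer"])
result = model.fit()
print(result.params["app_sex:ivr_sex"])  # interaction estimate
```

Because no similarity effect was simulated, the `app_sex:ivr_sex` estimate should be close to zero; in a relational-demography analysis, a reliably positive interaction of the predicted form would be the evidence for a similarity effect.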

Data Analysis
Because HLM is most frequently used to handle nested data structures that often correspond to hierarchical levels in an organization, we refer to Level 1 (L1) and Level 2 (L2) when discussing applicant and interviewer effects, respectively, acknowledging that the nesting does not necessarily correspond to levels of an organizational hierarchy. At a conceptual level, HLM allows researchers to assess whether an L2 variable impacts (a) outcomes at L1 or (b) the relationship between an L1 predictor and an L1 outcome. An example of the former would be using a characteristic of interviewers (e.g., sex) to predict average interview ratings. In contrast, an example of the latter would be using interviewer sex to predict the relationship between applicant sex and ratings (i.e., a cross-level interaction) because it tests a moderation hypothesis across two levels of analysis. Although the particulars of the statistical approach are more complex than OLS regression, HLM essentially computes separate regressions (and thus, parameter estimates) for each L2 group (e.g., interviewer) and then uses characteristics of the L2 groups to predict variability in the L1 parameter estimates across groups. For instance, if some interviewers yield a stronger association between applicant sex and ratings than others, HLM allows one to assess whether this is a function of interviewer sex. In the following paragraphs we detail the logic of HLM as it applies to the research reported here because HLM is not yet commonly used in applied research. Readers interested in comprehensive discussions of these models are referred to other sources (especially Hox, 2002; Kreft & de Leeuw, 1998; Raudenbush & Bryk, 2002, Snijders & Bosker, 1999; but also see Bliese, 2000, 2002, and Hofmann, 1997, for discussions oriented toward organizational researchers). Hypothesis testing in HLM involves evaluating a series of models. 
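The cross-level moderation logic just described can be sketched as a random coefficient model. The following is a minimal illustration in Python's statsmodels with simulated data; the column names (`rating`, `sex_app`, `sex_int`, `interviewer`) are hypothetical, and the authors themselves used the HLM software package rather than code like this:

```python
# Sketch: does interviewer sex (L2) moderate the applicant sex -> rating
# relationship (L1)? The sex_app:sex_int product term carries the
# cross-level interaction; re_formula lets the intercept and the
# applicant-sex slope vary randomly across interviewers.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_int, n_app = 80, 15                              # interviewers, applicants each
interviewer = np.repeat(np.arange(n_int), n_app)
sex_int = rng.integers(0, 2, n_int)[interviewer]   # constant within interviewer
sex_app = rng.integers(0, 2, n_int * n_app)
# Simulated ratings: grand mean + interviewer effect + residual
# (no true similarity effect is built in).
rating = (6.1 + rng.normal(0, 0.5, n_int)[interviewer]
          + rng.normal(0, 1.3, n_int * n_app))
df = pd.DataFrame({"interviewer": interviewer, "sex_int": sex_int,
                   "sex_app": sex_app, "rating": rating})

model = smf.mixedlm("rating ~ sex_app * sex_int", df,
                    groups=df["interviewer"], re_formula="~sex_app")
result = model.fit(method="lbfgs")
print(result.params["sex_app:sex_int"])            # the cross-level coefficient
```

With the demographic variables coded identically at both levels, a significant positive product-term coefficient would correspond to the similarity effect the study tests for.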
The statistical significance of specific parameters in initial models is a prerequisite for finding significant results in subsequent models. The first step in evaluating an HLM is equivalent to a one-way ANOVA and yields variance component estimates and significance tests of the within- and between-group variance. In our research, this information indicates whether there are significant applicant and interviewer differences in interview ratings, respectively. This model, shown below, is also known as a null model because no predictors are used:

L1: Rating_ij = β0j + r_ij   (1)
L2: β0j = γ00 + u0j   (2)

Thus, the L1 equation predicts applicants' interview ratings based on the mean rating (i.e., intercept) within each of the j interviewers (β0j) and the error for each of i applicants (r_ij). The L2 equation models each interviewer's intercept based on the grand mean (γ00) and each interviewer's deviation (u0j). In addition, the associated variance components of these error terms can be used to calculate an intraclass correlation coefficient (ICC), which indexes the ratio of between-interviewer variance in ratings to total variance. A significant variance component for the intercept indicates statistically significant interinterviewer variability in ratings (i.e., a significant nesting effect). In contrast, a nonsignificant variance component for the intercept indicates that HLM would yield little additional information as compared with OLS regression.

2 Polynomial regression (Edwards, 1995) is a commonly used alternative to D scores. It is not appropriate given the data structure and research questions posed here, however, because it is limited to a single level of analysis. Performing an analysis at a single level would have required either aggregating or disaggregating our measures. In our view, aggregating is not appropriate because we could not think of a strong theoretical rationale supporting this approach (see Atwater, Ostroff, Yammarino, & Fleenor, 1998, for an example of aggregation that is justified). Similarly, disaggregation is subject to the problems discussed earlier. Thus, polynomial regression was not a viable alternative analytical approach.

If the variance of the intercept is significant, the next step involves adding a predictor to the L1 equation. Using applicant sex as an example, the following equations were estimated:

L1: Rating_ij = β0j + β1j(sexApp) + r_ij   (3)
L2: β0j = γ00 + u0j   (4)
L2: β1j = γ10 + u1j   (5)
Systems of equations such as these are known as random coefficient regression models because the regression coefficients β0j and β1j are modeled as random effects at L2 in Equations 4 and 5. These random coefficients are predicted by the overall mean (γ00) and slope (γ10) for each interviewer. The significance of these L2 parameters indicates whether ratings are significantly different from zero and whether applicant sex is related to ratings, respectively. The statistical significance of the variance components for the error parameters u0j and u1j indicates whether there is a significant amount of variability in the corresponding coefficients at L1. In this research, we are specifically interested in the variability of slopes because we want to predict it with interviewer demographics. (Note, however, that these models can only be estimated for interviewers whose applicants vary in sex because the slope of applicant sex within an interviewer cannot be computed when sex is a constant within that interviewer.) If there is a significant amount of variability in the intercepts, the next step involves assessing whether an L2 variable predicts that variability. Continuing with our example, this analysis, represented below, adds the interviewer's sex as a predictor of the variability of intercepts at L1:

L1: Rating_ij = β0j + β1j(sexApp) + r_ij   (6)
L2: β0j = γ00 + γ01(sexInt) + u0j   (7)
L2: β1j = γ10 + u1j   (8)

This intercepts-as-outcomes model tests for significant differences in mean ratings as a function of interviewer sex. Next, for models that exhibited significant variability in the applicant sex slope, we proceeded to estimate the following series of equations:

L1: Rating_ij = β0j + β1j(sexApp) + r_ij   (9)
L2: β0j = γ00 + γ01(sexInt) + u0j   (10)
L2: β1j = γ10 + γ11(sexInt) + u1j   (11)

This is known as a slopes-as-outcomes model because the sex of the interviewer is used to predict variability in the slope of applicant sex at L1. As stated above, in HLM terminology, a significant γ11 coefficient would be evidence of a cross-level interaction. That is, the sex of the interviewer would moderate the relationship between applicant sex and interview ratings. If such an effect is identified, the form of the interaction must be examined to determine whether it is consistent with the expected results. The γ11 coefficient would have to be significant and positive (because our demographic variables are coded the same way at both levels of analysis) to support our hypotheses.

Centering. The interpretation of the intercepts and their covariances with other model parameters as a function of the L1 predictors' scaling has been discussed by a number of authors (Hofmann & Gavin, 1998; Kreft & de Leeuw, 1998; Kreft, de Leeuw, & Aiken, 1995; Raudenbush & Bryk, 2002). The possible approaches to centering are grand mean centering, group mean centering, and no centering. The authors cited above note that there is no single correct centering approach, although the different approaches can yield different results and should be interpreted differently. Consequently, the recommended approach is to make a decision in light of the specific research questions. In this research, the intercept at L1 is meaningful when the data are not centered because the intercept represents the mean interview rating for the race (or sex) group that is coded as zero. Although this is not of substantive interest in the present research, in our view this is substantially easier to interpret and more meaningful as compared with the other centering approaches given our data structure. Similarly, in uncentered form, the slope represents the within-interviewer race or sex difference between the applicants, unadjusted for the overall proportion of applicants of a given race or sex interviewed by a given interviewer (group mean centering) or this same proportion in the entire sample (grand mean centering). Thus, we used an uncentered dichotomous variable to represent sex or a comparison between two racial groups.

Racial group comparisons. In testing the hypotheses, we evaluated similarity at different levels of generality. At the most general level, we compared Whites to non-Whites and Blacks to non-Blacks. Although it has been argued that collapsing different racial groups may be inappropriate (Vecchio & Bullis, 2001), in our view this approach is informative from a practical perspective when practitioners are making decisions concerning interviewers. Focusing more narrowly on subgroup status, we also studied sex similarity and racial similarity using a number of two-group comparisons. We limited our analyses with regard to race to two groups for two reasons. First, including multiple groups in a single analysis would involve multiple dummy variables at each level of analysis. Because our hypotheses focused on cross-level interactions, the interpretation of any significant effects would largely reduce to two-group differences anyway because we would have to examine interactions between specific dummy variables. Second, interviewers would have had to interview at least one member of every racial group so that a slope could be calculated for every dummy variable, drastically reducing our sample size.

Taking an even more focused approach, we also examined race similarity effects within sex to avoid the potential confounding effects of sex (Mount et al., 1997; Pulakos et al., 1989; Tuzinski & Ones, 1999). This approach, however, was only feasible with Blacks and Whites due to sample size constraints.

Results

Sample Characteristics and Descriptive Statistics

Table 1 presents the sample sizes, means, and standard deviations by applicant and interviewer race for the samples used in the HLM analyses. A number of comments on these data are warranted. First, the table indicates that there was a minimum of 35 White or Black interviewers for applicants of all races, and aside from Asians and Hispanics, the number of applicants in these race combinations was large. Second, there were very few cases in which Hispanics interviewed Asians or in which Hispanics were interviewed by Hispanics or Asians. Thus, these combinations will not be discussed in more detail here. With regard to the ratings themselves, there were small applicant race differences in ratings except that Asians scored slightly lower than the other races. In addition, there was little evidence of similarity effects for White interviewers; that is, their ratings of Whites were not appreciably different from their ratings of applicants of other races. Again, Asians are the exception here, but this seems to be due mostly to the fact that Asian interviewers assign slightly higher ratings to Asian applicants than to applicants who are Black or Hispanic. This last observation, however, should be interpreted cautiously given the small sample sizes for these comparisons. Table 2 presents the same data by sex. Overall, there are only very small sex differences in interview ratings; however, the pattern of the means by sex shows a statistically significant interaction such that interviewers assigned lower ratings to members of


Table 1
Means, Standard Deviations, and Sample Sizes of Interview Ratings for Each Applicant–Interviewer Race Combination

                                     Interviewer race
Applicant race & statistic     White     Black     Asian   Hispanic   Overall
White
  M                             6.20      6.06      5.88      6.52      6.18
  SD                            1.25      1.32      1.21      1.04      1.26
  Applicant n                  6,311       586       227       125     7,249
  Interviewer n                  519        62        25         9       615
Black
  M                             6.19      6.12      5.60      5.79      6.12
  SD                            1.31      1.32      1.89      1.35      1.34
  Applicant n                    641       900        31        34     1,606
  Interviewer n                  266        87        15         6       374
Asian
  M                             5.98      5.88      6.06      5.90      5.98
  SD                            1.32      1.35      1.39      1.64      1.34
  Applicant n                  1,509       159        88        17     1,773
  Interviewer n                  407        56        18         5       486
Hispanic
  M                             6.24      6.20      5.85      6.31      6.22
  SD                            1.18      1.19      0.90      1.27      1.18
  Applicant n                    364        57        13        25       459
  Interviewer n                  215        35         7         7       264

the same sex, as evidenced by the results of an Applicant Sex × Interviewer Sex ANOVA, F(1, 12003) = 11.11, p < .01. It is important to note, however, that the results of this ANOVA, as well as the descriptive statistics in Tables 1 and 2, ignore the nested data structure and the dependencies this introduces. Thus, although informative in understanding the general patterns that exist in these data, these statistics may not accurately describe the relationships between applicant and interviewer sex and race. The results of the HLMs we evaluated, which account for these dependencies, are discussed next.

Table 2
Means, Standard Deviations, and Sample Sizes of Interview Ratings for Each Applicant–Interviewer Sex Combination

                                  Interviewer sex
Applicant sex & statistic     Female      Male    Overall
Female
  M                             6.10      6.24      6.13
  SD                            1.37      1.23      1.36
  Applicant n                  1,100     2,616     3,716
  Interviewer n                  166       177       343
Male
  M                             6.15      6.09      6.14
  SD                            1.35      1.27      1.26
  Applicant n                  1,799     6,492     8,291
  Interviewer n                  474       524       998

HLMs

We followed the steps outlined in the Data Analysis section in testing the HLMs for race and sex similarity effects. These models were estimated using the software package described in Bryk, Raudenbush, and Congdon (1996) using the restricted maximum likelihood algorithm. Table 3 provides summary data for the HLMs. As can be seen in the table, the sample sizes for all the analyses were fairly large, and all of the models had significant variability in intercepts. The ICCs are large enough to increase the nominal alpha severalfold if clustering is not taken into account, especially given the moderate sample size within each interviewer (Barcikowski, 1981; Kreft & de Leeuw, 1998). This strongly supports the use of HLM in this context. The columns for the variance components of the intercepts (mean ratings) and slopes (race or sex differences) reveal that the slopes varied significantly for four of the eight comparisons studied (note that we report the results of all steps in the HLM analyses for completeness even if an earlier step was not significant). Specifically, the magnitude of male–female, Black–non-Black, Black–White, and White–Asian differences varied significantly across interviewers (Comparisons 1, 3, 4, and 5, respectively). The last two columns provide estimates for the cross-level coefficients (γ01 and γ11) and their standard errors, relating interviewer race or sex to the mean rating and to the slope of applicant race or sex at L1, respectively. As can be seen in Table 3, none of these effects were significant, yielding no evidence of race or sex similarity effects or of interviewer sex or race differences in ratings once the nested data structure was taken into account. Two additional sets of analyses were conducted at the suggestion of an anonymous reviewer. First, we examined whether variability in the L1 slopes was perhaps due to cognitive overload, which we operationalized as the number of interviews conducted by each interviewer.
In these analyses, for each of the eight comparisons listed in Table 3, the number of interviews conducted was included as the sole predictor of the L1 intercept and slope (the demographic variables were dropped because the results reported above indicated that they did not have any significant


Table 3
Sample Sizes and Summary of Key Results for Race and Sex Similarity HLM Analyses

                                                             L1 variance components   L2 parameter estimates
Comparison  Group          Applicants  Interviewers   ICC    Intercept    Slope       Intercept γ01 (SE)   Slope γ11 (SE)
1           Men               8,154        426        .16*     .26*        .06*         .09 (.07)            .11 (.06)
            Women             2,618        149
2           Whites            6,222        421        .16*     .27*        .03          .01 (.06)            .12 (.07)
            Non-Whites        2,935         85
3           Blacks            1,179         58        .16*     .26*        .11*         .09 (.08)            .10 (.11)
            Non-Blacks        5,605        251
4           Whites            3,875        234        .16*     .27*        .09*         .03 (.10)            .09 (.12)
            Blacks              911         47
5           Whites            4,929        365        .17*     .36*        .07*         .06 (.20)            .26 (.19)
            Asians            1,417         16
6           Whites            3,014        191        .17*     .18*        .01          .16 (.20)            .24 (.28)
            Hispanics           350          7
7           White men         2,035        121        .17*     .32*        .08          .14 (.10)            .10 (.17)
            Black men           558         24
8           White women         689         35        .10*     .33*        .12          .03 (.17)            .39 (.41)
            Black women         156         13

Note. Race and sex were coded as either 0 or 1; the first group listed in the Group column was coded as 1. L1 = Level 1 (applicant level); L2 = Level 2 (interviewer level); ICC = intraclass correlation coefficient; SE = standard error; HLM = hierarchical linear modeling.
* p < .05 (asterisks on the ICCs indicate that the intercepts had significant variability rather than statistical significance tests computed on the ICCs themselves).

effects). Out of the eight HLM analyses, the only model that yielded significant results was that comparing Black women to White women. In this analysis, the number of applicants interviewed by each interviewer was negatively related to the race slope at L1 (γ11 = −.02, p < .05), indicating that conducting more interviews was associated with a smaller race difference. Second, we also examined whether the significant variance in the slopes for the four comparisons (1, 3, 4, and 5) might be attributable to differences between the seven job families studied. To examine this possibility, we formulated three-level HLMs to see if the applicant sex slope variability differed across the job families. We limited these analyses to the sex slope because theory suggests that this might vary as a function of the sex-type of the job, although we could think of no theory indicating why this same effect might vary as a function of race. These analyses, the details of which are available from Joshua M. Sacco on request, indicated that the sex slopes did not vary across job family.
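The reviewer-suggested overload analysis, in which each interviewer's interview count serves as the sole L2 predictor of the L1 intercept and race slope, follows the same cross-level pattern. A sketch with simulated data and hypothetical column names (no true overload effect is built in):

```python
# Interview count (an L2 covariate) predicting the L1 intercept and race
# slope; the app_race:n_interviews product term carries the cross-level
# effect. A negative coefficient would mirror the reported finding that
# more interviews went with a smaller race difference.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
counts = rng.integers(5, 40, 60)               # interviews per interviewer
interviewer = np.repeat(np.arange(60), counts)
df = pd.DataFrame({
    "interviewer": interviewer,
    "n_interviews": counts[interviewer],       # L2 covariate, repeated at L1
    "app_race": rng.integers(0, 2, counts.sum()),
})
df["rating"] = (6.1 + rng.normal(0, 0.5, 60)[interviewer]
                + rng.normal(0, 1.3, len(df)))

model = smf.mixedlm("rating ~ app_race * n_interviews", df,
                    groups=df["interviewer"], re_formula="~app_race")
result = model.fit(method="lbfgs")
print(result.params["app_race:n_interviews"])
```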

Tests of Between-Interviewer Effects
Most of the interviewers in our sample interviewed applicants from more than one of the subgroups studied here, making HLM a logical choice. Nonetheless, this approach still left a large sample of interviewers who had interviewed either White or Black interviewees but not both (only a very small sample of interviewers interviewed applicants of only one sex). Rather than discarding these data, we conducted more traditional analyses to see if we might find any evidence of racial similarity effects; however, the vast majority of these interviewers interviewed multiple applicants, thus violating the assumption of independent observations. To examine the extent to which dependencies might exist in these data, we conducted a one-way ANOVA with the interviewer as a between-subjects factor and interview rating as the dependent variable. Consistent with the analyses using HLM reported above, there were significant differences between interviewers, F(1, 3266) = 2.98, p < .01. The ICC for this analysis was .17, indicating nesting similar in magnitude to that described in Table 3. Because the sample was so large (n = 2,688 White applicants, n = 563 Black applicants, n = 264 White interviewers, n = 51 Black interviewers), rather than discarding these data we used a 2 × 2 (Applicant Race × Interviewer Race) ANOVA under the caution that the results would have to be highly statistically significant in light of the dependencies in order to conclude that there was a similarity effect in these data. The descriptive statistics indicated that Black interviewers assigned almost identical ratings to Black and White applicants (Black applicants: M = 6.04, SD = 1.39, n = 515; White applicants: M = 6.08, SD = 1.36, n = 48), whereas White interviewers assigned slightly higher ratings to White applicants than to Black applicants (Black applicants: M = 5.90, SD = 1.44, n = 97; White applicants: M = 6.11, SD = 1.26, n = 2,591). The results of this analysis indicated that there was not a statistically significant Race × Race interaction even at the traditional .05 level, F(1, 3247) = 0.526, p > .05. Because these analyses yielded no evidence of similarity effects, the exclusion of these cases from our HLM analyses likely had little or no impact on our ability to detect significant similarity effects.
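For comparison, the 2 × 2 between-interviewer test is an ordinary two-way ANOVA. A sketch with simulated data and hypothetical column names (note that, as the text cautions, this approach ignores the nesting of applicants within interviewers):

```python
# Two-way ANOVA: does the Applicant Race x Interviewer Race interaction
# predict ratings? Simulated data with no true similarity effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "app_race": rng.integers(0, 2, 600),
    "int_race": rng.integers(0, 2, 600),
})
df["rating"] = 6.0 + rng.normal(0, 1.3, 600)

fit = smf.ols("rating ~ C(app_race) * C(int_race)", df).fit()
table = anova_lm(fit)   # F test for the interaction term
print(table.loc["C(app_race):C(int_race)", ["F", "PR(>F)"]])
```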

D Scores
To compare the results described above with the commonly used D score approach, we calculated separate D scores for race and sex. Each of these variables was coded zero if the applicant and interviewer’s race (or sex) was different and was coded one if they were the same. Thus, the sample was split into two groups each for race and sex (same or different). Racial similarity as
indexed by the D score was positively related to interview ratings because the same-race group had a significantly higher mean rating (M = 6.19, SD = 1.26) than did the different-race group (M = 6.05, SD = 1.30), p < .05. Sex similarity, as indexed by the D scores, was also significantly related to ratings, although in the opposite direction (M = 6.09, SD = 1.28, and M = 6.20, SD = 1.28 for the similar and dissimilar groups, respectively), p < .05. A multiple regression analysis indicated that both indexes made unique significant contributions to mean interview ratings, R = .07 (βsex-D = −.04, βrace-D = .05, both parameters significant at p < .05). The implications of these results, as well as those described in the previous section, are discussed next.
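For dichotomous attributes, the D score reduces to a same/different indicator. A sketch of the disaggregated regression with simulated data and hypothetical column names:

```python
# D scores: 1 if interviewer and applicant match on the attribute, else 0,
# then a multiple regression of ratings on both similarity indexes at the
# applicant level (the disaggregated practice the paper argues against).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
df = pd.DataFrame({
    "app_race": rng.integers(0, 2, 500),
    "int_race": rng.integers(0, 2, 500),
    "app_sex": rng.integers(0, 2, 500),
    "int_sex": rng.integers(0, 2, 500),
})
df["rating"] = 6.1 + rng.normal(0, 1.3, 500)

df["race_d"] = (df["app_race"] == df["int_race"]).astype(int)
df["sex_d"] = (df["app_sex"] == df["int_sex"]).astype(int)

fit = smf.ols("rating ~ race_d + sex_d", df).fit()
print(fit.params[["race_d", "sex_d"]])
```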

Discussion
The HLM analyses yielded no support for either of our hypotheses. That is, there was no evidence that race or sex similarity played a significant role in determining the ratings assigned to any of our applicant groups. As the sample sizes in all of the analyses were quite large, the failure to find significant results is not a function of statistical power. The mean differences of ratings provided to applicants of different race and sex combinations (see Tables 1 and 2 and similar summaries in the text) all indicate that observed differences were very small (in most cases, less than .1 SD). None of the tests for similarity as an explanation of the variability in slopes at the last step of the HLM analyses were statistically significant. The finding that differences in slopes were not related to interviewers' demographic status (or their demographic similarity, more specifically) is practically encouraging insofar as the use of structured employment interviews is concerned. Our results suggest that organizations do not have to be concerned about matching interviewer and interviewee race and sex to avoid potentially biased ratings, at least when using carefully developed, job-analysis-based, highly structured interviews like the ones used here. This is the first study of which we are aware in which race similarity was investigated in one-on-one interviews, although studies of the role of similarity in performance and panel interview ratings have been conducted previously. In these earlier studies, there was also little evidence for large similarity effects. Other research summarized by Webber and Donahue (2001) reported an absence of diversity effects on group cohesion and performance, although we recognize that the nature of the relationships studied in the Webber and Donahue article is different from those addressed here.
Although our results were obtained in only one organization, the samples of interviewees were very large, the interviews were conducted at over 100 campuses by a large number of interviewers for a range of jobs, and several race and sex combinations were studied. These results are inconsistent with applied psychological research indicating that demographic similarity is associated with a range of important outcomes. Although much of the research underlying the similarity–attraction paradigm has been conducted in laboratory settings, the linkages between similarity and work outcomes derive from a wide range of applied settings, and thus we expected these results to generalize to the present context. Thus, the interesting issue is not "Why does demographic similarity impact outcomes at work?" but rather "What boundary conditions make demographic similarity more or less likely to impact work outcomes?" In the introduction to this article, we discussed research and theory indicating that the amount of exposure to another person was one such boundary condition in that less exposure should be associated with stronger demographic similarity effects. College recruiting interviews are probably among the shortest exposures in which ratings are made in a work-related context. Yet we found no evidence for similarity effects. Thus, we now turn our discussion to factors that might have mitigated these putative effects. First, there is an unmistakable emphasis on diversity in most large organizations (as there was in the organization that provided the sample studied here). One possible explanation is that interviewers have been so sensitized to the importance of demographic status that they consciously and effectively reduce reliance on these aspects of an applicant when making judgments about job-relevant capabilities. Second, it may be the case that similarity plays less of a role when the interview is highly structured, as it was here; interviewers had a prescribed set of questions and evaluation guidelines specifying effective and ineffective responses, and they had received training on making such distinctions. Furthermore, the recruiters went through a formal training program and were instructed to take notes and make judgments about candidate behaviors that were described during the interview. This process likely refocused the recruiters' attention on deeper structural aspects of behavior and away from surface features and similarities, such as race and sex. Both explanations are consistent with the cognitive and social categorization theory discussed in the introduction. That is, the highly structured nature of the interviews may have created a deeper understanding of interview responses and their relationships to the underlying skills and abilities required for successful job performance, drawing attention away from demographic similarity.

Directions for Future Research
On the basis of the rationale described above, future research efforts might evaluate the hypothesis that similarity effects would be found under conditions without interviewer training, when the interview is unstructured, or in organizations without such an explicit emphasis on the value of diversity. Reiterating our earlier suggestion that recruiting interviews yield comparatively little job-relevant behavior, we strongly suspect that less-structured recruiting interviews would be more susceptible to demographic similarity effects. An intriguing finding was that the applicant race and sex slopes varied significantly for a number of comparisons, although this variability was unrelated to the interviewers' race or sex. Future research should examine the nature of the correlates or determinants of this variability. This variability could be a function of the interviewer's background, personality, or the particular set of interviewees in our study. For the latter to be the case, potential interviewee characteristics that explain this variability would have to be related to race or sex. If such correlates could be identified, the important practical question would be the extent to which these characteristics are job relevant. As an anonymous reviewer mentioned, another avenue for future research would be work that incorporates discussions with applicants and interviewers. One possibility is that the interviewers differ in their adherence to the guidelines that were provided to structure the interview and that variability in this adherence, rather than demographics, explains
the variance in applicant demographic differences in interview ratings. This information could certainly be obtained from interviewees after the interviews (although their responses might be confounded with their performance on the interviews; Chan, Schmitt, DeShon, Clause, & Delbridge, 1997; Chan, Schmitt, Sacco, & DeShon, 1998) or from unobtrusive observers who make judgments about the interviewers' adherence to structure. In a similar vein, although we did not have perceived similarity data, a highly structured interview might impact perceived similarity in terms of job-related attributes rather than demographics. Accordingly, future research might attempt to disentangle perceived similarity on these two dimensions and relate them to interview outcomes. In addition, research on attributes of the interviewers may be helpful. For instance, studies incorporating the perspective of modern racism (McConahay, 1983) or ambivalent sexism (P. Glick & Fiske, 1996) may be particularly fruitful. More generally, these investigations are likely to be most productive if we have theoretical or practical reasons to predict correlates of the variability in the extent to which interviewers' ratings are related to demographic status. Although not related to the specific research questions studied here, future research examining interviewer- or supervisor-level predictors of validity could also be informative. In particular, the HLM analyses described here could be adapted to examine characteristics of supervisors (such as their familiarity with the job or characteristics that might make them more motivating to their employees) that are associated with a stronger relationship between test scores and job performance (i.e., validity). HLM would be particularly well suited to studies such as this because supervisors often rate multiple employees whom they directly or indirectly supervise.
Of particular relevance would be supervisor characteristics that could be enhanced via interventions or training. This, in turn, could lead to validation studies that are more exact in their assessment of validity coefficients. More broadly, this discussion highlights a key limitation of both the research reported here and many of the demographic similarity studies reviewed in the introduction. Very few studies measure the variables that supposedly connect demographic diversity to important work outcomes (cf. Harrison et al., 1998). To the extent that this is true, it becomes difficult to explain findings such as ours that clearly do not fit within the broader body of research. Accordingly, future research that specifically targets hypothesized intervening variables would be particularly informative. Given the voluminous research literature supporting the similarity–attraction paradigm, we think that investigating the linkage between attraction and work outcomes would be more fruitful than that between similarity and attraction.

Methodological and Theoretical Implications Based on a Multilevel Perspective
We believe that the results reported here have important methodological and theoretical implications for several areas of research. The ANOVA conducted on the data presented in Table 2 indicated the presence of a significant Applicant Sex × Interviewer Sex interaction effect. Similarly, the much criticized yet often used D score approach also indicated that race and sex similarity was related to interview ratings, albeit weakly and in a different direction for race versus sex. In contrast, the more appropriate use of
HLM given the nested data structure yielded no evidence of any such effects. This leads us to wonder about the extent to which research that has analyzed demographic similarity at the individual level might have concluded that significant results existed when there was, in fact, inadequate evidence that the null hypothesis should have been rejected. As mentioned earlier, this is likely to happen when there is a significant nesting effect in one’s data. Given that much of the similarity research emphasizes group or team-level effects, we think that significant nesting effects are very likely to be present in most studies and that analyses such as ours might yield different results. The different results across the three analytic approaches are especially important given that all three examine the same conceptual hypothesis (i.e., whether interviewer–applicant similarity is related to ratings). In accordance, we strongly urge researchers to use appropriate analytic techniques when examining the nested data structures that frequently occur in relational demography studies. The approach chosen should be a function of the statistical issues described earlier in this article as well as an assessment of whether the hypotheses can be adequately tested given a particular analytic technique. Integral in this assessment are levels of analysis issues, including the level at which the predictors and outcomes supposedly occur and whether aggregation is empirically and theoretically justified (Chan, 1998). The HLM approach used here holds promise for how the results of fit studies might be analyzed. In particular, HLM would seem to be a natural choice when examining the effects of the congruence between an individual-level predictor and a group-level construct or diversity effects on individual level outcomes. We see three broad types of situations like these and discuss each in turn. 
The first scenario involves research examining the impact of demographic misfit where the demographic variables are categorical. For instance, Sacco and Schmitt (2003) used the proportions of various racial groups within quick-service restaurants as demographic composition variables at L2 with the expectation that they would moderate the relationship between individual-level race and turnover risk at L1. That is, a cross-level interaction effect was modeled to test the hypothesis that the fit between an individual's race and the racial composition of the restaurant predicts individual-level turnover risk. Second, researchers might be interested in fit with regard to a continuous demographic variable such as age. In situations such as these it might be important to consider not only the mean age, but also the dispersion within each L2 unit, because theory suggests that fit effects should be stronger in groups that are more homogeneous. This situation could be handled in HLM by using the interaction between the group's mean and variance at L2 as a predictor of the L1 relationship or intercept (for an example of an interaction between L2 variables used to predict an L1 outcome, see Raudenbush & Bryk, 2002, p. 126). For instance, if the individual-level model examined the relationship between age and affect, the significance and form of the three-way interaction effect (i.e., Group Mean Age × Group Age Variance × Individual Age) would then indicate whether affect is predicted by the fit between an individual's age and the average age of the group, depending on the age variability within the group. Depending on the exact nature of the hypotheses, polynomial terms at the individual level might be required, with the Average Age × Age Variance interaction term predicting the form of a polynomial age coefficient. Although predicting the form of L1 polynomial terms is routinely done within the HLM framework when evaluating growth curve models (e.g., Raudenbush, 2002), using an L2 interaction to predict polynomial coefficients might present challenges in practice (e.g., multicollinearity). Thus, future research on this approach is encouraged. On the other hand, if the mean level of affect within a group is the L1 outcome of interest, a simpler intercepts-as-outcomes model could be used with the same L2 predictors mentioned above. More broadly, using the mean and variance at L2 also seems reasonable for other types of continuous variables, although the particular compositional form of the higher level variable can complicate the matter. In particular, when there is within-group agreement, the notion of fit is less useful because everyone exhibits a relatively high degree of within-group fit. In contrast, situations where agreement is not necessary might be more amenable to the approach described immediately above, although these types of studies may be the exception rather than the rule.

The third setting includes research in which attitudinal, perceptual, or affective diversity is the predictor of interest. For instance, Barsade, Ward, Turner, and Sonnenfeld (2000) used D scores to study the relationship between affective diversity within top management teams and individual-level attitudes. In this type of situation we see diversity as conceptually analogous to a high degree of supplemental misfit at the team level. That is, diverse teams have individuals who are less similar to each other and thus exhibit a lower overall level of supplemental fit as compared with homogeneous teams. Situations such as these can be handled using affective diversity as the sole L2 predictor in an intercepts-as-outcomes HLM. Diversity could be operationalized in several different ways depending on the situation, including the standard deviation or the coefficient of variation (e.g., Bedeian & Mossholder, 2000).

In our view, the HLM approaches described above have the potential to provide more accurate answers to a range of research questions surrounding the notion of similarity or fit. To us it appears that analytic techniques such as HLM are more consistent with existing theory about the nature and putative effects of similarity or fit in these situations. Aligning analytic techniques with theory has been strongly advocated by levels-of-analysis theorists, and we believe that the areas of research mentioned above could potentially benefit from these approaches. Further, modeling these phenomena from a multilevel perspective explicitly acknowledges and takes into account the multilevel nature of these domains. This is a critical step in developing and advancing robust theoretical models of behavior in organizations (Rousseau, 1985).

References

Ancona, D., & Caldwell, D. (1992). Demography and design: Predictors of new product team performance. Organization Science, 3, 321–341.
Atwater, L. E., Ostroff, C., Yammarino, F. J., & Fleenor, J. W. (1998). Self–other agreement: Does it really matter? Personnel Psychology, 51, 577–598.
Barcikowski, R. S. (1981). Statistical power with group mean as the unit of analysis. Journal of Educational Statistics, 6, 267–285.
Barsade, S. G., Ward, A. J., Turner, J. D. F., & Sonnenfeld, J. A. (2000). To your heart's content: A model of affective diversity in top management teams. Administrative Science Quarterly, 45, 802–836.
Barsalou, L. W. (1982). Context-independent and context-dependent information in concepts. Memory & Cognition, 10, 82–93.
Bedeian, A. G., & Mossholder, K. W. (2000). On the use of the coefficient of variation as a measure of diversity. Organizational Research Methods, 3, 285–297.
Berscheid, E., & Walster, E. (1969). Interpersonal attraction. Reading, MA: Addison-Wesley.
Bliese, P. D. (2000). Within-group agreement, non-independence, and reliability: Implications for data aggregation and analysis. In K. J. Klein & S. W. J. Kozlowski (Eds.), Multilevel theory, research, and methods in organizations: Foundations, extensions, and new directions (pp. 349–381). San Francisco: Jossey-Bass.
Bliese, P. D. (2002). Multilevel random coefficient modeling in organizational research: Examples using SAS and S-PLUS. In F. Drasgow & N. Schmitt (Eds.), Measuring and analyzing behavior in organizations: Advances in measurement and data analysis (pp. 401–445). San Francisco: Jossey-Bass.
Bryk, A., Raudenbush, S., & Congdon, R. (1996). Hierarchical linear and nonlinear modeling with the HLM/2L and HLM/3L programs. Chicago: SSI.
Byrne, D. (1971). The attraction paradigm. New York: Academic Press.
Chan, D. (1998). Functional relations among constructs in the same content domain at different levels of analysis: A typology of composition models. Journal of Applied Psychology, 83, 234–246.
Chan, D., Schmitt, N., DeShon, R. P., Clause, C. S., & Delbridge, K. (1997). Reactions to cognitive ability tests: The relationships between race, test performance, face validity perceptions, and test-taking motivation. Journal of Applied Psychology, 82, 300–310.
Chan, D., Schmitt, N., Sacco, J. M., & DeShon, R. P. (1998). Understanding pretest and posttest reactions to cognitive ability and personality tests: Performance–reactions relationships and their structural invariance across racial groups. Journal of Applied Psychology, 83, 471–485.
Chatman, J. A., & Flynn, F. J. (2001). The influence of demographic heterogeneity on the emergence and consequences of cooperative norms in work teams. Academy of Management Journal, 44, 956–974.
Chi, M. T. H., Feltovich, P., & Glaser, R. (1981). Categorization and representation of physics problems by experts and novices. Cognitive Science, 5, 121–152.
DeShon, R. P., Ployhart, R. E., & Sacco, J. M. (1998). The estimation of reliability in longitudinal models. International Journal of Behavioral Development, 22, 493–515.
Edwards, J. R. (1994). The study of congruence in organizational behavior research: Critique and a proposed alternative. Organizational Behavior and Human Decision Processes, 58, 51–100.
Edwards, J. R. (1995). Alternatives to difference scores as dependent variables in the study of congruence in organizational research. Organizational Behavior and Human Decision Processes, 64, 307–324.
Edwards, J. R. (2002). Alternatives to difference scores: Polynomial regression analysis and response surface methodology. In F. Drasgow & N. Schmitt (Eds.), Measuring and analyzing behavior in organizations: Advances in measurement and data analysis (pp. 350–400). San Francisco: Jossey-Bass.
Ensher, E. A., & Murphy, S. E. (1997). Effects of race, gender, perceived similarity, and contact on mentor relationships. Journal of Vocational Behavior, 50, 460–481.
Gilbert, J. A., & Shultz, K. S. (1998). Multilevel modeling in industrial and personnel psychology. Current Psychology: Developmental, Learning, Personality, Social, 17, 287–300.
Glick, P., & Fiske, S. T. (1996). The Ambivalent Sexism Inventory: Differentiating hostile and benevolent sexism. Journal of Personality and Social Psychology, 70, 491–512.
Glick, W. H., & Roberts, K. H. (1984). Hypothesized interdependence, assumed independence. Academy of Management Review, 9, 722–735.
Graves, L. M., & Powell, G. N. (1995). The effect of sex similarity on recruiters' evaluations of actual applicants: A test of the similarity–attraction paradigm. Personnel Psychology, 48, 85–98.
Graves, L. M., & Powell, G. N. (1996). Sex similarity, quality of the employment interview and recruiters' evaluation of actual applicants. Journal of Occupational and Organizational Psychology, 69, 243–261.
Green, S. G., Anderson, S. E., & Shivers, S. L. (1996). Demographic and organizational influences on leader–member exchange and related work attitudes. Organizational Behavior and Human Decision Processes, 66, 203–214.
Hannan, M. T. (1971). Aggregation and disaggregation in sociology. Lexington, MA: Heath.
Hannan, M. T. (1990). Aggregation and disaggregation in the social sciences. Lexington, MA: Lexington Books.
Harrison, D. A., Price, K. H., & Bell, M. P. (1998). Beyond relational demography: Time and the effects of surface- and deep-level diversity on work group cohesion. Academy of Management Journal, 41, 96–107.
Hofmann, D. A. (1997). An overview of the logic and rationale of hierarchical linear models. Journal of Management, 23, 723–744.
Hofmann, D. A., & Gavin, M. B. (1998). Centering decisions in hierarchical linear models: Implications for research in organizations. Journal of Management, 24, 623–641.
Hox, J. J. (1994). Hierarchical regression models for interviewer and respondent effects. Sociological Methods and Research, 22, 300–318.
Hox, J. J. (2002). Multilevel analysis. Mahwah, NJ: Erlbaum.
Jackson, S. E., Brett, J. F., Sessa, V. I., Cooper, D. M., Julin, J. A., & Peyronnin, K. (1991). Some differences make a difference: Individual dissimilarity and group heterogeneity as correlates of recruitment, promotions, and turnover. Journal of Applied Psychology, 76, 675–689.
Johns, G. (1981). Difference score measures of organizational behavior variables: A critique. Organizational Behavior and Human Decision Processes, 27, 443–463.
Kennedy, P. (1998). A guide to econometrics. Boston, MA: MIT Press.
Kenny, D. A., & Judd, C. M. (1986). Consequences of violating the independence assumption in analysis of variance. Psychological Bulletin, 99, 422–431.
Kidwell, R. E., Jr., Mossholder, K. W., & Bennett, N. (1997). Cohesiveness and organizational citizenship behavior: A multilevel analysis using work groups and individuals. Journal of Management, 23, 775–793.
Kraiger, K., & Ford, J. K. (1985). A meta-analysis of ratee race effects in performance ratings. Journal of Applied Psychology, 70, 56–65.
Kreft, I., & de Leeuw, J. (1998). Introducing multilevel modeling. Thousand Oaks, CA: Sage.
Kreft, I., de Leeuw, J., & Aiken, L. S. (1995). The effect of different forms of centering in hierarchical linear models. Multivariate Behavioral Research, 30, 1–21.
Kristof, A. L. (1996). Person–organization fit: An integrative review of its conceptualizations, measurement, and implications. Personnel Psychology, 49, 1–49.
Langbein, L. I., & Lichtman, A. J. (1978). Ecological inference. Beverly Hills, CA: Sage.
Lin, T. R., Dobbins, G. H., & Farh, J. L. (1992). A field study of race and age similarity effects on interview ratings in conventional and situational interviews. Journal of Applied Psychology, 77, 363–371.
McConahay, J. B. (1983). Modern racism and modern discrimination: The effects of race, racial attitudes, and context on simulated hiring decisions. Personality and Social Psychology Bulletin, 9, 551–558.
McFarland, L. A., Sacco, J. M., Ryan, A. M., & Kriska, S. D. (2000). Racial similarity and composition effects on structured panel interview ratings. Poster presented at the 15th Annual Conference of the Society for Industrial and Organizational Psychology, New Orleans, LA.
McPherson, M., Smith-Lovin, L., & Cook, J. M. (2001). Birds of a feather: Homophily in social networks. Annual Review of Sociology, 27, 415–444.


Medin, D. L., Goldstone, R. L., & Gentner, D. (1993). Respects for similarity. Psychological Review, 100, 254–278.
Mount, M. K., Judge, T. A., Scullen, S. E., Sytsma, M. R., & Hezlett, S. A. (1998). Trait, rater and level effects in 360-degree performance ratings. Personnel Psychology, 51, 557–576.
Mount, M. K., Sytsma, M. R., Hazucha, J. F., & Holt, K. E. (1997). Rater–ratee race effects in developmental performance ratings of managers. Personnel Psychology, 50, 51–69.
Muchinsky, P. M., & Monahan, C. J. (1987). What is person–environment congruence? Supplementary versus complementary models of fit. Journal of Vocational Behavior, 31, 268–277.
National Research Council. (1999). The changing nature of work. Washington, DC: National Academy Press.
Newcomb, T. M. (1956). The prediction of interpersonal attraction. American Psychologist, 11, 575–586.
Nezlek, J. B., & Zyzniewski, L. E. (1998). Using hierarchical linear modeling to analyze grouped data. Group Dynamics, 2, 313–320.
Pollack, B. N. (1998). Hierarchical linear modeling and the "unit of analysis" problem: A solution for analyzing responses of intact group members. Group Dynamics, 2, 299–312.
Prewett-Livingston, A. J., Feild, H. S., Veres, J. G., III, & Lewis, P. M. (1996). Effects of race on interview ratings in a situational panel interview. Journal of Applied Psychology, 81, 178–186.
Pulakos, E. D., White, L. A., Oppler, S. H., & Borman, W. C. (1989). Examination of race and sex effects on performance ratings. Journal of Applied Psychology, 74, 770–780.
Raudenbush, S. W. (2002). Alternative covariance structures for polynomial models of individual growth and change. In D. S. Moskowitz & L. S. Hershberger (Eds.), Modeling intraindividual variability with repeated measures data: Methods and applications. Multivariate applications book series (pp. 25–57). Mahwah, NJ: Erlbaum.
Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (2nd ed.). Thousand Oaks, CA: Sage.
Reskin, B. F., McBrier, D. B., & Kmec, J. A. (1999). The determinants and consequences of workplace sex and race composition. Annual Review of Sociology, 25, 335–361.
Riordan, C. M. (2000). Relational demography within groups: Past developments, contradictions, and new directions. In G. R. Ferris (Ed.), Research in personnel and human resources management (Vol. 19, pp. 131–174). New York: JAI Press.
Riordan, C. M., & Shore, L. M. (1997). Demographic diversity and employee attitudes: An empirical examination of relational demography within work units. Journal of Applied Psychology, 82, 342–358.
Roberts, K. H., Hulin, C. L., & Rousseau, D. M. (1978). Developing an interdisciplinary science of organizations. San Francisco: Jossey-Bass.
Roth, E. M., & Shoben, E. J. (1983). The effect of context on the structure of categories. Cognitive Psychology, 15, 346–378.
Rousseau, D. M. (1985). Issues of level in organizational research: Multilevel and cross-level perspectives. In L. L. Cummings & B. M. Staw (Eds.), Research in organizational behavior (Vol. 7, pp. 1–37). Greenwich, CT: JAI Press.
Sacco, J. M., & Schmitt, N. (2003). A dynamic multilevel model of demographic diversity and misfit effects. Manuscript submitted for publication.
Sackett, P. R., & DuBois, C. L. (1991). Rater–ratee race effects on performance evaluation: Challenging meta-analytic conclusions. Journal of Applied Psychology, 76, 873–877.
Schneider, B. (1987). The people make the place. Personnel Psychology, 40, 437–453.
Shaw, J. B., & Barrett-Power, E. (1998). The effects of diversity on small work group processes and performance. Human Relations, 51, 1307–1325.

Snijders, T. A. B., & Bosker, R. J. (1999). Multilevel analysis: An introduction to basic and advanced multilevel modeling. London: Sage.
Tajfel, H. (1981). Human groups and social categories: Studies in social psychology. Cambridge: Cambridge University Press.
Tajfel, H., & Turner, J. (1986). The social identity theory of intergroup behavior. In S. Worchel & W. Austin (Eds.), Psychology and intergroup relations (pp. 7–24). Chicago: Nelson-Hall.
Timmerman, T. A. (2000). Racial diversity, age diversity, interdependence, and team performance. Small Group Research, 31, 592–606.
Tsui, A. S., Egan, T. D., & O'Reilly, C. A. (1992). Being different: Relational demography and organizational attachment. Administrative Science Quarterly, 37, 549–579.
Tsui, A. S., & O'Reilly, C. A. (1989). Beyond simple demographic effects: The importance of relational demography in superior–subordinate dyads. Academy of Management Journal, 32, 402–423.
Turban, D. B., & Jones, A. P. (1988). Supervisor–subordinate similarity: Types, effects, and mechanisms. Journal of Applied Psychology, 73, 228–234.
Turner, J. (1987). Rediscovering the social group: A self-categorization theory. Oxford, United Kingdom: Blackwell.
Tuzinski, K. A., & Ones, D. S. (1999). Rater–ratee race effects on performance ratings for understudied ethnic groups. Poster presented at the 14th Annual Conference of the Society for Industrial and Organizational Psychology, Atlanta, GA.
Tversky, A. (1977). Features of similarity. Psychological Review, 84, 327–352.
Vancouver, J. B., Millsap, R. E., & Peters, P. A. (1994). Multilevel analysis of organizational goal congruence. Journal of Applied Psychology, 79, 666–679.
Vecchio, R. P., & Bullis, R. C. (2001). Moderators of the influence of supervisor–subordinate similarity on subordinate outcomes. Journal of Applied Psychology, 86, 884–896.
Wagner, W. G., Pfeffer, J., & O'Reilly, C. A. (1984). Organizational demography and turnover in top-management groups. Administrative Science Quarterly, 29, 74–92.
Waldman, D. A., & Avolio, B. J. (1991). Race effects in performance evaluations: Controlling for ability, education, and experience. Journal of Applied Psychology, 76, 897–901.
Watson, W. E., Kumar, K., & Michaelsen, L. K. (1993). Cultural diversity's impact on interaction process and performance: Comparing homogeneous and diverse task groups. Academy of Management Journal, 36, 590–602.
Webber, S. S., & Donahue, L. M. (2001). Impact of highly and less job-related diversity on work group cohesion and performance: A meta-analysis. Journal of Management, 27, 141–162.
Wesolowski, M. A., & Mossholder, K. W. (1997). Relational demography in supervisor–subordinate dyads: Impact on subordinate job satisfaction, burnout, and perceived procedural justice. Journal of Organizational Behavior, 18, 351–362.
Wharton, A. S., Rotolo, T., & Bird, S. R. (2000). Social context at work: A multilevel analysis of job satisfaction. Sociological Forum, 15, 65–90.
Whitener, E. (2001). Do "high commitment" human resource practices affect employee commitment? A cross-level analysis using hierarchical linear modeling. Journal of Management, 27, 515–525.
Williams, K. Y., & O'Reilly, C. A. (1998). Demography and diversity in organizations: A review of 40 years of research. Research in Organizational Behavior, 20, 77–140.
Zenger, T. R., & Lawrence, B. S. (1989). Organizational demography: The differential effects of age and tenure distributions on technical communication. Academy of Management Journal, 32, 353–376.

Appendix Sample Interview Question, Probes, and Partial List of Behaviorally Anchored Rating Scales
Competency: Systemic thinking
Lead question: Describe the last problem you solved which required careful analysis of complex information.
Probes: What were the major aspects of the problem? How did you integrate the information? How was the problem solved? What were the implications of your solutions?
Rating scale: 1–4 Ineffective; 5–7 Effective; 8–9 Highly effective

Ineffective (1–4):
• Did not approach problem/situation in a structured way
• Did not consider priorities and planning when dealing with interruptions
• Failed to consider alternatives
• Did not recognize interdependence in tasks
• Did not describe a stretch objective
• Explained examples in a confusing and disorganized manner

Effective (5–7):
• Used a structured approach for important tasks
• Established priorities and followed through
• Considered one or two alternatives before deciding on a course of action
• Recognized interdependence in completing tasks
• Accepted stretch objectives
• Explained examples in a clear manner

Highly effective (8–9):
• Effectively identified key issues and priorities
• Established and worked to complete important tasks
• Maintained flexibility in solving problems
• Recognized and acted on interdependence in completing tasks
• Put in a considerable amount of extra work to ensure that stretch objectives were achieved
• Explained examples in a clear, concise, and organized manner

Competency: Teamwork
Lead question: Describe a time when you were working on a group project where problems arose within the team that limited the team's ability to complete the project.
Probes: What was/were the situation(s)? What was the problem? Describe, in detail, the steps you took to resolve the problem(s). What steps did others take? Was/were the problem(s) resolved to the satisfaction of everyone?
Rating scale: 1–4 Ineffective; 5–7 Effective; 8–9 Highly effective

Ineffective (1–4):
• Unable to describe the purpose of the team and own role on it
• Did not display awareness of others' viewpoints
• Emphasized differences and criticized others
• Took all the credit for successes of the group
• Changed mind in face of opposition
• Ignored disagreements in the group

Effective (5–7):
• Described the purpose of the team and own role
• Related effectively to people of differing backgrounds and interests
• Sometimes used others as problem-solving resources
• Shared credit for group tasks
• Mentioned an awareness of others' viewpoints
• Highlighted and summarized areas of agreement with others

Highly effective (8–9):
• Described how role on team contributed to task accomplishments
• Worked very effectively with people of differing backgrounds and interests
• Viewed others as valuable problem-solving resources and leveraged their abilities to achieve key objectives
• Rewarded others when their efforts made substantial contributions to the group task
• Consistently demonstrated an awareness of others' viewpoints and feelings and modified own position when appropriate
• Raised and discussed difficult issues/disagreements to find common ground

Received July 11, 2002
Revision received February 11, 2003
Accepted March 24, 2003

Similar Documents

Free Essay

African American Achievement Gap

...Race and Poverty: Factors of the African American Achievement Gap Abstract The proposed action research study will pinpoint factors that contribute to the African American academic achievement gap. These factors impact not only the lives of families in the African American community but continues a vicious cycle of generations of poverty that hinders our country’s ability to effectively compete economically and also threatens America’s capacity to provide social equality for all. The participants in this study will comprise of parents and students of highly concentrated poverty - low academically performing African American public schools. Thirty two parents and thirty two students from eight low performing-poverty schools in the research study will be interviewed and surveyed online. Collected information and data will be researched employing qualitative and quantitative practices. Introduction There was a time when children of color were denied the hope and expectation of equal education because of racial isolation and discrimination in America’s education system. Although it’s been well over 50 years since Brown –vs.- The Board of Education which established equal education for all, today we are still faced with large racial disparities in reading and math proficiency between African American children and their thriving white contemporaries. This purpose of this study is to illustrate the connection that occurs between race and poverty with the academic...

Words: 3689 - Pages: 15

Premium Essay

Disparity In Special Education Essay

...demographic disparity between K-12 students and the teaching force not only harms the concept of equity , but that it also causes damaging effects on students’ achievement, particularly students of color. A growing concern about the demographic...

Words: 4259 - Pages: 18

Free Essay

Business

...theory of symbolic interactionism but “labeling” is a key concept that is very relevant when it comes to the study of race. Not a lot of television shows deal with racism and the effect that it has on its victims, but on February 4,2005 a Disney show that was reaching out more the black audience took a stand. “That’s So Raven” was a very popular show in the early 2000’s and for black history they decided to do a show on racism , Raven ( a black girl )and her friend Chelsea (a white girl) applied for a job at the same place. At the interview they had to perform jobs such as folding clothes, and creating displays for the store. Chelsea did really poorly while Raven ends up excelling. During the interview the manager’s gestures towards Raven were very rude and obnoxious and it was clear that she wasn’t interested in anything thing that Raven was doing. At the end of the interview Raven was very confident that she got the job but the manger ended up hiring Chelsea. Raven who was very hurt and confused wanted to know why she wasn’t hired when she knew just as well as the manager that she did a better job and was much more qualified for the job so she decided to investigate. It was later discovered though a hidden camera that the manager said that she didn’t hire black people. Even though the show stopped playing a little after that it still sent out a message to all races and ethnic groups that’s it was not to discriminate and to be racist. After that episode of” That’s So Raven” the...

Words: 495 - Pages: 2

Free Essay

“the Ebb and Flow of Favour”: Narrative Structure in Dionne Brand’s “Job”

...based on her race is the fact the story hinges upon; that she is willing to be exploited based on her gender is the essay’s central irony. Brand offers a narrative structure that allows the reader to empathize with the speaker—to experience an emotional response that reflects that of the speaker. She accomplishes this response by withholding information until a crucial moment, by varying sentence length and control to reflect emotions, and by repeating certain images throughout the essay. [Thesis statement] Brand opens her essay by outlining the series of events that lead her to seek employment at an office on Keele Street in Toronto. She recounts how she secures—by telephone—an interview for the following day; she then recounts her careful preparations for the interview and her arrival at the office on the day of the interview. Suddenly—and apparently inexplicably—she is told that the job no longer exists. Just as it dawns on the speaker that the reason she is unacceptable for the position is her race, it also dawns on the reader. Brand, with careful rhetorical manipulation of structure, mimics the speaker’s epiphany in the reader by withholding the information that the speaker is black. [Topic sentence] Indeed, the first mention of the speaker’s race comes after her rejection as she makes her escape and laughs “that laughter that Black people get, derisive and self-derisive” (74). Before the non-interview, the speaker sees herself as neutral in terms of race (interestingly...

Words: 1250 - Pages: 5

Premium Essay

Eyewitness Evidence Analysis

...In 1985, the first study was done to evaluate the effects of the police interview, cognitive interview and hypnosis on a witness’ testimony (Geiselman, Fisher, MacKinnon, & Holland, 1985). The scholars discovered that the cognitive interview was the best technique at recovering accurate information (Geiselman, Fisher, MacKinnon, & Holland, 1985). However, like the other interviewing techniques it does not decrease inaccurate information from the witness. (Geiselman, Fisher, MacKinnon, & Holland, 1985). Another system variable involving the police would be the post identification feedback on the witness; it amplifies their confidence – which is not highly correlated to accuracy, and can lead to serious consequences (Bradfield, Wells & Olson, 2002). This helped give the criminal justice system the ability to assess their protocols to get rid of some system variables that effects eyewitness testimony...

Words: 1794 - Pages: 8

Free Essay

Effect of Media During Elections

...Introduction The purpose of this interview was to examine the role of media in the electoral process, or during elections to be precise. The activities of the media are various, so this perhaps was just by noting and classifying some of the things the media do in elections. The interview was quite entertaining and there were no conflicting views or controversial opinions. The interview was conducted via cell phone while the interviewee was in the comfort of her home. This set a casual and comfortable tone for the interview. The topic for the essay which stood out after the interview was “ELECTIONS AND THE NEWS MEDIA.” Below is a summary of the interview: What is Media? Most journalists define media as a channel of communication through which news, entertainment, education, data or promotional messages are disseminated. Broadcasting, newspapers, magazines, TV, radio, billboards, telephone, fax and internet are all considered media. Who are the News Media? News media are those elements of the mass media. Generally focus on delivering news to the public or target public. Print media such as newspapers and news magazines are part of news media. Broadcast news such as radio and television, Internet such as online newspapers and news blogs are all news media. So what is a newsreel? (follow-up question) Newsreel was a documentary film common in the first half of the 20th Century. This released a public presentation place containing filmed news stories on a regular basis...

Words: 1410 - Pages: 6

Premium Essay

Ethical Issue and Management Paper

...company and federal law when it comes to the hiring of employees and are sometimes bias with certain applicant. Part of my job as a manger is to recruit and interview potential candidates for available potions in my company, and by doing this, the first thing I have to do is to screen the applications to see if the applicant meets qualification for the open positions, and if they do the next step is to begin the selection process. There are certain questions an application that refer to age, sex, race and citizenship status that is required as part of the verification process. The application process also calls for prior employment history, as well as the reason for leaving or wanting to leave your current or prior job. Any of the information that the candidate provides is to be used as a lead to gather background information and check references. As a manager I have an ethical responsibility to hiring for the needs of the business as well as hiring the right person for the job. In today’s job market, we’re hiring people with diverse backgrounds to meet the needs of the public, sometimes hiring managers are faced with moral and ethical issues facing discrimination based on race, color, religion, sex and national origin. As a manger I may be face with an ethical dilemma to hire one candidate over another because of race. For example, a manger is hiring for a team leader position and there are two candidates; an African American and a Caucasian American, after interviewing both...

Words: 973 - Pages: 4

Free Essay

Discrimination

...of age; almost as old as the discrimination against women is slavery. Ancient civilizations were built on the hard work of slaves, who were treated no better than animals or machines. Slavery is nothing but a form of discrimination based on race. According to the Oxford dictionary, discrimination is the unjust or prejudicial treatment of different categories of people, especially on the grounds of race, age, or sex. The Canadian Human Rights Act defines discrimination as an action or a decision that treats a person or a group negatively for any of 11 reasons: race, national or ethnic origin, colour, religion, age, sex, sexual orientation, marital status, family status, disability, and a conviction for which a pardon has been granted or a record suspended. These reasons are known as grounds of discrimination. Example in the hiring process In the recruitment and selection processes of workplaces, there exist two types of discrimination: direct and indirect. For example, a qualified female job applicant might be rejected and a less qualified male candidate selected because the employer or existing workers prefer to work with males. Moreover, in an interview, the employer might ask a particular question, such as whether the applicant has or plans to have children in the near future, only to...

Words: 1441 - Pages: 6

Free Essay

Race and Your Community

...Race and Your Community Robert Dillman ETH/125 March 8th, 2012 Sharon D. White, Ph.D. Race and Your Community There is very little racial diversity in my community. In this paper I will look at the demographics of not just the city itself but the county in which I reside, as well as the businesses that are part of the community and the different races represented. I will explore my own personal experiences and a hate crime that has happened in this community. I will also include an interview with a member involved in the community itself. I see our community as a close one. With little conflict between ethnic groups, there seems to be great social cohesion among the residents of the community. The minorities who do reside here seem to blend into the community and feel little effect from racism or discrimination. My community is very small compared to many cities that surround us within a 250-mile radius. How secluded are we from big city life? Our city has a population of only 31,894 (2010 US Census Bureau, Jan. 2012), and the county has 39,265 (2010 US Census Bureau, Jan. 2012). I am including Nez Perce County because it plays a big part in our community. Our community has very little racial diversity. Most of my community is made up of the same ethnicity as me, comprised mostly of Whites (90.1% of the population). Among the other races that inhabit Nez Perce County are Native Americans (5.6%), Hispanics...

Words: 1968 - Pages: 8

Premium Essay

Tesco Policy Act

...Introduction In this assignment I will be taking the role of an HR manager, and I will be explaining the different aspects that a selection panel has to consider when recruiting employees. In addition, there is a lot of legislation that needs to be considered when handling different aspects of the selection process, and I will be covering some of it. The business I will be talking about in this assignment is Tesco. Job Advert A job advert is the first part of the selection process; it is one of the most important parts, because without it a business will not get any applicants. Job adverts help a business make people aware that it has a job vacancy, and because a job advert contains information about the vacancy, it will attract the right people. Furthermore, when a business creates job adverts it must try to consider all legislation and ethical issues, because if it does not, the business can be sued and may have to pay a lot of money. National Minimum Wage in the UK The national minimum wage is the wage that employees must receive; the figures are set by the government, and the rates vary depending on the age of the employee. For a business like Tesco it is important to pay wages that match the national minimum wage, so it should familiarize itself with the rates before creating a job advert, because the rates change regularly...

Words: 1246 - Pages: 5

Free Essay

Racial Disparity in Prisons

...a strong effect on many realms of society, such as family life and employment. Education and race seem to be the most decisive factors in who goes to jail and which age cohort has the greatest chance of incarceration. Going to prison no longer affects just the individual who committed the crime; instead, the family and community left behind carry a new burden from one individual's actions. The United States still has a large disparity between Whites and Blacks, and now a growing Hispanic population. This racial disparity in the educational system, the job sector, and neighborhoods has contributed to the booming prison population in the latter part of the 20th century, and it has only continued to widen in the 21st century. At the end of 2006, the Bureau of Justice released data showing that there were 3,042 black male prisoners per 100,000 black males in the United States, compared to 1,261 Hispanic male prisoners per 100,000 Hispanic males and 487 white male prisoners per 100,000 white males (USDOJ, 2008). The likelihood of black males going to prison in their lifetime is 16%, compared to 2% for white males and 9% for Hispanic males (USDOJ, 2008). Other social factors can be linked to the racial inequality in the criminal justice system, such as socioeconomic status, the environment in which a person was raised, and the highest educational level a person achieves. Some have argued that the race a person is born into has a substantial effect on the...

Words: 1466 - Pages: 6

Premium Essay

Annotated Bib

...Annotated Bibliography Finkel, E. (2010, November 1). Black Children Still Left Behind. DistrictAdministration.com. Retrieved October 22, 2012, from http://www.districtadministration.com/article/black-children-still-left-behind In this article, Ed Finkel discusses the effect No Child Left Behind had on minority students, in particular African-American students. Finkel uses data from the National Assessment of Educational Progress to support his claim that African-American students have been negatively impacted by the No Child Left Behind Act, passed almost a decade prior. Finkel interviews several sources who work for or with educational institutions to draw on their expertise in the matter. The information provided in this article gives a clear stance on African-American education and the effect No Child Left Behind has had on it. It also provides specific examples of how detrimental the Act has been. Ed Finkel has been a writer for over twenty years. He writes mainly about public policy, with a special emphasis on education. Finkel worked as a writer for Chicago Lawyer Magazine, and he also writes for DistrictAdministration.com, a website dedicated to school district management. Finkel's writing is clear and concise, and he makes claims only with supporting evidence. The information will be added to my paper to attest that African-American children score lower on standardized tests, graduate high school at lower rates, and are considerably more likely...

Words: 2022 - Pages: 9

Premium Essay

Journal Review

...Low Income Living Arrangements and Child Development Alzier Johnson-Gomez Housatonic Community College May 12, 2014 Abstract This study was conducted by E. Michael Foster and Ariel Kalil, researchers at the University of North Carolina-Chapel Hill and the University of Chicago; it was published in the November/December 2007 issue of the journal Child Development. It used longitudinal data from approximately 2,000 low-income families to compare the development of children living only with their mothers with that of children in other arrangements (those living with their biological fathers, in blended families, and in multigenerational households), in order to determine the effect of living arrangements on children's cognitive achievement and emotional adjustment. Instead of comparing children in different family arrangements at one point in time, the researchers addressed how children and their families change over time, allowing them to consider whether and how a child's emotional and intellectual development changes after a change in family structure. The study found that, in general, children's performance on developmental assessments changed very little after their mothers married. The absence of a relationship between family structure and children's outcomes suggests that there is as much diversity within families of a given type as there is across families of different types. This distinction implies that policies like income support that seek to improve...

Words: 2712 - Pages: 11

Premium Essay

College Admissions Obstacles

...I agree with the ASA's conclusion that race must be considered in college admissions, because race puts many obstacles in the path of success for minorities, because a more diverse campus will help to end segregation and stigma while better preparing students for their fields, and because Affirmative Action does not affect admissions as much as athletics and legacy preferences do. As previously stated, race creates obstacles for minorities trying to achieve their goals. Just as it is more impressive for a flu-ridden runner to win a race than a healthy one, it is more impressive for a disadvantaged minority student to score highly on a standardized test than a privileged white child. Many critics of Affirmative Action argue that the real obstacle comes from the school attended...

Words: 1115 - Pages: 5

Free Essay

Thesis

...The Effects of Sorority and Fraternity Membership on Class Participation and African American Student Engagement in Predominantly White Classroom Environments Shaun R. Harper This study explored the relationship between Black Greek-letter organization membership and African American student engagement in almost exclusively White college classrooms. Data were collected through interviews with 131 members from seven undergraduate chapters at a large, predominantly White university in the Midwest. The study resulted in an explanatory model showing how underrepresentation, voluntary race representation, and collective responsibility positively affect active participation, while forced representation has a negative effect. Findings also reveal that faculty teaching styles both positively and negatively affect engagement among African American sorority and fraternity members in their classes. The implications of these findings are discussed at the end of the article. The title of Kimbrough's (2005) article, "Should Black Fraternities and Sororities Abolish Undergraduate Chapters?" captures the essence of an ongoing debate among students, various stakeholders on college and university campuses across the country, and leaders of the nine national Black Greek-letter organizations (BGLOs). Instead of offering a balanced description of the risks and educational benefits associated with membership, Kimbrough...

Words: 8911 - Pages: 36