Reducing psychological bias in job analysis: practice essay
The following post is an essay researched and written in anticipation of an exam question for the module Selection and Assessment, part of the Organizational Psychology Master's qualification offered through the University of London's International Programmes (Birkbeck College). Excerpts may be used with the citation:
Aylsworth, J. (2010). Job analysis: Reducing psychological bias related to social and cognitive factors. (url) Accessed: (Month year).
Exam essay question:
"How can social and cognitive bias influence job analysis, and how can the validity of job analysis be improved?"
Job analysis: Reducing Psychological Bias
Related to Social and Cognitive Factors
Psychological sources of inaccuracy may introduce bias at any stage of the selection process (Dewberry, 2009). Job analysis is a good starting point for examining this influence because it should be the first stage in any good system of personnel selection (Schmitt & Chan, 1998). We will address this question in two parts. Part 1 will examine distinctions between social and cognitive psychological processes. Part 2 will present evidence and recommendations. We will then conclude that, as with selection methods, issues of validity, reliability, acceptability, usability, fairness, generality and utility should form the broader context for efforts to mitigate the influence of psychological processes on job analysis.
Part 1: Distinctions Between Social and Cognitive Processes
Rothmann & Cooper (2008) define “job analysis” as “methodologies that allow personnel specialists to set the necessary standards by discovering exactly what the job entails and which skills and abilities are necessary for successful job performance.” Beyond selection, job analysis has many uses such as identifying training needs and reorganizing the company workforce. In fact, Levine (1983) mentions 11 uses for job analysis.
Social and cognitive sources of inaccuracy should be understood as distinct from one another because they arise from different underlying processes – and because “knowing the source of such inaccuracy is critical in mitigating its influence on job analysis data,” according to Morgeson & Campion (1997). They explain that social sources stem from normative pressures that arise when people interact, while cognitive sources are rooted in limits on information-processing ability. Rather than a theory or model of inaccuracy, the authors propose an organizing framework as a basis for future research. Within it are 16 potential sources of inaccuracy, six of them social and ten of them cognitive.
Part 2: Evidence and Practical Recommendations
A substantial number of Morgeson and Campion’s (1997) potential sources of inaccuracy are group-related. These influences may also be operating in group interviews and assessment center consensus sessions (Dewberry & Jordan, 2005), so group-related sources merit a closer look.
Group size. To the extent possible, group size should be kept to the necessary minimum, though what constitutes that minimum is somewhat subjective. The basis for this view is that group norms can lead to conformity pressure, which can in turn bias output. Asch (1951, 1955) found norm-related biased output in groups of only four members, and norms can operate even when members report as individuals. Kidwell & Bennett (1993) found that as group size increases, the value of individual contributions becomes less identifiable – and if the group is too large, members can lose their motivation to contribute.
Status differences. Here, we cite Morgeson & Campion’s (1997) view that groups should be composed of equal-status members where possible. The inclusion of incumbents seems especially problematic because they are more likely to be of lower status, more subject to group norms, and less likely to contribute. They may engage in ingratiatory behavior (Dula & Perry, 1997) and internalize supervisor expectations as demand effects (King, 1974).
They may also have routinized the complexity of their jobs, resulting in preferential recall of the novel aspects of their work (Morgeson & Campion, 1997). Furthermore, a meta-analysis by Dierdorff & Wilson (2003) showed that incumbents have lower inter-rater reliability than analysts or technical experts. Including outside analysts instead of incumbents (Smith & Hakel, 1979) might be a less biasing option. However, analysts can also contribute bias – for example, in the form of extraneous information such as the compensation levels of similar jobs (Mount & Ellis, 1987).
Consensus Requirement. The evidence argues strongly against requiring raters to reach unanimous agreement, the agreement rule shown to produce the greatest conformity pressure (Kaplan & Miller, 1987). Williams & Taormina (1993) found that both unanimity and majority rules produce extremity shifts (increased polarization of opinion). Morgeson & Campion (1997) suggest anonymous responses or averaging of responses as alternatives.
Complexity and Information Overload. Confronting large amounts of complex information and competing priorities can lead to the use of heuristics (cognitive shortcuts), which can introduce inaccuracy and hamper raters’ ability to make fine distinctions (Morgeson & Campion, 1997). The authors recommend giving raters enough time to complete their work and suggest breaking excessively long questionnaires into shorter ones, each completed by a different group of raters. Sanchez & Levine (1994) propose enhancing rater awareness by training job analysts to reduce their use of heuristics.
Conclusion: Broad recommendations
Little of the evidence cited by Morgeson and Campion (1997) actually comes from job analysis studies, and the authors concede that they are mainly presenting hypothesized relationships. However, cutting across social and cognitive processes, they make four broad recommendations:
1) Job information should always be obtained from multiple perspectives.
2) A variety of research methods should be used.
3) The data collection process should be clear and understandable for the respondent.
4) Data collection should be closely supervised.
Perhaps in the future, we will understand more about how psychological processes contribute to both accuracy and inaccuracy in job analysis. Until then, we conclude that efforts to mitigate the harmful effects of psychological bias should occur within a broader context – one that, as with selection methods, values validity, reliability, acceptability, usability, fairness, generality and utility.
Exam performance: This practice essay was not written under exam conditions or submitted for evaluation, so how it might have been marked is unknown.