Job performance practice essay
The following post is an essay researched and written in anticipation of an exam question for the module Selection and Assessment, taken in pursuit of the Organizational Psychology Master's qualification through the University of London International Programmes (Birkbeck College). Excerpts may be used with the citation:
Aylsworth, J. (2010). Challenges for job performance: The criterion problem, problem criteria and the reality gap. (url) Accessed: (Month year).

Exam question: "Critically discuss attempts to conceptualise and measure job performance"

Challenges for Job Performance: The Criterion Problem, Problem Criteria and the Reality Gap

Introduction
How we conceptualize job performance (JP) is important because it determines the criteria that we measure during the selection process. It is also difficult, because well-recognized conceptualizations of JP have important, fundamental differences. We will address this question in three parts. Part 1 will consider what we mean by “the criterion problem.” Part 2 will examine behavior versus outcome as well as organizational citizenship. Part 3 will touch briefly on measurement issues. We will conclude that the opportunity remains open to narrow the gap that exists between how JP is researched and practiced.

Part 1: The “Criterion Problem”

Within selection and assessment, JP is the broad outcome or “criterion” that organizations seek to predict among candidates. Put simply, JP refers to the extent to which people are good at their jobs (Dewberry, 2009). However, we must further decide which facet or facets we need to predict and how to measure them. This, as described by Austin and Villanova (1992), is “the criterion problem.” Differing criteria are illustrated within some of the more academically important descriptions of JP:

– Campbell, McCloy, Oppler and Sager (1993) argue that JP should be measured as behaviors rather than outcomes, because outcomes are not entirely under the employee’s control. They also invoke levels of analysis, recognizing that individual behaviors cannot be assumed to contribute directly to organizational outcomes. For them, JP consists of eight categories.

– Motowidlo (2003) defines JP as “the total expected value to the organization of the discrete behavioral episodes that an individual carries out over a standard period of time.” Like Campbell et al. (1993), he measures JP as behaviors. For him, JP consists of two facets: 1) task performance (in both direct and indirect support of the organization’s core activities), and 2) contextual performance. The latter refers to behaviors that support the organization’s broader social and psychological environment. This second facet can also be thought of as organizational citizenship behavior (OCB) (Organ, 1988) or prosocial discretionary behavior.

– Schmitt and Chan’s (1998) KSAOs (knowledge, skills, abilities and other characteristics) constitute a theoretical approach to JP within job analysis.

Part 2: Problem criteria

A range of JP approaches called “competencies” further illustrates the criterion problem and draws our attention to two issues: 1) whether we should measure behaviors or outcomes, and 2) how to address OCB. While the approaches described in Part 1 are academic in origin, competencies evolved as a practitioner approach. Boyatzis (1982) agrees with Klemp (1980) that a competency is “an underlying personality characteristic that results in effective and/or superior performance of a job.”

Measuring behavior versus outcome. Spencer and Spencer (1993) measure competencies as outcomes, suggesting that the relationship of importance is between the underlying competency and some particular performance-related criterion; behavior is not measured at all. Woodruffe (1993), however, does measure behavior. The appeal of competencies to organizations probably lies in their usability and face validity, so we are not surprised that they have not been well developed by researchers – or that their basic assumptions have not withstood academic scrutiny (Wood & Payne, 1998). An example is the notion that individual contribution can somehow be carved out of team performance.
We don’t see much theory holding competencies together – certainly not to the standard of testable relationships.

OCB. While job-specific and non-job-specific behaviors can be observed directly during assessment, OCB cannot. However, it is important, and we may be able to infer it from a measure as straightforward as task performance. Using confirmatory factor analysis, Hoffman and Blair (2007) found that OCB could be measured as a single factor. The median correlation among their nine sub-dimensions of OCB was 0.75, explaining 60 percent of the variance. That strongly suggests “p” as a general factor of performance. They also found that while task performance and OCB were distinct, they were highly correlated – at 0.74. This is exciting because it suggests that OCB as a single dimension might be sufficiently inferred from task performance. We do need to acknowledge that Viswesvaran et al.’s (1996) review suggested a “p” that did not overtly include OCB. However, we believe that their “quality” dimension may allow conceptually for OCB.

Part 3: Measurement issues

Measurement of JP, whether within job analysis or employee review, presents challenges. One issue is that subjectivity can never be entirely eliminated – even with measures such as the 360-degree review, which seeks diverse input (from self, supervisor and colleagues). A second issue: Spencer and Spencer (1993) propose that competencies arise partly from invisible motives, which brings the unconscious into the picture. Projective methods are necessary to access the unconscious, because quantitative methods such as questionnaires cannot be used to tap implicit motives (Kerr, 2004) – and organizations are using projective methods during selection (Dewberry, 2009). It seems, at least to this writer, that the use of projective methods for selection is the equivalent of trespassing into individuals’ psychological genomes without their explicit consent. We also believe that this violation is even more egregious when findings are not communicated back to both successful and unsuccessful candidates.

Conclusion: The Reality Gap

We have looked at the criterion problem, problem criteria and measurement issues. We now draw attention to Dewberry’s (2009) mention of “the reality gap.” He writes that academically grounded prescriptions for selection and assessment “do not necessarily bear a great deal of resemblance to the way organizations” carry out the process. This mirrors Austin and Villanova’s (1992) description of research and practice as two parallel trains heading in the same direction but never converging, despite the illusion caused by the horizon. Unfortunately, we do not forecast progress here any time soon, even though researchers are doing quite progressive work. For example, Grant and Wrzesniewski (2010) used core self-evaluations and anticipated guilt and gratitude as intervening variables to extend job-personality theory. They presented quite useful suggestions for managers, yet it seems that there is no unaligned, impartial third party to champion this knowledge across the academic-practitioner divide. However, as with the “road to inquiry” (Kaplan, 1964; in Austin & Villanova, 1992), the opportunity remains open to narrow the gap between how JP is researched and practiced. Let us hope that, unlike the parallel trains that only appear to converge at the horizon, this opportunity is not an illusion.

Exam performance: The essay was not used under exam conditions because "job performance" did not appear as an exam question in 2010.
However, the essay was submitted as a practice exercise and was evaluated as easily meeting the criteria for a mark at distinction level.