
The process of analyzing occupations, jobs, professional practices, and tasks results in the establishment of work and worker requirements for multiple purposes (Brannick, Levine, & Morgeson, 2007). Among them are testing, training, job design, and curriculum development. Theories, tools, and techniques for conducting occupational analyses are rooted in education, psychology, engineering, and human resource management (Wilson, Bennett, Gibson, & Alliger, 2012). This brief is organized around the “What, Why, and How” of verification, which begins when a panel or committee defines (or revises) the elements that constitute the occupational content domain. Tasks are the most common choice, but any set of elements can be verified. A linear process with a focus on verification is depicted below, although variations are common.
Job/Occupational Definition ⇨ Verification ⇨ Occupational Information Usage ⇨ Revision/Re-verification
WHAT is verification?
Verification is a check on the initial definition (job/occupational definition) accomplished by asking others to review it element by element. The process can be accomplished qualitatively or quantitatively. Verification is a best practice for demonstrating quality or due diligence in high-stakes testing or training contexts in terms of establishing content-oriented validity evidence.
WHY conduct verification?
One answer resides in the importance of quality assurance. Whether the job/occupational analysis is inductive or deductive, quality is critical and should not be overlooked. Another motivation is to provide opportunities for input (voice) from members of a profession or occupation. Finally, if challenges are expected, verification can provide legal defensibility.
Consider, for example, the facilitated process used in DACUM (Developing A CurriculUM; Norton & Moser, 2014). A facilitated DACUM workshop with a panel of seven to 12 job experts produces a chart containing broad duties, subordinate tasks, and other elements (e.g., knowledge areas, skills). Below we describe a standard CETE process for verification that can be applied widely to any occupation or job. We have used it alone as well as with DACUM in projects involving the Western Region Intergovernmental Personnel Assessment Council (WRIPAC), the Work Profiling System (CEB-SHL), and the Occupational Information Network (O*Net). What is done follows a standard yet flexible sequence in CETE projects. In broad terms, immediately following the creation or revision of the occupational content, the verification sequence unfolds from initial review to summary report.
HOW is verification conducted?
First, the newly defined DACUM chart is shared with the client organization (e.g., firm, educational institution, association). The client either approves the chart as the initial version or requests changes, which are incorporated before the client signs off. Reviews typically include expert workers who served on the DACUM panel but may also include supervisors, union representatives, and human resources or training staff. This quality assurance step can be expanded based on the proposed consequences of use, with higher stakes requiring more diligence than lower stakes.
Second, CETE staff recommend that verification surveys always precede hand-offs to task analyses, curriculum development, training, or testing. A requirements analysis ensures a shared understanding of the project: the client organization indicates the goals of the verification (e.g., intended uses and purposes), and expectations are either matched or alternatives are recommended. Collaboration and negotiation help guide the remainder of the project.
Approaches to verification may be qualitative or quantitative. A qualitative approach might include a focus group review of the occupational specification. CETE staff also use parallel panels, in which two or more panels conduct the same DACUM process and the results are consolidated. This method was used to develop the Community Support Skill Standards for a National Skill Standards Board project during the 1990s.
A quantitative verification uses surveys (print or online) to provide a look over the shoulder of the panel, gathering ratings on the elements of the occupational definition to support high-quality, defensible materials and products. CETE has used print and optically scanned surveys in past projects; currently our concentration is on web-based surveys. An ideal quantitative process (synthesized from experiences across CETE projects) might proceed as follows:
- Enter work duties and tasks, at a minimum, into a database for manipulation — we advocate entering all elements including worker characteristics (e.g., knowledge, skills).
- Request demographic information to describe the sample of respondents and allow filtered analysis.
- Use any job elements as items of the survey; it is appropriate to rate tasks, knowledge/skill statements, or other features of the chart, or to employ a supplemental set (such as O*Net). If desired, include repeated or nonsense elements as a response check for inattentive subject matter experts (SMEs), who could be dropped from analyses (a sketch of one such check appears after this list).
- Choose dimensions for rating elements (e.g., task, knowledge, skill) carefully; we advise no more than three because each dimension adds ratings equal to the number of elements — the most we have used is four.
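To make the element database and response-check ideas concrete, the sketch below shows one possible representation in Python; the record fields, dimension names, function, and threshold are illustrative assumptions, not a CETE specification.

```python
# Hypothetical sketch of an element database record and a repeated-item response check.
# All names and the max_gap threshold are illustrative, not a CETE standard.
from dataclasses import dataclass

@dataclass
class Element:
    element_id: str    # e.g., "A-03" for duty A, task 3
    element_type: str  # "duty", "task", or "KSA"
    text: str          # the statement respondents will rate

# Keeping to a few rating dimensions limits survey length (each dimension multiplies the ratings).
RATING_DIMENSIONS = ["importance", "frequency", "needed_at_testing"]

def flag_inattentive(ratings: dict[str, int],
                     repeats: list[tuple[str, str]],
                     max_gap: int = 2) -> bool:
    """Flag a respondent whose ratings of any repeated element pair diverge widely."""
    return any(abs(ratings[a] - ratings[b]) > max_gap
               for a, b in repeats
               if a in ratings and b in ratings)
```

Respondents flagged in this way can then be excluded from the analyses described below.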
There are multiple options for verification survey rating dimensions. A classic pair is importance and frequency, but difficulty (to learn or to instruct) and needed at the time of testing or hire are also common. The key is what you want to know. For example, one client wanted to know how to sequence its training, so one rating we asked incumbents to make involved the stem “By when do you need to be able to perform skillfully (or to know) X?” and the rating anchors were temporally based. Excellent summaries of rating dimensions are available in articles on certification practice analysis by Raymond (2001, 2005). An example rating scale for knowledge area, skill, and ability (KSA) elements, with four scale values and verbal anchors, is given below.
EXAMPLE: NECESSITY FOR PERFORMANCE SCALE (statements of knowledge area, skill, ability) – The scale asks for the degree to which respondents believe a KSA is necessary for successful performance of a (given) task.
Scale Values – Definitions
- 0 – Possession of this KSA is NOT RELATED [to successful overall performance of this task].
- 1 – Possession of this KSA is DESIRABLE but NOT essential [to successful …].
- 2 – Possession of this KSA is IMPORTANT [to successful …].
- 3 – Possession of this KSA is ESSENTIAL [to successful …].
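For a web-based survey, the scale above might simply be encoded as a value-to-anchor mapping; the variable name below is hypothetical.

```python
# The necessity-for-performance scale from above, encoded as scale value -> verbal anchor.
NECESSITY_SCALE = {
    0: "Possession of this KSA is NOT RELATED to successful overall performance of this task.",
    1: "Possession of this KSA is DESIRABLE but NOT essential.",
    2: "Possession of this KSA is IMPORTANT.",
    3: "Possession of this KSA is ESSENTIAL.",
}
```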
Sample sizes recommended by CETE depend on the purposes of the client and the size of the target population. Generally, higher-stakes tests for hiring or certification require a higher response rate (percentage of the population responding) and several hundred respondents, while lower-stakes uses require less in terms of response rate, and 50–100 respondents may be sufficient. Incentives, in our experience, are very helpful in increasing response rates, as is creating shorter, incomplete surveys with common items through matrix sampling (Childs & Jaciw, 2003).
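As a minimal sketch of matrix sampling, the code below assigns non-core items to shorter forms in round-robin fashion; the form count, core size, and item identifiers are assumptions chosen for illustration.

```python
# Hypothetical matrix sampling sketch: every form carries a common core of items,
# and the remaining items are split across forms so each respondent sees a shorter survey.
def build_forms(element_ids: list[str], n_forms: int = 3, n_core: int = 10) -> list[list[str]]:
    core, rest = element_ids[:n_core], element_ids[n_core:]
    forms = [list(core) for _ in range(n_forms)]
    for i, element_id in enumerate(rest):
        forms[i % n_forms].append(element_id)  # round-robin the non-core items
    return forms

# Example: 100 task/KSA statements yield three forms of about 40 items each,
# all sharing the same 10 common (anchor) items.
forms = build_forms([f"T{i:03d}" for i in range(100)])
```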
CETE posts a draft survey, seeks approval from client staff, and then monitors the online survey for a period of two to six weeks. Incentives and reminders are helpful in boosting response rates. Data analysis consists of cleaning, calculation of composite variables (e.g., criticality, duty-level values), and descriptive, exploratory, and sometimes inferential analyses. Subgroup or filtered analyses can also reveal additional details (e.g., comparing more and less experienced respondents, incumbents and supervisors, or certified and noncertified respondents). Lastly, decisions about testing emphasis or training weight are made for tasks, KSAs, and composites using rational cutoffs and decision trees. Below is a calculation of criticality for two respondents when there are three ratings for each task element: importance, frequency, and needed at time of testing.
| Respondent | Need at Testing (0 = No, 1 = Yes) | Importance (1-5) | Frequency (1-5) | Criticality |
|---|---|---|---|---|
| Person 15 | 1 (Yes) | 5 (Critically Important) | 5 (Daily) | 25 |
| Person 20 | 0 (No) | 4 (Very Important) | 3 (Monthly) | 0 |
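One composite rule consistent with the table is simply the product of the three ratings, so that a respondent who does not need the task at testing contributes zero. The sketch below reproduces the two rows with hypothetical variable names; other composites, such as weighted sums, are equally possible.

```python
# One criticality composite consistent with the table above: the product of the
# need-at-testing flag, importance, and frequency ratings. Names are illustrative.
def criticality(need_at_testing: int, importance: int, frequency: int) -> int:
    return need_at_testing * importance * frequency

assert criticality(1, 5, 5) == 25  # Person 15
assert criticality(0, 4, 3) == 0   # Person 20: not needed at testing, so zero
```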
Third, after data cleaning and analysis, the important and frequent tasks are specified and the quantitative data are uploaded to the database described above, which contains duties, tasks, and possibly other elements. There, the calculated statistics and composites are available for:
- creating a test specification (blueprint) using a spreadsheet or deeper analysis (e.g., item response theory); a simple weighting sketch follows this list
- conducting follow-up task analyses (behavioral or cognitive) to drill down
- assessing training needs, planning training programs, or evaluating training outcomes
- assessing employee competency for certification and hiring or promotion
- developing new competency-based materials designed to meet training needs (online, Sharable Content Object Reference Model [SCORM])
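As one illustration of moving from composites to a blueprint, the sketch below converts mean task criticality per duty into a proportional allocation of test items; the duty labels, values, and function are hypothetical.

```python
# Hypothetical sketch: allocate test items to duties in proportion to mean criticality.
def blueprint_weights(duty_means: dict[str, float], total_items: int = 100) -> dict[str, int]:
    total = sum(duty_means.values())
    return {duty: round(total_items * mean / total) for duty, mean in duty_means.items()}

# Example: duties with higher mean criticality receive proportionally more test items.
print(blueprint_weights({"A. Plan work": 12.0, "B. Perform procedures": 20.0, "C. Document": 8.0}))
# -> {'A. Plan work': 30, 'B. Perform procedures': 50, 'C. Document': 20}
```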
A final step, for thoroughness, involves a write-up of the project for the client. This step does not have to result in a long document but is part of documenting the work for a possible technical report. Details about the following form a template for the sections of the report:
- purpose
- verification method
- focus group or survey respondents
- analyses
- results
- conclusions
Selected research needs that are specific to verification include:
- Apply verification to replication data.
- Consolidate across panels when more than one panel is used on the same occupation or job.
- Use verification data to establish new areas for curriculum, testing, or credentialing.
- Integrate verification into online methodologies such as SkillsNET.
- Analyze alignment (crosswalk) data in a deeper and richer manner.
- Use behavioral and cognitive task analysis to follow up verification.
- Evaluate and implement recommendations from the recent National Research Council review of O*Net (Tippins & Hilton, 2010).
In summary, we reviewed one component in a competent job/occupational analysis: verification of selected elements of the content domain. We used a framework of what, why, and how. CETE staff strongly recommend verification of occupational analysis for quality assurance, defensibility, and collection of input from practitioners.
Selected Resources
[For details of the verification survey rating dimensions or a more comprehensive set of resources, email austin.38@osu.edu.]
- Brannick, M. T., Levine, E. L., & Morgeson, F. P. (2007). Job and work analysis: Methods, research, and applications for human resource management. Thousand Oaks, CA: Sage.
- Childs, R. A., & Jaciw, A. P. (2003). Matrix sampling of items in large-scale assessments. Practical Assessment, Research & Evaluation, 8(16). Retrieved July 3, 2013 from http://PAREonline.net/getvn.asp?v=8&n=16
- Flanagan, J. C. (1954). The critical incident technique. Psychological Bulletin, 51, 327–358.
- Lunz, M. E., Stahl, J. A., & James, K. (1989). Content validity revisited: Transforming job analysis data into test specifications. Evaluation and the Health Professions, 12, 192–206.
- Norton, R. E., & Moser, J. (2014). Developing a curriculum (DACUM) (4th ed.). Columbus, OH: CETE.
- Peterson, N. G., Mumford, M. D., Borman, W. C., Jeanneret, P. R., & Fleishman, E. A. (Eds.). (1999). An occupational information system for the 21st century: The development of O*Net. Washington, DC: American Psychological Association.
- Raymond, M. R. (2001). Job analysis and the specification of content for licensure and certification examinations. Applied Measurement in Education, 14, 369–415.
- Raymond, M. R. (2005). An NCME instructional module on developing and administering practice analysis questionnaires. Educational Measurement: Issues & Practice, 24, 29–34.
- Raymond, M. R., & Neustel, S. (2006). Determining the content of credentialing examinations. In S. Downing & T. Haladyna (Eds.), Handbook of test development (pp. 181–224). Mahwah, NJ: Erlbaum.
- Tippins, N., & Hilton, M. L. (2010). A database for a changing economy: Review of the Occupational Information Network (O*NET). Washington, DC: National Academies Press.
- Wilson, M. A., Bennett, W., Jr., Gibson, S. A., & Alliger, G. M. (Eds.). (2012). The handbook of work analysis: Methods, systems, applications and science of work measurement in organizations. New York: Taylor & Francis.
Contributors: James T. Austin, Brooke Parker, Dennis Priebe