Contact NCRVE's Materials Distribution Service at (800) 637-7652 to order these or other NCRVE reports on assessment.
Damaris Moore, a member of the Dissemination Program, handles NCRVE's public information initiatives.
Many vocational educators are advocating the wider use of alternative assessments, such as portfolios, exhibitions, and performance events, for measuring skills . . . This interest in new measures derives in part from the changes occurring in vocational education. Educators and employers believe that the work world is changing and vocational education must adapt if it is to serve students well. The changes in the workplace are complex and not completely understood, but most observers believe that future employees will need integrated academic and vocational knowledge, a broad understanding of occupational areas, the ability to interact creatively with their peers, and higher-order cognitive skills that allow them to be flexible, learn rapidly, and adapt to ever-changing circumstances. To the extent this belief is true, vocational training needs to place greater emphasis on integrated learning, critical-thinking skills, and connections between vocational and academic skills, rather than on the mastery of the narrow, occupation-specific skills that characterized vocational education in the past. This new vision may also require broader changes in vocational education, including rethinking the organization, goals, content, and delivery of services, as well as the manner in which students and programs are assessed.
The educational measurement community is engaged in an equally serious rethinking of the structure of assessment. Traditional selected-response methods (multiple choice, matching, true-false) are being criticized on several grounds: they can narrow the curriculum; test preparation practices may inflate scores in high-stakes situations; and average performance differs consistently across racial/ethnic and gender groups. Many educators advocate the use of alternative approaches, including open-response items, realistic simulations, extended performance events, exhibitions, judged competitions, portfolios, and other forms of elaborate student demonstration.
Educators and researchers are working to improve the technical quality and feasibility of such performance-based assessments. On the quality dimension, researchers are concerned about the consistency of scoring and of student performance; the fairness of assessments that demand complex, contextually rich responses; and the interpretability of scores. From a practical point of view, educators worry about the complexity and cost of developing and scoring performance assessments, the additional time burdens they impose on students and teachers, and their acceptability to key stakeholders, including the business community. On the positive side, the distinguishing feature of most alternative assessments is "authenticity," i.e., students perform an activity or task as it would be done in practice rather than selecting from a fixed set of alternatives. On their face, these activities have greater validity than selected-response tests because success is clearly related to the criterion of interest, be it writing, problem solving, or performing job tasks. On the negative side, a student's performance on complex tasks is not as consistent from one task to the next as it is with selected-response items, and the scores produced by alternative assessments are not as dependable or interpretable as those produced by traditional tests. These issues remain unresolved, and there appear to be trade-offs between quality and feasibility.
Over the years, the nation has witnessed a wide range of proposals to change the structure and content of education and employment preparation programs. A major impetus for reform has been the recognition that many high school graduates' knowledge and skills fall short of what high-performance workplaces require. Despite their different approaches to reform, all sides agree on the need for trustworthy methods of assessing students' knowledge and skills to discern whether students are making progress toward desired outcomes.
This consensus leads to the question: "Which methods for assessing students are most useful and appropriate for your goals?" This practitioner's guide is designed to help you find an answer that is meaningful in your particular context by taking you through these steps:
The intent of this study was to gain a better sense of three key stakeholder groups' perspectives on the student outcomes associated with Tech Prep. Knowing how educators, students, and employers conceptualize outcomes could provide several benefits to practitioners and policymakers. First, understanding the similarities and differences in the perspectives of the three stakeholder groups could inform practitioners about how to proceed with various aspects of program implementation. Second, knowing the priorities that stakeholders place on various student outcomes could help focus attention and resources on the aspects of Tech Prep thought most likely to produce desired results. Third, knowing more about outcomes could lead to more meaningful outcomes assessment procedures and instruments, especially where there is a high level of consensus on particular foci of Tech Prep. Finally, understanding stakeholder perspectives on Tech Prep could help build more accountability into evolving Tech Prep systems, thereby increasing their potential for continued public support.
Formal program evaluation and outcomes assessment for Tech Prep have been limited, and the evaluations that have been conducted have tended to focus on the compliance-oriented measures required by governmental units. Outcome measures linked to enrollment, program completion, and job placement are typical of what state and federal agencies demand. The national evaluation sponsored by the U.S. Department of Education concentrates much of its attention on having local Tech Prep coordinators estimate the number of students who reach specified points in the educational and employment system, such as high school completion, matriculation into two-year postsecondary education, two-year postsecondary completion, and job entry or matriculation into four-year postsecondary education. Such estimates may be useful for understanding the potential scope and scale of the nation's emerging Tech Prep system, but they are less helpful for understanding how programs should operate and benefit students on more personal and consequential levels.