
Alternative Approaches to Outcomes Assessment for Postsecondary Vocational Education

CenterFocus Number 10 / Winter 1995

Morton Inger

At least since the early 1980s, policymakers, business leaders, and educational reformers have called for increased performance and accountability in education, and the emphasis on accountability is intensifying. For example, bills awaiting action in Congress would give individual states large block grants for vocational education and job training, with a great deal of flexibility and fewer federal prescriptions. But in at least one area--accountability--the plans call for even stricter federal control than in the past.

As part of the general call for reform, state governments and accrediting associations have strongly recommended the adoption of outcomes assessment in postsecondary education.

Reinforcing the need for improvements, the Perkins Act of 1990 requires each state to measure student learning gains in basic and more advanced academic skills as well as vocational skills. The requirement to measure academic competence represents a departure from the type of outcomes traditionally measured.

"Traditional" outcomes are those that have been routinely used in postsecondary vocational education, such as job placement, occupational competence, program completion or retention, and earnings. Traditional outcomes are usually measured with certification tests, follow-up surveys, and self reports. The goal of alternative assessment is to produce more valid information about the results of vocational education programs than has been available with more traditional testing procedures.

This CenterFocus explores alternative ways of assessing outcomes. But first it is important to clarify which outcomes to assess. Vocational programs have often been thought of solely in terms of students' economic outcomes, such as job placement, occupational competence, and earnings.

However, legislators and policymakers are moving more and more toward accountability measures tied to students' educational outcomes, such as learning gains in basic and more advanced academic skills as well as in vocational skills.

One might suppose that the only important evaluation information is that related to student outcomes. However, teachers and administrators need to know what worked, what did not work, and why; and practitioners, program managers, policymakers, and funders need a basis for deciding whether to modify, enhance, or drop programs. Therefore, in addition to information on student outcomes, a broad, meaningful approach to assessment requires valid and reliable information on the needs a program is meant to address and on the processes by which it operates.

Needs, of course, provide the basis for evaluation, and process information helps evaluators understand and interpret what has occurred and why, and decide what needs to be changed.

One also needs to look at two other broad categories of outcomes: institutional and program. Certain goals can be effectively addressed only at the institutional level because these outcomes represent an aggregation of all efforts occurring across the institution, such as institution-wide retention and completion rates. But when a specific program represents only one part of a larger institution, outcomes must be assessed at the program level, because that is where they have the most meaning.

Alternative Approaches

As the demands and expectations for outcomes assessment increase, so do the challenges for practitioners in postsecondary institutions. Postsecondary institutions and their vocational programs serve several different groups of students simultaneously: traditional, full-time students looking to transfer to four-year colleges; young adults seeking higher-paying employment options; and middle-aged adults seeking retraining for career change. Each of these groups seeks different outcomes and requires different outcomes assessment. Since students come to these schools with different needs and encounter the curriculum in different ways and at different stages of their education, designing outcomes assessment around highly structured curriculum paths is extremely difficult.

Further, according to Ewell (1992), community college students "don't stand for assessment." He advocates assessment systems that are infused into the curriculum so that students are not intimidated by formal testing procedures. Alternative assessment approaches that enable students to demonstrate skills and knowledge in natural settings and at convenient times and places can help to accomplish this goal.

Each of the following approaches offers a unique perspective for thinking about, collecting information on, and analyzing outcomes for postsecondary education.

Total Quality Management (TQM)

American businesses are adopting TQM out of a need to improve quality and increase economic competitiveness, and TQM is now being considered as a framework for educational reform. The fundamental goal of TQM is to identify, meet, and exceed customer needs by continuously improving work processes and by ensuring that everyone is involved in carrying out the initiative. With TQM, measurement is focused on the entire system rather than simply on the final product.

In a TQM approach, deviations from specified performance standards are analyzed and fed back into the system to make improvements. Although assessment to achieve accountability is part of this approach, it is not the primary goal. The primary goal--the critical purpose of TQM--is to create and support an environment that yields continuous quality improvement.
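
As a concrete illustration of this feedback loop, the sketch below flags deviations from a specified performance standard so they can be analyzed and acted on. The metric, the standard, and the data are hypothetical, invented for illustration; they are not drawn from any particular TQM implementation.

```python
from statistics import mean, stdev

# Hypothetical weekly measurements of a work process
# (e.g., days to respond to a student service request).
weekly_response_days = [2.1, 2.4, 1.9, 2.2, 5.8, 2.0, 2.3, 6.1]

STANDARD = 3.0  # hypothetical specified performance standard (days)

def flag_deviations(values, standard):
    """Return (index, value) pairs that exceed the standard.

    In a TQM cycle, these flagged points would go to a problem-solving
    team, whose findings are fed back into the work process.
    """
    return [(i, v) for i, v in enumerate(values) if v > standard]

print(f"baseline: mean={mean(weekly_response_days):.2f}, "
      f"sd={stdev(weekly_response_days):.2f}")
for week, value in flag_deviations(weekly_response_days, STANDARD):
    print(f"week {week}: {value} days exceeds the {STANDARD}-day standard")
```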

Thus, TQM focuses on how work processes contribute to or hinder the realization of customer expectations. A fully functioning TQM approach yields outputs (products and services) that are responsive to ever-changing customer needs. In an educational institution, for example, the goal of TQM is to ensure that all work processes contribute to meeting student needs and expectations and are continually refined and refocused on meeting those needs. Further, according to TQM philosophy, a school's institutional effectiveness can be determined by measuring the effectiveness and efficiency of these processes.

Although generalizations are difficult, a comparison of TQM to other educational outcomes assessment approaches reveals that TQM is more comprehensive and more highly focused on evaluation for the sake of improvement than for the sake of accountability. Improvement is the key concept.

In 1991, Axland identified fourteen community and technical colleges involved with TQM, of which roughly two-thirds were implementing TQM within their own administrative units. Fox Valley Technical College (FVTC) in Appleton, Wisconsin, has been using TQM since 1986. Comment cards strategically located on campus and picked up two or three times a week provide the College with a direct and anonymous mechanism for monitoring and managing student satisfaction. The College also uses focus groups to collect data on student needs and expectations. In addition, problem-solving teams formed within units across the campus enable employees to take corrective action when problems are detected; these teams directly confront problems that could jeopardize students' positive outcomes. FVTC is so committed to using measurement to achieve improvements that it offers educational guarantees to its graduates, with retraining or refunds if those guarantees are not met.

The Student Success Model

The goal of the student success model is to create an institutional environment that contributes to developing and maintaining student success. This model is based on the premise that students are in the best position to define "success" for themselves and, therefore, for the institution and its programs. An important part of this model involves encouraging students to set high standards, document their goals and intended outcomes, and monitor their progress toward achieving those outcomes. These student goals and intended outcomes form the basis for an extensive, institution-wide assessment system that is far different from one devised from outcome measures and performance standards established by such external groups as governing boards and accrediting agencies.
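
To suggest what documenting goals and monitoring progress toward them might look like as a simple record-keeping structure, here is a minimal sketch; the record layout, field names, and sample entries are hypothetical and do not describe any institution's actual system.

```python
from dataclasses import dataclass, field

@dataclass
class StudentGoal:
    """One student-defined goal with its intended, measurable outcome."""
    description: str       # e.g., "transfer to a four-year college"
    intended_outcome: str  # how the student will know it is achieved
    milestones: list = field(default_factory=list)  # (term, note) pairs
    achieved: bool = False

@dataclass
class StudentRecord:
    name: str
    goals: list = field(default_factory=list)

    def progress_report(self):
        done = sum(g.achieved for g in self.goals)
        return f"{self.name}: {done}/{len(self.goals)} goals achieved"

# Hypothetical usage
record = StudentRecord("A. Student")
record.goals.append(StudentGoal(
    description="complete the practical nursing certificate",
    intended_outcome="pass the licensure exam on the first attempt"))
record.goals[0].milestones.append(("Fall term", "completed prerequisites"))
print(record.progress_report())  # A. Student: 0/1 goals achieved
```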

Traditionally, some postsecondary educators have operated on the premise that students have the right to take any class and to pass or fail that class, and some have even felt that a high failure rate is necessary to achieve high standards (Dressel, 1961). The student success paradigm operates from a vastly different perspective--that the school is obligated to devise a course of study and a program of support services that enables students to meet their goals.

Santa Fe Community College (SFCC) has used the student success model to meet its own high standards and, at the same time, to demonstrate accountability to external bodies. The SFCC approach to outcomes assessment connects student achievement directly to both the college's mission and specified learning objectives for each discipline, course, and program. For example, studies of course success and student achievement in math and English courses are used to ensure that specified competencies are, in reality, being acquired while the student is in attendance. The degree of success experienced by students, as measured by criterion-referenced exams, becomes a quality benchmark for both the college and the individual student. The college also measures students' success after they leave the school.

The following example illustrates how SFCC uses this assessment approach. In 1989, 85 percent of its students who took the national nursing licensure exam (NCLEX-RN) passed it, the same as the national average. In 1990, while the national average rose slightly to 86 percent, SFCC's rate fell below 74 percent. This decline and a subsequent study led to a series of recommendations designed to improve student performance on the test: strengthening the curriculum in general, improving specific areas of weakness, enhancing student study skills, and monitoring each student's academic performance more closely. In 1991, the College's pass rate on that test rose to 100 percent (Santa Fe Community College, 1991).

Value-Added Assessment

The concept of value-added can be applied whether one is assessing institutional effectiveness, program quality, or student learning. The assessment strategy is to gather baseline information about entry-level competency and compare it with exit-level performance. This approach, which makes it possible to determine the skills students attain as they complete critical levels of a program, would be especially useful in determining program effectiveness for the many individuals who move in and out of college or into the job market without finishing a degree or certificate. Although value-added assessment has traditionally been conducted using a pre- and post-test design, alternative approaches such as performance assessments also appear feasible. For example, Northeast Missouri State University, an early leader in the application of value-added assessment to higher education, is now including performance assessment as part of its value-added measurement.

Alverno College, in Milwaukee, Wisconsin, uses multiple assessments across time not only from the institution's research perspective but from the student's perspective as well. One of Alverno's most widely recognized contributions to student outcomes assessment is the use of a matrix that defines abilities to be developed and demonstrated by students. To graduate, each student must fully meet the criteria in each of the forty-eight matrix cells. Instruction is tied to these explicitly stated criteria. Assessment techniques at Alverno include written tests, verbal feedback from instructors, peer assessment, and self-assessment. Through this process of continuous, multiple-method assessment of student performance, the College develops a comprehensive record of performance for each student.
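
A matrix of this kind maps naturally onto a simple data structure. The sketch below assumes, purely for illustration, a grid of eight abilities demonstrated at six levels each (which multiplies out to the forty-eight cells mentioned above); the ability names and the completion rule are stand-ins, not Alverno's actual criteria.

```python
# Hypothetical ability matrix: 8 abilities x 6 levels = 48 cells.
ABILITIES = ["communication", "analysis", "problem solving",
             "valuing", "social interaction", "global perspective",
             "effective citizenship", "aesthetic engagement"]
LEVELS = range(1, 7)

def new_matrix():
    """matrix[ability][level] is True once the student has demonstrated
    that ability at that level against the stated criteria."""
    return {a: {lvl: False for lvl in LEVELS} for a in ABILITIES}

def record_demonstration(matrix, ability, level):
    matrix[ability][level] = True

def eligible_to_graduate(matrix):
    """Graduation requires every one of the 48 cells to be met."""
    return all(met for levels in matrix.values()
                   for met in levels.values())

m = new_matrix()
record_demonstration(m, "analysis", 1)
print(eligible_to_graduate(m))  # False until all 48 cells are True
```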

One problem with value-added assessments is that they measure growth, which is not necessarily the same as competence. To an employer, the degree of change in competence is not likely to matter if a student has not met specified performance levels. However, Belcher (1987, p. 32) points out that "measuring improvement does not replace the possibility of setting a floor by establishing exit standards." Turnbull (1987) argues that it is important to measure both growth and terminal competence. Measuring both is critical where postsecondary students enroll, gain skills, and move into the job market without completing a degree or certificate program; failure to address both aspects of learning could make an institution that is highly successful in producing a competent workforce appear ineffective because of a high non-completion rate.
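
A minimal sketch of the pre- and post-test design, assuming hypothetical scores on a common 0-100 scale, shows how the two quantities Turnbull distinguishes can be reported side by side: the gain (growth) and whether an exit standard (the "floor") was met.

```python
# Hypothetical pre- and post-test scores on a common 0-100 scale.
students = {
    "s1": {"pre": 45, "post": 78},
    "s2": {"pre": 70, "post": 74},
    "s3": {"pre": 30, "post": 55},
}

EXIT_STANDARD = 70  # hypothetical floor for terminal competence

for name, scores in students.items():
    gain = scores["post"] - scores["pre"]        # growth (value added)
    competent = scores["post"] >= EXIT_STANDARD  # terminal competence
    print(f"{name}: gain={gain:+d}, "
          f"exit standard {'met' if competent else 'not met'}")
```

Note that a student can show a large gain while still falling short of the exit standard, which is precisely the distinction drawn above.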

Concept Mapping

Once an institution commits to assessing outcomes and determining the value added to each student's skill and knowledge, it needs a methodology for conceptualizing outcomes--thinking through the basic tenets and key components of a program. A new approach that offers help in this area is concept mapping. This process enables diverse groups of practitioners to describe how their programs work and then, based on that collective wisdom, articulate the theory of the program, that is, how the program is intended to influence outcomes.

Practitioners of vocational education prescribe particular courses of action to achieve certain vocational education outcomes. In doing so, they act as theorists: their behaviors, interpretations, and responses to situations represent their view of how vocational education programs are intended to function. Understanding the theoretical underpinnings of a program is a practical step because it is important to understand how programs can be expected to work before attempting to evaluate them. Knowing the theory behind a program is essential to determining what to measure and which characteristics of a program are most important.

Concept mapping employs small-group processes, sorting and rating techniques, and multivariate statistical analysis to represent the program in the form of a map. The map illustrates the participating stakeholders' ideas and how those ideas are interrelated. It also illustrates the major components of a program and can be used to indicate relationships among components and to show how these components are linked to outcomes. Once constructed, the map is used by the participants as a common framework for identifying potential vocational education outcomes. Practitioners who play a part in the design and delivery of a program should be involved in developing the map; thus, vocational educators can involve educators, administrators, and students as well as community, business, labor, or other representatives in conceptualizing outcomes.
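
To indicate what the sorting and multivariate-analysis steps can involve computationally, here is a minimal sketch assuming hypothetical sort data; it uses multidimensional scaling and hierarchical clustering, a common pairing in concept mapping, although the article does not prescribe particular techniques.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform
from sklearn.manifold import MDS

# Hypothetical data: each participant sorts 5 outcome statements
# into piles; statements sorted together are judged similar.
statements = ["job placement", "academic skills", "work habits",
              "earnings", "problem solving"]
sorts = [  # one list of piles per participant
    [[0, 3], [1, 4], [2]],
    [[0, 3, 2], [1, 4]],
    [[0, 3], [1, 2, 4]],
]

n = len(statements)
co = np.zeros((n, n))  # co-occurrence counts across participants
for piles in sorts:
    for pile in piles:
        for i in pile:
            for j in pile:
                co[i, j] += 1

dist = 1 - co / len(sorts)  # more co-occurrence -> smaller distance
np.fill_diagonal(dist, 0)

# Place each statement as a point on a 2-D map...
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)
# ...and group the points into candidate program components.
labels = fcluster(linkage(squareform(dist), method="average"),
                  t=2, criterion="maxclust")
for s, (x, y), c in zip(statements, coords, labels):
    print(f"{s:16s} cluster {c}  at ({x:+.2f}, {y:+.2f})")
```

Participants would then review the resulting map together, naming the clusters and linking them to intended outcomes.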

Concept mapping can also be used to guide policymakers, managers, program designers, and administrators in setting policy, conducting strategic planning, and facilitating program planning, implementation, and improvement. Program theory illustrated through a concept map can also provide feedback to those who created and funded a program. This type of information can lead to improvements in policies that govern funding and implementation.

Performance Assessment

Performance assessment is an umbrella term for a variety of techniques designed to assess directly and "authentically" what students know and can do, in forms other than paper-and-pencil or multiple-choice tests. These assessments are considered more "authentic" because the student's mastery is demonstrated rather than inferred from his or her responses to questions about the subject. They include profiles and records of achievement, portfolios, projects, and performance tasks. These techniques are particularly useful for measuring student outcomes that are difficult to quantify with paper-and-pencil or multiple-choice tests. Moreover, because students are involved in the evaluation process, performance assessment techniques motivate them to learn and promote the development of their own personal standards of quality.

Although empirical evidence that performance assessment improves student outcomes is still limited, secondary and postsecondary schools where performance assessment has gained a foothold report encouraging results. For example, in the Pittsburgh Arts PROPEL schools, students creating portfolios of their writings reportedly write better, develop internal standards of quality, and think of themselves as professional writers (Camp, 1990). Other sites offer significant evidence of important and positive student outcomes (Archbald & Newmann, 1992).

Advocates of performance assessment hold that students cannot fully display what they know and can do except in response to motivating tasks (Raven, 1992). Interesting assessment tasks calling for student-valued and complex learning skills provide not only useful evaluations but also positive learning experiences. In this way, the distinction between assessment and learning is blurred; it is not merely that assessment informs future instruction but that the assessment itself is instructive (Camp, 1990).

Since many performance assessment techniques are relatively new, educators are struggling with such issues as cost, faculty development, and public credibility. For example, although tasks that are intended to elicit student action and production are direct and meaningful measures of students' knowledge and skills, developers are finding it difficult and time-consuming to construct performance tasks in some areas. The specificity of the tasks, combined with the time required to complete them, constrains the scope of what can be tested.

Nonetheless, vocational educators have long used tasks, projects, and demonstrations to assess student achievement. Researchers and policymakers can learn from vocational education's experience with performance assessment, while vocational educators can learn from the developing field of alternative assessments and adapt new strategies to their particular needs.

Note: These different approaches have been presented separately to highlight what each can contribute to improving postsecondary vocational education, but they are not either/or choices. These various approaches can be--and are in fact--used in connection with one another. For example, TQM, the student success model, and performance assessment can all be used to assess the value added by an educational program. And concept mapping would be useful in all cases to clarify the theory of a program before any assessment is planned.

References

Archbald, D. A., & Newmann, F. M. (1992). Approaches to assessing academic achievement. In H. Berlak, F. M. Newmann, E. Adams, D. A. Archbald, T. Burgess, J. Raven, & T. A. Romberg, Toward a new science of educational testing and assessment (pp. 71-84). Albany, NY: SUNY Press.

Axland, S. (1991, October). Looking for a quality education? Quality Progress, 24(10), 61-72.

Belcher, M. J. (1987). Value-added assessment: College education and student growth. In D. Bray & M. J. Belcher (Eds.), Issues in student assessment (New Directions for Community Colleges, No. 59). San Francisco, CA: Jossey-Bass.

Camp, R. (1990). Thinking together about portfolios. The Quarterly of the National Writing Project and the Center for the Study of Writing, 12(2), 8-14, 27.

Dressel, P. L., & Associates. (1961). Evaluation in higher education. Boston, MA: Houghton Mifflin.

Ewell, P. (1992, June). Outcomes assessment: A perspective. Keynote presentation at the Illinois Council of Community College Administrators Summer Drive-in Conference in Champaign, IL.

Raven, J. (1992). A model of competence, motivation, and behavior, and a paradigm for assessment. In H. Berlak, F. M. Newmann, E. Adams, D. A. Archbald, T. Burgess, J. Raven, & T. A. Romberg, Toward a new science of educational testing and assessment (pp. 85-116). Albany, NY: SUNY Press.

Santa Fe Community College. (1991). NCLEX-RN licensure results: Student outcomes study. Unpublished internal document.

Turnbull, W. W. (1987). Can "Value Added" add value to education? In Research and Development Update. (ERIC Document Reproduction Service No. ED 282 466)

This CenterFocus was developed at the Institute on Education and the Economy, Teachers College, Columbia University, which is a site of the National Center for Research in Vocational Education. It is a distillation of a report of the same title, edited by Debra Bragg and written by Debra Bragg, Thomas Grayson, Michael Harmon, Linda Mabry, and N. L. McCaslin, 1992. (NCRVE Report No. MDS-239). National Center for Research in Vocational Education, University of California, Berkeley.

