One reason for the lack of data use is the perception that the data are being collected for someone else's purposes. Administrators report ADA to the state education agency, provide transcripts to postsecondary institutions, and report grades to students and parents. Similarly, participant information is reported to the state or federal office that funds a particular program, and test scores are maintained by the teacher or counselor who administers the test. Rather than considering these various data as potential sources of information on the quality of teaching and learning at a site, educators view them as obligatory or otherwise limited in value. When "data" are used, they often take the form of anecdotes or casual observations. Because such information is not gathered systematically or representatively, it may lead to inappropriate conclusions and actions.
State agencies sometimes provide data to school districts and schools that are intended to trigger school improvement efforts. The school report card movement that began during the 1980s and the Perkins-mandated performance measures and standards for vocational education fall into this category. For example, state education agencies in California and Illinois produce annual report cards summarizing various performance data--in particular, state achievement test scores--for each school district in the state. In response to the 1990 Perkins Act requirement, some states distribute performance reports to their local vocational program administrators. The reports include such information as achievement test scores and rates of placement into employment and further education for vocational students or completers. These state-level efforts to provide data to local educators have produced mixed results.
State-provided data often do not lead to local improvement efforts, for a variety of reasons. The school report card experience shows that superintendents and principals may find district- and school-level data useful for public relations purposes. However, teachers typically do not find these data useful for assessing their own performance or the performance of their students. In other cases, local administrators and educators find that the state-provided data do not reflect what they are trying to do in their community. The joint RAND-Management Planning Research Associates (MPR) study for the National Center for Research in Vocational Education (NCRVE) of the preliminary effects of Perkins performance measures found that local vocational administrators and instructors were more likely to use the data if they had personally participated in developing the performance measures and related assessment instruments (Stecher et al., 1995). Allowing for a certain degree of local customization of performance measurement data improves the chances that local educators will find the data meaningful and relevant. When state-provided data do not reflect local educational goals or do not describe a useful unit of analysis, the data may simply be ignored.
Program evaluations are another source of information on local educational performance. When a new program is implemented, efforts may be made to evaluate its effectiveness. Typically, districts and schools rely on outside evaluators to undertake this work, although in some larger school districts, the district staff may include an evaluator. Responsibility for the evaluation is usually given to an expert, because a rigorous evaluation--particularly one that is intended to produce an estimate of program impact--requires attention to exacting methodologies.
However, expert research is often ignored or devalued for many of the reasons described above. Administrators and faculty may perceive the evaluation as providing someone else with information about their program; may question the focus and goals of the evaluation; and may react defensively to seemingly critical results by dismissing them or explaining them away. Consequently, evaluation findings are often "underutilized." In an effort to improve the use of evaluation results, some educators have encouraged building the capacity of districts and schools for self-evaluation. However, implementing a rigorous evaluation design (involving random assignment of subjects to treatment and control groups, identification of an appropriate comparison group, or statistical equating of participant and nonparticipant groups) often proves too burdensome or is practically infeasible for administrative or political reasons. Moreover, evaluation tends to be a one-time activity, which does not encourage ongoing improvement efforts.
Experience on NCRVE projects and a review of the evaluation utilization and performance indicator literatures suggest several strategies for improving the likelihood that performance data will be used--and used well--by local educators.1 These include moving from a framework where data are reported to someone else toward a framework where data are used locally; involving local educators in designing performance measurement systems; and providing technical assistance to increase the capacity of local educators to use data critically.
Performance indicator systems differ from formal evaluations in several ways that are listed in Table 1. Performance indicator systems are primarily descriptive, while formal evaluations provide causal evidence about the impact of particular strategies or activities. Indicator systems help answer the question, "How well do our collective strategies appear to be working?" In contrast, evaluations help answer the question, "What is the unique contribution of a particular strategy or activity?"
However, the two approaches can be complementary. By providing information on crucial aspects of schooling, performance indicator systems may help identify areas that require more thorough evaluation. For their part, formal evaluations may help identify serious conditions that should be monitored on an ongoing basis through indicator systems.
Table 1
|Performance Indicators|Formal Evaluations|
|Primarily descriptive|Provide causal evidence of program impact|
|Ask, "How well do our collective strategies appear to be working?"|Ask, "What is the unique contribution of a particular strategy or activity?"|
Student outcomes describe the ultimate end product of the education system--what we want students to know or achieve. Examples of student outcomes include academic achievement, employability or work-readiness skills, high school graduation, and placement into and success in further education or employment, among many others. School practices contribute to student outcomes. Examples include the curriculum, instructional strategies, and supporting structures such as scheduling practices. School inputs describe the background for both practices and outcomes. They are typically considered to be "givens"; that is, they represent conditions that are difficult to change, such as student demographics, local economic conditions, facilities, and school funds. Working from the goals identified in Step 1, educators are encouraged to identify crucial outcomes, practices, and inputs and their relationships to one another. In effect, educators develop explicit hypotheses about the schooling process at their site. The resulting performance indicator data will then allow them to test their hypotheses.
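One minimal way to make such hypotheses explicit is to write them down as a mapping from inputs and practices to the outcomes each is expected to affect. The sketch below is purely illustrative: the three categories come from the text, but the specific entries (demographics, block scheduling, and so on) are hypothetical examples, not a prescribed framework.

```python
# Hypothetical sketch of one site's indicator framework: inputs and
# practices, each linked to the outcomes it is hypothesized to affect.

framework = {
    "inputs": {
        "student_demographics": ["academic_achievement", "graduation"],
        "local_economy": ["placement_in_employment"],
    },
    "practices": {
        "integrated_academic_vocational_curriculum": ["academic_achievement"],
        "block_scheduling": ["academic_achievement", "graduation"],
    },
    "outcomes": ["academic_achievement", "graduation", "placement_in_employment"],
}

def hypotheses(framework):
    """List each hypothesized link as a (factor, expected outcome) pair."""
    links = []
    for category in ("inputs", "practices"):
        for factor, outcomes in framework[category].items():
            links.extend((factor, outcome) for outcome in outcomes)
    return links

for factor, outcome in hypotheses(framework):
    print(f"{factor} -> {outcome}")
```

Writing the links out this way forces each hypothesis to be explicit, so the site can later check whether its indicator data actually support each claimed relationship.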
Examples of special data collection efforts that may be developed to supplement existing data sources include special surveys and questionnaires, interviews and focus groups, teacher logs and diaries, classroom observations, and alternative assessment instruments. Educators should identify those new data sources that are most essential to describing identified outcomes, practices, and inputs, and should plan to phase these into their system.
Once data sources have been identified, educators are ready to develop actual indicators. As mentioned previously, indicators are statistics that typically appear as averages, percentages, and rates. Examples of performance indicators include average achievement test scores and high school graduation rates. However, it is generally a sound practice to select multiple indicators for each outcome, practice, or input. Teaching and learning are complex processes, and a single indicator will rarely adequately describe a particular construct or concern. For instance, if a school's goal is high academic achievement for all students, then educators may want to know what percentage of graduates complete high-level academic coursework and what proportion of teachers report integrating academic and vocational learning on a regular basis, as well as what the average achievement test score is for the school and how it is changing over time. Collecting data on one of these indicators to the exclusion of the others may miss important information on academic achievement and distort perceived performance.
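The multiple-indicator idea can be sketched concretely. The student records and indicator definitions below are hypothetical, invented only to show how several statistics (a rate, a percentage, and an average) can be computed from the same data to describe a single goal:

```python
# Illustrative sketch: computing several indicators for one goal
# (high academic achievement). All student records are hypothetical.

students = [
    {"graduated": True,  "high_level_coursework": True,  "test_score": 82},
    {"graduated": True,  "high_level_coursework": False, "test_score": 74},
    {"graduated": False, "high_level_coursework": False, "test_score": 61},
    {"graduated": True,  "high_level_coursework": True,  "test_score": 90},
]

def pct(flags):
    """Percentage of records for which the flag is True."""
    return 100.0 * sum(flags) / len(flags)

indicators = {
    # Rate: share of all students who graduate.
    "graduation_rate": pct([s["graduated"] for s in students]),
    # Percentage: graduates completing high-level academic coursework.
    "high_level_coursework_pct": pct(
        [s["high_level_coursework"] for s in students if s["graduated"]]
    ),
    # Average: mean achievement test score for the school.
    "avg_test_score": sum(s["test_score"] for s in students) / len(students),
}

for name, value in indicators.items():
    print(f"{name}: {value:.1f}")
```

No single one of these numbers captures "high academic achievement" by itself; together they give a fuller picture and guard against one statistic distorting perceived performance.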
Blank, R.K. (1993, April). Developing A System of Educational Indicators: Selecting, Implementing, and Reporting Indicators. Washington, D.C.: Council of Chief State School Officers.
Braskamp, L.A., and Brown, R.D., eds. (1980). Utilization of Evaluative Information. New Directions for Program Evaluation, No. 5. San Francisco, CA: Jossey-Bass.
Bruininks, R.H., et al. (1991, July). Assessing Educational Outcomes: State Activity and Literature Integration. Minneapolis, MN: Minnesota University, National Center on Educational Outcomes.
Cohen, D.K., and Garet, M. (1975). Reforming Educational Policy with Applied Social Research. Harvard Educational Review, 45 (1), 17-43.
Cooley, W.W. (1983). Improving the Performance of an Educational System. Educational Researcher, 12 (6), 4-12.
Cousins, J.B., and Leithwood, K.A. (1986, Fall). Current Empirical Research on Evaluation Utilization. Review of Educational Research, 56 (3), 331-364.
Cronbach, L.J., and Associates. (1980). Toward Reform of Program Evaluation. San Francisco, CA: Jossey-Bass.
David, J.L. (1981, January/February). Local Uses of Title I Evaluations. Educational Evaluation and Policy Analysis, 3 (1), 27-39.
David, J.L. (1987, October). Improving Education with Locally Developed Indicators. Santa Monica, CA: The RAND Corporation, Center for Policy Research in Education.
David, J.L. (1988, March). The Use of Indicators by School Districts: Aid or Threat to Improvement? Phi Delta Kappan, 69 (7), 499-503.
Dickinson, K.P., West, R.W., Kogan, D.J., Drury, D.A., Franks, M.S., Schlichtmann, L., and Vencill, M. (1988, September). Evaluation of the Effects of JTPA Performance Standards on Clients, Services, and Costs. Washington, D.C.: National Commission for Employment Policy.
Fetler, M.E. (1994, October). Carrot or Stick? How Do School Performance Reports Work? Education Policy Analysis Archives, 2 (13). (electronic journal).
Franklin, A.L. and Ban, C. (1994). The Performance Measurement Movement: Learning from the Experiences of Program Evaluation. Paper presented at the annual meeting of the American Evaluation Association, Boston, MA.
Haertel, E. (1986, May). Measuring School Performance to Improve School Practice. Education and Urban Society, 18 (3), 312-325.
Hanushek, E.A. (1994). Making Schools Work: Improving Performance and Controlling Costs. Washington, D.C.: The Brookings Institution.
Harp, L. (1995, February 15). Kentucky Names Schools to Receive Achievement Bonuses. Education Week, 14 (21), 11.
Hoachlander, E.G., Levesque, K., and Rahn, M.L. (1992). Accountability for Vocational Education: A Practitioner's Guide. Berkeley, CA: National Center for Research in Vocational Education.
Hoachlander, E.G., and Levesque, K.A. (1993). Improving National Data for Vocational Education: Strengthening a Multiform System. Berkeley, CA: National Center for Research in Vocational Education.
Janis, I.L., and Mann, L. (1977). Decision Making. New York: Free Press.
Kaagan, S.S., and Coley, R.J. (1989). State Education Indicators: Measured Strides, Missing Steps. New Brunswick, NJ: Rutgers University, Center for Policy Research in Education.
Kennedy, M.M. (1983, December). Working Knowledge. Knowledge: Creation, Diffusion, Utilization, 5 (2), 193-211.
Klausmeier, H.J. (1985). Developing and Institutionalizing a Self Improvement Capability: Structures and Strategies of Secondary Schools. Lanham, MD: University Press of America.
Levesque, K., and Medrich, E. (1995). School to Work Opportunities Performance Measures: First Year Data Collection. Washington, D.C.: National School to Work Office, U.S. Departments of Education and Labor.
Lindblom, C.E., and Cohen, D.K. (1979). Usable Knowledge. New Haven, CT: Yale University Press.
McDonnell, L.M., Burstein, L., Ormseth, T., Catterall, J.M., and Moody, D. (1990, June). Discovering What Schools Really Teach: Designing Improved Coursework Indicators. Los Angeles, CA: Center for Research on Evaluation, Standards, and Student Testing.
McLaughlin, M.W., and Phillips, D.C. (1991). Evaluation and Education: At Quarter Century. Chicago, IL: National Society for the Study of Education.
Murnane, R.J., and Pauly, E.W. (1988, March). Educational and Economic Indicators. Phi Delta Kappan, 69 (7), 509-513.
National Commission on Excellence in Education. (1983). A Nation at Risk. Washington, D.C.: U.S. Government Printing Office.
National Study of School Evaluation. (1993). Senior High School Improvement: Focusing on Desired Learner Outcomes. Falls Church, VA: NSSE.
Nisbett, R., and Ross, L. (1980). Human Inference: Strategies and Shortcomings of Social Judgment. Englewood Cliffs, N.J.: Prentice Hall.
Oakes, J. (1986, October). Educational Indicators: A Guide for Policymakers. Santa Monica, CA: The RAND Corporation, Center for Policy Research in Education.
Office of Educational Research and Improvement. (1988, September). Creating Responsible and Responsive Accountability Systems. Washington, D.C.: U.S. Department of Education, OERI.
Patton, M.Q. (1978). Utilization Focused Evaluation. Beverly Hills, CA: Sage Publications.
Porter, A. (1988, March). Indicators: Objective Data or Political Tool? Phi Delta Kappan, 69 (7), 503-508.
Porter, A. (1991). Creating a System of School Process Indicators. Educational Evaluation and Policy Analysis, 13 (1), 13-29.
Rahn, M.L., Hoachlander, E.G., and Levesque, K.A. (1992). State Systems for Accountability in Vocational Education. Berkeley, CA: National Center for Research in Vocational Education.
Raizen, S.A., and Rossi, P.H., eds. (1981). Program Evaluation in Education: When? How? To What Ends? Washington, D.C.: National Academy Press.
Richards, C.E. (1988, March). Educational Monitoring Systems: Implications for Design. Phi Delta Kappan, 69 (7), 495-498.
Rossi, P.H., and Freeman, H.E. (1993). Evaluation: A Systematic Approach. Beverly Hills, CA: Sage Publications.
Selden, R.W. (1988, March). Missing Data: A Progress Report from the States. Phi Delta Kappan, 69 (7), 492-494.
Selden, R. (1994). How Indicators Have Been Used in the USA. In K. Riley and D. Nuttall (eds.), Measuring Quality: Education Indicators--UK and International Perspectives. London and Washington, D.C.: Falmer Press.
Smith, M.S. (1988, March). Educational Indicators. Phi Delta Kappan, 69 (7).
Sproull, L.S., and Zubrow, D. (1982). Performance Information in School Systems: Perspectives from Organization Theory. Educational Administration Quarterly, 17 (3), 61-79.
Stecher, B.M., and Hanser, L.M. (1992). Local Accountability in Vocational Education: A Theoretical Model and Its Limitations in Practice. Santa Monica, CA: The RAND Corporation.
Stecher, B.M., and Hanser, L.M. (1993). Beyond Vocational Education Standards and Measures: Strengthening Local Accountability Systems for Program Improvement. Santa Monica, CA: The RAND Corporation.
Stecher, B., et al. (1995). Improving Perkins II Performance Measures and Standards: Lessons Learned from Early Implementers in Four States. Berkeley, CA: National Center for Research in Vocational Education.
Stern, D. (1986, May). Toward a Statewide System for Public School Accountability: A Report from California. Education and Urban Society, 18 (3), 326-346.
Timar, T.B., and Kirp, D.L. (1986). Educational Reform and Institutional Competence. Harvard Educational Review, 57 (3), 309-330.
Weiss, C.H. (1980). Knowledge Creep and Decision Accretion. Knowledge: Creation, Diffusion, Utilization, 1 (3), 381-404.
Zucker, L.G. (1980). Institutional Structure and Organizational Processes: The Role of Evaluation Units in Schools. CSE Report, No. 139. Los Angeles, CA: Center for the Study of Evaluation, University of California.
This CenterFocus was developed at the Institute on Education and the Economy, Teachers College, Columbia University, which is a site of the National Center for Research in Vocational Education.
This publication was published pursuant to a grant from the Office of Vocational and Adult Education, U.S. Department of Education, authorized by the Carl D. Perkins Vocational Education Act.
National Center for Research in Vocational Education
University of California at Berkeley
Address all comments, questions, and requests for additional copies to:
2030 Addison Street, Suite 500
Berkeley, CA 94704-1058
Our toll-free number is 800-(old phone deleted)