
Using Data for Program Improvement: How Do We Encourage Schools To Do It?

CenterFocus Number 12 / May 1996

Karen Levesque, Denise Bradby, and Kristi Rossi, MPR Associates, Inc.

Most school districts and schools in this country are routinely involved in data collection. Administrators tally average daily attendance (ADA) rates and maintain transcript data, including students' course enrollments and grades. As a condition of receiving state or federal funds, they collect information on participants in particular programs or activities. Administrators also rely on anecdotal information to assess informally the quality of teaching and learning at their site, and teachers and counselors use various assessment instruments for diagnosing individual students. Thus, school districts and schools collect a wide array of data. However, they do not typically use the data they collect in a systematic fashion to identify strengths and weaknesses at their sites and to develop improvement strategies.

One reason for the lack of data use is the perception that the data are being collected for someone else's purposes. Administrators report ADA to the state education agency, provide transcripts to postsecondary institutions, and report grades to students and parents. Similarly, participant information is reported to the state or federal office that funds a particular program, and test scores are maintained by the teacher or counselor who administers the test. Rather than considering these various data as potential sources of information on the quality of teaching and learning at a site, educators view them as obligatory or otherwise limited in value. When "data" are used, they often take the form of anecdotes or casual observations. Unless steps are taken to gather systematic, representative information, data collected in this way may lead to inappropriate conclusions and actions.

State agencies sometimes provide data to school districts and schools that are intended to trigger school improvement efforts. The school report card movement that began during the 1980s and the Perkins-mandated performance measures and standards for vocational education fall into this category. For example, state education agencies in California and Illinois produce annual report cards summarizing various performance data--in particular, state achievement test scores--for each school district in the state. In response to the 1990 Perkins Act requirement, some states distribute performance reports to their local vocational program administrators. The reports include such information as achievement test scores and rates of placement into employment and further education for vocational students or completers. These state-level efforts to provide data to local educators have produced mixed results.

State-provided data often do not lead to local improvement efforts for a variety of reasons. The school report card experience shows that superintendents and principals may find district- and school-level data useful for public relations purposes. However, teachers typically do not find these data to be useful for assessing their own performance or the performance of their students. In other cases, local administrators and educators find that the state-provided data do not reflect what they are trying to do in their community. The joint RAND-Management Planning Research Associates (MPR) study for the National Center for Research in Vocational Education (NCRVE) of the preliminary effects of Perkins performance measures found that local vocational administrators and instructors were more likely to use the data if they had personally participated in developing the performance measures and related assessment instruments (Stecher et al., 1995). Allowing for a certain degree of local customization of performance measurement data improves the chances that local educators will find the data meaningful and relevant. When state-provided data do not reflect local educational goals or do not describe a useful unit of analysis, the data may simply be ignored.

Program evaluations are another source of information on local educational performance. When a new program is implemented, efforts may be made to evaluate its effectiveness. Typically, districts and schools rely on outside evaluators to undertake this work, although in some larger school districts, the district staff may include an evaluator. Responsibility for the evaluation is usually given to an expert, because a rigorous evaluation--particularly one that is intended to produce an estimate of program impact--requires attention to exacting methodologies.

However, expert research is often ignored or devalued for many of the reasons described above. Administrators and faculty may perceive the evaluation as providing someone else with information about their program; may question the focus and goals of the evaluation; and may react defensively to seemingly critical results by dismissing them or explaining them away. Consequently, evaluation findings are often "underutilized." In an effort to improve the use of evaluation results, some educators have encouraged building the capacity of districts and schools for self-evaluation. However, implementing a rigorous evaluation design (involving random assignment of subjects to treatment and control groups, identification of an appropriate comparison group, or statistical equating of participant and nonparticipant groups) often proves too burdensome or is practically infeasible for administrative or political reasons. Moreover, evaluation tends to be a one-time activity, which does not encourage ongoing improvement efforts.

Experience on NCRVE projects and a review of the evaluation utilization and performance indicator literatures suggest several strategies for improving the likelihood that performance data will be used--and used well--by local educators.[1] These include moving from a framework where data are reported to someone else toward a framework where data are used locally; involving local educators in designing performance measurement systems; and providing technical assistance to increase the capacity of local educators to use data critically.

At Your Fingertips

MPR Associates' staff are midway through a two-year NCRVE project to develop training materials that provide step-by-step guidelines for setting up performance indicator systems. At Your Fingertips: Using Data for Program Improvement will produce a workbook and trainer's manual that introduce educators to a practical method for using locally available data to determine strengths and weaknesses, identify improvement strategies, and monitor progress. The materials are based on experience gained from working intensively with a number of sites as well as on relevant research.[2]

What Are Performance Indicators and Systems?

Performance indicators are statistics that "indicate" something about the performance or health of a district, school, or program. Indicators describe crucial educational outcomes, processes, and inputs, and typically appear as averages, percents, or rates. A performance indicator system establishes loose relationships among the outcome, process, and input statistics, and enables educators to monitor these statistics on an ongoing basis. Such a system helps to identify strengths and weaknesses and generates discussion about causes and appropriate improvement strategies. Ultimately, a performance indicator system produces evidence about whether or not strategies are working.

Comparing Performance Indicator Systems and Evaluations

At Your Fingertips focuses on establishing performance indicator systems rather than on implementing a formal evaluation design. While formal evaluations may represent a one-time or periodic activity, performance indicator systems are designed to support continuous program improvement. Moreover, formal evaluations often prove overly burdensome or impractical to implement, whereas all districts and schools should have ready access to at least some relevant indicator data. A primary objective of the project is to encourage local educators to become familiar and comfortable with using data for program improvement. Performance indicator systems offer a more appropriate strategy than formal evaluations to achieve this end.

Performance indicator systems differ from formal evaluations in several ways that are listed in Table 1. Performance indicator systems are primarily descriptive, while formal evaluations provide causal evidence about the impact of particular strategies or activities. Indicator systems help answer the question, "How well do our collective strategies appear to be working?" In contrast, evaluations help answer the question, "What is the unique contribution of a particular strategy or activity?"

However, the two approaches can be complementary. By providing information on crucial aspects of schooling, performance indicator systems may help identify areas that require more thorough evaluation. For their part, formal evaluations may help identify serious conditions that should be monitored on an ongoing basis through indicator systems.

Table 1
INDICATOR SYSTEMS COMPLEMENT EVALUATIONS
Performance Indicators:
  • Ongoing
  • Describe the district, school, or program
  • Indicate progress and achievements
  • Suggest areas for improvement
  • Monitor changes over time

Formal Evaluations:
  • One-time or periodic
  • Formally evaluate a program
  • Isolate the impact of particular activities
  • May provide valid comparisons of participants and nonparticipants

The Program Improvement Process

At Your Fingertips describes a six-step program improvement process that is illustrated in Figure 1.

Figure 1: The Program Improvement Process

A Goal-Driven Process

Performance indicators should be rooted in local goals. If not, they may end up becoming the de facto goals. Educators are encouraged to identify what it is they are striving to achieve in their district, school, or program, and then what information they need to determine whether they are achieving these goals. A wide variety of education stakeholders should be involved in the process of identifying goals. These stakeholders may include academic and vocational teachers, counselors, school- and district-level administrators, school board members, state education agency staff, parents and students, local employers, and local postsecondary institutions. Generally, all those who have a stake in educational outcomes or who will be responsible for helping to achieve the goals should participate in establishing them.

Outcomes, Practices, and Inputs

Performance indicator systems may be based on a simple model of the schooling process that incorporates three basic elements: (1) student outcomes, (2) school practices (or processes), and (3) school inputs. Although some performance measurement initiatives have focused on just one of these elements to the exclusion of the others, performance indicator systems provide a more powerful analytic tool when they collect information on all three.

Student outcomes describe the ultimate end product of the education system--or what we want students to know or achieve. Examples of student outcomes include academic achievement, employability or work-readiness skills, high school graduation, and placement into and success in further education or employment, among many others. School practices contribute to student outcomes. Examples include the curriculum, instructional strategies, and supporting structures such as scheduling practices. School inputs describe the background for both practices and outcomes. They are typically considered to be "givens"; that is, they represent conditions that are difficult to change, such as student demographics, local economic conditions, facilities, and school funds. Working from the goals identified in Step 1, educators are encouraged to identify crucial outcomes, practices, and inputs and their relationships to one another. In effect, educators develop explicit hypotheses about the schooling process at their site. The resulting performance indicator data then allow them to test these hypotheses.
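For sites that keep their planning notes electronically, a minimal sketch of this outcome-practice-input model is shown below. It is purely illustrative and not part of the At Your Fingertips materials; the goal statement and measure names are hypothetical examples.

```python
# Illustrative sketch only: one way to record a site's explicit hypotheses
# about how inputs and practices relate to desired student outcomes.
# The goal and all measures listed here are hypothetical.

from dataclasses import dataclass, field
from typing import List

@dataclass
class SchoolingHypothesis:
    goal: str                                            # local goal identified in Step 1
    outcomes: List[str] = field(default_factory=list)    # what students should achieve
    practices: List[str] = field(default_factory=list)   # what the school does
    inputs: List[str] = field(default_factory=list)      # background "givens"

hypotheses = [
    SchoolingHypothesis(
        goal="High academic achievement for all students",
        outcomes=["average achievement test score",
                  "percent of graduates completing high-level academic coursework"],
        practices=["percent of teachers integrating academic and vocational learning",
                   "scheduling structures that support integrated instruction"],
        inputs=["student demographics", "per-pupil funding"],
    ),
]

for h in hypotheses:
    print(f"Goal: {h.goal}")
    print("  Outcomes:  ", "; ".join(h.outcomes))
    print("  Practices: ", "; ".join(h.practices))
    print("  Inputs:    ", "; ".join(h.inputs))
```

Writing the hypotheses down in this explicit form, whether on paper or in a simple file like the one above, makes it easier to check later whether an indicator exists for every element the team considers crucial.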

Identifying Data Sources and Developing Indicators

The next steps in the process of developing a performance indicator system involve identifying data sources and developing indicators that describe the outcomes, practices, and inputs identified in Step 2. Educators should begin by identifying data sources that are already maintained by districts or schools--for example, attendance records, student transcripts (including course enrollments and grades), achievement test scores, and records on participants in particular programs. Only when existing data sources do not provide sufficient information do participants need to consider special data collection efforts.

Examples of special data collection efforts that may be developed to supplement existing data sources include special surveys and questionnaires, interviews and focus groups, teacher logs and diaries, classroom observations, and alternative assessment instruments. Educators should identify those new data sources that are most essential to describing identified outcomes, practices, and inputs, and should plan to phase these into their system.

Once data sources have been identified, educators are ready to develop actual indicators. As mentioned previously, indicators are statistics that typically appear as averages, percents, and rates. Examples of performance indicators include average achievement test scores and high school graduation rates. It is generally a sound practice to select multiple indicators for each outcome, practice, or input. Teaching and learning are complex processes, and a single indicator will rarely describe a particular construct or concern adequately. For instance, if a school's goal is high academic achievement for all students, then educators may want to know what percentage of graduates complete high-level academic coursework and what proportion of teachers report integrating academic and vocational learning on a regular basis, as well as what the average achievement test score is for the school and whether it is increasing or decreasing over time. Collecting data on one of these indicators to the exclusion of the others may miss important information on academic achievement and distort perceived performance.
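To make the arithmetic concrete, the sketch below computes several such indicators from a handful of hypothetical student records. The field names and values are invented for illustration and do not refer to any particular district's data system.

```python
# Illustrative sketch: computing multiple indicators for one construct
# (academic achievement) from hypothetical student records.

students = [
    # Hypothetical records; a real system would draw these from transcripts or a database.
    {"graduated": True,  "completed_high_level_coursework": True,  "test_score": 82},
    {"graduated": True,  "completed_high_level_coursework": False, "test_score": 74},
    {"graduated": False, "completed_high_level_coursework": False, "test_score": 61},
    {"graduated": True,  "completed_high_level_coursework": True,  "test_score": 90},
]

graduates = [s for s in students if s["graduated"]]

# Indicator 1: percent of graduates completing high-level academic coursework
pct_high_level = 100 * sum(s["completed_high_level_coursework"] for s in graduates) / len(graduates)

# Indicator 2: average achievement test score for all students
avg_score = sum(s["test_score"] for s in students) / len(students)

# Indicator 3: high school graduation rate
grad_rate = 100 * len(graduates) / len(students)

print(f"Percent of graduates completing high-level coursework: {pct_high_level:.1f}%")
print(f"Average achievement test score: {avg_score:.1f}")
print(f"Graduation rate: {grad_rate:.1f}%")
```

Viewed together, the three statistics give a fuller picture of academic achievement than any one of them would alone, which is the point of selecting multiple indicators per construct.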

Interpreting the Data and Developing Improvement Strategies

The final steps of the program improvement model involve interpreting indicator data and developing improvement strategies. Sound interpretation requires determining the appropriate student cohort to examine on each indicator (for example, last year's graduates, this year's seniors, or this year's ninth graders); determining the appropriate unit of analysis (such as the school, a grade level, or individual classrooms); and identifying important subpopulations (for instance, examining data by gender and race-ethnicity). Through the At Your Fingertips materials, participants in the program improvement process also become familiar with some basic statistical concepts to assist them in analyzing the data. After discussing what the data mean, participants develop appropriate improvement strategies. To do so, educators must decide when they believe they have sufficient information to proceed with specific strategies and when they need more or different data.
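As a small illustration of what disaggregation can look like in practice, the sketch below restricts one indicator (the graduation rate) to a single cohort and then breaks it out by a subpopulation variable. The cohort label, grouping field, and values are hypothetical.

```python
# Illustrative sketch: disaggregating one indicator (graduation rate)
# for a chosen cohort by a hypothetical subpopulation variable (gender).

from collections import defaultdict

records = [
    {"cohort": "class of 1995", "gender": "F", "graduated": True},
    {"cohort": "class of 1995", "gender": "F", "graduated": True},
    {"cohort": "class of 1995", "gender": "M", "graduated": False},
    {"cohort": "class of 1995", "gender": "M", "graduated": True},
]

# Step 1: restrict the analysis to the cohort being examined.
cohort = [r for r in records if r["cohort"] == "class of 1995"]

# Step 2: tally totals and graduates by subpopulation.
totals, grads = defaultdict(int), defaultdict(int)
for r in cohort:
    totals[r["gender"]] += 1
    grads[r["gender"]] += r["graduated"]

# Step 3: report the indicator for each group.
for group in sorted(totals):
    rate = 100 * grads[group] / totals[group]
    print(f"Graduation rate ({group}): {rate:.1f}%")
```

The same pattern applies to other units of analysis, such as grade levels or individual classrooms: define the group of interest first, then compute the indicator within each group.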

Dynamic Process

Developing an indicator system for program improvement purposes requires a certain amount of trial and error. Few program improvement teams will identify all of the most relevant and appropriate information on the first try. When team members sit down to examine their indicator data, they will most likely discover additional data needs and may decide to drop or add some data sources and indicators. It is the nature of performance indicator systems to raise more questions than they answer. In fact, this is their greatest strength: they generate discussion and debate based on objective, if imperfect, data rather than on hunches, anecdotal evidence, or the force of inertia.

Establishing a Program Improvement Process

Over time, the performance indicator system provides schools with trend data to determine whether improvement strategies appear to be working. Program improvement team members meet periodically to review the indicator data; determine whether performance is improving; discuss reasons why improvement is or is not happening; and refine their indicators and improvement strategies. The team also decides with whom to share the performance information. Some teams may develop a school report card that is distributed periodically to students, parents, faculty, district administrators, and the school board. Others may post student and teacher attendance rates, for example, on a daily basis in an attempt to generate some healthy competition. Still others may decide that classroom-specific performance data should be reviewed only by the school principal. Whichever specific dissemination strategies are employed, participants in the program improvement process decide together how best to use the indicator information to bring about improved performance at their school.
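A minimal sketch of what reviewing trend data might look like, assuming indicator values have been recorded for several years, appears below; the years, indicator names, and numbers are invented for the example.

```python
# Illustrative sketch: reviewing indicators across years to see whether
# improvement strategies appear to be working. All values are hypothetical.

trend = {
    "1993-94": {"graduation_rate": 78.0, "avg_test_score": 71.5},
    "1994-95": {"graduation_rate": 81.5, "avg_test_score": 72.3},
    "1995-96": {"graduation_rate": 83.0, "avg_test_score": 74.0},
}

years = sorted(trend)
for indicator in ("graduation_rate", "avg_test_score"):
    first, last = trend[years[0]][indicator], trend[years[-1]][indicator]
    direction = "improving" if last > first else "declining or flat"
    print(f"{indicator}: {first} ({years[0]}) -> {last} ({years[-1]}) -- {direction}")
```

A simple summary of this kind is often enough to focus the team's periodic discussion on why performance is or is not improving, which is where the substantive work of program improvement takes place.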

Conclusion

Locally developed performance indicator systems offer a promising strategy for establishing a data-based program improvement process in districts and schools. By encouraging local educators to articulate their goals and involving them in deciding how to measure performance on those goals, the model described here ensures that indicator systems will be relevant to local educational objectives. After working through the process, educators should also become familiar with the many data sources available to them and begin to see the data's usefulness for answering a wide variety of questions about performance and effectiveness. Indicator systems are also generally practical and feasible, since all districts and schools have access to at least some meaningful data. In these ways, indicator systems avoid several pitfalls of other data collection and reporting strategies.


Notes

  1. These NCRVE projects include providing technical assistance to the Southern Regional Education Board State Vocational Education Consortium, working with New Castle County Vocational-Technical High School District in Delaware to develop a school improvement process, and providing technical assistance to the states on implementing Perkins performance measures and standards.
  2. Sites included a vocational-technical high school district, a joint academic and vocational high school, and a statewide youth corrections education program. In addition to these sites, many other high schools and community colleges as well as state-level administrators participated in introductory workshops on the At Your Fingertips materials.

Selected Bibliography

Bickel, W.E. (1984). Evaluator in Residence: New Prospects for School District Evaluation Research. Educational Evaluation and Policy Analysis, 6 (3), 297-306.

Blank, R.K. (1993, April). Developing A System of Educational Indicators: Selecting, Implementing, and Reporting Indicators. Washington, D.C.: Council of Chief State School Officers.

Braskamp, L.A., and Brown, R.D., eds. (1980). Utilization of Evaluative Information. New Directions for Program Evaluation Series. San Francisco, CA: Jossey-Bass. No. 5.

Bruininks, R.H., et al. (1991, July). Assessing Educational Outcomes: State Activity and Literature Integration. Minneapolis, MN: Minnesota University, National Center on Educational Outcomes.

Cohen, D.K., and Garet, M. (1975). Reforming Educational Policy with Applied Research. Harvard Educational Review, 45 (1), 17-43.

Cooley, W.W. (1983). Improving the Performance of an Educational System. Educational Researcher, 12 (6), 4-12.

Cousins, J.B., and Leithwood, K.A. (1986, Fall). Current Empirical Research on Evaluation Utilization. Review of Educational Research, 56 (3), 331-364.

Cronbach, L.J., and Associates. (1980). Toward Reform of Program Evaluation. San Francisco, CA: Jossey-Bass.

David, J.L. (1981, January/February). Local Uses of Title I Evaluations. Educational Evaluation and Policy Analysis, 3 (1), 27-39.

David, J.L. (1988, March). The Use of Indicators by School Districts: Aid or Threat to Improvement? Phi Delta Kappan, 69 (7), 499-503.

David, J.L. (1987, October). Improving Education with Locally Developed Indicators. Santa Monica, CA: The RAND Corporation, Center for Policy Research in Education.

Dickinson, K.P., West, R.W., Kogan, D.J., Drury, D.A., Franks, M.S., Schilichtmann, L., and Vencill, M. (1988, September). Evaluation of the Effects of JTPA Performance Standards on Clients, Services, and Costs. Washington, D.C.: National Commission for Employment Policy.

Fetler, M.E. (1994, October). Carrot or Stick? How Do School Performance Reports Work? Education Policy Analysis Archives, 2 (13). (electronic journal).

Franklin, A.L. and Ban, C. (1994). The Performance Measurement Movement: Learning from the Experiences of Program Evaluation. Paper presented at the annual meeting of the American Evaluation Association, Boston, MA.

Haertel, E. (1986, May). Measuring School Performance to Improve School Practice. Education and Urban Society, 18 (3), 312-325.

Hanushek, E.A. (1994). Making Schools Work: Improving Performance and Controlling Costs. Washington, D.C.: The Brookings Institution.

Harp, L. (1995, February 15). Kentucky Names Schools to Receive Achievement Bonuses. Education Week, 14 (21), 11.

Hoachlander, E.G., Levesque, K., and Rahn, M.L. (1992). Accountability for Vocational Education: A Practitioner's Guide. Berkeley, CA: National Center for Research in Vocational Education.

Hoachlander, E.G., and Levesque, K.A. (1993). Improving National Data for Vocational Education: Strengthening a Multiform System. Berkeley, CA: National Center for Research in Vocational Education.

Janis, I.L., and Mann, L. (1977). Decision Making. New York: Free Press.

Kaagan, S.S., and Coley, R.J. (1989). State Education Indicators: Measured Strides, Missing Steps. New Brunswick, NJ: Rutgers University, Center for Policy Research in Education.

Kennedy, M.M. (1983, December). Working Knowledge. Knowledge: Creation, Diffusion, Utilization, 5 (2), 193- 211.

Klausmeier, H.J. (1985). Developing and Institutionalizing a Self Improvement Capability: Structures and Strategies of Secondary Schools. Lanham, MD: University Press of America.

Levesque, K., and Medrich, E. (1995). School to Work Opportunities Performance Measures: First Year Data Collection. Washington, D.C.: National School to Work Office, U.S. Departments of Education and Labor.

Lindblom, C.E., and Cohen, D.K. (1979). Usable Knowledge. New Haven, CT: Yale University Press.

McDonnell, L.M., Burstein, L., Ormseth, T., Catterall, J.M., and Moody, D. (1990, June). Discovering What Schools Really Teach: Designing Improved Coursework Indicators. Los Angeles, CA: Center for Research on Evaluation, Standards, and Student Testing.

McLaughlin, M.W., and Phillips, D.C. (1991). Evaluation and Education: At Quarter Century. Chicago, IL: National Society for the Study of Education.

Murnane, R.J., and Pauly, E.W. (1988, March). Educational and Economic Indicators. Phi Delta Kappan, 69 (7), 509-513.

National Commission on Excellence in Education. (1983). A Nation at Risk. Washington, D.C.: U.S. Government Printing Office.

National Study of School Evaluation. (1993). Senior High School Improvement: Focusing on Desired Learner Outcomes. Falls Church, VA: NSSE.

Nisbett, R., and Ross, L. (1980). Human Inference: Strategies and Shortcomings of Social Judgment. Englewood Cliffs, N.J.: Prentice Hall.

Oakes, J. (1986, October). Educational Indicators: A Guide for Policymakers. Santa Monica, CA: The RAND Corporation, Center for Policy Research in Education.

Office of Educational Research and Improvement. (1988, September). Creating Responsible and Responsive Accountability Systems. Washington, D.C.: U.S. Department of Education, OERI.

Patton, M.Q. (1978). Utilization Focused Evaluation. Beverly Hills, CA: Sage Publications.

Porter, A. (1988, March). Indicators: Objective Data or Political Tool? Phi Delta Kappan, 69 (7), 503-508.

Porter, A. (1991). Creating a System of School Process Indicators. Educational Evaluation and Policy Analysis, 13 (1), 13-29.

Rahn, M.L., Hoachlander, E.G., and Levesque, K.A. (1992). State Systems for Accountability in Vocational Education. Berkeley, CA: National Center for Research in Vocational Education.

Raizen, S.A., and Rossi, P.H., eds. (1981). Program Evaluation in Education: When? How? To What Ends? Washington, D.C.: National Academy Press.

Richards, C.E. (1988, March). Educational Monitoring Systems: Implications for Design. Phi Delta Kappan, 69 (7), 495-498.

Rossi, P.H., and Freeman, H.E. (1993). Evaluation: A Systematic Approach. Beverly Hills, CA: Sage Publications.

Selden, R.W. (1988, March). Missing Data: A Progress Report from the States. Phi Delta Kappan, 69 (7), 492-494.

Selden, R. (1994). How Indicators Have Been Used in the USA. In Measuring Quality: Education Indicators--UK and International Perspectives, edited by K. Riley and D. Nuttall. London; Washington, D.C.: Falmer Press.

Smith, M.S. (1988, March). Educational Indicators. Phi Delta Kappan, 69 (7).

Sproull, L.S., and Zubrow, D. (1982). Performance Information in School Systems: Perspectives from Organization Theory. Educational Administration Quarterly, 17 (3), 61-79.

Stecher, B.M., and Hanser, L.M. (1992). Local Accountability in Vocational Education: A Theoretical Model and Its Limitations in Practice. Santa Monica, CA: The RAND Corporation.

Stecher, B.M., and Hanser, L.M. (1993). Beyond Vocational Education Standards and Measures: Strengthening Local Accountability Systems for Program Improvement. Santa Monica, CA: The RAND Corporation.

Stecher, B., et al. (1995). Improving Perkins II Performance Measures and Standards: Lessons Learned from Early Implementers in Four States. Berkeley, CA: National Center for Research in Vocational Education.

Stern, D. (1986, May). Toward a Statewide System for Public School Accountability: A Report from California. Education and Urban Society, 18 (3), 326-346.

Timar, T.B., and Kirp, D.L. (1986). Educational Reform and Institutional Competence. Harvard Educational Review, 57 (3), 309-330.

Weiss, C.H. (1980). Knowledge Creep and Decision Accretion. Knowledge: Creation, Diffusion, Utilization, 1 (3), 381-404.

Zucker, L.G. (1980). Institutional Structure and Organizational Processes: The Role of Evaluation Units in Schools. CSE Report, No. 139. Los Angeles, CA: Center for the Study of Evaluation, University of California.

This CenterFocus was developed at the Institute on Education and the Economy, Teachers College, Columbia University, which is a site of the National Center for Research in Vocational Education.

This publication was published pursuant to a grant from the Office of Vocational and Adult Education, U.S. Department of Education, authorized by the Carl D. Perkins Vocational Education Act.

CENTERFOCUS
National Center for Research in Vocational Education
University of California at Berkeley

Address all comments, questions and requests for additional copies to:
NCRVE
2030 Addison Street, Suite 500
Berkeley, CA 94704-1058

Our toll-free number is 800-(old phone deleted)

