
4. LIMITATIONS IN PRACTICE



INTRODUCTION

        The MidAmerica example reflects a level of local accountability that is not always achieved in practice. In order to illustrate the features of the accountability model, the description of MidAmerica omitted any discussion of the shortcomings that often exist in local accountability networks. However, a discussion of local accountability is incomplete without an examination of practical limitations that can constrain accountability systems at the local level. In this section we turn our attention to these limitations.

        The model diagrammed in Figure 2.1 serves as a basis for discussing the shortcomings of local accountability systems. Most of the problems we have seen in practice can be traced back to deficiencies in goals, measures, feedback loops, organizational change mechanisms, or the interactions among these components. When one of these components is missing or weak, it interferes with the effective functioning of the system.

        These four elements are highly interconnected, and each one affects the others. However, the clearest way to discuss their limitations is to consider the elements individually while holding the others constant. For example, in the discussion of measures, we assume the other elements in the accountability system are in place, i.e., the school has appropriate goals, there are mechanisms for communicating between administrators and constituents, and the school has procedures for adjusting the program in light of feedback that is received. If these conditions do not exist, any deficiencies in measures are exacerbated by problems with the other components, but the deficiencies in measures remain. Examining the components individually simplifies the discussion considerably while not limiting the generality of the conclusions we draw about measures or the other elements of local accountability.

        In the following paragraphs we will describe some of the common limitations that occur with each of the components of a local accountability system. Unless otherwise noted, the discussions of limitations are based on feedback from local program staff and constituents. All examples are derived directly from our site visits. As a rhetorical device, we will describe idealized components first and then analyze ways in which actual systems fail to achieve these ideals.

        There are some constraints we will not address, including external pressures from the state and federal levels and unusual local conditions that may not generalize. These influences can take many forms. For example, state policymakers often establish goals that affect all vocational programs; federal legislation currently mandates the adoption of statewide systems of measures that will be required of all local programs; state funding formulas affect local administrators' options to change local programs, particularly their ability to initiate new programs in response to local demand. In Figure 2.1, these factors are represented in the area labeled state and federal, and we treat them as exogenous elements in the local accountability model. We will discuss influences from state and federal sources briefly in our conclusions and at greater length in a subsequent report.

        Finally, there are other local factors not included explicitly in the model that can affect accountability systems. These include such things as collective bargaining agreements and local political pressures. Such factors do affect administrators and instructors, particularly in their role in the change process. However, these elements are too idiosyncratic and dependent on local context to be included in a general model. To the extent that we have specific comments to make about such factors, they will be included in the discussions of the four accountability elements.

GOALS

The Role of Goals

        Educational systems are accountable to many constituencies: students, parents, local businesses, and local, state, and federal agencies. Each of these constituencies has a set of implicit goals relating to vocational training and employment. Students and parents are likely to share the goals of obtaining affordable education and training that lead to employment with opportunities for lifetime advancement. Local businesses desire a qualified labor pool and a free or inexpensive source of additional training. Local governmental agencies focus on the well-being of the local economy, which they often translate into low unemployment levels. State agencies desire the same outcomes, but balanced across the state; they also may be interested in promoting an equitable sharing of education and training resources statewide. Federal agencies, while interested in providing for a growing nationwide economy, often give equal weight to equity in the provision of services to minorities, the handicapped, and underprivileged groups.

        These goals are communicated, formally and informally, to school district and program administrators, and they represent the de facto standards against which the performance, progress, or success of the local education and training system is judged. At the school and program level, goals arise from and are an attempt to crystallize the interests and desires of local, state, and federal constituencies. Thus, federal goals tend to be more broadly conceived and stated than state and local goals, which in turn find expression in specific school-level goals.

        In addition to the felt need that often drives school systems to develop their own formal goals, federal legislation, i.e., the Perkins Act, requires states to have explicit goals for vocational education; a clause in the act says that states must

develop and implement a statewide system of core standards [emphasis added] and measures of performance for secondary and postsecondary vocational education programs.[19]

        It is clear from our discussion above that goals can be arranged into a natural hierarchy, from the relatively broad:

By the year 2000, every adult American will be literate and will possess the knowledge and skills necessary to compete in a global economy and exercise the rights and responsibilities of citizenship.[20]

        to the very narrow:

Accounting clerks will be able to "classify and record transactions, assets, liabilities, capital, revenue, and expenses."[21]

        However, it is not always the case that only broader goals are generated at the federal level and narrower goals at the local level. For example, the Perkins Act contains some relatively narrow goals, such as its requirement that states develop a system of measures and standards, and schools often adopt relatively broad goals, such as "providing opportunities for personal growth through basic vocational education and upgrading the knowledge and skills needed to keep pace with changing technology." Ideally, each higher-level goal is translated into a larger number of more locally specific goals as we move from national to state to local to school to individual levels. In any case, the final result should be a set of interlocking goals. In addition to translating higher-level goals into lower-level ones, each succeeding organizational level may add goals of regional or local interest.

        The research literature on goal setting indicates that for goals to affect behavior appropriately, each goal must be seen as meaningful and realistic, each must be subscribed to by all interested parties, and each must be stated in such a way that it is possible to know if or when it has been achieved. It is perhaps obvious, but important, that goals not conflict; with multiple organizational levels, conflicts are often difficult to detect.

        Thus, an ideal system of goals is one that is broad at the highest level and interlocks with and becomes increasingly specific at lower levels.[22] Constituents at each level must understand and support the goals of the organizational level above them and develop goals at their level that support higher-level goals. As much as possible, goals must not conflict and must be stated in a way that allows constituents to measure progress toward them or achievement of them. Finally, if a set of goals is not equally important (and they usually are not), then a set of priorities must be established, promulgated, and supported. As often as not, a major stumbling block is not that the goals themselves conflict but that limited resources do not allow all goals to receive equal support. In this case, what may be perceived as conflict among the goals is really conflict between groups that have different goal priorities.

Limitations in Goals

        As we have already suggested, the major shortcomings that we find in goal systems are as follows:

        - goals are not stated in ways that permit progress or attainment to be judged,
        - priorities among competing goals are not established or communicated,
        - goals are not communicated to the individuals responsible for achieving them, and
        - goals are not supported with adequate resources and incentives.

        Constituents' goals are ultimately expressed in mission and goal statements at the school and program levels. To illustrate some of their shortcomings, we turn to specific examples of goal statements. Figure 4.1 shows a sample mission statement from one of the schools we visited; the formal school-level goals of this same institution are listed in Figure 4.2. We could also provide a complete list of goals at the program and course levels, but these lists are prohibitively long and too detailed to include here. At the program and course levels, goals frequently appear as statements of required occupation-specific task proficiency (e.g., for an accounting clerk, "journalizing transactions into multicolumn journal") and employability skills (e.g., demonstrating self-control).

Figure 4.1--Sample School Mission Statement

        The mission statement represents the overarching goal for the school and is presumably embodied in the goals listed in Figure 4.2. The goals are indeed lofty and admirable. Unfortunately, such goal statements are often developed, occasionally revised, and then consulted only once or twice a year. For example, though we do not know how frequently these goals are reviewed, this particular set of goals came with a notation that it had been adopted in October 1973 and revised in November 1978, December 1986, and July 1990, i.e., only four times in 17 years.

        Two things are obvious from the list of school goals in Figure 4.2. First, none of the goals is stated in a way that permits the school administration to know definitively whether it has been met. For example, what does it mean to assist a student in determining individual vocational goals? And how would we know we had done it? Would a reference section in the school library on career information be sufficient? Would every student have to be assisted for this goal to be met? When levels of goal attainment are left unspecified, it is tempting to treat the goal as an absolute or to accept any degree of success as complete success.

        Second, because these goals are open-ended, it is clear that there are not sufficient resources available to meet every one of them--perhaps not even sufficient resources to address every one of them. Yet there is no clear priority among goals. Knowing how or when to make trade-offs among goals is further complicated by the lack of clear statements that would set a target for each goal. Would the system reduce the assessment of basic skills to fund a different level of service to special populations?

Figure 4.2--Sample School Goals

        Because of the lack of clear operational definitions in these goals, this school will find it difficult to judge progress and will be frustrated by conflicting implicit priorities among the staff and in the community. On the other hand, the administration will always be able to say positive things about what the school is doing (if you do not have explicit goals, it is easy to say that you are achieving them or "working toward them").

        This is not to say that broad goals are inherently "bad." Clearly, they provide a general direction within which programs can function. However, to the degree that no shared understanding exists about their meanings or how to translate these meanings into actions, there may be conflict or friction in the system. Sometimes goals are left broad in an attempt to bridge underlying disagreements. But this is more useful as a negotiating tool than as an accountability vehicle.

        Of course, one can err in the opposite direction as well. Highly operationalized goals can also wreak havoc on a system. As we will point out in our discussion of measures, there is often a tendency to place more faith, and hence more importance, in those things that we can easily quantify, sometimes to the detriment of more important but less easily measurable goals.

        Even the best of goals can fail if they are not understood and supported by the staff and the administration. Several things can go wrong in promulgating goals throughout the system. For example, an instructor in one school that we visited was never apprised that his job included the responsibility for placing students. Two things happened as a result. First, the instructor did not know to focus effort on student placement, so placement suffered initially. Second, the instructor was given a poorer performance review than perhaps he deserved. In this case, the goal was simply not communicated properly to the person responsible for achieving it. It is often equally inappropriate to say that a given goal is everyone's responsibility--with no incentives or sanctions for individuals to work toward its achievement. Such a goal quickly becomes no one's responsibility.

        Goals often succeed or fail because of the support that is provided by administrators. While we may be tempted to think of support as simply providing the means to accomplish a task, it also includes the development and maintenance of an atmosphere that rewards task performance. Consider a goal to use advisory committees to ensure that curricula remain up-to-date and relevant to community needs. A school can provide support for this goal by providing time and space for meetings to occur. If there is little incentive to hold the meetings, all the time and space in the world will not achieve the goal. The school needs to create an incentive for holding these meetings. One method might be to require that committee reports be part of an instructor's performance review package or that instructors be required to present committee reports twice a year at a department faculty meeting. The important point is that supporting a goal means providing the time and resources so that it can happen and the incentive structure that will encourage it to happen.

        This leads to our final point with regard to goals: the importance of clearly stated and supported priorities among goals. In the absence of explicit schoolwide priorities, individuals will develop and act on implicit priorities. These implicit priorities are likely to be a source of conflict in a school. For example, many vocational schools offer customized training for local employers. Unless there is a clear statement of the priorities for these activities vis-à-vis the school's regular program of classes, departments may enter into conflicts over space and instructor time. Limited time and resources and an abundance of goals demand that priorities be set. Priorities guide individuals in allocating time and resources among competing demands, as in the example above.

        Goals are crucial to an accountability system. However, no matter how clearly stated or strongly supported, if a system is unable to measure progress toward achieving them, accountability will fail. In the next subsection we discuss the importance of measures and ways in which failures in the measurement system can affect local school accountability systems.

MEASURES

The Role of Measures

        Although the term "measure" may seem abstract or theoretical, the concept of a measure is familiar to everyone involved in education. A measure is nothing more than a quantitative index describing the status of a phenomenon. Test scores are a common educational measure, but educational measures also include counts and tallies of outcomes (e.g., course enrollment, attendance, participation in extracurricular activities) and ratings of performance (e.g., grades, judgments about the adequacy of performance on job-related tasks). Common vocational education measures include scores on tests of occupational knowledge, tabulations of the percentage of graduates finding jobs in a particular occupational field, and proportions of occupational competencies mastered by students.

        Whether we like it or not, measures play important roles in our society. They provide quantitative information about diverse phenomena from athletic performance to judgments about beauty. Such data seem to hold a tremendous fascination for people and to wield a powerful influence over our lives. Things we can quantify seem to carry greater weight than things we cannot.

        This is true in vocational education as well. Although there are many valuable vocational outcomes that do not translate directly into simple measures, e.g., self-esteem, "quality of the workforce," and deportment, when people think about vocational program outcomes, they frequently think in quantifiable terms. Under the circumstances, the confusion between goals and measures alluded to in the previous subsection is understandable. It is easy to focus on outcomes that can be counted while overlooking the importance (or lack of importance) of the things we are counting.

        The fascination with and respect for data pervade the current debate about educational reform. The campaign for educational choice relies upon measures of school quality to inform parental decisionmaking. Standardized test scores are one such measure that seems to have tremendous credibility both for parents and educational policymakers. The re-authorization of the federal vocational education act mandates the establishment of measures and standards for program evaluation and improvement purposes. Both of these reform efforts are geared toward increasing accountability in education, and they would be crippled without an appropriate set of measures.

        The chief role that measures play in accountability is to provide evidence of the attainment of goals, and the most important measures are those that are goal-related. Vocational programs frequently have goals for students that involve learning occupational knowledge, performing job-related skills, completing a sequence of courses leading to competency in an occupational area, and finding employment or pursuing additional schooling. Consequently, measures of occupational knowledge, job-related skills, course completion, and placement are highly relevant to vocational programs.

        What characteristics should measures have if they are to be used as tools for local accountability?[23] The most important features of individual measures are consonance with goals, technical quality, and meaningfulness. In addition, the collection of measures used to describe a goal should be sufficient to portray overall status with respect to the goal. This characteristic of sets of measures can be called sufficiency. Consonance and sufficiency are discussed in the next paragraph, followed by quality and meaningfulness.

        An individual measure is consonant with the goals of a particular school or program if it provides information that is relevant to an endorsed goal of the school or program. For example, the percentage of students who complete a unit on teamwork skills is a measure that is relevant to the goal of preparing students to be effective workers in the modern workplace. As a single measure, it is consonant with this broad goal. However, completion of the teamwork unit is inadequate if it is the only measure of this goal. Many other dimensions of preparedness would need to be assessed before one could judge attainment of this goal with confidence. In this case, a set of measures is needed to assess performance with respect to this particular goal. Other constructs one might want to measure include personal management skills, problem solving ability, communication, and basic skills. A set of measures would be sufficient if it provided adequate evidence to judge attainment of the goal.

        The technical quality of a measure usually is judged in terms of reliability and validity. For the purposes of this nontechnical discussion, reliability can be equated with accuracy and validity with appropriateness. A measure is reliable/accurate if it produces a score with a minimum of error. Measurement errors can come from many sources: ambiguous directions, poorly written questions, and "human errors" in compiling information or computing statistics. In tests, scores can be affected by the selection of questions or the choice of question formats. One way to determine the reliability of a test is to administer it two or more times to the same individuals. This process will seldom produce identical scores; however, if the tests are reliable, the scores will be quite similar. Large differences in scores are an indication of unreliable tests. To be useful as indicators of goal attainment, measures must be accurate.
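        To make this repeated-administration check concrete, the following minimal sketch (in Python, with invented scores) estimates reliability by correlating two administrations of the same test. The data and the score scale are assumptions for illustration only.

    # Minimal sketch: estimate test-retest reliability by correlating two
    # administrations of the same test to the same students (invented data).
    from statistics import correlation  # available in Python 3.10+

    first_administration = [72, 85, 60, 91, 78, 66, 88, 74]
    second_administration = [70, 88, 58, 90, 80, 63, 85, 77]

    r = correlation(first_administration, second_administration)
    print(f"test-retest reliability estimate: r = {r:.2f}")

    # Values near 1.0 mean the test ranks students consistently; large
    # score shifts between administrations pull r down, signaling an
    # unreliable measure.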

        A measure is valid/appropriate for a particular purpose if it reflects the condition it is being used to represent. In the example above, the passing rate on the teamwork unit might not be a valid indicator of workplace preparation. One person could pass the unit and be a poor employee; another could fail the unit and be an excellent employee (because he or she did not take it seriously or because the unit did not address teamwork in the workplace well enough). In reality, appropriateness/validity is not a characteristic of the measure itself but of how it is interpreted by the user. To be effective for accountability purposes, measures must be valid indicators of the status of goals they are taken to represent.

        Finally, measures should be simple, clear, and direct enough to be understandable to constituents. This is what we mean by the broader term meaningful. Measures must be neither too complex nor too elaborate if they are to make sense to the average person.

Limitations of Measures

        Measures are ineffective as elements of a local accountability system if they are

        - not consonant with goals,
        - technically inadequate (unreliable or invalid), or
        - not meaningful to constituents.

        Measures that are not consonant with goals fail to provide necessary information for program improvement. Measures that are unreliable or invalid give false signals about the status of the system. Measures that are not meaningful to constituents cannot be translated into appropriate feedback. These types of deficiencies limit the value of the measures as tools for accountability. The following examples illustrate the practical limitations on measures we have encountered in vocational programs.

        Lack of Correspondence Between Measures and Goals. A set of measures fails to correspond to program goals if there are goals that are unmeasured or measured incompletely. In this situation, constituents lack objective information to judge program success. For example, if the broadly stated goal of a cosmetology program is to prepare students to be successful cosmetologists, then students' grades in cosmetology courses provide some measure of the attainment of this goal. However, grades are an incomplete measure of this goal. They do not indicate specific knowledge of key elements of cosmetology, they do not differentiate between knowledge of facts and the ability to perform the tasks associated with the job, and they do not necessarily correspond to likely success as a cosmetologist.

        In comparison, it might be possible to combine a larger set of measures to judge the attainment of the cosmetology program goal. Such a set could consist of the following measures:

        - grades in cosmetology courses,
        - rates of on-time completion of the cosmetology sequence,
        - performance on the state licensing examination, and
        - rates of placement in cosmetology-related jobs.

        In this instance, no single measure would provide adequate data to judge the program's success in meeting its goal. However, in combination, these measures might be adequate to assess goal attainment.[24]
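        As a hypothetical illustration of how such a set might be combined, the sketch below (in Python) checks each measure against a threshold and flags shortfalls. The measure names, observed values, and thresholds are all invented for illustration; they are not standards drawn from the programs we studied.

    # Hypothetical scorecard: combine several measures into one judgment of
    # goal attainment. Observed values and thresholds are invented.
    measures = {
        "mean course grade (0-4 scale)": (3.1, 2.5),   # (observed, threshold)
        "on-time completion rate":       (0.74, 0.70),
        "licensing exam pass rate":      (0.88, 0.80),
        "related-job placement rate":    (0.61, 0.65),
    }

    shortfalls = [name for name, (observed, threshold) in measures.items()
                  if observed < threshold]

    if shortfalls:
        print("goal not fully attained; below threshold:", "; ".join(shortfalls))
    else:
        print("all measures meet their thresholds; goal attainment supported")

        Even a combined scorecard of this kind is sufficient only if the set of measures covers the goal; as footnote 24 notes, this particular set omits any indicator of social skills.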

        When goals are unmeasured or measured incompletely, people have to rely on subjective judgments about goal attainment or they have to ignore the unmeasured goals. If concerned constituents have only their own subjective impressions to use as a basis for feedback to programs, decisionmaking can more easily become politicized. Similarly, if no data are collected to determine whether the program is meeting a goal (e.g., serving students with special needs), less attention is likely to be paid to this goal (e.g., the needs of these students).

        It is also possible to include measures that do not correspond to any goals. One must be cautious that such measures do not supplant goals and become the focus of decisionmaking. There is a natural tendency to attend to whatever data are produced and, by extension, the implicit goals they instantiate. Recent emphasis on competency testing may provide a case in point; programs may attend to test scores and the actions that can be taken to raise scores while not attending to the original goal that scores were supposed to reflect, e.g., job preparation. More generally, the mere existence of data is a powerful magnet to attention, and collecting measures that do not correspond to explicit goals can raise the implicit goals embodied in those measures to a prominence they do not deserve.

        Measures That Are Technically Inadequate. There are a number of ways that measures can be inadequate in terms of quality. In technical, psychometric language, we define quality in terms of reliability and validity. Measures are reliable if they are accurate and consistent. For example, in most schools the registrar's report of the percentage of students who complete the cosmetology sequence in the allowed amount of time is likely to be accurate, and, if compiled again, it would likely yield the same results. In contrast, a follow-up survey of the percentage of completers who found jobs in a field related to cosmetology may not be as trustworthy. Such surveys usually have high nonresponse rates, and those who do respond may not be representative of all program participants. Consequently, the survey's findings might change if it were repeated, and the conclusions drawn from it are likely to be inaccurate. Because of the importance of employment as an outcome of vocational education and the fact that it is frequently assessed through a survey of some kind, it is especially important for participants in a local accountability system to be aware of these potential problems.
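        The following small simulation (in Python, with assumed counts and response rates) illustrates how nonresponse can inflate a survey-based placement rate:

    # Assumed numbers: 200 completers, 120 of whom found related jobs,
    # so the true placement rate is 60%. Employed graduates are assumed
    # to return the follow-up survey far more often than others.
    completers = 200
    employed = 120
    unemployed = completers - employed

    response_rate_employed = 0.50    # assumption
    response_rate_unemployed = 0.20  # assumption

    employed_responses = employed * response_rate_employed        # 60
    unemployed_responses = unemployed * response_rate_unemployed  # 16

    observed_rate = employed_responses / (employed_responses + unemployed_responses)
    print(f"true placement rate:  {employed / completers:.0%}")  # 60%
    print(f"survey-reported rate: {observed_rate:.0%}")          # 79%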

        Measures are valid for a particular purpose if the inferences drawn from them (e.g., about the occupational knowledge of students and their preparation for work) are correct. For example, performance on the state licensing examination probably is a reasonable indication of readiness to be a cosmetologist. On the other hand, performance on a ten-item test developed by a commercial publisher in one state may or may not be an appropriate way to judge preparation for cosmetology in another state.

        Rather than engage in a detailed theoretical discussion of sources of measurement error and threats to validity,[25] we will describe a few examples of measurement problems associated with technical quality that are likely to be encountered in the vocational context. This is not meant to be an exhaustive review of potential measurement problems but an illustrative presentation of the ways in which data can be deficient in practice. These examples include questions about the technical quality of

        - occupational competency tests,
        - follow-up placement and employment surveys, and
        - labor market demand projections.

        Occupational competency testing plays an increasing role in vocational education programs. A growing number of states and organizations are developing occupational competency assessments, and a growing proportion of these assessments are expanding beyond the use of paper-and-pencil multiple-choice tests to include alternative assessment techniques such as performance-based measures and portfolios. There are many ways tests can be unreliable or invalid for local accountability purposes. As the use of tests increases, these concerns grow.

        A brief review of reports about vocational testing programs currently available or under development suggested a number of potential psychometric problems, particularly with new forms of assessment. Alternative forms of tests generated repeatedly from item banks may not be comparable in difficulty, so scores may not be reliable. Short competency tests may not sample adequately from the domain of skills needed to perform a job, so results may fail to reveal significant deficiencies in students, yielding invalid impressions of competence. Measures of performance that pose hands-on tasks may have imprecise guidelines for scoring, so standards will vary from teacher to teacher. Collections of student work products in portfolios may reflect optimum performance under conditions in which outside assistance and revisions are permitted, not typical performance under joblike conditions. All these are potential threats to the reliability and/or validity of measures, and all may lead to conclusions about program performance that are incorrect.

        Measurement quality also is a concern in the case of follow-up placement and employment surveys. These measures often suffer from poor data collection procedures and low response rates. For example, in cases where follow-up data are collected by the State Department of Education or other state agency at some distance from the program site, local programs often find it necessary to supplement state efforts with locally collected data that more closely match actual placements. This problem is lessened when state data collection is based on electronic linking of school records and employment data (e.g., unemployment insurance fund contributions), but this is done only in a few states.

        More often schools are responsible for generating their own placement data but are not given adequate resources to do a thorough job. When individual programs are given the responsibility of tracking their own graduates, conditions are ripe for errors. Programs tend to interpret responses in the most positive light. For example, they are likely to trust one student's report of a second student's job status or to accept a student's comments about job intentions in lieu of data on actual job placement.

        Projections of labor market demand are another area in which measures often are invalid. Many local programs receive projections from state or federal agencies to use to estimate local demand for training. This information should help programs better plan to meet the needs of local employers. However, aggregated demand projections often yield invalid estimates of local demand, and they can lead to poor program planning. For example, a statewide shortage of nurses may not translate into a local shortage. Similarly, locally generated data on employment demand are not always trustworthy. In both cases, employers have been guilty of responding in terms of the employees they would like to have under ideal conditions, not the employees they will actually hire.

        A related concern has to do with the robustness of measures in high-stakes contexts. There is ample evidence that test scores and other measures can be corrupted (i.e., scores no longer reflect underlying ability) as the importance attached to the measure increases.[26] In the vocational context, this means that when the stakes attached to scores increase (for example, if it is necessary to pass a test to receive a certificate of completion in a vocational program), scores tend to rise irrespective of changes in actual knowledge. This occurs for many reasons, including familiarity with the tests after repeated use and conscious "teaching to the test."

        Unfortunately, most measures are susceptible to corruption. Despite the rhetoric that a new generation of tests is being designed "to be taught to," many vocational competency tests are likely to be corrupted if pressures on scores grow. Fortunately, local accountability concerns do not usually create such high stakes for performance, but they can, and certainly statewide influences increase the importance of performance and place added demands on measures.

        Measures also may be invalid as accountability tools when external conditions, such as unemployment rates, affect them. Program performance, particularly placement of graduates, is not solely a function of training effectiveness; it also is affected by local economic conditions. For example, placements may decline in a recession though the quality of the training and the skill of the graduates have not changed. Under these circumstances it would be incorrect to use a measure of placements to indicate attainment of specific training goals. The measure would still be a valid reflection of community demand, but it would not be a fair indication of graduate skill or instructional quality. Consequently, one must be cautious about interpreting outcome measures that are linked to local economic conditions.

        Measures That Are Not Meaningful. The last criterion for effective measures is meaningfulness. There are many ways in which measures may fail the test of meaningfulness in practice. Measures are of limited value if they are unclear or confusing (e.g., they are statistically complex), if they are not available in a timely manner, or if they do not address questions that are important to constituents.

        Counts, tallies, and percentages reported at the student or program level are generally well understood, but not all measures are this clear. Complicated learning-style profiles or scaled results from locally constructed occupational competency tests may be too complex or obscure to be understood easily. Some of the worst problems occur when programs make "statistical adjustments." For example, one college reported that "132%, or 632 of the 477 students scheduled to graduate from a real estate training program, went on to graduate."[27] This creates an impression that the program was phenomenally successful (or ridiculously incompetent). In fact, the school had no data at all on the number of students who found jobs in real estate. It may well have been true that this measure was reliable, but it was hardly meaningful to constituents.
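        Reconstructing the arithmetic behind this figure (using the definitions reported in footnote 27, with the prior-year enrollment back-calculated and therefore approximate) shows how dividing two incommensurable counts produced the 132%:

    # "Scheduled graduates" was defined as 80 percent of the prior year's
    # enrollment; "graduates" was a separately tallied count of completers.
    enrolled_1987_88 = 596                                # approximate: 477 / 0.80
    scheduled_graduates = round(0.80 * enrolled_1987_88)  # 477
    reported_graduates = 632                              # completers per NCES report

    rate = reported_graduates / scheduled_graduates
    print(f"{rate:.0%} of scheduled students 'graduated'")  # 132%

    # Dividing one count by an unrelated count yields a percentage over 100
    # that may be computed reliably yet remains meaningless to constituents.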

        Timeliness is an important attribute of measures because most school-related decisions are time-dependent. Students and parents have to make enrollment choices by a particular deadline, teachers and industry advisors have to make curriculum choices prior to the start of the term, and administrators have to make hiring decisions on an annual basis. For goal attainment data to be useful, they need to be available in a timely manner.

        Finally, measures should provide data that are responsive to the kinds of questions constituents are likely to ask. There may be many ways to gather information relevant to a particular goal, e.g., to serve students with special needs. Some measures will be more relevant to employers, instructors, parents, or students than others. When choices exist, schools should opt for measures that have the greatest meaning for their multiple constituencies. To be meaningful, measures must be understandable, timely, and responsive.

        In conclusion, there are many ways in which measures can fail to fulfill their role in an accountability system in practice. One must not assume that because something is quantitative it is good. Nor should the reader be left with the impression that a measure is poor if it is qualitative. In both cases the important questions to ask are whether the data are consonant with goals, reliable and valid, and meaningful to constituents. These are the criteria that should be used to judge the quality of measures in a local accountability system.

FEEDBACK

The Role of Feedback

        We use the term "feedback" to refer to two related but distinct processes: (1) the flow of information about school-based programs, objectives, and outcomes to constituents and school staff that forms the basis for judgment about how well the total vocational education system is working, and (2) the flow of information (usually judgments, opinions, and interpretations) between the various groups of constituents and the school administrators and staff (see Figure 2.1).

        So defined, feedback is a continuing process that takes many forms--both formal and informal, direct and indirect. For example, course enrollment levels are one piece of information about the health of a given program. This information is published formally in an annual report, and it is also available informally to instructors and staff throughout the year based on their direct observation of classes.

        The level of employer satisfaction is another indication of program performance. Satisfaction can be measured directly through surveys and conversations, or it can be inferred indirectly from employers' participation in advisory groups and employers' continued interest in hiring program graduates. In this manner, constituents' actions constitute indirect feedback about their judgments and opinions. This is true for students and parents as well as for employers.

Limitations in Feedback

        Feedback includes many forms of information sharing, and it is subject to all the problems that can plague human communication, including insufficient information and inaccurate or insensitive communication. In the vocational education context, some feedback occurs within organizational boundaries, while other, potentially more important, communication with constituents must travel across organizational boundaries. This may add an additional layer of difficulty if the organizational culture and the "outside" culture have different standards or expectations regarding communication. For the purposes of this discussion, limitations in feedback will be categorized as

        - insufficient communication,
        - lack of accuracy in communication, and
        - low signal-to-noise ratio.

        Insufficient Communication. Information is one key to effective action. For program administrators and staff to initiate, modify, or discontinue programs rationally (see section on organizational change mechanisms), they must have valid, reliable, and meaningful program information to guide them. For example, to the degree that instructors are isolated from local employers and do not receive feedback about employers' hiring priorities, instructors will be unable to adjust program content to employers' changing needs. Similarly, to the degree that school administrators do not receive job placement information on program graduates, their decisions regarding program expansion or contraction will suffer. To the degree that community members lack information to judge the value of their local vocational education and training system, they will be unable to shape it to their needs. They may be less willing to provide the fiscal support it needs as a result.

        Communication may be insufficient for several reasons. First, people tend to err on the side of sharing too little information rather than too much. Those who have information to communicate often feel that they have communicated more than those receiving it feel they have been given. One of the most difficult aspects of communication is judging the appropriate amount of information to share.

        Second, a person may limit the amount of information he or she communicates publicly for political reasons since the control of information contributes to the exercise of power. Those who possess information have an advantage over those who do not. Sometimes the conscious restriction of information is quite subtle. For example, business representatives who join together in program advisory committees are by their nature local competitors--e.g., beauty salon operators in the same city. This can generate considerable pressure on committee members to be less than entirely forthcoming with information. In one location, an advisory committee felt the need to have a formal written agreement concerning the use of information obtained through committee deliberations. The members believed that this agreement allowed a much freer exchange of information.

        Inadequate communication also can arise as the result of ineffective organizational arrangements. For example, one community college we visited had a centralized placement office that carried out all of the placement support functions for program graduates. While this specialization appeared to be an efficient use of resources, it created an unanticipated buffer between instructors and local employers. Because instructors were not responsible for job placement, they failed to receive the natural flow of communication about employers' needs and program content that occurs during the placement process.

        Lack of Accuracy of Communication. Information can be communicated inaccurately or insensitively for many reasons. First, human communication is susceptible to many forms of bias, both intentional and unintentional. Intentional bias occurs when someone purposely distorts reality. Unintentional bias is more difficult to detect and may be far more common. A local employer who is especially happy or unhappy with the quality of a program graduate can provide unintentionally biased information. If the employer is persuasive, a single strategically placed remark can have a dramatic effect on others' perceptions of the program--regardless of whether the remark truly reflects the overall quality of program graduates. The employer may have no intention other than to report his or her experience with a single graduate, yet a positive or negative anecdote can have much greater impact than a report filled with statistics.

        Second, information can be distorted to serve a particular agenda. Individuals can put their own "spin" on information by highlighting or downplaying either its positive or negative aspects. For example, in one location we visited, a child care worker program was being discontinued. Although job placement rates were high, the program served primarily women, and the graduates were earning a relatively low wage. The administration decided to dismantle the program because, in their opinion, it perpetuated women in low-paying jobs. Clearly, wage levels and placement rates provided two different perspectives on the success of the program. Administrators decided to emphasize one piece of information over the other; students might have put a very different spin on the information. We do not cite this as an example of poor or good judgment on the part of these administrators; rather, it shows how one piece of information may be given unwarranted emphasis over another.

        Low Signal-to-Noise Ratio. In some instances, there may appear to be a substantial flow of communication, yet very little useful information is being exchanged. The classic example of this is a political speech, but we do not find large amounts of noise and small amounts of content only in political rhetoric. Vast amounts of noise can masquerade as information in many other settings. In vocational education, this may take the form of undigested, unsummarized, unsynthesized, or unanalyzed statistical information. For example, schools may publish page after page of course enrollment figures. If this information is not summarized or if additional contextual information is not present (such as trends in enrollment over time or local employment figures), the information is effectively noise that the reader must sort through. This is not to say that statistical reports are worthless; rather that it can be difficult to find the key information amid the noise.

        Why does this happen? Often individuals feel that all information that has been collected should be distributed. Furthermore, it takes an experienced data analyst to find appropriate ways to summarize raw data without biasing the information. One method that can be used to ameliorate this problem is to provide summary information in the body of a report or presentation and to include the raw data in an appendix. Providing only the summary or only the raw data is less likely to be satisfactory.
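        A minimal sketch of this summarize-then-append approach (in Python, with hypothetical program names and enrollment figures) might look like this:

    # Condense raw course-enrollment figures into one-line trends for the
    # body of a report; the raw tables would go in an appendix. All program
    # names and enrollments are hypothetical.
    raw_enrollment = {
        "Cosmetology":    {1988: 42, 1989: 40, 1990: 35, 1991: 31},
        "Auto Mechanics": {1988: 55, 1989: 58, 1990: 61, 1991: 66},
    }

    for program, by_year in raw_enrollment.items():
        years = sorted(by_year)
        first, last = by_year[years[0]], by_year[years[-1]]
        change = (last - first) / first
        direction = "down" if change < 0 else "up"
        print(f"{program}: enrollment {direction} {abs(change):.0%} "
              f"from {years[0]} to {years[-1]}")

        The one-line summaries carry the signal (e.g., "Cosmetology: enrollment down 26% from 1988 to 1991"); the pages of raw figures remain available for readers who want them.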

        Communications with low signal-to-noise ratios have predictable effects. First, individuals simply cease to pay attention to the information they are given. Decisions continue to be made but without the benefit of useful information. Second, increased time is devoted to sorting through the data to find and interpret the useful information that is contained amid the noise. Third, the noise is confused with useful information, leading to inappropriate conclusions and actions.

        To summarize, feedback represents the flow of information conveyed by the measures to administrators, program staff, and school system constituents, and the flow of information among administrators, staff, and constituents. Problems in feedback result in inaccuracies or distortions of information that can lead to poor decisionmaking.

ORGANIZATIONAL CHANGE MECHANISMS

The Role of Organizational Change Mechanisms

        Studies of organizational behavior have documented the tendency for organizations to persist in the same actions rather than to change.[28] Persistence occurs, in part, because there are costs (both real and psychological) associated with change. Schools demonstrate the same inertia and the same tendency to maintain the status quo rather than change. The "loosely coupled" nature of education, in which hierarchical authority is lessened, may make schools somewhat more responsive to environmental pressures than other organizations.[29] However, this same "decoupling" may reduce the likelihood of large-scale change while increasing the ease with which individual classroom instruction responds to pressures for change.[30]

        Our observations of vocational education programs are consistent with these viewpoints. We found that large programmatic changes were infrequent. For example, it was rare for a school to initiate a new program or terminate an old one. On the other hand, change was common within specific occupational programs. Small modifications to courses, curriculum, equipment, student requirements, and assessment methods occurred regularly.

        In discussing organizational change mechanisms, we include both types of change, those affecting programs as a whole and those affecting elements within programs or services. At the school level, the director, principal, or executive committee is the chief agent of change; at the program level, the program coordinator, department head, or curriculum committee acts in this role. In small programs (which were quite common in the schools we visited) there may be only one or two instructors who are responsible for curriculum change. In abstract terms the role of organizational change in the accountability network is the same in all these situations.

        When a local accountability system is functioning effectively, change is fostered because constituents have power to reward or sanction the behavior of the institution. Not only can they make their opinions known regarding the success of the institution, they can act on their beliefs. Students can leave programs and share negative reviews with peers if they do not believe the school is achieving the goals that are most important to them. Likewise, businesses can withhold their participation--on industry advisory committees and as employers of program graduates--if they do not believe the school is achieving goals that are important to them. Similarly, instructors, parents, and community members have power as voters, board members, participants, and advisors to create incentives for achieving appropriate goals.

        Administrators have options for responding to constituent feedback (as well as their own evaluations of program performance), and they play the key role in change. In our model they are the recipients of information on system performance (measures) and of feedback from constituents on their interpretation of measures and the degree to which the program is meeting their needs. Administrators use these inputs to make decisions about continuation or modification of programs.

        Organizational change need not occur through a regular, formal, systematic process for a local accountability system to function. That organizations tend to change in evolutionary and episodic ways does not detract from the accountability model we described. Measures of program performance and feedback from constituents influence program decisionmakers, although it may take time for these effects to be translated into action.

        What are the characteristics of an ideal organizational change mechanism? Three elements seem crucial to us. First, the organization must be responsive. When data suggest that goals are not being met or when feedback suggests that constituents are dissatisfied with performance, actions need to be taken to understand and improve the situation. The key word here is improvement. It is not enough to mollify critics, adapt measures, or discount results. What is required is movement toward a more effective way of doing things.

        The second characteristic of an effective change mechanism is that it must be forward looking. Schools must be willing to look beyond short-term satisfaction to intermediate and long-term goals. Business cycles can affect short-term demand for employees, so schools may need to ignore short-term fluctuations in placements to be responsive to long-term industry demands. Schools must avoid eliminating programs that may be viable in the long run.

        Finally, organizations must be fair. They must respond to the needs of all constituents and not give undue weight to feedback from some groups over others. The adage "the squeaky wheel gets the grease" is often true in educational settings. An effective change mechanism is one that balances the needs of constituents, respecting the desires of large and significant groups without dismissing the wishes of small groups.

Limitations in Organizational Change Mechanisms

        Organizational change and reform can falter for many reasons. In our visits to vocational schools we saw examples of decisionmaking and program reform procedures that were far from ideal. Among the shortcomings were the following:

        - options for change constrained by state and federal regulations and funding,
        - overattention to feedback from particular constituencies,
        - attention to short-term demand at the expense of long-term needs,
        - difficulty balancing competing demands and principles, and
        - in extreme cases, the absence of any working mechanism for change.

        Often options for local change are constrained by state or federal regulations and funding guidelines. In one state, new program funding was available only in occupational areas where state labor market demand projections indicated growth. However, local school administrators did not believe these projections accurately predicted local demand. Nevertheless, schools could not receive state funds for new programs unless the programs appeared on the state's approved list. Other kinds of regulations can limit administrators' options for reform. One vocational school had a two- to three-year waiting list for enrollment into its nursing program, but it was not allowed to start additional classes because of limits placed on it by the State Board of Nursing.

        One of the most common restrictions faced by all educational programs, not just vocational education, is limited funding. In the vocational context, resource constraints reduce schools' ability to respond to changes in demand. In many states, funds for vocational education programs have been "capped," and schools receive no additional resources or only partial funding when they add students or programs.

        Another factor that can reduce the effectiveness of change procedures is overattention to feedback from industry. As an example, bowling alley operators in one community made a strong case that training was needed to prepare mechanics to repair automatic pin-setting machines. The school did its best to conduct an objective survey of demand, which lent some support to industry claims. The industry advisory group was adamant that the program was needed, and they were willing to raise funds for the capital expenditures necessary to prepare the facilities. Despite its reservations, the school accepted the group's help to prepare the facilities and develop the curriculum.

        The program was offered, but enrollments were insufficient to sustain it. After some investigation the school learned that the bowling alley operators themselves were withholding information from employees who might enroll. The owners were unwilling to refer employees because they did not want to pay the higher wages that trained mechanics could command. The results of the employer survey were misleading because owners had indicated "the type of employees they wanted, but not the type of employees they were willing to hire." In retrospect, the school believed it was persuaded by owners' desires without an adequate assessment of owners' commitments. The school complied with the wishes of the advisory committee, partially out of respect for the employers. Unfortunately, the space devoted to the bowling machine repair program could have been used more effectively for other programs.

        A related problem occurs when schools attend to short-term demand without consideration of long-term needs. For example, one community college created a program to train pulmonary therapists based on employer surveys that projected a strong immediate need. However, the needs analysis did not estimate turnover and continuing demand in the field. The school soon found itself with a program that could no longer place graduates because all the positions had been filled.

        Managing change can be difficult when administrators have to balance competing demands or competing principles. For example, one area vocational school eliminated its child care worker program despite continuing demand because the program was training women for an occupation the school identified as a low-paid, traditionally female, and "dead-end job." Administrators judged this training program to be an inappropriate use of resources that might better serve to develop more promising training opportunities.[31] In this case the school gave priority to principles over demand, to broader career-oriented goals over short-term employment goals.

        Finally, although it is an extreme case, some institutions act as if they have no mechanism for change. While effective vocational schools regularly update and redesign facilities to meet the changing training needs of their local communities, other schools seem to have little or no capacity for self-improvement. For example, one high school in an urban area provided vocational programs as part of a larger regional training consortium. The school itself did little to broaden the range of courses allocated to it or to improve the quality of its classes or facilities. One reason for this seeming indifference was that vocational education had little prestige at the school compared to college preparatory academic education. Another reason was that the school had little power to affect the allocation of vocational courses. Either through neglect, bureaucratic inflexibility, or the absence of leadership, the school made almost no efforts to improve vocational programs or facilities. Although this example was striking, we have no reason to believe it is typical of vocational programs.

        Overall, there are a number of ways in which practical constraints inhibit organizational change mechanisms. Even when goals are well articulated, measures well defined, and feedback from constituents prevalent, administrators may be ineffective in translating these elements into action. Administrators are influenced by politics and by external factors beyond their control. They are limited by their own capacities as leaders, and their actions can be affected by weaknesses in their change strategies. This includes failure to be responsive, overattention to short-term solutions, and susceptibility to pressures from vocal groups.

SUMMARY

        Theoretical models can be useful tools for understanding social phenomena, and the model we proposed in Figure 2.1 is helpful for describing accountability at the local level. However, theoretical models of social programs have limits--they describe conditions in ideal terms that are not necessarily implemented in specific situations. In this section we presented a number of practical limitations drawn primarily from our study sites that can reduce the applicability of our model and the effectiveness of local accountability systems. These limitations can be described as deficiencies in the components of the model--goals, measures, feedback loops, and change mechanisms--and the interactions among them. Greater familiarity with the functions of accountability in practice will improve our model and its usefulness as a policy tool.


[19]SEC. 115. STATE AND LOCAL STANDARDS AND MEASURES. While we recognize that goals and standards are not necessarily interchangeable--certainly not when standards represent a minimum acceptable level of performance--we believe that the standards required by the act are, in effect, goals for states to strive to achieve.

[20]The National Council on Education Standards and Testing, Raising Standards for American Education: A Report to Congress, the Secretary of Education, the National Education Goals Panel, and the American People, Washington, D.C., January 1992.

[21]Taken from materials provided to us during a school system site visit.

[22]In their discussion of a system of standards and assessments, the National Council on Education Standards and Testing, op. cit., provides a worthwhile scheme for organizing standards that is readily adaptable for organizing local goals. They suggest the following components: (1) an overarching statement that provides a guiding vision, (2) content standards describing knowledge, skills, etc., to be taught, (3) student performance standards, (4) school delivery standards, i.e., capacity and performance of a school, and (5) system performance standards for each higher administrative level, e.g., district, region, state, etc.

[23]This discussion is derived more from psychometric literature than from specific comments made by interviewees. However, all examples are based on incidents reported during our site visits.

[24]The collection of measures is sufficient only if it provides enough information to determine whether students are adequately prepared to be cosmetologists. This question could be answered empirically by comparing performance on the measures with performance on the job. Such a comparison could establish the predictive validity of the measures. In this case, the measures may be inadequate because they do not contain any indicator of social skills, which are likely to be highly correlated with success in this particular occupation.

[25]For such a discussion, see any comprehensive text on educational and psychological measurement, e.g., W. Mehrens and I. Lehmann, Measurement and Evaluation in Education and Psychology, 4th edition, Holt, Rinehart and Winston, Fort Worth, TX, 1991.

[26]D. Koretz, R. Linn, S. Dunbar, and L. Shepard, "The Effects of High Stakes Testing on Achievement: Preliminary Findings About Generalization Across Tests," paper presented at the annual meeting of the American Educational Research Association, Chicago, IL, April 1991.

[27]This was explained as follows: The number of students scheduled to graduate in 1988-89 was defined as 80 percent of the number of students enrolled in 1987-88. Graduates were defined as the number of students listed as Completers on the NCES 2404-A Postsecondary Enrollment and Completion Report.

[28]J. G. March and H. A. Simon, Organizations, John Wiley & Sons, New York, 1964.

[29]J. G. March and J. P. Olsen, Ambiguity and Choice in Organizations, Universitetsforlaget, Bergen, 1976.

[30]J. W. Meyer and B. Rowan, "The Structure of Educational Organizations," in J. W. Meyer and W. R. Scott (eds.), Organizational Environments, Sage, Beverly Hills, 1983.

[31]They had not yet identified those opportunities and developed appropriate training programs at the time of our visit.

