Each of these issues is applicable to educational reforms associated with the nation's Tech Prep[1] and related school-to-work opportunities programs.[2] These concerns are apparent everywhere, but especially in urban and rural areas, where educational reform has proven particularly difficult. Issues related to implementing effective school-to-work related reforms in urban areas are well-documented in a special issue of Education and Urban Society, edited by Seidman and Ramsey (1995). In that issue, Bragg and Layton (1995) point out that both pieces of federal legislation, the Tech Prep Education Act and the School-to-Work Opportunities (STWO) Act, contain directives to ensure funding is distributed to urban and rural schools and two-year colleges. However, as with most other reforms, little is known about the quality of these programs or their impact on students in urban or rural areas.
In most localities, Tech Prep and school-to-work programs have been in operation for less than five years (Bragg, Layton, & Hammons, 1994). As a consequence, too little time has passed for students to have completed an entirely reformed secondary-to-postsecondary Tech Prep program. In many areas of the country, high school graduates participating in Tech Prep first began to matriculate to the postsecondary level during the 1994-1995 or 1995-1996 school years, creating an unsettling lack of information regarding student outcomes. The National Assessment of Vocational Education (NAVE) drew a similar conclusion regarding the need for systematic evaluation of Tech Prep. Boesel, Rahn, and Deich (1994), authors of the Tech Prep section of the NAVE report, recommended increased emphasis on evaluation of Tech Prep programs "using longitudinal studies of student participation, retention, and educational and employment outcomes" (p. 131). They urged government to collect better information regarding how students participate in Tech Prep at both the secondary and postsecondary levels, and how they progress through the system. The study also emphasized the need for more Tech Prep programs to be developed in sites with high concentrations of special population students and for evaluation to be employed to monitor the effectiveness of Tech Prep programs for these students. Often, urban schools and colleges enroll disproportionately large numbers of special population students, making it extremely important to understand how Tech Prep involves and affects these students.
To provide a foundation for future efforts to evaluate programs and assess student outcomes relative to Tech Prep, additional research was needed. This study was designed and conducted to identify, classify, and prioritize student outcomes for Tech Prep. To address questions that would inevitably be raised about stakeholders' differing perspectives toward student outcomes, three stakeholder groups were engaged in the study: educators, students, and employers.[3] The research questions that guided the study follow:
With better information about student outcomes, still other benefits are possible. Researchers, policy leaders, and practitioners may be better able to determine the circumstances under which the fundamental elements of Tech Prep (e.g., articulation, applied academics, stakeholder collaboration, and education-business partnerships) are most viable. In addition, information produced by outcomes assessments can contribute to further refinement of the fundamental Tech Prep concept. Making Tech Prep more accessible and effective for "all" students rather than limiting it to the "neglected majority" is a particularly important issue for urban localities such as those engaged in this study. Finally, having better quality information about student outcomes can help to build more accountability into any evolving Tech Prep system, resulting in better program implementation and evaluation at all levels.
Similarly, Dale Parnell's vision of Tech Prep focused on what he thought should be the key elements of the Tech Prep process, but he identified few student outcomes. In his book The Neglected Majority (1985), Parnell advocated high-quality vocational education, applied academics, and strong relationships between business and education. He argued forcefully to refocus schooling to better meet the needs of the "neglected majority" of high school students who were unlikely to obtain the baccalaureate degree. The 2+2 Tech Prep Associate Degree (TPAD) model conceived by Parnell and further developed by Hull and Parnell (1991) was envisioned to be an equivalent track in rigor and stature to college prep.
In the TPAD model, a common core of "math, science, communications, and technology--all in an applied setting" (Parnell, 1985, p. 144) was to be taught during the last two years of high school and the first two years of postsecondary education at a community, junior, or technical college. Ultimately, this articulated and applied secondary-to-postsecondary educational track was intended to culminate with a two-year associate degree, "the preferred degree for employers seeking to fill a broad range of mid-level occupations," according to Parnell (p. 145). In that statement, Parnell identified a student outcome that is widely associated with Tech Prep: completion with a two-year college associate degree. Other student outcomes were not as clearly specified in the writings of Parnell or others influential in formulating the Tech Prep education approach.
a combined secondary and postsecondary education program which--
(A) leads to an associate degree or 2-year certificate;
(B) provides technical preparation in at least 1 field of engineering technology, applied science, mechanical, industrial, or practical art or trade, or agriculture, health, or business;
(C) builds student competence in mathematics, science, and communication (including applied academics) through a sequential course of study; and
(D) leads to placement in employment. (U.S. Congress, P.L. 101-392, 1990)

Using this definition as an indicator of the outcomes that could potentially be associated with Tech Prep, it is apparent that students who finish the program should obtain an associate degree or two-year certificate, as was specified by Parnell (1985). Adding to the outcome of an associate degree, this definition alludes to outcomes linked to student competence in targeted vocational and academic subjects as well as job placement. The federal law also encourages state agencies to give special consideration to local grant applications that provide apprenticeships or transfer to four-year baccalaureate-degree programs, suggesting that student outcomes could be expanded beyond placement in entry-level jobs--a primary outcome of traditional vocational education programs--to include outcomes pertaining to further training and education, including four-year postsecondary education.
States are also advised to encourage local Tech Prep programs to address dropout prevention and re-entry, the needs of minority youths, youths of limited English proficiency, youths with handicaps, and disadvantaged youths (U.S. Congress, 1990). An "essential element" of the legislation requires that special populations be ensured equal access to the full range of Tech Prep programs, including support services. As such, the federal legislation ensures that Tech Prep not be limited to a select group of students such as the neglected majority or the traditional college-bound, but be inclusive of all students.
It is important to note that while the federal legislation provides some direction in terms of the kinds of student outcomes that should be assessed relative to Tech Prep, the law does not specify that outcomes assessment or any other form of program evaluation be carried out at the local or state levels.[4] According to Layton and Bragg (1992), when Tech Prep programs first began to be implemented, several states built evaluation into the system of performance measures and standards required by the Carl D. Perkins Vocational and Applied Technology Education Act of 1990 (commonly referred to as "Perkins II").[5] At that time, however, only 40% of the states identified student outcomes for local Tech Prep programs, and most often that outcome was academic skill attainment, an outcome mandated by Perkins II. Yet, even then, state officials were questioning how to measure academic attainment, and few other student outcomes were being proposed. (For additional discussion of how states have conducted evaluation within the current political context of Perkins II, see Hoachlander & Rahn (1992); McCaslin & Headley (1993); and Stecher, Hanser, & Hallmark (1995). Although these studies are not directed toward Tech Prep specifically, they do examine how local and state entities have implemented related student outcomes measures according to the Perkins II mandate.)
In 1993, two years after federal funding became available to plan and implement local Tech Prep programs, local coordinators were asked to rate the stage of implementation of evaluation for Tech Prep programs funded with federal Title IIIE funds (Bragg et al., 1994). Among the respondents, who represented nearly 50% of all local Tech Prep consortia in the United States, 40% reported they had not even begun to implement formal evaluations of their Tech Prep programs. Another 30% indicated their consortia were in the planning stage of evaluation, showing that only a minority of Tech Prep consortia were actively implementing formal evaluations, and most of these were very preliminary.[6] Overall, evaluation-related activities were rated among the lowest of 30 potential components of a Tech Prep program, indicating evaluation continued to be neglected within the first year or two that Tech Prep programs acquired Title IIIE funds.
Generally, indicators of student performance relative to Tech Prep have been compliance-oriented, serving to demonstrate accountability to governmental units rather than to improve local programs (Dornsife, 1992). Documentation of student enrollments and program completion, primarily at the secondary level but also at the postsecondary level, has been used most extensively. A doctoral dissertation completed by Hammons in 1992 reported similar findings. However, Hammons did identify outcomes for Tech Prep related to student careers and to attitudes and perceptions associated with education and employment, broadening the pool of outcomes that could be associated with student participation in and completion of Tech Prep. This study made an important contribution to the literature in that it did not confine itself to a narrow set of outcomes but, rather, considered a wide array of potential performance indicators for Tech Prep.
Research conducted by Bragg et al. (1994) concurred with Hammons' earlier conclusions that a broad set of outcomes could be associated with Tech Prep. When local coordinators were asked to rate the priority that should be given to 17 student outcomes, 15 were given a "high" or "very high" priority rating, suggesting evaluations of Tech Prep should be broadly conceptualized and not limited to a few compliance-oriented measures. The following 15 student outcomes were given a "high" or "very high" priority by local coordinators:
According to Hershey and Silverberg (1994), during FY 1993 all states monitored local Tech Prep implementation by having local consortia make progress reports, usually once or twice per year. These reports typically asked local consortia to document how grant funds were used or how particular processes were functioning (e.g., staff development, consortium membership, or planning activities). According to Hershey and Silverberg, 30 states required that local consortia report program evaluation activities or results. The majority of states required local consortia to inform them about the number of students in Tech Prep. Just over one-half of the state coordinators indicated their states required data on student outcomes:
State agencies most frequently required outcome data on secondary school program completion (23 states), postsecondary program enrollment (23 states), postsecondary program completion (20 states), and students' academic skills (17 states). Reports on job placements and students' technical skills/competencies were required in 15 and 14 states, respectively. (p. 29)

These findings contrast with other results reported by Hershey and Silverberg (1994) where local coordinators described evaluation activities as in only a "planning" stage, raising questions about how many local consortia could actually provide the kind of information reportedly mandated by state agencies. Similar to the findings reported by Bragg et al. (1994), many local consortia were planning to collect outcomes data and create computerized databases; however, very few had actually accomplished that goal. Most of the computerized databases were being planned to track transcript data (i.e., courses taken or completed and grades). Fewer were designed to monitor and report student performance relative to specific vocational-technical or academic competencies or work-related experiences (i.e., work-based learning experiences, job placements, or wages).
The national evaluation clearly documents that in FY 1993 very few consortia were engaging students in formal evaluation activities or actively collecting data on student outcomes. When attempting to understand how students move through the Tech Prep system, secondary to postsecondary and beyond, the number of local consortia that were able to provide student outcomes data in the area of participation and completion was so limited as to make most of the estimates meaningless. For example, when asked to provide the number of Tech Prep participants at the secondary level, only 250 of 702 local consortia provided estimates. Far fewer provided estimates regarding high school graduation, employment after high school, postsecondary entry, postsecondary completion, or employment after postsecondary completion. The national evaluation does not attempt to collect other student outcomes data such as "skills levels, competencies, or grades because they are measured, computed, and interpreted differently across localities" (Bragg et al., 1994, p. 117). Therefore, the census to document Tech Prep implementation nationwide will never report student outcomes beyond student participation and completion. This is a concern, since understanding how students benefit from Tech Prep would be useful beyond knowing simply whether they participate in and complete prescribed phases of the Tech Prep system.
Here, the national evaluation's primary goal of documenting accountability is apparent, but that goal may not be as helpful to local and state practitioners as other kinds of evaluation. For example, understanding technical and academic competency attainment among students could help to determine the effectiveness of particular aspects of the school-based curricula. Furthermore, identifying employability skill levels among students could help to determine the quality of the work-based curricula. Fortunately, the related case studies associated with the national evaluation delve into these student outcomes. A report documenting the first in a series of site visits conducted by Hershey, Silverberg, and Owens (1994) focused almost entirely on four processes: (1) articulation; (2) curriculum and instructional enhancement; (3) student recruitment, guidance, and career development; and (4) consortium organization and coordination. Data collected from school records for two cohorts of students in each of the ten case-study sites should expand the universe of student outcomes being investigated alongside the national evaluation and help to address questions about how students benefit from Tech Prep.
Of all the states responding to our request for information about evaluations, five indicated they were primarily participating in and relying on the U.S. Department of Education-sponsored national evaluation of Tech Prep, research on Tech Prep conducted by the National Center for Research in Vocational Education (NCRVE), or other studies to inform them about how Tech Prep is progressing. One of the state coordinators indicated the federal appropriation for Tech Prep was too limited to allow funds to be diverted away from local and state program implementation. Still, most of these states were monitoring Tech Prep implementation as they did other similar programs, and some were engaging local consortia in formal self-assessments as well.
Twenty-eight states were engaging local consortia in data collection, either by having state staff design and carry out the evaluation, usually in conjunction with local personnel, or by contracting the evaluation to a third party. When a third party was chosen, the contract often went to vocational-technical personnel employed by a state's land-grant university. Several state agencies have established strong relationships with vocational-technical education units in land-grant universities for the purposes of conducting formal program evaluation. Many of these units are tapped to evaluate Tech Prep programs, including the vocational-technical units in land-grant universities in Illinois, Minnesota, Missouri, Virginia, and Wisconsin. In addition, third-party evaluations were conducted by other universities, regional education laboratories, or private consulting firms in California, Colorado, Florida, Ohio, Texas, Washington state, and West Virginia. A review of the internal and external evaluation documents produced by state agencies and third-party groups suggested that the third-party evaluations were more comprehensive and rigorous than the internal evaluations conducted by state agencies. More of the third-party evaluations used a longitudinal design and standardized data collection procedures, and more provided a comprehensible definition of the population and sample of Tech Prep students and other stakeholders engaged in the study.
Thirteen of twenty-eight states reported they were conducting Tech Prep evaluations, but did not provide formal reports showing data, findings, conclusions, or recommendations. Rather, most provided copies of guidelines, surveys, and site-visit instrumentation used to collect data. Most indicated that although data was being collected and sometimes already available for use by local and state personnel, a formal report was not distributed.
Sixteen states provided copies of formal evaluation reports, most reporting results for FY 1994, although the reports for Illinois, Nebraska, New Hampshire, Ohio, Washington state, and West Virginia included FY 1995 data. None of the reports attempted to compare Tech Prep in disparate settings such as rural, urban, and suburban. (See Table 1 for a summary of the goals and methods used to carry out the sixteen formal evaluations of Tech Prep programs reported here. Appendix A contains additional information about each of these state-level evaluations.)
About one-half of the studies were longitudinal in design, typically lasting for three years to comply with the three-year time period for grant awards specified in the federal Tech Prep Education Act. Most of the studies utilized multiple methods--typically a document review, site visits, and surveys involving various stakeholder groups. Sometimes the evaluations also used observational assessments and secondary analysis of data supplied by MPR Associates, Inc., the organization conducting the national evaluation for the U.S. Department of Education. Eight of the thirteen evaluations had plans to examine student outcomes, although often these outcomes were not specified in the evaluation reports. Several of the reports stated that student outcomes could not be examined because of the early stage of implementation of Tech Prep. However, a few evaluations did report findings relative to Tech Prep student outcomes. Several of the state-level evaluations that made such claims are discussed in this section.
Table 1. Goals, Methods, and Sources of the Sixteen Formal State-Level Tech Prep Evaluations

California
No. of Tech Prep Consortia: 83
Goals: Determine overall program effectiveness by assessing local and state program implementation and Tech Prep practices. The study focuses on four key areas of concern: (1) a description of Tech Prep education efforts; (2) an assessment of program implementation; (3) an evaluation of program effectiveness; and (4) the identification of effective program implementation strategies (p. 14).
Methods: Five-year longitudinal study designed to evaluate Tech Prep program implementation by a third-party agency. Methods include document review; site visits and observational assessments; analysis of data submitted to Mathematica Policy Research (MPR) Associates, Inc.; and survey questionnaire administration for business/industry (pp. 14-17).
Major Findings Are Presented in the Following Areas:
Source: Rubin (1994)

Colorado
No. of Tech Prep Consortia: 33
Goals: The overall purpose of the evaluation is to assist the individual projects and state program in meeting goals by providing a comprehensive and objective assessment of processes and outcomes. Specific outcomes of the evaluation are
Methods: Three-year longitudinal design by a third party to parallel the three-year funding cycle for Tech Prep consortia. The data collection methods include analysis of data submitted to MPR Associates, Inc.; document reviews; surveys of program coordinators; and site visits (pp. 4-7).
Major Findings Are Presented in the Following Areas:
Source: Keller (1995)

Delaware
No. of Tech Prep Consortia: 13
Goals: No specific evaluation goals are presented in the report.
Methods: Compilation and analysis of extant data sources are used to create a summary document on Tech Prep in Delaware. Secondary student data is collected from the Student Registration Form and the VAX Computer System at the Department of Public Instruction. Postsecondary student data is based on Delaware Technical and Community College's internal data system, matched through social security numbers. Non-student data comes from workshop sign-in sheets and reports; graduate follow-up surveys; and surveys distributed to students, parents, education personnel, government officials, and business and industry representatives (Foreword).
Major Findings Are Presented in the Following Areas:
Source: Campbell (1995)

Illinois
No. of Tech Prep Consortia: 40 funded in 1991; 7 demonstration sites funded in 1993-1995
Goals: The stated goals of this third-party evaluation were to describe micro-level Tech Prep programming in selected sites and determine the effects of Tech Prep participation on students (p. 2).
Methods: During FY94, two demonstration sites were studied, and two additional sites were selected using criteria and data from the FY93 National Tech Prep Survey by MPR Associates, Inc. Student samples were determined at each site using the categories of Tech Prep, non-Tech Prep, and pre-Tech Prep. Student data was collected via transcript review, testing, and group interview. The ACT Work Keys instruments Reading for Information and Applied Mathematics were administered to Tech Prep students to determine progress and level of proficiency. Some students were tested with the Work Readiness instrument developed by project staff. In addition, interview data was collected from Tech Prep students, vocational and academic faculty, administrators, and counselors (pp. 2-4).
Major Findings Are Presented for Four Sites in the Following Areas:
Source: Roegge & Evans (1995)

Minnesota
No. of Tech Prep Consortia: 29
Goals: None reported in preliminary report.
Methods: A follow-up evaluation system designed by a third-party evaluation unit uses data collected from the Tech Prep Identifier Form, Data Submittal Form, Career Planning Survey, High School Follow-Up Questionnaire, and Employer Follow-Up Form. Most findings appear to be quantitative; some qualitative findings appear in the preliminary report, but their source is unknown.
Major Findings Are Presented in the Following Areas:
Source: Brown, Pucel, Johnson, & Kuchinke (1994)

Missouri
No. of Tech Prep Consortia: 12
Goals: Four objectives were given for the evaluation:
Methods: An evaluation conducted by a third party utilized document review (RFPs), a Tech Prep coordinator survey, and structured interviews with all 12 Tech Prep coordinators (pp. 10-11).
Major Findings Are Presented in the Following Areas:
Source: Ruhland, Custer, & Stewart (1994)

Nebraska
No. of Tech Prep Consortia: 6
Goals: Document local consortia progress on Nebraska's Tech Prep career goals implementation as of June 1995.
Methods: The report represents a compilation of the implementation status surveys and self-assessments completed by each local consortium. Based on a state model, results are presented in the following areas: Commitment of leaders, Articulation agreements, Relevance of instruction, Educate staff, Enrich career guidance, Resourceful marketing, and Systematic review and revision.
Major Findings Are Presented in the Following Areas:
Source: Jurgens (1995)

New Hampshire
No. of Tech Prep Consortia: 5
Goals: This third-party evaluation was designed to be formative, ongoing, and focused on process. The purpose was to identify the strengths and weaknesses in a consortium's Tech Prep initiative. Based on the findings, local consortia were expected to be actively involved in further developing strengths and remediating weaknesses, creating a "continuous state of improvement" (p. 7).
Methods: The methods involved site-based self-study focused on the following components: administration and organization; articulation agreements; business, industry, and community involvement; curriculum development; impact on students; promotion and marketing; and staff development. Following the self-study activity, site visits with personal interviews and observations were conducted by teams of eight to ten third-party consultants and state staff (pp. 7-10).
Major Findings Are Presented on a Site-by-Site and Statewide Basis in the Following Areas:
Source: Hammons & Pittman (1995)

North Carolina
No. of Tech Prep Consortia: 63
Goals: 1993-1994 Tech Prep project evaluations were conducted for the purposes of collecting and reporting data on the progress of projects funded under the federal Perkins II legislation (p. 1).
Methods: In a study conducted by state agency personnel, local Tech Prep consortia representatives presented a structured executive summary of their projects' progress in meeting specified objectives for 1993-1994. Each consortium had 30 minutes to address several categories such as articulation and curriculum integration. A panel of reviewers from the North Carolina Department of Public Instruction and the North Carolina Department of Community Colleges rated progress in each category on a four-point scale indicating goals were Met, Partially Met, Not Met, or Not Applicable (p. 2).
Major Findings and Recommendations Are Presented on a Site-by-Site and Statewide Basis in the Following Areas:
Source: North Carolina Department of Public Instruction & North Carolina Department of Community Colleges (1994)

Ohio
No. of Tech Prep Consortia: 24
Goals: Year One of this five-year longitudinal evaluation was viewed as a critical period for collecting baseline information and data about Tech Prep implementation to date at both the state and consortia levels (pp. 1-4).
Methods: A multifaceted data collection plan was implemented by MGT of America, a third-party evaluator. The evaluation involved (1) survey data collected from Ohio consortia in fall 1994; (2) site visits and personal interviews with key stakeholder groups; (3) surveys of students, parents, and business/industry representatives; (4) a survey about Tech Prep implementation in five other states (Florida, Michigan, New York, Oklahoma, and Pennsylvania) for the purposes of measuring progress; and (5) a multiyear telephone survey of students in the Tech Prep, College Prep, Vocational Education, and General Education tracks (pp. 1-7-8).
Major Findings Are Presented in the Following Areas:
Source: MGT of America, Inc. (1995)

Rhode Island
No. of Tech Prep Consortia: 1
Goals: The study examined eight years of program management of the Rhode Island Tech Prep Associate Degree (TPAD) program and posited assertions that students who participate in TPAD (1) are more successful in secondary education than non-TPAD students, as evidenced by their performance in core subjects; and (2) participate in postsecondary education more frequently (p. 8).
Methods: The program evaluation employed a comparison group design for outcome measures related to the above assertions. The sample was comprised of 1,350 11th- and 12th-grade TPAD students from 24 high schools in Rhode Island and 235 non-TPAD students selected by counselors and TPAD liaisons because of their similarity to students who had chosen the TPAD option. Existing instruments were used to assess performance during late spring and early summer of 1994, and information was taken from students' permanent records to create the dataset for this evaluation (pp. 8-10).
Major Findings Are Presented in the Following Areas:
Source: Rhode Island Tech Prep Associate Degree Program (no author or date given)

Tennessee
No. of Tech Prep Consortia: 14
Goals: The evaluation conducted by the Tennessee Board of Regents and the Tennessee Department of Education documents progress made by local consortia during the second year of Tech Prep program implementation.
Methods: None described.
Major Findings Are Provided in the Following Areas:
Source: Tennessee Board of Regents & Tennessee Department of Education (1994)

Texas
No. of Tech Prep Consortia: 25
Goals: This third-party evaluation focused on the description of Tech Prep programs at the local and state levels and the identification of best practices and effective approaches of local projects for improving occupational education (p. i).
Methods: Multiple methods were used to collect data for the evaluation, including document reviews; two-day site visits to all 25 consortia; interviews with state and federal personnel; mail questionnaires sent to 750 consortia members (44% returned); student data; and data from MPR Associates, Inc. (pp. i-ii).
Major Findings Are Presented in the Following Areas:
Source: Decision Information Resources, Inc. (n.d.)

Washington
No. of Tech Prep Consortia: 22
Goals: This third-party evaluation was designed to describe Tech Prep planning and implementation processes carried out by local consortia in Washington state.
Methods: Multiple methods were combined to describe Tech Prep planning and implementation processes using case studies and secondary analysis of data provided by local consortia to MPR Associates, Inc. The case studies were conducted in two consecutive years with four consortia and were intended to provide detailed information about planning and implementation processes along with practices implementers perceived as effective. The case studies were constructed to portray (1) an overview of the consortium; (2) recent accomplishments in key areas such as articulation; (3) strengths and concerns in the consortium's operation; and (4) issues and new directions for the local Tech Prep initiative (Owens, 1995, p. 1; Owens et al., 1995, p. 1).
Major Findings Are Presented for Each Site, and Common Themes, Strengths, and Concerns Are Presented in the Following Areas:
Source: Owens (1995)

West Virginia
No. of Tech Prep Consortia: 15 (since 1991)
Goals: This third-party evaluation was designed to document the implementation progress and best practices of the Tech Prep Associate Degree initiative in West Virginia (p. iii).
Methods: Evaluators reviewed annual project reports of each of the pilot TPAD projects in the state and conducted a focus group session with coordinators of the TPAD pilot projects (p. 1).
Major Findings Are Presented in the Following Areas:
Source: Harman & Stowers (1995)

Wisconsin
No. of Tech Prep Consortia: 22
Goals: This third-party evaluation was designed to address pressures for accountability and program improvement information at a time when education is poised for the adoption of complex educational reform initiatives such as those proposed by the STWO Act (p. 5).
Methods: The evaluation design is based on the concept of benchmarking, which is intended to be the framework for school self-assessment and data collection feeding into school planning processes and continuous improvement. Wisconsin's benchmarking model relies on the identification and use of "benchmarks for Tech Prep and STW" in terms of implementation, participation, and outcomes. Self-assessment tools and data collection tools are used to provide focus for Tech Prep implementation; identify strengths, gaps, and problems; identify improvement areas; and decide whether changes need to be made.
Major Findings Highlight a Pilot Test of the Benchmarking Model in Six Schools, Which Identified the Following Areas:
Source: Connell & Mason (1995)
In Illinois, an evaluation involving four secondary sites included students classified as Tech Prep, non-Tech Prep, and pre-Tech Prep (Roegge & Evans, 1995). Student data was collected from transcripts, standardized tests, and group interviews. The findings indicated that Tech Prep students took as many or more science, math, social science, and foreign language courses as pre-Tech Prep students, although the difference was statistically significant for advanced science courses only. Although the Tech Prep group had a lower overall class rank percentile than the pre-Tech Prep group, the Tech Prep students obtained significantly higher composite ACT scores than pre-Tech Prep students (p = .005). A small sample of students was given a "Work Readiness" instrument developed by the researchers, and the results revealed that Tech Prep students had a more "anticipatory attitude toward work" than non-Tech Prep students. The data also revealed that more Tech Prep students thought their "classes would help prepare them for a career," and they were more "sure of what they wanted to do as an adult" (p. 8).
In Rhode Island, a sample of 1,350 11th- and 12th-grade students was drawn from 24 high schools (Rhode Island Tech Prep Associate Degree Program, no author or date given). The students were grouped into Tech Prep Associate Degree (TPAD) (n=1,115) and non-TPAD (n=235) categories. "The comparison group [of non-TPAD students] was composed of similar students from TPAD schools whose guidance counselors identified them as appropriate candidates for the TPAD Program, but who had declined participation, and similar students from two non-TPAD schools whose faculty were planning to implement the Program during the 1994-95 year" (p. 9). The two groups compared closely on several demographic characteristics such as gender and ethnicity. Data used for the secondary analysis was gleaned from existing transcripts and test results. Findings show the TPAD group had significantly higher grade point averages (GPAs) in math, science, and communications than the non-TPAD group, although prior to participation in Tech Prep the TPAD students had significantly lower GPAs in these subjects than the non-TPAD group. The postsecondary participation rate for the TPAD students was 60% compared to 39% for the non-TPAD students, although missing data for both groups raises questions about these estimates. Nevertheless, the rate of participation in postsecondary education suggests a sizable proportion of students are continuing their education beyond high school, an important element of Tech Prep.
Like Rhode Island, Delaware's evaluation of Tech Prep relies heavily on existing student data from secondary and postsecondary sources (Campbell, 1995). Some results of the evaluation are presented for Tech Prep versus non-Tech Prep students. For example, the dropout rates for Tech Prep students were lower than for non-Tech Prep students over the 1990-1991 to 1993-1994 period, 0.39% and 5.0%, respectively. Achievement score comparisons for a random sample of Tech Prep and non-Tech Prep students show the Tech Prep group's average scores in advanced-skills reading and math were higher than those of non-Tech Prep students. Finally, results indicate a steady increase in the number of high school students earning advanced college credits, showing that a growing number of students are accessing college credits while still in high school.
Finally, some student outcome results are presented in the evaluation report authored by Owens, Lindner, and Wang (1995). In this study, personnel employed by the Northwest Regional Educational Laboratory (NWREL) conducted case studies of four sites in Washington state. The case study involving the Seattle consortium documented several data collection activities focusing on student outcomes. The report showed that in December 1993 Tech Prep students (based on self-identification) participated in career development activities and applied academic courses at a higher rate than non-Tech Prep students. In addition, the study reports telephone interview findings obtained by Dr. Mary Beth Celio showing that 79% of Tech Prep graduates were enrolled in postsecondary education compared to 66% of other high school graduates. Celio's study also reported the following findings:
Since Tech Prep is particularly focused on the connection with community colleges, it is important to note that 47% of the Tech Prep students went on to the community college, while only 33% of the non-Tech Prep students did so. The fact that nearly identical percentages of each group went on to a four year school (32% for Tech Prep and 33% for non-Tech Prep) demonstrates that Tech Prep does not limit options for attending four year programs. Equally impressive is that the Seattle high school graduates going on to the community colleges include a higher percentage of students who formerly did not progress beyond secondary education (those with a high school GPA of 2.8 or less, Black and Asian populations, and high school graduates age 19 or older). (Owens et al., 1995, p. 17)

Case study findings for one other site in Washington state included a list of desired outcomes; however, data was not reported, probably because it was not yet available. Student outcomes that were identified by the Tech Prep consortium in Yakima Valley included increases in attendance rates, standardized test scores, and postsecondary participation, especially for minority students. The consortium also intended to examine whether suspension and dropout rates were declining as local officials hoped they would be in association with student participation in Tech Prep.
[1] Although 2+2 Tech Prep programs have existed for some time in a few localities and states, Tech Prep was not widespread until after passage of the federal Carl D. Perkins Vocational and Applied Technology Education Act Amendments of 1990.
[2] On May 4, 1994, the U.S. Congress passed the School-to-Work Opportunities (STWO) Act, which has a primary goal of encouraging states to plan and implement coordinated school-to-work systems using a variety of models, including Tech Prep, to assist youth in obtaining employment after completing secondary or postsecondary education.
[3] The study sample was comprised of educators, students, and employers actively engaged in Tech Prep implementation as a part of the National Center for Research in Vocational Education's Urban Schools Network.
[4] A national-level evaluation is mandated by the Carl D. Perkins Vocational and Applied Technology Education Act Amendments of 1990 (Perkins II), and this evaluation is described later in this section.
[5] Perkins II has a primary objective of developing improved accountability systems that require each state to measure student learning gains in basic and more advanced academic skills and student performance in competency attainment. States must also implement measures in one or more of the following areas: job or work skill attainment or enhancement, retention or completion, or job placement.
[6] A targeted follow-up of about fifty Tech Prep consortia indicated by the NCRVE survey to be the most advanced at Tech Prep evaluation in the nation produced disappointing results. Very few formal evaluation plans, instruments, or reports were produced by these sites.