




APPENDIX:
METHODOLOGY






Organization of the Interviews

       Interviewers gathered information on university admissions practices using a structured telephone interview, in which most questions were open-ended. The first of two main sections collected background information on each state's higher education system. The second focused on policies and practices for evaluating high school records that included interdisciplinary or applied academics courses or that lacked grades and Carnegie units. More specifically, the first section of the interview collected general information on each state's public university system and the administration of the admissions process: the name of the state's flagship institution; whether the state had minimum curriculum requirements for admission to its public universities; whether the admissions process was administered at the campus, segment, or system level; which office decided whether specific courses on transcripts satisfied curriculum requirements; and under what circumstances a student who had not met the curriculum requirements might still be admitted to a public university.

       The remainder of the interview focused on admissions policies and practices. The first group of items addressed policy on unconventional school records (those without grades or Carnegie units). We asked how often the respondent encountered unconventional transcripts and what the most common practice was for evaluating such applicants. Second, the interview covered policy on interdisciplinary courses--those that combine two or more academic fields. Again, we asked how often the respondent encountered such courses on transcripts; how they evaluated whether these courses counted toward curriculum requirements (in states with subject-specific requirements); and how often such courses were actually applied toward those requirements. Third, we asked the same series of questions about courses that integrate academic and vocational material. The final group of items addressed transfer admissions: we gathered information about the transfer of credits from applied or Tech Prep associate's degree programs.


Selecting Institutions

       In every state, many of the questions posed by this study could be answered only by personnel at the institutional level. This raises the issue of sampling: the number of public four-year institutions varies substantially across states, from one or two institutions (e.g., Delaware, Nevada, and Wyoming) to 40 or more (e.g., New York, Pennsylvania, and Texas). The scope of this study permitted interviewing staff from a single institution in each state, rather than a representative sample or comprehensive census of institutions. There are several ways to approach the sampling problem, each with associated costs and benefits.

       A simple or stratified random sampling scheme might be appropriate for a study that permitted interviews at several institutions in a state: a sample representing different types of public institutions (e.g., state colleges as well as research universities) would afford a reasonable picture of practices in place throughout a state system. Randomly selecting a single institution in each state, on the other hand, undermines the cross-state comparability of information that an overview of practices requires. Even multiple-institution sampling remains problematic: differences in the number of public four-year institutions in each state raise the question of how to represent each state's system adequately.

       A purposive sampling scheme, by contrast, involves intentionally selecting comparable institutions across states. This approach permits comparing practices across states and characterizing practices in a given type of public institution. We chose this approach, accepting that it would not capture variation in practices that might exist across different types of institutions.
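       To make the contrast concrete, the following minimal sketch (in Python) illustrates how a stratified random draw within one hypothetical state differs from the purposive rule adopted here--selecting the flagship. The institution names, strata, and selection rules in the sketch are invented for illustration and do not correspond to the study's data.

# Hypothetical illustration: stratified random selection of public four-year
# institutions within a single state, contrasted with a purposive rule.
# Institution names and strata below are invented for illustration.
import random

institutions = [
    {"name": "State Research University", "stratum": "research", "flagship": True},
    {"name": "Eastern State University", "stratum": "research", "flagship": False},
    {"name": "Northern State College", "stratum": "state college", "flagship": False},
    {"name": "Western State College", "stratum": "state college", "flagship": False},
]

def stratified_sample(pool, per_stratum=1, seed=0):
    """Draw per_stratum institutions at random from each stratum."""
    random.seed(seed)
    sample = []
    for stratum in sorted({inst["stratum"] for inst in pool}):
        members = [inst for inst in pool if inst["stratum"] == stratum]
        sample.extend(random.sample(members, min(per_stratum, len(members))))
    return sample

def purposive_flagship(pool):
    """Select the single flagship institution (the rule used in this study)."""
    return [inst for inst in pool if inst["flagship"]]

print("Stratified random draw:", [i["name"] for i in stratified_sample(institutions)])
print("Purposive (flagship):  ", [i["name"] for i in purposive_flagship(institutions)])

       Under the stratified rule, the particular institutions drawn vary with the random seed; under the purposive rule, the same (flagship) institution is always selected, which is what makes cross-state comparison straightforward.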

       After careful consideration, we decided to focus on each state system's flagship institution. These schools generally enroll more undergraduates than other public institutions, and they often set a standard that other public institutions seek to emulate. While these are strong reasons to focus on the flagship as a way of representing a state's public four-year institutions, one must also acknowledge the potential costs. Flagship institutions are typically more selective than other public institutions and thus have higher admissions standards. These high standards correspond to applicant pools with more conventional college-preparatory high school programs. Moreover, because they are generally larger than other public four-year institutions, their admissions staffs may have larger caseloads. For all of these reasons, flagship institutions may be less likely than other public institutions to be flexible or innovative in their undergraduate admissions procedures. This study does not paint a comprehensive picture of admissions practices in public higher education. Rather, it focuses on practices at the institutions that enroll the most students and that often set the example for other institutions in a state's system of higher education.


Identifying Respondents

       Identifying individuals who could best answer the interview questions was often a multistep process, since the entities responsible for setting admissions policies and implementing those policies varied across states. Usually there was no single person who could answer all questions: while state agency or coordinating board personnel tended to be most helpful in describing the higher education system and in discussing admissions policies under statewide review, those who implemented the policies and made actual admissions decisions were most familiar with the practices in place. Thus, we first identified and interviewed someone who could answer the broader questions about statewide curriculum requirements and about where admissions decisions were made. Next, we identified people who could provide insight into actual admissions and credit transfer decisions; most often these were senior staff members in the admissions office of the flagship institution.[30] In most cases, at least two people were interviewed in order to answer our full range of questions.

       In every state, we first called a contact person at the higher education coordinating board (HECB), state board of education, or other body that oversees public four-year institutions. We identified contacts at these agencies using several sources: the Education Commission of the States' State Postsecondary Education Structures Handbook (McGuinness, Epper, & Arredondo, 1994), OERI's Raising Standards: State Policies To Improve Academic Preparation for College (Flanagan, 1992), and the 1994 and 1995 Almanac editions of The Chronicle of Higher Education. Respondents at the state level typically answered only questions in the first section of the interview. In some states, the initial contact could only provide the name of the flagship institution, and referred us there or to its university system office for answers to all other questions. (When a respondent was reluctant to name a single flagship institution, we asked the respondent to identify the institution with the largest enrollment.)

       In all states except California (where admissions policies and guidelines for the University of California are developed by the system office), we asked a representative of the flagship institution's admissions office most of the questions.[31] In states with separate coordinating offices for two or more university segments (e.g., both the University of Arkansas and Arkansas State University systems), the interviewer often contacted the segment office that governed the flagship institution when referred by the state-level respondent. Finally, since actual admissions decisions were made at each campus, the interviewer contacted the flagship institution. In most cases, an admissions counselor or supervisor (e.g., an associate admissions director) at the state's flagship university answered the substantive questions about admissions practices that formed the bulk of the interview. In addition, in two states, a second institution was contacted because the respondent at the flagship was aware of another public institution with more experience in evaluating unusual transcripts or courses. In sum, we contacted 48 state-level governing bodies, 8 system offices, and 50 institutions in a total of 48 states.


Coding and Data Checking Procedures

       Interviewers recorded responses on standard interview forms during the interviews; these notes were then coded. First, the interviewers developed a template to facilitate coding the notes into discrete categories. Some items were structured to have only a single response, while others permitted multiple responses. For open-ended items, we defined categories for responses that were cited by several respondents. A coder then applied the template to the interview notes, directing questions to the interviewers when responses were unclear. The coded data were then entered into a spreadsheet to facilitate analysis.
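       As an illustration of the kind of coding template described above, the minimal sketch below shows one way coded responses could be recorded in a flat, spreadsheet-ready file. The item names, category codes, and example responses are hypothetical; they are not the instruments or codes actually used in this study.

# Hypothetical sketch of a coding template: interview notes are mapped to
# discrete category codes and written to a spreadsheet-ready CSV file.
# Item names, codes, and the example row are invented for illustration.
import csv

# Single-response item: exactly one code applies per respondent.
WHERE_DECIDED = {"campus": 1, "segment": 2, "system": 3}

# Multiple-response item: any combination of codes may apply.
UNCONVENTIONAL_PRACTICE = {
    "case-by-case review": "A",
    "require standardized test scores": "B",
    "refer to committee": "C",
}

coded_rows = [
    {
        "state": "XX",
        "where_decided": WHERE_DECIDED["campus"],
        # Multiple codes for a multiple-response item, joined with ";".
        "unconventional_practice": ";".join(
            [UNCONVENTIONAL_PRACTICE["case-by-case review"],
             UNCONVENTIONAL_PRACTICE["require standardized test scores"]]
        ),
    },
]

with open("coded_interviews.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["state", "where_decided", "unconventional_practice"]
    )
    writer.writeheader()
    writer.writerows(coded_rows)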

       To check the accuracy of the data, one of the two interviewers carefully reviewed the coding sheets against the notes taken during the interviews for the questions that pertained most directly to the issues of this report: where decisions were made about accepting courses; how often instances of specific reforms were encountered in the admissions process; and practices for handling unconventional transcripts, interdisciplinary courses, and integrated courses. Before any corrections were made in the database, the two interviewers agreed on the appropriate coding.


Problems Encountered and Potential Sources of Error

       Errors may have entered the data at a number of stages. First, some respondents may have answered beyond their immediate knowledge or expertise (e.g., they may have stated as fact an assumption about what other staff members or another office does in a given situation). We found most respondents eager to participate, and in some cases they may have wanted to appear more knowledgeable than they were. Second, some respondents may have misinterpreted a question and answered a different one without our knowledge (differing terminology might cause errors of this sort). Third, we may have misinterpreted what they said. Fourth, the coder may have misinterpreted what was on the interview forms or entered an incorrect code into the database. While careful review allowed us to correct coding errors for the most central questions, inaccuracies of the first three types cannot be detected after the fact.

       Respondents had varying degrees of familiarity with these educational reform issues, which also complicated the interviews. In states such as New York, where broad school reform is being implemented in high schools, admissions staff were aware of shifts toward using performance-based evaluations or integrated curriculum, and could discuss their procedures for handling them at length. In other states, however, respondents found some reforms unfamiliar. (In such states, a standard practice or policy may not have been developed.)

       Discussing school reform is also complicated by the lack of a common language to describe new programs, courses, and practices. Many people used "integrated" to mean "interdisciplinary academic"; a term like "Tech Prep associate's degree" was often unfamiliar; and even a term like "governing board" may have meant different things to different respondents. Different institutions, or even different individuals in the same institution, defined terms according to their conventional use in their workplace. Although the interviewers frequently provided definitions of terms, misunderstandings about terminology may nevertheless have occurred.

       We took steps while interviewing to ensure accuracy. Assessing the knowledge base of an initial respondent, stopping the interview when questions fell out of that person's jurisdiction or expertise, and completing the interview with another respondent (often at another institution or office) proved useful in many instances. On the other hand, it was often difficult for interviewers to assess the knowledge of a respondent. Some respondents may have answered questions based on assumptions about how things should work, rather than on their first-hand experience with admissions decisions.

       On occasion, responses about state-level practices appeared to contradict responses about institution-level practices. For example, a state agency that oversees higher education may have reported that integrated courses could be counted toward curriculum requirements. Individual institutions in that state (such as the flagship), however, may have additional policies governing the kinds of courses that qualify a student for admission. Thus, an admissions officer at the flagship university may have reported that integrated courses never apply toward curriculum entrance requirements. Although the two responses appear contradictory, both can be true (if other institutions in the state accept some integrated courses while the flagship does not). While this type of difference might seem analytically useful, comprehensive comparisons of institution- and system-level practices are not feasible because in most cases we were referred to individual campuses for these questions.

       We generally interviewed at least two people per state. Although data from each contact were recorded separately, the information was then reduced to one response per state per question. These responses came from the level where the decisions about how to count courses and evaluate unconventional transcripts were actually made--in almost all cases, the admissions office of the flagship institution. Tables 1-3 report data from these respondents only. Table 4, which presents findings on policies in place and under review, uses data from both institution- and state-level respondents.
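       The reduction just described can be pictured with the following hypothetical sketch, which keeps one response per state per question and prefers the record from the level closest to the actual decision (the institution). The record structure, level labels, and preference order are illustrative assumptions rather than the study's actual database.

# Hypothetical sketch of reducing multiple contacts to one response per state
# per question, preferring the level where admissions decisions were made.
# Record structure and level labels are invented for illustration.

records = [
    {"state": "XX", "level": "state agency", "question": "Q12", "response": "yes"},
    {"state": "XX", "level": "institution", "question": "Q12", "response": "no"},
    {"state": "YY", "level": "state agency", "question": "Q12", "response": "yes"},
]

# Lower rank = preferred source for the single per-state response.
LEVEL_RANK = {"institution": 0, "system office": 1, "state agency": 2}

reduced = {}
for rec in records:
    key = (rec["state"], rec["question"])
    best = reduced.get(key)
    if best is None or LEVEL_RANK[rec["level"]] < LEVEL_RANK[best["level"]]:
        reduced[key] = rec

for (state, question), rec in sorted(reduced.items()):
    print(state, question, rec["level"], rec["response"])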


[30] In some states, two universities vied for "flagship" status, in which case the one with the larger enrollment was considered to be the flagship.

[31] The exceptions were Colorado and Nevada, where we were unable to gain the cooperation of personnel at the flagship institution.

