
Overview of Analysis

How can we explain the types and extent of linkages? Our "Background and Framework" section suggested that a simple economic or socio-psychological model of faculty behavior would identify a set of individual and institutional factors that could be expected to influence the behavior of any individual instructor. That discussion highlighted the likely importance of a faculty member's teaching field and full-time/part-time status, as well as institutional features such as location (e.g., proximity to employers), governance (opportunities to participate in college decision-making and interactions among faculty), and resources (time, professional development, and other incentives) that may facilitate or hinder an individual's willingness or ability to undertake linking activities.

To assess which factors were most important in explaining linkages, we analyzed our survey and case study data. In this section we report our results in an integrated fashion by discussing a set of key factors that seem to us to explain linking behavior or its absence: teaching field; full-time/part-time status; time, resources, and institutional incentives; institutional governance and program boundaries; and local conditions. We discuss each below. Underlying this discussion is a detailed consideration of our interview and other data gathered at our four sites, and a comprehensive set of analyses using survey responses.[21] The latter involved two basic components: a formal investigation, using multivariate regression, of the determinants of responses to the connectivity items reported in the previous section; and an examination of faculty survey responses to specific questions about the individual and institutional incentives and disincentives to undertake linking activities.

First, we used multiple regression to determine which individual and institutional characteristics had independent effects on the faculty responses described in the previous section. In other words, we treated faculty responses to each of the labor market connection items in Tables 5, 6, and 7 as outcome variables.[22] Our explanatory variables included a set of individual characteristics of the faculty member: sex, race/ethnicity, age, years of experience teaching in community colleges and in the current institution, degree level, rank, tenure status, part-time status, and primary teaching field. These individual characteristics would be expected to influence an individual's costs and benefits of undertaking any given connecting activity and are therefore potentially important factors explaining linkages. Our explanatory institutional characteristics included region, urbanicity, total enrollment, governance structure (multi-campus district, single college district, university branch), and whether the faculty are unionized.[23] Again, these variables would be expected to influence an individual faculty member's costs and benefits of participating in connecting activities through their effect on institutional resources, climate, leadership, and so on.[24] Given the difficulty of interpreting the coefficients and the magnitude of the effects of the independent variables in these models, we simply discuss the estimated direction of the effects below.[25]
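
A minimal sketch of the kind of regression described here might look like the following. It assumes the survey microdata sit in a single file and uses hypothetical column names (none of these names come from the survey instrument itself); it is illustrative only, not the authors' code.

    # Illustrative only: hypothetical file and column names.
    import pandas as pd
    import statsmodels.formula.api as smf

    survey = pd.read_csv("faculty_survey.csv")  # one row per respondent

    # OLS of one connectivity outcome (times the respondent assisted
    # students seeking a job) on individual and institutional traits.
    model = smf.ols(
        "times_assisted_job_search ~ C(sex) + C(race_ethnicity) + age"
        " + years_cc_teaching + years_current_inst + C(degree_level)"
        " + C(rank) + C(tenure) + C(part_time) + C(teaching_field)"
        " + C(region) + C(urbanicity) + total_enrollment"
        " + C(governance) + C(unionized)",
        data=survey,
    )
    result = model.fit()
    print(result.summary())      # signs and significance of coefficients
    print(result.rsquared_adj)   # adjusted R-squared, discussed below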

The ability of our set of objective individual and institutional characteristics to explain variation in connectivity ratings varies widely across outcome measures. For example, for all faculty, we can typically explain between 10% and 20% of the variation, with adjusted R-squareds as high as .22 to .23 for some measures (the number of times the respondent assisted students seeking a job, asking employers about the quality desired in new hires, asking employers about the performance of graduates) and as low as .02 to .04 for others (co-teaching a course with a business representative, the number of times the respondent gave a presentation to business). These R-squareds are not atypical for cross-sectional data. Since our goal is not to predict the extent of connectivity but simply to highlight which factors seem to be independently associated with greater or lesser connectivity, this is not a major problem.
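
For reference, the adjusted R-squared figures cited here follow the standard adjustment for the number of regressors; the helper below uses made-up illustrative numbers and is not taken from the report, just the conventional formula.

    # Standard adjusted R-squared formula (illustrative numbers only).
    def adjusted_r_squared(r2: float, n_obs: int, n_predictors: int) -> float:
        return 1 - (1 - r2) * (n_obs - 1) / (n_obs - n_predictors - 1)

    # e.g., a raw R-squared of 0.25 with 400 respondents and 30 regressors
    print(round(adjusted_r_squared(0.25, 400, 30), 2))  # about 0.19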

Second, we analyzed faculty survey responses to two additional sets of items that provide further clues about variation in connecting activities: perceptions of barriers to building linkages, and perceptions of the institutional climate and support in providing labor market information and promoting linkages. Table 9 reports faculty perceptions of some of the possible barriers to linkages. Survey participants were asked, "To what extent do you agree or disagree with the following statements about links to local business, government, and community organizations?" with responses on a scale from "strongly agree" = 1 to "strongly disagree" = 5. In addition to using these means, we conducted multivariate analyses of the determinants of respondents' views of these barriers. We regressed the subjective barrier ratings on the same set of individual and institutional characteristics discussed above. These results permit us to determine which factors have statistically independent effects on the ratings.[26] Once again, these results are discussed thematically below, in the context of all our other survey and case study evidence.
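
A sketch of this second set of regressions, again with hypothetical column names and offered only as an illustration, simply swaps a barrier rating in as the dependent variable of a specification like the one above.

    # Illustrative only: hypothetical file and column names.
    import pandas as pd
    import statsmodels.formula.api as smf

    survey = pd.read_csv("faculty_survey.csv")

    # OLS of one subjective barrier rating (1 = strongly agree ...
    # 5 = strongly disagree) on the same characteristics as before.
    barrier = smf.ols(
        "barrier_no_release_time ~ C(part_time) + C(teaching_field) + age"
        " + years_cc_teaching + C(region) + C(urbanicity)"
        " + total_enrollment + C(governance) + C(unionized)",
        data=survey,
    ).fit()

    # Only direction and statistical significance are reported in the text.
    print(barrier.params.round(3))
    print(barrier.pvalues.round(3))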

Further clues as to the extent to which opportunities exist for promoting linkages are shown in Table 10. Faculty were asked to what extent various statements described their institution, on a five-point scale from "does not describe my institution" = 1 to "very much describes my institution" = 5. The means by type of faculty are shown in Table 10. (Underlying frequencies are shown in Appendix Table 4 in Appendix A.) These items provide some indication of how faculty view their institution and its policies.
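
Means of this kind could be tabulated from the microdata along the following lines; the grouping variable and item names below are hypothetical and not taken from Table 10.

    # Illustrative only: hypothetical column names for the climate items.
    import pandas as pd

    survey = pd.read_csv("faculty_survey.csv")
    climate_items = [
        "admin_encourages_links",    # 1 = does not describe my institution
        "info_on_local_employers",
        "release_time_for_links",    # 5 = very much describes my institution
    ]

    # Mean rating on each item by faculty type, as in Table 10.
    print(survey.groupby("faculty_type")[climate_items].mean().round(2))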


[21]Recall that our four case study sites were drawn from institutions in our survey sample. In general we found a high degree of congruence between our observations at these colleges and faculty survey responses.

[22]Since the dependent variables are either dichotomous (0-1) or ordinal (a scale of 1-5), ordinary least squares (OLS) is, strictly speaking, inappropriate. We therefore also estimated ordered logit models (in the case of scaled variables) or binary probit models (in the case of 0-1 variables) to confirm our OLS results.
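
As a rough illustration of these confirmatory models (not the authors' code; file and variable names are hypothetical), an ordered logit for a 1-5 item and a binary probit for a 0-1 item can be fit with statsmodels as follows.

    # Illustrative only: hypothetical file and column names.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    survey = pd.read_csv("faculty_survey.csv")
    X = survey[["age", "years_cc_teaching", "part_time", "unionized"]]

    # Ordered logit for a 1-5 scaled item; no intercept is added because
    # the model estimates threshold cutpoints instead.
    ordered = OrderedModel(survey["asked_employers_about_hires"], X,
                           distr="logit")
    print(ordered.fit(method="bfgs").summary())

    # Binary probit for a 0-1 item (e.g., ever co-taught a course with a
    # business representative).
    probit = sm.Probit(survey["co_taught_with_business"],
                       sm.add_constant(X))
    print(probit.fit().summary())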

[23]The effects of unionization in community colleges have been explored in further depth using these survey data (see Brewer, Rees, Gray, & Rivera, 1997).

[24]In our survey data analyses, we confined our attention to the set of "objective" individual and institutional variables, although it would be possible in principle to include in such statistical models "subjective" individual predictors such as job satisfaction, or institutional explanatory factors such as campus climate, constructed from other survey items. This approach may lead to statistical problems, however, and we do not adopt it in this paper. Since all items were completed at a single point in time, it is far from clear whether such measures could be treated as exogenous in regression models. If they cannot, OLS regressions will yield biased results, and correcting for possible endogeneity using instrumental variables is problematic given the lack of obvious identifying variables.

[25]Given the large number of connectivity indicators (our outcome measures) available to us and the large number of independent variables used in our models, reproducing complete regression results is impractical. More importantly, it would not be informative, since the magnitudes of the estimated coefficients have no meaningful interpretation in this context; the scales of the dependent variables are not linear. (Results may be obtained from the authors on request.)

[26]Once again, we do not show the regression results themselves (available from the authors), but report statistically significant or interesting results in the text.

