6. LESSONS FOR FUTURE POLICY EXERCISES
RAND-designed
policy exercises typically conclude with a feedback session so that
participants can identify aspects of the exercise design that could be
improved. Exercises on a given topic are often rerun, informed by the feedback
from earlier runs. And many recommendations from participants are applicable
to the generic social-policy exercise protocol and can thus turn out to be
useful even if the particular game generating them is not rerun.
Following
are lessons inferred from the critique session of the current exercise and from
observations of panels during the exercise. Whether they are adopted in future
exercises will depend on whether an exercise much like the current one is run
again or, failing that, on whether they are applicable to other exercises.
Adoption will also depend on whether they are feasible given the analytic
capability required and on what trade-offs must be made to implement them.
- Try
to get more people from job-training programs and some people from youth
service groups to attend. Participants were pleased that the business world
was represented but felt that the balance between education and training
organizations represented leaned too much to the former.
- Reverse
the order of the first two questions structuring the dialogue session. The
first question was intended to draw on participants' personal experiences with
the education system and the workforce, but some felt it made more sense to
start with the second question on the objectives of education. Generally
speaking, facilitators and their panels varied widely in how they conducted the
dialogue, with some adhering more closely to the structure that was offered
than others.
- Use
more strongly varying states, or classify the panels by level of government
(federal, state, or local) instead of by state. The allocations and system
designs that the panels came up with did not differ much by state. To some
extent, that may have reflected insufficient variation in the scenarios given
for Algonquin, which was near the middle of the distribution on most
educational measures, and Montoya, near the bottom.
- Reverse
the order of Moves 1 and 2. Panels generally began their deliberations on
allocating the funds available in Move 1 by attempting to reach consensus on
overall education and training strategies required in their state. This
ambitious activity, envisioned for Move 2, forced panels to squeeze the
allocation itself into a brief period at the end of Move 1 and left some of
them dealing largely with details in the time allocated to Move 2.
- Broaden
the scope of the funds available for allocation in Move 1. Funds to be
allocated excluded all current state expenditures and federal monies spent
within the state on K-16 education (although Pell and Perkins funds were
reallocatable). Some panelists wanted more latitude to remake the system
within their state through the Move 1 allocations. Panelists appreciated,
however, the way in which the game design focused them on making tough
choices.
- Provide
more data or more time to work with the data available in Move 1. Panelists
had to make allocative judgments regarding a wide variety of systems without
potentially important detail on each, or without the time to draw important
inferences from the data that were provided. They were sometimes left to
conjecture based on real states that they thought the hypothetical ones were
intended to resemble.
- Clarify
the presentation of data. Game designers wrestled with the tabular
presentation of baseline data for the Move 1 allocations in response to a
preliminary run of the game at RAND. The result was not entirely successful:
some panelists remained uncertain about the meaning of the columns intended to
show baseline categorical (unallocatable) funds and the baseline funds
combined into the block grant.
- Eliminate
or redirect the Move 1 indicators. In allocating funds in Move 1, panelists
were told future funding could depend on their state's performance on several
indicators. Participants felt these were too oriented toward education (e.g.,
how many diplomas or degrees are awarded), when educational attainment is only
partially related to long-term economic success. By allocating to score well
on such indicators, panelists felt they would fund a "credentialism" that has
little to do with education's purpose. One panel decided, in fact, to ignore
the indicators. The indicators could be tied more directly to the economy,
e.g., the number of welfare recipients moving off welfare or getting and
holding a job.
- Brief
the panels on the allocation outcomes model ahead of Move 1, or make the model
flexible enough to account for provisions attached to the allocations.
Panelists felt they might have allocated funds differently had they known the
assumptions tying their actions to outcomes on the various indicators.
Furthermore, because the model could not take into account some strategies
devised to address major problems within their state, e.g., concentrating funds
in districts with special problems, the model outcomes were insufficiently
relevant to the panels' actions.
- Allow
outcomes from the model to be shared. Model outcomes were not briefed;
instead, each panel received its outcomes (and only its outcomes) on hard copy.
Panels could compare their outcomes with outcomes based on no change in
allocations, which were provided, but not with any based on different
allocations.
- Permit
the panels to interact with the model, or at least permit a second model-based
move. Not only were other panels' outcomes not visible, but each panel could
make only one move; it could not try out several different allocations. More
might be learned if the panels could interact directly with the model, trying
different inputs to see how the outputs varied.
It
is worth noting that, although several recommendations dealt with the model, we
were also urged not to place any more emphasis on it--that more could be
learned from Move 2 than from an expansion of Move 1. This ambivalence on the
part of the panelists toward the model reflects our own. When we began
developing the game, we had hopes of designing a model rigorous and
comprehensive enough to project the results of participants' Move 1 decisions
and give them reason to reconsider. This was the role that models had played
in some previous RAND policy exercises. We found, however, that data to
support the relations required in the model were not readily available, and we
could only hypothesize those relations. We thus gradually demoted the model
from a lead role to a supporting part in which it basically got the panelists
to think for a while about the potential chain of consequences ensuing from
their decisions.