Combining cost and performance assessments for decision support
Given the cost and performance assessments that you make, the key question is what level of agency investment in Web-based services to recommend. Is the elaborate version the best level of investment or is it too expensive? What about a moderate or only a modest investment at first? It may be that the cost and performance assessments support taking no initiative on the WWW at all. But how would you know? It is sometimes very difficult to draw an obvious conclusion from all of this information.
There are many ways that cost and performance information can be combined and integrated to support decisions about WWW investments. Three approaches are reviewed in this chapter: benefit-cost analysis, resource allocation methods, and multi-attribute utility models. Describing these three approaches in detail is beyond the scope of this guide. However, a general overview may help you get started, and references for additional reading are provided in case you are interested in learning more. Examples are presented in Chapter 7.
Why would the agency choose one approach over the others? The answer depends on several factors. There is no single right way to determine which method or methods a specific agency should use, but certain indicators can help. If your agency is concerned only with cost, you should probably perform a traditional benefit-cost analysis. If the agency has a short list of performance criteria (as described on p. 17), the multi-attribute utility model should be chosen. If the agency has a long list of performance criteria, it should probably use the resource allocation method.
Figure 5. Indicator/Tool

| Indicator | Recommended tool |
|---|---|
| Cost important | Benefit-cost analysis |
| Short list of performance criteria | Multi-attribute utility model |
| Long list of performance criteria | Resource allocation method |
Benefit-Cost Analysis
If the only performance measure of importance is "cheaper," then you should do a benefit-cost analysis, because it is relatively easy to convert "cheaper" into dollars, and costs are already in dollars. The "cheaper" criteria chosen by the agency can include items such as time saved, costs avoided, and direct cost savings. Any saving in equipment, space, time, or other resources falls into the "cheaper" category.
Benefit-cost analysis provides information on the full cost of meeting specific service objectives through the Web and weighs that cost against the dollar value of the benefits received. With this information you can calculate the net benefits of the proposed project, examine the ratio of benefits to costs, determine the rate of return on the original investment, and compare the benefits and costs of each level of aspiration (i.e., modest, moderate, and elaborate) with the others. Benefit-cost analysis requires three steps: (1) estimate the full cost of each alternative, (2) place a dollar value on each benefit, and (3) compare the total benefits to the total costs.
This description of benefit-cost analysis gives the impression that the three steps will be easy to complete, but, in fact, benefit-cost analysis can be very difficult. The cost worksheets contained in chapter 4 of this guide will help with the first step, but placing a dollar value on each benefit (second step) can sometimes be problematic, especially if these performance variables are of the "faster" or "better" type. How might one calculate the dollar value of reducing customer waiting time by 10% or of providing client information with 20% fewer errors? Increasing the number of in-coming inquiries or the number of services provided may be more costly, not less so. In general, benefit-cost analysis may be most useful where the performance variables are of the "cheaper" type and, therefore, more amenable to quantification in dollar terms.
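Where the benefits can be expressed in dollars, the arithmetic of the comparison is straightforward. The sketch below shows net benefit, benefit-cost ratio, and rate of return for three aspiration levels; all dollar figures are hypothetical and stand in for numbers an agency would take from the chapter 4 cost worksheets and its own benefit estimates.

```python
# Hypothetical annual figures (in dollars) for each investment level.
# Costs would come from the worksheets in chapter 4; benefits are the
# estimated dollar value of "cheaper" items (time saved, costs avoided).
options = {
    "modest":    {"cost": 20_000,  "benefit": 30_000},
    "moderate":  {"cost": 50_000,  "benefit": 80_000},
    "elaborate": {"cost": 120_000, "benefit": 150_000},
}

for name, o in options.items():
    net = o["benefit"] - o["cost"]    # net benefit
    ratio = o["benefit"] / o["cost"]  # benefit-cost ratio
    roi = net / o["cost"]             # rate of return on the investment
    print(f"{name}: net=${net:,}  B/C={ratio:.2f}  ROI={roi:.0%}")
```

Note that with these illustrative figures the moderate level has the highest benefit-cost ratio even though the elaborate level has the same net benefit, which is exactly the kind of comparison across aspiration levels the analysis is meant to support.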
For a more in-depth discussion of what to do, please see James Edwin Kee, "Benefit-Cost Analysis in Program Evaluation," in Joseph S. Wholey, Harry P. Hatry, and Kathryn E. Newcomer (eds.), Handbook of Practical Program Evaluation. San Francisco: Jossey-Bass, 1994.
Resource Allocation Methods
Resource allocation methods may be helpful in situations where, as in benefit-cost analysis, it is difficult to attach a dollar value to every benefit. Although resource allocation methods, like benefit-cost analysis, compare total benefits to total costs, the measure of benefit is much more generic, judgmental, and subjective. Benefit is assessed on a simple, 100-point rating scale, and the increasingly costly versions of Web-based services (from modest to elaborate) are rated on this single scale of holistic benefit as an index of overall performance. The "no investment--no Web service" alternative anchors the lower end of the 100-point scale (i.e., 0: least overall benefit), while the elaborate version of Web-based services anchors the upper end (i.e., 100: most overall benefit). Modest and moderate versions are scored somewhere in between (see Figure 6).
To assess the incremental advantage of investing more agency resources in Web-based services, the additional benefit of each alternative (Δ benefit) is divided by its additional cost (Δ cost), relative to the benefit and cost of the "no investment--no Web service" alternative. The version with the largest incremental advantage of benefit relative to cost would seem to be preferable. The difficulty with resource allocation methods, however, is the problem of reducing multiple performance measures to a single, aggregate benefit scale. This approach may be more practical than benefit-cost analysis when many of the performance measures are of the "faster" or "better" type, but it may be criticized as not sufficiently analytical.
Again, the goal is to decide among the modest, moderate, and elaborate versions of the system. You may have a number of criteria, perhaps six or seven, that indicate the advantages of the service. In the resource allocation model, the lowest level ("no investment") is given a benefit of 0 and the top level (elaborate) a benefit of 100; the intermediate levels (modest and moderate) are then positioned somewhere between 0 and 100. In other words, instead of placing the benefits on a dollar scale, you place them on a utility scale from 0 to 100. For example, if the moderate level seems fairly close to the elaborate level, moderate might receive a utility of 80. If modest seems to fall roughly midway between no investment and moderate, you might give it a 40.
Costs are handled more objectively. For the "no investment--no Web service" option, the cost is 0 dollars; the cost worksheets supply the costs for the other levels of investment (modest, moderate, and elaborate). This information is then used to create a ratio of incremental benefit to incremental cost for each level, which can be treated as a pseudo-benefit-cost ratio. Within the limits of its available resources, the agency should pick the level of investment just before the point where the ratio begins to decline.
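The incremental comparison described above can be sketched as follows. The utility scores and dollar costs here are hypothetical placeholders for an agency's own judgments and worksheet figures; each ratio is computed relative to the "no investment--no Web service" baseline, as the method prescribes.

```python
# Baseline: "no investment--no Web service" anchors 0 utility and $0 cost.
baseline_benefit, baseline_cost = 0, 0

# Hypothetical holistic benefit scores (0-100 utility scale) and costs.
levels = {
    "modest":    (40,  20_000),
    "moderate":  (80,  50_000),
    "elaborate": (100, 120_000),
}

# Pseudo-benefit-cost ratio: incremental utility per incremental dollar.
for name, (benefit, cost) in levels.items():
    d_benefit = benefit - baseline_benefit
    d_cost = cost - baseline_cost
    print(f"{name}: Δbenefit/Δcost = {d_benefit / d_cost:.5f}")
```

With these illustrative numbers the ratio declines as investment grows, so the agency would weigh whether the extra utility of the larger versions justifies their steeper incremental cost.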
Figure 6. Blank Resource Allocation Method

The figure above can be used as a guide. List the benefits associated with the different levels of investment inside the boxes (or on a separate sheet of paper), and use them to assess an overall benefit score (between 0 and 100) for each level of investment.
This method is somewhat more subjective than the first. Judgment is required in determining how to aggregate the benefit across these multiple dimensions. For an example with numbers, please refer to Figure 15 on page 34.
For additional information, please see Sandor P. Schuman and John Rohrbaugh, "Decision Conferencing for Systems Planning," Information & Management, 1991, 21, 147-159.
Multi-attribute Utility (MAU) Models
This model should be used when there are relatively few performance measures. Although, in principle, complex situations can be analyzed using a MAU model, for best results a complex MAU analysis should be done in consultation with an expert. A sample MAU model is described below (for an example with numbers, please refer to Figure 16, page 35 and Figure 17, page 35).
Figure 7. Sample Multi-attribute Utility Model
|  |  |  | Alternatives |  |  |  |
|---|---|---|---|---|---|---|
| Rank | Weight | Criteria | No Investment | Modest | Moderate | Elaborate |
|  |  |  |  |  |  |  |
|  |  |  |  |  |  |  |
|  |  |  |  |  |  |  |
|  |  |  |  |  |  |  |
|  |  |  |  |  |  |  |
|  |  | Total Utility: |  |  |  |  |
Typically, each performance criterion is evaluated separately. The "utility" associated with each of the multiple criteria is scored on a simple, 100-point rating scale, and the same 100-point scale is used to assess the alternatives on every performance measure.
Multi-attribute utility models differ from benefit-cost analysis and resource allocation methods in that project cost is treated as just one more performance measure. In particular, a ratio of benefit to cost is not formed. Web-based services that cost the agency more of its resources (moderate and elaborate versions) are rated as having less "utility." Here, the elaborate version of Web-based services anchors the lower end of the scale (i.e., 0: most cost), while the "no investment--no Web service" alternative anchors the upper end (i.e., 100: least cost).
Typically, some performance criteria are more important to the agency than others, so each performance criterion is assigned a weight. A final overall rating for each of the (now four) alternatives is obtained by computing a weighted sum of its utility ratings on the individual criteria. Utility associated with lower cost is simply added to utility associated with greater benefit on the other performance measures.
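The weighted-sum computation can be sketched as follows. The criteria names, weights, and 0-100 ratings below are hypothetical; in practice they would come from the agency's own completed Figure 7 worksheet.

```python
# Hypothetical criterion weights (summing to 1.0). Cost is treated as
# just one more criterion: the cheapest alternative rates 100 on it.
criteria = {
    "cost":           0.40,
    "faster service": 0.35,
    "fewer errors":   0.25,
}

# Hypothetical 0-100 utility ratings for each alternative on each criterion.
ratings = {
    "no investment": {"cost": 100, "faster service":   0, "fewer errors":   0},
    "modest":        {"cost":  80, "faster service":  40, "fewer errors":  30},
    "moderate":      {"cost":  50, "faster service":  70, "fewer errors":  60},
    "elaborate":     {"cost":   0, "faster service": 100, "fewer errors": 100},
}

# Total utility = weighted sum across criteria (the "Total Utility:" row).
for alt, r in ratings.items():
    total = sum(criteria[c] * r[c] for c in criteria)
    print(f"{alt}: total utility = {total:.1f}")
```

Because cost enters through its weight rather than as a denominator, changing the weight assigned to cost can change which alternative comes out on top, which is why the weights deserve careful deliberation.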
© 2003 Center for Technology in Government
