
A Cost Performance Model for Assessing WWW Service Investments


Performance assessment

Web-based services might improve the agency's performance in several ways, and a good performance assessment depends on identifying how Web use will affect key work process and outcome variables.

Selecting the best set of process and outcome variables will not be easy. In fact, this may be the most difficult aspect of the entire cost and performance assessment procedure. While many of us have experience with costing out projects, fewer people have substantial experience quantifying performance measures. If you intend to make the strongest case for developing and delivering Web-based services in your agency, you should select performance variables that are widely viewed as important. Avoid variables about which there is considerable disagreement, either about their importance or about how they will be affected by your service objectives.

You should also prefer measures that can be defined in easy, direct, and explicit ways. At this point in the planning process, it is not necessary to have actual data in hand, so you are not limited to variables about which the agency already collects data. Nevertheless, it is a good idea to identify measures for which information could be readily generated. You should anticipate that, at a later time, someone might decide to evaluate the agency's implementation of Web-based services using the work process and outcome variables that you select.

As discussed in Chapter 2, it may be helpful to think about the benefits of a WWW initiative in three performance categories: cheaper, faster, and better. Cheaper refers to all the ways that Web-based services may save resources such as time or money. An initiative may not produce savings immediately but only over the long term, sometimes by avoiding increased or entirely new agency costs in the future. Faster refers to shortening response times and waiting times, as well as the time required to distribute information that has not been directly requested. Providing information and services more quickly can also be considered an increase in efficiency, even though no cost savings may accrue to the agency. Better refers to all the other ways in which performance may be improved beyond the efficiencies of cost and speed. These improvements may be viewed as more "qualitative," though they can be measured, too.

You should not limit your thinking about performance improvements to your agency alone. Web-based services may make processes and outcomes cheaper, faster, or better for customers, for the general public, or for other agencies, as well. Keep all important stakeholders and constituencies in mind. We have collected a short list of variables to illustrate cheaper, faster, and better, although it is far from an exhaustive set. Don't let this list constrain your own creativity.

Cheaper


Faster


Better


How well will it work?

Performance assessment, as we use the term here, does not require collecting data. We are attempting to make forecasts or predictions: how well do we expect Web-based services to perform? Neither optimism nor pessimism is advantageous here--only realism. For each level of aspiration (i.e., modest, moderate, and elaborate), targeted measures of performance are required. These targets will most likely be "judgment calls," since no perfect prediction is available. As part of the Internet Services Testbed, we asked agency teams to discuss and agree on these targeted measures of performance for their own service objectives. No one felt entirely comfortable with their forecasts, but everyone agreed that they were using their most informed judgment at the time. For estimates such as these, informed, consensual judgment from a group of knowledgeable managers is probably the best forecasting approach available in this context.

A thorough treatment of performance measurement is beyond the scope of this guide. A good discussion of how to develop meaningful measures can be found in Information Management Performance Measures: Developing Performance Measures and Management Controls for Migration Systems, Data Standards, and Process Improvement, available from the National Academy of Public Administration, 1120 G Street, NW, Suite 850, Washington, DC 20005.

A sample worksheet for identifying performance variables, measures, and targets is shown below. For examples of its use, see p. 27 and p. 32. Often, only a few performance variables are identified, and this is sufficient to drive the analysis. On the other hand, you may decide that more performance variables are required to measure fully the improvements that you anticipate from WWW use. In principle, there is no limit to the number of performance variables that could be incorporated into this assessment.

Figure 4. Blank Performance Targets

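If your team prefers to keep the worksheet in electronic form, its contents can be represented as a simple data structure. The short Python sketch below is only a hypothetical illustration of the worksheet's layout--one row per performance variable, with a measure and a target for each level of aspiration. The variable names, measures, and target values shown are invented for illustration and are not drawn from the guide or from the Testbed agencies.

# Hypothetical sketch of a performance targets worksheet.
# All example variables, measures, and targets below are illustrative only.

from dataclasses import dataclass

@dataclass
class PerformanceTarget:
    variable: str   # work process or outcome variable (cheaper, faster, or better)
    measure: str    # how the variable would be measured
    modest: str     # target under a modest level of aspiration
    moderate: str   # target under a moderate level of aspiration
    elaborate: str  # target under an elaborate level of aspiration

targets = [
    PerformanceTarget(
        variable="Staff time spent answering routine telephone inquiries (cheaper)",
        measure="Hours per month, estimated by unit supervisors",
        modest="5% reduction",
        moderate="15% reduction",
        elaborate="30% reduction",
    ),
    PerformanceTarget(
        variable="Time to fill a request for a standard publication (faster)",
        measure="Average days from request to delivery",
        modest="10 days",
        moderate="5 days",
        elaborate="Same day (self-service download)",
    ),
]

# Print the worksheet rows for review at a team meeting.
for t in targets:
    print(f"{t.variable}")
    print(f"  Measure:   {t.measure}")
    print(f"  Targets:   modest={t.modest}; moderate={t.moderate}; elaborate={t.elaborate}")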