Making Smart IT Choices: Understanding Value and Risk in Government IT Investments



Information gathering techniques

Experiments

The essential purpose of an experiment is to learn about what influences the way some process or activity actually works. The data are typically a result of direct observation of behavior, albeit in a contrived and controlled situation. Experiments put you one step closer to understanding what might happen in a natural setting. The natural setting involves a combination of many interacting influences that make it very difficult to sort out the independent effects of one factor or another. So an experiment is designed to control enough of the factors to allow an assessment of the impacts of the specific ones that are of greatest interest or importance.

For an IT system or prototype, an experiment can become part of testing or evaluating system performance. The experimental design would have to provide for the system or prototype to function in an essentially natural way. For computing systems, these experiments often take the form of running a set of highly standardized and tested procedures or software routines that simulate actual use in a controlled way. The experimenters can apply the same procedures under systematically varied conditions, such as running the same simulation on varying hardware configurations. Experiments may also involve hypothetical work or service delivery situations. In such an experiment, carefully selected persons perform a standardized set of actions on a system under controlled conditions. The experimenter can observe and record the results of realistic work behaviors or client transactions. If well designed, such experiments can yield highly useful data for assessing systems and prototypes.
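As a rough sketch of what such a standardized procedure might look like in practice, the example below times the same fixed workload under two assumed software configurations and repeats each run several times. The dataset, the configurations, and the run count are illustrative assumptions, not a prescribed test design.

```python
import json
import pickle
import statistics
import time

# A fixed, standardized workload: serialize the same synthetic dataset.
DATASET = [{"id": i, "name": f"record-{i}", "active": i % 2 == 0} for i in range(5_000)]

# Hypothetical "configurations" under test; in a real experiment these might
# instead be hardware platforms, database settings, or network conditions.
CONFIGURATIONS = {
    "json": lambda: json.dumps(DATASET),
    "pickle": lambda: pickle.dumps(DATASET),
}

RUNS = 5  # repeat each configuration so that random variation can be averaged out

for name, routine in CONFIGURATIONS.items():
    timings = []
    for _ in range(RUNS):
        start = time.perf_counter()
        routine()
        timings.append(time.perf_counter() - start)
    print(f"{name}: mean {statistics.mean(timings) * 1000:.2f} ms, "
          f"stdev {statistics.stdev(timings) * 1000:.2f} ms")
```

The same harness can be re-run after any change in conditions, which is what makes the comparison experimental rather than anecdotal.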

What are they?


Ways to study what impacts performance. Experiments are artificially constructed and controlled situations designed to study what affects the performance of some system or process. On occasion, a so-called "natural experiment" can be useful as well, as when a change in the natural setting works in the same way as a deliberate experimental manipulation of the situation.

For example, if an organization changed a work procedure, but kept the workers, technology, incentives, and work setting constant, a comparison of productivity before and after the procedural change would be a natural experiment.
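A minimal sketch of that kind of before-and-after comparison, using invented productivity figures (transactions per worker per day), might look like the following; a real analysis would also ask whether the difference exceeds normal day-to-day variation.

```python
import statistics

# Hypothetical daily productivity figures (transactions per worker per day),
# recorded before and after the procedural change with other factors held constant.
before = [42, 39, 45, 41, 40, 44, 43]
after = [47, 49, 46, 50, 48, 45, 51]

mean_before = statistics.mean(before)
mean_after = statistics.mean(after)
change = mean_after - mean_before

print(f"Mean before change: {mean_before:.1f}")
print(f"Mean after change:  {mean_after:.1f}")
print(f"Observed difference: {change:+.1f} ({change / mean_before:.1%})")
```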

Direct observations of a situation under controlled conditions. Experiments allow you to directly observe and/or measure a situation, such as service delivery or system performance, under controlled conditions. Experimental controls can eliminate or account for the influence of all but the most important components of a system. This allows direct testing and evaluation of these high priority components.

What are they good for?


Observing and measuring activity. The activity of users, clients, and system components can be observed and measured under realistic, controlled conditions. Such experiments can be used to: assess how system performance may be affected by a significantly increasing scale of operations, provide benchmark data for evaluating system performance in natural settings, and repeat activities under consistent conditions to test system reliability, stability, and performance.
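The sketch below illustrates one way such measurements might be organized: the same simulated transaction is repeated at increasing scales, and each scale is run several times so that variability as well as average performance is recorded as benchmark data. The transaction, the scale levels, and the repeat count are placeholders.

```python
import statistics
import time

def simulated_transaction(payload_size):
    """Placeholder for a realistic client transaction (e.g., a form submission)."""
    payload = "x" * payload_size
    return payload.upper().count("X")

SCALES = [100, 1_000, 10_000]   # assumed levels for increasing scale of operations
REPEATS = 10                    # repeated runs to check consistency and stability

benchmark = {}
for scale in SCALES:
    durations = []
    for _ in range(REPEATS):
        start = time.perf_counter()
        for _ in range(scale):
            simulated_transaction(256)
        durations.append(time.perf_counter() - start)
    benchmark[scale] = (statistics.mean(durations), statistics.stdev(durations))

for scale, (mean_s, stdev_s) in benchmark.items():
    print(f"{scale:>6} transactions: {mean_s:.4f}s ± {stdev_s:.4f}s")
```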

Assessing system influences. Conducting an experiment on your system allows you to assess the influence of critical components or operational factors on performance or system behavior. By controlling for, or eliminating, the effects of less important factors, an experiment can illuminate the role of the most critical components in overall performance.
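One common way to set this up is a one-factor-at-a-time design: hold a baseline configuration fixed and vary only the component of interest. The sketch below assumes invented factor names and a stubbed-out trial function standing in for the real system.

```python
# Baseline configuration: every factor held at a fixed, known level.
BASELINE = {"cache": "on", "compression": "off", "pool_size": 10}

# The critical factor under study, varied one level at a time
# while everything else stays at its baseline value.
FACTOR, LEVELS = "pool_size", [5, 10, 20, 40]

def run_trial(config):
    """Stand-in for running the standardized workload against the real system
    under `config` and returning a measured response time in seconds."""
    return 1.0 / config["pool_size"] + (0.05 if config["cache"] == "off" else 0.0)

for level in LEVELS:
    config = dict(BASELINE, **{FACTOR: level})
    print(f"{FACTOR}={level}: simulated response time {run_trial(config):.3f}s")
```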

Evaluating reliability and stability. Experiments also allow you to assess a system's performance under low-frequency or extreme conditions. You can apply varied tests or operations systematically to evaluate performance under a pre-determined set of circumstances.
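For instance, a pre-determined set of scenarios, including rare or extreme ones, can be scripted and replayed consistently. The scenarios, parameters, and pass threshold below are purely illustrative.

```python
import time

# Hypothetical pre-determined scenarios, including rare or extreme conditions
# that would seldom occur in normal operation.
SCENARIOS = {
    "typical_load":    {"concurrent_users": 50,    "payload_kb": 4},
    "peak_load":       {"concurrent_users": 2_000, "payload_kb": 4},
    "oversized_input": {"concurrent_users": 50,    "payload_kb": 10_000},
}

SERVICE_THRESHOLD_S = 0.5  # assumed acceptable completion time

def run_scenario(params):
    """Placeholder for driving the system with the given parameters."""
    simulated_work = params["concurrent_users"] * params["payload_kb"]
    start = time.perf_counter()
    sum(range(simulated_work))  # stand-in for real processing
    return time.perf_counter() - start

for label, params in SCENARIOS.items():
    elapsed = run_scenario(params)
    status = "PASS" if elapsed < SERVICE_THRESHOLD_S else "FAIL"
    print(f"{label:>16}: {elapsed:.3f}s [{status}]")
```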

Some limitations and considerations


Can be costly. Experiments can be expensive to design and conduct. The construction of realistic, controlled conditions may require extensive laboratory facilities, equipment, or similar resources. Materials and protocols must be carefully designed. Participants must be recruited and prepared. The observation, recording, and analysis of experimental data may be very complex and time consuming as well.

May require unrealistic assumptions. When conducting experiments, you may have to make unrealistic assumptions in order to achieve the necessary controls. These can compromise the validity of the resulting observations. For example, experiments often call for participants to assume particular roles, such as business owner or teacher, so as to include the necessary range of transactions or clients. The ability of a participant to accurately play that role may be quite limited, and the resulting behavior may not be truly typical of people in that occupation.

May have ethical constraints. The actions that can be taken in an experiment may be governed by ethical or policy constraints. For example, some designs may be prohibited because they involve unacceptable costs or risks for participants, such as divulging sensitive or potentially damaging information or being subjected to highly stressful conditions.

Validity depends on the controls. In any experiment, the validity of the data depends directly on the effectiveness of the controls. All potential influences on the outcomes must be taken into account or provided for in effective ways. This requires detailed and extensive knowledge of the processes involved and of all the components of the experiment itself.

For more information


Babbie, Earl (2004). The Practice of Social Research, 10th Edition. Belmont, CA: Wadsworth Publishing.

Cook, T. D. and Campbell, D. T. (1979). Quasi-Experimentation: Design and Analysis Issues for Field Settings. Chicago: Rand McNally College Publications.

Morgan, D., ed. (1993). Successful Focus Groups: Advancing the State of the Art. Newbury Park, CA: Sage Publications.

Morgan, D., Krueger, R., and King, J. (1998). Focus Group Kit. Thousand Oaks, CA: Sage.