Overall Assessment Results
Issues in Interpreting Assessment Results
The performance of any technology, especially those used in these pilot projects, is the result of a complex mix of human, organizational, and technical influences. The pilot tests were conducted in real field situations that reflect this complexity, not in laboratory experiments with elaborate and rigorous controls. Therefore, the assessment results, though valid and useful, do not answer many possible questions about the causes and implications of the technologies' impacts. In addition, the number of test participants was small in some parts of the pilot, so some of the important work situations and associated challenges may not be represented in the results.
Timing was a particular issue in two of the Local District initiatives. In Monroe County, the participants had access to the various devices for such a short time that it is very difficult to distinguish results due to improved efficiency from the disruptive effects of learning a new work method. Other factors include possible resistance to change by some workers and natural variations in workload. The timing problems were exacerbated by delays in deploying devices in Monroe County and by the decision to rotate the various devices among the NYC/ACS participants on two-week cycles. In addition, there was limited time for training and deployment support in all three Local District initiatives. The timing in the NYC/ACS test was further complicated because it occurred during a period the workers referred to as a “crisis.” During the test period the workers were allowed to use paid overtime and were instructed to devote extra effort to reducing the backlog of open cases. As a consequence, it is not possible to separate the possible effects of the new technologies from the effects of these management actions.
Other issues are related to use of the data from the central CONNECTIONS repository. We extracted data about entries by all test participants for the month prior to and during their pilot test period to trace possible technology impacts on the timeliness and reporting workflow for progress notes. Our findings on timeliness and workflow impacts (presented below) include analysis of these data; however, the nature of the data supports only very rough conclusions about technology impacts for these tests. The repository records the timing and types of progress notes entered, but not their length or quality. During the pilot test period, the participants were working on a mix of cases: some open for long periods prior to the pilot test, some started and closed during the pilot, and others remaining open at the end of the test period. Therefore, the notes entered during the pilot test period applied to both new and older cases, ranging in age from as little as a day to over two months. The number of notes per case varied widely, as did the types of notes entered. Moreover, the data do not include the ultimate disposition of the cases or any rating of the quality of outcomes obtained. Thus the analysis supports only very general conclusions about timeliness and workflow impacts.1 A more complete evaluation would require a considerably longer test period, some explicit control factors, and more detailed assessment of note quality and case outcomes.
1 To compare the pre-pilot and pilot test periods it was necessary to assume that the two periods were the same with respect to the kinds of cases involved, the distribution of actions required for the cases, and the overall rates of cases opening and closing.
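
To make the nature of this pre/post comparison concrete, the sketch below shows one way the extracted progress-note data could be summarized. It is illustrative only, not the analysis actually performed: the file name, the column names (event_date, entry_date, worker_id), and the cutover date are hypothetical placeholders, since the report does not specify the layout of the CONNECTIONS extract.

```python
# Illustrative sketch of a pre-pilot vs. pilot comparison of progress-note
# entry lag. All field names and dates below are hypothetical examples.
import pandas as pd

notes = pd.read_csv("progress_notes_extract.csv",
                    parse_dates=["event_date", "entry_date"])

# Lag (in days) between the casework event and when the note was recorded.
notes["entry_lag_days"] = (notes["entry_date"] - notes["event_date"]).dt.days

# Label each note as belonging to the pre-pilot month or the pilot period.
pilot_start = pd.Timestamp("2000-06-01")  # hypothetical pilot start date
notes["period"] = notes["entry_date"].apply(
    lambda d: "pilot" if d >= pilot_start else "pre-pilot")

# Per-worker note volume and median entry lag, averaged within each period.
summary = (notes.groupby(["period", "worker_id"])
                .agg(notes_entered=("entry_lag_days", "size"),
                     median_lag_days=("entry_lag_days", "median"))
                .groupby("period")
                .mean())
print(summary)
```

Even a summary of this kind rests on the assumption stated in the footnote above, namely that case mix, required actions, and opening/closing rates were comparable across the two periods.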
