Findings
Mobility and Use
One important goal of the demonstration project was to assess the way having a laptop affected where work was done.1 Therefore the survey that participants received at the end of the pilot period asked them to estimate the number of hours per week they used their laptop in various locations. The three areas of primary interest for mobile use are in the field, in court, and at home. The reported average use in these three locations across all respondents is shown in Figure 1 below. Overall, the respondents used their laptops a little less than 6.5 hours per week in locations outside of the office during the pilot period, with almost half of that use at home. Use at other locations outside the office amounted to a little over three hours per week.

Figure 1 - Average Hours Per Week of Use by Location - All Districts
The reported use in court of approximately one-half hour per week was somewhat lower than expected, given results from our previous research about the long waiting times in court. However, the pilot period was less than two months for many of the participants, and several reported no court appearances during that time. These results therefore may not be typical of laptop use over longer time periods or reflect the full potential for significant use in courts. The low level of reported court use may also reflect limited wireless access (due to limited wide-area service, lack of hardware, or both) or the lack of private space in which to work at court. Opportunities for use in court and while moving about in the field were further limited by conditions in many of the courts and the cold weather.
The overall averages also mask considerable variation among the districts. Some reported much higher levels of use outside the office. The Putnam County respondents reported over nine hours per week of use at home and three in court, while respondents in both St. Lawrence and Suffolk counties reported over nine hours per week of laptop use, on average, in the field. The range of variation in use across these three locations was substantial, from 19 hours per week in one district to less than two in two others. Some of the difference may be due to the range of conditions across the districts described in Chapter 2; variability in connectivity and district policies may have affected where the technology could be used.
A different pattern of variation can be seen in the reports of impacts on work shown in Table 1 below. As with location of use, the survey of all participants at the end of the pilot period asked whether five types of work were better, worse, or about the same with their laptops. For all five kinds of work, the opinions ranged almost exclusively from “about the same” to “much better.” The most positive impacts reported were in “access to information” and “timeliness of documentation,” with over 50% of the respondents rating these areas “somewhat better” or “much better.” Ability to work in court improved for over 30% of the respondents, and communication with supervisors and service to clients were better for 20% and 28% respectively. Of the 226 participants who answered this question, there were only 22 instances of a reported worsening of ability to work with the laptops. The survey and interview comments included reports of technical difficulties with some devices and poor connectivity that may account for the negative reports on work impacts.
Table 1 - Reported Impacts on Work of Mobile Device Use – All Districts
| Impacts on: | Much worse % (n) | Somewhat worse % (n) | About the same % (n) | Somewhat better % (n) | Much better % (n) |
|---|---|---|---|---|---|
| Timeliness of documentation | 2% (5) | 2% (4) | 40% (91) | 40% (91) | 15% (35) |
| Ability to do work in court | 0% (1) | 1% (2) | 67% (141) | 23% (49) | 9% (18) |
| Ability to access case information | 1% (2) | 0% (1) | 36% (80) | 38% (86) | 25% (55) |
| Communication with supervisors | 0% (1) | 0% (1) | 78% (173) | 14% (32) | 6% (14) |
| Service to clients | 1% (2) | 1% (2) | 70% (156) | 21% (46) | 7% (16) |
Productivity
This assessment focused on productivity improvements in two main areas: timeliness of documentation and overall volume of documentation. For timeliness, we used three measures derived from data extracted from CONNECTIONS, NYS’s central child welfare information system:
- Timeliness of progress notes: These notes are to be entered in the system as soon as possible following the event or activity being documented. Timeliness is therefore reflected in how many days elapse between a particular event date and the date the progress note describing that event was entered. We examined the proportion of progress notes entered each day following the related event, which yields a productivity improvement measure based on the proportion of notes entered closer to the event date.
- Timeliness of safety assessments: These assessments are to be completed (i.e., approved by a supervisor) within seven days of the opening of an investigation. Our measure of improvement was therefore the number of assessments completed within seven days in the pre-pilot period compared to the pilot period.
- Timeliness of case closing: The investigation of a case should be completed within 60 days of its opening. Our measure of improvement was therefore the number of cases closed within 60 days during the pre-pilot period compared to the pilot period.
For volume of work, we used two measures:
- The number of progress notes per day entered in the system, prior to and during the pilot period. Using the number per day was necessary, rather than the total number of notes, since the pilot periods varied in length among the districts from over 70 days to a little over 20 days.
- The number of cases closed overall, both within 60 days and later than 60 days.
In designing the assessment, we attempted to make the pre-pilot period as close a match as possible to the pilot period. This approach supports comparisons of productivity that reflect as much as possible the influence of using the mobile technology. Therefore, the productivity data for the pre-pilot period were collected, as much as possible, for the same workers, doing the same kinds of work, and for the same number of days as in the pilot period. Since there was some turnover in the pilot participants in some districts, there is some variation in workers between the pre-pilot and pilot periods, but that variation is not large enough to affect the overall results.
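The extraction and calculation procedures themselves are not reproduced in this report. Purely as an illustration, the timeliness and volume measures above could be computed from a simple case-level extract along the following lines; the column names are assumptions for the sketch, not the real CONNECTIONS fields.

```python
# Illustrative sketch only: the actual CONNECTIONS extract layout is not documented
# here, so the column names below (event_date, entry_date, approval_date,
# case_open_date, open_date, close_date) are assumptions.
import pandas as pd

def timeliness_measures(notes: pd.DataFrame,
                        assessments: pd.DataFrame,
                        cases: pd.DataFrame) -> dict:
    """Compute the three timeliness measures described above for one period."""
    # Days elapsed between the documented event and entry of the progress note
    note_lag = (notes["entry_date"] - notes["event_date"]).dt.days
    # Days from investigation opening to supervisory approval of the safety assessment
    sa_days = (assessments["approval_date"] - assessments["case_open_date"]).dt.days
    # Days from case opening to case closing
    close_days = (cases["close_date"] - cases["open_date"]).dt.days
    return {
        # Proportion of progress notes entered 0, 1, 2, ... days after the event
        "pct_notes_by_day": note_lag.value_counts(normalize=True).sort_index(),
        "assessments_within_7_days": int((sa_days <= 7).sum()),
        "cases_closed_within_60_days": int((close_days <= 60).sum()),
    }

def volume_measures(notes: pd.DataFrame, cases: pd.DataFrame, period_days: int) -> dict:
    """Compute the two volume measures: notes entered per day and total case closings."""
    return {
        "notes_per_day": len(notes) / period_days,
        "total_cases_closed": int(cases["close_date"].notna().sum()),
    }
```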
Productivity could be affected by variation in the volume of open cases between the two data collection periods, which was outside our control. Fortunately, there was in fact very little change in overall intake or case volume from the pre-pilot to the pilot period, so the caseload over all 20 districts remained virtually unchanged (see Appendix E for changes in caseload from the pre-pilot to the pilot period). At the individual district level, however, there were some substantial changes from the pre-pilot to the pilot period. In two districts (Jefferson and St. Lawrence), there was a greater than 20% drop in open cases from the pre-pilot to the pilot period, and in two other districts (Rockland and Seneca) there was a greater than 10% increase in open cases during the pilot test period. For all districts combined, however, the total difference between the two periods was only 13 cases, out of a total of over 10,000 open in each period.
The results for the timeliness and number of case closings seem somewhat paradoxical, appearing to show a substantial improvement in the volume of case closings but, at the same time, a reduction in timeliness. These comparisons are shown together in Figure 2 below.

Figure 2 - Number of Cases Closed - All Districts, Pre-Pilot and During Pilot
The number of cases closed within the 60-day period increased from 2,194 in the pre-pilot period to 2,543 in the pilot period: an improvement in timeliness. However, the number of cases that took longer than 60 days to close increased as well, suggesting decreased timeliness. This apparent contradiction can be accounted for by the increase in the overall number of cases closed, from 3,836 in the pre-pilot period to 5,090 in the pilot period, a 32% increase. Since the overall number of open cases was essentially the same in both periods, the increase in closings of cases open 60 or more days appears to reflect efforts to clean up a backlog of older cases. Because this happened alongside a simultaneous improvement in timeliness for cases closed within 60 days, these results can be interpreted as improvements in both volume and timeliness of work during the pilot period.
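For reference, the number of cases that took more than 60 days to close in each period can be derived by subtracting the within-60-day closings from the totals above; these derived counts are not reported separately in the figures.

```latex
\begin{aligned}
\text{Closed after more than 60 days, pre-pilot} &= 3{,}836 - 2{,}194 = 1{,}642\\
\text{Closed after more than 60 days, pilot}     &= 5{,}090 - 2{,}543 = 2{,}547
\end{aligned}
```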
The reason for the apparent backlog reduction is not obvious. We asked each of the districts at the beginning of the project to describe changes in policy or practices that accompanied the deployment of the laptops; none reported official instructions to “clean up” any case backlogs. Thus it is not clear if these results are a consequence of administrative direction or a more informal response to the availability of the laptops. This question deserves further attention.
The results for productivity in the number of progress notes are much more clear-cut. There was a substantial increase in the overall number of progress notes per day for each tester during the pilot period. The increase, shown in Figure 3 below, is from an average of approximately 56 progress notes per day during the pre-pilot period up to over 64 per day during the pilot.

Figure 3 - Average Progress Notes/Day Pre Pilot and During Pilot - All Districts
This increase in the rate of progress note entry indicates some efficiency gains during the test period. The increase is not related to the number of cases available for work, which was unchanged. Nor does the relatively large increase in progress note output appear to be related directly to an increase in work time: respondents reported a slightly lower level of overtime during the pilot test period. The gain may be related to increased work done at home that was not compensated as overtime, but we have no data to test that possibility. The progress note increase is similar in direction to the overall increase in case closings. It seems likely, therefore, that the progress note increase is linked to the increase in case closings, and that both represent increases in productivity.
This increase in productivity was accompanied by what initially appeared to be lower performance in the timeliness of progress notes. In all the districts, the average elapsed time between an event and entry of the related progress note increased, thus decreasing timeliness. The overall timeliness results are shown in Figure 4 below. This pattern was consistent across all districts for the 1st through 7th days following an event, so an analysis of progress note timeliness for any individual district shows results similar to those in Figure 4.

Figure 4 - Average Percent of Progress Notes/Day Pre and During Test - All Districts
Rather than a simple decrease in overall performance, however, this finding is most likely a direct result of the work on closing a backlog of older cases discussed in relation to Figure 2 above. If there is a backlog of older cases, it seems likely that there is also a backlog of progress note entry for those cases. If the workers were attempting to reduce that backlog by entering progress notes for events farther in the past, then the average delay for progress notes would increase as the catching-up process unfolds.
Improving the timeliness of safety assessments is another area where mobile technology may support improved performance. Therefore, the assessment includes an examination of the timeliness of safety assessments during the pre-pilot and pilot test periods. A safety assessment is considered timely if completed (i.e., approved by a supervisor) within seven days of the opening of the case. The analysis below compares the percentage of safety assessments completed within and beyond seven days for the pre-pilot and pilot periods (Figure 5 below).

Figure 5 – Percent of Safety Assessment Approvals Pre and During Test - All Districts
These results show a substantial overall decline in the timeliness of safety assessments. In the pre-pilot period, approximately 52% of the safety assessments were completed within the first seven days. That dropped to 38% during the pilot test period, and the proportion of safety assessments approved in more than seven days increased correspondingly to over 60%. To see if this result was influenced by the choice of indicator, we examined different ways of counting safety assessment completions, both within and past the seven-day period. These included the results presented in Figure 5 above, which count only safety assessments on cases opened during each period. For other analyses, we also included cases opened prior to the period, provided the safety assessment was approved during the period. The results were similar.
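To make the two counting rules concrete, a minimal sketch of the corresponding filters is shown below; it reuses the assumed assessments layout from the earlier sketch (case_open_date and approval_date columns) and is an illustration only, not the actual analysis code.

```python
# Sketch of the two counting rules described above (assumed column names).
import pandas as pd

def approvals_by_rule(assessments: pd.DataFrame,
                      start: pd.Timestamp, end: pd.Timestamp) -> dict:
    days_to_approve = (assessments["approval_date"] - assessments["case_open_date"]).dt.days
    timely = days_to_approve <= 7

    # Rule used in Figure 5: count only assessments on cases opened during the period
    opened_in_period = assessments["case_open_date"].between(start, end)
    # Alternative rule: also include earlier cases, provided the assessment was
    # approved during the period
    approved_in_period = assessments["approval_date"].between(start, end)

    return {
        "opened_in_period": {
            "within_7_days": int((opened_in_period & timely).sum()),
            "beyond_7_days": int((opened_in_period & ~timely).sum()),
        },
        "approved_in_period": {
            "within_7_days": int((approved_in_period & timely).sum()),
            "beyond_7_days": int((approved_in_period & ~timely).sum()),
        },
    }
```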
These safety assessment results for timeliness are inconsistent with the productivity improvements for other measures, but do resemble the results for progress note timeliness. This suggests that the same “catching up” effect may be at work. That is, if during the test period the workers were concentrating on clearing up older cases, the timeliness of safety assessment may have been affected. It is also possible that adjusting to the new technology configurations slowed the normal work pace. As with the progress note findings, we do not have sufficiently detailed data about work practices to resolve this issue.
Satisfaction
At the end of the pilot period, participants were surveyed and asked to rate their overall satisfaction with laptop use. The rating used a five-point scale from 5 = “Very satisfied,” to 3 = “Neither satisfied nor dissatisfied,” to 1 = “Very dissatisfied.” The average satisfaction rating for each district is shown in Figure 6 below.

Figure 6 - Average Satisfaction Level with Laptop Use - by District
With the exception of Seneca County, all the satisfaction ratings averaged on the positive side of the range, with Albany, Chemung, and Wayne counties reporting very high overall satisfaction levels. The low satisfaction ratings for the Seneca County respondents are not reflected in their other survey results or comments, but may be related to a large increase in workload. That district experienced the largest workload increase between the pre-pilot and pilot periods: cases closed rose from 34 to 102, and the rate of progress note entry increased by over 70%. The satisfaction ratings for the other districts do not appear to be similarly related to changes in workload or productivity.

Figure 7 – Percent of Caseworkers that Would Recommend a Laptop to Do CPS Work
In the post-pilot survey, participants were also asked whether they would recommend using a mobile device to do CPS work to a colleague. Across all districts, 81% of the respondents said yes, they would recommend it, while 14% said maybe and 5% said no.
Relationship of Productivity Gains to Pilot Test Conditions
While there were overall productivity gains for the pilot test period, these gains were not consistent across all 20 districts. That lack of consistency prompted us to examine whether variations in the test conditions could account for the different productivity gains. Because of the small number of districts and the many variations in test conditions, it was not possible to statistically isolate or measure the independent influence of any particular factor. However, it is possible with this number of districts to explore whether there are groupings or clusters of districts that correspond to differences in one factor or another. Therefore we used a statistical clustering technique (K-Means analysis) to see whether productivity results appeared to be related to two kinds of test conditions: the availability of overtime compensation for work outside normal hours, and the favorability of technology conditions (connectivity, access to laptops). That is, the analysis tests whether districts could be grouped such that high or low measures of productivity were connected with favorable or unfavorable test conditions.
To perform the analysis, each district was rated as favorable or unfavorable on overtime conditions and on technology conditions (see Appendix F for a description of the coding of overtime and technology conditions). The K-Means analysis then forms clusters of districts that maximize the differences in average (mean) productivity gains across the clusters, grouping the districts with higher average gains under one test condition (favorable or unfavorable) and those with lower gains under the other. If the districts with favorable conditions cluster with appreciably higher productivity gains, that is evidence of a relationship.
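The analysis itself is not reproduced in this report. Purely as an illustration of this kind of two-cluster comparison, a minimal sketch using the scikit-learn KMeans implementation might look like the following; the productivity gains and condition codes are invented for the sketch and are not the pilot data.

```python
# Rough illustration only: invented district-level data, not the pilot results.
import numpy as np
from sklearn.cluster import KMeans

# One value per district: productivity gain (proportional change in case closings)
gains = np.array([0.35, 0.28, 0.31, 0.12, 0.09, 0.30, 0.15, 0.26, 0.11, 0.08])
# 1 = favorable overtime (or technology) conditions, 0 = unfavorable
conditions = np.array([1, 1, 1, 0, 0, 1, 0, 1, 0, 0])

# Form two clusters of districts based on productivity gain alone
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(gains.reshape(-1, 1))

# Then ask whether the higher-gain cluster is dominated by favorable-condition districts
for label in (0, 1):
    in_cluster = labels == label
    print(f"Cluster {label}: n={in_cluster.sum()}, "
          f"mean gain={gains[in_cluster].mean():.2f}, "
          f"share favorable={conditions[in_cluster].mean():.2f}")
```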
The results below come from separate analyses of increases in case closing and progress note entry, each clustered separately with overtime and technology conditions. Of the four resulting comparisons, three showed a substantial relationship between test conditions and productivity gains in the expected direction, and one less so. Those results are shown in Figure 8 through Figure 11 below. It is important to bear in mind that these results are based on examining only one possible influence on productivity at a time. Therefore, the results do not establish that improving overtime or technology conditions will cause improved productivity, but only that a relationship may exist that deserves further attention.
The analysis results in Figure 8 below show evidence of a relationship between higher case-closing performance and more favorable overtime conditions. Case closings in districts clustered with favorable overtime conditions were approximately 25% greater than those in the less favorable overtime cluster. The districts are also divided almost equally between the clusters, suggesting that the possible relationship holds generally across the districts.

Figure 8 - Increases in Case Closing by Overtime Conditions
The evidence of a relationship between overtime conditions and progress note improvement does not appear as strong as for case closings. The analysis in Figure 9 below shows only a modest 3% advantage for the favorable overtime cluster over the unfavorable one. Also, the distribution of districts between the clusters is quite uneven, suggesting that the possible relationship in this instance is less generally important.

Figure 9 - Increases in Progress Note Entry by Overtime Conditions
Differences in technology conditions appear to be more strongly related to productivity results than the overtime conditions examined above. For the increases in case closings shown in Figure 10 below, the favorable technology cluster performed about 10% better than the unfavorable one. For this comparison, the districts were evenly divided between the clusters, indicating a rather consistent pattern across the districts.

Figure 10 - Increases in Case Closing by Technology Conditions
A similar but even larger difference appears in the analysis of progress note entry in relation to technology conditions. The results in Figure 11 below show a 20% gap in performance between the favorable and unfavorable technology clusters. Though the distribution of districts between the clusters is not quite even, the size of the difference is strong evidence of a connection between technology conditions and progress note entry.

Figure 11 - Increases in Progress Note Entry per Day by Technology Conditions
Taken together, the results over all analyses present a predominantly positive picture of productivity gains during the pilot period. In terms of the overall volume of work, comparisons between the pre-pilot and pilot test periods show substantial increases. Timeliness of case closing improved, even with an increase in the overall number of cases closed over the two periods. Only the timeliness indicators for progress notes and safety assessments show decreases for the pilot test period. The progress note decrease appears to be accounted for by work on closing a higher proportion of older cases during the pilot period, not by an actual slowdown in the documentation process.
With any new technology implementation we would expect significant interactions with the normal work processes. That seems to be the most likely mechanism at work here. In the absence of a measurement effect, our best interpretation of this timeliness impact is essentially the same as for progress notes, i.e., work on a backlog of cases needing both progress notes and safety assessments. That kind of work pattern would shift the overall proportion of timely and late safety assessments for the pilot test period. This issue may be resolved with examination of more work process data than was available for this assessment.
1 The demonstration project included both laptop and tablet computers in some districts. Since this section deals with a mix of the two kinds of devices, it is not possible to consistently identify which results apply to one or the other device. Therefore we use the term laptop to include tablet PCs.
