Assessing Mobile Technologies in Child Protective Services: A Demonstration Project



Findings

Productivity

This assessment focused on productivity improvements in two main areas: timeliness of documentation and overall volume of documentation. For timeliness, we used three measures derived from data extracted from CONNECTIONS, NYS’s central child welfare information system:
  1. Timeliness of progress notes: Notes are to be entered in the system as soon as possible after the event or activity being documented, so timeliness is reflected in the number of days that elapse between an event and the entry of the progress note describing it. We examined the proportion of progress notes entered on each day following the related event, yielding a productivity improvement measure based on the proportion of notes entered closer to the event date (a computational sketch of all three measures follows this list).
  2. Timeliness of safety assessments: These assessments are to be completed (i.e., approved by a supervisor) within seven days of the opening of an investigation. Our measure of improvement in timeliness of safety assessments was therefore the number of assessments completed within seven days in the pre-pilot period compared to the pilot period.
  3. Timeliness of case closing: The investigation of a case should be completed within 60 days from its opening. Our measure of improvement in timeliness of case closing was therefore the number of cases closed within 60 days during the pre-pilot period compared to the pilot period.
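
To make these measures concrete, the following sketch shows how each could be computed from a case-level data extract. It is illustrative only: the file and column names (event_date, note_entry_date, open_date, and so on) are hypothetical stand-ins, not the actual CONNECTIONS field names.

```python
import pandas as pd

# Hypothetical progress note extract: one row per note, with the date of the
# documented event and the date the note was entered in the system.
notes = pd.read_csv("progress_notes.csv",
                    parse_dates=["event_date", "note_entry_date"])

# Measure 1: share of notes entered on each day following the related event.
notes["days_elapsed"] = (notes["note_entry_date"] - notes["event_date"]).dt.days
share_by_day = notes["days_elapsed"].value_counts(normalize=True).sort_index()

# Hypothetical case extract: one row per investigation.
cases = pd.read_csv("cases.csv",
                    parse_dates=["open_date", "assessment_approved_date",
                                 "close_date"])

# Measure 2: safety assessments approved within seven days of case opening.
assessment_days = (cases["assessment_approved_date"] - cases["open_date"]).dt.days
timely_assessments = (assessment_days <= 7).sum()

# Measure 3: investigations closed within 60 days of opening.
closing_days = (cases["close_date"] - cases["open_date"]).dt.days
timely_closings = (closing_days <= 60).sum()
```
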
For volume of work, we used two measures:
  1. The number of progress notes entered in the system per day, prior to and during the pilot period. A per-day rate was necessary, rather than a total count, because the pilot periods varied in length among the districts from a little over 20 days to more than 70 days (see the sketch following this list).
  2. The number of cases closed overall, both within 60 days and later than 60 days.
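
The volume measures follow the same pattern. The sketch below continues the hypothetical extract above; the period labels and matched period lengths are placeholders, since actual lengths varied by district.

```python
# Hypothetical "period" label on each note ("pre" or "pilot") and matched
# period lengths in days (placeholders; actual lengths varied by district).
pre_days, pilot_days = 45, 45

pre_rate = (notes["period"] == "pre").sum() / pre_days        # notes/day, pre-pilot
pilot_rate = (notes["period"] == "pilot").sum() / pilot_days  # notes/day, pilot

# Cases closed overall, split at the 60-day mark.
closed_within_60 = (closing_days <= 60).sum()
closed_beyond_60 = (closing_days > 60).sum()
```
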
In designing the assessment, we attempted to match the pre-pilot period as closely as possible to the pilot period, so that comparisons of productivity would reflect as much as possible the influence of using mobile technology. The productivity data for the pre-pilot period was therefore collected, as much as possible, for the same workers doing the same kinds of work as in the pilot period, and over the same number of days. Because of some turnover among pilot participants in some districts, there is variation in workers between the pre-pilot and pilot periods, but it is not large enough to affect the overall results.

Productivity could also be affected by variation in the volume of open cases between the two data collection periods, which was out of our control. Fortunately, there was very little change in overall intake or case volume from the pre-pilot to the pilot period, so the caseload across all 20 districts remained virtually unchanged (see Appendix E for changes in caseload from the pre-pilot to the pilot period). At the individual district level, however, there were some substantial changes. In two districts (Jefferson and St. Lawrence), open cases dropped by more than 20% from the pre-pilot to the pilot period, and in two others (Rockland and Seneca), open cases increased by more than 10% during the pilot test period. Across all districts, however, the total difference between the two periods was only 13 cases, out of more than 10,000 open in each period.

The results for timeliness and number of case closings seem somewhat paradoxical at first, appearing to show a substantial improvement in the volume of case closings alongside a contradictory decline in timeliness. These comparisons are shown together in Figure 2 below.

Figure 2 - Number of Cases Closed - All Districts, Pre-Pilot and During Pilot

The number of cases closed within the 60-day period increased from 2,194 in the pre-pilot period to 2,543 in the pilot period: an improvement in timeliness. However, the number of cases closed in longer than 60 days increased as well, suggesting decreased timeliness. This apparent contradiction can be accounted for by the increase in the overall number of cases closed from the pre-pilot period to the pilot period, from 3,836 to 5,090, an increase of roughly 33%. Since the overall number of open cases was virtually the same in both periods, the increase in closings of cases older than 60 days appears to reflect efforts to clear a backlog of older cases. Because this occurred alongside a simultaneous improvement in timeliness for cases closed within 60 days, these results can be interpreted as improvements in both the volume and the timeliness of work during the pilot period.
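
The decomposition behind this interpretation can be checked directly from the counts quoted above:

```python
# Closing counts quoted above, all districts combined.
pre_within_60, pilot_within_60 = 2194, 2543   # closed within 60 days
pre_total, pilot_total = 3836, 5090           # all closings

pre_beyond_60 = pre_total - pre_within_60        # 1,642 closed beyond 60 days
pilot_beyond_60 = pilot_total - pilot_within_60  # 2,547 closed beyond 60 days

increase = (pilot_total - pre_total) / pre_total  # about 0.33, i.e., 33%
```

Both components rose, but the beyond-60-day closings rose by considerably more, which is what clearing a backlog of older cases would produce.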

The reason for the apparent backlog reduction is not obvious. We asked each of the districts at the beginning of the project to describe changes in policy or practices that accompanied the deployment of the laptops; none reported official instructions to “clean up” any case backlogs. Thus it is not clear if these results are a consequence of administrative direction or a more informal response to the availability of the laptops. This question deserves further attention.

The results for productivity in the number of progress notes are much more clear-cut. There was a substantial increase in the overall number of progress notes entered per day by the testers during the pilot period. The increase, shown in Figure 3 below, is from an average of approximately 56 progress notes per day during the pre-pilot period to over 64 per day during the pilot.

Figure 3 - Average Progress Notes/Day Pre Pilot and During Pilot - All Districts

This increase in the rate of progress note entry indicates some efficiency gains during the test period. The increase is not related to the number of cases available for work, which was unchanged. Nor does the relatively large increase in progress note output appear to be related directly to an increase in work time: respondents reported slightly less overtime during the pilot test period. The gain may be related to increased work done at home that was not compensated as overtime, but we have no data to test that possibility. The progress note increase is similar in direction to the overall increase in case closings. It seems likely, therefore, that the two are linked and that both represent increases in productivity.

This increase in productivity was accompanied by what initially appeared to be lower performance in the timeliness of progress notes. In all the districts, the average elapsed time between an event and progress note entry increased, thus decreasing timeliness. One example of the timeliness results is shown in Figure 4 below. This pattern was consistent across all districts for the 1st through 7th days, so an analysis of progress note timeliness for any individual district would show results similar to those in Figure 4.

Figure 4 - Average Percent of Progress Notes/Day Pre and During Test - All Districts

Rather than a simple decrease in overall performance, however, this finding is most likely a direct result of the work on the backlog of older case closings discussed in relation to Figure 2 above. If there is a backlog of older cases, it seems likely that there is also a backlog of progress note entry for those cases. If workers reduce that backlog by entering progress notes for events farther in the past, the average delay for progress notes will increase as the “catching-up process” unfolds.
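
A small illustration of this mechanism, with invented delay values: mixing even a few backlog entries into otherwise timely current work is enough to pull the average elapsed time up sharply.

```python
import statistics

current_delays = [1, 2, 1, 3, 2, 1, 2]   # days; timely notes for current work
backlog_delays = [35, 42, 60]            # days; catch-up notes for older cases

print(statistics.mean(current_delays))                   # ~1.7 days
print(statistics.mean(current_delays + backlog_delays))  # ~14.9 days
```
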

Improving the timeliness of safety assessments is another area where mobile technology may support improved performance. The assessment therefore included an examination of safety assessment timeliness during the pre-pilot period and the pilot test period. A safety assessment is considered timely if completed (i.e., approved by a supervisor) within seven days of the opening of the case. The analysis below compares the percentage of safety assessments completed within and beyond seven days for the pre-pilot and pilot periods (Figure 5, below).

Figure 5 – Percent of Safety Assessment Approvals Pre and During Test - All Districts

These results show a substantial overall decline in the timeliness of safety assessments. In the pre-pilot period, approximately 52% of safety assessments were completed within the first seven days. That dropped to 38% during the pilot test period, with the proportion approved in more than seven days correspondingly rising to over 60%. To see if this result was influenced by the choice of indicator, we examined different ways of counting safety assessment completions, both within and past the seven-day period. These included the results presented in Figure 5 above, which count only safety assessments on cases opened during each period. For other analyses, we also included cases opened prior to the period, provided the safety assessment was approved during the period. The results were similar.
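
The two counting rules just described can be sketched as follows, continuing the hypothetical cases extract introduced earlier; the period boundaries are placeholders.

```python
# Placeholder period boundaries.
start, end = pd.Timestamp("2009-01-01"), pd.Timestamp("2009-03-31")

# Rule used in Figure 5: only cases opened during the period.
opened = cases[cases["open_date"].between(start, end)]
days = (opened["assessment_approved_date"] - opened["open_date"]).dt.days
pct_timely_opened = (days <= 7).mean() * 100

# Alternative rule: any case whose assessment was approved during the period,
# including cases opened before the period began.
approved = cases[cases["assessment_approved_date"].between(start, end)]
days = (approved["assessment_approved_date"] - approved["open_date"]).dt.days
pct_timely_approved = (days <= 7).mean() * 100
```
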

These safety assessment timeliness results are inconsistent with the productivity improvements on other measures, but they do resemble the results for progress note timeliness. This suggests that the same “catching up” effect may be at work: if workers were concentrating on clearing older cases during the test period, the timeliness of safety assessments may have been affected. It is also possible that adjusting to the new technology configurations slowed the normal work pace. As with the progress note findings, we do not have sufficiently detailed data about work practices to resolve this issue.