Appendix I. Survey Methodology, Sample, and Response Rate

The survey sample included a mix of local and state government professionals from criminal justice and public health agencies across the 50 states and Washington, DC. The population is unidentified, and we therefore did not use random sampling strategies to identify our potential respondents. Instead, participants were identified either by their involvement in past or current government CBI initiatives or by their positions in government agencies responsible for providing criminal justice or public health related services.

To ensure a common understanding among survey respondents, cross-boundary was defined at the beginning of the survey as encompassing several possible kinds of boundaries: across different units or departments within a single organization; across different agencies; across different levels of government; and across public, private, non-profit, and academic sectors. An information sharing initiative was defined as a government-led effort to develop the necessary institutional, organizational, and technological policies, processes, and systems that allow organizations or multiple units within a single organization to share and use both internal and external information. Survey respondents were asked their opinions about a mix of policy, organizational, social, and technical factors relating to one specific, U.S.-based government CBI initiative in which they had personally participated within the last five years. Respondents were asked to choose the initiative they knew best, regardless of its current status (e.g., still in development, defunct, or implemented) or its level of success or effectiveness.

The full administration of the survey began with e-mail invitations to 815 government contacts: 361 individuals in criminal justice agencies and 454 individuals in public health agencies. The invitation contained a description of the survey project and background on the previous research leading up to the survey. Members of our sample were informed that the link to the survey itself would be e-mailed the following week, and they were given the opportunity to opt out before receiving the survey. In addition, the invitation asked for contact information for individuals who could replace invitees who chose to opt out; suggestions for additional survey participants were also welcomed. In total, we had 15 opt-outs from criminal justice agencies, 7 of whom were replaced by alternate contacts, and 36 opt-outs from public health agencies, 4 of whom were replaced by alternate contacts. The invitation also allowed us to check for working e-mail addresses, resulting in 36 non-contacts from criminal justice agencies and 42 non-contacts from public health agencies. A follow-up e-mail containing each individual's unique survey link was sent approximately one week later, and reminders were sent to non-respondents two, four, five, and six weeks after the first survey-link e-mail.

Our final sample size was 617 individuals. There were 71 opt-outs without replacements, for an opt-out rate of 11.5%, and 173 completed surveys, for a completion rate of 28%. The remaining 373 individuals did not complete the survey. The final make-up of the sample was satisfactory for statistical analysis. We received responses from 48 states, and they were well distributed across the United States. When we broke down the percentages by policy domain and level of government to compare respondents who completed the survey with non-respondents and individuals who opted out, the results were very similar by policy domain; the same was true when we compared respondents who completed the survey with the entire sample.
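A minimal sketch of the rate arithmetic reported above, in Python, using only the counts given in the text (the variable names are our own):

    # Counts taken directly from the text above.
    final_sample = 617
    opt_outs = 71      # opt-outs without replacements
    completed = 173    # completed surveys

    opt_out_rate = opt_outs / final_sample      # 71 / 617, about 0.115
    completion_rate = completed / final_sample  # 173 / 617, about 0.280
    non_respondents = final_sample - opt_outs - completed  # 617 - 71 - 173 = 373

    print(f"Opt-out rate:    {opt_out_rate:.1%}")    # 11.5%
    print(f"Completion rate: {completion_rate:.1%}")  # 28.0%
    print(f"Non-respondents: {non_respondents}")      # 373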

However, across all comparisons, the percentages differed by level of government. Therefore, our results are not necessarily representative by level of government.