Building On-ramps to International Research Collaboration



Appendix A: Evaluation Approach and Data

The evaluation was conducted between 2007 and 2010. Using both qualitative and quantitative methods, we pursued two evaluation goals. The first was to determine the efficacy of the WG strategy and the iGov Institute strategy as mechanisms for launching transnational DG research collaborations that are innovative, diverse, sustainable, and influential on research practice. The second was to identify replicable actions, resources, incentives, strategies, stakeholders, relationships, and methods that lead to efficacy. We analyzed the data using both descriptive and inferential methods.

WORKING GROUP
  • Participant Survey – In October 2010, a survey was sent to 91 participants identified as current members of the three working groups. In total, 55 participants responded (60 percent response rate, including at least 50 percent from each group). The survey consisted of 35 Likert-type scale items, three questions regarding the number of certain kinds of academic outputs, two open-ended questions and a set of demographic items. Together the questions covered:
    1. Opinions about general and specific elements of experience with the working group
    2. Assessment of the value of certain features of the working group strategy, such as the value of face-to-face meetings
    3. Identification of research products such as journal articles and grant proposals associated with participation in the working group
    4. Interactions in the DG community during the time of the experiment such as conference participation and academic exchanges
    5. Demographic questions such as amount of international experience, discipline, institutional location, and rank
    6. Open-ended questions covering personal and professional benefits, achievements, or other community building activities
Additional variables were created or calculated in order to assign respondents to groups according to citizenship (US versus non-US), and length of experience with transnational and comparative research, DG research, and international DG research (i.e., five or fewer years versus six or more). In addition, three multi-item scales were created to represent key concepts in the experiment: working group requirements, international awareness, and individual career effects.
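As a rough illustration, the derived-variable step described above can be sketched as follows. The field names, sample values, and the choice of a simple mean for the multi-item scales are hypothetical assumptions for illustration; only the groupings themselves (US versus non-US, five or fewer versus six or more years) come from the text.

```python
# Sketch of deriving grouping variables and a multi-item scale score
# from one survey response. All data values here are invented.

def citizenship_group(country):
    """Dichotomize citizenship into US versus non-US."""
    return "US" if country == "US" else "non-US"

def experience_group(years):
    """Split length of experience at the five-year cut-point."""
    return "five or fewer" if years <= 5 else "six or more"

def scale_score(items):
    """Multi-item scale score, here assumed to be the mean of its
    Likert items (the report does not specify the scoring rule)."""
    return sum(items) / len(items)

# One hypothetical respondent record.
respondent = {
    "citizenship": "DE",
    "intl_dg_years": 8,
    "international_awareness_items": [4, 5, 3, 4],  # Likert 1-5
}

groups = {
    "citizenship": citizenship_group(respondent["citizenship"]),
    "intl_dg_experience": experience_group(respondent["intl_dg_years"]),
    "international_awareness": scale_score(
        respondent["international_awareness_items"]
    ),
}
print(groups)
```

The same pattern would apply to the other dichotomies (transnational and comparative research, DG research) and to the remaining two scales (working group requirements, individual career effects).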
  • Semi-structured interviews with co-chairs – Between 2008 and 2010, the US co-chair from each group participated in three semi-structured interviews concerning the progress of the group. Questions addressed changes in goals, relationship development between junior and senior faculty, experiences at face-to-face meetings, communication tools and strategies, obstacles encountered, and dissemination of results.
  • Reflection day workshop – Members from each working group (including both co-chairs, one US working group member, one international member, and a student) participated in a two-day workshop at the end of the grant period to reflect on the experience, identify lessons learned, and share observations and ideas on the future. Participants engaged in small group interviews based on their group and their role (student, faculty, practitioner, co-chair), as well as plenary sessions with all participants.
  • Observations of working group meetings – One member of the evaluation team attended at least one meeting of each working group as a formal observer.
  • Related artifacts – Meeting agendas, minutes, email correspondence, online collaboration repositories, press releases, and other artifacts were collected to provide context.
The main limitation of the survey portion of the study is its reliance on the reported experiences and opinions of the participants as the main source of data – it tells us what participants did and what they think about the experience, but it does not tell us why or how the results were obtained. Therefore, in the qualitative phase, our goals were to identify specific actions, resources, strategies, stakeholders, relationships, and methods that appear to be associated with successful elements of each group. We considered aspects such as leadership, management, goals and incentives, meeting structure, activities between meetings, and technology use to try to understand the dynamics, challenges, and accomplishments of each of the three groups.

iGOV INSTITUTE
  • Observations – One member of the evaluation team attended each of the iGov Institutes (2007-2010) as a formal observer.
  • Exit and follow-up surveys – An exit survey was administered to all of the iGov cohorts (i.e., 2007-2010, total n = 74) within two months of attending the institute. In total, 74 participants responded (a 100 percent response rate). Follow-up surveys were administered to the 2007, 2008, and 2009 cohorts one year later (n=46 of 54, response rate 85 percent) and again to the 2007 and 2008 cohorts two years later (n=27 of 34, response rate 79 percent). The exit survey consisted of 10 Likert-scale items with multiple sub-items, open-ended questions, and network questions. The subsequent follow-up surveys tracked changes in attitudes and opinions on a subset of the 10 Likert-scale exit survey items, and added further Likert-scale and open-ended questions.
Together the surveys covered the following topics:
  • Opinions about general and specific elements of the experience;
  • Assessment of the value of certain features of the iGov program, such as the value of discussion-based site visits;
  • Identification of research products such as journal articles or dissertations associated with iGov participation or influence;
  • Interactions in the larger DG community during the time of the experiment such as conference participation;
  • Barriers to engaging in international education opportunities such as funding or visa requirements;
  • Demographic questions such as amount of international experience, discipline, institutional location, and year in doctoral program; and
  • Several open-ended questions covering personal and professional benefits or achievements, and other community building activities.
Additional variables were created or calculated in order to assign respondents to groups according to citizenship (US versus non-US), gender, status in doctoral program (Advanced: three or more years versus Early: two or fewer years), home base of educational institution (US-based versus internationally based), and citizenship in a developed or developing country.
  • Related artifacts – iGov programs, contents of the wiki associated with each year, email correspondence, press releases, and other artifacts were collected to provide context information.
The survey and observation data also served to assess the strengths and weaknesses of each year’s program, examining curriculum, speakers, site visits, location, and overall experience. Using formative assessments provided an active learning cycle from year to year. For example, the addition of a local walking tour, a “speed dating” exercise, and junior faculty participants resulted from the first-year evaluation. The exit and follow-up surveys were analyzed by individual cohort and also combined to represent an overall assessment of the iGov strategy.
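The cohort-level and combined analyses described above can be sketched as follows; the Likert response values are invented for illustration, and the per-item mean stands in for whatever descriptive statistics the evaluation actually reported.

```python
# Hypothetical Likert responses to one survey item, keyed by cohort year.
responses = {
    2007: [4, 5, 3],
    2008: [5, 4],
    2009: [3, 4, 4],
}

def mean(xs):
    return sum(xs) / len(xs)

# Analysis by individual cohort.
by_cohort = {year: mean(scores) for year, scores in responses.items()}

# Combined analysis: pool all cohorts into one sample to represent an
# overall assessment across years.
pooled = mean([s for scores in responses.values() for s in scores])

print(by_cohort, pooled)
```

Note that the pooled mean weights each respondent equally, so larger cohorts contribute more than smaller ones; averaging the cohort means instead would weight each year equally.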