Would you help us improve our customer service by completing a short survey?
How many times have you been asked this question? How many times have you been the one asking it? The number of times I have asked someone to complete a survey has increased considerably in the last three months.
In May I asked all participants in the science-stakeholder engagement portion of the Willamette Water 2100 (WW2100) research at OSU to complete an online questionnaire about their participation in, expectations of, experiences with, and outcomes from WW2100. After inviting all 238 members of the WW2100 listserv to participate in my survey, reminding them, and then reminding them again, 137 WW2100 participants took the time (approximately 30 minutes) to thoughtfully respond to my questions. That's 137 respondents out of 238 invited, a respectable response rate of 58%. The questionnaire was about 30 questions long and looked something like this:
In my last post I wrote about conducting interviews with select participants in WW2100. Using more than one method to gather data is called a ‘mixed methods approach’ and can deliver more robust results, because findings are corroborated by more than one line of evidence. That is what I intend to do with my qualitative interview data and, now, my quantitative survey data.
Qualitative data analysis can be done in several ways, but I have chosen to use a ‘fancy highlighting program’ called MaxQDA (description credit to Sarah Calhoun, MRM, OSU). This program allows a researcher to identify relevant text and organize it according to codes, or repeating ideas, which can then be displayed together on request. I have just completed my second pass of coding (identifying repeating ideas and organizing them into themes), so now when I open MaxQDA on my computer, it looks something like this:
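If you think in code rather than in highlighters, here is a toy sketch of what coding amounts to under the hood. This is a hypothetical Python illustration I made up for this post, not MaxQDA's actual interface or API:

```python
from collections import defaultdict

# Map each code (repeating idea) to the list of excerpts tagged with it.
coded_segments = defaultdict(list)

def code(segment: str, *codes: str) -> None:
    """Attach one or more codes to an interview excerpt."""
    for c in codes:
        coded_segments[c].append(segment)

# Hypothetical excerpts, invented for illustration:
code("I never knew what the modelers expected from us.", "role expectations")
code("The meetings were long but I learned a lot.", "process utility", "understanding")

# Display every excerpt tagged with a given code, on request:
for excerpt in coded_segments["role expectations"]:
    print(excerpt)
```

That retrieval step, pulling up every passage tagged with the same idea, is essentially what the ‘fancy highlighting’ buys you, at the scale of dozens of interviews.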
Concepts that emerged in the qualitative data can be assessed quantitatively using the survey data, and I would like to share some (very) preliminary results from my analyses with you. One challenge that consistently emerged in the interviews was that participants held different expectations for the roles that stakeholders and research team members would play throughout the process. The survey results do indicate that stakeholders and research team members were expected to fulfill different roles; however, the two groups did not differ in their expectations for each other. That is, albeit preliminarily, stakeholders and research team members held quantitatively similar expectations for each role. I'll have to explore this challenge more.
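For the statistically curious, the group comparison behind that finding can be sketched in a few lines of Python. The numbers below are made up for illustration; they are not my survey data:

```python
from scipy import stats

# Hypothetical 1-5 ratings of how active a role stakeholders should play,
# as reported by the two respondent groups (illustrative values only).
stakeholder_ratings = [4, 5, 3, 4, 4, 5, 3, 4]
researcher_ratings = [4, 4, 3, 5, 4, 4, 3, 5]

# Independent-samples t-test: do the two groups differ, on average,
# in their expectations for the stakeholder role?
t_stat, p_value = stats.ttest_ind(stakeholder_ratings, researcher_ratings)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A large p-value (conventionally > 0.05) means no detectable difference
# between the groups' expectations.
```

An independent-samples t-test is one reasonable choice here; with ordinal survey scales, a rank-based alternative such as the Mann-Whitney U test would be equally defensible.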
Another aspect of stakeholder engagement that my survey can really illuminate is the outcomes of having participated in such a process. My survey measured participants' perceptions of model utility (the scientific results), process utility (was it worth the time?), feeling heard, and understanding. I then correlated each of these concepts with a participant's level of participation in the project. All four were significantly positively correlated with participation: individuals who reported greater participation also reported greater model utility, process utility, feeling heard, and understanding. How neat is that?!
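Conceptually, each of those four results boils down to a correlation like the one sketched below. Again, the numbers are invented for illustration, not taken from my data:

```python
from scipy import stats

# Hypothetical scores on 1-5 scales: self-reported participation level
# and perceived model utility for ten imaginary respondents.
participation = [1, 2, 2, 3, 3, 4, 4, 5, 5, 5]
model_utility = [2, 2, 3, 3, 4, 3, 4, 4, 5, 5]

# Pearson correlation: a positive r with a small p-value indicates that
# higher participation goes hand in hand with higher perceived utility.
r, p = stats.pearsonr(participation, model_utility)
print(f"r = {r:.2f}, p = {p:.4f}")
```

With Likert-style survey items, a rank-based measure such as Spearman's rho (scipy.stats.spearmanr) is often preferred over Pearson's r, but the logic is the same.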
There is still a lot of work to be done, but the data are in, and all that is left is to make sense of them. I split my time between the words of my interviewees and the numbers from my survey. Together they will tell a fuller story of the WW2100 science-stakeholder engagement process, one that can then serve as an example for future engagement projects. Stay tuned for the final results and presentation!