And now it comes to this: thesis data analysis. Mostly, I am doing qualitative analysis of the interviews and quantitative analysis of the eye-tracking data. However, I will also quantify some of the interview coding and “qualify” the eye-tracking data, mainly as I analyze the paths and orders in which people view the images.

So now the questions become, what exactly am I looking for, and how do I find evidence of it? I have some hypotheses, but they are pretty general at this point. I know that I’m looking for differences between the experts and the non-experts, and among the levels of scaffolding for the non-experts in particular. For the interviews, that means I expect experts will 1) have more correct answers than the non-experts, 2) have different answers from the non-experts about how they know the answers they give, 3) be able to answer all my questions about the images, and 4) have basically similar meaning-making across all levels of scaffolding. This means I have a general idea of where to start coding, but I imagine my code book will change significantly as I go.

With the eye-tracking data, I’ll also be building the model as I go, especially as this analysis is new to our lab. With the help of a former graduate student in the Statistics department, I’ll start with the most general differences: whether the number of fixations (as defined by a minimum dwell time within a maximum-diameter area) differs significantly 1) between experts and non-experts overall, with all topics and all images included, 2) between the supposedly maximally different unscaffolded vs. fully-scaffolded images, with both populations included, and 3) between experts looking at unscaffolded images and non-experts looking at fully-scaffolded images. At this point, I think there should be significant differences in cases 1 and 2, but I hope that the difference in case 3, if significant at all, will be smaller, indicating that the non-experts are indeed moving closer to the patterns of experts when given scaffolding. However, this may not reveal itself in the eye-tracking: the two populations could make similar meaning, as reflected in the interviews, without showing the same patterns of eye movements. That is, the non-experts might be less efficient than experts but still eventually arrive at a better answer with scaffolding than without.
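As a concrete sketch of comparison 1, here is roughly what the simplest version could look like, assuming per-participant fixation counts have already been exported to a flat file. The file name, column names, and choice of tests below are my own placeholders for illustration, not an established lab pipeline.

```python
# Sketch of the most general comparison: do per-participant fixation counts
# differ between experts and non-experts? The CSV layout and column names
# here are hypothetical placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("fixation_counts.csv")  # hypothetical: one row per participant per image

# Collapse to one value per participant (total fixations across all images and topics)
per_person = df.groupby(["participant", "group"])["n_fixations"].sum().reset_index()

experts = per_person.loc[per_person["group"] == "expert", "n_fixations"]
novices = per_person.loc[per_person["group"] == "non-expert", "n_fixations"]

# Welch's t-test (no equal-variance assumption); Mann-Whitney U is a reasonable
# nonparametric alternative if the counts turn out badly skewed.
t, p = stats.ttest_ind(experts, novices, equal_var=False)
u, p_mw = stats.mannwhitneyu(experts, novices, alternative="two-sided")
print(f"Welch t = {t:.2f}, p = {p:.3f}; Mann-Whitney U = {u:.1f}, p = {p_mw:.3f}")
```

Comparisons 2 and 3 would simply change which rows get grouped; repeated measures across images will eventually call for something more sophisticated than a two-sample test, which is exactly where the statistics help comes in.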

As for the parameters of the eye-tracking, the standard minimum dwell time for a fixation in our software is 80 ms, and the maximum diameter is 100 pixels, but again, we have no lab standard for this, so we’ll play around with these values and see whether results hold up (or emerge) at smaller dwell times or at least smaller diameters. My images are only 800×600 pixels, so a maximum diameter of 1/8th to 1/6th of the image seems rather large. Some of this will be mitigated by the use of areas of interest drawn on the image, where the distance between areas could dictate a smaller maximum diameter, but at this point, all of this remains to be seen, and to some extent the analysis will be very exploratory.
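To make those two parameters concrete, here is a rough sketch of dispersion-threshold (“I-DT”-style) fixation detection. This is not the vendor software’s actual algorithm: the sample format is made up, and I’m using bounding-box width plus height as the dispersion measure rather than a true maximum diameter.

```python
# Rough dispersion-threshold fixation detection, just to make the two
# parameters concrete. NOT the vendor's algorithm; sample format and the
# dispersion measure (bounding-box width + height) are simplifications.

MIN_DURATION_MS = 80     # minimum dwell time for a fixation
MAX_DISPERSION_PX = 100  # maximum spread allowed within one fixation

def detect_fixations(samples, min_dur=MIN_DURATION_MS, max_disp=MAX_DISPERSION_PX):
    """samples: list of (timestamp_ms, x, y). Returns (start_ms, end_ms, cx, cy) tuples."""
    fixations = []
    i = 0
    while i < len(samples):
        # Grow a window until it spans at least the minimum duration.
        j = i
        while j < len(samples) and samples[j][0] - samples[i][0] < min_dur:
            j += 1
        if j >= len(samples):
            break
        xs = [s[1] for s in samples[i:j + 1]]
        ys = [s[2] for s in samples[i:j + 1]]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= max_disp:
            # Keep extending while the dispersion threshold still holds.
            while j + 1 < len(samples):
                xs.append(samples[j + 1][1])
                ys.append(samples[j + 1][2])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_disp:
                    xs.pop(); ys.pop()
                    break
                j += 1
            fixations.append((samples[i][0], samples[j][0],
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j + 1
        else:
            i += 1  # slide the window forward one sample
    return fixations
```

Varying MIN_DURATION_MS and MAX_DISPERSION_PX in a sketch like this is essentially the sensitivity check described above: do the same differences appear when fixations are defined more strictly?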

That’s the plan at the moment; what are your thoughts, questions, and/or suggestions?

Question: should we make some of the HMSC VC footage available for viewing by anyone who wants to see it? I was thinking the other day about what footage we could share with the field at large, as sharing is part of our mandate in the grant. Would it be helpful, for instance, to be able to see what goes on in our center, and maybe play around with viewing our visitors, if you were considering:

a) being a visiting scholar and seeing what we can offer

b) installing such cameras in your center

c) just seeing what goes on in a science center?

Obviously this brings up ethical questions, but consider an example: the Milestone Systems folks who made the iPad app for their surveillance system make the footage from the cameras inside and outside their own office building available to anyone with the app. Do they have signs telling people walking up to, or in and around, their building that this is the case? I would guess not.

I don’t mean that we should share audio, just video, but our visitors will already presumably know they are being recorded. What other considerations come up if we share the live footage? Others won’t be able to record or download footage through the app.

What would your visitors think?

Right now, we can set up profiles for an unlimited number of people who contact us to access the footage with a username and password, but I’m talking about putting it out there for anyone to find. What are the advantages, other than being able to circumvent contacting us for the login info? Other possible disadvantages: bandwidth problems, as we’ve already been experiencing.

So, chew over this food for thought on this Christmas eve, and let us know what you think.

At a student conference recently, a man in a plaid jacket with elbow patches was very upset about my poster.  He crossed his arms across his chest and made a lot of noises that sounded like “Hmph!”  I asked if he had any questions.  “Where’s your control?? How can you say anything about your results without having a control group?”

The natural sciences and the social sciences share common roots in the discipline of philosophy, but the theoretical underpinnings and assumptions of the two fields are completely different.  I don’t know what happened when science and psychology were young siblings, but man, those two are separate monsters now.

Just so you know, I have never taken the Qualitative Methods course.  I have never conducted interviews or analyzed qualitative data.  But I find myself in situations where I need to explain and defend this type of research, as I plan to interview citizen science volunteers about their experience in the program.  I am learning about the difference between qualitative and quantitative data with each step of my project, but there is a LOT I still need to learn.

Here’s what I know about interviews:

  1. They are an opportunity for the participant to think about and answer questions they literally may have never thought about before.  Participants create/reflect on reality and make sense of their experience on the spot, and share that with the interviewer.   Participants are not revealing something to you that necessarily exists already.  Interviewers are not “looking into the mind” of the participant.
  2. It’s important to avoid leading questions, or questions where the answer is built in.  Asking a volunteer “Tell me how you got interested in volunteering…” assumes they were interested when they started volunteering.  Instead, you can ask them to provide a narrative of the time they started volunteering.  When volunteers respond to the prompt “Tell me about when you started volunteering with this program…”  they may tell you what interested them about it, and you can follow up using their language for clarification.  Follow-up and probing questions are the most important.  Good default probes include “Tell me more about that,” and “What do you mean by that?”
  3. You don’t necessarily set the sample size ahead of time, but wait for data saturation.  Let’s say you do 12 interviews and participants all give completely different answers.  You do 12 more interviews and you get fewer new types of responses.  You do 12 more and you don’t get any new types of responses.  You might be done!  Check for new discrepant evidence against your existing claims or patterns.
  4. Reporting qualitative data involves going through your analysis claim by claim, backing each claim with (4-5 paragraphs of) supporting evidence from the interviews.  I’ve read that there’s no one right way to analyze qualitative data, and your claims will be valid as long as they represent consistent themes or patterns that are supported by evidence.  Inter-rater reliability is another way to check the validity of claims (a toy example follows below).
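For anyone curious about the inter-rater reliability piece, here is a toy example using Cohen’s kappa for two coders applying the same code book to the same interview excerpts; the codes and labels are invented purely for illustration.

```python
# Toy inter-rater reliability check with Cohen's kappa: two coders label the
# same six interview excerpts. The code labels are made up for illustration.
from sklearn.metrics import cohen_kappa_score

coder_a = ["identity", "motivation", "identity", "learning", "motivation", "learning"]
coder_b = ["identity", "motivation", "learning", "learning", "motivation", "identity"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance-level agreement
```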

And to the man in the plaid jacket, there are plenty of fields within the natural sciences that are similar to qualitative research in that they are descriptive, like geology or archeology, or in that it may be impossible to have a control, like astronomy.

Let me know what your experience is defending qualitative research, and what your favorite resources are for conducting interviews!

 

Or at least across the globe, for now. One of the major goals of this project is building a platform that is mobile, both around the science center and beyond. So as I travel this holiday season, I’ll be testing some of these tools on the road, as we prepare for visiting scholars. We want the scholars to be able to come to work for about a month and set the system up as they like for capturing the interactions that provide the data they’re interested in. Then we want them to have the ability to log in to the system from their home institutions, continuing to collect and analyze data from home. The first step in testing that lies with those of us who are living in Corvallis and commuting to the center in Newport only a couple times a week.

To that end, we’re starting with a couple more PC laptops: one for the eye-tracker analysis software, and one devoted to the higher processing needs of the surveillance system. Analyzing video from afar is mostly a matter of getting the servers set up on our end, as the client software is free to install on an unlimited number of machines. But, as I described in earlier posts (here and here), we’ve been re-arranging cameras, installing more servers (we’re now up to one master and two slaves, with the master dedicated to serving the clients and each slave handling about half the cameras), and trying to test out the data-grabbing abilities from afar. Our partner in New Zealand had us extend how long recording continues after the motion sensors decide nothing is going on, to try to fix frame-drop problems during export. We’re also installing a honking lot more ethernet capacity in the next week or so, which will hopefully handle our bandwidth better. I’ll be testing the video export on the road myself this week.

Then there’s the eye-tracker. It’s a different case, as it has proprietary data analysis software with a per-user license. We have two licenses, so that I can analyze my thesis data separately from any data collection now taking place at the center, such as what I’m testing for an upcoming conference presentation on eye-tracking in museums. It’s not that the eye-tracker itself is heavy, but with the laptop and all the associated cords it gets cumbersome to haul back and forth all the time, and I’d rather not be responsible for moving that $30K piece of equipment any more than I have to (I don’t think it’s covered under my renter’s insurance for the nights it would be stored at my place in between campuses). So I’ve been working on setting up the software on the other new analysis laptop. Now I’m running into license issues, though otherwise the actual data transfer from one system to another seems ok (except my files are pretty big – 2GB of data – just enough that it’s been a manual, rather than web-based, transfer so far).

And with that, I’m off to start that “eye-tracking … across the universe” (with apologies to the writers of the original Star Trek parody).

When thinking about creating outreach for a public audience, who should the target audience be? What types of questions can you ask yourself to help determine this? Is it ok to knowingly exclude certain age groups when you are designing an outreach activity? What setting is best for my outreach activity? How many entry or exit points should my activity have? Should there be a take-away thing or just a take-away message? How long should the outreach activity run? How long will people stay once my activity is completed? What types of materials are ok to use with a public audience? For example, is there anything I should avoid, like peanuts? Am I allowed to touch the people doing the activity to help them put something on to complete it? What types of things need to be washed between each activity to avoid spreading germs? How much information should I “give away” about the topic being presented? What types of questions should I ask the participants about the activity or the information around it? How much knowledge can I assume the audience has about the topic? Where do I find this information out? What are some credible resources for creating research-based educational activities?

These are some of the questions I was asked today during a Pre-college Programs outreach meeting by another graduate student who works with me on OSU’s Bioenergy Program. Part of our output for this grant is to create and deliver outreach activities around Bioenergy. We plan on utilizing the connections among SMILE, Pre-college Programs, and Hatfield Marine Science Center, since outreach opportunities already exist within these structures. As we were meeting, it dawned on me that someone who has never been asked to create an outreach activity as part of their job may see this task as overwhelming. As we worked through the questions, activities, and specific audience needs of the scheduled upcoming outreach, it was both rewarding and refreshing to hear the ideas and thoughts of someone new to the field of outreach.

What are some questions you have when creating outreach? What are some suggestions for creating outreach for the general public versus middle school students versus high school students? Do you have any good resources you can share? What are your thoughts?

Last week, Dr. Rowe and I visited the Portland Art Museum to help with a recruitment push for participants in their Conversations About Art evaluation, and I noticed that the education staff involved all had very different styles of recruiting visitors to participate in the project. Styles ranged from the apologetic (e.g. “do you mind if I interrupt you to help us?”), to the incentive-focused (e.g. “get free tickets!”), to the experiential (e.g. “participating will be fun and informative!”).

This got me thinking a lot about the significance of people skills and a researcher’s recruitment style in educational studies this week. How does the style in which you get participants involved influence a) how many participants you actually recruit, and b) the quality of the participation (i.e. do they just go through the motions to get the freebie incentive)? Thinking back to prior studies by FCL alumni here at OSU, I realized that nearly all the researchers I knew had a different approach to recruitment, be it in person, on the phone, or via email, and that it is in fact a learned skill we don’t often talk much about.

I’ve been grateful for my success at recruiting both docents and visitors for my research on docent-visitor interactions, which is mostly the result of taking the “help a graduate student complete their research” approach – one that I borrowed from interacting with prior Marine Resource Management colleagues of mine, Abby Nickels and Alicia Christensen, during their master’s research on marine education activities. Such an approach won’t be much help in the future once I finally get out of grad school, so the question to consider is: what factors make for successful participant recruitment? It seems the common denominator is people skills, and by people skills I mean the ability to engage a potential recruit on a level that removes the skepticism around being commandeered off the street.  You have to be not only trustworthy, but also approachable. I’ve definitely noticed with my own work that on off days, when I’m tired and have trouble maintaining a smiley face for long periods at the HMSC entrance, recruitment seems harder. All those younger years spent in customer service jobs, learning how to deal with the public in general, seem so much more worthwhile now!

So fellow researchers and evaluators, my question for you is what are your strategies for recruiting participants? Do you agree people skills are an important underlying factor? Do you over/under estimate your own personal influence on participant recruitment?