This past week has confirmed for me that video coding is an arduous task! Right now I'm continuing to code my video data for my dissertation and working on the criteria for analysis that will allow me to reduce the data and finish answering my research questions. I'm basically looking at the different modes docents use to interact with visitors (speech, gesture, etc.) and suggesting patterns in how they interpret science to the public. I'm cross-referencing the themes that emerge from this video analysis with my interview data to come up with some overarching outcomes.

So far the themes seem fairly clear, which is a nice feeling. Plus, there seems to be a lot of overlap between the patterns in docent interpretation strategies and what the literature deems effective interpretation. What is interesting is that this group of docents has little to no formal interpretive training. So perhaps good communicative practice emerges on its own when you have constant contact with your audience. Food for thought for professional development activities with informal educators…

What's interesting about this process is how well I know my data, yet how tough it is to get it down on paper. I can talk until I am blue in the face about what my outcomes are, but writing them up into structured chapters feels like translating an ancient text. Ah, the rite of passage that is the final dissertation.

All this video coding has also gotten me thinking about our development of an automated video analysis process for the lab. What kind of parameters do we set to have it process the vast landscape of data our camera system can collect, and thereby help reduce the data from the word go? As a researcher, imagining a data set that is already partially reduced puts a smile on my face.
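
To make that concrete, here is a minimal sketch of one such pre-filter, assuming Python with OpenCV and entirely made-up parameter values: it keeps only the time spans where enough pixels change between frames, so footage of an empty exhibit never reaches a human coder.

```python
import cv2

# Entirely hypothetical reduction parameters -- they would need tuning
# against hand-coded footage from our actual camera system.
MOTION_THRESHOLD = 25       # per-pixel intensity change that counts as motion
ACTIVE_FRACTION = 0.02      # share of changed pixels that marks a frame "active"
SAMPLE_EVERY_N_FRAMES = 15  # subsample to keep processing cheap

def find_active_segments(video_path):
    """Return (start_frame, end_frame) spans where something is happening."""
    cap = cv2.VideoCapture(video_path)
    segments, span_start, prev = [], None, None
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % SAMPLE_EVERY_N_FRAMES == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if prev is not None:
                # Fraction of pixels that changed noticeably since the last sample.
                changed = (cv2.absdiff(gray, prev) > MOTION_THRESHOLD).mean()
                if changed >= ACTIVE_FRACTION and span_start is None:
                    span_start = frame_idx                    # activity begins
                elif changed < ACTIVE_FRACTION and span_start is not None:
                    segments.append((span_start, frame_idx))  # activity ends
                    span_start = None
            prev = gray
        frame_idx += 1
    if span_start is not None:
        segments.append((span_start, frame_idx))
    cap.release()
    return segments
```

The thresholds here are exactly the kind of parameters in question, and they would have to be validated against hand-coded video before we trusted the reduction.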

So back to coding. I see coded people…

This week at HMSC we have been working on clearing out Dr. Rowe’s old office to make way for a new research office. This office will become the main working area for the FCL research we conduct in the HMSC visitor center. It will also become the office of the new postdoc we will be hiring for the lab in the future.

With the help of the lovely Maureen and Susan, we cleared out old paperwork and moved furniture to create a more open space for collaborative work and equipment storage. We were very happy with the results!

Hopefully the space will simplify project management for the research taking place in the lab!

Harrison enjoys the extra space in the new research office!

Susan makes our mark in magnets in the new office

I have just about nailed down a defense date. That means I have about two months to wrap all this up (or warp it, as I originally typed) into a coherent, cohesive narrative worthy of a doctoral degree. It's amazing to me to think it might actually be done one of these days.

Of course, in research, there's always more you can analyze about your data, so in reality I have to make some choices about what goes in the dissertation and what has to remain for later analysis. For example, I "threw in" some plain world images as potential controls in the eye-tracking, just to see how people might look at a world map without any data on it. Not that there really is such a thing; technically any image has some sort of data on it, as it is always representing something, even this one:

Here, the continents are a darker grey than the ocean, so the image still represents the Earth's current land and ocean distinctions.

I also included two "blue marble" images, essentially images of Earth as seen from space, without clouds and all in daylight simultaneously: one with the typical "north-up" orientation, the other "south-up," as the world is often portrayed in Australia. However, I probably don't have time to analyze all of that right now, at least not if I also want to complete the dissertation on schedule. The best dissertation is a done dissertation, not one that is perfect or answers every single question! If it did, what would the rest of my career be for?
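
If that control analysis does get picked up later, a first pass might be as simple as asking what fraction of fixations land on the continents versus the ocean. Here is a minimal sketch, assuming Python with NumPy and Pillow, fixations exported as (x, y) pixel coordinates, and a made-up grey cutoff for building the land mask from the plain map above:

```python
import numpy as np
from PIL import Image

LAND_GREY_CUTOFF = 100  # assumed: pixels darker than this are land in the plain map

def land_fixation_fraction(map_path, fixations):
    """Fraction of (x, y) fixation points that fall on the continents."""
    grey = np.asarray(Image.open(map_path).convert("L"))  # 2-D array of grey levels
    land_mask = grey < LAND_GREY_CUTOFF                   # darker grey = land
    height, width = land_mask.shape
    hits = [land_mask[int(y), int(x)] for (x, y) in fixations
            if 0 <= x < width and 0 <= y < height]        # drop off-image samples
    return float(sum(hits)) / len(hits) if hits else 0.0

# Hypothetical usage with the plain control image:
# land_fixation_fraction("plain_world.png", fixations)
```

Comparing that fraction between the plain control and the data-bearing maps, or between the north-up and south-up blue marbles, would be one quick way to see whether the data, and not just the geography, is drawing the eye.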

So a big part of the research process is making tradeoffs about how much data to collect: enough to anticipate problems you might run into and want to examine, but not so much that you lose sight of your original, specific research questions and get mired in analysis forever. Thinking about what does and doesn't fit in the particular framework I've laid out for analysis is part of this, too. That means making smart choices about how to sufficiently answer your questions and address major potential problems with the data you have, while letting go and allowing some questions to remain unanswered, at least for the moment. That's a major task in front of me right now, with both my interview data and my eye-tracking data. At least I've finished collecting data for the dissertation. I think.

Let the countdown to defense begin…

If you're a fan of "Project Runway," you're no doubt familiar with Tim Gunn's signature phrase. He employs it at about the point in each week's process where the designers have chosen their fabrics and made at least a first effort at turning their design into reality. It's at this point that the designers have to forge ahead or take their last chance to start over and re-conceptualize.

This week, it feels like that's where we are with the FCL Lab. We're about a year and a half into our five years of funding, and about a year behind on technology development. That means we've got the ideas and the materials, but we haven't gotten as far as we'd like in actually putting it all together.

For us, it's a bigger problem, too: the development (in this case, the video booth as well as the exhibit itself) is holding up the research. As Shawn put it to me, we're spending too much time and effort trying to design the perfect task instead of "making it work" with what we have. So we're going to re-conceptualize and do the research we can with what we have in place, while still going forward with the technology development, of course.

So, for the video booth, that means we're not going to wait until we can analyze what people reflect on during the experience; we'll use what we have, namely a bunch of materials, and analyze the interactions that *are* taking place. Nor are we going to wait until the tsunami task is perfect at encouraging what we want to see in the video booth. Instead, we're going to invite several folks with different research lenses to look at the video we get at the tank itself and tell us what types of learning they're seeing. From there, we can refine what data we want to collect.

It’s an important lesson in grant proposal writing, too: Once you’ve been approved, you don’t have to stick word-for-word to your plan. It can be modified, in ways big and small. In fact, it’s probably better that way.

A while ago, I promised to share some of my experiences collecting data on visitors' exhibit use as part of this blog. Now that I've been back at it for the past few weeks, I thought it might be time to actually share what I've found. As it is winter here in the northern hemisphere, our weekend visitation to the Hatfield Visitor Center is generally pretty low. This means I have to time my data collection carefully if I don't want to spend an entire day waiting for subjects and maybe only collect data on two people. That's what happened on a Sunday last month: the weather on the coast was lovely, and visitation was minimal. I have recently been collecting data in our Rhythms of the Coastal Waters exhibit, which poses additional data collection challenges: it is basically the last thing people might see before they leave the center, it's dim because it houses the projector-based Magic Planet, and there are no animals, unlike just about every other corner of the Visitor Center. So, I knocked off early and went to the beach. Then I rescheduled another planned data collection day because it was a sunny weekend day at the coast.

On the other hand, on a recent Saturday we hosted our annual Fossil Fest. While visitation was down from previous years, about 650 visitors compared to 900, this was plenty for me, and I was able to collect data on 13 people between 11:30 and 3:30, despite an octopus feeding and a lecture by our special guest fossil expert. Considering that data collection, including recruitment, consent, the experiment, and debrief, runs about 15 minutes per person, that's roughly 195 minutes of a four-hour window, so I counted this as a big win. In addition, I got only one refusal, from a group that said they were on their way out and didn't have time. It's amazing how much better things go if you a) lead with "I'm a student doing research," b) mention "it will only take about 5-10 minutes," and c) don't record any video of them. I suspect it also helps that it's not summer, as this crowd is more local and thus perhaps more invested in improving the center, whereas summer tourists might be visiting more for the experience, to say they've been there, as John Falk's museum visitor "identity" or motivation research would suggest. That seems like a motivation that would not make you eager to participate. Hm, sounds like a good research project to me!

Another reason I suspect things went well is that I am generally approaching only all-adult groups, and I need just one participant from each group, so someone can watch the kids if they get bored. I did have one grandma get interrupted a couple of times by her grandkids, but she was a trooper and shooed them away while she finished. When I was recording video and doing interviews about the Magic Planet, the younger kids in a group often got bored, which made recruiting families and getting good data somewhat difficult, though no one quit early once they agreed to participate. Also, unlike when we were prototyping our salmon forecasting exhibit, I wasn't asking people to sit down at a computer and take a survey, which seemed to feel like a test to some people. Or it could have been the exciting new technology I was using, the eye-tracker, that appealed to some.

Interestingly, I also had a lot of folks observe their partners as the experiment happened, rather than wander off and meet up later. The latter happened more with the salmon exhibit prototyping, perhaps because there was not much to see while one person was using the exhibit; with the eye-tracking and the Magic Planet, it was still possible to view the images on the globe because it is such a large exhibit. Will we ever solve the mystery of what makes the perfect day for data collection? Probably not, but it does present a good opportunity to reflect on what did and didn't work to get the best sample of your visitorship. The cameras we're installing are, of course, intended to shed some light on how representative these samples are.

What other influences have you seen that affect whether you have a successful or slow day collecting exhibit use data?

Last October, Lincoln County School District received news that it had been awarded an Innovative Approaches to Literacy grant to fund Project SEAL (Students Engaging in Authentic Literacy). Dr. Rowe and I, representing Oregon Sea Grant, are the evaluators for this project. What I enjoy most about working on the evaluation is that it keeps pushing my understanding of learning, focusing not only on museums but also on the classroom, and continually thinking about new ways to bridge the gap between the two.

Project SEAL has many components, including new ocean-related books for school libraries, a classroom set of handheld devices such as iPads for each library, and family literacy nights. I am sure these will come up in future blog posts, but today I want to focus on the teacher professional development part of Project SEAL. On February 8th and 9th, Project SEAL hosted a Model Classroom (modelclassroom.org) training for around 60 teachers, principals, and media assistants. The Model Classroom has "teachers participate in a set of missions that take them out into the community… [where they will] develop and document project ideas to take back to the classroom."

We started the training at the Oregon Coast Aquarium, where the first mission was for teachers to go around the aquarium looking at exhibits and talking to people (anyone they could find, including visitors, educators, and volunteers) about a global issue that has a local impact. One group of teachers contacted local grocery stores and talked to the aquarium gift store about plastic bags, while another group asked visitors questions like "What would you do if you found tsunami debris on the beach?" Yet another group ended up on a research vessel docked nearby. The second mission was to use their mobile devices to create a hook to draw their students into the topic, with an end goal of thinking of ways their students could use these devices to communicate ideas and projects from the field. One group of teachers used iMovie to create a trailer about picking up and properly reporting tsunami debris.

The second day of training was spent in the library of a local school. The day started with an in-depth conversation about what literacy was (when the teachers were in school) versus what literacy is now (in the 21st century). The Model Classroom leaders, project staff, and I agreed this was a conversation we'd have to come back to continually because it is so BIG. For most of the rest of the day, teachers divided into groups and explored the school, looking at different spaces and the learning opportunities they can offer. They took pictures, wrote descriptions, and some groups came up with ideas for improvement.

Project SEAL is in its infancy, but it's such a wonderful project with so many key components. Keep your eyes out for future posts on the ongoing evaluation and the tools developed. In the meantime, learn more about Project SEAL and read the teachers' blog posts at https://sites.google.com/site/oregontestsite/home.