Well, the data collection for my research has been underway for nearly two months now; how time flies! For those of you new to this project, my research centers on documenting the practice of science center docents as they interact with visitors. Data collection includes video observations of volunteer docents at HMSC using “visitor-mounted” Looxcie cameras, as well as pre- and post-observation interviews with the participating docents.

“Visitor-eye view using the Looxcies”

My current focus is collecting the video observations of each of the 10 participating docents. In order to conduct a post-observation interview (which asks docents to reflect on their practice), I need about 10-15 minutes of video of each docent interacting with the public. This doesn’t sound like much, but when you can’t guarantee a recruited family will interact with a recruited docent, and an actual interaction will likely last only 30 seconds to a few minutes, it takes a fair few families wearing cameras to get what you need. However, I’m finding this process really enjoyable, both for getting to know the docents and for meeting visitors.

When I first started this project, I was worried that visitors would be put off by the idea of having their whole visit recorded. What I’m actually finding is that either a) they want to help the poor grad student complete her thesis, b) they think the cameras are fun and “want a go,” or c) they really want one of the HMSC tote bags being used as an incentive (what can I say, everyone loves free stuff, right?!). The enthusiasm for the cameras has gone as far as one gentleman running up to a docent, jumping up and down and shouting “I’m wearing a camera, I’m wearing a camera!” Additionally, for the Star Trek fans out there, a number of visitors and colleagues alike have remarked how much wearing a Looxcie makes a person look like a Borg (i.e., cyborg), particularly with that red light thing…

Now, you may ask, how does that not influence those lovely naturalistic interactions I’m supposed to be observing? Well, as many of us qualitative researchers know, unless you hide the fact that you are observing a person (an element our IRB process is not particularly fond of), you can never truly remove that influence; you can, however, assume that if particular practices are observed often enough, they are part of the landscape you are observing. The cameras may make an interaction less naturalistic, but that interaction is still a reflection of the social behaviors taking place. People do not completely change their personality and way of life simply because a camera is around; more likely, any behavior changes are exaggerated or muted versions of normative actions. And I am finding patterns, lots of patterns, in the discourse and action taking place between docents and visitors.

However, I am paying attention to how visitors and docents react to the cameras. When filtering the footage for interactions, I look out for any discourse that indicates camera influence is an issue. For example, the docent in the “jumping man” footage reacts with surprise to the man’s sudden shouting, opens his eyes wide, and laughs nervously; I noted on the video that the interaction from then on may be irregular. In one clip, a docent talks non-stop about waves, seemingly without taking a breath, for nearly 8 minutes; I noted that this seemed unnatural in comparison to their other, shorter dialogue events. In another clip, a docent bursts out laughing at a visitor wearing one of the Looxcies attached to his baseball cap with a special clip I have (not something I expected!); I noted that this would likely have made it harder for the visitor to forget about the Looxcie.

All in all, however, most visitors remark that they actually forget they are wearing the camera as their visit goes on, simply because they are distracted by the visit itself. This makes me happy, as the purpose of incorporating the Looxcies was to reduce the influence of being videoed as a whole. Visitors forget to the point where, during pilots, one man actually walked into the bathroom wearing his Looxcie and recorded some footage I wasn’t exactly intending to observe… suffice to say, I instantly deleted that video and updated my recruitment spiel to include a reminder not to take the cameras into the bathroom. Social science never ceases to surprise me!

A nice article on some of our current efforts came out today in Oregon Sea Grant’s publication, Confluence. You can read the story online at http://seagrant.oregonstate.edu/confluence/1-3/free-choice-learning.

One of the hardest things to describe to Nathan Gilles, who wrote the article (and to the folks who reviewed the draft), is the idea that in order for the lab to be useful to the widest variety of learning sciences researchers, the cyber-technologies on which the museum lab is based have to be useful to researchers coming from a wide range of theoretical traditions. In the original interview, I used the term “theory agnostic” in trying to talk about the data collection tools and the behind-the-scenes database. The idea is that the tools stand alone, independent of any given learning theory or framework.

Of course, for anyone who has spent time thinking about it, this is a highly problematic idea. Across the social sciences we recognize that our decisions about what data to collect, how to represent it, and even how we go about collecting it are intimately interwoven with our theoretical claims and commitments. In the same way that our language and symbol systems shape our thinking by streamlining our perceptions of the world (see John Lucy’s work at the University of Chicago for the most cogent explanations of these relationships), our theories about learning, about development, about human interaction and identity shape our research questions, our tools for data collection and the kinds of things we even count as data.

Recognizing this, we struggled early on to develop a way to automate data collection that would serve the needs of multiple researchers coming from multiple frameworks and with interests that might or might not align with our own. For example, we needed to develop a data collection and storage framework that would allow a researcher like John Falk to explore visitor motivation and identity as features of individuals while at the same time allowing a researcher like Sigrid Norris to document visitor motivation and identity as emergent properties of mediated discourse: two very different notions of identity and of best ways to collect data about it being served by one lab and database.

The framework we settled on for conceiving of what kind of data we need to collect for all these researchers from different backgrounds is focused on human action (spoken and non-spoken) and is shaped by a mediated action approach. Mediated action as an approach basically foregrounds agents acting in the world through the mediation of cognitive and communicative tools, and it recognizes that such mediated action always occurs in concrete contexts. While it is true that mediated action approaches are most often associated with sociocultural theories of learning, and Cultural Historical Activity Theory in particular, a mediated action approach itself does not make strong theoretical claims about learning. A mediated action framework means we are constantly striving to collect data on individual agents using physical, communicative, and cognitive tools in concrete contexts, often with other agents. In storing and parsing data, we strive to maintain the unity of agent, tools, and context. To what extent this strategy turns out to be theory agnostic or learning-theory neutral remains to be seen.
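To make the storage idea concrete, here is a minimal sketch of what an agent-tools-context record might look like. The type names and fields are hypothetical illustrations of the principle, not our actual database schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical record types: a sketch of the idea, not our actual schema.
# The point is that an observed action is never stored as a bare behavior;
# it always carries its agent, the mediating tools, and the concrete context.

@dataclass
class Agent:
    agent_id: str                  # anonymized visitor or docent ID
    role: str                      # e.g. "visitor" or "docent"

@dataclass
class Context:
    exhibit: str                   # e.g. "wave tank"
    timestamp: datetime
    co_present: list[str] = field(default_factory=list)  # other agents nearby

@dataclass
class MediatedAction:
    agent: Agent
    action: str                    # spoken or non-spoken, e.g. "points at tank"
    tools: list[str]               # physical, communicative, or cognitive tools
    context: Context
```

Because the unit of storage is the agent-tools-context triad rather than any one theory’s analytic category, a researcher treating identity as an attribute of individuals and one treating it as emergent in mediated discourse could, in principle, query the same records.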

Do visitors use STEM reasoning when describing their work in a build-and-test exhibit? This is one of the first research questions we’re investigating as part of the Cyberlab grant, besides whether or not we can make this technology integration work at all. As with many other parts of this grant, we’re designing the exhibit around the ability to ask and answer this question, so Laura and I are working on designing a video reflection booth where visitors can tell us about what happened to the structures they build and knock down in the tsunami tank. Using footage from the overhead camera, visitors will be able to review what happened and, hopefully, tell us why they created what they did, whether they expected it to survive or fail, and how the actual result did or didn’t match what they hoped for.

We drew from a couple of existing “review the video and share your thoughts” examples. The Utah Museum of Natural History has an earthquake shake table where you build and test a structure and can then review footage of it going through the simulated quake. The California Science Center’s traveling exhibit Goosebumps: The Science of Fear also lets visitors view video of expressions of fear from themselves and other visitors, filmed while they are “falling.” However, we want to take these a step further: add the visitor reflection piece, and then allow visitors to choose to share their reflections with other visitors as well.

As often happens, we find ourselves with a lot of creative ways to implement this, and ideas for layer upon layer of interactivity that may ultimately complicate things, so we have to rein our ideas in a bit and start with a (relatively) simple interaction to see if the opportunity to reflect is fundamentally appealing to visitors. Especially when one of our options runs around $12K; no need to go spending money without some basic questions answered. Will visitors be too shy to record anything, too unclear about the instructions to record anything meaningful, or just interested in mooning/flipping off/making silly faces at the camera? Will they be too protective of their thoughts to share them with researchers? Will they stay at the build-and-test part forever, uninterested in even viewing the replay of what happened to their structures? Avoiding getting ahead of ourselves and designing something fancy before we’ve answered these basic questions is what makes prototyping so valuable. So our original design will need testing, probably with a simple camera setup and some mockups of how the program will work, so visitors can give us feedback before we go any further with the guts of the software design. And then, eventually, we might have an exhibit that allows us to investigate our ultimate research question.
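For what it’s worth, the basic interaction we’re prototyping is simple enough to mock up in a few lines. Here is a rough console sketch of the session flow (review, reflect, optionally share); every function name and prompt is a hypothetical placeholder for prototyping, not the actual exhibit software:

```python
# A console mock of the reflection-booth flow; names and prompts here are
# hypothetical placeholders, not the actual exhibit software.

PROMPTS = [
    "What did you build, and why?",
    "Did you expect it to survive the wave?",
    "How did the result compare with what you hoped for?",
]

def reflection_booth_session(clip_id):
    """One visitor pass through the booth: review, reflect, optionally share."""
    print(f"[replaying overhead clip {clip_id}]")        # review step
    if input("Record a reflection? (y/n) ").lower() != "y":
        return None                                      # opting out must be easy
    answers = [input(prompt + " ") for prompt in PROMPTS]
    shared = input("Share with other visitors? (y/n) ").lower() == "y"
    return {"clip": clip_id, "answers": answers, "shared": shared}

if __name__ == "__main__":
    print(reflection_booth_session("tank-042"))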

What did Vygotsky mean when he referred to mind? During our weekly theory meeting, we graduate students spent time reflecting on this today. We are currently reading Wertsch’s Vygotsky and the Social Formation of Mind and came across two sections that took up the bulk of our conversations.

First we reflected on the choice of the word “mind” in the title. What does this mean? Is it a connection to the English translation of Vygotsky’s work Mind in Society, trying to link the two books? Is it a reference to the higher mental functions of brain and thought processes together? Is it the conscious part of the individual? Is it the part of the developmental process Vygotsky described as social rather than biological? We had much conversation on these questions. What are your thoughts on them? Our group tabled the topic for a future week.

The second major discussion point was about mediation. Can one mediate a tool? Or is a tool there to help with mediation? What is the difference between these questions? Is there a difference? After various examples, there seemed to be some agreement on the idea of three stages of tool use, which progress as the individual develops higher mental functions, or thought processes. First comes use of the tools to gain a general understanding of the task. Next comes use of the tools to further that knowledge base. And finally comes mediated use of the tools, together with new tools, for the formation of new knowledge. Laura gave a great example of these steps with learning about tides. What are your thoughts?

I think what finally turned the tide for me in recruitment was emails to specific colleges at OSU. I guess I was confused because I thought I wasn’t allowed to email students, but really it seems I wasn’t allowed to use the “All-Student-Email” list. Sending emails to particular department administrators to forward to their own lists apparently is perfectly kosher, if not exactly completely unbiased recruitment. It did generate a flurry of responses, 50 or so in a few days, with maybe 20% of those from guys (going by names only). Email to fraternities, however, seemed to be a dud (I’m not even sure any of them got forwarded), unless it just took a few days for the guys to sign up and I am confusing them with the ones I thought came from the department emails.

The best scheduling method so far has been calling the folks who provided a telephone number; I got one on the phone who recalled seeing the Doodle poll I sent with available interview times, but he also said he wasn’t sure what it was about. So, despite the end of the sign-up survey noting that a Doodle poll would be sent, that information, again, seemed to get overlooked.

Another rather wasted effort at recruiting was sitting with a sign in the Dutch Bros. coffee shop, even though I was offering gift cards to their establishment for participation. One guy, an engineer, inquired why I wasn’t signing up engineers, but otherwise, no bites. Ditto for hanging out in the dining hall; one guy eyed the sign but said he wasn’t a Dutch Bros. guy. Cash, it seems, is king, as long as you can convince your funding source you are not laundering money (hint: get receipts).

Now the question is whether all of them will show up. So far, I’ve had one no-show after the phone calls for scheduling. The rest of the week I have about 6 more interviews, which will get me pretty close to finished if all of them show up. I’m sending email reminders the day before, so I’m crossing my fingers.


Despite our fancy technology, there are some pieces of data we have to gather the old-fashioned way: by asking visitors. One thing we’d like to know is why visitors chose to visit on this particular occasion. We’re building off of John Falk’s museum visitor motivation and identity work, which began with a survey asking visitors to rate a series of statements on Likert (1-5) scales according to how applicable each is to them that day; it reveals a rather small set of motives driving the majority of visits. We have also used this framework in a study of three of our local informal science education venues, finding that an abbreviated version works equally well to determine which (if any) of these motivations drives visitors. The latest version, tried at the Indianapolis Museum of Art, uses photos along with the abbreviated set of statements for visitors to identify their visit motivations.
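As a rough illustration of how such an instrument reduces ratings to a single visit motivation, here is a small sketch. Falk’s identity categories are commonly glossed as Explorer, Facilitator, Experience Seeker, Professional/Hobbyist, and Recharger; the statement wordings and the scoring rule below are invented for illustration, not the actual instrument:

```python
# Sketch of reducing Likert ratings to a dominant visit motivation. Falk's
# category names are real; the statement wordings and scoring rule are
# invented placeholders for illustration.

STATEMENTS = {
    "I came to satisfy my own curiosity.": "Explorer",
    "I came so the people I brought could learn.": "Facilitator",
    "I came to see this well-known attraction.": "Experience Seeker",
    "I came because it relates to my work or hobby.": "Professional/Hobbyist",
    "I came to relax and recharge.": "Recharger",
}

def dominant_motivation(ratings):
    """Return the motivation with the highest mean rating, or None on a tie.

    `ratings` maps each statement to a 1-5 Likert response.
    """
    totals = {}
    for statement, score in ratings.items():
        totals.setdefault(STATEMENTS[statement], []).append(score)
    means = {motive: sum(s) / len(s) for motive, s in totals.items()}
    best = max(means.values())
    top = [m for m, v in means.items() if v == best]
    return top[0] if len(top) == 1 else None  # ambiguous profiles get no label

# Example: a visitor who most strongly endorses the facilitator statement.
print(dominant_motivation({s: 3 for s in STATEMENTS} |
                          {"I came so the people I brought could learn.": 5}))
```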

We’re implementing a version on an iPad kiosk in the VC for a couple of reasons: first, we genuinely want to know why folks are visiting, and we want to be able to correlate identity motivations with the automated behavior, timing, and tracking data we collect from the cameras. Second, we hope people will stop long enough for us to get a good reference photo for the facial recognition system. Sneaky, perhaps, but it’s not the only place we’re trying to position cameras for good reference shots. And if all goes well with our signage, visitors will be more aware than ever that we’re doing research, and that it is ultimately aimed at improving their experience. Hopefully that awareness will allay whatever fears remain about the embedded research tools; we’re hoping those fears will be minimal to start with.
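In case it helps to picture that first goal, here is a toy sketch of the correlation step: joining kiosk survey records to camera tracking records by an anonymous visitor ID. All field names and values are hypothetical, not our actual schema:

```python
# Toy sketch of the correlation step: joining kiosk survey records to camera
# tracking records by anonymous visitor ID. Field names are hypothetical.

surveys = [  # from the iPad kiosk
    {"visitor_id": "v001", "motivation": "Facilitator"},
    {"visitor_id": "v002", "motivation": "Explorer"},
]
tracking = [  # from the automated camera system
    {"visitor_id": "v001", "exhibit": "wave tank", "dwell_seconds": 312},
    {"visitor_id": "v002", "exhibit": "touch pool", "dwell_seconds": 95},
]

motivation_by_id = {s["visitor_id"]: s["motivation"] for s in surveys}
joined = [
    {**record, "motivation": motivation_by_id[record["visitor_id"]]}
    for record in tracking
    if record["visitor_id"] in motivation_by_id
]
# `joined` now pairs each dwell-time record with a stated motivation, ready
# for, say, comparing dwell times across motivation groups.
```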