It’s time to buy more cameras, so Mark and I went to our observation booth and wrestled with what to buy. We had two variables, dome (zoomable) vs. brick (non-zoomable) and low-res (640×480) vs. high-res (widescreen), for four possible combinations. There were four issues: 1) some places have no power access, so those angles required high-resolution brick cameras (oddly, it’s the high-res cameras that don’t require plug-in power!); 2) some “interaction” views (i.e., close-up exhibit observations) looked fine at low-res, but others looked bad; 3) lighting varies from area to area and sometimes within a single camera view (this dynamic lighting is handled better by high-res); and 4) the current position and/or view of some cameras wasn’t as great as we’d first thought. This, we thought, was a pretty sticky and annoying problem that we needed to solve before our next purchase.

Mark was planning to buy 12 cameras and wanted to know what mix of brick/dome and high/low-res we needed, keeping in mind that the high-res cameras cost about $200 more each. We kept looking at many of the 25 current views, and each seemed to have a different issue or, really, a different combination of the four. So we went back and forth on a bunch of the current cameras, trying to decide which ones were fine, which needed high-res, and which we could get away with at low-res. After about 10 minutes and no real concrete progress, I wanted a list of the cameras we weren’t satisfied with and what we wanted to replace each with, including ones that were high-res where they didn’t need to be (meaning we could repurpose a high-res camera elsewhere). Suddenly, it dawned on me that this was a) not going to be our final purchase, and b) still likely just a guess until everything was reinstalled, newly installed, and lived with. So I asked why we didn’t just get 12 high-res cameras: if we didn’t like them in the spots where we replaced cameras, and were still unsatisfied with whatever we repurposed, we could move them again, even to the remaining exhibit areas we haven’t begun to cover yet. Then we could purchase the cheaper low-res cameras later, saving money at the end of the grant while having plenty of high-res for wherever we need it. I realized we were sitting around arguing over a couple thousand dollars that we would probably end up spending anyway on high-res cameras later, so we didn’t have to worry about it right this minute. It ended up being a pretty easy decision.
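For what it’s worth, the arithmetic we were agonizing over was tiny. Here’s a quick back-of-the-envelope sketch, using the $200-per-camera premium mentioned above:

```python
# Back-of-the-envelope cost of buying all 12 cameras as high-res
# instead of a mixed batch (premium figure from the discussion above).
HIGH_RES_PREMIUM = 200  # dollars extra per high-res camera
CAMERAS = 12

extra_cost = HIGH_RES_PREMIUM * CAMERAS
print(f"Extra cost if all 12 are high-res: ${extra_cost:,}")
# -> Extra cost if all 12 are high-res: $2,400
#    i.e., the "couple thousand dollars" we'd likely spend later anyway.
```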

Thursday, I had scheduled three faculty interviews, including two back-to-back. I do not recommend this approach, not least because the first of the back-to-back pair ran longer than most, and I barely had time to write analytical notes to collect my thoughts before I had to make sure everything was ready for the next subject. I also didn’t take time to stretch my legs and body after concentrating so intensely on what to ask next. If I were really pressed, I probably could have begged a moment for the bathroom once the subject arrived and was reading over the consent form, but all in all, it probably took more out of me than was necessary or maybe even wise.

That said, I did make it through, probably because I’d spent so much time preparing before the first one that I didn’t goof up. I have a checklist of things to do before, during, and after the interview, with all of my questions, and even example probe questions for follow-up, printed out as well. Instructions for a task I give subjects are explained on a third sheet, and background questions go on a final sheet. I didn’t realize how much I needed most of these things until during or after my pilot interviews, which is another endorsement for trying out your interview strategy beforehand.

Some of my recommendations:

  • Make sure the blinds in the room are working. I discovered one really old set that was stuck halfway open, so I called maintenance. The blinds haven’t been fixed yet, but at least maintenance closed them so I don’t get glare on my screen for the images I’m showing.
  • Close the window and door, and post a sign saying that you’re running an experiment and what time you’ll be done. Of course, this doesn’t eliminate all the noise when a large biology class across the hall has its doors open, but it helps a lot.
  • Turn off your cell phone, and remind subjects to do the same. Even though it was on my list, I still forgot once yesterday and got distracted.
  • Check the temperature of the room. Even if you don’t get sun glare, or need to worry about it, the blinds can help keep the room cool under intense afternoon sun. Of course, you then have to balance this against the room getting stuffy from being so closed up, which is again a reason it would be nice to have some time between interviews to air things out if necessary.

Yesterday Shawn and I met with Jess Park at the Portland Art Museum (PAM) about an exciting new evaluation project utilizing our looxcie cameras. We had a great conversation about how to capture visitor talk and interactions in relation to PAM’s Museum Stories and Conversations About Art video-based programs. The project will be one of the first official evaluation partnerships we have developed under the flag of the FCL lab!

PAM has developed these video-based experiences to deepen visitors’ engagement with objects, with each other, and with the museum. Museum Stories features short video presentations of museum staff talking about specific objects in the collection that hold some personal meaning for them. All of the videos are available on touch-screen computers in one gallery of the museum, which also houses the areas where the stories are recorded as well as some of the featured objects from the collection; the videos are available online as well. Conversations About Art is a series of short videos featuring conversations among experts about particular objects in the museum’s collection. These are available on hand-held devices provided by the museum, as downloads to visitors’ own hand-held devices, and on the museum website. PAM is now looking to expand the program and wants to document some of the predicted and unexpected impacts and outcomes of these projects for visitors. The evaluation will recruit visitors to wear the looxcie cameras during their visits to the pertinent exhibits, including the stories gallery. We will likely also interview some of the experts/artists involved in creating the videos.

We spent time going over the looxcie technologies and how best to recruit visitors in the Art Museum space. We also created some test clips to help the PAM folks working on the evaluation better understand the potential of the video data collection process. I will post a follow-up next week with some more details about how we’re using the looxcies.

Shawn and I came back from PAM feeling like the A-Team: we love it when an evaluation plan comes together.

Just a few short updates:

  • We now have a full 25 cameras in the front half of the Visitor’s Center, which gives us pretty great coverage of these areas. Both wide establishing shots and close-up interaction angles cover the touch tanks, the wave tanks (still not quite fully open to the public), and a few freshwater tanks that are more traditional exhibits where visitors simply observe the animals.
  • Laura got a spiffy new EcoSmart pen that syncs audio with written notes taken on special paper (which you can now print on your own printer). She showed us how it translates into several languages, lets you play a piano after you’ve drawn the right pattern on the paper, and displays what you’ve written on its digital screen, performing some pretty slick handwriting recognition in the process.
  • Katie ran the lab’s first two eyetracking subjects yesterday, one expert and one novice pilot (not quite from the exact study population, but close). Not only did the system work (whew!), but we’ve even got some interesting qualitative patterns that differ between the two. This is very promising, though of course we’ll have to dig into the quantitative statistics and determine what, if any, differences in dwell times are significant; a quick sketch of that kind of comparison follows this list.
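For the curious, here’s a minimal sketch of the kind of dwell-time comparison we’ll eventually run, using made-up numbers since the real data aren’t crunched yet; the hypothetical dwell-time lists and the choice of Welch’s t-test are just placeholders for the actual analysis:

```python
# A minimal sketch of the dwell-time comparison, using hypothetical numbers.
# The real analysis will use actual per-AOI (area of interest) dwell times.
from scipy import stats

expert_dwell = [2.4, 3.1, 5.0, 1.2, 4.4]  # seconds per AOI (made up)
novice_dwell = [1.1, 1.8, 2.2, 0.9, 1.5]  # seconds per AOI (made up)

# Welch's t-test: doesn't assume the two groups have equal variances.
t_stat, p_value = stats.ttest_ind(expert_dwell, novice_dwell, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```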

Sometimes, it’s a lot of small steps, but altogether they make forward progress!

The pace of research often strikes me as wonky. This, I suppose, is true of a lot of fields: some days you make a lot of progress, and some days very little. A series of very small steps eventually (you hope) leads to a conclusion worthy of sharing with your peers and advancing the field. That means a lot of days of working among the trees without being able to see the forest.

Conferences, with their submission deadlines, have a funny way of driving research. I applied to the International Conference on Science Communication back in March and outlined all the data I figured I’d have for my thesis by the time the conference rolls around in the first week of September. Amazingly, I’m on track to have a fairly good amount of data, despite the subject-recruitment and IRB-approval delays I’ve talked about before.

However, now I have another twist in the process. Usually you can work on a conference presentation almost up until the very moment you give it, especially if you get to host the slides on your own laptop. This conference, though, requires me to submit my final presentation almost seven weeks before the actual presentation date. I can only assume this is because the conference, to be held in Nancy, France, will run concurrently in both French and English, so the organizers need that time to translate my slides into French (of which I speak not a word).

In any case, this throws a major wrench into my planned schedule! I am doing fine with the pace, and have about half of my needed faculty interviews arranged (with 25% actually completed!). But this week’s deadline leaves me with a strange dilemma: how to present something interesting, especially the visual data from the eyetracking experiments, when, as far as I can tell, I won’t be able to show actual results at the conference. I figure I will have some results from my actual subjects by the time of the conference, but I won’t know which subjects I want to feature until all of the interviews are completed. So my solution is to run a couple of pilot subjects on just the eyetracking portion, without the interview. I’ve recruited one of the folks who works closely with us to serve as the “expert” user, and a member of the science and math teacher licensure master’s program to serve as the “novice.” I’m really excited by what the interviews have revealed so far and am hopeful that the eyetracking pilots will go as well. I’m crossing my fingers that this will be interesting to the conference attendees, too, along with whatever verbal updates I can provide to accompany my slides in September.

If you’ve been following our blog, you know the lab has wondered, worried, and crossed its fingers about the ability of facial recognition not only to track faces, but eventually to give us clues to visitors’ emotions and attitudes. Recognition and tracking of individuals looks promising with the new system, which gets up to about 90% accuracy, with good profiling of race and age (incidentally, the cost, including the time invested in the old system we abandoned, is about the same as the new system’s). However, we have no idea yet whether we’ll get any automated data on emotions, despite how similarly these emotions are expressed across human faces.

But I ran across a very cool technology that may help us in our quest: glasses that detect changes in the oxygen levels of blood under the skin and can thereby sense emotional states. The glasses amplify what primates have been doing for ages, namely reading embarrassment in flushed, redder skin, or fear in skin tinted greener than normal. Research by Mark Changizi at my alma mater, Caltech, on how color vision evolved to allow exactly this sort of emotion sensing led to the glasses. Currently they’re being tested for medical applications, helping doctors detect anemia, anger, and fear, but if the glasses are adapted for “real-world” use, such as decoding a poker player’s blank stare, it seems to me the filters could be added to our camera setups or software systems to help automate this sort of emotion detection.
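To make the idea concrete, here’s a speculative sketch of what such a software filter might look like bolted onto our camera pipeline. Everything here (the clip name, the stand-in face region, the threshold) is an assumption for illustration; the real glasses work optically, and any production version would need actual face detection and calibration:

```python
# Speculative sketch: flag frames where skin in a (stand-in) face region
# looks unusually flushed, a crude software analogue of the color-filter idea.
import cv2
import numpy as np

cap = cv2.VideoCapture("visitor_clip.mp4")  # hypothetical clip from our cameras
baseline = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Pretend a face detector already localized the face; use the frame
    # center as a stand-in region for this sketch.
    h, w = frame.shape[:2]
    face = frame[h // 3 : 2 * h // 3, w // 3 : 2 * w // 3].astype(np.float32)
    b, g, r = cv2.split(face)
    # Crude "flush index": how much the red channel dominates the others.
    flush = float(np.mean(r - (g + b) / 2))
    if baseline is None:
        baseline = flush  # treat the first frame as the neutral baseline
    elif flush > baseline + 10:  # arbitrary threshold for the sketch
        print("possible flush (embarrassment?) in this frame")

cap.release()
```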

Really, it would be one more weapon in the arsenal for the data battle we’re fighting. Just as Earth and ocean scientists have made leaps in understanding by using satellites to sample virtually the whole Earth every day, instead of taking ship- or buoy-based measurements far apart in space and time, so we hope to make leaps and bounds in understanding how visitors learn. If we can get our technology to automate data collection and vastly improve the spatial and temporal resolution of our data, hopefully we’ll move into our own satellite era.

Thanks to GOOD magazine and PSFK for the tips.