We spent this morning doing renovations on the NOAA tank. We deep cleaned, rearranged rocks and inserted a crab pot to prepare for the introduction of some tagged Dungeness crabs. NOAA used to be a deep-water display tank with sablefish and other offshore benthic and epibenthic species, but it has lost some of its thematic cohesion recently. Live animal exhibits bring unique interpretive complications.

All in-tank elements must meet the needs and observable preferences of the animals. This is an area where we cannot compromise, so preparations can take more time and effort than one might expect. For example, our display crab pot had to be sealed to prevent corrosion of the chicken wire. This would not be an issue in the open ocean, but we have to consider the potential effects of the metal on the invertebrates in our system.

Likewise, animals that may share an ecosystem in the ocean might seem like natural tankmates, but often they are not. One species may prey on the other, or the size and design of the tank may bring the animals into conflict. For example, we have a kelp greenling in our Bird’s Eye tank who “owns” the lower 36 inches of the tank. If the tank were not deep enough, she would not be able to comfortably coexist with other fish.

We’re returning the NOAA tank to a deep-water theme based on species and some simple design elements. An illusion of depth can be accomplished by hiding the water’s surface and using minimal lighting. The Japanese spider crab exhibit next door at Oregon Coast Aquarium also makes good use of these principles. When this is done right, visitors can get an intuitive sense of the animals’ natural depth range—regardless of the actual depth of the tank—before they even read the interpretive text.

We’re also using a new resident to help us clean up. The resident in question is a Velcro star (Stylasterias sp.) that was donated a couple of months back. It is only about eight inches across, but the species can grow quite large. Velcro stars are extremely aggressive and will even attack snails and the fearsome sunflower stars (Pycnopodia helianthoides) that visitors know from our octopus tank. Our Velcro star will, we hope, cull the population of tiny marine snails that have taken over the NOAA tank’s front window in recent months.

Colleen has been very proactive in taking on major exhibit projects like this, and she has recruited a small army of husbandry volunteers—to whom I’ll refer hereafter as Newberg’s Fusiliers—to see them through. Big things are happening on all fronts, and with uncommon speed.

I want to talk today about something many of us here have alluded to in other posts: the approval process (and everything that comes after it) for conducting ethical human research. What grew out of some truly unethical, primarily medical, research on humans many years ago has evolved into something that can take up a great deal of your research time, especially on a large, long-duration grant such as ours. Many people (including me, until recently) thought of this process as mostly an up-front task: get approval, then more or less forget about it except for obtaining consent as you go, unless you significantly change your research questions or methods. Wrong! It’s a much more constant, living thing.

We at the Visitor Center are a weird case for our Institutional Review Board office at the university in several ways. First, even though we generally do educational research as part of the Science and Mathematics Education program, our research sites (the Visitor Center and other community-based locations) are not typical “approved educational research settings” such as classrooms. Classrooms have been used so frequently over the years that they have a more streamlined approval process, unless you’re introducing a radically different type of experiment. Second, we serve several visitor populations: the general public, OSU student groups, and K-12 school and camp groups. Each has different privacy expectations and different conditions for attending (the public: none; OSU student groups: attendance may be part of a grade), and thus requires different levels and forms of consent for research. On top of that, we’re trying to video record our entire visitor population, and getting signatures from 150,000+ visitors per year just isn’t feasible. Yet some of the research we’re doing involves video recording that is more in-depth than the anonymized overall timing, tracking, and exhibit-to-exhibit visitor recognition.

What this means is a whole stack of IRB protocols that someone has to manage. At current count, I am managing four: one for my thesis, one for eyetracking in the Visitor Center (looking at posters and such), one for a side project involving concept mapping, and one for the general overarching video recording for the VC. The first three have been approved; the last is in the middle of several rounds of negotiation on signage, etc., as I’ve mentioned before. Next up, we need to write a protocol for the wave tank video reflections, and another for groundtruthing the automatic timing, tracking, and face-recognition data against the video recordings. In the meantime, the concept mapping protocol has been open for a year and needs to be closed. My thesis protocol has been approved nearly as long, went through several deviations in which I did things out of order or without getting updated approval from the IRB, and soon needs to be renewed itself. Plus, we already have revisions to the video recording protocol waiting for once the original approval happens. Thank goodness the eyetracking protocol is already in place and in a sweet spot time-wise (not needing renewal soon), since we have to collect some eyetracking data around our Magic Planet for an upcoming conference, though I did have to check it thoroughly to make sure what we want to do in this case falls under what’s been approved.

On the positive side, though, we have a fabulous IRB office that is willing to work with us as we break new ground in visitor research. Between them, us, and the OSU legal team, we are crafting a strategy that we hope will be useful to other informal learning institutions as they proceed with their own research. Without their cooperation, very little of our grand plan could be realized. Funders are starting to realize this, too: before making a final award for a grant, they require proof that you have at least discussed the basics of your project with your IRB office and that the office is on board.

Here’s a roundup of some of our technology testing and progress lately.

First, reflections from our partners Dr. Jim Kisiel and Tamara Galvan at California State University, Long Beach. Tamara recently tested the iPad with QuestionPro/SurveyPocket, Looxcie cameras, and a few other apps to conduct surveys at the Long Beach Aquarium, which doesn’t have wifi in the exhibit areas. Here is Jim’s report on their usefulness:

“[We] found the iPad to be very useful.  Tamara used it as a way to track, simply drawing on a pdf and indicating times and patterns, using the app Notability.  We simply imported a pdf of the floorplan, and then duplicated it each time for each track.  Noting much more than times, however, might prove difficult, due to the precision of a stylus.  One thing that would make this even better would be having a clock right on the screen.  Notability does allow for recording, and a timer that goes into play when the recording is started.  This actually might be a nice complement, as it does allow for data collector notes during the session. Tamara was unable to use this feature, though, due to the fact that the iPad could only run one recording device at a time–and she had the looxcie hooked up during all of this. 

Regarding the looxcie.  Tamara had mixed results with this.  While it was handy to record remotely, she found that there were many signal drop-outs where the mic lost contact with the iPad.  We aren’t sure whether this was a limitation of the bluetooth and distance, or whether there was just too much interference in the exhibit halls.  While looxcie would have been ideal for turning on/off the device, the tendency to drop communication between devices sometimes made it difficult to activate the looxcie to turn on.  As such, she often just turned on the looxcie at the start of the encounter.  It is also worth noting that Tamara used the looxcie as an audio device only, and sound quality was fine.
 
Tamara had mixed experiences with Survey Pocket.  Aside from some of the formatting limitations, we weren’t sure how effective it was for open-ended questions.  I was hoping that there was a program that would allow for an audio recording of such responses.  She did manage to create a list of key words that she checked off during the open-ended questions, in addition to jotting down what the interviewee said.  This seemed to work OK.  She also had some issues syncing her data–at one point, it looked like much of her data had been lost, due in part to … [problems transferring] her data from the iPad/cloud back to her computer.  However, staff was helpful and eventually recovered the data.
 
Other things:  The iPad holder (Handstand) was very handy and people seemed OK with using it to complete a few demographic questions. Having the tracking info on the pad made it easier to juggle papers, although she still needed to bring her IRB consent forms with her for distribution. In the future, I think we’ll look to incorporate the IRB into the survey in some way.”
Interestingly, I just discovered that a new version of SurveyPocket *does* allow audio input for open-ended questions. However, OSU has recently purchased university-wide licenses from a different survey company, Qualtrics, which does not yet have an offline app mode for tablet-based data collection. That mode seems to be in development, though, so we may change our minds about which company we go with when the QuestionPro/SurveyPocket license is up for renewal next year. It’s amazing how much of the research I did on these apps last year is already out of date.
Along the same lines of software updates upsetting well-laid plans, we’re purchasing a couple of laptops so we can do more data analysis away from the video camera system’s desktop computer and away from the eyetracker. That suddenly confronted us with the Windows 8 vs. Windows 7 dilemma: the software for both of these systems is Windows 7-based, but now that Windows 8 is out, the school had to decide whether or not to upgrade. Luckily for us, the school is skipping Windows 8 for the moment, so the new laptops will stay on Windows 7 and can actually run the software; the camera and eyetracker programs themselves likely won’t be Windows 8-ready until sometime in the new year.
Lastly, we’re still bulking up our capacity for data storage and sharing, as well as the network capacity for video data collection. I recently put in another new server dedicated to handling data sharing, with the two older servers as slaves and the cameras spread out between them. In addition, we put in a NAS system with five 3TB hard drives for storage. Mark assures me we’re getting to the point of having this “initial installation” of equipment finalized …
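
For a sense of why the storage keeps growing, here’s a back-of-the-envelope sketch of how quickly continuous exhibit video eats disk space. Every number below (camera count, bitrate, recording hours) is a hypothetical placeholder, not our actual setup:

```python
# Back-of-the-envelope storage estimate -- every figure here is a
# hypothetical placeholder, not our actual camera count or bitrate.
CAMERAS = 40           # hypothetical number of exhibit cameras
BITRATE_MBPS = 2.0     # hypothetical average bitrate per camera, in Mbit/s
HOURS_PER_DAY = 10     # hypothetical daily recording window
USABLE_TB = 15         # five 3 TB drives, ignoring RAID/filesystem overhead

# Convert Mbit/s across all cameras into GB recorded per day
gb_per_day = CAMERAS * BITRATE_MBPS / 8 * 3600 * HOURS_PER_DAY / 1000
days_until_full = USABLE_TB * 1000 / gb_per_day

print(f"~{gb_per_day:.0f} GB of video per day")
print(f"~{days_until_full:.0f} days before the storage pool fills")
```

With assumptions in that ballpark, the pool fills in a matter of weeks, which is part of why the “initial installation” keeps expanding.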

How much progress have I made on my thesis in the last month? Since I last posted about my thesis, I have completed the majority of my interviews. Of the 30 I need, I have completed all but four, and three of the remaining four are scheduled. Of the roughly 20 eyetracking sessions, I have completed all but about seven, with probably three of the remainder scheduled. I also presented some preliminary eyetracking findings at the Geological Society of America conference in a digital poster session. Whew!

It’s a little strange to have set a desired number of interviews at the beginning and to feel I have to fulfill that number and only that number, rather than soliciting from a wide population and taking as many as I could get past a minimum. Now, if I were to get a flood of applicants for the “last” novice interview spot, I might want to risk overscheduling to compensate for no-shows (which, as you know, have plagued me). On the other hand, I would risk having to cancel if an “extra” subject got scheduled, which I suppose is not a big deal, but for some reason I would feel weird canceling on a volunteer – would it put them off volunteering for research in the future?

Next up is processing all the recordings, backing them up, and getting them transcribed. I’ll need to create a rubric to score the informational answers along the lines of 100% correct, partially correct, or not at all correct. Then comes coding: finding patterns in the data, categorizing those patterns, and asking someone to serve as a fellow coder to verify my codebook and coding once I’ve made a pass through all of the interviews. Then I’ll have to decide whether the same coding applies equally to the questions I asked during the eyetracking portion, since I didn’t probe understanding as deeply there as I did in the clinical interviews, though I still asked participants to justify their answers with “how do you know” questions.
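
Once that fellow coder has made an independent pass, I’ll want some measure of how well we agree beyond raw percent agreement. As a rough illustration (the coder labels and scores below are made up, not real data), something like Cohen’s kappa on the three-level rubric can be computed in a few lines of Python:

```python
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Agreement between two coders, corrected for chance agreement."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in labels) / (n * n)
    return (observed - expected) / (1 - expected)

# Made-up scores on the three-level rubric for ten interview answers
me     = ["correct", "partial", "incorrect", "correct", "partial",
          "correct", "incorrect", "partial", "correct", "correct"]
fellow = ["correct", "partial", "incorrect", "partial", "partial",
          "correct", "incorrect", "correct", "correct", "correct"]

print(f"Cohen's kappa: {cohen_kappa(me, fellow):.2f}")  # ~0.68 for this toy data
```

A kappa near 1 means near-perfect agreement once chance is accounted for; a low value would send me back to tighten up the codebook before coding the rest.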

We’ll see how far I get this month.

As the lab considers how to encourage STEM reflection around the tsunami tank, this recent post from Nina Simon at Museum 2.0 reminds us what a difference the choice of a single word can make in visitor reflection:

“While the lists look the same on the surface (and bear in mind that the one on the left has been on display for 3 weeks longer than the one on the right), the content is subtly different. Both these lists are interesting, but the “we” list invites spectators into the experience a bit more than the “I” list.”

So as we go forward, not only the physical booth setup (i.e., private or open to spectators) but also the specific wording can influence whether our visitors focus on the task we’re trying to investigate, and how broad or how personal their reflections are. Hopefully we’ll be able to test several supposedly equivalent prompts, as Simon suggests in an earlier post, alongside more “traditional” iterative prototyping.

Informal educators, scientists, science education faculty, and science institutions worked with the Lincoln County (Oregon) School District to develop and implement professional development for K-12 teachers around ocean literacy and aquatic & marine science. The Oregon Coast Aquatic and Marine Science Partnership (OCAMP) ran from 2009 to 2012 and offered teachers scientific presentations on topics ranging from estuaries to climate change. Project teachers were also given opportunities to attend and present at national conferences, learn about the different aquatic & marine curricula and materials (including lab materials) available, work in a professional learning community in which they completed an action research project, and much more.

For this post, however, I want to focus on lesson plans written by project teachers. Each OCAMP teacher was encouraged to submit an original lesson plan, or a lesson plan that used pre-existing material either in a new way or over a series of days. The lesson plans cover a broad range of topics, from plankton to tsunamis. There is now a wonderful selection of teacher-written (and approved!) lesson plans available on the OCAMP website, http://ocampmsp.webs.com, under the “OCAMP Developed Lessons” tab. The lesson plans are organized by Ocean Literacy Principle and by grade level. Hopefully these lesson plans (and the other available information) will be helpful to both formal and informal educators.