Background

So in class I have been talking about digitizing maps of archaeological sites in the Central Valleys of Oaxaca in order to examine changes in the distribution of sites within the region over time. April Fools! I spoke to my adviser and she thought that this project might be too much to take on this term. Instead, she suggested that I continue developing a project that I did for the remote sensing class last term that involved a spectral classification of alluvial sediments in one of Oaxaca’s Central Valleys. Before I describe that project, it may be necessary to provide some background and justification.

Over the past several years, my lab has been working with archaeologists from the US and Mexico on a large collaborative research project focused on assessing changes in political and economic integration in the Central Valleys of Oaxaca, Mexico, the core cultural region of the Zapotec civilization, one of Mesoamerica’s first and most enduring complex societies. To do this, our lab takes samples of ceramics provided by each of our collaborators and analyzes them via Instrumental Neutron Activation Analysis (INAA) to determine their geochemical composition for a suite of 30 elements. We then statistically compare the compositional data from the ceramics to similar data obtained for over 300 clay samples that we have collected from across the region to identify areas where ceramic groups may have been produced. Using these data, we can identify locally produced wares for each site in our database, as well as the sources of wares that were imported to each site. This allows us to model the structure of regional economic networks for different periods of interest and examine changes in regional economic integration over time.

One of the fundamental advantages of this approach has been our comparative database of geochemical information for natural clays from the region. But while the number of samples in this database is fairly high, sampling was largely conducted on an opportunistic basis from exposures of clay in road-cuts, streambanks, and agricultural fields, leading to uneven sample coverage across the study area. To estimate the geochemical composition of clays in areas between sampling locations, we generated a simple interpolated model of regional clay chemistry that covers the entire Central Valley System at a spatial resolution of one kilometer.

While our interpolated model of regional clay chemistry allows us to identify potential ceramic production areas between our clay sampling locations, it has a couple of limitations. First, the model’s low spatial resolution glosses over finer-scale differences in clay chemistry that can be readily observed in the original data. Second, and more importantly, the model does not account for the way that sediment actually moves through the region’s alluvial system.

The trace-element geochemistry of natural clay is largely determined by parent material. The Central Valleys of Oaxaca are flanked by a series of geologically complex mountain ranges that variously contribute to residual and alluvial sediments across the study area, resulting in discrete differences in observed clay chemistry from one sampling location to the next. When we model the clay chemistry for locations between sampling points using simple interpolation methods, we ignore crucial factors such as parent material and the directionality of sediment transport from one area to the next.

To facilitate the development of a refined model of regional clay chemistry, last term I used multispectral ASTER data from NASA’s Terra (EOS) satellite to develop a spectral classification of alluvial sediments in the Tlacolula Valley, one of the three main branches of Oaxaca’s Central Valley System (see figure below). While this project allowed us to clearly visualize patterns in the valley’s sediment routing system, we have not yet compared the remote sensing data to the geochemical data for each of our sampling locations to assess its utility in developing a refined model of regional clay chemistry.

[Figure: Geo 544 poster showing the spectral classification of alluvial sediments in the Tlacolula Valley]

Research Objective

This term, I will build upon my previous spectral classification of alluvial sediments in the Tlacolula Valley to assess whether remotely sensed spectral reflectance data may be used to more accurately model clay chemistry within the Tlacolula Valley. ASTER data contains 14 bands of spectral measurements. Some of these are useful for identifying differences in surface geology, while others are better for identifying vegetation cover and urban areas. Whether any of these bands (or combinations of bands) correlate with regional clay chemistry is an open question. The vast majority of our clay samples were collected not from the surface, but from B horizons in exposed soil profiles. Nevertheless, insofar as the surface of most soil profiles in this area is likely to be derived from sediment sources similar to those of their subsurface components, it may be possible to correlate spectral surface reflectance with our regional clay composition data. If so, we may be able to use the ASTER data to generate a new, higher resolution model of Tlacolula Valley clay chemistry.

Dataset

This study will rely on data collected by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER). This instrument, carried aboard NASA’s Terra satellite, collects data over 14 spectral regions using three subsystems: the Visible and Near Infrared (VNIR), the Shortwave Infrared (SWIR), and the Thermal Infrared (TIR). The VNIR subsystem collects stereoscopic data over three spectral regions in the visible and near infrared at a spatial resolution of 15 m. The SWIR subsystem collects data for six spectral regions in the shortwave infrared at a spatial resolution of 30 m using a single nadir-pointing detector. Finally, the TIR subsystem collects data over five spectral regions in the thermal infrared at a spatial resolution of 90 m using a single nadir-pointing detector.

More specifically, this study will rely on a single tile of ASTER Level 1B Precision Terrain Corrected Registered At-Sensor Radiance (AST_L1B) data collected for a region covering the Tlacolula Valley in January of 2001, a period chosen for its low cloud cover and cleared fields. ASTER Level 1B data is distributed as a multi-band file containing calibrated at-sensor radiance that has been geometrically corrected, rotated to a north-up UTM orientation, and terrain corrected using an ASTER DEM. This will be clipped to an area encompassing the valley floor and adjacent piedmont; mountainous areas outside the study area will be excluded from analysis.

Hypotheses

This project will be more of an exploratory exercise in methods development than a hypothesis test.

To determine whether spectral measurements from the ASTER data can be correlated with Tlacolula clay chemistry, I will use the geographic locations of our clay samples from the Tlacolula Valley to extract the spectral profile corresponding to each sample location. These will then be regressed against individual elements from our geochemical data using a series of stepwise multivariate regression analyses in R or another statistical software package. If fairly strong correlations between spectral measurements and geochemical data are found, a refined spatial model of Tlacolula clay chemistry will be generated using the resulting regression formulas.
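As a first pass, the extraction-and-regression step might look something like the Python sketch below (rather than R); the CSV of clay samples, the stacked-band GeoTIFF, and the column names are placeholders, and the element "Fe" is just an example.

```python
# A minimal sketch (not the project's actual workflow): extract ASTER band values
# at clay sample locations and run a simple forward-stepwise regression per element.
# File names, column names, and the stacked-band GeoTIFF are assumptions.
import pandas as pd
import rasterio
import statsmodels.api as sm

samples = pd.read_csv("tlacolula_clay_samples.csv")   # hypothetical: UTM x, y + element data
coords = list(zip(samples["utm_x"], samples["utm_y"]))

with rasterio.open("aster_l1b_stack.tif") as src:      # hypothetical 14-band stack
    bands = pd.DataFrame(list(src.sample(coords)),
                         columns=[f"b{i}" for i in range(1, src.count + 1)])

def forward_stepwise(y, X, max_terms=5):
    """Greedy forward selection by AIC; returns the fitted OLS model."""
    selected, remaining = [], list(X.columns)
    best_aic = float("inf")
    while remaining and len(selected) < max_terms:
        aic, col = min((sm.OLS(y, sm.add_constant(X[selected + [c]])).fit().aic, c)
                       for c in remaining)
        if aic >= best_aic:      # no remaining band improves the model
            break
        best_aic = aic
        selected.append(col)
        remaining.remove(col)
    return sm.OLS(y, sm.add_constant(X[selected])).fit()

fit_fe = forward_stepwise(samples["Fe"], bands)        # e.g. iron; repeat per element
print(fit_fe.summary())
```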

The project that I conducted last term showed that sediments in upland, piedmont areas of the Tlacolula Valley could be easily classified according to their source lithology; misclassification largely occurred only within the Rio Salado floodplain where sediments become more mixed. In our current interpolated model of regional clay chemistry, elemental estimates between sampling locations are always modeled as intermediate, without respect to parent material or topographic position. If successful, this project will yield element estimate maps that more closely reflect patterns of sediment transport seen in the ASTER imagery.

Expected outcome

If our existing interpolated model of regional clay chemistry correlates with the ASTER data as well as the geochemical data from our actual sampling locations does, development of a revised model may be unnecessary. If, however, the original clay data correlate substantially better with the ASTER measurements, a new multi-element model of Tlacolula clay chemistry will be generated using the ASTER data.

That said, there is a very strong chance that the ASTER data will only correlate with a few elements, if any. If this is the case, we will explore other options for generating a revised spatial model of Oaxaca clay chemistry.

Significance

If successful, this project will represent a significant advance in methodology for mapping the elemental composition of alluvial sediments regionally. This has some utility in archaeology for identifying potential sources of clay used to make ancient ceramics, but it may also prove useful for soil scientists, geologists, and other researchers concerned with how the admixture of alluvial sediments may contribute to variability in sediment chemistry at a regional scale.

My level of preparation

I have been using ArcGIS for nearly ten years now, so I am thoroughly prepared for this project in that regard. I also have a very strong background in multivariate statistics, though I haven’t used R in some years. I was only introduced to the image processing software ENVI last term during my remote sensing class, but am confident that I have the skills required to complete this project.

Project Abstract (Taken from a recent conference poster):

The Willamette Valley during the Terminal Pleistocene was an environment in constant flux, creating a changing world for the early inhabitants of the Pacific Northwest. The valley floor contains an extensive record of Pleistocene ecology and archaeology; however, the information is locked within a complex stratigraphic sequence. Using a Geoprobe direct push coring rig, 13 sediment cores were extracted from surficial deposits in the Mill Creek watershed at Woodburn High School. The core samples were analyzed on Oregon State University’s Itrax core scanner, returning high-resolution optical imagery, radiograph images, and x-ray fluorescence (XRF) data. The XRF data is used to construct a chemostratigraphic profile of the study area in order to define and model the distribution of sediments potentially related to late Pleistocene-aged archaeological sites.

Research Question, etc:

I am seeking to explore methods of constructing chemostratigraphic frameworks for sediments at both archaeological and non-archaeological sites. The method most typically used to define chemostratigraphy at archaeological sites is portable x-ray fluorescence of previously described stratigraphy, followed by multivariate statistics to separate the strata by chemistry. Using an Itrax core scanner, sediment cores extracted from a drainage at Woodburn High School were scanned and continuous high-resolution x-ray fluorescence (XRF) data were acquired. Using wavelet analysis, I hope to define the site stratigraphy and use it to construct 2D and 3D representations of the subsurface landscape.

[Map: Woodburn High School study area.]

Project Dataset:

The dataset consists of XRF data taken at 2 mm intervals from 65 core samples, each 1.5 meters long. These cores come from 14 different boreholes covering the majority of the defined study area, which is approximately 200 × 50 meters. The data are organized into 14 CSV files containing the XRF results.

 

Hypothesis:

Through preliminary testing I have seen potential in using this method to successfully identify stratigraphy. If the results of the preliminary test translate across all 14 boreholes, constructing landscape-wide stratigraphic profiles from the borehole samples and wavelet analysis is very likely to succeed.

 

Approaches:

Throughout the term, and through the process of conducting analysis of the Woodburn sediments, I hope to learn how to better utilize and interpret wavelet analysis data, as well as digitally construct 2D and 3D stratigraphic profiles using interpolation methods.

Breaks in stratigraphy can be shown clearly through changes in color or texture, and multivariate techniques have been very useful for identifying them. This approach works well for confirming the chemostratigraphy of a site when the XRF measurements have attributed strata. Wavelet analysis allows the user to see possible changes in geochemistry, which opens the possibility of identifying geochemical breaks in strata from borehole data that do not contain established stratigraphic names and boundaries.

In order to conduct the analysis, elements had to be selected for both univariate and bivariate analysis. There were a variety of ways that I could have selected the data, but ultimately, in the test sample, I chose to look at the elements that showed the most obvious changes. This allowed me to really understand how wavelet analysis works compared to a regular line graph. For the final analysis, I will look at similar completed work and select the elements best suited to bivariate analysis for XRF-based mineral studies.
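To illustrate the kind of wavelet pass I have in mind, here is a minimal Python sketch using PyWavelets on a single element from one borehole; the file name, column names, element choice, and wavelet (Morlet) are assumptions rather than settled analysis decisions.

```python
# A minimal sketch (assumed file layout and column names): read one borehole's Itrax
# XRF CSV, take a single element's downcore counts, and run a continuous wavelet
# transform to highlight depths where the geochemistry may change abruptly.
import numpy as np
import pandas as pd
import pywt
import matplotlib.pyplot as plt

core = pd.read_csv("borehole_01.csv")              # hypothetical: one of the 14 CSV files
depth_mm = core["position_mm"].to_numpy()          # 2 mm sampling interval
signal = core["Ca"].to_numpy(dtype=float)          # e.g. calcium counts; repeat per element
signal = (signal - signal.mean()) / signal.std()   # standardize before the transform

scales = np.arange(1, 128)                          # scales ~ feature widths in samples
coeffs, _ = pywt.cwt(signal, scales, "morl")        # Morlet continuous wavelet transform

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(8, 6))
ax1.plot(depth_mm, signal)
ax1.set_ylabel("Ca (standardized)")
ax2.imshow(np.abs(coeffs), aspect="auto", origin="lower",
           extent=[depth_mm.min(), depth_mm.max(), scales.min(), scales.max()])
ax2.set_xlabel("Depth (mm)")
ax2.set_ylabel("Wavelet scale")
plt.tight_layout()
plt.show()
```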

 

Expected Outcome:

Visually, I would like to create a stratigraphic profile for each of the four transects at the site, as well as a 3D representation of the site using ArcScene. As for the data, I would like to create a type stratigraphy that archaeologists can reference in order to help find early archaeological sites in the Willamette Valley.

 

Significance:

The results of this project will hopefully help archaeologists understand the stratigraphy, and possibly the environmental conditions, of the Woodburn, Oregon area, and perhaps the Willamette Valley more broadly. The sediments buried at this site could contain clues as to which sedimentary deposits archaeological sites may be hidden in, or at least hidden near.

My Level of Preparation:

I am pretty knowledgeable in ArcGIS and the rest of the Arc/ESRI suite of programs. My python skills are average, with better skill in ArcPy. As for R, I have taken courses that deal with it, and am steadily improving my skills.

Question:

What were the effects of the DWH oil spill and associated response activities (e.g. 200 million gallons of oil, 1.8 million gallons of dispersant, in situ burns, and hundreds of additional boats) on the foraging behavior of approximately 1,000 sperm whales residing in the Gulf of Mexico? Sperm whales are extremely efficient deep-diving marine predators, tending to feed on patches of prey that they locate with a mixture of clicks and creaks (Watwood et al. 2006). Dive profile records indicate that sperm whales in the Gulf of Mexico forage near 520 m depth in the water column (Watwood et al. 2006). Sperm whales feed along and about the 1000-m isobath in the region between Mississippi Canyon and De Soto Canyon (Jochens et al. 2008) (Figure 1). They may consume several thousand kilograms of prey a day (Best 1979), comprising about 1,000 individual prey items (Clarke 1997). Given their food consumption needs, their prey resources are likely a critical factor driving their distribution and foraging behavior in the Gulf of Mexico during the spill.

Datasets:

I have requested satellite tag data from the Oregon State University Marine Mammal Institute. Fortunately, OSU scientists tagged and tracked sperm whales before, during, and after the spill. However, there is a good chance I will need to change projects because I do not currently have the datasets in hand for analysis.

Hypotheses:

I hypothesize that the availability of sufficient prey resources required to meet the caloric needs of resident sperm whales outweighed the chaos created by the oil spill and response activities.

Approaches:

I am not sure of the best approach, but hope to get some assistance from my peers. My goal is to measure any shift in foraging areas and to complete trend analysis, hot spot analysis, and cluster analysis of foraging areas prior to, during, and several years post spill.
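One possible starting point, sketched below in Python, would be a density-based clustering of presumed foraging locations for each period; the file, column names, period breakpoints, and DBSCAN parameters are all placeholders and would need to be revisited once I actually have the data.

```python
# A rough sketch of one possible approach (not a settled plan): cluster presumed
# foraging locations for each period (pre-, during-, post-spill) with DBSCAN and
# count the clusters per period. File and column names are assumptions.
import numpy as np
import pandas as pd
from sklearn.cluster import DBSCAN

tags = pd.read_csv("sperm_whale_tags.csv", parse_dates=["timestamp"])   # hypothetical
tags["period"] = pd.cut(tags["timestamp"],
                        bins=pd.to_datetime(["2002-01-01", "2010-04-20",
                                             "2010-09-19", "2016-01-01"]),
                        labels=["pre", "during", "post"])

for period, grp in tags.groupby("period"):
    if grp.empty:
        continue
    # crude conversion of degrees to kilometers near ~28 N, good enough for clustering
    xy = np.column_stack([grp["lon"] * 111.32 * np.cos(np.radians(28.0)),
                          grp["lat"] * 110.57])
    labels = DBSCAN(eps=25.0, min_samples=20).fit_predict(xy)   # 25 km neighborhoods
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    print(period, "foraging clusters:", n_clusters)
```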

Expected Outcomes:

It would be ideal to detect patterns that are predictive of food web disturbance and that ultimately predict the response of sperm whales to disturbance.

Level of preparation:

I have moderate experience with ArcMap and some experience with spatial analyses with ArcMap. I have an understanding of basic statistics and statistical software Minitab. I do not have any experience with R or Python.

References:

Best, P.B. 1979. Social organization in sperm whales, Physeter macrocephalus. Pp. 227-289 in Behavior of marine animals, Vol. 3, edited by H.E. Winn and B.L. Olla. Plenum, New York.

Clarke, M.R. 1997. Cephalopods in the stomach of a sperm whale stranded between the islands of Terschelling and Ameland, southern North Sea. Bulletin de l’Institut Royal des Sciences Naturelles de Belgique, Biologie 67(Suppl.): 53-55.

Jochens, A.E. and D.C. Biggs, editors. 2006. “Sperm whale seismic study in the Gulf of Mexico; Annual Report: Years 3 and 4.” OCS Study MMS 2006-067. 111 pp. Minerals Management Service, Gulf of Mexico OCS Region, U.S. Dept. of the Interior, New Orleans, LA.

Jochens, A., D. Biggs, K. Benoit-Bird, D. Engelhaupt, J. Gordon, C. Hu, N. Jaquet, M. Johnson, R. Leben, B. Mate, P. Miller, J. Ortega-Ortiz, A. Thode, P. Tyack, and B.W. 2008. “Sperm whale seismic study in the Gulf of Mexico: Synthesis report.” OCS Study MMS 2008-006. 341 pp. Minerals Management Service, Gulf of Mexico OCS Region, U.S. Dept. of the Interior, New Orleans, LA.

Watwood, S.L., P.J. Miller, M. Johnson, P.T. Madsen, and P.L. Tyack. 2006. Deep-diving foraging behaviour of sperm whales (Physeter macrocephalus). Journal of Animal Ecology 75(3):814-825.

My spatial problem

A description of the research question that you are exploring.

Global change is driven by continuing variability in the climate, the ecological responses to those climate drivers, and the socioeconomic and political responses. These changes alter the landscape in predictable and unforeseen ways, simultaneously modifying the interactions between the landscape and its biological communities. Our reliance on natural resources such as fish highlights the coupled impacts of these changes on human and natural systems. Exploring the impacts of this coupled system this term, the spatial problem I’ll address is how habitat suitability models for the giant gourami (Osphronemus goramy) in the Mekong Basin differ between models based on the physical landscape alone and those that incorporate human impacts. I will use the environmental indicators surface temperature, salinity, and turbidity to map potential habitat for the giant gourami, with an additional layer informed by indicators of human impacts such as land use, population, and proximity to industry, to evaluate the differences.

The giant gourami is an air-breathing fish, native to this region, and grown commercially as a food fish as well as for the aquarium market throughout SE Asia (Lefevre et al., 2014).  The fish inhabits freshwater, brackish, benthopelagic environments in swamps, lakes, and rivers among vegetation, found in medium to large rivers and stagnant water bodies.  People around the world rely on fish as a primary source of protein and income, and the growing aquaculture industry provides roughly half of the global fish supply (FAO, 2014).  However, to meet the demands of a rapidly growing population (exceeding 7 billion by 2020), a rising middle class, and an increasingly urban population (65% by 2020), protein consumption is expected to increase to 45kg per capita by 2020, a 25% increase from 1997—the fish consumption rate is no outlier.

 

[Photo: giant gourami]

A description of the dataset.

  1. Boundary data for the IUCN-defined habitat range of the giant gourami in the Mekong Basin (shown below).  The status code for the species is listed in this dataset as “Probably Extant.”  However, this particular status code has been “discontinued for reasons of ambiguity.”  So it is my hope that this analysis will provide insight into the IUCN-defined habitat and assess how it has changed through time by assessing the parameters used to develop the IUCN data and evaluating additional landscape variables.
  2. Point data on fish occurrence 1930-1982 that I’ll use to develop a baseline habitat suitability index: http://www.fishbase.org/Map/OccurrenceMapList.php?genus=Osphronemus&species=goramy&dsource=darwin_all_v2
  3. Current surface temperature from http://www.worldclim.org/
  4. Rivers of SE Asia from http://www.naturalearthdata.com/
  5. Vegetation classifications from NDVI data- time series- http://glam1.gsfc.nasa.gov/
  6. Additional information to evaluate the human impacts is available through the Mekong River Basin Study from the World Resources Institute: http://www.wri.org/resources/data-sets/mekong-river-basin-study

[Map: IUCN-defined range of the giant gourami]

Hypotheses: I expect that the potential habitat for the giant gourami has increased over time and with increased human impacts, due to the physiological resilience of the species.  This fish inhabits regions characterized by fresh to brackish water and slow-moving areas like swamps, lakes, and large rivers.  Given its unique ability to breathe air, this fish can survive in poorly oxygenated to anoxic waters.  I expect that with climate change, increased urbanization, and the changing hydrologic profile of the system due to potential dams, this fish may fare better than other species because of its ability to live in ‘poorer’ environmental conditions.

Approaches: I hope to use Python or ModelBuilder to iterate through the available datasets to assess changing habitat based on a habitat suitability index for the giant gourami.  There is also a time-series tool in Arc that I would like to explore.
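As a rough illustration of the raster overlay I have in mind (not a finished workflow), the sketch below rescales a few co-registered environmental layers to 0–1 suitability scores and combines them into a weighted index; the file names, breakpoints, and weights are placeholders.

```python
# A minimal sketch of a weighted habitat suitability overlay (illustrative only):
# rescale each environmental layer to a 0-1 suitability score and sum weighted scores.
# Assumes all layers are co-registered on the same grid; names and values are assumptions.
import numpy as np
import rasterio

def rescale(arr, lo, hi):
    """Linear 0-1 suitability between lo (unsuitable) and hi (optimal)."""
    return np.clip((arr - lo) / (hi - lo), 0.0, 1.0)

layers = {
    "temp":  ("surface_temperature.tif", 20.0, 30.0, 0.4),   # deg C, weight 0.4
    "ndvi":  ("ndvi_mean.tif",            0.2,  0.6, 0.3),
    "human": ("human_impact_index.tif",   1.0,  0.0, 0.3),   # inverted: more impact, less suitable
}

hsi = None
profile = None
for name, (path, lo, hi, weight) in layers.items():
    with rasterio.open(path) as src:
        band = src.read(1).astype("float32")
        profile = profile or src.profile
    score = rescale(band, lo, hi) * weight
    hsi = score if hsi is None else hsi + score

profile.update(dtype="float32", count=1)
with rasterio.open("gourami_hsi.tif", "w", **profile) as dst:
    dst.write(hsi, 1)
```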

Expected outcome: I hope to develop a habitat suitability index for the giant gourami and compare habitat suitability models for the potential habitat based on the changing physical landscape and increasing human impacts.  If the data are available, I hope to create a simple time-series animation for each model.

Significance: Fish production from aquaculture is poised to absorb an increasing amount of this demand for meat, offering techniques that offset some of the environmental costs of production.  Depending on the species and farming conditions, fish production can achieve feed-conversion ratios lower than any type of terrestrial animal meat production.  If farmed responsibly, some species of the diverse group of air-breathing fish, such as the giant gourami, present an advantage in aquaculture because of their unique ability to breathe air.  However, it is critical to understand the impact of increased production levels on the natural range of the species in order to mitigate unwanted invasions or overloading of the natural environment.  A study assessing the spatio-temporal patterns of habitat suitability for potential aquaculture species will allow managers to make informed decisions about aquaculture siting and resource allocation.

Your level of preparation: In terms of my experience with the tools available for this type of analysis, I am starting to develop my comfort with ArcInfo, ModelBuilder, and Python for GIS.  However, I am no expert.  I have also been exposed to some statistical applications of R, but again am not an expert.

 

FAO. (2014). The State of World Fisheries and Aquaculture 2014. Rome, Italy.

Lefevre, S., Wang, T., Jensen, A., Cong, N. V., Huong, D. T. T., Phuong, N. T., & Bayley, M. (2014). Air-breathing fishes in aquaculture. What can we learn from physiology? Journal of Fish Biology, 84, 705–731. doi:10.1111/jfb.12302

My Spatial Problem

  1. Research Question

Soil microbial communities are extremely complex: each gram of soil contains about 10⁹ microorganisms.  Due to this complexity, studying these communities at a species level and understanding any meaningful relationships is difficult.  Therefore, there is a lot of current research looking at how the composition of the community is dictated by environmental parameters in the soil and understanding how the community shifts as these parameters change.  However, before I can begin to examine these microbial communities, I need to explore the spatial distribution of these environmental factors.  If I do find a neat relationship between significant environmental parameters and microbial composition, I want to explore different methods for interpolating this sampled relationship over larger, unsampled areas.

  2. The Dataset

The dataset is a set of soil parameters from samples throughout the state of Oregon. These parameters only include edaphic factors such as pH and texture metrics; however, I also hope to include climate factors such as mean annual temperature and precipitation when exploring this relationship between environmental factors and microbial community distribution.  Samples were collected across Oregon to try and encompass all the Common Resource Areas (CRAs).  According to the NRCS (Natural Resource Conservation Service), a CRA is “a geographic area where resource concerns, problems, or treatment needs are similar.”  The CRA geographic partitioning map has a scale of 1:250,000 and considers landscape factors, human use, climate, and other natural resource information.

  3. Hypothesis

I hypothesize that these environmental parameters are indeed influenced by spatial autocorrelation; however, due to the low sampling point density it will be difficult to confidently model the distribution of these soil parameters.  For this reason, it may be beneficial to see how conserved these environmental factors are within their respective CRA. If the values of a given environmental variable are similar within a CRA, the CRA polygon may be a more robust unit of analysis than individual sampling points.

  4. Approaches

First, I need to examine the spatial relationships across these environmental factors and see how strong the relationships are.  I also would like to examine how well these environmental factors are conserved within and between CRAs.
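A simple first check of how conserved the parameters are within CRAs could compare pooled within-CRA variance to statewide variance, as in the short pandas sketch below; the file and column names are assumptions, and a formal test (ANOVA or a spatial autocorrelation statistic) would follow.

```python
# A small sketch of the "how conserved within a CRA" check (column names are assumptions):
# compare the pooled within-CRA variance of each soil parameter to its statewide variance.
# A ratio well below 1 suggests the CRA polygons capture much of the variation.
import pandas as pd

soil = pd.read_csv("oregon_soil_samples.csv")        # hypothetical: one row per sampling point
params = ["pH", "clay_pct", "sand_pct", "MAT", "MAP"]

overall_var = soil[params].var()
within_var = (soil.groupby("CRA")[params]
                  .var()
                  .mul(soil.groupby("CRA").size() - 1, axis=0)   # weight each CRA by n_i - 1
                  .sum() / (len(soil) - soil["CRA"].nunique()))  # pooled within-group variance

print((within_var / overall_var).round(2))
```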

  5. Outcome

I will produce a map of environmental factors interpolated across Oregon using either point data or relationships between CRAs and their respective points.

  6. Significance

Soil microbes provide several ecosystem services such as nutrient cycling; however, the fundamental parameters dictating microbial community distribution across a landscape are not well understood.  Providing a map of the distribution of these microbial communities can help increase the accuracy of regional nutrient cycling models and help quantify soil health.

  7. Expertise

I have very little experience with GIS and Python; however, I did a fair bit of work with R during my undergraduate degree and graduate work.  I hope to gain some proficiency in ArcGIS and learn how to incorporate R scripts into ArcGIS.  There are also several R packages built around spatial analysis with which I hope to become familiar.

Problem/Question:

Traditionally, archaeologists classify projectile points based on a number of different morphological features, such as the presence or absence of distinct features or the shape of the hafting element.  However, projectile points can vary widely in shape, size, and construction technique. This means that their classification can often be a matter of opinion with little objective data to support it. With the ever-increasing availability of 3D scanning and morphometric analysis, it is possible to add a level of statistical confidence to these classifications. The goal of this project is to determine how well these new classification methods match traditional classification schemes.

Data set:

The data for this project consist of a number of projectile points from the Pilcher Creek archaeological site in northeastern Oregon. In total, 44 complete projectile points were recovered from the site, originally classified into three separate categories: corner-notched, lanceolate, and stemmed lanceolate. The majority of the points are made of fine-grained basalt, with a small percentage made of obsidian. These points were originally dated to approximately 8,000 years BP, making them some of the oldest artifacts in Oregon.

Hypothesis/Approach:

It is expected that the classification groups generated through this analysis will mostly follow the original classification, and it may be possible to subdivide the points into more than the original three categories. The rough workflow for this project starts with creating a high-resolution 3D model of each artifact using structured light scanning. These 3D scans are then run through a series of ArcGIS tools to generate a large set of descriptive data that can be used to compare the different points. These data for all the points can then be run through cluster analysis and principal component analysis in order to group artifacts together based on the similarity of their morphology.
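A minimal sketch of the statistical end of that workflow is below, assuming the ArcGIS-derived shape measurements have been exported to a CSV (file and column names are hypothetical); it standardizes the variables, reduces them with PCA, clusters with k-means, and cross-tabulates the clusters against the original typology.

```python
# A minimal sketch (assumed CSV of ArcGIS-derived shape measurements, one row per point):
# standardize the morphometric variables, reduce with PCA, cluster with k-means, and
# compare the clusters against the original three-type classification.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

points = pd.read_csv("pilcher_creek_morphometrics.csv")        # hypothetical file
metrics = points.drop(columns=["artifact_id", "original_type"])

scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(metrics))
points["cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)

print(pd.crosstab(points["original_type"], points["cluster"]))
```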

Significance:

There are a number of benefits to performing this type of analysis. The first is that the 3D scans of the artifacts are much easier to share with the rest of the archaeological community than the actual physical artifacts. It also provides a way to classify these artifacts with calculable certainty and without bias. Finally, it lays the groundwork for future research and comparison. As more artifacts are 3D scanned and made publicly available, it will be possible to create a comparative collection and classification system greater in size and accuracy than anything previously available.

Preparation:

I have a decent amount of experience with Arc and a little with QGIS. I am comfortable with Python and ArcPy. I used to know a little R, but it has been a while since I have used it.

Description of the research question I am exploring.

The broad question I am exploring is, “How will climate change affect fire regimes in the Pacific Northwest in the 21st century?” or stated as an overarching hypothesis:

Over the 21st century, projected changes in climate will cause changes in fire regimes in the Pacific Northwest by influencing vegetation quantity, composition, and fuel conditions.

I am exploring this question in the context of model vegetation and fire results from the MC2 dynamic global vegetation model (DGVM). MC2 is a gridded, process model with modules for biogeochemistry, fire, and biogeography. Inputs consist of climate and soil data. Outputs from the model include vegetation type, carbon fluxes and pools, hydrologic data, and values related to fire, including carbon consumed by fire and fraction of grid cell burned.

MC2’s current fire module computes fuel conditions within each grid cell. Fire occurrence is modeled when conditions exceed a set fuel condition threshold, and an ignition source is always assumed to be present. This threshold-and-assumed-ignition algorithm has the potential to underestimate fire occurrence in areas that rarely or never meet the fuel condition threshold and to overestimate fire occurrence in areas that frequently exceed it. I am currently implementing a stochastic ignitions algorithm that allows the user to set an overall daily ignition probability and applies a Chapman-Richards function to a fuel condition measure to determine the probability of an ignition spreading into a fire.
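In rough terms, the per-cell daily logic is something like the Python sketch below; this is an illustration of the idea, not the MC2 source code, and the parameter values are placeholders.

```python
# A rough sketch of the stochastic-ignitions idea, not the actual MC2 code: each day a
# cell draws an ignition with a user-set probability, and the chance that the ignition
# spreads into a fire follows a Chapman-Richards curve on a 0-1 fuel condition index.
# Parameter names and values below are placeholders.
import numpy as np

rng = np.random.default_rng(42)

def spread_probability(fuel_condition, a=1.0, b=6.0, c=3.0):
    """Chapman-Richards: p = a * (1 - exp(-b * x)) ** c, for x in [0, 1]."""
    return a * (1.0 - np.exp(-b * fuel_condition)) ** c

def fire_occurs(fuel_condition, p_ignition=0.01):
    """One cell, one day: stochastic ignition, then stochastic spread."""
    if rng.random() >= p_ignition:          # no ignition source today
        return False
    return rng.random() < spread_probability(fuel_condition)

# e.g. number of fires simulated over a year in a cell with moderate fuel condition
fuel = 0.6
print(sum(fire_occurs(fuel) for _ in range(365)))
```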

I will be running the model with historical climate (1895 to 2010) and future climate (2011 to 2100) to produce potential vegetation results (i.e. land use not taken into consideration). Historical data are downscaled from PRISM data, and future data are downscaled from output produced by the CCSM4 climate model under the CMIP5 representative concentration pathway (RCP) 8.5. The model will be run at a 2.5 arc minute resolution (approximately 4km x 4km cell size).

I will compare the output from the 20th century to that of the 21st century and characterize differences in fire regime spatially and temporally. This will be the first run of the MC2 with the new stochastic ignitions algorithm.

(I have added several references below related to what is discussed here.)

[Figure: MC2 DGVM results over the Pacific Northwest for mean fraction of cell burned, 2001-2100, using downscaled CCSM4 RCP 8.5 inputs and the MC2 fire algorithm with assumed ignitions.]

The dataset I will be analyzing

The dataset I will be analyzing will come from MC2 model runs described above. The extent of the dataset is from 42° to 49° latitude and from -124.75° to -111° longitude (from the southeast corner of Idaho west to the US coast and north to the Canadian border), comprising 169 x 331 spatial grid cells of size 2.5 x 2.5 arc minutes. Outputs are on an annual basis from 1895 through 2100. Water and barren land are mapped out of the dataset.

Outputs include variables for various carbon pools, fluxes, vegetation characteristics, and fire characteristics. Those I will be analyzing include carbon consumed by fire and fraction of cell burned. I will be summarizing the data over the time dimension to compute mean time between fires (essentially fire return interval, but over a shorter time period than might be appropriate for calculating a true fire return interval).

Hypotheses

  • Vegetation, elevation, and climate will cause fire regimes to cluster spatially through influences on fuel quantity, composition, and condition.
  • Projected increased temperature and change in precipitation patterns will cause fire to be more frequent and/or more severe through influences on fuel quantity, composition, and condition.
  • Shifting climate characteristics will cause regions with similar fire regimes to shift in location due to changing fuel quantity, composition, and conditions.

Kinds of analyses

The first analysis I will do is a cluster analysis using mean time between fires, carbon consumed by fire, and fraction of cell burned. I will first summarize data over six time periods to produce six datasets: four 50-year periods (1901-1950, 1951-2000, 2001-2050, and 2051-2100), and two 100-year periods (1901-2000 and 2001-2100). Then I will run a cluster analysis (type to be determined) on each dataset.
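The sketch below illustrates one way such a pass could work, using k-means purely as a placeholder for whatever clustering method I settle on; it summarizes stand-in arrays with the MC2 grid dimensions into per-cell fire-regime metrics and clusters them.

```python
# One possible clustering pass (the clustering method is still to be decided; k-means is
# just a placeholder). Real runs would load MC2 output instead of the stand-in arrays.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def summarize(frac_burned, carbon_consumed, burn_threshold=0.001):
    """Per-cell summaries over one period: mean fraction burned, mean carbon
    consumed by fire, and mean time between fires (period length / fire years)."""
    fire_years = (frac_burned > burn_threshold).sum(axis=0)
    mtbf = frac_burned.shape[0] / np.maximum(fire_years, 1)   # capped at the period length
    return np.stack([frac_burned.mean(axis=0),
                     carbon_consumed.mean(axis=0),
                     mtbf], axis=-1)                          # shape (rows, cols, 3)

# stand-in arrays with the MC2 grid dimensions (169 x 331 cells, 50 annual time steps)
rng = np.random.default_rng(0)
frac_burned = rng.random((50, 169, 331)) * 0.02
carbon = rng.random((50, 169, 331)) * 100.0

features = summarize(frac_burned, carbon)
flat = features.reshape(-1, features.shape[-1])
valid = ~np.isnan(flat).any(axis=1)                           # water/barren cells masked as NaN

labels = np.full(flat.shape[0], -1)
labels[valid] = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(flat[valid]))
cluster_map = labels.reshape(features.shape[:2])              # cluster IDs mapped back to the grid
print(np.unique(cluster_map, return_counts=True))
```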

Using two or more of the resulting clustered datasets, I will explore the differences among clusters within each dataset and between datasets (likely using the Euclidean distance between clusters).

I will map clustering results back onto the landscape in order to explore spatial patterns within each dataset and differences in spatial patterns between datasets. I will also compare the spatial pattern of clustering results to the spatial extents of EPA Level III ecoregions to see how well or poorly they align.

If time permits, I will do further analyses to characterize the relationship between vegetation type distribution, climate factors, and fire regime clusters.

Expected outcomes

I expect that cells with the same statistical cluster will be concentrated geographically, that for historical data, these concentrations will align closely with EPA Level III ecoregions, that cluster characteristics will be different between time periods, and that geographical groupings of clusters will shift generally northward and towards higher elevation somewhat between historical and future time periods.

From previous runs of the MC2 and preliminary observations of results from the runs for this project, I know that dominant vegetation type shifts from conifer to mixed forests west of the crest of the Cascade Mountains. Within this region, I expect a large shift in fire regime, with carbon consumed falling and mean time between fires decreasing over much of this region. In other regions, I expect general decreases in the mean time between fires due to warmer temperatures and locally drier summers. I also expect carbon consumed to generally remain constant or locally increase due to more favorable growing conditions.

Importance to science and resource management

Studies using DGVMs commonly produce map and graphic results showing extent and intensity of change over uni- or bidimensional spatiotemporal domains. This approach will provide more quantifiable differences using a multidimensional analysis. The ability to characterize fire regimes this way will allow for better model parameterization and validation, which in turn may lead to greater confidence in model projections.

Model results will provide projected changes across an ecologically and economically important region. Results will help resource managers understand and plan for potential change.

Level of experience

  • Arc: Medium, a little rusty
  • ModelBuilder and Python: Expert, especially with Python.
  • R: Medium, a little rusty

References

The Beginner’s Guide to Representative Concentration Pathways: http://www.skepticalscience.com/rcp.php

Bachelet, D., Ferschweiler, K., Sheehan, T.J., Sleeter, B., Zhu, Z., 2015. Projected carbon stocks in the conterminous US with land use and variable fire regimes. Global Change Biol., http://dx.doi.org/10.1111/gcb.13048

Daly, C., Halbleib, M., Smith, J.I., Gibson, W.P., Doggett, M.K., Taylor, G.H., et al., 2008. Physiographically sensitive mapping of climatological temperature and precipitation across the conterminous United States. Int. J. Climatol. 28 (15), 2031–2064, http://dx.doi.org/10.1002/joc.1688.

Question:

Foundational to the effectiveness of invasive species management in agricultural and native landscapes are the question of spatial extent and the ability to quantify the impacts of invasive species on agronomic efforts and native vegetation. Invasive plants represent a threat to both agricultural and native landscapes in the form of reduced ecosystem function, increased resource consumption, and reduced yields from agricultural systems. In spite of significant efforts to control and reduce impacts from invasive species, invasive weeds cause an estimated loss of $2 billion annually in the US. Currently, interspecific competition is one of the major limitations on oilseed and grain production in dryland cropping systems of the Pacific Northwest (PNW). Opportunities for the precision monitoring of managed and native ecosystems have become available through the use of low-altitude remote sensing systems and high-resolution satellite systems. However, methods for resolving species-level classification in high-resolution multispectral remote sensing remain lacking. This is partially due to the relative novelty of these systems, but is also related to the lack of suitable reference data at appropriate spatial and temporal scales for regionally based models. The broad research question I’m asking is: how does spectral trajectory relate to weed density, and can this information be used to distinguish the spatial extent of weeds in dryland cropping systems? My prediction is that by increasing the spatial and temporal resolution of these data, crop and non-crop species will be distinguishable based on their relative rate of change in greenness.

The objectives I have for this class are to 1) determine the spatial and temporal resolutions at which weed species are distinguishable from crop species using a spectral trajectory technique, and 2) compare these methods with ground reference data in a dryland cropping systems study. The major outcome of this work would be a method for distinguishing weed species from crop species in a dryland environment, and the identification of the minimum temporal resolution for distinguishing species in multispectral imagery.

Dataset:

The dataset I have for addressing these questions is a composite of images from 7 flights taken with a multispectral camera over a cropping systems study in Eastern Oregon. Flights were conducted in conjunction with visual estimates of weed density in semi-permanent monitoring frames installed in the cropping systems study. The images are currently at a low level of processing. One of my goals as part of this project will be to orthomosaic the images so that I can perform a time series analysis across image collection dates. The temporal resolution is 3-20 days between flights, while the spatial resolution is 3 cm. The images cover the entire spatial extent of the experiment.

Hypothesis:

My hypothesis for this experiment is that, after generating an orthomosaic, I will be able to distinguish the quantity of weed species from the quantity of crop species in a frame based on the spectral trajectory of individual pixels in that frame. The question I hope to answer is how distant in time sample dates have to be before weed species can be distinguished from crop species based on their spectral trajectory.
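The per-pixel trajectory calculation might look like the sketch below, assuming the flights have already been orthomosaicked and converted to co-registered NDVI rasters; file names and flight dates are placeholders.

```python
# A minimal sketch of the per-pixel spectral trajectory idea: stack co-registered NDVI
# rasters from the seven flights and fit a per-pixel rate of change in greenness.
# File names, flight dates, and the pre-computed NDVI stack are assumptions; real data
# would need the orthomosaicking step first.
import numpy as np
import rasterio

paths = [f"ndvi_flight_{i}.tif" for i in range(1, 8)]       # hypothetical co-registered rasters
days = np.array([0, 5, 12, 20, 31, 45, 60], dtype=float)    # days since first flight (placeholder)

def read_band(path):
    with rasterio.open(path) as src:
        return src.read(1).astype("float32")

stack = np.stack([read_band(p) for p in paths])              # shape (7, rows, cols)
n_dates, n_rows, n_cols = stack.shape

# least-squares slope of NDVI vs. time for every pixel at once
flat = stack.reshape(n_dates, -1)
slope, intercept = np.polyfit(days, flat, deg=1)              # slope: NDVI units per day
trajectory = slope.reshape(n_rows, n_cols)

# pixels whose greenness changes fastest vs. the crop's typical rate of change
print("median rate of change:", np.nanmedian(trajectory))
```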

Approach:

I plan on using a trial version of Agisoft to orthomosaic my images. It may be possible to conduct this analysis without orthomosaicking the images; however, an orthomosaic will have a number of advantages over non-mosaicked images. Without an orthomosaic, every individual analysis will have to be hard coded, whereas with an orthomosaic, I can automate much of the processing of these data.

Outcome:

The goal of this analysis will be to identify the minimum time required to discern species, and to describe the statistical relationship between species abundance based on spectral trajectory and ground reference data.

Significance:

While there has been a significant surge of interest in UAVs and low-altitude remote sensing, the actual number of useful products that let land managers make decisions based on these data is very small. This work would identify the minimum temporal resolution a resource manager would need before weed species can be identified based on their spectral characteristics.

Preparation:

I would say I have minimal experience with ArcInfo, ModelBuilder, and Python. I have moderate to high-level experience in R. I would also consider myself to have moderate to high levels of experience in image processing and image analysis.

 

1. Description of Research Question

Disease transmission is intrinsically tied to space use and behavior: Individuals are exposed to pathogens based on where and with whom they spend their time. I will explore how different spatial personalities may affect individual disease risk and herd disease dynamics in a social species. For this project, I will specifically examine individual realized aggregation (IRA), or the degree to which different individuals in my study system aggregate with others, and will relate IRA to risk of exposure to directly transmitted diseases. To explore this question, I will make use of a unique study system of GPS-collared semi-wild African Buffalo (Syncerus caffer) located in Kruger National Park (KNP), South Africa.


 

2. Description of Dataset

The dataset I will be analyzing consists of approximately eight months of GPS readings from each of the 70 individual buffalo in the herd, collected at ~30 minute intervals. Accuracy tests have yet to be performed, but GPS collars should have at least 5-10 meter accuracy range. The map below shows the 900 hectare enclosure which serves as the study area.

[Map: the 900-hectare enclosure that serves as the study area]

For a project in a previous class, I created tracking animations using a subset of data from a single individual during a 24-hour period. Output from the tracking tool is shown below. This output shows that we can distinguish between periods of high movement (i.e. tracks are far apart) versus low movement (tracks close together) for each individual.

[Figure: output from the tracking tool for a single individual over a 24-hour period]

3. Hypotheses

I hypothesize that individuals will have different spatial behavioral personalities, demonstrated by the maintenance of relatively stable differences in IRA. This hypothesis is based on previous field observations, suggesting that individuals maintain stable herd positions over time. I further hypothesize that individuals with high IRA will be exposed to more directly transmitted diseases than those with low IRA.

4. Approaches

I expect that the approaches I take will evolve throughout the course of this project, but currently my plan is as follows:

My first step will be putting the buffalo movement data in the correct format. Currently, I have separate text files of GPS readings for each individual buffalo over each capture period. I will need to combine all individuals into a single spreadsheet for each capture period in order to look at the relative positions of individuals within the herd. I will then sample time points from across the capture period (controlling for weather conditions and time of day) and generate 5, 10, 15, 20, and 25 meter buffers around each buffalo. I will use the buffer zones to calculate the number of individuals within each radius and determine the degree of IRA for each individual. I will then compare IRA to disease exposure and infection data collected as part of a larger Foot-and-Mouth Disease Virus study to determine whether there is a relationship between exposure/infection and IRA. Because this is such an extensive dataset, I hope to be able to automate the process of generating buffers around each buffalo at each time point using programming in ArcGIS.
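The neighbor-counting step could also be done outside ArcGIS; the sketch below uses a k-d tree to count how many other buffalo fall within each radius of each animal at each time point, assuming a combined table of projected coordinates (file and column names are placeholders).

```python
# A sketch of the neighbor-counting step using a k-d tree instead of ArcGIS buffers
# (same idea: how many other buffalo fall within each radius of each animal at a fix).
# Assumes a combined table of projected (meter) coordinates; column names are assumptions.
import numpy as np
import pandas as pd
from scipy.spatial import cKDTree

fixes = pd.read_csv("herd_positions.csv", parse_dates=["timestamp"])   # hypothetical
radii = [5, 10, 15, 20, 25]                                            # meters
rows = []

for ts, snap in fixes.groupby("timestamp"):
    xy = snap[["x", "y"]].to_numpy()
    tree = cKDTree(xy)
    for r in radii:
        # number of neighbors within r meters, excluding the animal itself
        counts = np.array([len(tree.query_ball_point(pt, r)) - 1 for pt in xy])
        rows.append(pd.DataFrame({"timestamp": ts,
                                  "buffalo_id": snap["buffalo_id"].values,
                                  "radius_m": r,
                                  "n_neighbors": counts}))

ira = (pd.concat(rows)
         .groupby(["buffalo_id", "radius_m"])["n_neighbors"].mean()
         .rename("mean_neighbors"))
print(ira.head())
```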

5. Expected Outcome

I want to statistically evaluate IRA for each individual buffalo and produce graphs of average number of neighboring individuals per radius size for each individual. I also hope to statistically evaluate relationships between directly transmitted disease exposure and IRA.

6. Significance

Understanding disease dynamics in social mammals is of fundamental importance in the current context of accelerated infectious disease emergence. Owing to a uniquely tractable study system, this work will be the first to categorize individual variation in spatial behavior and link it to disease risks and transmission dynamics.

This work has implications for predicting and managing animal and human diseases. If key individuals for disease transmission can be identified based on spatial-behavioral traits, efficacy and efficiency of disease control could be optimized via targeted interventions.

7. Level of Preparation

I have moderate experience in GIS and statistical analysis in R. I have completed ST 511 and 512 (Methods of Data Analysis), and for a current side project I am using R to analyze blood chemistry parameters using linear mixed models. I have taken two GIS courses: Geo 565 (Intro GIS) and Geo 580 (Advanced GIS) and have used subsets of my data for projects in both of those courses. However, I am definitely not an expert in either GIS or R and will need help navigating both programs.

 

  1. Description of Research Question:

The Oregon Coast Range routinely plays host to disastrous landslides. The primary reason is that the range combines high annual precipitation with the presence of weak marine sediments (Olsen et al. 2015). During winter storms, it is not uncommon for major transportation corridors to become inoperable, impacting local economies and the livelihoods of residents (The Oregonian 2015a, 2015b). Overall, landslides in Oregon cost an average of $10 million annually, with losses from particularly severe storms having cost more than $100 million (Burns and Madin 2009).

While these rainfall-induced landslides may sometimes be large, deep-seated failures, they most frequently occur in the form of shallow translational failures. These shallow landslides typically occur in the upper few meters of the soil profile, and may result in heavy damage to forest access roads or the temporary closure of major roads.

Recently, I developed a limit equilibrium slope stability model for use in mapping shallow landslides during rainfall. In its current form, the model is a deterministic equation that computes a factor of safety against failure for each cell of a digital elevation model (DEM). The problem with this approach is that it fails to account for spatial and temporal variation in the input parameters, and it only considers a single DEM resolution. My research question is to explore how the incorporation of a probabilistic framework, which expresses the confidence in each input, and of multiple scales of application influences the predictive power of the model.
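The kind of probabilistic wrapper I have in mind is sketched below using the textbook infinite-slope factor-of-safety equation rather than my actual model; uncertain inputs are sampled, a factor of safety is computed for each draw, and the fraction of draws with FS < 1 gives a probability of failure for the cell. All distributions and values are placeholders.

```python
# A sketch of the probabilistic framing (illustrative only, not my exact model): draw
# uncertain soil parameters, compute the infinite-slope factor of safety for a cell many
# times, and report the probability of failure (FS < 1). Values below are placeholders.
import numpy as np

rng = np.random.default_rng(1)

def infinite_slope_fs(c, phi_deg, gamma, depth, slope_deg, m):
    """Infinite-slope FS with saturation ratio m (0 = dry, 1 = fully saturated).
    c [kPa], gamma [kN/m^3], depth [m], angles in degrees."""
    gamma_w = 9.81
    beta = np.radians(slope_deg)
    phi = np.radians(phi_deg)
    normal = (gamma - gamma_w * m) * depth * np.cos(beta) ** 2   # effective normal stress
    resisting = c + normal * np.tan(phi)
    driving = gamma * depth * np.sin(beta) * np.cos(beta)
    return resisting / driving

n = 10_000
fs = infinite_slope_fs(
    c=rng.normal(4.0, 1.0, n),          # effective cohesion, kPa
    phi_deg=rng.normal(33.0, 3.0, n),   # friction angle, degrees
    gamma=rng.normal(18.0, 1.0, n),     # unit weight, kN/m^3
    depth=1.5,                          # failure depth, m (from the DEM-based model)
    slope_deg=35.0,                     # cell slope from the lidar DEM
    m=rng.uniform(0.3, 1.0, n),         # saturation from rainfall/monitoring data
)
print("Probability of failure:", np.mean(fs < 1.0))
```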

  2. Datasets:

The dataset analyzed for this project consists of three parts:

  1. Data from Smith et al. (2013), who performed hydrologic monitoring of a clear-cut hillslope in the Elliot State Forest of southwestern Oregon. Monitoring was performed over a three-year period, with measurements of rainfall, volumetric water content, and negative pore water pressure taken at hourly increments. Volumetric water content and negative pore water pressure were measured in eight separate soil pits, with each pit being instrumented at three depths between 0 and 3.0 meters.
  2. Lidar derived DEM from the Oregon Lidar Consortium for the Elk Peak quadrangle in southwestern Oregon.
  3. The Statewide Landslide Information Database for Oregon (SLIDO) corresponding to the Elk Peak quadrangle.
  3. Hypotheses:

The existing model, despite being insufficient to meet the goals of this project, has provided valuable insight into the influence of rainfall on slope instability. As in other slope stability methods, topography and soil strength are expected to account for most of the stability. These two factors combined are expected to bring soils to a critical state, but not a state of failure; the addition of rainfall will then determine whether slopes fail or not. This approach should be most interesting when using the model to forecast landslide hazards based on predicted weather.

  4. Approaches:

I am not clear on exactly what types of analyses need to be undertaken to further my project. My hope is that the advice from peers and assignments associated with this course will help me choose the necessary steps, given my set of goals. I anticipate that most work will be performed in either ArcGIS or Matlab.

  5. Expected outcome:

This project is expected to produce a statistical model that estimates the probability of failure for a given set of conditions. The model is intended for use in mapping applications, and the primary outcome will be rainfall-induced landslide hazard maps for the Elk Peak quadrangle.

  6. Significance:

Accurate hazard maps allow land managers and homeowners to better understand the risk posed by landslides. This method is expected to go a step forward by using rainfall predictions to produce pre-storm maps, which will provide hazard maps specific to a severe rainfall event. Maps of this nature would be especially important because they would allow agencies like the Oregon Department of Transportation to know where resources might be needed before any damage has actually occurred.

  7. Your level of preparation:
    1. I have extensive experience with ArcGIS and model builder from coursework and research during my master’s degree. I have also served as a TA for the OSU CE 202 course (a civil engineering course on GIS), which gave me greater abilities in troubleshooting ArcGIS and working with Modelbuilder.
    2. My experience with GIS programming in Python is moderate, mainly the result of taking GEO 578.
    3. I have no experience with R.

References

Burns, W.J., and Madin, I.P. (2009). “Protocol for Inventory Mapping of Landslide Deposits from Light Detection and Ranging (LIDAR) Imagery.” Oregon Department of Geology and Mineral Industries, Special Paper 42.

Olsen, M.J., Ashford, S.A., Mahlingam, R., Sharifi-Mood, M., O’Banion, M., and Gillins, D.T. (2015). “Impacts of Potential Seismic Landslides on Lifeline Corridors.” Oregon Department of Transportation, Report No. FHWA-OR-RD-15-06.

Smith, J.B., Godt, J.W., Baum, R.L., Coe, J.A., Burns, W.J., Lu, N., Morse, M.M., Sener-Kaya, B., and Kaya, M. (2013). “Hydrologic Monitoring of a Landslide-Prone Hillslope in the Elliot State Forest, Southern Coast Range, Oregon, 2009-2012.” United States Geological Survey, Open File Report 2013-1283.

The Oregonian (2015a). “U.S. 30 closes and reopens in various locations due to landslides, high water.” December 17, 2015. <http://www.oregonlive.com/portland/index.ssf/2015/12/high_water_closes_one_us_30_ea.html>

The Oregonian (2015b). “Landslide buckles Oregon 42, closing it indefinitely,” December 25, 2015. <http://www.oregonlive.com/pacific-northwest-news/index.ssf/2015/12/landslide_buckles_oregon_42_cl.html>