How is the occurrence of landslides in western Oregon related to the rate and timing of rainfall? The Northwest River Forecast Center (NWRFC) archives 6-hour rainfall accumulations from more than 2000 recording stations throughout the Pacific Northwest (Figure 1). The challenge is that landslides rarely occur in the immediate proximity of these stations, so spatial interpolation must be performed to estimate rainfall amounts at landslide locations.
Figure 1: Location of rainfall recording stations provided by the NWRFC.
The Tool and its Implementation
Kriging was selected over inverse distance weighting to better account for variations in rainfall from east to west. The kriging was performed in Matlab, which allowed better control over the inputs and simplified the task of kriging every 6-hour measurement for December (124 runs).
Kriging works by weighting measured values according to a semivariogram model. Several semivariogram models were examined to identify which best fit the dataset (Figure 2).
Figure 2: Examples of semivariogram fits to NWRFC rainfall data.
Based on Figure 2, the exponential semivariogram appeared to be the best choice. Another adjustable input is the search radius (i.e., how far the model looks for data points). This value was also varied to illustrate the effect of a poorly chosen search radius (Figure 3).
Figure 3: Examples of varying the search radius (lag distance). Search radii from left to right: 1 degree, 5 degrees, 0.2 degrees.
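For reference, the core computation behind each kriged surface, solving for station weights from the semivariogram, can be sketched in a few lines. This is a minimal numpy illustration of ordinary kriging with an exponential model, not the actual Matlab implementation; the station coordinates and variogram parameters below are hypothetical.

```python
import numpy as np

def exp_semivariogram(h, nugget=0.0, sill=1.0, rng=1.0):
    """Exponential model: rises from the nugget and approaches the sill."""
    return nugget + sill * (1.0 - np.exp(-3.0 * h / rng))

def ordinary_kriging(stations, values, target, **vgm):
    """Estimate a value at `target` from station measurements."""
    n = len(stations)
    d = np.linalg.norm(stations[:, None, :] - stations[None, :, :], axis=-1)
    # Kriging system: semivariances between stations, plus a Lagrange
    # multiplier row/column enforcing that the weights sum to one.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_semivariogram(d, **vgm)
    A[-1, -1] = 0.0
    b = np.ones(n + 1)
    b[:n] = exp_semivariogram(np.linalg.norm(stations - target, axis=1), **vgm)
    w = np.linalg.solve(A, b)[:n]   # station weights
    return float(w @ values), w
```

Each of the 124 surfaces would then come from evaluating this estimate over a grid for one 6-hour interval.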
Once the 124 surfaces were produced, values were extracted at the locations of major landslides from this past winter. The extracted values were later used to produce rainfall-versus-time plots, which are described in the next section.
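Extracting a surface value at a landslide location amounts to indexing the kriged grid at the nearest cell. A minimal sketch (the grid origin, cell size, and coordinates here are hypothetical):

```python
import numpy as np

def extract_at_point(grid, x0, y0, dx, dy, px, py):
    """Nearest-cell lookup of a kriged surface at a point (px, py).

    grid   -- 2D array of kriged values, row 0 along the y0 edge
    x0, y0 -- coordinates of the grid origin (lower-left cell center)
    dx, dy -- cell size in each direction
    """
    col = int(round((px - x0) / dx))
    row = int(round((py - y0) / dy))
    return float(grid[row, col])
```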
Results
To simplify results for this project, only one landslide is shown. The Highway 42 landslide occurred on December 23, 2015, closing the highway for nearly one month and costing the Oregon Department of Transportation an estimated $5 million to repair. Rainfall versus time profiles were produced for three popular semivariograms (spherical, exponential, and Gaussian) to gauge the influence of choosing one method over another (Figure 4).
Figure 4: Comparison of results obtained from the different semivariograms and PRISM.
Figure 4 shows little effect due to changing the semivariogram model, which is likely a result of having limited variability in rainfall measurements and the distribution of recording stations near the location of the Highway 42 landslide.
To verify the results of this exercise, PRISM daily rainfall estimates were downloaded for the corresponding time period and compared (Figure 4). This comparison shows that, while the PRISM data do not capture spikes in rainfall amount, the overall accumulation of rainfall appears similar, implying that kriging was effective for this application.
The Statewide Landslide Information Database for Oregon (SLIDO, Figure 1) is a GIS compilation of point data representing past landslides. Each point is associated with a number of attributes, including repair cost, dimensions, and date of occurrence. For this exercise, I asked whether SLIDO could be subjected to a hot spot analysis and, if so, whether the results would be insightful.
Figure 1: SLIDO (version 3.2).
The Tool
Hot spot analysis is a tool in ArcGIS that spatially identifies areas of high and low concentrations of an input weighting parameter. The required input is either a vector layer with a numerical attribute that can serve as the weighting parameter, or a vector layer whose features indicate an equal weight of occurrence. The outputs are a z-score, a p-value, and a confidence-level bin for each feature in the input layer.
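Under the hood, the ArcGIS tool computes the Getis-Ord Gi* statistic for each feature. A simplified numpy sketch with a fixed-distance neighborhood (binary weights, synthetic coordinates) shows where the z-scores come from:

```python
import numpy as np

def gi_star(coords, x, radius):
    """Simplified Getis-Ord Gi* z-scores with a fixed-distance band.

    Each feature's neighborhood includes itself (the * variant).
    """
    n = len(x)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    w = (d <= radius).astype(float)          # binary weights, self included
    xbar = x.mean()
    s = np.sqrt((x ** 2).mean() - xbar ** 2)
    sw = w.sum(axis=1)
    num = w @ x - xbar * sw                  # local sum vs. expected sum
    den = s * np.sqrt((n * (w ** 2).sum(axis=1) - sw ** 2) / (n - 1))
    return num / den
```

Large positive z-scores mark hot spots (clusters of high values) and large negative z-scores mark cold spots.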
Description of Exercise
Performing the hot spot analysis involved more than simply running the tool with SLIDO as the input and a weighting field selected. Selecting an input field was easier said than done, as the SLIDO attribute table is only partially complete. Based on a survey of the fields in the SLIDO attribute table, it was clear that repair cost was the best choice. All points having a repair cost were then exported to a new layer, which served as the input to the hot spot analysis. Note that this step greatly reduced the number of points and their scatter, so the output map looks slightly different from Figure 1.
Outputs
The output of this exercise is a comparison of SLIDO points colored by their repair cost with SLIDO points colored by confidence level bin (Figure 2).
Figure 2: Comparison of coloring by hot spots to simply coloring by cost.
Discussion
The second map in Figure 2 shows the presence of a major hot spot and a major cold spot regarding landslide costs in Oregon. The figure shows that, on average, landslides in the northwest corner of the state are more expensive. This observation can only be made because there appears to be a similar density of points, located at similar distances away from their neighbors, across the entire network of points. The figure also shows that single high-cost landslides do not play a major role in the positioning of hot spots, which is a nice advantage of the hot spot analysis.
In general, I think that the hot spot analysis did a good job illustrating a trend that may not have been obvious in the original data.
Bonus Investigation
The hot spot documentation states that the analysis is not well suited to small datasets. An example of performing a hot spot analysis on a small dataset is provided in Figure 3. While there may be a trend in the points colored by normalized infiltration rate, the hot spot map shows no significant points.
So in class I have been talking about digitizing maps of archaeological sites in the Central Valleys of Oaxaca in order to examine changes in the distribution of sites within the region over time. April Fools! I spoke to my adviser and she thought that this project might be too much to take on this term. Instead, she suggested that I continue developing a project that I did for the remote sensing class last term that involved a spectral classification of alluvial sediments in one of Oaxaca’s Central Valleys. Before I describe that project, it may be necessary to provide some background and justification.
Over the past several years, my lab has been working with archaeologists from the US and Mexico on a large collaborative research project focused on assessing changes in political and economic integration in the Central Valleys of Oaxaca, Mexico, the core cultural region of the Zapotec civilization, one of Mesoamerica’s first and most enduring complex societies. To do this, our lab takes samples of ceramics provided by each of our collaborators and analyzes them via Instrumental Neutron Activation Analysis (INAA) to determine their geochemical composition for a suite of 30 elements. We then statistically compare the compositional data from the ceramics to similar data obtained for over 300 clay samples that we have collected from across the region to identify areas where ceramic groups may have been produced. Using these data, we can identify locally produced wares for each site in our database, as well as the sources of wares that were imported to each site. This allows us to model the structure of regional economic networks for different periods of interest and examine changes in regional economic integration over time.
One of the fundamental advantages of this approach has been our comparative database of geochemical information for natural clays from the region. But while the number of samples in this database is fairly high, sampling was largely conducted on an opportunistic basis from exposures of clay in road-cuts, streambanks, and agricultural fields, leading to uneven sample coverage across the study area. To estimate the geochemical composition of clays in areas between sampling locations, we generated a simple interpolated model of regional clay chemistry that covers the entire Central Valley System at a spatial resolution of one kilometer.
While our interpolated model of regional clay chemistry allows us to identify potential ceramic production areas between our clay sampling locations, it has a couple of limitations. First, the model’s low spatial resolution glosses over finer-scale differences in clay chemistry that can be readily observed in the original data. Second, and more importantly, the model does not account for the way that sediment actually moves through the region’s alluvial system.
The trace-element geochemistry of natural clay is largely determined by parent material. The Central Valleys of Oaxaca are flanked by a series of geologically complex mountain ranges that variously contribute to residual and alluvial sediments across the study area, resulting in discrete differences in observed clay chemistry from one sampling location to the next. When we model the clay chemistry for locations between sampling points using simple interpolation methods, we ignore crucial factors such as parent material and the directionality of sediment transport from one area to the next.
To facilitate the development of a refined model of regional clay chemistry, last term I used multispectral ASTER data from NASA’s EOS satellite to develop a spectral classification of alluvial sediments in the Tlacolula Valley, one of the three main branches of Oaxaca’s Central Valley System (see figure below). While this project allowed us to clearly visualize patterns in the valley’s sediment routing system, we have not yet compared the remote sensing data to the geochemical data for each of our sampling locations to assess its utility in developing a refined model of regional clay chemistry.
Research Objective
This term, I will build upon my previous spectral classification of alluvial sediments in the Tlacolula Valley to assess whether remotely sensed spectral reflectance data may be used to more accurately model clay chemistry within the Tlacolula Valley. ASTER data contains 14 bands of spectral measurements. Some of these are useful for identifying differences in surface geology, while others are better for identifying vegetation cover and urban areas. Whether any of these bands (or combinations of bands) correlate with regional clay chemistry is an open question. The vast majority of our clay samples were collected not from the surface, but from B horizons in exposed soil profiles. Nevertheless, insofar as the surface of most soil profiles in this area is likely to be derived from similar sediment sources as its subsurface components, it may be possible to correlate spectral surface reflectance with our regional clay composition data. If so, we may be able to use the ASTER data to generate a new, higher resolution model of Tlacolula Valley clay chemistry.
Dataset
This study will rely on data collected by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER). This instrument collects data over 14 spectral regions using three subsystems: the Visible and Near Infrared (VNIR), the Shortwave Infrared (SWIR), and the Thermal Infrared (TIR). The VNIR subsystem collects stereoscopic data over three spectral regions in the visible and near-infrared spectrum at a spatial resolution of 15 m. The SWIR subsystem collects data for six spectral regions in the near infrared at a spatial resolution of 30 m using a single nadir-pointing detector. Finally, the TIR subsystem collects data over five spectral regions at a spatial resolution of 90 m, also using a single nadir-pointing detector.
More specifically, this study will rely on a single tile of ASTER Level 1B Precision Terrain Corrected Registered At-Sensor Radiance (AST_L1B) data collected over the Tlacolula Valley in January 2001, a period chosen for its low cloud cover and cleared fields. ASTER Level 1B data is available as a multi-band file containing calibrated at-sensor radiance that has been geometrically corrected, rotated to a north-up UTM orientation, and terrain corrected using an ASTER DEM. This will be clipped to an area encompassing the valley floor and adjacent piedmont; mountainous areas outside the study area will be excluded from analysis.
Hypotheses
This project will be more of an exploratory exercise in methods development than a hypothesis test.
To determine whether spectral measurements from the ASTER data can be correlated with Tlacolula clay chemistry, I will use the geographic locations of our clay samples from the Tlacolula Valley to extract spectral profiles corresponding to each sample location. These will then be correlated against individual elements from our geochemical data using a series of stepwise multivariate regression analyses in R or another statistical software package. If fairly strong correlations between spectral measurements and geochemical data are found, a refined spatial model of Tlacolula clay chemistry will be generated using these regression formulas.
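The correlation step might look like the following sketch, with synthetic numbers standing in for real band values and element concentrations, and the stepwise selection simplified to a single multiple regression:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 50 clay samples, 14 ASTER band values each, and
# one element concentration driven mostly by two of the bands.
bands = rng.normal(size=(50, 14))
element = 3.0 * bands[:, 2] - 1.5 * bands[:, 7] + 0.1 * rng.normal(size=50)

def fit_element(X, y):
    """Ordinary least squares of an element concentration on band values."""
    A = np.column_stack([np.ones(len(X)), X])      # intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = A @ coef
    r2 = 1.0 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return coef, r2
```

A strong R² for an element would justify mapping that element's regression formula over the full clipped scene.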
The project that I conducted last term showed that sediments in upland, piedmont areas of the Tlacolula Valley could be easily classified according to their source lithology; misclassification largely occurred only within the Rio Salado floodplain where sediments become more mixed. In our current interpolated model of regional clay chemistry, elemental estimates between sampling locations are always modeled as intermediate, without respect to parent material or topographic position. If successful, this project will yield element estimate maps that more closely reflect patterns of sediment transport seen in the ASTER imagery.
Expected outcome
If our existing model of regional clay chemistry correlates as well against the ASTER data as the geochemical data from our actual sampling locations, development of a revised model may be unnecessary. If however the original clay data correlates substantially better with the ASTER measurements, a new multi-element model of Tlacolula clay chemistry will be generated using the ASTER data.
That said, there is a very strong chance that the ASTER data will only correlate with a few elements, if any. If this is the case, we will explore other options for generating a revised spatial model of Oaxaca clay chemistry.
Significance
If successful, this project will represent a significant advance in methodology for mapping the elemental composition of alluvial sediments regionally. This has some utility in archaeology for identifying potential sources of clay used to make ancient ceramics, but it may also prove useful for soil scientists, geologists, and other researchers concerned with how the admixture of alluvial sediments may contribute to variability in sediment chemistry at a regional scale.
My level of preparation
I have been using ArcGIS for nearly ten years now, so I am thoroughly prepared for this project in that regard. I also have a very strong background in multivariate statistics, though I haven’t used R in some years. I was only introduced to the image processing software ENVI last term during my remote sensing class, but am confident that I have the skills required to complete this project.
Project Abstract (Taken from a recent conference poster):
The Willamette Valley during the Terminal Pleistocene was an environment in constant flux, creating a changing world for the early inhabitants of the Pacific Northwest. The valley floor contains an extensive record of Pleistocene ecology and archaeology; however, the information is locked within a complex stratigraphic sequence. Using a Geoprobe direct push coring rig, 13 sediment cores were extracted from surficial deposits in the Mill Creek watershed at Woodburn High School. The core samples were analyzed on Oregon State University’s Itrax core scanner, returning high-resolution optical imagery, radiograph images, and x-ray fluorescence (XRF) data. The XRF data is used to construct a chemostratigraphic profile of the study area in order to define and model the distribution of sediments potentially related to late Pleistocene-aged archaeological sites.
Research Question, etc:
I am seeking to explore methods of constructing chemostratigraphic frameworks of sediments at both archaeological and non-archaeological sites. The method most typically used to define chemostratigraphy at archaeological sites is portable x-ray fluorescence of previously described stratigraphy, combined with multivariate statistics to separate the strata by chemistry. Using an Itrax core scanning machine, sediment cores extracted from a drainage at Woodburn High School were scanned, and continuous high-resolution x-ray fluorescence (XRF) data were acquired. Using wavelet analysis, I hope to define the site stratigraphy and use it to construct 2D and 3D representations of the subsurface landscape.
Project Dataset:
The dataset consists of XRF data taken at 2 mm intervals from 65 1.5-meter core samples. These cores come from 14 different boreholes covering the majority of the defined study area, which is approximately 200 × 50 meters. The data are organized into 14 CSV files containing the XRF results.
Hypothesis:
Through preliminary testing I have seen potential for this method to successfully identify stratigraphy. If the results of the preliminary test translate across all 14 boreholes, constructing landscape-wide stratigraphic profiles from the borehole samples and wavelet analysis is very likely feasible.
Approaches:
Throughout the term, and through the process of conducting analysis of the Woodburn sediments, I hope to learn how to better utilize and interpret wavelet analysis data, as well as digitally construct 2D and 3D stratigraphic profiles using interpolation methods.
Breaks in stratigraphy can show clearly as changes in color or texture, and multivariate techniques have been very useful for identifying them. This approach has proven useful for confirming the chemostratigraphy of a site when the XRF measurements have attributed strata. Wavelet analysis lets the user see possible changes in geochemistry, opening the possibility of identifying geochemical breaks in strata from borehole data that lack established stratigraphic names and boundaries.
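As a toy illustration of the idea (not the actual Itrax workflow), a single level of the Haar wavelet transform already flags abrupt geochemical shifts: the detail coefficients sit near zero within a homogeneous stratum and spike at a break. The XRF profile below is synthetic.

```python
import numpy as np

def haar_detail(signal):
    """One level of the Haar wavelet transform.

    Detail coefficients are large wherever adjacent values differ
    sharply -- i.e., at candidate stratigraphic breaks.
    """
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:                      # pad to even length
        s = np.append(s, s[-1])
    pairs = s.reshape(-1, 2)
    approx = pairs.sum(axis=1) / np.sqrt(2)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
    return approx, detail

# Synthetic XRF profile: two chemically distinct strata.
profile = np.concatenate([np.full(21, 5.0), np.full(19, 9.0)])
_, detail = haar_detail(profile)
break_pair = int(np.argmax(np.abs(detail)))   # pair of depths spanning the break
```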
To conduct the analysis, elements had to be selected for both univariate and bivariate analysis. There were a variety of ways I could have selected the data, but ultimately, in the test sample, I chose the elements with the most obvious changes. This allowed me to really understand how wavelet analysis works compared with a regular line graph. For the final analysis, I will look at similar completed work and select the best elements for bivariate analysis in XRF-based mineral studies.
Expected Outcome:
Visually, I would like to create a stratigraphic profile for each of the four transects at the site, as well as a 3D representation of the site using ArcScene. As for the data, I would like to create a type stratigraphy that archaeologists can reference in order to help find early archaeological sites in the Willamette Valley.
Significance:
The results of this project will hopefully help archaeologists understand the stratigraphy, and possibly the environmental conditions, in the Woodburn, Oregon area and perhaps the broader Willamette Valley. The sediments buried at this site could contain clues about which sedimentary deposits archaeological sites may be hidden in, or at least near.
My Level of Preparation:
I am pretty knowledgeable in ArcGIS and the rest of the Arc/ESRI suite of programs. My Python skills are average, with better skill in ArcPy. As for R, I have taken courses that deal with it and am steadily improving my skills.
What were the effects of the DWH oil spill and associated response activities (e.g., 200 million gallons of oil, 1.8 million gallons of dispersant, in situ burns, and hundreds of additional boats) on the foraging behavior of the approximately 1000 sperm whales residing in the Gulf of Mexico? Sperm whales are extremely efficient deep-diving marine predators, tending to feed on patches of prey that they locate with a mixture of clicks and creaks (Watwood et al. 2006). Dive profile records indicate that sperm whales in the Gulf of Mexico forage near 520 m depth in the water column (Watwood et al. 2006). Sperm whales feed along and about the 1000-m isobath in the region between Mississippi Canyon and De Soto Canyon (Jochens et al. 2008) (Figure 1). They may consume several thousand kilograms of prey a day (Best 1979), consisting of about 1000 individual prey items (Clarke 1997). Given their food consumption needs, their prey resources are likely a critical factor driving their distribution and foraging behavior in the Gulf of Mexico during the spill.
Datasets:
I have requested satellite tag data from the Oregon State University Marine Mammal Institute. Fortunately, OSU scientists tagged and tracked sperm whales before, during, and after the spill. However, there is a good chance I will need to change projects because I do not currently have the datasets in hand for analysis.
Hypotheses:
I hypothesize that the availability of sufficient prey resources required to meet the caloric needs of resident sperm whales outweighed the chaos created by the oil spill and response activities.
Approaches:
I am not sure of the best approach, but I hope to get some assistance from my peers. My goal is to measure any shift in foraging areas and to complete trend, hot spot, and cluster analyses of foraging areas before, during, and several years after the spill.
Expected Outcomes:
It would be ideal to detect patterns predictive of food web disturbance that ultimately predict the response of sperm whales to disturbance.
Level of preparation:
I have moderate experience with ArcMap and some experience with spatial analyses with ArcMap. I have an understanding of basic statistics and statistical software Minitab. I do not have any experience with R or Python.
References:
Best, P.B. 1979. Social organization in sperm whales, Physeter macrocephalus. Pp. 227-289 in Behavior of marine animals, Vol. 3, edited by H.E. Winn and B.L. Olla. Plenum, New York.
Clarke, M.R. 1997. Cephalopods in the stomach of a sperm whale stranded between the islands of Terschelling and Ameland, southern North Sea. Bulletin de l’Institut Royal des Sciences Naturelles de Belgique, Biologie 67(Suppl.): 53-55.
Jochens, A.E. and D.C. Biggs, editors. 2006. “Sperm whale seismic study in the Gulf of Mexico; Annual Report: Years 3 and 4.” OCS Study MMS 2006-067. 111 pp., Minerals Management Service, Gulf of Mexico OCS Region, U.S. Dept. of the Interior, New Orleans, LA.
Jochens, A., D. Biggs, K. Benoit-Bird, D. Engelhaupt, J. Gordon, C. Hu, N. Jaquet, M. Johnson, R. Leben, B. Mate, P. Miller, J. Ortega-Ortiz, A. Thode, P. Tyack, and B. W. 2008. “Sperm whale seismic study in the Gulf of Mexico: Synthesis report.” OCS Study MMS 2008-006. 341 pp., Minerals Management Service, Gulf of Mexico OCS Region, U.S. Dept. of the Interior, New Orleans, LA.
Watwood, S.L., P.J. Miller, M. Johnson, P.T. Madsen, and P.L. Tyack. 2006. Deep-diving foraging behaviour of sperm whales (Physeter macrocephalus). Journal of Animal Ecology 75(3):814-825.
A description of the research question that you are exploring. Global change is occurring through continuing variability in the climate, the ecological responses to climate drivers, and the socioeconomic and political responses. These changes alter the landscape in predictable and unforeseen ways, simultaneously modifying interactions between the landscape and its biological communities. Our reliance on natural resources such as fish highlights the coupled impacts of these changes between the human and natural systems. Exploring this coupled system this term, the spatial problem that I’ll address is how habitat suitability models for the giant gourami (Osphronemus goramy) in the Mekong Basin differ between models based on the physical landscape alone and those that incorporate human impacts. I will use the environmental indicators surface temperature, salinity, and turbidity to map the potential habitat for the giant gourami, with an additional layer informed by indicators of human impacts such as land use, population, and proximity to industry to evaluate the differences.
The giant gourami is an air-breathing fish native to this region, grown commercially as a food fish as well as for the aquarium market throughout SE Asia (Lefevre et al., 2014). The fish inhabits freshwater, brackish, benthopelagic environments in swamps, lakes, and rivers among vegetation, and is found in medium to large rivers and stagnant water bodies. People around the world rely on fish as a primary source of protein and income, and the growing aquaculture industry provides roughly half of the global fish supply (FAO, 2014). However, to meet the demands of a rapidly growing population (exceeding 7 billion by 2020), a rising middle class, and an increasingly urban population (65% by 2020), protein consumption is expected to increase to 45 kg per capita by 2020, a 25% increase from 1997, and the fish consumption rate is no outlier.
A description of the dataset.
Boundary data for the IUCN-defined habitat range for the giant gourami in the Mekong Basin (shown below). The status code for the species in this dataset is “Probably Extant.” However, this particular status code is listed as “discontinued for reasons of ambiguity.” It is therefore my hope that this analysis will provide insight into the IUCN-defined habitat and assess how it has changed through time, by assessing the parameters used to develop the IUCN data and evaluating additional landscape variables.
Hypotheses: I expect that the potential habitat for the giant gourami has increased over time, and with increased human impacts, due to the physiological resilience of the species. This fish inhabits regions characterized by fresh to brackish water and slow-moving areas like swamps, lakes, and large rivers. Given its unique ability to breathe air, this fish can survive in waters ranging from poorly oxygenated to anoxic. I expect that with climate change, increased urbanization, and the changing hydrologic profile of the system due to potential dams, this species may become better suited than others to these ‘poorer’ environmental conditions.
Approaches: I hope to use Python or ModelBuilder to iterate through the available datasets and assess the changing habitat based on a habitat suitability index for the giant gourami. There is also a time-series tool in Arc that I would like to explore.
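A habitat suitability index combining the layers could be iterated in Python along these lines. This is a sketch with made-up layer names and weights; each layer is assumed to be pre-scaled to [0, 1] suitability.

```python
import numpy as np

def suitability_index(layers, weights):
    """Weighted geometric mean of per-variable suitability rasters.

    layers  -- dict of 2D arrays scaled to [0, 1] (e.g., temperature,
               salinity, turbidity suitability)
    weights -- dict of relative weights, same keys
    """
    total = sum(weights.values())
    hsi = np.ones_like(next(iter(layers.values())), dtype=float)
    for name, layer in layers.items():
        hsi *= np.clip(layer, 0, 1) ** (weights[name] / total)
    return hsi
```

A geometric mean is one common HSI convention: a cell that is unsuitable in any single variable drags the combined index down sharply.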
Expected outcome: I hope to develop a habitat suitability index for the giant gourami and compare habitat suitability models for the potential habitat based on the changing physical landscape and increasing human impacts. If the data are available, I hope to create a simple time-series animation for each model.
Significance: Fish production from aquaculture is poised to absorb an increasing amount of this demand for meat, offering techniques that offset some of the environmental costs of production. Depending on the species and farming conditions, fish production can achieve some of the lowest feed-conversion ratios of any type of terrestrial animal meat production. If farmed responsibly, some species of the diverse group of air-breathing fish, such as the giant gourami, present an advantage in aquaculture for their unique ability to breathe air. However, it is critical to understand the impact of increased production levels on the natural range of the species in order to mitigate unwanted invasions or overloading of the natural environment. A study assessing the spatio-temporal patterns of habitat suitability for potential aquaculture species will allow managers to make informed decisions about aquaculture siting and resource allocation.
Your level of preparation: In terms of my experience with the tools available for this type of analysis, I am starting to develop my comfort with ArcInfo, ModelBuilder, and Python for GIS; however, I am no expert. I have also been exposed to some statistical applications of R, but again am not an expert.
FAO. (2014). The State of World Fisheries and Aquaculture 2014. Rome, Italy.
Lefevre, S., Wang, T., Jensen, a., Cong, N. V., Huong, D. T. T., Phuong, N. T., & Bayley, M. (2014). Air-breathing fishes in aquaculture. What can we learn from physiology? Journal of Fish Biology, 84, 705–731. doi:10.1111/jfb.12302
Soil microbial communities are extremely complex: each gram of soil contains about 10^9 microorganisms. Due to this complexity, studying these communities at the species level and understanding any meaningful relationships is difficult. Therefore, much current research looks at how the composition of the community is dictated by environmental parameters in the soil and at how the community shifts as these parameters change. However, before I can begin to examine these microbial communities, I need to explore the spatial distribution of these environmental factors. If I do find a clear relationship between significant environmental parameters and microbial composition, I want to explore different methods of interpolating this sampled relationship over larger, unsampled areas.
The Dataset
The dataset is a set of soil parameters from samples throughout the state of Oregon. These parameters only include edaphic factors such as pH and texture metrics; however, I also hope to include climate factors such as mean annual temperature and precipitation when exploring this relationship between environmental factors and microbial community distribution. Samples were collected across Oregon to try to encompass all the Common Resource Areas (CRAs). According to the NRCS (Natural Resources Conservation Service), a CRA is “a geographic area where resource concerns, problems, or treatment needs are similar.” The CRA geographic partitioning map has a scale of 1:250,000 and considers landscape factors, human use, climate, and other natural resource information.
Hypothesis
I hypothesize that these environmental parameters indeed exhibit spatial autocorrelation; however, due to the low sampling point density, it will be difficult to confidently model the distribution of these soil parameters. For this reason, it may be beneficial to see how conserved these environmental factors are within their respective CRAs. If the values of a given environmental variable are similar within a CRA, the CRA polygon may be a more robust unit of analysis than individual sampling points.
Approaches
First, I need to examine the spatial relationships across these environmental factors and see how strong the relationships are. I would also like to examine how well these environmental factors are conserved within and between CRAs.
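One simple way to quantify how conserved a variable is within CRAs is to compare within-CRA variance to total variance, in the spirit of an ANOVA decomposition. A sketch with toy values; the real inputs would be the sampled soil parameters tagged with their CRA:

```python
import numpy as np

def within_cra_fraction(values, cra_ids):
    """Fraction of total variance that lies within CRA polygons.

    Low values mean a variable is well conserved within CRAs, so the
    CRA may be a reasonable interpolation unit.
    """
    values, cra_ids = np.asarray(values, float), np.asarray(cra_ids)
    total_ss = ((values - values.mean()) ** 2).sum()
    within_ss = sum(
        ((values[cra_ids == c] - values[cra_ids == c].mean()) ** 2).sum()
        for c in np.unique(cra_ids)
    )
    return within_ss / total_ss
```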
Outcome
I will produce a map of environmental factors interpolated across Oregon using either point data or relationships between CRAs and their respective points.
Significance
Soil microbes provide several ecosystem services, such as nutrient cycling; however, the fundamental parameters dictating microbial community distribution across a landscape are not well understood. Providing a map of the distribution of these microbial communities can help increase the accuracy of regional nutrient cycling models and help quantify soil health.
Expertise
I have very little experience with GIS and Python; however, I have done a fair bit of work with R in my undergraduate degree and graduate work. I hope to gain some proficiency in ArcGIS and learn how to incorporate R scripts into ArcGIS. There are also several R packages built around spatial analysis with which I hope to become familiar.
Traditionally, archaeologists classify projectile points based on a number of different morphological features, such as the presence or absence of distinct features or the shape of the hafting element. However, projectile points can vary widely in shape, size, and construction technique. This means that their classification can often be a matter of opinion with little objective data to support it. With the ever-increasing availability of 3D scanning and morphometric analysis, it is possible to add a level of statistical confidence to these classifications. The goal of this project is to determine how well these new classification methods match traditional classification schemes.
Data set:
The data for this project consists of a number of projectile points from the Pilcher Creek archaeological site in northeastern Oregon. In total, 44 complete projectile points were recovered from the site, which were originally classified into 3 separate categories: corner-notched, lanceolate, and stemmed lanceolate. The majority of the points are made of fine-grained basalt, with a small percentage made of obsidian. These points were originally dated to approximately 8,000 years BP, making them some of the oldest artifacts in Oregon.
Hypothesis/Approach:
It is expected that the classification groups generated through this analysis will mostly follow the original classification, and it may be possible to subdivide the points into more than the original 3 categories. The rough workflow for this project starts with creating a high-resolution 3D model of the artifacts using structured light scanning. These 3D scans are then run through a series of ArcGIS tools to generate a large set of descriptive data which can be used to compare the different points. These data for all the points can then be run through cluster analysis and principal component analysis in order to group artifacts based on the similarity of their morphology.
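As a minimal sketch of that final step, assuming a descriptor table has already been extracted from the scans (the descriptor names, group structure, and values below are made up for illustration), PCA followed by a simple k-means might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shape descriptors (length, width, thickness, neck width) for
# 44 points drawn from three invented morphological groups -- illustrative only.
centers = np.array([[40, 18, 5, 10], [55, 20, 6, 0], [60, 15, 7, 8]], float)
labels_true = rng.integers(0, 3, size=44)
X = centers[labels_true] + rng.normal(scale=1.0, size=(44, 4))

# PCA via SVD on the centered descriptor matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T          # first two principal component scores

# Minimal k-means on the PC scores (k = 3, a few fixed iterations).
k = 3
cent = scores[rng.choice(len(scores), k, replace=False)]
for _ in range(20):
    d = np.linalg.norm(scores[:, None] - cent[None], axis=2)
    assign = d.argmin(axis=1)
    cent = np.array([scores[assign == j].mean(axis=0)
                     if np.any(assign == j) else cent[j] for j in range(k)])

print("cluster sizes:", np.bincount(assign, minlength=k))
```

In practice the descriptor matrix would come from the ArcGIS tool outputs, and the resulting cluster labels could then be cross-tabulated against the three traditional categories.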
Significance:
There are a number of benefits to performing this type of analysis. The first is that 3D scans of the artifacts are much easier to share with the rest of the archaeological community than the actual physical artifacts. The analysis also provides a way to classify these artifacts with calculable certainty and without bias, and it lays the groundwork for future research and comparison. As more artifacts are 3D scanned and made publicly available, it will be possible to create a comparative collection and classification system greater in size and accuracy than anything previously available.
Preparation:
I have a decent amount of experience with Arc and a little with QGIS. I am comfortable with Python and ArcPy. I used to know a little R, but it has been a while since I have used it.
Description of the research question I am exploring
The broad question I am exploring is, “How will climate change affect fire regimes in the Pacific Northwest in the 21st century?” or stated as an overarching hypothesis:
Over the 21st century, projected changes in climate will cause changes in fire regimes in the Pacific Northwest by influencing vegetation quantity, composition, and fuel conditions.
I am exploring this question in the context of model vegetation and fire results from the MC2 dynamic global vegetation model (DGVM). MC2 is a gridded, process model with modules for biogeochemistry, fire, and biogeography. Inputs consist of climate and soil data. Outputs from the model include vegetation type, carbon fluxes and pools, hydrologic data, and values related to fire, including carbon consumed by fire and fraction of grid cell burned.
MC2’s current fire module computes fuel conditions within each grid cell. Fire occurrence is modeled when conditions exceed a set fuel condition threshold. An ignition source is always assumed to be present. This threshold-and-assumed-ignition algorithm has the potential to underestimate fire occurrence in areas that rarely or never meet the fuel condition threshold and to overestimate fire occurrence in areas that frequently exceed the fuel condition threshold. I am currently implementing a stochastic ignitions algorithm that allows the user to set an overall daily ignition probability and applies a Chapman Richards function to a fuel condition measure to determine probability of an ignition spreading into a fire.
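A minimal sketch of such a stochastic ignitions scheme is below; the Chapman-Richards parameters, the daily ignition probability, and the 0-1 fuel-condition scaling are all illustrative assumptions, not MC2's actual values:

```python
import numpy as np

def chapman_richards(x, A=1.0, k=8.0, p=4.0):
    """Chapman-Richards growth curve: rises from 0 toward the asymptote A.
    Parameter values here are illustrative, not MC2's."""
    return A * (1.0 - np.exp(-k * x)) ** p

def fire_occurs(fuel_condition, p_ignition=0.02, rng=None):
    """One day's stochastic test for a grid cell: first draw an ignition
    with a fixed daily probability, then let the fuel condition (scaled
    0-1) determine whether the ignition spreads into a fire."""
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() >= p_ignition:
        return False                     # no ignition today
    p_spread = chapman_richards(fuel_condition)
    return rng.random() < p_spread       # does the ignition spread?

rng = np.random.default_rng(42)
days = 365 * 100                         # 100 model years in one dry cell
fires = sum(fire_occurs(0.8, rng=rng) for _ in range(days))
print(f"fires per century at fuel condition 0.8: {fires}")
```

The key behavioral difference from the threshold algorithm is visible in the curve itself: cells that never reach a hard threshold still burn occasionally, while cells that always exceed it no longer burn every eligible day.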
I will be running the model with historical climate (1895 to 2010) and future climate (2011 to 2100) to produce potential vegetation results (i.e., land use not taken into consideration). Historical data are downscaled from PRISM data, and future data are downscaled from output produced by the CCSM4 climate model under the CMIP5 representative concentration pathway (RCP) 8.5. The model will be run at a 2.5 arc-minute resolution (approximately 4 km x 4 km cell size).
I will compare the output from the 20th century to that of the 21st century and characterize differences in fire regime spatially and temporally. This will be the first run of the MC2 with the new stochastic ignitions algorithm.
(I have added several references below related to what is discussed here.)
The dataset I will be analyzing
The dataset I will be analyzing will come from MC2 model runs described above. The extent of the dataset is from 42° to 49° latitude and from -124.75° to -111° longitude (from the southeast corner of Idaho west to the US coast and north to the Canadian border), comprising 169 x 331 spatial grid cells of size 2.5 x 2.5 arc minutes. Outputs are on an annual basis from 1895 through 2100. Water and barren land are mapped out of the dataset.
Outputs include variables for various carbon pools, fluxes, vegetation characteristics, and fire characteristics. Those I will be analyzing include carbon consumed by fire and fraction of cell burned. I will be summarizing the data over the time dimension to compute mean time between fires (essentially fire return interval, but over a shorter time period than might be appropriate for calculating a true fire return interval).
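A small sketch of the mean-time-between-fires summary, using a made-up annual burn series for a single cell (the real series would span 1895-2100):

```python
import numpy as np

# Hypothetical annual "fraction of cell burned" values for one grid cell;
# a short invented slice stands in for the full 206-year MC2 series.
frac_burned = np.array([0.0, 0.0, 0.3, 0.0, 0.0, 0.0, 0.6, 0.0, 0.1, 0.0])

# Mean time between fires: average gap (in years) between successive
# years in which any burning occurred.
fire_years = np.flatnonzero(frac_burned > 0.0)
if len(fire_years) >= 2:
    mtbf = np.diff(fire_years).mean()
else:
    mtbf = np.nan    # fewer than two fires: the interval is undefined

print("fire years:", fire_years, "mean time between fires:", mtbf)
```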
Hypotheses
Vegetation, elevation, and climate will cause fire regimes to cluster spatially through influences on fuel quantity, composition, and condition.
Projected increased temperature and change in precipitation patterns will cause fire to be more frequent and/or more severe through influences on fuel quantity, composition, and condition.
Shifting climate characteristics will cause regions with similar fire regimes to shift in location due to changing fuel quantity, composition, and conditions.
Kinds of analyses
The first analysis I will do is a cluster analysis using mean time between fires, carbon consumed by fire, and fraction of cell burned. I will first summarize data over six time periods to produce six datasets: four 50-year periods (1901-1950, 1951-2000, 2001-2050, and 2051-2100), and two 100-year periods (1901-2000 and 2001-2100). Then I will run a cluster analysis (type to be determined) on each dataset.
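Assembling the per-cell feature table that such a cluster analysis would consume might look like the sketch below; the grid size, fire probabilities, and output values are all invented stand-ins for the MC2 outputs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical annual outputs for a tiny 3 x 4 grid over one 50-year
# period, shaped (year, row, col) -- stand-ins for the MC2 variables.
years, nrow, ncol = 50, 3, 4
frac_burned = rng.random((years, nrow, ncol)) * (rng.random((years, nrow, ncol)) < 0.1)
carbon_consumed = frac_burned * rng.uniform(50, 500, (years, nrow, ncol))

def mean_time_between_fires(series):
    """Average gap between fire years; NaN if fewer than two fires."""
    fire_years = np.flatnonzero(series > 0)
    return np.diff(fire_years).mean() if len(fire_years) >= 2 else np.nan

# One feature row per cell: mean fraction burned, mean carbon consumed,
# and mean time between fires over the summary period.
features = np.empty((nrow * ncol, 3))
for i, (fb, cc) in enumerate(zip(frac_burned.reshape(years, -1).T,
                                 carbon_consumed.reshape(years, -1).T)):
    features[i] = [fb.mean(), cc.mean(), mean_time_between_fires(fb)]

print(features.shape)   # one row per cell, ready for clustering
```

Each of the six summary periods would yield one such table; the cluster analysis (whatever type is ultimately chosen) then runs on the standardized rows.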
Using two or more of the resulting clustered datasets, I will explore the differences among clusters within each dataset and between datasets (likely using Euclidean distance between clusters).
I will map clustering results back onto the landscape in order to explore spatial patterns within each dataset and differences in spatial patterns between datasets. I will also compare the spatial pattern of clustering results to the spatial extents of EPA Level III ecoregions to see how well or poorly they align.
If time permits, I will do further analyses to characterize the relationship between vegetation type distribution, climate factors, and fire regime clusters.
Expected outcomes
I expect that cells in the same statistical cluster will be concentrated geographically; that for historical data, these concentrations will align closely with EPA Level III ecoregions; that cluster characteristics will differ between time periods; and that geographical groupings of clusters will shift generally northward and somewhat toward higher elevations between the historical and future time periods.
From previous runs of the MC2 and preliminary observations of results from the runs for this project, I know that dominant vegetation type shifts from conifer to mixed forests west of the crest of the Cascade Mountains. Within this region, I expect a large shift in fire regime, with carbon consumed falling and mean time between fires decreasing over much of this region. In other regions, I expect general decreases in the mean time between fires due to warmer temperatures and locally drier summers. I also expect carbon consumed to generally remain constant or locally increase due to more favorable growing conditions.
Importance to science and resource management
Studies using DGVMs commonly produce map and graphic results showing extent and intensity of change over uni- or bidimensional spatiotemporal domains. This approach will provide more quantifiable differences using a multidimensional analysis. The ability to characterize fire regimes this way will allow for better model parameterization and validation, which in turn may lead to greater confidence in model projections.
Model results will provide projected changes across an ecologically and economically important region. Results will help resource managers understand and plan for potential change.
Level of experience
Arc: Medium, a little rusty
ModelBuilder and Python: Expert, especially with Python.
R: Medium, a little rusty
References
The Beginner’s Guide to Representative Concentration Pathways: http://www.skepticalscience.com/rcp.php
Bachelet, D., Ferschweiler, K., Sheehan, T.J., Sleeter, B., Zhu, Z., 2015. Projected carbon stocks in the conterminous US with land use and variable fire regimes. Global Change Biol., http://dx.doi.org/10.1111/gcb.13048
Daly, C., Halbleib, M., Smith, J.I., Gibson, W.P., Doggett, M.K., Taylor, G.H., et al., 2008. Physiographically sensitive mapping of climatological temperature and precipitation across the conterminous United States. Int. J. Climatol. 28 (15), 2031–2064, http://dx.doi.org/10.1002/joc.1688.
Foundational to the effective management of invasive species in agricultural and native landscapes is the question of their spatial extent and the ability to quantify their impacts on agronomic efforts and native vegetation. Invasive plants represent a threat to both agricultural and native landscapes in the form of reduced ecosystem function, increased resource consumption, and reduced yields from agricultural systems. In spite of significant efforts to control and reduce impacts from invasive species, invasive weeds cause an estimated loss of $2 billion annually in the US. Currently, interspecific competition is one of the major limitations on oilseed and grain production in dryland cropping systems of the Pacific Northwest (PNW). Opportunities for the precision monitoring of managed and native ecosystems have become available through the use of low-altitude remote sensing systems and high-resolution satellite systems. However, methods for resolving species-level classification in high-resolution multispectral remote sensing systems remain lacking. This is partially due to the relative novelty of these systems, but is also related to the lack of suitable reference data at spatial and temporal scales appropriate for regionally based models. The broad research question I am exploring is: how does spectral trajectory relate to weed density, and can this information be used to distinguish the spatial extent of weeds in dryland cropping systems? My prediction is that by increasing the spatial and temporal resolution of these data, crop and non-crop species will be distinguishable based on their relative rates of change in greenness.
The objectives I have for this class are to 1) determine the spatial and temporal resolutions at which weed species are distinguishable from crop species using a spectral trajectory technique, 2) compare these methods with ground reference data in a dryland cropping systems study. The major outcome of this work would be a method for distinguishing weed species from crop species in a dryland environment, and the identification of the minimum temporal resolution for distinguishing species in multispectral imagery.
Dataset:
The data set I have for addressing these questions is a composite of 7 flights of images taken with a multispectral camera in a cropping systems study in eastern Oregon. Flights were conducted in conjunction with visual estimates of weed density in semi-permanent monitoring frames installed in the cropping systems study. The images are currently at a low level of processing. One of my goals for this project will be to orthomosaic the images so that I can perform a time series analysis across image collection dates. The temporal resolution is 3-20 days between flights, while the spatial resolution is 3 cm. The images cover the entire spatial extent of the experiment.
Hypothesis:
My hypothesis is that, after generating an orthomosaic, I will be able to quantify weed species and crop species in a frame based on the spectral trajectory of individual pixels in that frame. The question I hope to answer is how far apart in time sample dates must be before weed species can be distinguished from crop species based on their spectral trajectory.
Approach:
I plan on using a trial version of Agisoft to orthomosaic my images. It may be possible to conduct this analysis without orthomosaicking the images; however, an orthomosaic has a number of advantages over non-mosaicked images. Without an orthomosaic, every individual analysis would have to be hard-coded, whereas with an orthomosaic I can automate much of the processing of these data.
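As a sketch of the spectral trajectory idea, assuming the orthomosaicked flights have been resampled to a common grid (the flight dates, greenness values, and patch size below are invented), a per-pixel rate of change in greenness can be fit by least squares across the whole image stack at once:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical per-pixel greenness (e.g., an NDVI-like index) for 7 flight
# dates over a tiny 4 x 4 patch; dates and values are illustrative only.
days = np.array([0, 5, 12, 20, 35, 48, 62], float)   # days since first flight
stack = 0.2 + 0.004 * days[:, None, None] + rng.normal(0, 0.01, (7, 4, 4))

# Spectral trajectory as a per-pixel rate of change: fit greenness ~ time
# by ordinary least squares for every pixel simultaneously.
t = days - days.mean()
slopes = np.tensordot(t, stack - stack.mean(axis=0), axes=(0, 0)) / (t ** 2).sum()

print("mean green-up rate (per day):", slopes.mean())
```

The resulting slope raster is what would be compared against the ground-reference weed densities in the monitoring frames; dropping flight dates from the stack and refitting would probe the minimum temporal resolution question.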
Outcome:
The goal of this analysis will be to identify the minimum time required to discern species, and to describe the statistical relationship between species abundance based on spectral trajectory and ground reference data.
Significance:
While there has been a significant surge of interest in UAVs and low-altitude remote sensing, the number of useful products that let land managers make decisions based on these data remains minimal. This work would identify the minimum temporal resolution a resource manager would need before weed species can be identified based on their spectral characteristics.
Preparation:
I would say I have minimal experience with ArcInfo, ModelBuilder, and Python. I have moderate to high experience with R, and I would also consider myself to have moderate to high experience in image processing and image analysis.