After tinkering with several different tools and datasets, my end goal for this class became to digitally reproduce my manual methodology for mapping wetlands.
My methodology is manual (rather than digital) because temporal and spatial context are very important when mapping wetlands in the Willamette Valley using Landsat satellite imagery.
For example, riparian wetlands along the Willamette River have the same spectral signal as poplar farms and orchards in the same vicinity. A digital classification based on a single image date may confuse these land use types. However, by looking backward and forward in time, I can tell which areas are stable wetlands and which are rotated crops. Additionally, I have other layers such as streams, soils, elevation, and floodplain to check whether it is logical for a wetland to exist in a certain area.
When I classify wetlands I have several different data layers available to aid in decision making:
- Spectral: 40 annual Landsat tasseled cap images from 1972-2012
- Lidar inundation raster
- Streams
- Soils
- Others

There are several possible digital approaches for combining these layers:
- Spectral-only supervised classification, using the time series as additional channels
- Object-oriented classification
- Incorporating ancillary data with the spectral data
- A mix of the above
Arc seemed like the perfect application to attempt to incorporate my spectral Landsat data with my other layers.
At first I thought using one of the regression tools on a mapped sample of my study area could allow me to classify the rest of my study area using OLS predictions. However, when I looked at my data, the relationships between datasets did not appear robust enough.
Instead, I decided to do a binary raster overlay to find possible locations of wetlands. I selected a small study area focused around South Corvallis because it had a good mix of both riparian forest stands as well as hardwood orchards.
I included spectral data from a 2011 tasseled cap image, using the brightness, wetness, and greenness indices. I calculated spectral statistics for B, G, and W using areas of known riparian wetlands. A new binary raster was then created for each of B, G, and W: pixels with values within one standard deviation of the training-area mean were given a “1” and all other pixels were given a “0”.
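A minimal sketch of that thresholding step in Python/numpy, assuming the 2011 brightness, greenness, and wetness bands and a boolean mask of the known riparian wetland pixels have already been loaded as arrays (e.g., via arcpy’s RasterToNumPyArray or rasterio); the variable names are illustrative, not the actual tool workflow I used in Arc:

```python
import numpy as np

# Assumed inputs (illustrative names):
#   brightness, greenness, wetness - 2-D float arrays from the 2011 tasseled cap image
#   wetland_mask - 2-D boolean array marking known riparian wetland pixels
def binary_threshold(band, training_mask):
    """Flag pixels within one standard deviation of the training-area mean."""
    mean = band[training_mask].mean()
    sd = band[training_mask].std()
    return ((band >= mean - sd) & (band <= mean + sd)).astype(np.uint8)

b_bin = binary_threshold(brightness, wetland_mask)
g_bin = binary_threshold(greenness, wetland_mask)
w_bin = binary_threshold(wetness, wetland_mask)
```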
I also included a 1 meter Lidar two-year inundation floodplain map, with areas in the floodplain reclassified as “1” and all other areas as “0”, as well as a 100 m distance raster of the streams in the area.
All layers were given equal weight in the raster overlay. The result was a raster with little to no error of omission but high error of commission.
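Continuing the sketch above, the equal-weight overlay amounts to summing the binary layers (floodplain_bin and stream_dist_bin stand in for the reclassified floodplain and 100 m stream-distance layers); the exact cutoff isn’t spelled out here, so the all-layers-agree rule below is just one possibility:

```python
# Equal-weight overlay: sum the five 0/1 layers (three spectral thresholds,
# the two-year floodplain, and the 100 m stream-distance layer).
overlay_score = b_bin + g_bin + w_bin + floodplain_bin + stream_dist_bin

# One possible decision rule: keep pixels flagged by every layer. Relaxing the
# cutoff (e.g., >= 4) trades error of omission against error of commission.
candidate_wetlands = (overlay_score == 5).astype(np.uint8)
```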
(Areas of yellow indicate manually mapped and validated riparian wetlands; green is the result of the raster overlay.)
Just for comparison, I decided to run a spectral classification in ENVI using my manually mapped wetlands as regions of interest (i.e. training sites).
The result showed increased error of omission but decreased error of commission.
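For comparison outside of ENVI, below is a rough stand-in for that kind of supervised spectral classification, written as a two-class minimum-distance classifier in numpy; ENVI’s own classifiers (e.g., maximum likelihood) are more sophisticated, and this is not the exact routine used for the map above:

```python
import numpy as np

# Stack the tasseled cap bands into an (n_pixels, 3) feature matrix.
features = np.stack([brightness, greenness, wetness], axis=-1)
X = features.reshape(-1, 3)

# Class means from the training data: wetland ROIs vs. everything else.
wet_mean = features[wetland_mask].mean(axis=0)
other_mean = features[~wetland_mask].mean(axis=0)

# Assign each pixel to the nearer class mean in spectral space.
dist_wet = np.linalg.norm(X - wet_mean, axis=1)
dist_other = np.linalg.norm(X - other_mean, axis=1)
classified = (dist_wet < dist_other).reshape(brightness.shape).astype(np.uint8)
```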
You can run spectral classification in Arc, but the process is not streamlined and can become convoluted. Additionally, ENVI automatically extrapolates the classification of an image based on training data; Arc is clunky when it comes to prediction and extrapolation.
My final thoughts on the Arc spatial statistics toolbox are that:
- Arc spatial stats are (mostly) useless when it comes to categorical/non-continuous data
  - Especially useless when it comes to mixing continuous with non-continuous data
- Raster tools are not multi-channel friendly (I had to process each band separately)
- Classification and prediction are convoluted, multistep processes in Arc
  - These are operations that ENVI, R, etc. handle flawlessly
- I should probably expand to other software for serious analysis
  - eCognition, habitat suitability mappers, etc.
Hi Kate!
Your work looks great. How did you do the binary raster overlay? Is that an Arc tool? I might need to do something along those lines in the future. There are several methods to calculate animal home ranges, and I want to find out which areas are shared by all of them. The overlay seems like something I could use.
Great work, Kate!
This type of classification seems perfectly suited for habitat suitability modeling, as you mentioned. You could include data from multiple images in order to account for seasonal land use changes. With a quick Google search I found a report that used MaxEnt to ID presence/absence of wetlands using remote imagery (although MaxEnt wasn’t good at identifying wetland type; the study was in West Virginia in case you want to track it down). If you’re planning on pursuing this analysis further and would like to delve into MaxEnt or some other habitat suitability modeling technique, I’d be happy to share my insights. MaxEnt is pretty easy to use, but there are definitely some quirks/things to be aware of.
Hey Kate, your conclusions are pretty similar to mine for a similar type of non-continuous data and the limitations of Arc, although I do think there are some tools and approaches in Arc that are appropriate. But for high-power analyses I think you’re right that you’ll need to explore options outside of Arc, or maybe a combination of Arc and other software analyses.
Good luck!
Hey Kate, great job! If you’re interested in trying object oriented classification on your data, I have a copy of eCognition and a single license dongle you can use. I used it to try to predict disturbance agents. It was a little clunky and black-boxish, but pretty powerful. -Justin
Kate,
Great work. Looks like your classification/weighted regression is pulling out the relict stream channels of the (formerly braided) Willamette River. Is there a way to use this information in your predictions?
Julia