So this is the third time I am writing this, as the WordPress site continues to delete my work even after saving a draft!

As you may know, I have been working on ENVISION outputs, trying to compare the resulting datasets in terms of property values and expected flooding and erosion impacts over the next 90 years in Rockaway Beach, Oregon. My original dataset was split into Individual Decision Units (IDUs) of varying sizes and values.

[Figures 1 and 1a]

As we’ve seen in my tutorial, small-scale interpolated data can then be upscaled to larger parcels, in this case tax lots. Using the Areal Interpolation tool, I was able to create the following surface from my known points. It follows the general trends of the property values fairly well, even though the data only moderately fit the model.
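Since the Areal Interpolation workflow runs through the Geostatistical Wizard and is awkward to script directly, here is a crude programmatic stand-in (not the tool itself): averaging the known point values that fall inside each tax lot with geopandas. The file names and the VAL_SQFT field are my own placeholders, not the actual dataset schema.

```python
import geopandas as gpd

# Hypothetical inputs: known point values and tax-lot polygons
points = gpd.read_file("idu_points.shp")
lots = gpd.read_file("tax_lots.shp")

# Tag each point with the tax lot that contains it, then average per lot
joined = gpd.sjoin(points, lots, how="inner", predicate="within")
lot_means = joined.groupby("index_right")["VAL_SQFT"].mean()

# Attach the per-lot mean back onto the polygons and save
lots["VAL_SQFT"] = lots.index.map(lot_means)
lots.to_file("tax_lots_values.shp")
```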

[Figure 2]

After hearing in class that my data would be better transformed, I converted my Value per SqFt field to a log scale. The resulting surface was clearer; however, there are discrepancies between the known property values and their interpolated values. I have yet to figure out why this occurs, and I’m hesitant to use the newly created tax lot polygon values until I do.
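For reference, the transform itself is a one-field calculation. A minimal arcpy sketch, assuming a shapefile with a VAL_SQFT field (both names are hypothetical):

```python
import arcpy

fc = "idu_values.shp"  # hypothetical input shapefile

# Add a field to hold the transformed values
arcpy.AddField_management(fc, "LOG_VAL", "DOUBLE")

# Log-transform Value per SqFt, guarding against zero/negative values
arcpy.CalculateField_management(
    fc, "LOG_VAL",
    "math.log10(!VAL_SQFT!) if !VAL_SQFT! > 0 else None",
    "PYTHON_9.3")
```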

[Figure 3]

In the meantime, I have decided to compare flooding and erosion results from several time steps: 2010, 2040, 2060, and 2100. To make these comparisons, I read up on and chose the Feature Compare tool, which compares shapefiles based on an attribute or geometry. The tool needs pared-down data, so I created new shapefiles of only the properties affected by flooding or erosion through a series of definition queries and exports. Once in the tool, though, I started running into problems: it recognized differences in the attribute tables (i.e., the number of rows) but did nothing to note differences in coverage on the map. I’ve been looking elsewhere for a tool that fits my needs but have yet to come across one.
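For anyone retracing these steps, the selection and comparison can be scripted roughly as below. The FLOODED flag and TAXLOT_ID sort field are my guesses at the ENVISION output schema, not the real field names.

```python
import arcpy

# Export only the flood-affected properties for two time steps
arcpy.Select_analysis("idu_2010.shp", "flood_2010.shp", '"FLOODED" = 1')
arcpy.Select_analysis("idu_2100.shp", "flood_2100.shp", '"FLOODED" = 1')

# Feature Compare checks the two tables row by row against a sort field,
# which is why it flags differing row counts but says nothing about
# where on the map the coverage differs
arcpy.FeatureCompare_management(
    in_base_features="flood_2010.shp",
    in_test_features="flood_2100.shp",
    sort_field="TAXLOT_ID",
    compare_type="ALL",
    continue_compare="CONTINUE_COMPARE",
    out_compare_file="compare_report.txt")
```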

Instead, I moved on to comparing hot spot locations to these areas of flooding and erosion. The Hot Spot Analysis tool is pretty straightforward: it creates a new shapefile of Gi* Z-scores, which rank the property values based on their standard deviations away from the mean. The hot colors denote greater positive standard deviations.
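The call itself is short. A sketch, assuming an ASSD_VAL assessed-value field (a hypothetical name):

```python
import arcpy

# Getis-Ord Gi* hot spot analysis on assessed property values
arcpy.HotSpots_stats(
    Input_Feature_Class="tax_lots_values.shp",
    Input_Field="ASSD_VAL",
    Output_Feature_Class="value_hotspots.shp",
    Conceptualization_of_Spatial_Relationships="FIXED_DISTANCE_BAND",
    Distance_Method="EUCLIDEAN_DISTANCE",
    Standardization="NONE")
```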

[Figures 4 and 4a]

Next, I used the Intersect tool to find the areas of greatest property value and greatest extent of damage. I first intersected the erosion and flooding data across the four time steps to find areas that are continually under “attack”.
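The overlay step is nearly a one-liner, since Intersect accepts a list of inputs and keeps only the areas common to all of them (file names assumed):

```python
import arcpy

# One flood layer per time step; the intersection is the area
# flooded in all four years, i.e., continually under "attack"
flood_layers = ["flood_2010.shp", "flood_2040.shp",
                "flood_2060.shp", "flood_2100.shp"]
arcpy.Intersect_analysis(flood_layers, "flood_all_years.shp")
```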

Over the course of 90 years, only one subdivided tax lot (three parcels) was repeatedly impacted by erosion. According to the assessed values for those properties, the three tax lots total approximately $97,000. This kind of analysis could help determine whether protection structures, such as revetments or beach nourishment, would be economically feasible. Similarly, flooding impacts a total of 44 tax lots over the course of 90 years. The total assessed value of the impacted properties is much higher, at about $3.1 million.

[Figures 5 and 5a]

This is mostly due to the occurrence of flooding in the “hotspot” of Rockaway Beach. If only these “hotspot” areas (greater than one standard deviation above the mean) are considered, only 16 properties are affected, with a total loss of about $2.3 million, or almost 75% of the $3.1 million total.
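Tallying those losses can be done with a search cursor over the intersected layer. GiZScore is the field the Hot Spot Analysis tool writes; the flood_hotspot.shp layer and ASSD_VAL field are assumptions:

```python
import arcpy

total = 0.0
count = 0
with arcpy.da.SearchCursor("flood_hotspot.shp",
                           ["GiZScore", "ASSD_VAL"]) as cursor:
    for z_score, value in cursor:
        if z_score > 1.0:  # more than one standard deviation above the mean
            total += value
            count += 1

print("{0} properties, ${1:,.0f} total assessed value".format(count, total))
```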

[Figures 6 and 6a]

Overall, I’m not sure ArcGIS 10.1 has the tools I need to perform the analysis described above. As I become more comfortable with R, I hope to pull these datasets into that program and perform more statistical analysis. In the meantime, I will continue trying different ways to present the data and glean new insights.

So, I’ve been able to get a few more datasets from ENVISION, including flooding and erosion data for 1980 and every decade after. The flooding and erosion risk was calculated in ENVISION using estimates of sea level rise from a National Research Council report specific to the Pacific Northwest. I am currently struggling with the non-georeferenced tables of point data for the years 2000-2040, which I need to map before I can perform hotspot analysis, but I am sure I can figure out how to display them and/or join them to spatially referenced data eventually.
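If those tables turn out to carry coordinate columns, Make XY Event Layer should get them on the map; the column names, file names, and projection below are all guesses:

```python
import arcpy

# Turn a plain table into a point layer using its X/Y columns
arcpy.MakeXYEventLayer_management(
    table="flood_2020.csv",
    in_x_field="X_COORD",
    in_y_field="Y_COORD",
    out_layer="flood_2020_points",
    spatial_reference=arcpy.SpatialReference(2992))  # NAD83 / Oregon Lambert (ft)

# Persist the event layer as a shapefile for hotspot analysis
arcpy.CopyFeatures_management("flood_2020_points", "flood_2020.shp")
```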

Future analyses include:

1. I was hoping to compare hotspot results from decade to decade, using some sort of tool that compares point layers and changes over time (see the sketch after this list).

2. I’d like to integrate and collect the data from the 1980-2040 span (so 60 years) and see which areas will have the greatest flooding over that time. This could help with long-term planning.

3. Since flooding and erosion are expected to occur along the coast in all 60 years of data, how can I parse out the more interesting information a bit further inland and make the map outputs more useful (and not crowded by this dominant factor)? This question may resolve itself if I perform task 1, since no change will be shown in places that encounter flooding/erosion frequently.

4. Potentially take these analyses and perform them on non-spatial attributes of the data, i.e., property values, critical structures, etc.

5. Combine analyses with demographic data. Are the hotspots in areas of low income or other distinct demographic communities?
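For idea 1, a simple starting point is to join the hotspot outputs from two decades on a shared parcel ID and look at how each parcel’s Gi* z-score shifts; the sketch below uses geopandas and a hypothetical TAXLOT_ID key:

```python
import geopandas as gpd

# Hotspot outputs for two decades (file and field names assumed)
h2020 = gpd.read_file("hotspots_2020.shp")[["TAXLOT_ID", "GiZScore", "geometry"]]
h2030 = gpd.read_file("hotspots_2030.shp")[["TAXLOT_ID", "GiZScore"]]

merged = h2020.merge(h2030, on="TAXLOT_ID", suffixes=("_2020", "_2030"))
merged["z_change"] = merged["GiZScore_2030"] - merged["GiZScore_2020"]

# Parcels whose hotspot intensity shifted by more than one standard deviation
shifted = merged[merged["z_change"].abs() > 1.0]
print(shifted[["TAXLOT_ID", "z_change"]])
```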

There are many things I can do with this dataset once I figure out how to handle the data. Cheers!


As I started to describe in class, my project will be dealing with output results from the modeling software ENVISION. ENVISION is a GIS-based tool for scenario-based community and regional planning and environmental assessments. It combines a spatially explicit, polygon-based representation of a landscape (IDUs, or Individual Decision Units, in my case), a set of application-defined policies, landscape change models, and models of ecological, social, and economic services to simulate land use change and provide decision-makers, planners, and the public with information about the resulting effects.

The ENVISION project I am involved with is centered on Tillamook County and its coastal communities. Through a series of stakeholder meetings (which have included a range of people, from private landowners to state land use commissioners), our group identified several land use policies to implement in the ENVISION model. The policies were then grouped into three types of management responses: the baseline (or status quo), ReAlign, and two types of Hold the Line (high vs. low management) scenarios. These policy scenarios have been combined with current physical parameters of the coastline, such as dune height and beach width, and will also be linked with climate change conditions at low, medium, and high levels for the next 30, 50, and 100 years.

Since ENVISION is GIS-based already, I am having a tough time coming up with a problem that complements the project in ArcGIS. ENVISION does a great job of visualizing the changes expected for each location along the coast via videos and graphs (see below), and it can even include economic estimates.

[Figure: County Wide]

Therefore, it may be best to explore the capabilities of software like R to analyze the output data. One idea would be to calculate the probability of occurrence for these different events and the total number of occurrences. I need to take a deeper look into how these events are calculated to begin with, and determine the inherent estimates of probability and uncertainty. This type of analysis would help determine whether the exercise is beneficial for stakeholders and would help answer their own questions of trust in the results.
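As one concrete version of that idea, suppose ENVISION can export one flag table per model run, with a row per IDU and a 0/1 FLOODED column (a layout I am guessing at); counting occurrences and a naive empirical probability in pandas would then look like:

```python
import pandas as pd

runs = ["run_low.csv", "run_med.csv", "run_high.csv"]  # assumed file names

# One 0/1 flood flag per IDU per run, aligned on a shared IDU_ID
flags = pd.concat(
    [pd.read_csv(f).set_index("IDU_ID")["FLOODED"] for f in runs],
    axis=1, keys=runs)

occurrences = flags.sum(axis=1)        # total number of occurrences per IDU
probability = occurrences / len(runs)  # naive empirical probability

print(probability.sort_values(ascending=False).head(10))
```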

Another idea would be to focus on specific results from ENVISION and try to determine exactly how one policy affects the coastline and creates such disparate results. For instance, the graph below shows the number of flooded and eroded structures in Pacific City under three types of scenarios. What is causing the large number of eroded/flooded structures between 2035 and 2040? Why is there such a small difference between the ReAlign and Hold the Line strategies if they employ such different options? Some of these questions may be answered with a greater understanding of ENVISION; however, these are the types of questions stakeholders may ask, and it would be prudent to provide the more quantitative answers that ArcGIS or R could glean.

[Figure: Pacific City]