So this is the third time I am writing this, as the WordPress site continues to delete my work even after saving a draft!
As you may know, I have been working on Envision outputs, trying to compare the resulting datasets in terms of property values and expected flooding and erosion impacts over the next 90 years in Rockaway Beach, Oregon. My original dataset was split into Individual Decision Units (IDUs) of varying sizes and values.
As we’ve seen in my tutorial, small-scale interpolated data can be upscaled to larger parcels, in this case tax lots. Using the Areal Interpolation tool, I was able to create the following surface from my known points. It follows the general trends of the property values fairly well, even though the data only moderately fit the model.
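For intuition about what that upscaling does, here is a rough arcpy sketch of the simpler area-weighted version of the idea. The Geostatistical Analyst's Areal Interpolation is model-based and smarter than this, so treat it as illustration only; all paths and field names below (LOT_ID, VAL_SQFT) are placeholders, not my actual data:

```python
import arcpy

arcpy.env.overwriteOutput = True

# Hypothetical inputs: small decision units and the larger tax lots.
idus = "C:/data/rockaway/IDUs.shp"        # carries a VAL_SQFT field
taxlots = "C:/data/rockaway/taxlots.shp"  # parcels receiving the values
pieces = "C:/data/rockaway/idu_x_lots.shp"

# Overlay the small units on the tax lots so each fragment knows its parent lot.
arcpy.Intersect_analysis([idus, taxlots], pieces)

# Accumulate an area-weighted average value per tax lot.
totals = {}
with arcpy.da.SearchCursor(pieces, ["LOT_ID", "VAL_SQFT", "SHAPE@AREA"]) as rows:
    for lot_id, val, area in rows:
        num, den = totals.get(lot_id, (0.0, 0.0))
        totals[lot_id] = (num + val * area, den + area)

# Write the upscaled value back onto the tax lot polygons.
arcpy.AddField_management(taxlots, "VAL_AW", "DOUBLE")
with arcpy.da.UpdateCursor(taxlots, ["LOT_ID", "VAL_AW"]) as rows:
    for row in rows:
        num, den = totals.get(row[0], (0.0, 0.0))
        row[1] = num / den if den else 0.0
        rows.updateRow(row)
```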
After hearing in class that my data would be better off transformed, I set my Values per SqFt to a log scale. The resulting surface was clearer; however, there are discrepancies between the known values for properties and their interpolated values. I have yet to figure out why this occurs, and I’m hesitant to use the newly created tax lot polygon values until I do.
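Until I track down the cause, a residual check seems like the way to quantify those discrepancies. A minimal sketch, assuming a hypothetical joined layer where each lot carries both its known value and its interpolated log-scale prediction (all field names are assumptions):

```python
import math
import arcpy

# Hypothetical layer produced by a spatial join of knowns and predictions.
joined = "C:/data/rockaway/lots_with_predictions.shp"

with arcpy.da.SearchCursor(joined, ["LOT_ID", "KNOWN_VAL", "LOG_PRED"]) as rows:
    for lot_id, known, log_pred in rows:
        predicted = math.exp(log_pred)  # back-transform from log scale
        if known and abs(known - predicted) / known > 0.25:
            # Flag lots where the interpolated value misses by more than 25%.
            print("{0}: known={1:,.0f} predicted={2:,.0f}".format(
                lot_id, known, predicted))
```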
In the meantime, I have decided to compare flooding and erosion results from several time steps: 2010, 2040, 2060, and 2100. To make these comparisons, I read up on and chose the Feature Compare tool, which compares shapefiles based on an attribute or geometry. The tool needs pared-down data, so I created new shapefiles of properties affected solely by flooding or erosion through a series of definition queries and exports (a scripted version of that workflow is sketched below). Once in the tool, though, I started running into problems: it recognized differences in the attribute table (i.e., the number of rows) but did nothing to note the differences in coverage on the map. I have been looking elsewhere for a tool that fits my needs but have yet to come across one.
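For anyone who wants to script the same steps, here is a minimal arcpy sketch; the paths, the FLOODED flag, and the LOT_ID sort field are all hypothetical stand-ins for my actual data:

```python
import arcpy

arcpy.env.workspace = "C:/data/rockaway"  # hypothetical workspace
arcpy.env.overwriteOutput = True

# The definition-query step, scripted: keep only flood-affected properties.
flood_2010 = arcpy.Select_analysis("impacts_2010.shp", "flood_2010.shp",
                                   '"FLOODED" = 1')
flood_2040 = arcpy.Select_analysis("impacts_2040.shp", "flood_2040.shp",
                                   '"FLOODED" = 1')

# Feature Compare writes its findings to a report file; it flags attribute
# and geometry differences but does not map where the coverage changed,
# which is exactly the limitation described above.
arcpy.FeatureCompare_management(in_base_features=flood_2010,
                                in_test_features=flood_2040,
                                sort_field="LOT_ID",
                                compare_type="ALL",
                                continue_compare="CONTINUE_COMPARE",
                                out_compare_file="compare_2010_2040.txt")
```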
Instead, I moved on to comparing hot spot locations to these areas of flooding and erosion. The Hot Spot Analysis tool is pretty straightforward: it creates a new shapefile with a GiZScore field, a z-score for each property measuring, in standard deviations from the mean, how strongly high (or low) values cluster around it. The hot colors denote larger positive z-scores.
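The same analysis can be run as a one-liner in arcpy; the assessed-value field name and paths here are assumptions:

```python
import arcpy

# Getis-Ord Gi* hot spot analysis on assessed value ("ASSD_VAL" is a
# placeholder field name). The output gains a GiZScore field, where large
# positive z-scores mark clusters of high-value properties.
arcpy.HotSpots_stats("C:/data/rockaway/taxlots.shp", "ASSD_VAL",
                     "C:/data/rockaway/value_hotspots.shp",
                     "FIXED_DISTANCE_BAND", "EUCLIDEAN_DISTANCE", "NONE")
```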
Next, I used the Intersect tool to find the areas of greatest property value and greatest extent of damage. I first intersected the erosion and flooding data across the four time steps to find areas that are continually under “attack”.
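Scripted, the chained intersects look roughly like this (the per-year shapefile names are placeholders):

```python
import arcpy

arcpy.env.workspace = "C:/data/rockaway"  # hypothetical workspace
arcpy.env.overwriteOutput = True

# Intersect keeps only the areas present in every input, i.e. the tax lots
# impacted in all four time steps.
erosion = ["erosion_2010.shp", "erosion_2040.shp",
           "erosion_2060.shp", "erosion_2100.shp"]
arcpy.Intersect_analysis(erosion, "erosion_all_years.shp")

flooding = ["flood_2010.shp", "flood_2040.shp",
            "flood_2060.shp", "flood_2100.shp"]
arcpy.Intersect_analysis(flooding, "flood_all_years.shp")

# Overlay the persistent-impact areas with the hot spot results to find
# where high value and repeated damage coincide.
arcpy.Intersect_analysis(["flood_all_years.shp", "value_hotspots.shp"],
                         "hotspot_flooding.shp")
```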
Over the course of 90 years, only one subdivided tax lot (three resulting properties) was repeatedly impacted by erosion. According to the assessed values for those properties, the three lots total approximately $97,000. This kind of analysis could help determine whether protection structures, such as revetments or beach nourishment, would be economically feasible. Flooding, meanwhile, impacts a total of 44 tax lots over the course of the 90 years, and the total assessed value of the impacted properties is much higher, at about $3.1 million.
This is mostly due to the occurrence of flooding in the “hot spot” of Rockaway Beach. If only these “hot spot” areas (greater than one standard deviation above the mean) are considered, just 16 properties are affected, with a total loss of about $2.3 million, or almost 75% of the $3.1 million total.
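For the record, totals like these can be tallied with a quick cursor; the file and field names are again stand-ins:

```python
import arcpy

# Sum assessed value over the persistent-flooding output ("ASSD_VAL" is a
# placeholder field name).
total, count = 0.0, 0
with arcpy.da.SearchCursor("C:/data/rockaway/flood_all_years.shp",
                           ["ASSD_VAL"]) as rows:
    for (val,) in rows:
        total += val or 0.0
        count += 1
print("{0} tax lots, ${1:,.0f} total assessed value".format(count, total))
```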
Overall, I’m not sure ArcGIS 10.1 has the tools I need to perform the analysis I described above. As I become more comfortable with R, I hope to pull these datasets into that program and perform more statistical analysis. In the meantime, I will continue trying different ways to present the data and glean new insights.
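As a first step toward that, dumping an attribute table to CSV makes it easy to pull into R with read.csv. A rough sketch, with hypothetical paths and fields (GiZScore is the field the hot spot tool actually writes):

```python
import csv
import arcpy

# Export selected columns of the hot spot output for analysis outside ArcGIS.
fields = ["LOT_ID", "ASSD_VAL", "GiZScore"]
with open("C:/data/rockaway/hotspot_lots.csv", "wb") as f:
    writer = csv.writer(f)
    writer.writerow(fields)
    with arcpy.da.SearchCursor("C:/data/rockaway/value_hotspots.shp",
                               fields) as rows:
        for row in rows:
            writer.writerow(row)
```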