We’re going to look at some beach morphology data from Benson Beach, just north of the North Jetty on the Columbia River. The beach is accessible from Cape Disappointment State Park near Ilwaco, WA.

In 2010 and 2011, the USGS, OSU, WA State Department of Ecology, and the Army Corps of Engineers did a joint study of the impacts of beach nourishment. The Army Corps traditionally dumps dredging materials offshore, outside of the reach of littoral sediment transport. In the summer of 2010, they deposited these materials on high-energy, often erosive Benson Beach.

Today, we’ll do a difference analysis of beach elevations before and after the beach nourishment. Data were collected with GPS carried on foot and mounted on a Polaris side-by-side, and with jet skis equipped with GPS and sonar.

 

Start by copying the folder S:\Geo599\Data\SEXTON_Difference_Tutorial into your own folder.

We’re going to start with the point data. It’s been cleaned, but otherwise has not been processed. I had some trouble opening it, so please let me know if you have any issues – I’ve got a back-up plan.

The S01 data is the “before nourishment” baseline survey, performed on July 11, 2010. The S06 data is the “after nourishment” survey, performed September 22, 2010.

 

Open ArcMap.

In the Catalog, navigate to where you saved the folder. Find the S01 folder and right-click on the file: S01_20100711_topo.

Create a feature class from XY table.

Choose Field 1 for X and Field 2 for Y.

Leave Z as “none” (we have Z data, but the tool got real mad when I tried to specify it here).

Click the Coordinate System of Input Coordinates button:

Select “Projected Coordinate Systems,” then “State Plane,” then “NAD 1983 (Meters).”

Navigate to and select the coordinate system called:

NAD_1983_StatePlane_Washington_South_FIPS_4602

Choose where to save your output feature class. I called mine: XYS01_20100711_topo.shp

Hit OK and cross your fingers – this is where your program might crash!
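If the tool keeps crashing, it can help to sanity-check the input outside of Arc. Here is a minimal plain-Python sketch (not part of the Arc workflow) that parses XYZ text rows the same way we mapped the fields above: Field 1 as X, Field 2 as Y, Field 3 as Z. The delimiter handling and sample values are my own assumptions, not real survey data:

```python
def read_xyz(lines, delimiter=None):
    """Parse delimited XYZ rows into (x, y, z) tuples.

    Field 1 -> X, Field 2 -> Y, Field 3 -> Z, matching the tutorial's mapping.
    With delimiter=None, rows are split on any whitespace.
    """
    points = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank rows
        parts = line.split(delimiter) if delimiter else line.split()
        x, y, z = (float(p) for p in parts[:3])
        points.append((x, y, z))
    return points

# Tiny made-up sample in State Plane meters (not real survey values)
sample = """\
229100.5 95210.2 3.41
229101.1 95211.0 3.38
229102.0 95212.3 3.30
"""
pts = read_xyz(sample.splitlines())
```

If a file parses cleanly like this, a crash in Arc probably isn’t the text file’s fault.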

[Figure 1]

Once it’s done thinking (and maybe not responding for a bit – don’t panic if that happens, it kept working for me), navigate to the new shape file in your folder and drag it into your working area.

This should be pretty exciting, except that all the symbols are one color. To better visualize the data, right-click on the layer and open Properties. Click the “Symbology” tab, then “Quantities” on the left. Change the “Value” to “Field3” and select a color ramp. Field3 holds your Z values, so the layer will now be colored by elevation!

PS: when you tell it to use Field3, it gives you an angry message. I just hit OK, and it moved on and looked fine. If anyone has any suggestions, I’m all ears.
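Under the hood, a graduated-color display just bins the Field3 values into classes and assigns each class a color. Here is a hypothetical plain-Python sketch of the equal-interval scheme (ArcMap offers other classification methods too, like natural breaks; the values are made up):

```python
def equal_interval_breaks(values, n_classes):
    """Upper break points for equal-interval classification."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_classes
    return [lo + width * i for i in range(1, n_classes + 1)]

def classify(value, breaks):
    """Return the 0-based class index for a value, given ascending breaks."""
    for i, upper in enumerate(breaks):
        if value <= upper:
            return i
    return len(breaks) - 1

# Made-up elevations (meters); 3 classes -> breaks at 2.0, 4.0, 6.0
z_values = [0.0, 1.5, 2.0, 3.5, 4.0, 6.0]
breaks = equal_interval_breaks(z_values, 3)
classes = [classify(z, breaks) for z in z_values]
```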

[Figure 2]

 

While this was super cool, I thought we’d shortcut pulling in the rest of the XY data, so I’ve provided you with shape files of the rest.

In the S01 folder, there is a help folder. In there, you’ll find my version of the S01 topo shape file (which you likely just completed) and an S01 bathy shape file, called XYS01_20100711_bathy. Drag that into the work space. You can change the colors to see the elevation, but it’s not necessary for our next steps.

 

You’ll also need to drag in the S06 shape files. I still gave you the XYZ text files in the S06 folder, but again provided a “help” folder with shape files of each point set. Drag these files into your work space:

XYS06_20100922_topo.shp

XYS06_20100922_bathy.shp

You should now have four layers: two bathy, two topo; one from each survey.

Now we’re going to make some surfaces.

In the toolbox, open 3D Analyst, then Data Management, then TIN, and find the “Create TIN” tool (or search for it…).

Name your first TIN S01

You’ll need to choose the projection again: NAD_1983_StatePlane_Washington_South_FIPS_4602

Then choose the two S01 feature classes, bathy and topo.

When they appear in the list of input features, you’ll need to change the Height Field to Field3!

Hit OK and it will make a TIN.

[Figure 3]

Hopefully, it looks like this:

[Figure 4]

Repeat the process for the S06 data set. Don’t forget to specify the Height is in Field3.

[Figure 5]

Lastly, we’re going to perform a difference analysis. This will compare the baseline beach survey with the “after nourishment” survey and show where sand was deposited, and where erosion occurred after the deposition (since the second survey wasn’t taken immediately afterward).
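On a pair of aligned grids, the core of a difference analysis is just per-cell subtraction: positive cells mean deposition, negative cells mean erosion. A toy sketch with invented numbers (the real tool works on TINs and handles alignment for you):

```python
def surface_difference(after, before):
    """Per-cell elevation change: positive = deposition, negative = erosion.

    Both grids must share the same shape and cell alignment.
    """
    return [
        [a - b for a, b in zip(after_row, before_row)]
        for after_row, before_row in zip(after, before)
    ]

# Made-up 2x3 elevation grids (meters)
s01 = [[1.0, 1.2, 1.4],
       [0.8, 1.0, 1.1]]  # baseline (before nourishment)
s06 = [[1.5, 1.6, 1.3],
       [0.9, 1.0, 1.0]]  # after nourishment
change = surface_difference(s06, s01)
```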

 

In the 3D Analyst toolbox, expand the “Triangulated Surface” options, then open the “Surface Difference” tool.

Set the Input Surface to “S06.”

Set the Reference Surface to “S01.”

Name your output feature class; I called mine “Beach_Change.”

Expand the Raster Options and name an output raster; I also called this one “beach_change.”

Leave cell size as default (10)

Click OK! This might take a minute.

[Figure 6]

The resulting raster can be recolored to more readily show differences.

[Figure 7]

The feature class generated currently shows only where sand was deposited (blue), eroded (green), or unchanged. While this might be helpful, I wanted elevation changes in a feature class.

If you’d rather have a shape file:

Open the attribute table for the surface_dif1 feature class.

Add a field, call it “ele_dif” and make it a “float”

Open the field calculator and calculate the elevation change by multiplying Volume by Code (negative means loss of sediment, positive means gain) and dividing by SArea. Then click OK.
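The arithmetic behind that expression: the tool reports an unsigned Volume, a signed Code, and the polygon’s surface area, so the signed volume divided by area gives the mean elevation change over the polygon. A quick sketch with made-up numbers:

```python
def mean_elevation_change(volume, code, surface_area):
    """Mean elevation change over a polygon: Volume * Code / SArea.

    Code is negative where sediment was lost and positive where it was
    gained, matching the field-calculator expression in the tutorial.
    """
    return volume * code / surface_area

# Hypothetical polygons
gain = mean_elevation_change(250.0, 1, 500.0)   # 250 m^3 gained over 500 m^2
loss = mean_elevation_change(100.0, -1, 500.0)  # 100 m^3 lost over 500 m^2
```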

[Figure 8]

Close the attribute table and open Properties. Choose the “Symbology” tab, then “Quantities.”

Choose the new “ele_dif” as the value, and increase the number of classes (I chose 9). When you hit okay, the feature class will show the elevation changes!

[Figure 9]

 

I started the term with some lofty ideas of learning to process LiDAR data in Arc. Throughout the term, I’ve had many trials and tribulations, some as trivial as my inability to open the data I was interested in processing. While the term has felt difficult and trying, as I have written this blog (3 times now, due to WordPress…), I’ve concluded that I’ve actually accomplished a lot.

Realistically, I started this course with very little knowledge of data processing in Arc (like none at all). I was pretty good at making things look pretty and making informative maps of the places we would next be mapping. When it came to actually pulling in raw data and making sense of it, I probably wasn’t at an “advanced spatial statistics” level. So what I took from the course was a whole bunch of rudimentary skills I hadn’t known I didn’t know.

I pulled data from the group I worked with for two years: the Coastal Monitoring and Analysis Program at the WA Dept of Ecology. They’re really good at foot-borne GPS data collection, walking profiles and using jet skis and sonar. While I was there, we acquired a laser to perform LiDAR scans, a multibeam sonar to map bathymetry, and an inertial measurement unit to tie it all together on a constantly moving boat. Many of the dead ends I encountered related to the never-ending battle to synthesize these very complex systems. We’re still working on it; you’ll see what I mean.

So I started with a simple scan. One pass of the laser on a beach where the team had put out retired Dept of Transportation signs as possible targets.

[Figure 1]

Circled in yellow are the plywood targets made while I was working there and used in many scans. Turns out they’re not very reflective. In red are two stop signs: the one on the left is lightly painted with white paint and then made into a checkerboard with some tar paper; the one on the right was untouched.

The goal here is to view intensity data. When the laser receives a return from one of its pulses, it records the intensity of the light coming back. White or bright colors reflect more light, whereas dark colors absorb some of it. This means intensity values can almost function like a photo. We can use them to pick out the center of the targets in the scan and ground-truth against GPS positions taken on land. Then we’ll feel more confident in our boat’s scanning accuracy. Right now these targets are just serving to make us 100% sure that our boat data is wrong.
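A common way to make intensity read like a photo is a simple min–max stretch to 0–255 grayscale. This is a hypothetical sketch, not anything ArcMap does verbatim; the raw values are invented:

```python
def stretch_to_grayscale(intensities):
    """Linearly rescale raw LiDAR intensities to 0-255 grayscale values.

    Bright (white or reflective) surfaces map toward 255 and dark ones
    toward 0, which is why an intensity image can look almost like a photo.
    """
    lo, hi = min(intensities), max(intensities)
    if hi == lo:
        return [0 for _ in intensities]  # flat data: nothing to stretch
    return [round(255 * (v - lo) / (hi - lo)) for v in intensities]

raw = [120, 180, 900, 1500]       # made-up return intensities
gray = stretch_to_grayscale(raw)  # dark sand ... bright sign face
```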

Below is a profile-ish view of the target scan, showing intensity. The same things are circled as above. You’ll note the old targets are really hard to distinguish. Also notice the black, low-intensity returns at the edge of the stop signs with the high-intensity white in the center – this is what we were going for!

[Figure 2]

Getting this view was really exciting. But as I played with the 3D views in Scene, I discovered something awful with the DOT signs (it was actually really exciting, but awful for those who wanted these targets to work). While the ring of low-intensity returns traces the edge of the stop sign, the high-intensity center returns are not showing at the face of the sign, where they should be. Instead, they are scattered behind the sign. You can see this in the plan-ish view below:

[Figure 3]

With some help from classmates, we discovered that this is due to the retroreflective material used to achieve the high visibility. When light hits these types of signs from any angle, it scatters so that anyone looking at the front of the sign (even without a light source) can see them. This means that the laser’s assumption that the light was hitting the sign and coming straight back was not valid; in fact, the light was returning to the laser at angles and taking longer than it should have. What is still unexplained is why there appear to be no returns actually on the face of the sign.
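For a sense of scale: a time-of-flight laser computes range as R = c·t/2, so any extra delay the light spends bouncing inside the sign’s material is read as extra range behind the sign. A back-of-envelope sketch:

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def apparent_range(round_trip_seconds):
    """Time-of-flight range: distance = c * t / 2 (out and back)."""
    return C * round_trip_seconds / 2

# Each extra nanosecond of delay inside the sign reads as ~15 cm of
# extra range, pushing the return behind the sign face.
error_per_ns = apparent_range(1e-9)
```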

Ultimately, the unpainted versions of DOT signs are not going to work for our purposes. But I thought I’d do a last bit of educating myself on what I would do if I had good data to work with. I imported the GPS points taken at the center of each target into Scene, where they could display in 3D. It was easy to see that, beyond the bad target reflectivity, we also have a problem with our system’s calibration. The two points from the plywood targets are circled in purple. Despite the challenge in picking out the center of these targets, it’s obvious the points do not agree.

[Figure 4]

My ultimate goal with other scans would be to quantify sediment movement over time by subtracting two surfaces. Although I don’t need to monitor this beach, the scan was one of my smaller ones, so I used it to learn to make a TIN.

The TIN method, shown in my tutorial, is not intended to account for large data shadows that result from performing horizontal LiDAR (as opposed to airborne LiDAR). This means that it doesn’t like to leave spots uninterpolated and instead creates a triangle over areas of no data. Thus, if we drove by with the boat, scanning, and got a few returns off of a tree or house 500m behind the beach of interest, Arc’s TIN interpolation would create a surface connecting the beach data to the far off tree/house data, and most of that surface would be silly. My method for dealing with this issue was to delete any of this far-off data, since our region of interest (ROI) is the beach.
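That delete-the-far-off-data step amounts to clipping returns to a rectangle around the beach before triangulating. A plain-Python stand-in for the idea (the coordinates and cutoffs are invented):

```python
def clip_to_roi(points, x_min, x_max, y_min, y_max):
    """Keep only points whose XY falls inside a rectangular region of interest.

    A crude stand-in for deleting backdune or far-field returns so the TIN
    cannot stitch the beach to stray hits on distant trees or houses.
    """
    return [
        (x, y, z) for x, y, z in points
        if x_min <= x <= x_max and y_min <= y <= y_max
    ]

pts = [
    (10.0, 5.0, 1.2),   # on the beach
    (12.0, 6.0, 1.5),   # on the beach
    (510.0, 6.0, 9.0),  # lone return off a tree ~500 m inland
]
beach_only = clip_to_roi(pts, 0.0, 100.0, 0.0, 100.0)
```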

Not surprisingly, this was a challenge too. You cannot delete points in an LASD file in Arc. After many trials, converting it to Multipoint was the best option. This process worked for this small scan, but not for larger data sets. After it was converted to multipoint, I could click areas and delete sections of data. I could not delete individual points, but for my learning purposes, I decided that didn’t matter. I removed the backdune data and as many of the pilings as possible. I used the “Create TIN” tool and came up with this surface.

[Figure 5]

Once again, it served to highlight where we had some error in our collection system. The discontinuities on the beach helped us pinpoint where our problems are coming from. Each of the discontinuities occurs where the laser made more than a single pass over the landscape – it either scanned back over where it had already been, or sat and bounced around (this happens; we’re on a boat). If our boresight angle (the angle we’re telling the laser it is looking, relative to the GPS equipment) were correct, the data should line up even when we scan from different angles.
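The size of those mismatches is roughly predictable: a boresight angle error rotates every return about the scanner, so the horizontal offset grows with range as about R·sin(Δθ), and two passes from opposite directions disagree by roughly twice that. A quick sketch (the numbers are illustrative, not our actual calibration error):

```python
import math

def boresight_offset(range_m, angle_error_deg):
    """Approximate horizontal displacement caused by a boresight angle error.

    A small angular misalignment shifts a return by about
    range * sin(angle_error), so the error grows with distance
    from the scanner.
    """
    return range_m * math.sin(math.radians(angle_error_deg))

# A 0.1-degree error at 200 m of range shifts returns by roughly 0.35 m;
# opposing passes over the same sand would disagree by about twice that.
off = boresight_offset(200.0, 0.1)
```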

Throughout the term, I came back to a scan we did on Wing Point, on Bainbridge Island, WA. I had mild success with processing the scan: I was able to open it in both ArcMap and Scene. However, most other attempts failed. Creating a TIN caused the system to crash, and it also would have connected many unrelated points (like the ones separated by water, and shorelines without data in between). After my success with multipoint on the small scan, I tried that route, but the scan appears to be too large. I am hopeful that if I have the team trim the data down in another program (the laser’s proprietary software), then I could do a difference analysis once the laser’s communication is perfected and the scans line up properly. Below is a shot of the Wing Point data!

[Figure 6]

I learned so much throughout this course. I reprocessed a bunch of data from Benson Beach, WA in preparation for my tutorial. Since this is lengthy already, I am posting the tutorial separately. Obviously, I ran into a bunch of data difficulties, but I feel infinitely more confident working with Arc programs than I did in March. I know that I have tools that COULD be used to process this type of data, if the data provided were properly cleaned and referenced. While Arc may not be the optimal tool for processing horizontally collected LiDAR, the tools I learned can be applied to many of the sediment transport and beach change topics I’m interested in.

I spent two full years of my life tromping through wilderness, sacrificing life and limb for the most complete data sets humanly possible. We covered miles of remote beaches on foot to install geodetic control monuments and take beach profiles, spaced as little as 10 meters apart. We breathlessly operated our pricey new boat in less than one meter of water to collect just one more line of multibeam sonar bathymetric data, or to get the right angle to see a dock at the end of an inlet with our mobile LiDAR. One of the most trying, and perhaps most dangerous, tasks undertaken by our four-person team was the installation of large plywood targets before LiDAR scans. Boat-based LiDAR is not yet a commonly employed data collection method, and our team has been executing foot-based GPS surveys for years. We were dead set on ground truthing our new “high-accuracy” toys before we decided to trust them entirely.

A co-worker created large plywood targets of varying configurations: black-and-white crosses, X’s, circles, targets, and checkerboards. We tested them all and determined the checkerboard showed up best after processing the intensity of the returns from a dry dock scan. For the next 12 months, we hiked dozens of these 60-centimeter-square plywood nightmares all over the Olympic Peninsula for every scan, placing them at the edge of 100-meter cliffs, then hiking to the bottom to be sure we had even spacing at all elevations. After placing each target (using levels and sledges), we took multiple GPS points of its center to compare with the spatial data obtained by LiDAR. We collected so much data, other research groups were worried about our sanity.

Then, we finally sat down to look for these targets in the miles and miles of bluff and beach topography collected. Perhaps you already know what’s coming? The targets were completely impossible to find; generously, we could see about one of every ten targets placed. Imagine our devastation (or that of the co-worker who had done most of the hiking and target building).

So the spatial question is rather basic: where are my targets?

I hope to answer the question with a few different LiDAR data sets currently at my disposal. The first is a full LiDAR scan of Wing Point on Bainbridge Island, WA. It’s one of the smaller scans, covering only a few miles of shoreline. Deeper water near the shoreline allowed the boat to come closer to shore, and the data density is expected to be high. We hope to find a few targets, and have GPS data corresponding to their locations. Currently, the file is about 5 times the size recommended by Arc for processing in ArcMap. On first attempts, it will not open in the program. While dividing the file would be easy with the proprietary software used with the LiDAR, I’d like to figure out how to do that with our tools. This will be one of the first mountains to climb.
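One possible way to attack the file-size problem with our own tools would be to tile the point cloud spatially before bringing the pieces into Arc. A hypothetical plain-Python sketch of the bucketing idea (not something I have actually run on the Wing Point data):

```python
def tile_points(points, tile_size):
    """Bucket (x, y, z) points into square tiles keyed by (col, row) index.

    Each tile could then be written out and opened separately, keeping
    every piece under the size a program is happy to load.
    """
    tiles = {}
    for x, y, z in points:
        key = (int(x // tile_size), int(y // tile_size))
        tiles.setdefault(key, []).append((x, y, z))
    return tiles

# Three made-up points landing in three different 10 m tiles
pts = [(5.0, 5.0, 1.0), (15.0, 5.0, 1.1), (5.0, 15.0, 0.9)]
tiles = tile_points(pts, 10.0)
```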

The second data set is a more recent target test scan. Since my departure and determining the frustrating reality of the plywood targets, the group has found some retired Department of Transportation (DOT) signs. They have used gorilla tape and spray paint to create target patterns, similar to the test done with the original batch. I’ve been given one line of a scan of these new target hopefuls. My goal here is to ascertain the abilities of ArcMap for processing target data and aligning it with GPS points, without the added trials of trying to find the darn targets. Of course, I’m already hitting blocks with this process, as well. Primarily, finding the targets requires intensity analysis. Intensities should be included in the .LAS file I’m opening in ArcMap, but they are not currently revealing themselves. My expectation is that this is related to my inexperience with LiDAR in ArcMap, but that remains to be seen.

[Figure: PGB_Target_Test_pano]

Writing this post, I’m realizing that my link to spatial statistics currently seems far in the future. Just viewing the data is going to be a challenge, since the whole process is so new to me. The processing will hopefully result in an error analysis of the resulting target positions, when compared to the confidence of ground collected points. Furthermore, the Wing Point data was taken for FEMA flood control maps, and that sort of hazard map could be constructed once rasters or DEMs are created.

A large part of me is horrified by how much I’ve taken on, deciding to figure out how to use ArcMap for LiDAR processing when my experience with the program is already rather primitive. However, I’m excited to be learning something new and somewhat innovative, not to mention helpful to the group for whom I spent so many hours placing targets.