I started the term with some lofty ideas of learning to process LiDAR data in Arc. Throughout the term, I’ve had many trials and tribulations, many as trivial as my inability to open the data I was interested in processing. While the term has felt difficult and trying, as I have written this blog (three times now, thanks to WordPress…), I’ve concluded that I’ve actually accomplished a lot.
Realistically, I started this course with very little knowledge of data processing in Arc (like none at all). I was pretty good at making things look pretty and making informative maps of the places we would next be mapping. When it came to actually pulling in raw data and making sense of it, I probably wasn’t at an “advanced spatial statistics” level. So what I took from the course was a whole bunch of rudimentary skills I hadn’t known I didn’t know.
I pulled data from the group I worked with for two years: the Coastal Monitoring and Analysis Program at the WA Dept of Ecology. They’re really good at foot-borne GPS data collection, walking profiles, and at using jet skis and sonar. While I was there, we acquired a laser to perform LiDAR scans, a multibeam sonar to map bathymetry, and an inertial measurement unit to tie it all together on a constantly moving boat. Many of the dead ends I encountered related to the never-ending battle to synthesize these very complex systems. We’re still working on it; you’ll see what I mean.
So I started with a simple scan. One pass of the laser on a beach where the team had put out retired Dept of Transportation signs as possible targets.
Circled in yellow are the plywood targets made while I was working there and used in many scans. It turns out they’re not very reflective. In red are two stop signs: the one on the left was lightly painted with white paint and then made into a checkerboard with some tar paper; the one on the right was untouched.
The goal here is to view intensity data. When the laser receives a return from one of its laser pulses, it records the intensity of the light being returned. White or bright colors reflect more light back, whereas dark colors absorb some of the light. This means intensity values can almost function like a photo. We can use those to pick out the center of the targets in the scan and ground truth with GPS positions taken from on land. Then we’ll feel more confident with our boat’s scanning accuracy. Right now these targets are just serving to make us 100% sure that our boat data is wrong.
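To make the idea concrete, here’s a minimal sketch (plain Python, nothing Arc-specific; the point format, threshold, and function name are all hypothetical) of how intensity could be used to pick out a target’s center: keep the bright returns and take their centroid.

```python
# A hypothetical sketch: estimate a scan target's center as the centroid
# of its high-intensity returns. The (x, y, intensity) point format and
# the threshold value are illustrative assumptions, not our real data.

def target_center(points, intensity_threshold=200):
    """Centroid of returns at or above the intensity threshold."""
    bright = [(x, y) for x, y, i in points if i >= intensity_threshold]
    if not bright:
        return None  # no bright returns, e.g. an unpainted plywood target
    n = len(bright)
    return (sum(x for x, _ in bright) / n,
            sum(y for _, y in bright) / n)

# Made-up scan of a painted sign: bright white center, dark tar-paper edge
scan = [
    (0.0, 0.0, 250), (0.2, 0.0, 240), (0.1, 0.2, 255),   # white center
    (-0.5, -0.5, 40), (0.6, -0.4, 35), (0.5, 0.6, 50),   # dark edge
]
print(target_center(scan))  # centroid of the three bright returns
```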
Below is a profile-ish view of the target scan, showing intensity. The same things are circled as above. You’ll note the old targets are really hard to distinguish. Also notice the black, low intensity returns at the edge of the stop signs with the high intensity white in the center – this is what we were going for!
Getting this view was really exciting. But as I played with the 3D views in Scene, I discovered something awful about the DOT signs (it was actually really exciting, but awful for those who wanted these targets to work). While the ring of low-intensity returns traces the edge of the stop sign, the high-intensity center returns are not showing at the face of the sign, where they should be. Instead, they are scattered behind the sign. You can see this in the plan-ish view below:
With some help from classmates, we discovered that this is due to the retroreflective material used to give these signs their high reflectivity. When light hits this type of sign from any angle, it is scattered back so that anyone looking at the front of the sign (even without a light source) can see it. This means the laser’s assumption that the light hit the sign and came straight back was not valid; in fact, the light was returning to the laser at angles and taking longer than it should have. What is still unexplained is why there appear to be no returns actually on the face of the sign.
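The range error follows directly from how the scanner converts time to distance. Here’s a simplified sketch (plain Python; the 1 ns delay is a made-up illustration, not a measured number) of how even a tiny extra delay inside the sign’s material would push the computed point behind the sign:

```python
# A simplified sketch of the timing problem: the scanner turns round-trip
# time into range as r = c * t / 2, so any extra delay inside the sign's
# retroreflective material inflates the computed range, dropping the point
# behind the sign. The 1 ns delay below is a made-up illustration.

C = 299_792_458.0  # speed of light in m/s

def apparent_range(true_range_m, extra_delay_s=0.0):
    round_trip_s = 2 * true_range_m / C + extra_delay_s
    return C * round_trip_s / 2

print(apparent_range(50.0))        # ≈ 50.0 m: no delay, correct range
print(apparent_range(50.0, 1e-9))  # ≈ 50.15 m: 1 ns puts it ~15 cm back
```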
Ultimately, the unpainted versions of DOT signs are not going to work for our purposes. But I thought I’d do a last bit of educating myself on what I would do if I had good data to work with. I imported the GPS points taken at the center of each target into Scene, where they could be displayed in 3D. It was easy to see that, beyond the bad target reflectivity, we also have a problem with our system’s calibration. The two points from the plywood targets are circled in purple. Despite the challenge in picking out the centers of these targets, it’s obvious the points do not agree.
My ultimate goal with other scans would be to quantify sediment movement over time by subtracting two surfaces. Although I don’t need to monitor this beach, the scan was one of my smaller ones, so I used it to learn to make a TIN.
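The differencing itself is conceptually simple: once both scans are interpolated onto the same grid, subtract them cell by cell, and multiply by cell area to get volume change. A toy sketch (plain Python with made-up elevations, standing in for rasters derived from two TINs of the same beach at different times):

```python
# A toy sketch of surface differencing: subtract two gridded elevation
# surfaces cell by cell, then multiply by cell area for volume change.
# Grids, elevations, and cell size below are all made up.

def difference_grid(surface_new, surface_old):
    return [[new - old for new, old in zip(row_n, row_o)]
            for row_n, row_o in zip(surface_new, surface_old)]

def net_volume(diff, cell_area_m2):
    """Net sediment change (m^3): positive = accretion, negative = erosion."""
    return sum(d for row in diff for d in row) * cell_area_m2

summer = [[2.5, 2.0], [1.5, 2.0]]  # elevations (m) after calm conditions
winter = [[2.0, 2.0], [2.0, 2.0]]  # elevations (m) after storms
diff = difference_grid(summer, winter)
print(diff)                   # [[0.5, 0.0], [-0.5, 0.0]]
print(net_volume(diff, 1.0))  # 0.0: gain on one cell offsets loss on another
```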
The TIN method, shown in my tutorial, is not intended to account for large data shadows that result from performing horizontal LiDAR (as opposed to airborne LiDAR). This means that it doesn’t like to leave spots uninterpolated and instead creates a triangle over areas of no data. Thus, if we drove by with the boat, scanning, and got a few returns off of a tree or house 500m behind the beach of interest, Arc’s TIN interpolation would create a surface connecting the beach data to the far off tree/house data, and most of that surface would be silly. My method for dealing with this issue was to delete any of this far-off data, since our region of interest (ROI) is the beach.
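That clean-up step amounts to clipping the point cloud to the ROI before interpolating. A minimal sketch (plain Python; a real beach ROI would be a polygon, not the bounding box used here, and all coordinates are made up):

```python
# A minimal sketch of the clean-up step: clip the cloud to the region of
# interest before interpolating, so the TIN can't bridge from the beach
# to a tree or house far behind it. A bounding box stands in for a real
# ROI polygon; all coordinates are made up.

def clip_to_roi(points, xmin, xmax, ymin, ymax):
    return [(x, y, z) for x, y, z in points
            if xmin <= x <= xmax and ymin <= y <= ymax]

beach = [(1.0, 2.0, 0.5), (3.0, 1.0, 0.8)]
backdune = [(500.0, 2.0, 12.0)]  # a stray return off a tree 500 m away
print(clip_to_roi(beach + backdune, 0, 10, 0, 10))  # only the beach remains
```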
Not surprisingly, this was a challenge too. You cannot delete points from a LAS dataset (LASD) in Arc. After many trials, converting it to multipoint turned out to be the best option. This process worked for this small scan, but not for larger data sets. After the conversion to multipoint, I could click areas and delete sections of data. I could not delete individual points, but for my learning purposes, I decided that didn’t matter. I removed the backdune data and as many of the pilings as possible. I then used the “Create TIN” tool and came up with this surface.
Once again, it served to highlight where we have some error in our collection system. The discontinuities on the beach helped us pinpoint where our problems are coming from. Each discontinuity occurs where the laser made more than a single pass over the landscape – it either scanned back over where it had already been, or sat and bounced around (this happens; we’re on a boat). If our boresight angle (the angle at which we tell the system the laser is pointing, relative to the GPS equipment) were correct, the passes would line up no matter what angle we scanned from.
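A simplified 2-D picture of why this shows up as discontinuities (plain Python; the angles and positions are made up): if georeferencing applies heading plus the boresight error instead of the true heading, every computed point is effectively rotated by that error about the boat’s position. Different passes mean different boat positions, so the same ground feature lands in different places.

```python
import math

# A simplified 2-D sketch of a boresight error: the computed position of
# a true ground point is the point rotated by the error angle about the
# boat's position. Two passes with the boat on opposite sides of a rock
# place the same rock in two different spots. All numbers are made up.

def georeferenced_point(true_point, boat_pos, boresight_error_deg):
    """Where a true ground point appears after a boresight rotation error."""
    a = math.radians(boresight_error_deg)
    dx = true_point[0] - boat_pos[0]
    dy = true_point[1] - boat_pos[1]
    return (boat_pos[0] + dx * math.cos(a) - dy * math.sin(a),
            boat_pos[1] + dx * math.sin(a) + dy * math.cos(a))

rock = (0.0, 100.0)  # one real feature on the beach
pass1 = georeferenced_point(rock, (-50.0, 0.0), 1.0)  # boat west of the rock
pass2 = georeferenced_point(rock, (50.0, 0.0), 1.0)   # boat east of the rock
print(pass1, pass2)  # the same rock shows up in two different spots
print(georeferenced_point(rock, (-50.0, 0.0), 0.0))  # zero error: matches
```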
Throughout the term, I came back to a scan we did on Wing Point, on Bainbridge Island, WA. I had mild success with processing the scan: I was able to open it in both ArcMap and Scene. However, most other attempts failed. Creating a TIN caused the system to crash, and it also would have connected many unrelated points (like the ones separated by water, and shorelines with no data in between). After my success with multipoint on the small scan, I tried that route here too, but this scan appears to be too large. I am hopeful that if I have the team trim the data down in another program (the laser’s proprietary software), then I could do a difference analysis once the laser’s communication is perfected and the scans line up properly. Below is a shot of the Wing Point data!
I learned so much throughout this course. I reprocessed a bunch of data from Benson Beach, WA in preparation for my tutorial. Since this post is lengthy already, I am posting the tutorial separately. Obviously, I ran into a bunch of data difficulties, but I feel infinitely more confident working with Arc programs than I did in March. I know that I have tools that COULD be used to process this type of data, if the data provided were properly cleaned and referenced. While Arc may not be the optimal tool for processing horizontally collected LiDAR, the tools I learned can be applied to many of the sediment transport and beach change topics I’m interested in.