I’m a recovering scientist managing a remote sensing group at the NOAA Coastal Services Center. In my spare time, when I’m not torturing staff, I try to fit in some technical work on lidar processing and distribution.
Submitted by Kirk Waters on June 27, 2012
It seems like every day I hear a statement about high-resolution lidar that bugs the heck out of me. I even hear it from our staff at the Center. So, I thought I'd write a little entry about it and see if any of them pick up on it. So, what I hear is, "We need high resolution lidar for X, Y, and Z." Sometimes it's for estimating sea level change impacts, sometimes it's for habitat modeling; could be for nearly anything. While I do often hear it for sea level change, my example is wetlands delineation in relatively flat areas like South Carolina or Florida. To quote one friend (we'll call him Randy) interested in wetlands:
It appears that most of it will be point spacing of 1.4 meters which will only give me 2’ contours. This will not be that useful for me in the Lowcountry with difficult wetland issues. I need to be able to get to 6” contours or no more than 1’ contours. I wonder how much data there is out there with point spacing of 0.5-1.0 meter?
This sort of problem is not that uncommon, so let's take a closer look at what Randy needs and what he's asking for. Like many others, Randy is confusing accuracy with point spacing (aka resolution). Accuracy is a measure of how close the measured values are believed to be to the true values. In the case of lidar points or a digital elevation model, it's the accuracy of the elevation that we're talking about. The point spacing generally refers to how far apart the measurements are. For a digital elevation model, this would be the grid spacing or resolution.
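When the standards talk about vertical accuracy, the number they use is the root mean square error (RMSE) of the elevation differences at surveyed checkpoints. Here's a minimal sketch of that calculation in Python; the checkpoint values are made up for illustration:

```python
import math

# (lidar elevation, surveyed "true" elevation) in meters at checkpoints.
# Hypothetical values, just to show the calculation.
checkpoints = [(2.31, 2.40), (1.87, 1.80), (3.02, 2.95), (2.55, 2.63)]

# RMSE: square root of the mean of the squared errors.
rmse = math.sqrt(sum((z - z_true) ** 2 for z, z_true in checkpoints)
                 / len(checkpoints))
print(f"Vertical RMSE: {rmse * 100:.1f} cm")  # about 7.8 cm here
```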
So, how does the contour interval come into it? The contour interval allowed is not directly related to the point spacing. When the guidance for contour intervals was developed, nobody was collecting points at anywhere near the spacing we have now. It is the vertical accuracy that determines the contour interval allowed under the National Map Accuracy Standards (NMAS) and the National Standard for Spatial Data Accuracy (NSSDA).
So, let's look at that, because our South Carolina friend, Randy, still has a problem. To make 1' contours and follow the federal accuracy standards, you need 9.25 cm RMSE data. You can do that with lidar and most of our more recent collections are targeted at that. However, I don't think any of the South Carolina collections had that as a specification. Some of them may meet it anyway, so you'd have to look at the QA reports to see what the data came in at. Randy may also be working in marsh areas and marshes often have accuracy problems.
For 6" contours, you'd need under 5 cm RMSE. That's pretty much only going to happen when all the stars align or you're flying from a helicopter. We have seen some data that meets that criterion, but we've never used those specifications for a job because the cost would be very high; we've just gotten lucky that the stars aligned. Note that as technology improves, the stars align more often.
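If you're curious where those thresholds come from, they fall out of the standards' arithmetic: NMAS requires 90% of tested elevations to be within half the contour interval, and for normally distributed errors the 90th-percentile error is about 1.6449 times the RMSE. A quick sketch in Python, under that normal-errors assumption:

```python
FOOT = 0.3048  # meters

def required_rmse(contour_interval_m):
    """Max vertical RMSE supporting a contour interval under NMAS:
    90% of errors must fall within half the interval, and the
    90th-percentile error of a normal distribution is 1.6449 * RMSE."""
    return contour_interval_m / (2 * 1.6449)

for label, interval_ft in [("1 ft", 1.0), ("6 in", 0.5)]:
    rmse_cm = required_rmse(interval_ft * FOOT) * 100
    print(f"{label} contours need RMSE <= {rmse_cm:.1f} cm")
# 1 ft contours need RMSE <= 9.3 cm   (the 9.25 cm figure, rounded)
# 6 in contours need RMSE <= 4.6 cm   (the "under 5 cm" figure)
```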
The last question I'd ask is whether Randy really needs contours to do his work. Randy, like many of us old dinosaurs, grew up with contours on paper maps. Maybe we need some encouragement to look at things differently. Contours are generally something for visual interpretation by a human. You're throwing out a lot of data and information when you make them, and the rules for the intervals are related to that interpretation process, in my opinion. What is he trying to see and/or do? There may be better ways to see features, and you can often see features that are far smaller than the contours will pick up. For example, for some types of features, looking at hillshades made using a very low sun angle can make subtle features easier to see.
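If you want to experiment with that, GDAL's gdaldem hillshade tool takes a sun altitude option, and matplotlib can do the same thing on an array. Here's a small sketch with a synthetic marsh DEM; the data and the 10-degree sun angle are just illustrative:

```python
import numpy as np
from matplotlib.colors import LightSource

# Synthetic 1 m DEM: flat marsh with a subtle 20 cm ridge plus noise.
# Made-up data; a ridge this small would never show in 1 ft contours.
y, x = np.mgrid[0:200, 0:200]
dem = 0.2 * np.exp(-((x - 100) / 5.0) ** 2)
dem += np.random.default_rng(0).normal(0, 0.02, dem.shape)

# A 10-degree sun altitude throws long shadows that exaggerate subtle
# relief a conventional 45-degree hillshade would wash out.
ls = LightSource(azdeg=315, altdeg=10)
shaded = ls.hillshade(dem, dx=1.0, dy=1.0)  # 2-D array of 0-1 intensities
```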
Of course, the reality is that accuracy and resolution aren't totally unrelated. If I look at elevation data with a resolution of 30 meters, I have to wonder what an accuracy measure really means. Is that related to the average elevation within a cell? The minimum? The maximum? If there is significant variability within the cell (maybe a levee runs through it), is there even a right answer for the cell from which to judge its accuracy? As we look at smaller and smaller cells - higher resolution - we expect the height variability within the cell to decrease, so we can at least make a more meaningful statement about the accuracy. But accuracy and resolution aren't perfectly correlated either. Improvements in lidar technology have made both better, but the factors that make a collection high resolution (high pulse rate or multiple overflights) are different from the factors that improve accuracy (closer base stations, narrow swath width, good PDOP).
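Here's a toy illustration of that levee case, with made-up numbers: a single 30 m cell mostly at marsh elevation with a levee crest running through it.

```python
import numpy as np

# Hypothetical 1 m elevations inside one 30 m cell that a levee crosses:
# marsh at about 0.5 m, a 3 m wide levee crest at about 3.5 m.
rng = np.random.default_rng(1)
cell = rng.normal(0.5, 0.05, (30, 30))
cell[:, 14:17] = rng.normal(3.5, 0.05, (30, 3))  # the levee

print(f"mean {cell.mean():.2f} m, min {cell.min():.2f} m, max {cell.max():.2f} m")
# Roughly: mean 0.80 m, min ~0.35 m, max ~3.65 m. No single number
# represents the ground in this cell, so judging its "accuracy" is murky.
```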
So, how do you tell what you need? If you need to see spatial details, you need high resolution. If you need to really know how high something is, you need high accuracy. For a lot of today's problems, you need both. Just make sure you know what you're asking for or you never know what you'll get.