Hi,
I have 4 separate shapefiles, each containing between 110 and 1700 polygons that were generated by a segmentation classification. For each polygon within the shapefiles, I need to compute the average, min, max, and standard deviation of canopy height, as well as the 3 canopy cover metrics. I would like these output as a CSV, with each row numbered like the polygons (1-110, for example).
So ideally my output for the first shapefile, containing 110 polygons, would be a CSV with 110 rows and 7 columns (average, min, max, std, cov, dns, and gap).
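The desired layout can be sketched with Python's csv module. The metric values below are hypothetical placeholders; the column names just follow the list in this post, and in practice the numbers would come from the tool's per-polygon output:

```python
import csv
import io

# Hypothetical per-polygon metrics: polygon id -> (avg, min, max, std, cov, dns, gap).
metrics = {
    1: (12.4, 0.1, 28.3, 5.2, 61.0, 74.5, 39.0),
    2: (10.9, 0.0, 25.7, 4.8, 55.2, 70.1, 44.8),
}

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["polygon", "avg", "min", "max", "std", "cov", "dns", "gap"])
for poly_id in sorted(metrics):
    # One numbered row per polygon, matching the polygon ids in the shapefile.
    writer.writerow([poly_id, *metrics[poly_id]])

print(buf.getvalue())
```

With 110 polygons this would produce 110 data rows plus the header line.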
To iterate through all of the polygons, I am using -lop polygons.shp.
The data was delivered as tiles, classified as follows:
1 - default/unclassified: laser returns not included in the ground class, composed of vegetation and man-made structures
2 - ground
6 - anthropogenic: permanent man-made features
7 - noise
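For intuition, the per-polygon height statistics can be sketched in plain Python. The (classification, height) pairs below are hypothetical, and restricting the canopy stats to the class 1 (vegetation/unclassified) returns is an assumption, not necessarily what the tool does internally:

```python
from statistics import mean, pstdev

# Hypothetical (classification, height-above-ground) pairs for one polygon.
points = [
    (2, 0.0), (2, 0.1),              # ground
    (1, 14.2), (1, 9.8), (1, 21.5),  # vegetation / unclassified
    (6, 7.0),                        # anthropogenic (excluded here)
    (7, 95.0),                       # noise (excluded here)
]

# Keep only the default/unclassified returns (class 1) for the canopy stats.
heights = [h for c, h in points if c == 1]

stats = {
    "avg": mean(heights),
    "min": min(heights),
    "max": max(heights),
    "std": pstdev(heights),  # population standard deviation
}
print(stats)
```

Dropping class 7 matters: a single high noise return would otherwise dominate the max.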
I was unsuccessful in running this on the tiles, so I used lasmerge to create one file.
I was able to run lascanopy on this single file, but it produced a ton of warnings, several for each polygon, like this: WARNING: polygon 2 has duplicate point at count 137
Why am I getting these warnings, and will they affect my results?
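One plausible reading of that warning is that it refers to the polygon rings in the shapefile, not to the LiDAR returns: a ring that repeats a vertex back-to-back would trip a check like this. A minimal sketch, with hypothetical ring coordinates:

```python
def duplicate_vertex_indices(ring):
    """Return the indices of vertices that exactly repeat their predecessor."""
    return [i for i in range(1, len(ring)) if ring[i] == ring[i - 1]]

# Hypothetical closed ring with one vertex duplicated at index 3.
ring = [(0, 0), (10, 0), (10, 10), (10, 10), (0, 10), (0, 0)]
print(duplicate_vertex_indices(ring))  # → [3]
```

Segmentation outputs often contain such repeated vertices; cleaning the geometry (e.g. removing consecutive duplicates) before running the analysis is one way to silence the warnings.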
As for the output, there are more rows in the CSV than there are polygons in the shapefile. What would account for this?
Also, all of the canopy cover metrics are 0. Is this because of the classification scheme? Is it possible to reclassify data that has already been classified? There are no buffers on the tiles.
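For reference, cover-style metrics are typically a percentage of returns at or above a height cutoff (the definition below is a simplified assumption; the actual cov/dns metrics also distinguish first returns from all returns). If the heights being tested are not normalized the way the tool expects, every return can land on one side of the cutoff and the metric comes out 0:

```python
def cover_percent(heights, cutoff):
    """Percent of returns at or above the cutoff (simplified cover-style metric)."""
    if not heights:
        return 0.0
    above = sum(1 for h in heights if h >= cutoff)
    return 100.0 * above / len(heights)

cutoff = 1.37  # breast height, used here as an assumed example cutoff

normalized = [0.0, 0.2, 5.1, 12.8, 20.3]  # heights above ground -> nonzero cover
low = [0.1, 0.3, 0.9]                     # everything below the cutoff -> cover is 0

print(cover_percent(normalized, cutoff))  # → 60.0
print(cover_percent(low, cutoff))         # → 0.0
```

So all-zero cover metrics are worth checking against the height normalization and the ground classification, since the cutoff is measured relative to the ground.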
Thanks very much for all the help,
Pete