Martin Isenburg wrote:
> Hello,
>
> the most recent release of LAStools (130409) fixes a bug in the TIF
> raster writer that affected any non-float single-band output such as
> density and intensity rasters or gray shaded images. In addition there
> should be correct geo-references in the GeoTIFF tags now.
>
> The lascanopy tool has a few more metrics. The tool now also produces
> the canopy cover using option '-cov'. The canopy cover is computed as
> the number of first returns above the height cutoff divided by the
> number of all first returns and output as a percentage.
>
> lascanopy -i forest\*.laz -merged -cov -otif
Nice...
>
> Similarly, with the option '-dns' the canopy density can be produced.
> The canopy density is computed as the number of points above the
> height cutoff divided by the number of all returns.
>
> lascanopy -i other_forest\*.laz -merged -dns -obil
Very good....
>
> In addition, the tool can also concurrently produce several height
> count rasters. The option '-c 0.5 2 4 10 50', for example, would
> compute four rasters that count the points whose heights fall
> into the intervals: [0.5, 2), [2, 4), [4, 10), and [10, 50).
>
> lascanopy -i area1\*.laz -merged -c 0.5 2 4 10 50 -oasc
Even better!
>
> In the same manner the option '-d 0.5 2 4 10 50' will produce a
> relative height density raster in which the above counts are divided
> by the total number of points and scaled to a percentage.
>
> lascanopy -i area1\*.laz -merged -d 0.5 2 4 10 50 -odtm
and this is pretty much perfect!
My own code, which does vegetation cover/runnability classification for
orienteering base maps, uses the standard 4 vegetation heights
(ground/low/medium/high) and bases the decision on the percentage of
returns from each class:
I generate a benchmark which consists of a bunch of different patches
which have been manually classified (as "Light green", "Medium green",
"Dense green", "Yellow (open)", "Open with low brush", "High canopy plus
low brush", etc.), then I look for the closest match for all other
terrain areas.
>
> All these products can be requested simultaneously. It would be great
> if you could check the new functionality and report any issues back to
> me. I hope they compute the correct thing but I am (still) not a
> forester. See the README file for more details.
If I can get all the percentage counts from a single run, then I can get
rid of most of my current perl code. :-)
BTW, the method I am using to accumulate the counts is somewhat interesting:
I first classify each point as ground/low/medium/high, then I use this
class value in a lookup table for a set of increments:
The increments are 1, 256, 65536, and (1 << 24), so that I can use a
single 32-bit variable to collect all the counts for a given cell, as
long as the cell size and point density are such that there can never be
more than 255 points of each class in a given cell.
The nearest match search is a brute-force minimum-distance (sum of
squares of the per-class deltas) classifier. I have considered various
ways to speed this up, including memoization, but I would need an awful
lot of cells (and/or a _lot_ of benchmark cells) to make this a win.
Terje
--
- <Terje.M...@tmsw.no>
"almost all programming can be viewed as an exercise in caching"