Contour generation for orienteering maps

Sverre Froyen

Jun 1, 2016, 11:03:46 AM
to last...@googlegroups.com
Hi,

I brought this question up in a private email some time ago and it was suggested that I re-ask it here. I finally had some time to work on the issue and I have made some progress on a possible solution. The original question was:

I’m with the Rocky Mountain Orienteering Club and I’ve been playing around with LAStools, Terje Mathisen’s Perl scripts and OpenOrienteering Mapper with a goal of generating a set of contours for a nearby park. This park contains lots of open flat areas, and while the contour lines for steeper terrain are very good, contour lines for the flat areas are noisy, tend to meander and show lots of artifacts that are not apparent in the terrain.

I suspect that this is a result of LIDAR “noise” caused by tiny terrain features such as small rocks, tussocks or perhaps small patches of vegetation.

Initially I tried various ways of thinning the data. This was only marginally successful — in the sense that I could only get improvement in a flat area at the expense of losing relevant detail elsewhere. Attempting to increase the contour smoothing parameter had the same effect.

Terje Mathisen suggested working on the raw ground points instead. Following this idea, I came up with the following algorithm, designed so that hillsides receive less smoothing than flat areas.

1) Select a new (not yet processed) LIDAR point (P0).

2) Collect all points in a circle around P0.

3) Fit a linear regression model to the point set from (2).

4) Calculate a weighted average of the heights for the point set (2), where the weights depend on the distance and height difference between P0 and the point in (2). The height difference is calculated using the regression model from (3), not the actual height of the points.

5) Copy P0 to the output using the height from (4) instead of its original height.

6) Goto (1)

The weight calculation uses the height difference to suppress the influence of points uphill or downhill from P0, essentially averaging over the full circle in flat areas and over a narrow ellipse oriented with the long axis parallel to the (expected) contour line on hills.

The attached image shows 0.5 m contours for a piece of terrain that includes a flat area and a hillside (the diameter of the circle is 100 m). The red contours are generated from the old, original data and the blue contours from the smoothed data (using the procedure above). As you can see, the smoothed-data contours in the flat area are considerably smoother than the raw-data ones, and in the hilly part they are virtually the same.

Actual parameters used in this particular case are:

Radius of circle in (2): 10 m
Weight of point in (4): exp(-d/scale)
where d is: sqrt(dx^2 + dy^2 + (5*dz)^2)
and the scale factor is chosen such that the weight at the edge of circle is 0.005

Next step will be to speed up the process. At the moment I’m using las2txt to obtain the points in step (2). Having to relaunch las2txt for every pass through the loop limits the processing speed to around 5 points per second. If I can retrieve the points in (1) and (2) from a single process, it looks like this speed could be increased by up to a factor of 100.

Thoughts, comments?

Regards,
Sverre


Michael Collins

Jun 1, 2016, 9:04:47 PM
to last...@googlegroups.com
Sverre,

As you know from Terje's discussion of the Perl scripts (http://tmsw.no/mapping/basemap_generation.html), part of contour interpretation is looking at what's "real" information, and what's noise (accurate without being meaningful).

Your process might help you get to smoother contour lines, but it may still not be meaningful for orienteering. On the other hand, anything you can do to smooth out your contour lines before bringing them into OpenOrienteering Mapper will save you time tweaking things in that program.

Looking at your image, for example, the ridges and gullies implicit in the contour lines in the steep section (red _and_ blue lines) probably wouldn't be useful for an orienteer attempting to traverse that section, since there are few places where the same bend is present in adjacent contour lines. So, it would make more sense to remove all of the small detail, except where a true ditch or ridge is identified by field checking.

What I've done with contour lines is to import them into OpenOrienteering Mapper (at 1 m spacing, even if the final map will be 2.5 m, 3 m, or 5 m contours), but then to manually smooth out any contours where a particular bend isn't shared by the neighboring (intermediate) contours. So, with a final contour interval of 5 m, I would remove any wiggles in the contour lines that don't show up on the neighboring 1 m contours, and I only add form lines where the "2," "3," and "4" contours all show the same feature that doesn't show up on the "0" or "5" contours.

While a script could probably be written to handle quite a bit of this automatically, it's as much about visual interpretation, since that's how it will be used by orienteers. Of course, any of this work might be adjusted by someone field checking and seeing how it presents in real life.


Michael Collins
President, Chicago Area Orienteering Club

My name is Michael R. Collins.  I am a powerful, intimate, courageous, and limitless man.  My purpose is to create a peaceful and loving world by teaching others to love unconditionally.

Terje Mathisen

Jun 1, 2016, 9:14:14 PM
to last...@googlegroups.com
I'm assuming this is a 2D fitting, i.e. you are looking for the local
slope normal?
>
> 4) Calculate a weighted average of the heights for the point set (2),
> where the weights depend on the distance and height difference between
> P0 and the point in (2). The height difference is calculated using the
> regression model from (3), not the actual height of the points.
With the 5X weight for dz you are effectively ignoring points
significantly higher/lower than the center. :-)

>
> 5) Copy P0 to the output using the height from (4) instead of its
> original height.
>
> 6) Goto (1)
>
> The weight calculation uses the height difference to suppress the
> influence of points uphill or downhill from P0, essentially averaging
> over the full circle in flat areas and over a narrow ellipse oriented
> with the long axis parallel to the (expected) contour line on hills.

Right.
>
> The attached image shows 0.5 m contours for a piece of terrain that
> includes a flat area and a hillside (the diameter of the circle is
> 100 m). The red contours are generated from the old, original data and
> the blue contours from the smoothed data (using the procedure above).
> As you can see, the smoothed-data contours in the flat area are
> considerably smoother than the raw-data ones, and in the hilly part
> they are virtually the same.

That's actually quite nice, congratulations!

By doing it this way you avoid any chance of getting smoothing artifacts
like contour lines which cross, something which is quite possible when
you smooth individual contour lines.
>
> Actual parameters used in this particular case are:
>
> Radius of circle in (2): 10 m
> Weight of point in (4): exp(-d/scale)
> where d is: sqrt(dx^2 + dy^2 + (5*dz)^2)
> and the scale factor is chosen such that the weight at the edge of
> circle is 0.005
>
> Next step will be to speed up the process. At the moment I’m using
> las2txt to obtain the points in step (2). Having to relaunch las2txt
> for every pass through the loop limits the processing speed to around
> 5 points per second. If I can retrieve the points in (1) and (2) from
> a single process, it looks like this speed could be increased by up to
> a factor of 100.
At least 100X!

I would use a hash map indexed by (x,y) (truncated to meter resolution),
i.e. collect all points which are located in a given cell.

During processing I would look at an 11x11 cell matrix around the
current point, with a bitmask to exclude those corner cells which have
to be outside the 5 m limit.

What remains is probably just 10-15% more points than actually fit
within the circle, so the final pruning can safely happen during the
distance calculation (d):

dx = x - x0; dy = y - y0; dx2 = dx*dx; dy2 = dy*dy;

Now we do the pruning using the squared horizontal distance:

d2 = dx2+dy2;
if (d2 >= 100) goto next;

Now we know that this point is OK, calculate the scale factor:

dz = 5*(z - z0);
dz2 = dz*dz;
d2 += dz2;

Now we need to calculate the final weight:

We pre-calculate the inverse scale factor, invscale = 1.0/scale, so that
we can multiply instead of divide in the inner loop:

weight = exp(-d * invscale);

Next we notice that d = sqrt(d2), so the weight is a function of d2
alone:

weight = exp(-sqrt(d2) * invscale);

which means the square root, the scaling and the exp() can all be
folded into a single table lookup indexed by d2.

Since we know that d2 will be in the 0.0 to 100.0 range for flat areas
and somewhat more in very steep hillsides, I would approximate the
whole expression with a lookup table, using something like 256 entries:

double approx_exp(double d2)
{
    int i = (int)d2;        /* truncate squared distance to a table index */
    double r = 0.0;         /* points beyond the table range get zero weight */
    if (i < 256) r = exp_table[i];
    return r;
}

If we have 2 points per square meter then we'll need to repeat that
calculation about 180 times per point or 360 times per square meter.

If you need even more speed then I'd look at first compressing each cell
into a single (average) sample.

Terje

--
- <Terje.M...@tmsw.no>
"almost all programming can be viewed as an exercise in caching"
