LiDAR intensity variations across flightlines (+ extra problem: clouds)


Martin Isenburg

Jan 19, 2015, 11:15:48 AM
to LAStools - efficient command line tools for LIDAR processing
Hello,

following up on a discussion on "Intensity images and normalization" that was started a while back and had a recent revival [1], there is now a concrete example after a client meeting today here in the Philippines. The task is to create intensity images from the LiDAR to assist with subsequent object-based classification of vegetation cover using a raster processing package.

Attached are 4 images that illustrate the issues encountered. The first one shows that there are drastic differences in intensity that result in discontinuities in the image that follow the pattern of the flightlines. The second and third images show the brighter and the darker intensities of the two flightlines that cause most of the observed discontinuity. These three images were created in lasview by pressing <2> and <4> to select different flightlines and pressing <c> several times to get the intensity coloring, after running the following:

lasview -i intensity_issue.laz -clamp_intensity_above 255

The final image shows a completely different problem. The darker flightline has partial cloud cover where no returns are recorded. But that is not all. Around the cloud (where the laser beam 'barely' made it through the water vapor) there are drastically lower intensity values. You may notice that the darker flightline looks different here due to the different command line options that were used to enhance the contrast.

lasview -i intensity_issue.laz -clamp_intensity_above 63 -scale_intensity 4
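For readers without lasview at hand, the effect of these two option sets on the raw intensity values can be sketched in a few lines of NumPy. The array below is made up, and whether lasview applies the clamp before or after the scale is an assumption here (clamping first is what the contrast stretch suggests); the sketch is illustrative, not a statement about lasview internals:

```python
import numpy as np

# Hypothetical raw 16-bit intensity values (in real use these would be
# read from intensity_issue.laz, e.g. with a LAS/LAZ reader library).
intensity = np.array([10, 100, 300, 1000], dtype=np.uint16)

# Sketch of: -clamp_intensity_above 255
clamped_255 = np.minimum(intensity, 255)

# Sketch of: -clamp_intensity_above 63 -scale_intensity 4
# Clamp first, then scale, stretching the 0..63 range to 0..252 to
# brighten the darker flightline.
clamped_scaled = np.minimum(intensity, 63) * 4
```

The second variant sacrifices all detail above intensity 63 in exchange for four times the contrast in the dark range, which is why the darker flightline looks different in the last screenshot.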

Given all these obstacles, generating perfect intensity images seems hopeless. However, what do you think is the best combination of strategies to implement for this customer to get fairly reasonable intensity rasters that help subsequent object-based classification of vegetation? I would like to collect ideas before I code up anything new. Are there any ideas beyond what Steven is outlining in [1] ... ?

Regards,

Martin @rapidlasso

intensity_issues_01.jpg
intensity_issues_02.jpg
intensity_issues_03.jpg
intensity_issues_04.jpg

Stoker, Jason

Jan 20, 2015, 7:09:28 PM
to <lastools@googlegroups.com>
Hi Martin-

I personally am OK with not getting perfect, normalized intensity images.  To me an intensity value is supposed to signify the relationship between the photons that were sent out and how they came back to the detector- and understanding how and why they were scattered.  I had always hoped that an intensity value could be understood similarly to a reflectance value in satellite imagery, where we would correct for all systematic, understood calibration components (scan angle, beam divergence, flying height, etc.) and would involve an understanding of radiative transfer and BRDF effects.

You would expect, all other things being equal, that in general the wider the scan angle, the higher the flying height, and the wider your beam divergence, the lower the intensity value would be.  Other influencing (not calibration) relationships should be due to the target- the slope/direction of the surface normal, density/LAI/height effects of vegetation, color/type of underlying bare earth terrain, and other scattering components. These are the types of effects that we actually would want intensity values to help us explain/understand.  If we hit the same target in different swaths, we should expect different intensity values, as the relationship between the sensor and the photons going out and coming back has changed- similar to BRDF effects, except in our case our 'sun' is each outgoing laser pulse.  And the chance that we hit the exact same spot in one swath as in another swath, where we have the exact same interaction between photons and target/scatterers, is so small that we should honestly rarely expect to get the same intensity value (depending on the bit depth, I guess) between swaths for the same XY (and return), without some kind of correction based on trajectory and calibration components.
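The range and scan-angle dependencies described above are often approximated with a first-order correction that undoes the 1/R^2 falloff and the cosine of the local incidence angle. The sketch below is a common textbook-style model, not the calibrated, physics-based correction Jason is asking for; the reference range and function name are illustrative:

```python
import numpy as np

def normalize_intensity(intensity, sensor_range, incidence_angle_rad,
                        ref_range=1000.0):
    """First-order intensity normalization.

    Assumes received intensity falls off with 1/R^2 (so multiply by
    (R/R_ref)^2 to bring all returns to a common reference range) and
    with cos(theta) of the local incidence angle (so divide by it).
    This ignores atmospheric attenuation, BRDF, and target effects.
    """
    range_factor = (sensor_range / ref_range) ** 2
    angle_factor = np.cos(incidence_angle_rad)
    return intensity * range_factor / angle_factor

# A nadir return (0 rad incidence) measured at 2000 m range is boosted
# by a factor of 4 relative to the 1000 m reference range:
print(normalize_intensity(100.0, 2000.0, 0.0))  # 400.0
```

Note that this only removes the systematic geometry terms; the target-dependent effects (slope, vegetation structure, surface type) that Jason lists are exactly what remains afterwards, which is the point of wanting such a correction.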

And if we were really smart about it, we could correct for atmospheric effects like they do for satellite data- although in airborne data we usually are under the clouds and most of the 'atmosphere' unlike satellites, so our atmospheric effects are minimal compared to satellite imagery. I say usually, as Martin shows, in some places atmospheric effects can go below the trees :-)  To compute surface reflectance for satellite imagery for example, we correct for water vapor, ozone, geopotential height, aerosol optical thickness, and elevation using data from MODIS and use radiative transfer models to generate top of atmosphere (TOA) reflectance, surface reflectance, brightness temperature, and masks for clouds, cloud shadows, adjacent clouds, land, and water.  But these are physical-based models, not generic histogram normalization processes.

I think we need a BRDF-type solution/correction, and in my opinion any modifications should be physics-based, because a lot of times the unknown variables (LAI/density/species, understory veg, bare earth discrimination, etc) are the ones we actually want to solve for.  Without a systematic, well understood correction, hopefully in the calibration process, we are just making pretty pictures, and probably confusing the issue.

What I hope does not happen is a type of 'fix' that makes pretty intensity pictures that are not useful as data or information.  Or, if intensity values in a swath are modified to make a project normalized, that those changes are recorded in the metadata.  That way, if people are trying to perform physical-based model relationships (using the relationship between photons out versus photons back), they aren't confused as to why their models aren't working.  They aren't working because the answer they are trying to solve for (intensity) has already been changed.

And IMO this is why swath-based processing systems are really needed: to fully utilize the information that is really coming off of the sensor, instead of working with project tiles after data has been adjusted, or 'fixed'.  If normalized project intensity was required, though, I could see a solution being to record two intensity values- the raw, per-swath value, and the adjusted, normalized project intensity.


Jason M. Stoker
US Geological Survey
National Geospatial Program
Office: 970-226-9227


Mike Windham

Jan 20, 2015, 8:11:18 PM
to last...@googlegroups.com
This comment is purely from a visualization standpoint: we run a "histogram equalization" process across the intensity values, then map that to 0-255, and in almost all cases this produces a much nicer visual experience, including across multiple swaths of data.

For example, in the Denmark data set below, comprised of (I think) 404 LAZ files, you can first see the normal intensity representation scaled across 0-255, and then it switches to our "histogram equalized" version.  We deal with a wide range of data types, and this seems to be the most sane way to produce great visuals from the many different kinds of intensity reporting we see.  Overall, not sure this helps, but analytics across this histogram type of scale might be helpful in some cases.
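The histogram equalization step Mike describes can be sketched as follows: rank each intensity value by its position in the cumulative distribution, then spread those ranks over 0-255. This is a generic sketch of the standard technique, not New Spin's actual pipeline:

```python
import numpy as np

def equalize_intensity(intensity):
    """Map raw intensity values to 0-255 via histogram equalization.

    Each value is replaced by its rank in the cumulative distribution,
    so the output fills the 0-255 range regardless of how the raw
    values are spread (16-bit, clipped, bimodal across swaths, etc.).
    """
    values, counts = np.unique(intensity, return_counts=True)
    cdf = np.cumsum(counts).astype(np.float64)
    cdf /= cdf[-1]                                # normalize CDF to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)    # lookup table per value
    return lut[np.searchsorted(values, intensity)]
```

Because the mapping depends only on rank, two swaths with different absolute brightness but similar intensity distributions end up looking alike, which is exactly why this "works" visually while destroying the physical meaning Jason wants preserved.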
--
Mike Windham | Inventor, US Marine & CEO | New Spin

755 Research Pwy. Ste. 540 Oklahoma City, OK 73104
p. 405-200-1880 Ext. 11  f. 800-360-6949
e. mi...@newspin.com  w. newspin.com




Silvia Franceschi

Jan 22, 2015, 4:05:07 AM
to last...@googlegroups.com
Hi Martin,
consider that the lidar return intensity depends on (1) the path length, (2) the local incidence angle that the laser beam makes with the footprint surface, (3) atmospheric attenuation, (4) the transmit pulse energy of the laser, and (5) the aggregate laser optics and receiver characteristics. When the same laser is used to collect data, one generally assumes variations in #5 uniformly impact the data. Without detailed information about the stability of the particular laser in use, variations in #4 are also assumed to uniformly impact the data. The mitigation of intensity variations due to #2 and #3 is an active research area in the technical literature.
We used two different methodologies to do this normalization:
1. calibrate the difference in intensity values between neighbouring tiles using the intensity values present in the overlapping zone, and then apply this relation to the whole tile;
2. use the algorithm in [1], considering the distance between the aircraft and the terrain, to compensate for differences in flying height, changes in ground topography (though not the effect of surface aspect on the local incidence angle), and other variations in path length, such as observations taken at different scan angles.
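The first methodology above amounts to fitting a transfer function between paired intensities in the overlap zone and applying it tile-wide. A minimal linear (gain/offset) sketch is below; the pairing itself (matching returns across swaths by XY) is assumed already done, and all names and sample values are illustrative:

```python
import numpy as np

def fit_overlap_correction(intensity_a, intensity_b):
    """Fit a linear mapping from swath B's intensities onto swath A's,
    using paired samples from the overlap zone.

    Returns (gain, offset) such that gain * intensity_b + offset
    approximates intensity_a in a least-squares sense.
    """
    gain, offset = np.polyfit(intensity_b, intensity_a, 1)
    return gain, offset

# Hypothetical paired samples from the overlap of two flightlines,
# where B is uniformly half as bright as A:
a = np.array([100.0, 150.0, 200.0, 250.0])
b = np.array([ 50.0,  75.0, 100.0, 125.0])
gain, offset = fit_overlap_correction(a, b)
corrected_b = gain * b + offset  # B's whole tile, now on A's scale
```

A linear model is the simplest choice; a rank-based or piecewise mapping could be substituted where the overlap shows a nonlinear relationship.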

Hope this helps.

If you find some other interesting methodology, I would really appreciate it if you could link the scientific papers so the tools can be improved.

Silvia

[1] Normalizing Lidar Intensities - GEM Center Report No. Rep_2006-12-001