Hi Martin-
I personally am OK with not getting perfect, normalized intensity images. To me, an intensity value is supposed to signify the relationship between the photons that were sent out and the photons that came back to the detector- and to help us understand how and why they were scattered. I had always hoped that an intensity value could be understood much like a reflectance value in satellite imagery, where we would correct for all systematic, understood calibration components (scan angle, beam divergence, flying height, etc.), which would involve an understanding of radiative transfer and BRDF effects.
All other things being equal, you would expect that, in general, the wider the scan angle, the higher the flying height, and the wider the beam divergence, the lower the intensity value. Other influencing (not calibration) relationships should come from the target- the slope/direction of the surface normal, density/LAI/height effects of vegetation, color/type of the underlying bare-earth terrain, and other scattering components. These are the effects we actually want intensity values to help us explain and understand. If we hit the same target in different swaths, we should expect different intensity values, because the relationship between the sensor and the photons going out and coming back has changed- similar to BRDF effects, except in our case our 'sun' is each outgoing laser pulse. And the chance that we hit the exact same spot in one swath as in another, with the exact same interaction between photons and target/scatterers, is so small that we should honestly rarely expect the same intensity value between swaths for the same XY (and return), depending on the bit depth I guess, without some kind of correction based on trajectory and calibration components.
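To make the range part of that concrete, here is a minimal sketch (my own illustration, not any vendor's calibration) of the commonly cited 1/R^2 radar-equation correction with a Lambertian cosine term for incidence angle. The reference range and the Lambertian assumption are simplifications- a real correction would also need emitted-power, aperture, and atmospheric terms:

```python
import math

def normalize_intensity(raw, range_m, ref_range_m=1000.0, incidence_deg=0.0):
    """Range- and incidence-angle-normalize a lidar intensity value.

    Assumes the usual 1/R^2 falloff (so intensity is scaled by
    (R / R_ref)^2 to a common reference range) and a Lambertian
    cosine correction for the angle between the pulse and the
    surface normal. Both are simplifying assumptions.
    """
    range_term = (range_m / ref_range_m) ** 2
    cos_term = math.cos(math.radians(incidence_deg))
    return raw * range_term / cos_term

# Same target seen from twice the range returns ~1/4 the energy,
# so the correction scales its raw value up by 4:
normalize_intensity(100.0, 2000.0)  # -> 400.0
```

This is only the systematic, geometry-driven piece; the target-driven effects above (vegetation structure, surface type) are exactly what would remain after such a correction.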
And if we were really smart about it, we could correct for atmospheric effects like they do for satellite data- although with airborne data we are usually under the clouds and most of the 'atmosphere', unlike satellites, so our atmospheric effects are minimal compared to satellite imagery. I say usually because, as Martin shows, in some places atmospheric effects can go below the trees :-) To compute surface reflectance for satellite imagery, for example, we correct for water vapor, ozone, geopotential height, aerosol optical thickness, and elevation using data from MODIS, and we use radiative transfer models to generate top-of-atmosphere (TOA) reflectance, surface reflectance, brightness temperature, and masks for clouds, cloud shadows, adjacent clouds, land, and water. But these are physics-based models, not generic histogram normalization processes.
I think we need a BRDF-type solution/correction, and in my opinion any modifications should be physics-based, because a lot of the time the unknown variables (LAI/density/species, understory vegetation, bare-earth discrimination, etc.) are the ones we actually want to solve for. Without a systematic, well-understood correction, ideally applied during the calibration process, we are just making pretty pictures, and probably confusing the issue.
What I hope does not happen is a type of 'fix' that makes pretty intensity pictures that are not useful as data or information. And if intensity values within a swath are modified to normalize a project, those changes should be recorded in the metadata. That way, people trying to build physics-based models (using the relationship between photons out versus photons back) aren't confused as to why their models aren't working. They aren't working because the answer they are trying to solve for (intensity) has already been changed.
And IMO this is why swath-based processing systems are really needed: to fully utilize the information that is actually coming off the sensor, instead of working with project tiles after the data has been adjusted, or 'fixed'. If normalized project-wide intensity were required, though, I could see a solution that records two intensity values- the raw, per-swath value and the adjusted, normalized project value.
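As a sketch of that two-value idea (the field names and the linear adjustment here are purely hypothetical; in a LAS file one could imagine carrying the second value as an 'extra bytes' attribute), keeping both values side by side means a physics-based model can always reach back to the raw measurement:

```python
from dataclasses import dataclass

@dataclass
class IntensityRecord:
    raw: int         # per-swath value, straight off the sensor
    normalized: int  # project-adjusted value, for pretty/consistent pictures
    swath_id: int    # which flight line the return came from

def normalize_for_project(raw, gain, offset):
    # Placeholder linear adjustment standing in for whatever
    # project-normalization a vendor applies; a real correction
    # would be physics-based, as argued above.
    return int(raw * gain + offset)

# Both values travel together, and the adjustment is recoverable:
rec = IntensityRecord(raw=1200,
                      normalized=normalize_for_project(1200, 0.9, 50),
                      swath_id=3)
```

The point is not this particular structure, but that the raw value survives alongside the adjusted one, with the adjustment documented in metadata.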