You received this message because you are subscribed to the Google Groups "plots-infrared" group.
This seems to be a timely topic as there have been a couple of other discussions about using a Raspberry Pi camera to make NDVI images (one here: https://publiclab.org/notes/LaPa/12-03-2015/how-do-i-set-a-costum-white-balance-of-the-noir-modulo-cam).
It’s pretty exciting that Fiji could run on a Pi. If so, then anything is possible. Ned Horning’s plugin for Fiji is probably not the right route to an automated system (it requires user input at multiple points), but the plugin code could be a starting point.
The Python code might be more likely to offer a lightweight route to processing photos into NDVI. My recollection is that the various iterations of this code were intended to process single photos from infrared-sensitive cameras with red or blue filters, not photo pairs from dual camera systems, but maybe both versions exist. Versions of Python code are here: https://github.com/publiclab/infrapix/blob/master/README.md and here: https://github.com/p-v-o-s/infragram-js.
The choice of single or dual camera systems requires some thought. The plants will be close enough to the cameras that parallax will be a big issue, so aligning photo pairs will be messy and maybe unsatisfying. This alignment is also processor intensive, so avoiding it might be wise. So this could be a good application of a single camera NDVI system. A Pi NoIR camera with a red filter (like Wratten 25A) might be the best choice (I say that with no information about anybody ever trying that combination). This setup will provide a rather pure NIR image in the blue channel and a mixed red + NIR image in the red channel.
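To make that channel arithmetic concrete, here is a minimal per-pixel sketch in Python. It assumes (as described above, but untested for this particular camera/filter combination) that the blue channel is mostly NIR and the red channel is red + NIR, so subtracting the blue channel from the red channel gives a crude visible-red estimate. The function name and the sample values are mine, not from any existing Public Lab code.

```python
# Sketch of per-pixel pseudo-NDVI for a Pi NoIR photo taken through a
# red filter (e.g., Wratten 25A).
# Assumption (untested): blue channel ~ NIR, red channel ~ red + NIR,
# so visible red is roughly red_channel - blue_channel.

def pseudo_ndvi(red_channel, blue_channel):
    """Return an NDVI-like value in [-1, 1] for one pixel.

    red_channel: mixed red + NIR brightness (0-255)
    blue_channel: mostly NIR brightness (0-255)
    """
    nir = blue_channel
    vis = max(red_channel - blue_channel, 0)  # crude visible-red estimate
    if nir + vis == 0:
        return 0.0
    return (nir - vis) / (nir + vis)

# A leafy pixel: lots of NIR, little visible red -> high pseudo-NDVI
print(round(pseudo_ndvi(200, 180), 2))  # -> 0.8
# A non-plant pixel: little NIR relative to red -> low pseudo-NDVI
print(round(pseudo_ndvi(200, 40), 2))   # -> -0.6
```

Looping this over every pixel of a photo (or vectorizing it) is all the "processing into NDVI" an automated Pi system would need, though the channel-separation assumption is the weak link.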
How much good plant health information can be derived from those two channels is a big question. Generally, such systems (with other cameras) can be effective at distinguishing foliage from non-foliage, but their capability to distinguish stressed from healthy foliage is unproven. In very controlled environments when the lighting is absolutely consistent and things like leaf angle and leaf surface moisture are constant, it might be possible to discern some plant health differences. However, your little parsley seedlings are probably going to be pampered and healthy. This will be a great test of DIY NDVI images to determine if they can tell you anything other than that the plants are still alive.
Your idea of using a background with a “non-plant” signature is intriguing. Plants will generally reflect lots of NIR, so a background that absorbs NIR could make this easy (if you capture an NIR image or channel). Subtracting the background then has the potential to allow quantification of the size of the plant. This might be the most straightforward and interesting automated reporting to try. If the image used for this also had a visible light channel (or a mostly visible light channel) you could also compute average (pseudo-) NDVI for the plant part of the image. It remains to be seen whether that value will have much power to discern very subtle changes in the actual health of the plant.
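The plant-size idea could be automated with something as simple as a threshold on the NIR channel: with an NIR-absorbing background, plant pixels should be much brighter than background pixels. Here is a sketch; the threshold value is a placeholder to be tuned against real images, and the toy 3x3 "frame" is invented for illustration.

```python
# Sketch: estimate plant size by counting pixels whose NIR (blue-channel)
# brightness exceeds a threshold, assuming an NIR-absorbing background.
# NIR_THRESHOLD is a made-up placeholder to tune for real images.

NIR_THRESHOLD = 100

def plant_fraction(nir_image):
    """Fraction of pixels classified as plant, given a 2D list of
    NIR brightness values (0-255)."""
    total = 0
    plant = 0
    for row in nir_image:
        for value in row:
            total += 1
            if value > NIR_THRESHOLD:
                plant += 1
    return plant / total if total else 0.0

# Toy 3x3 frame: bright (plant) pixels against a dark background
frame = [
    [20, 180, 20],
    [190, 210, 175],
    [20, 160, 20],
]
print(plant_fraction(frame))  # 5 of 9 pixels classified as plant
```

Tracking that fraction over time would be a very robust automated report (growth curve), independent of whether the pseudo-NDVI values themselves mean much.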
One of the pitfalls in interpreting NDVI values in close-up images of plants is that the 3D surface of plants produces shadows, highlights, and variation in leaf aspect, each with a different color balance. Ideally, you would want an image of a single, flat leaf at a constant orientation to the camera and light source. This calls for a very broad-leafed and large-leafed plant, maybe something as different as possible from parsley ;^)
To produce the most meaningful NDVI information, it might be worthwhile using Ned Horning’s calibration procedure. This requires calibration targets of known reflectance in the visible and NIR spectral ranges of interest. If you have access to a spectrometer that can measure (even crudely) the percent reflectance of red light (600-700 nm) and NIR (700-900 nm) from some different colored surfaces, you could calibrate your system so the NDVI values are more robust as plant health, lighting, and exposure vary. Ideally, the targets would be in every photo (so they would have to be waterproof), but maybe just intermittent calibration photos would be sufficient.
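The heart of that calibration is just fitting a line per band that maps raw pixel values to measured reflectance. A two-target sketch (this is the idea behind Ned Horning's procedure, not his actual code, and the target values below are made-up placeholders, not real measurements):

```python
# Sketch of two-point calibration: fit a line mapping raw channel values
# to percent reflectance using two targets whose reflectance was measured
# with a spectrometer. Repeat per band (red, NIR). Values are hypothetical.

def fit_line(x1, y1, x2, y2):
    """Return (slope, intercept) of the line through two points."""
    slope = (y2 - y1) / (x2 - x1)
    return slope, y1 - slope * x1

# Hypothetical targets: (raw pixel value, measured % reflectance)
dark_raw, dark_refl = 30, 5.0        # dark target
bright_raw, bright_refl = 220, 80.0  # bright target

slope, intercept = fit_line(dark_raw, dark_refl, bright_raw, bright_refl)

def to_reflectance(raw):
    """Convert a raw channel value to calibrated % reflectance."""
    return slope * raw + intercept

print(round(to_reflectance(125), 1))  # mid-range pixel -> 42.5 (% reflectance)
```

With more than two targets you would fit by least squares instead, but even this crude version should make the NDVI values far less sensitive to exposure and lighting drift between photos.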
I’m looking forward to closely following your tweeting parsley.
Chris
Sort of related -- early (failed) attempts at dual camera and single camera NDVI timelapse videos of plants.