It's great that they did this work and published their findings, but I have some reservations about the image analysis methods: why use a hue-to-wavelength mapping if you can do an absolute calibration with CFL lamps?
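To illustrate what I mean by absolute calibration: CFL lamps have sharp, well-known emission peaks (e.g. mercury lines at 435.8 nm and 546.1 nm, and a europium phosphor line near 611.6 nm), so once you locate those peaks in your image you can fit pixel position directly to wavelength. A minimal sketch, with made-up pixel positions:

```python
import numpy as np

# Hypothetical peak locations (pixel columns) found in a CFL spectrum image
peak_pixels = np.array([212.0, 540.0, 735.0])
# Corresponding known CFL emission wavelengths in nm
peak_nm = np.array([435.8, 546.1, 611.6])

# A linear fit is often enough for a simple grating; bump to degree 2 if
# your residuals say so
coeffs = np.polyfit(peak_pixels, peak_nm, 1)

def pixel_to_nm(px):
    """Map a pixel column to an absolute wavelength in nm."""
    return np.polyval(coeffs, px)
```

This gives you wavelength per pixel independent of the camera's color processing, which is exactly what the hue approach can't guarantee.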
For the past year and a half I have been working at a company called MobileODT, where I participate in the development of a multi-spectral biomedical imaging camera. Last January we presented some of our work at SPIE Photonics West; part of it was verification of color correctness when imaging with a smartphone, see the link below. One conclusion we came to, and were supported in by others at the conference, is that setting your imaging parameters correctly is crucial. We used a Nexus 6 saving RAW DNG images with a fixed white balance. Those images then needed to be converted from RGB to gray; since the source was a raw image, we could apply different white balances, different gammas, etc. A good resource is the rawpy Python package, where you can find different conversion schemes. Whenever you can do something in more than one way, you can bet there are many wrong ways but not more than one right way. A later talk presented a different approach which the authors claimed to be more correct; sadly they didn't share it with me... If you do find a manual on this, please share it!
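To make the RGB-to-gray point concrete, here is a minimal sketch, assuming rawpy is installed and using Rec.709 luma weights as one example of an explicit conversion scheme (the file path and weights are illustrative, not our actual pipeline):

```python
import numpy as np

def rgb_to_gray(rgb, weights=(0.2126, 0.7152, 0.0722)):
    """Convert a linear RGB array (H, W, 3) to gray with explicit weights.

    The defaults are Rec.709 luma coefficients -- one of several valid
    choices; the point is to pick one scheme deliberately and stick to it.
    """
    return np.tensordot(rgb.astype(np.float64),
                        np.asarray(weights), axes=([-1], [0]))

# Sketch of decoding a DNG with rawpy (path is hypothetical):
# import rawpy
# with rawpy.imread("capture.dng") as raw:
#     rgb = raw.postprocess(
#         gamma=(1, 1),          # linear output, no display gamma
#         no_auto_bright=True,   # keep exposure fixed
#         use_camera_wb=True,    # white balance as set at capture time
#         output_bps=16,
#     )
# gray = rgb_to_gray(rgb)
```

Because you start from RAW, every one of those choices (gamma, white balance, gray weights) is explicit instead of baked in by the phone's JPEG pipeline.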
https://spie.org/Publications/Proceedings/Paper/10.1117/12.2252189

It is my *opinion* that using a compressed, lossy format with auto settings may be problematic. You should ideally use hard-coded settings, save in RAW, and make sure you convert to gray properly. Test your system against a known target such as a color calibration card, then start doing the interesting stuff. Rather than a webcam or phone, I'd recommend the Pi camera, or even better the NoIR camera, which can take raw images at low cost.
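For the calibration-card check, something like the following is enough: sample the mean RGB inside each patch and compare against the card's published values. The patch regions and reference values here are made up for illustration:

```python
import numpy as np

def patch_error(image, patches, references):
    """Per-patch RMS error between measured and reference RGB.

    image      -- (H, W, 3) float array, already white-balanced and linear
    patches    -- list of (row_slice, col_slice) regions, one per patch
    references -- (N, 3) array of expected RGB values for those patches
    """
    measured = np.array([image[r, c].reshape(-1, 3).mean(axis=0)
                         for r, c in patches])
    return np.sqrt(((measured - np.asarray(references)) ** 2).mean(axis=1))
```

If those errors are small and stable across captures, your settings are under control and you can trust the interesting measurements that follow.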
Another tip, following Pie's, that I found on the Ocean Optics website: use styrofoam as a reflectance target. It is generally very spectrally flat at ~95% reflectance and easy to find.
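Using that target, absolute reflectance falls out of a simple flat-field correction: capture the sample, the styrofoam reference, and a dark frame under identical settings, then scale by the nominal ~95%. A minimal sketch (the 0.95 is a nominal figure, not a calibrated value for your particular foam):

```python
import numpy as np

STYROFOAM_REFLECTANCE = 0.95  # nominal spectral flatness, per the tip above

def reflectance(sample, reference, dark):
    """Estimate absolute reflectance from sample, reference, and dark frames."""
    sample = np.asarray(sample, dtype=np.float64)
    reference = np.asarray(reference, dtype=np.float64)
    dark = np.asarray(dark, dtype=np.float64)
    # Dark-subtract both frames, ratio out the illumination, rescale to
    # the reference target's nominal reflectance
    return STYROFOAM_REFLECTANCE * (sample - dark) / (reference - dark)
```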
That being said - I do think the work you published here is really cool and interesting!