Public Lab Research note

NDVI from Infrablue

by cfastie | June 20, 2013 03:55 | #8308

Image above: All possible pairs of NIR and VIS values that produce NDVI values in the standard range for healthy vegetation (0.2 to 0.8) form a triangular result space.

There has been a concerted recent effort to capture infrablue photos that have a lively orange tint. These infrablue photos of vegetation produce NDVI images with a broad range of values within the range considered appropriate for healthy vegetation. The figure above delineates this range in the Cartesian space of pairs of near infrared (NIR) and visible light (VIS) values that describe two of the three color channels in an infrablue pixel. To get NDVI values in this range, the NIR value must be 1.5 to 9 times greater than the VIS value. In other words, the recorded amount of NIR light reflected from healthy vegetation must be 1.5 to 9 times greater than the measured amount of blue light.
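The 1.5 to 9 ratio follows directly from the NDVI formula, NDVI = (NIR - VIS) / (NIR + VIS). Solving that for NIR/VIS gives (1 + NDVI) / (1 - NDVI), which can be checked with a few lines of Python:

```python
def nir_vis_ratio(ndvi):
    """NIR:VIS ratio that produces a given NDVI value.

    Derived by solving NDVI = (NIR - VIS) / (NIR + VIS) for NIR/VIS.
    """
    return (1 + ndvi) / (1 - ndvi)

# The standard NDVI range for healthy vegetation (0.2 to 0.8):
print(round(nir_vis_ratio(0.2), 2))  # 1.5
print(round(nir_vis_ratio(0.8), 2))  # 9.0
```

So the lower edge of the triangular result space corresponds to NIR being 1.5 times VIS, and the upper edge to NIR being 9 times VIS.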


Two infrablue photos from different infrablue cameras (Powershot SX230 and A810) with Apollo 4400 filters. The histograms for the three color channels for the rectangle of lawn to the right are inset. Note the difference in channel separation between the two photos.

The infrablue photos taken by some cameras are not very orange and do not record the desired ratio of NIR to VIS light. Brenden has taken an exhaustive series of test photos with different cameras and filters, and I have highlighted two of them here. These were taken with Canon Powershots which had their IR block filters removed and had a piece of Apollo 4400 filter in front of the lens. Both cameras were white balanced while pointing at the same white paper. The photo from the SX230 (with a CMOS sensor) is not as colorful as the photo from the A810 (with a CCD sensor). The histograms for the rectangle of grass at the right in these photos show the difference in separation between the red (NIR) and blue (VIS) channels. More separation generally indicates more saturated colors, and also translates into higher NDVI values. NDVI is a scaled ratio of NIR to VIS for each pixel, and widely separated histograms (graphs of all the pixel values) indicate that an average pixel will have a higher NIR:VIS ratio and therefore a higher NDVI value.
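The per-pixel calculation is simple: in an infrablue photo the red channel stands in for NIR and the blue channel for VIS. A minimal sketch, using hypothetical channel values (not measurements from these photos), shows how wider channel separation yields a higher NDVI:

```python
def ndvi(nir, vis):
    """Scaled NIR:VIS ratio for one pixel; returns 0 where both channels are 0."""
    total = nir + vis
    return (nir - vis) / total if total else 0.0

# Hypothetical pixel values: red (NIR) and blue (VIS) channels.
print(round(ndvi(nir=180, vis=60), 2))  # 0.5  (well-separated channels)
print(round(ndvi(nir=120, vis=90), 2))  # 0.14 (channels close together)
```

The same subtraction and division, applied to every pixel, produces the NDVI images below.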


Color NDVI images for the two infrablue photos above. The histograms show the range of NDVI values in the lawn area to the right and the color table used to color the images. The A810 image has higher NDVI values and a greater range of values.

The NDVI images for these photos confirm this. The histogram of values in the rectangle of lawn in the SX230 NDVI image shows an average value of 167 (the range is 0-255). The average value in the lawn in the A810 NDVI image is 199, and the histogram is twice as broad – there is a greater range of values. This greater dynamic range in NDVI makes it possible to discriminate among different levels of plant health.
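The 8-bit digital numbers in these jpeg NDVI images map linearly onto the actual -1 to +1 NDVI scale, so the lawn averages above can be converted directly (a sketch of that linear mapping; 167 and 199 are the averages quoted above):

```python
def dn_to_ndvi(dn):
    """Map an 8-bit digital number (0-255) back to an NDVI value (-1 to +1)."""
    return dn / 255 * 2 - 1

print(round(dn_to_ndvi(167), 2))  # 0.31 (SX230 lawn average)
print(round(dn_to_ndvi(199), 2))  # 0.56 (A810 lawn average)
```

These converted values match the mean NDVI values reported for the floating point grayscale images below.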


Floating point grayscale NDVI images for the infrablue photos above. These images use the actual NDVI values (range -1 to +1) instead of the digital numbers (0 to 255) in most jpeg images. The histograms show the range and distribution of NDVI values computed from the digital numbers in the lawn area of the infrablue photos. Note that the NDVI axes are scaled differently.

The actual NDVI values for these images are used to make the floating point grayscale images. The histograms for the lawn area in these images display the mean NDVI values as 0.31 for the SX230 image and 0.56 for the A810 image. The NDVI axes in these histograms are not scaled the same, but the difference in range extent of NDVI values is obvious. In the A810 image, most of the NDVI values are between 0.4 and 0.7, well within the standard range of NDVI values for healthy vegetation.

Many of Brenden's combinations of camera, settings, white balance, and filter did not produce similar results, which is why his efforts are so important. Thanks much to Brenden for his great contributions.


In case others are interested in the answer to this question:

The histograms of the infrablue photos are screen captures from Photoshop pasted onto a capture of the image in Photoshop with a marquee displayed. The histogram is just for the selected area in the marquee. Histograms have a few display options. The other histograms are screen captures from Fiji. With an image displayed, click Analyze/Histogram. Click "Live" and drag the cursor to select an area of the image. Click "RGB" to toggle through the colors (it won't show them all at once). If the displayed image is floating point, an initial dialog allows scaling the axes, but these scales don't apply to a "live" histogram of a marquee. Press Control + to enlarge the histogram window.


I'm very interested in the work you've done with these and other cameras. With this set of photos, were they taken in JPEG or RAW? I know many point and shoot cams can't do raw, so I'm guessing these are jpegs, but I wanted to confirm that.



These are Brenden's photos, so I'm not sure, but I think he was experimenting with different camera settings, which are mostly irrelevant if you work from the RAW image data. So they are probably jpg, as are all of the infrablue photos I have taken. If the camera records RAW, the custom white balance setting is not applied to the image, so you have to do that after the fact. I have not figured out how to reconstruct what a custom white balance setting does by post processing RAW data. But there must be a way.
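One way to think about reconstructing a custom white balance in post-processing is as a per-channel gain: each channel is scaled so that a neutral reference (the white paper) comes out equal in all three channels. This is only a sketch of that idea, with hypothetical channel values; real RAW converters apply the multipliers before demosaicing and handle clipping and color matrices more carefully:

```python
def wb_multipliers(reference_rgb):
    """Per-channel gains that make the reference patch neutral (equal channels)."""
    target = max(reference_rgb)
    return tuple(target / c for c in reference_rgb)

def apply_wb(pixel, gains):
    """Scale one pixel's channels by the gains, clipping to 8-bit range."""
    return tuple(min(255, round(c * g)) for c, g in zip(pixel, gains))

# Hypothetical raw reading of the white paper in an infrablue photo:
paper = (200, 120, 160)        # R (NIR), G, B (VIS)
gains = wb_multipliers(paper)
print(apply_wb(paper, gains))  # (200, 200, 200): the paper is now neutral
```

With the gains computed from a shot of the paper, the same multipliers could then be applied to every pixel of the RAW data, approximating what the in-camera custom white balance does.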


Ok, that's good to know. At the moment I have a Canon PS G12 with the Schott BG3 infrablue filter that is saving photos as RAW, and my Canon DSLR Rebel T3 with full-spectrum conversion should be arriving shortly. Additionally, my university's biology department has kindly lent me a high-end field spectrometer. My plan is to compare the NRB raw band reflectances from the G12 to the field data. I will be doing the same with the DSLR, but the process will be to use a visible-light-passing filter (from LDP LLC) to return the camera to stock behavior for the target photos, get the field spectrometer data from those targets, and then switch to a filter passing 715 nm and longer (blocking visible) and photograph the same targets.

My question before about the jpgs was related to whether the use of jpg with white balance versus RAW without white balance would be the likeliest explanation for differences in contrast between cameras (versus physical differences in the sensors). This matters because I hope the research I'm starting will be applicable to the widest possible range of Canon cameras, but the large variations in the analysis above made that seem less likely. I suppose I'll add a third step to this analysis: capturing the same targets with the same two cameras at the same times of day, in both RAW and jpg (white balanced).

Thanks again!

