Public Lab Research note


NDVI microscopy

by MaggPi | July 18, 2018 09:20 | #16741



This is my first use of Image Sequencer (https://publiclab.github.io/image-sequencer/examples/#steps=), so I am not sure what to expect.

The image examples are microscopic images of plant cells. The IR image was collected with an 850 nm LED, and the color image with a white-light LED. I know NDVI is designed for remote sensing applications, so I am not sure whether this is worthwhile. I am also not sure whether the microscope slide and cover glass change the images' spectral properties.

The first image is a pine leaf example. Is that the right image sequence for NDVI analysis?

Slide1.JPG

The second image is of a leaf and has a flat-field problem. Can flat-field correction be added to Image Sequencer? I believe the supplied version of the Raspberry Pi camera has its own flat-field correction, so we need to provide an alternate correction when the Pi camera is used with a different (objective) lens (see the sketch below).
Slide2.JPG
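For reference, here is a minimal sketch of conventional flat-field correction with OpenCV and NumPy, assuming a "flat" frame (a blank, evenly lit slide photographed with the same lens and lighting) and a "dark" frame (lens covered); all filenames are placeholders:

```python
import numpy as np
import cv2

# Placeholder filenames -- substitute your own captures.
raw = cv2.imread("slide.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float64)
flat = cv2.imread("flat.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float64)  # blank, evenly lit slide
dark = cv2.imread("dark.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float64)  # lens covered

# Standard flat-field correction: divide out the illumination pattern,
# then rescale by the mean of the flat so overall brightness is preserved.
gain = flat - dark
corrected = (raw - dark) * gain.mean() / np.clip(gain, 1e-6, None)
corrected = np.clip(corrected, 0, 255).astype(np.uint8)

cv2.imwrite("slide_flatfield.jpg", corrected)
```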

Image Sequencer works great (amazingly easy to use), but I am a little nervous about how it actually works. My experience with Photoshop and OpenCV is that processing steps often depend on many decisions that vary with the situation.

@warren, @icarito, @amirberAgain, @bronwen, @tech4gt


10 Comments

Wow, this is really amazing! 🎉



@warren awards a barnstar to MaggPi for their awesome contribution!



@cfastie you might enjoy this one! 




Great job, @MaggPi... this is super cool!



I'm just presenting on your great post on the NASA AREN call right now (https://publiclab.org/aren) and we have questions!

  1. did you illuminate these separately at separate times?
  2. did you use the same, full-open camera and only change the light, instead of filtering separately?

Very cool! Folks on the call are excited about this test and would love to learn more!




We also just tried using different colormaps, with some success! Stretched:

download.png

And "fastie":

download-1.png
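For anyone reproducing this outside Image Sequencer, here is a minimal sketch of applying a colormap to an NDVI array with matplotlib; 'RdYlGn' is a built-in stand-in for the custom "fastie" map, and the array here is placeholder data:

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder data -- substitute a real NDVI array with values in [-1, 1].
ndvi = np.random.uniform(-1, 1, (480, 640))

plt.imshow(ndvi, cmap='RdYlGn', vmin=-1, vmax=1)  # colormap choice is an assumption
plt.colorbar(label='NDVI')
plt.axis('off')
plt.savefig('ndvi_colormap.png', bbox_inches='tight')
```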




MaggPi,

This approach is quite intriguing. You are correct that NDVI was invented and then used for 40 years primarily for high altitude aerial or satellite images (each pixel can include multiple square meters or hectares of ground surface). Producing NDVI images from low altitude (kite, balloon, drone) photography requires major changes in the way the NDVI images are interpreted (each pixel can include a small part of single leaf), and this fact is lost on some practitioners. Producing NDVI images from microscopic photos of plants requires even more modifications to how we interpret the results (only the pixels capturing chloroplasts might be relevant).

One important consideration is that the microscopic photography must be done on freshly prepared plant material. Is that a cross section of a living pine needle? NDVI depends on the way plant pigments absorb visible versus NIR light and that changes as the plant is stressed. Preparing the leaf sample for microscopy can cause stress, but it could take hours or a day for that stress to change the way light is absorbed by the pigments. So there is time to capture photos while the pigments are still behaving normally.

Traditionally, photos used for NDVI are captured while vegetation is illuminated by the sun and sky. Sunlight has a particular ratio of red:NIR wavelengths which is key to the production of NDVI images. NDVI is just a measure of the difference between how much of the red versus NIR in that sunlight is reflected from the vegetation. If the vegetation were illuminated by a giant LED, the NDVI result could be drastically different because the ratio of red:NIR in the LED might differ from that of sunlight.

In your example, you used different LEDs to make the visible and NIR photos (I assume you used the red channel in the RGB photo for your red data?). So there was some ratio of red:NIR in the light effectively illuminating your subjects, but we don't know how it compares to the ratio in sunlight.

Did you use a Pi NoIR camera to take both photos? If so, how much NIR was emitted by the white LED? And how much of that NIR was captured by the red channel in the sensor? These answers determine what wavelengths were captured by the channel you used to represent visible light.

Because you used two separate photos to make each NDVI image, the exposure of the photos could have altered the effective ratio. Healthy foliage reflects several times more NIR than red light. That is the difference that must be captured to make an NDVI image. If you make two photos of plant pigments, one of reflected red light and one of reflected NIR light, and both are well exposed photos, then there will be little difference between the brightness of the pigments in the two photos. The adjustments made to the exposure (brightness) of each photo will have made the brightness of both photos similar. Computing NDVI for each pixel with those two photos will have little meaning.

A potential workable approach could be to:

  1. Illuminate your living or freshly prepared plant sample with sunlight or artificial light which mimics the red:NIR ratio of sunlight.
  2. Take a photo with a full spectrum camera (pi NoIR) with a red filter in front of the lens. That filter must transmit only red wavelengths.
  3. Take a photo of the identical scene with the same camera with an NIR filter in front of the lens. That filter must transmit only NIR light.
  4. The two filters should transmit the same percentage (e.g., 100%) of either red or NIR light.
  5. The same exposure settings must be used for both photos (same ISO, shutter speed, f/stop, gain).
  6. Use the red channel of the red photo and any channel (or the mean or sum of all three) of the NIR photo to compute NDVI (NIR-Red)/(NIR+Red) for each pixel.

A remaining problem with this approach is that the camera sensor is not as sensitive to NIR light as it is to red light. So even if you control everything as described above, the values in the NIR photo will not be as large as they should be to represent how bright the NIR light reflected from the sample was. To adjust the result to compensate for that, you could use the sum of all three channels in the NIR photo or you could just multiply the NIR value in each pixel by a fudge factor (also called calibration constant). Or you could find a red filter which transmits only a portion of the span of red wavelengths (e.g., 640-660nm) and an NIR filter which transmits a wide range of NIR wavelengths (720-900nm).
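As a concrete illustration, here is a minimal sketch of steps 5 and 6 plus the calibration factor just described, assuming two aligned photos taken with identical exposure settings; the filenames and the NIR gain constant are placeholders:

```python
import numpy as np
import cv2

# Placeholder filenames -- two aligned photos of the same scene,
# taken with identical exposure settings (step 5).
red_photo = cv2.imread("red_filter.jpg").astype(np.float64)
nir_photo = cv2.imread("nir_filter.jpg").astype(np.float64)

red = red_photo[:, :, 2]      # red channel (OpenCV stores images as BGR)
nir = nir_photo.mean(axis=2)  # mean of all three channels (step 6)

# Placeholder calibration constant ("fudge factor") compensating for the
# sensor's lower NIR sensitivity; it must be determined empirically.
NIR_GAIN = 1.0
nir = nir * NIR_GAIN

# NDVI = (NIR - Red) / (NIR + Red), computed per pixel.
ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)

# Map NDVI from [-1, 1] to [0, 255] for display as a grayscale image.
cv2.imwrite("ndvi.png", ((ndvi + 1) / 2 * 255).astype(np.uint8))
```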

With such a system, you should be able to capture microphotographs which produce NDVI images which show clearly where chloroplasts are. Or you can take a normal color photograph in which chloroplasts will be the only thing that is green.

Chris




Wow, thanks for the great response! I did this fairly quickly and I will try to catch up with the questions:

  1. did you illuminate these separately at separate times? Yes, first visible and then IR.
  2. did you use the same, full-open camera and only change the light, instead of filtering separately? Yes and no: the camera settings were the same and there were no physical changes to the camera (except the light swap), BUT the Raspberry Pi NoIR camera's automatic gain control (AGC) was on, so most likely different gain settings were used.
  3. Is that a cross section of a living pine needle? No, most likely dead for years.
  4. I assume you used the red channel in the RGB photo for your red data? No, it was RGB.

Here is a reprocessed RGB-split version. It's not exactly the same as before since the IR image is also split.

Slide2.JPG

  5. Did you use a Pi NoIR camera to take both photos? Yes.
  6. How much NIR was emitted by the white LED? Not certain.
  7. How much of that NIR was captured by the red channel in the sensor? Not certain.

My initial thinking is that, given the roughly 100 nm separation between the white and IR (850 nm) LEDs and their high output, the amount of overlap would be difficult to notice.


Comment 1: Since the LED-source approach doesn't reproduce the sun's red:NIR ratio, it seems inappropriate to call the result an NDVI measurement. The approach, however, may be useful in a different context: 1) it exploits the key NDVI reflectance difference between visible and IR; 2) the basic equation (Image A - Image B)/(Image A + Image B) seems useful for general image enhancement even if the input isn't spectrally pure enough for a valid NDVI result; and 3) it's very easy to implement (on/off LEDs) and could also be easily automated through the Raspberry Pi GPIO (see the sketch below).
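A minimal sketch of that automation, assuming the white and IR LEDs are driven from two GPIO pins (pin numbers are placeholders) and using the picamera library with exposure and gains locked so the two captures are directly comparable (this also avoids the AGC issue mentioned above):

```python
from time import sleep

import RPi.GPIO as GPIO
from picamera import PiCamera

WHITE_LED_PIN = 17  # placeholder BCM pin numbers -- match your wiring
IR_LED_PIN = 27

GPIO.setmode(GPIO.BCM)
GPIO.setup(WHITE_LED_PIN, GPIO.OUT)
GPIO.setup(IR_LED_PIN, GPIO.OUT)

camera = PiCamera()
camera.iso = 100
sleep(2)  # let auto-exposure settle before locking it
camera.shutter_speed = camera.exposure_speed
camera.exposure_mode = 'off'  # freeze gain (disables AGC)
gains = camera.awb_gains
camera.awb_mode = 'off'       # freeze white balance
camera.awb_gains = gains

def capture_with(pin, filename):
    """Light one LED, capture a frame, then switch the LED off."""
    GPIO.output(pin, GPIO.HIGH)
    sleep(0.5)  # let the LED output stabilize
    camera.capture(filename)
    GPIO.output(pin, GPIO.LOW)

capture_with(WHITE_LED_PIN, 'visible.jpg')
capture_with(IR_LED_PIN, 'ir.jpg')

camera.close()
GPIO.cleanup()
```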

Comment 2: It does seem like a lot of wasted work, since green chloroplasts are visible under white light. Is it possible that other (non-green but IR-reflective) plant areas contribute to the NDVI response?

Comment 3: It might be possible to mimic the red:NIR ratio better with other LEDs. The likely candidates would be red and IR (950 nm) LEDs: this would increase the spectral separation, and both are low cost. See below for a red LED / 850 nm IR LED trial on a pine cross section.

Slide5.JPG

Appreciate all the interest, comments and support.




Very interesting post! NDVI is correlated with chlorophyll pigments, correct? Different phytoplankton types have different concentrations of chlorophyll so NDVI could potentially be used to differentiate between types of phytoplankton. That could be used to inventory species in a specific body of water or it could potentially be used as an assessment of water health. There are certain species responsible for algal blooms (including toxic blooms) and if you could get a measure of their concentration in a sample you could monitor the growth over time leading up to an algal bloom. I’m not sure if any of this is feasible, but it’s an idea.



