Question: AstroPlant RPi sensory system

Sidney_AstroPlant is asking a question about infragram

by Sidney_AstroPlant | July 03, 2017 14:13 | #14609


Dear community members,

I am currently involved in the development of a hydroponics prototype kit for assessing plant growth in the Netherlands. It is an open-science initiative in collaboration with ESA, aimed at determining the optimum environmental conditions for plants. The kit uses a Raspberry Pi to measure all kinds of environmental data. I have a couple of challenges:


  • I'm new to the field of hyperspectral imaging and found the Infragram initiative very exciting. I use a Pi NoIR camera with a blue filter, but I haven't managed to get the desired images that would allow assessing photosynthesis in plants (a sketch of the computation I have been attempting follows this list). Is there anyone working with the RPi NoIR camera? Preferably I would like to read some documentation if available, or have a contact person to discuss further details. In our case we have a controlled environment and can construct a specific background if needed for optimum pictures. Below is a picture taken with the RPi NoIR camera (taken outside of the kit):

[Test photo taken with the RPi NoIR camera and blue filter]

  • We would like to add water quality testing, mainly electrical conductivity (EC) and pH. Most sensors that can interface with a Raspberry Pi are very expensive, and we are looking for cost-effective solutions.
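
For reference, here is a minimal sketch of the Infragram-style computation I have been attempting, assuming the blue filter leaves the red channel as mostly NIR and the blue channel as mostly visible light; the file name is just a placeholder:

```python
# Minimal sketch: Infragram-style NDVI from a blue-filtered Pi NoIR photo.
# Assumes the red channel is mostly NIR and the blue channel is mostly
# visible (blue) light -- the usual blue-filter approximation.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("noir_blue_filter.jpg"), dtype=float)  # placeholder file name
nir = img[:, :, 0]   # red channel, treated as NIR (assumption)
vis = img[:, :, 2]   # blue channel, treated as visible light (assumption)

ndvi = (nir - vis) / (nir + vis + 1e-6)   # small constant avoids division by zero
print("NDVI range:", ndvi.min(), "to", ndvi.max())
```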

Thanks in advance!

Kind regards, Sidney (on behalf of AstroPlant)



10 Comments

Just wanted to say this looks really cool!



I haven't used the Pi NoIR cam and have not seen your photos, so I can only respond from general principles.

  1. Camera sensors are not as sensitive to near IR as they are to visible light. So even though the sensor might be sensitive to wavelengths between 700 nm and 1000 nm, it captures less of that range than it does in the range of any single color (R, G, B). So the ratio of VIS:NIR (e.g., red:NIR) you can derive from a modified consumer-type camera will not be similar to the actual ratio in the light being reflected from plant leaves. This ratio might even be reversed.
  2. A modified camera (IR cut filter replaced with color bandpass filter) cannot produce both a pure visible light channel and a pure NIR channel. A red filter can produce a fairly pure NIR signal in the blue channel but the red channel will have red and NIR mixed in a mostly unknown proportion. A blue filter will produce a blue channel with blue and NIR mixed and a red channel with red and NIR mixed. With the proper filter combined with the proper sensor, the band of interest will dominate in the mixed channels and the hack will work. Otherwise, not so much.
  3. These weaknesses can be compensated for by altering the values returned by the camera for the channels of interest. This is called either calibration or fudging depending on whether the alteration is based on knowledge of the system or an attempt to make the results look right.
  4. With two cameras and the proper filters, pure NIR and VIS channels can be captured, but the issue of low sensitivity to NIR remains.
  5. If you know (quantitatively) the relative sensitivity of NIR and VIS in your cameras, you can correct for this issue (a rough sketch of such a correction follows below).

Chris
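
To make points 2 and 5 concrete, here is a minimal sketch of what such a correction could look like. The sensitivity coefficients are made-up placeholders, not measured values for the Pi NoIR camera, and treating visible light as a single quantity is a simplification:

```python
# Minimal sketch of the correction in points 2 and 5 above.
# Model (simplified): each measured channel is a weighted mix of visible
# light and NIR, with weights given by the sensor+filter sensitivities.
# The coefficients are made-up placeholders, NOT measured Pi NoIR values.
import numpy as np

# rows: measured channels (red, blue); columns: sensitivity to (VIS, NIR)
A = np.array([[0.2, 0.9],
              [0.8, 0.6]])

def unmix(red_px, blue_px):
    """Recover approximate pure VIS and NIR intensities at one pixel.

    Solving the linear system also compensates for the sensor's lower
    NIR sensitivity, because the NIR estimate is scaled by the inverse
    of its sensitivity coefficient.
    """
    vis, nir = np.linalg.solve(A, np.array([red_px, blue_px]))
    return vis, nir

vis, nir = unmix(200.0, 150.0)          # example pixel values (placeholders)
ndvi = (nir - vis) / (nir + vis)
print(f"VIS={vis:.1f}, NIR={nir:.1f}, NDVI={ndvi:.2f}")
```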

Thanks Chris for your response. My assumption was that the blue filter supplied with the RPi NoIR camera blocks the red (visible) light fully, so you get a clean NIR channel and a clean blue channel. From your response I see that there are two problems: 1) the RPi camera most likely does not capture most of the NIR light, and 2) the light that is captured is mixed, e.g. the red channel still has red and NIR mixed, and the NIR itself is mixed. The only solution I see is to use two cameras with a known relative sensitivity (one of Public Lab's examples). Is this correct? I hope someone else has tried working with the RPi camera, as I would still like to find a solution for it. I added a test picture taken with the camera in the question block above.



Using two cameras solves one of the two problems I described (mixed NIR and VIS). That problem can also be solved by measuring or making assumptions about the proportion of NIR and VIS in the mixed channel(s). This requires some data about the sensitivity of individual channels to particular wavelength bands. Some cameras have been characterized in this way. The sensor in the RPi camera has been characterized in the visible part of the spectrum, but I have not seen results for the NIR portion. A haphazard subset of consumer cameras has been characterized in both the VIS and NIR. You could apply those results to other cameras and this might get you as close as you need to be.

The other problem, that cameras are not as sensitive to NIR as to VIS, can be solved using the same data that solves the first problem. You need to know how much to inflate the NIR values to compensate for the sensor's lower sensitivity in that range.

Ned Horning has been working on a workaround for both of these problems that involves placing targets of known spectral reflectance in the bands of interest in each photo. You then figure out how much the brightness of the pixels in the targets differs from what it should be, and use that information to adjust all the other pixels in the photo. This is more straightforward if the photos are captured in raw format, so you avoid the heavy processing (gamma correction, color balance, etc.) that happens when the camera makes JPEGs. I think this method can be derailed by the issue of mixed NIR and VIS in each channel, because different parts of the photographed scene can have different proportions of NIR and VIS, so you never really know what that proportion is for any pixel. But maybe it works when applied only to the foliage parts of an image, where the NIR:VIS ratio is more predictable.
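
Very roughly, that target-based idea could be sketched like this; the reflectances and pixel values are made-up placeholders, and each band would get its own fit:

```python
# Rough sketch of a target-based calibration in the spirit described above.
# Targets of known reflectance appear in each photo; fit a linear mapping
# from pixel value to reflectance, then apply it to the rest of the image.
# All numbers are made-up placeholders; each band needs its own fit.
import numpy as np

target_reflectance = np.array([0.05, 0.25, 0.50, 0.85])   # known target reflectances
target_pixels = np.array([18.0, 70.0, 131.0, 222.0])      # mean pixel values over the targets

# Straight-line fit (more defensible on raw data than on gamma-corrected JPEGs)
slope, intercept = np.polyfit(target_pixels, target_reflectance, 1)

def to_reflectance(channel):
    """Convert a pixel-value array for this band to estimated reflectance."""
    return slope * np.asarray(channel, dtype=float) + intercept
```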


Thanks for adding a photo. NDVI is based on the assumption that the light illuminating the foliage has a certain proportion of NIR and VIS -- the proportion that is in sunlight (e.g., satellite photos don't show vegetation when it is cloudy, so satellite NDVI exists only when direct sunlight illuminates the foliage). So taking photos outside will be more useful for troubleshooting.

If your project will be using artificial lighting, then your NDVI system might have to be calibrated for the particular proportion of NIR and VIS in those lights. Also, fluorescent lamps do not produce much NIR, so you can't really do NDVI under some indoor lighting.

Photographic exposure is very important for computing NDVI. Both the VIS and the NIR channels should be well exposed, with most areas of foliage neither over- nor underexposed.
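
One quick way to check that, assuming the foliage has already been roughly masked; the clipping thresholds are placeholders:

```python
# Sketch: flag over- and underexposed foliage pixels in one channel.
# 'channel' is an 8-bit image array, 'mask' marks foliage pixels;
# the clipping thresholds are placeholders.
import numpy as np

def exposure_report(channel, mask, low=10, high=245):
    values = channel[mask]
    under = np.mean(values <= low)
    over = np.mean(values >= high)
    return f"{under:.1%} underexposed, {over:.1%} overexposed foliage pixels"

# Example with random data, just to show the call
rng = np.random.default_rng(0)
demo = rng.integers(0, 256, size=(100, 100), dtype=np.uint8)
print(exposure_report(demo, np.ones_like(demo, dtype=bool)))
```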


The kit is fully enclosed, with LEDs at a certain blue:red ratio (I have to verify what the exact ratio is). This brings me to the conclusion that it might not even be possible, as chances are there is almost no NIR emitted from the LEDs. I assume the plants would have to be taken outside to take the pictures. I'll try to contact Ned Horning and scan through his posts to see what is possible; I think we'll need more support to get this feature going. Again, thanks for your elaborate response.


A simple option might be to turn on some NIR LEDs when the photography happens. The NIR:VIS ratio might not match sunlight, but if you know what it is you can work with it. The goal is to know whether the foliage is absorbing lots of red and reflecting lots of NIR (healthy foliage) or whether there is less "difference" between those two values (less healthy foliage). Because you control the quality of the light, you have an advantage over NDVI in the wild where the intensity and spectrum of sunlight varies through the day and year. Your "NDVI" might not be comparable with Landsat's, but it can still be a very good indicator of how foliage health is changing. If your goal is limited to internal comparisons, you don't have to solve the problem of low sensitivity of cameras to NIR. As long as there is some NIR, and you can measure its intensity well enough to detect changes, you can monitor how the reflection of red and NIR are related to each other.

If you are going to turn on special NIR lights for the photography, you could also turn on special red lights (and turn off all other lights). Then you really know both the intensity and spectral quality of the light impacting the plants. So instead of Landsat's approach which is to measure the intensity of radiance at narrow wavelength bands in the red and NIR (and filtering out all that other sunlight), you can illuminate the plants with narrow bands of red and NIR. Then the filtering in front of the camera sensor can be less critical because you know all the light is in two narrow bands.
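
To sketch what that capture sequence might look like on the Pi: the GPIO pin numbers, settle times, and file names below are placeholders, and it assumes the red and NIR LED banks are switched from GPIO pins using the RPi.GPIO and picamera libraries:

```python
# Sketch of a two-band capture sequence under controlled LED lighting.
# GPIO pin numbers, settle times, and file names are placeholders; this
# assumes the red and NIR LED banks are switched from GPIO pins and runs
# on the Pi itself with the RPi.GPIO and picamera libraries installed.
import time
import RPi.GPIO as GPIO
from picamera import PiCamera

RED_LED_PIN = 17    # placeholder BCM pin driving the red LED bank
NIR_LED_PIN = 27    # placeholder BCM pin driving the NIR LED bank

GPIO.setmode(GPIO.BCM)
GPIO.setup([RED_LED_PIN, NIR_LED_PIN], GPIO.OUT, initial=GPIO.LOW)

def capture_under(pin, filename, camera):
    """Light one LED bank, let it settle, take a photo, switch it off."""
    GPIO.output(pin, GPIO.HIGH)
    time.sleep(2)                          # placeholder settle time
    camera.capture(filename)
    GPIO.output(pin, GPIO.LOW)

with PiCamera() as camera:
    time.sleep(2)                          # let auto-exposure settle first
    camera.exposure_mode = 'off'           # then lock exposure so the two frames are comparable
    camera.awb_mode = 'off'                # lock white balance for the same reason
    camera.awb_gains = (1.0, 1.0)
    capture_under(RED_LED_PIN, 'red_band.jpg', camera)
    capture_under(NIR_LED_PIN, 'nir_band.jpg', camera)

GPIO.cleanup()
```

From the two saved frames you could then take, say, the red channel of each (whichever channel responds most strongly under each LED bank) and compute (NIR - red) / (NIR + red) per pixel as the internal index.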


Using NIR LEDs seems like a good option to look into, and I'm happy to hear that I might not have to deal with extensive filtering! I just got a response from Ned Horning; I hope to share this discussion with him to look for options.




Glad you posted here; as you know, I'm a fan of the project. :)



Thanks for the positive messages! I see that both of you are active in open-source hardware and software initiatives. We are planning to share our hardware/software architecture at some point.


