Public Lab Research note


Testing a Midopt DB660/850 filter: NIR contamination of the red channel

by Corymbia | February 19, 2019 06:16 | #18394

Inspired by several research notes in Public Lab (many thanks to nedhorning, cfastie and others for their work), I recently had an Olympus mirrorless camera (E-M10 Mark II) converted to NIR using a Midopt dual bandpass DB660/850 filter. The filter blocks all wavelengths except for two narrow bands: red and NIR, centered around 660 and 850 nm, respectively. As a result, the blue and green channels record mostly NIR (with possible contamination by red light), while the red channel records mostly red light, contaminated with NIR. Since my aim is to calculate NDVI, I decided to test the NIR contamination of the red channel.
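For reference, the NDVI I'm after is the usual normalized difference, with the blue channel standing in for NIR and the red channel for visible (red) light:

NDVI = (NIR - Red) / (NIR + Red)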

I bought two lens filters:

1. Hoya R72 NIR filter - a filter transmitting only light at wavelengths above ~720 nm (NIR)

2. Hoya IR+UV cut-off filter - a filter transmitting only visible light

I also bought ten ceramic tiles in different colours, including white, grey and black, to serve as reference targets. In a few weeks I will be able to measure their reflectance at 660 and 850 nm with a spectrometer, in order to calibrate the images following nedhorning's advice. For now, I used them together with some plant material (dry and healthy grass, plus three leaves) to take a first look at the NIR contamination.


Figure 1. The camera setup

With the camera on a small tripod (I'll look for a full-sized one next time), I took three photos with the same settings (ISO 400, shutter speed 1/80, aperture f/5.0, white balance set on a red tile), in the following order:

Image 1. No lens filter (red channel: red+NIR, blue channel: NIR)

Image 2. Hoya R72 filter (red channel: NIR, blue channel: NIR)

Image 3. Hoya IR+UV cut-off filter (red channel: red, blue channel: a little red)


Figure 2. Three images taken with the same settings, with 1) no lens filter on, 2) a Hoya R72 Infrared filter, 3) a Hoya IR+UV cut-off filter.


Using the same exposure settings unavoidably led to some degree of over- or underexposure (at least in some channels).

The white balance was custom set on the red tile, resulting in rather weird colours (Fig. 2, top). In Image 1 (taken without any lens filter), the tiles are grey (they probably reflect about as much red light as NIR), while the leaves are blueish (they reflect more NIR, recorded in the blue and green channels, than red light, recorded in the red channel).

I also tested with white balance set on a white surface, and the resulting colours / pixel values / NDVI values were nearly identical (not surprising, considering that the camera cannot see other "colours" anyway).

The photos were saved in JPEG and RAW, which I later converted to DNG using the Adobe DNG Converter. All six images (three in JPEG, three in DNG) were imported into Matlab following Rob Sumner's fantastic guide 'Processing RAW Images in MATLAB' (https://rcsumner.net/raw_guide/RAWguide.pdf).
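For anyone wanting to reproduce this, the core of the DNG import (following Sumner's guide; the file name is just an example) looks roughly like this:

```matlab
% Read the raw CFA mosaic from a DNG written by Adobe DNG Converter
% (uncompressed, not demosaiced), following Rob Sumner's guide.
warning off MATLAB:tifflib:TIFFReadDirectory:libraryWarning
t = Tiff('Image1.dng', 'r');            % a DNG is a TIFF container
offsets = getTag(t, 'SubIFD');          % the raw data sits in a sub-IFD
setSubDirectory(t, offsets(1));
raw = double(read(t));                  % Bayer mosaic, one value per pixel
close(t);

meta_info = imfinfo('Image1.dng');
black = meta_info.SubIFDs{1}.BlackLevel(1);   % sensor black level
sat   = meta_info.SubIFDs{1}.WhiteLevel;      % saturation (white) level
lin_bayer = (raw - black) / (sat - black);    % linearize...
lin_bayer = max(0, min(lin_bayer, 1));        % ...and clip to [0,1]
```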

Finally, I calculated the average pixel value in 17 rectangular "regions of interest" representing 17 different targets:


Figure 3. Reference targets (including 10 ceramic tiles) and the two filters.
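The per-target means came from simple rectangular crops of the processed image, along these lines (the coordinates below are placeholders, not my actual mask; 'rgb' is the demosaiced image scaled to [0,1]):

```matlab
% Average pixel value per channel inside one rectangular ROI.
roi = rgb(200:260, 340:410, :);              % example crop of one target
mean_rgb = squeeze(mean(mean(roi, 1), 2))    % -> [R; G; B] channel means
```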

The two tables below include the mean pixel values for the red and blue channels of the JPEG (top) and DNG images, using the custom white balance set on a red tile. Values range from 0 to 255, where 0 means no light recorded in that particular channel and 255 means saturation (overexposure).

Table 1. Average pixel values in the red (first three columns) and blue (last three columns) channels of Images 1-3, saved in JPEG format.


... and the same for the DNG:

Table 2. Average pixel values in the red (first three columns) and blue (last three columns) channels of Images 1-3, saved in DNG format.


A few things can be noticed:

1) With white balance set on a red tile, regardless of image format and reference target, the red channel values in Image 1 (red light + NIR) and Image 3 (red light only) are very similar. This was the case even for vegetation, where I expected to see a larger difference, since plants reflect NIR much more strongly than red light.

2) Surprisingly, with the R72 lens filter on (the filter blocking all light except NIR above 720 nm), the red channel values were close to 0. This was not the case for the blue and green channels (full data in the link at the end of the note).

3) The blue and green channel values were close to 0 with the IR+UV cut-off filter on (this was expected).

4) Particularly in the case of the DNG, some areas in Images 1 and 3 (mainly the white, red and yellow tiles) were unfortunately overexposed in the red channel. This was difficult to avoid while keeping the settings unchanged across the three photos. The overexposure may also, to some extent, result from incorrect image processing in Matlab.

5) Surprisingly, in the DNG image, blue channel values in Image 2 (taken with the R72 filter) are nearly twice as high as in Image 1 (with no lens filter on). This is not the case with the JPEG images. I haven't figured out the reason yet, but I'm quite sure it must be in my code :) If any of you have an idea what I might have done wrong, please share.

Finally, I calculated NDVI based on pixel values in Image 1 (Fig. 4 for DNG below). The highest values were around 0.8-0.9 (for well-watered grass and a healthy leaf), and the lowest around -0.6 for some of the tiles and the dead grass. As you can see, the camera had some trouble with the black and blue tiles (the first two from the left in the upper row plus the first in the bottom row), where NDVI ranges between -0.1 and +0.2. I'm also surprised by the mostly negative NDVI values for dry grass (I expected low positive values).


Figure 4. NDVI calculated from DNG image (with white balance set on a red tile).
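For reference, the NDVI image above is a per-pixel version of the formula given earlier (a sketch; 'rgb' is the processed DNG array):

```matlab
nir  = rgb(:,:,3);                         % blue channel ~ NIR (850 nm)
vis  = rgb(:,:,1);                         % red channel ~ red (660 nm)
ndvi = (nir - vis) ./ (nir + vis + eps);   % eps guards against 0/0
imagesc(ndvi, [-1 1]); colormap(jet); colorbar
```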

Despite getting reasonable NDVI values, I couldn't stop wondering why the NIR contamination of the red channel seems so low, particularly for vegetation, which reflects NIR to a much higher extent than red light.

This made me think about the role of white balance. In the DNG file's metadata, a set of three white balance coefficients is saved. These coefficients are used when importing images to Matlab, to give pictures a 'proper' colour. For this reason, I went back to this step and processed the DNG images again, this time changing all white balance coefficients to 1, and thus (correct me if I'm wrong) skipping the white balancing altogether.
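In terms of Sumner's pipeline, that step looks roughly like this (a sketch; 'wbmask' is a helper function from his guide):

```matlab
% Normal white balancing: multipliers from the DNG metadata.
wb_multipliers = (meta_info.AsShotNeutral) .^ -1;
wb_multipliers = wb_multipliers / wb_multipliers(2);   % green = 1

% To skip white balancing, override with unit multipliers instead:
wb_multipliers = [1 1 1];

% Apply the multipliers per CFA position to the linearized mosaic.
mask = wbmask(size(lin_bayer,1), size(lin_bayer,2), wb_multipliers, 'rggb');
balanced_bayer = lin_bayer .* mask;
```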

This gave me the following results:

Table 3. Average pixel values in the red (first three columns) and blue (last three columns) channels of Images 1-3, saved in DNG format, without white balancing.


What can we see this time?

1) Without white balancing, the red channel values are again rather similar between Images 1 and 3 (although less so than in the previous example) - but not for vegetation. In particular, red channel values for healthy vegetation in Image 3 (red light only) amounted to only half of those in Image 1 (red light + NIR). This is what I expected to see - healthy vegetation reflects more NIR than red light, which, as far as I understand, leads to a stronger NIR contamination of the red channel compared to objects with constant reflectivity across the spectrum (a rough leakage estimate follows Figure 5 below).

2) With the R72 Infrared filter on, the values in all three channels were similar. This is not surprising, since the camera could only see NIR (around 850 nm), and the red, green and blue pixels are all similarly sensitive to it.

3) In all three images and for all reference targets, values in the red channel were higher than in the blue channel (as far as I understand, because the red channel could always see more light than the blue channel - either NIR or visible). Result? Only negative NDVI values, as the numerator (NIR-VIS) is negative:


Figure 5. NDVI calculated from DNG image (without white balancing).
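To put a rough number on point 1 above: treating Image 3 as pure red and Image 1 as red plus NIR, the NIR leakage fraction of the red channel is approximately (R_Image1 - R_Image3) / R_Image1, which for the healthy leaves here comes out around 0.5.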

I'm planning more tests, mainly to see the influence of different exposure settings (exposure compensation, but also shutter speed/aperture pairs) on the NDVI values. As I mentioned earlier, I will also test the reflectance of my tiles with a spectrometer, and try to calibrate the images following nedhorning's research notes.

Will the calibration "cancel out" the effect the white balancing had on NDVI values? I can't say. Until then, I'm planning to always set custom white balance using a red (or white) card before taking a new set of pictures. Modifying white balance coefficients in image processing is easy; finding out what the light conditions were at the time of taking the images can prove more difficult :)

All the images and data used in this note can be downloaded from my Google drive:

https://drive.google.com/drive/folders/1Qodov4Ci1DRXekC4vlHuVSVFWA3QlWZj?usp=sharing

The Matlab code for importing DNG images can also be downloaded from the drive. I'll also try to upload the one for JPEG images when I clean it up (the only differences are the lack of all the DNG pre-processing and slightly different mask coordinates).

Any comments / advice / questions welcome!


20 Comments

Hi Corymbia – It’s nice to see some new work in this area.

I see that you used the Adobe DNG converter. You might want to look into that to understand what processing is happening in the conversion. My understanding is that most converters do not produce a direct or minimally processed conversion. I used the dcraw program since it produces an image that is closest to the raw pixel values. It might not be an issue but could be worth checking out.

Getting the reflectance data from the red tile could provide some insight into the results you've been getting. I have no idea what the IR reflectance of a tile is, but from your Image 1 photo it doesn't look like its IR reflectance is nearly as high as that of the red paper I was using.

I’m not sure if you can change this but I was not able to view your tables.

I look forward to seeing a note using calibrated RAW data. Keep up the good work.



Hi Ned, Thanks for your comment, it was mainly your notes that inspired my camera conversion :)

I (hope I) fixed the tables by inserting screenshots of them. I'm still fighting with the text formatting, but I hope you can now see the tables - let me know if this isn't the case!

Regarding the DNG conversion, unfortunately I'm no expert in image processing or Matlab, so I simply followed Rob Sumner's advice from his Matlab RAW guide:

"For our purposes, we must do a slight configuration after downloading the DNG Converter. After opening Adobe DNG Converter, click on Change Preferences and in the window that opens, use the drop-down menu to create a Custom Compatibility. Make sure the ‘Uncompressed’ box is checked in this custom combatibility mode and the ‘Linear (demosaiced)’ box is unchecked."

The reason behind my new camera conversion and all the testing is a plant stress experiment my colleagues are running. It will go for about a month, in a glasshouse with a partly transparent roof, meaning different light conditions each day. I was originally planning to simply set the white balance on a red tile/card before each "run", then take a picture of my ceramic tiles (before and after shooting), take pictures of the plants and finish with another picture of the ceramic tiles. However, I'm now thinking of including two or three ceramic tiles in each photo. Any recommendation on which ones to use? I'm considering three with very low, medium and high reflectance, plus perhaps the red one.



The tables look great. Before the end of the week I'll try to compare your converted image with one that I run through dcraw to see if they are similar. Unfortunately I don't have Adobe.

It sounds like a neat project. If you are using the RAW images, and I suggest you do, then the white balance shouldn't be applied. That said, your white-balance setting each day seems sensible. Using the ceramic tiles could be an issue since they look very shiny and are likely influenced by specular reflection, making them very sensitive to the geometry relative to the light source and camera. A rougher surface would allow more diffuse reflection. If you keep the orientation of the tiles the same for each calibration run, that should be OK. It's difficult for me to choose the "best" tiles for calibration without seeing the reflectance data that you'll get from the spectrometer. It would be best if you have high, medium and low reflectance targets for the red and NIR wave bands. Different-density gray targets that reflect evenly from red to NIR would be nice, but you have to make do with what you have. You'll also have to make sure none of the calibration targets saturate the camera sensor.



I just read over Rob Sumner's RAW guide and see Adobe DNG Converter is free. It also looks like he did some pretty rigorous tests so my previous concerns about a robust RAW conversion are reduced.



I realised that my tiles may be too shiny and grabbed some matt paint samples (pieces of matt paper) from a local store. They're small and light, meaning I could easily have them in each photo. Still, they're far from perfect when it comes to reflectance - strongly influenced by geometry. I'll certainly look for better (rougher) targets - would you recommend having them in each picture, or is it enough for calibration purposes to take a shot of them every 50 "tree" pictures or so?



If you're using them for calibration you should be fine taking photos of the targets every so often. The calibration coefficients shouldn't change that much from session to session.



Thanks for the advice. How are the calibration coefficients calculated? I need to take photos of >50 trees in a glasshouse, so even within one session the photos will be taken from different angles and most probably with different exposure settings. That's why I was thinking of including the reference targets in each photo.

Regarding the reference targets, I agree that even matt tiles are not perfect. I also tried matt paint paper samples from a local store - a little better, but similar. My newest purchase is a few pieces of felt (fabric) - blackish plus a lighter and a darker grey/beige. I also bought two pieces of rough sandpaper (black and white), but the black one in particular seems rather shiny. I'm planning to use them all - better to have too many than too few. My colleague will be holding a white coreflute sheet as background for the images. It shouldn't be difficult to stick a few pieces of darker and lighter felt (and/or sandpaper) on the sides, and make sure they are visible in each photo.

I'm also thinking to get a white umbrella to provide a more diffuse light, but I'm not sure it will be possible as there's not much space in the glasshouse.



The calibration coefficients are, in the simplest case, the slope and y-intercept of the linear regression line for reflectance (measured from the calibration targets) vs RAW pixel values, fitted for each wave band. Some plots are in this note: https://publiclab.org/notes/nedhorning/06-23-2014/calibrating-raw-images-a-step-toward-jpeg-calibration
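In Matlab terms, that fit might look like the sketch below; the numbers are made-up placeholders and 'rgb' stands for the processed image array:

```matlab
% Fit reflectance (from the spectrometer) against mean RAW pixel values
% for the calibration targets, one wave band at a time.
pix  = [0.05; 0.22; 0.41; 0.60; 0.78];   % mean RAW values per target
refl = [0.04; 0.20; 0.38; 0.55; 0.72];   % measured reflectance per target
p = polyfit(pix, refl, 1);               % p(1) = slope, p(2) = y-intercept
refl_red = p(1) * rgb(:,:,1) + p(2);     % convert the red band to reflectance
```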

It would be good for you to collect data at least once where you use the calibration panels each time you change the imaging geometry or lighting conditions. You should find that the slope of the linear regression line does not change much but the y-intercept does. If that is the case, then you only need to photograph one tile each time the geometry and lighting conditions change.

If you have access to a spectrometer, you should test the different materials to see what the reflectance is like at different wavelengths. To test for specular vs diffuse reflectance properties, you can set up an experiment where you change the geometry of the light source and/or spectrometer detector to see how much those changes affect the radiance from the targets. If you do a search for bidirectional reflectance measurements or BRDF, you can see how researchers typically do these sorts of measurements.



Thanks, I will try!

One more problem I noticed yesterday was that (at least when I set the white balance on a red/white surface) the red channel gets saturated very easily while the other two are strongly underexposed. I’m using a white coreflute sheet as a background, which probably adds to the problem. I may try painting it grey. Still, the leaves were also overexposed many times. I will need to check the histograms for the three channels each time I take a picture.

Did you have a similar problem?



I use RAW pixel values for calibration analysis, and since white balance doesn't impact RAW values I don't usually worry about it. If you want to use the JPEG data then you can experiment with different white balance targets. I'm not sure what you mean by a red/white surface. To set white balance I would use a homogeneous surface like a red card. Chris Fastie (cfastie) is the king of white balance and he might have some better suggestions.



How do you make sure the white balancing step is omitted? I tried that myself (see table 3 and figure 5 in my note), but the resulting NDVI values were unreasonable (negative for all targets including vegetation). I’m not sure how to interpret that except for the basic explanation that red pixel values were always higher than blue.

With white/red I meant either white or red (as far as I know, with the Midopt filter, setting WB on surfaces of either of these colours gives similar results).



Can you post one of your RAW images? I looked at Image1.dng and it shows the Bayer pattern, so it needs to go through a debayer process to create an RGB image. I assume the Matlab script is doing that. It's quite possible that the RGB order you are using is not the correct order for your camera sensor. For the ImageJ script I use, I need to specify the first four pixels in the image. Most of my images were R-G-R-G, and after looking at your image my first guess would be that your images have the same order, since the upper left pixel is bright.



Sure, I've just uploaded ORF files for all three images to the Google Drive (link at the end of my post).

Unfortunately I couldn't find any information about the Bayer filter arrangement for my Olympus camera. I assumed it was RGGB (RG - first two pixels in row 1, GB - first two pixels in row 2). The order you mention (RGRG) is consistent with that. I tried the other three options, but got very weird results.
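For reference, with an assumed RGGB layout the debayering in Matlab reduces to the built-in demosaic function (a sketch; it requires the Image Processing Toolbox, and 'lin_bayer' is the linearized mosaic in [0,1]):

```matlab
% Demosaic the Bayer mosaic assuming an RGGB arrangement.
rgb = double(demosaic(uint16(lin_bayer * 65535), 'rggb')) / 65535;
```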



I have to run out now but will try to make time to check this in the morning. It's also possible it's due to the nature of the filter and after calibration the NDVI will be as you expect. Without calibration I wouldn't expect a great NDVI result but would expect better results than what you are getting.



I used the following dcraw command:

dcraw -D -W -4 Image1.ORF

to create this image, which has been scaled to 16-bit integers:

https://drive.google.com/open?id=1bFGBqBBAgaVN4eBTLxgsn77AR-8c17h_

That image is what I expect from your camera. The red channel has the highest values for the green vegetation.

How are you getting the RGB values from the DNG image? I thought the DNG image is a single band with the Bayer pattern (since you left the 'Linear (demosaiced)' box unchecked), so to get RGB your software must be converting the Bayer pattern to three layers, and maybe that process is scaling the image somehow? Or maybe I misunderstand the process. I need to look more closely at Rob Sumner's paper. Hopefully I'll have more time tomorrow. Sorry I'm a bit slow with this. It's been a while since I thought about image calibration.



Hi, I have some trouble previewing my processed DNG images in Matlab, but the channel values are similar to the ones in your image. Since the red channel values are always higher than the blue channel values (hence the red hue), NDVI must be negative for all surfaces (NIR - VIS is negative when VIS, i.e. the red channel, is so high).

I came up with another idea to estimate the red channel contamination today. My results show that the contamination isn't a fixed value - it's higher for plant targets than for targets with a flat reflectance curve across the spectrum. I've just looked at the data from the "no white balance" DNG, and it's clear that the difference between the red and blue channels is proportional to the NIR contamination of the red channel. In fact, it's almost a 1:1 relationship (x = 0.983y, R2 = 0.735). That means that by subtracting the blue channel value from the red channel, we can get a good estimate of VIS in the red channel. Once I did that, I got a beautiful NDVI image without any white balancing! :) I'll try to upload some in a while.
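In code, that correction is a one-liner before the NDVI step (a sketch; 'rgb' is the un-white-balanced DNG image):

```matlab
% Estimate the visible (red) part of the red channel by subtracting the
% blue (NIR) channel, then compute NDVI from the corrected values.
vis_est = max(rgb(:,:,1) - rgb(:,:,3), 0);          % red minus NIR estimate
nir     = rgb(:,:,3);
ndvi    = (nir - vis_est) ./ (nir + vis_est + eps);
```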



It's good you discovered that. Yesterday I started to write about the need to subtract the blue band (NIR) from the red band (red + NIR) so you get a better representation of red reflectance in the red band but deleted it since I thought you had already done the subtraction.

My understanding is that for most camera sensors the red channel is a little more sensitive to NIR than the blue channel so you can apply a small offset if you find that but in your case it might not be necessary.



Subtracting the blue channel from the red channel is a good first step for correcting for contamination of the red channel with NIR. With one particular camera that has been characterized for NIR and red sensitivity, it seems that subtracting twice the blue channel from the red channel is appropriate (see thread: https://groups.google.com/forum/#!msg/plots-infrared/aJhM30D6bUM/ZYuNm7gAHQAJ;context-place=forum/plots-infrared). You might have an even better way to make this correction because you did the clever observation with two filters.
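Put generally, that correction has the form VIS ≈ R - k*B with a camera-specific k: roughly 1 here, judging by the regression above, and about 2 for the camera characterized in that thread.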

However, you might be missing some steps. The red and blue channels must also be corrected because the sensor is less sensitive to NIR than it is to red, and because what the camera measures (brightness) is not identical to what you want to know (radiance). This is what the calibration curve is for. To make the calibration curve, you need to know how the proportion of red to NIR in the RAW data differs from the actual proportion of red to NIR radiance in the light travelling from the subject (a leaf) to the camera. That is what the calibration targets are for.

To use calibration targets, you must know how much of the incoming NIR energy is reflected from them compared to how much of the incoming red energy is reflected. A spectrometer can tell you that. Good commercial calibration targets reflect the same proportion of the incoming NIR and red wavelengths (and all other wavelengths). But colored tiles probably do not. So until you measure the spectral reflectance of your tiles, you can't use your tables above to adjust the values in any meaningful way.

For example, in Table 3, the red channel values for the tiles are very similar between Image 2 (NIR) and Image 3 (red). Let's assume your filters were perfect and that the values for Image 2 and Image 3 were identical. Two variables determine how much red and NIR were coming from the tiles to the camera: How much red and NIR was in the light shining on the tiles, and how much of each was reflected from the tiles. Sunlight generally has less NIR than red, but you don't yet know about how much of each is reflected from the tiles.

But this is all getting a bit confusing for me.

Chris



Thank you both, I really appreciate your help.

My tiles were far from perfect, but from next week on I will be able to test with six Spectralon samples (from white to black). That should provide much better results.

Just a question to you both - in Table 3 (or the "DNG: no WB" sheet in the spreadsheet in my drive), the green and blue pixel values are much higher in Image 2 (with the R72 filter on) than in the other two images; within Image 2 they are more or less equal across the three channels, as the camera can only see NIR. Can you think of any explanation? Why would pixel values in the green and blue channels increase after putting on a lens filter? The exposure settings were the same for all three images.



I can think of two possible reasons. The most obvious would be that the illumination changed; the other is that the sensor temperature changed. Sensors like those in a camera can be very sensitive to changes in temperature, which is why a number of high-cost sensors use liquid nitrogen as a coolant. There are probably other possible causes, but those come to mind.


