Public Lab Research note


Spectrometer Noise

by stoft | April 16, 2016 07:01 | #12987

stoft was awarded the Empiricism Barnstar by warren for their work in this research note.


Abstract

This is a follow-on to my discussion of Spectrometer Stability and is an attempt to observe and compensate for what appears to be significant "drift and noise" from the camera. Again, the test configuration has been designed to reduce the source and mechanical noise to zero so as to observe only camera noise. There is evidence of both drift and noise, evidence that the noise is primarily Gaussian, and evidence that much of the noise can be diminished through multi-frame averaging. However, drift compensation remains an issue.

References

PLab 3 Spectrometer Upgrade Prototype

Spectrometer Stability

Spectrometer Time Filter

Spectrometer Noise Solution?

Spectrometer Peak-Hold

Spectrometer DVD-Alignment Auto-Correction

Disclaimers

When sampling and analyzing drift and noise, it is not always easy to correlate those errors with a specific source. While camera AGC and detector noise remain the most likely causes, there may be other factors yet undiscovered.

Sample Rate

In the previous set of stability tests, data was accumulated at only one point per minute; a useful overview, but it missed a lot of detail. The general rule for sampling is the Nyquist rate, which states that the minimum rate needed to detect a periodic signal at a given frequency is 2x that frequency. (Imagine representing a sine wave by just 2 points: one at the peak and one in the valley.) However, two things result: 1) that sample rate gives a poor representation of the signal's waveform, and 2) with non-periodic waveforms it tells us nothing about what happens between those points.
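As an illustration (the frequencies here are chosen arbitrarily, not taken from the camera data), a short Matlab sketch comparing a 1 Hz sine sampled at the Nyquist minimum of 2 samples/sec against 20 samples/sec shows that the minimum rate detects the tone but says little about its shape:

```matlab
% Illustrative sketch: a 1 Hz sine sampled at the Nyquist minimum of
% 2 samples/sec (points landing on peaks and valleys) vs 20 samples/sec.
% Both rates "detect" the tone, but 2 sps says little about its shape.
f = 1;                          % 1 Hz test tone
t_fine = 0:0.001:2;             % quasi-continuous reference
t_nyq  = 0.25:0.5:2;            % 2 sps, landing on peaks and valleys
t_fast = 0:0.05:2;              % 20 sps
plot(t_fine, sin(2*pi*f*t_fine), '-', ...
     t_nyq,  sin(2*pi*f*t_nyq),  'o', ...
     t_fast, sin(2*pi*f*t_fast), '.');
legend('signal', '2 sps samples', '20 sps samples');
xlabel('time (sec)'); ylabel('amplitude');
```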

Given that the camera is capable of 30 frames/sec, I wrote some Matlab code to extract one line of pixel data at ~6 frames/sec and store a total of 15 minutes of raw spectral-line data (~10 MB) for later analysis. Again, the same mechanically rigid proto V3 and Solux 4700K lamp were configured exactly as before while collecting the data.
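A minimal sketch of this kind of capture loop, assuming Matlab's Image Acquisition Toolbox (the 'winvideo' device name, 640-pixel width and row index are placeholders, not the values from the actual script):

```matlab
% Sketch only: grab ~6 frames/sec for 15 min, keeping one pixel row
% per frame. Assumes the Image Acquisition Toolbox; 'winvideo', the
% 640-pixel width and the row index are placeholders.
vid = videoinput('winvideo', 1);        % open the camera
row = 240;                              % row crossing the spectral band
fps = 6;
nFrames = fps * 15 * 60;                % 15 minutes of samples
lineData = zeros(nFrames, 640, 3, 'uint8');
for k = 1:nFrames
    frame = getsnapshot(vid);           % one RGB frame
    lineData(k, :, :) = frame(row, :, :);
    pause(1 / fps);                     % crude ~6 fps pacing
end
delete(vid);
save('spectral_lines.mat', 'lineData'); % ~10 MB of raw line data
```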

Plots

The first plot shows 15 min of R/G/B/S data ('S' meaning the (R+G+B)/3 spectrum curve) with the sample number as the X-axis units. Each trace is the value of the same single pixel (470, 550 and 620 nm for R/G/B, and 550 nm for S) as it is recorded over the time period. The Y-axis is the pixel intensity reported by the camera.

STest_0-15RGBS6sps.gif
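For reference, a sketch of how single-pixel traces like these can be pulled from the stored line data; the column indices standing in for 470, 550 and 620 nm are placeholders that depend on the wavelength calibration:

```matlab
% Sketch: extract per-frame values of fixed pixels from the stored
% line data. Column indices are placeholders for the calibrated
% positions of 470, 550 and 620 nm along the spectral line.
load('spectral_lines.mat', 'lineData'); % nFrames x width x 3 (uint8)
pxB = 120; pxG = 300; pxR = 480;        % placeholder pixel columns
R = double(lineData(:, pxR, 1));        % red channel trace
G = double(lineData(:, pxG, 2));        % green channel trace
B = double(lineData(:, pxB, 3));        % blue channel trace
S = mean(double(lineData(:, pxG, :)), 3);  % (R+G+B)/3 at 550 nm
plot([R G B S]); legend('R', 'G', 'B', 'S');
xlabel('sample number'); ylabel('pixel intensity');
```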

Note that it is easy to identify both drift and noise in these signals, and that R/G/B have different noise levels which do not correlate with their average intensity values. I do not have an explanation for this as yet. The next plot is the same data, just a "zoom-in" on the middle of the above plot to show the noise in a bit more detail. Based on the first plot, the same "random" appearance was expected.

STest_0-15RGBS6spsZoom.gif

Analysis

First, it would be good to know a bit more about this noise; one simple method is to plot its distribution:

STest_0-15RedDistrib.gif

STest_0-15GrnDistrib.gif

STest_0-15BluDistrib.gif

STest_0-15SpecDistrib.gif
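A distribution like those above can be generated directly from a trace; a sketch for the Blue channel (the 50-bin count is arbitrary):

```matlab
% Sketch: plot the distribution of the Blue channel trace from above.
% hist() is used for portability (histogram() in newer releases);
% the 50-bin count is arbitrary.
hist(B, 50);
xlabel('pixel intensity'); ylabel('occurrences');
title('Blue channel distribution');
```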

While these distributions are not exactly the same, they are all similar to the Blue channel, which appears reasonably Gaussian. This is helpful because 1) Gaussian noise was expected and 2) averaging the data is a simple and effective method to reduce its effect. To check this, the Blue channel data was processed with a 31-sample running average and the resulting distribution is plotted below:

STest_0-15BluDistrib31ptAvg.gif
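The smoothing itself is a one-line moving average; a sketch using conv (newer Matlab releases offer movmean for the same purpose):

```matlab
% Sketch: 31-sample running average of the Blue trace, via a
% normalized boxcar kernel. 'same' keeps the output length; the edge
% samples are biased low and could be trimmed in a careful version.
N = 31;
Bavg = conv(B, ones(N, 1) / N, 'same');
hist(Bavg, 50);                         % distribution after averaging
xlabel('pixel intensity'); ylabel('occurrences');
```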

As another visualization of the effectiveness of this level of averaging, the R/G/B/S plot was re-generated after applying the same averaging to each channel.

STest_0-15RGBS6sps31ptAvg.gif

Conclusions

These plots, especially the last two, show:

1) Camera noise can be reduced by averaging each pixel (of the selected line of pixels crossing the spectral band) over about 30 frames; roughly 5 sec of recording (see the sketch after this list).

2) Some drift remains but, at least at 550 nm in the combined spectral plot, the error would be reduced to ~+/- 2.5%.

3) Doing no frame averaging, and thus keeping all of the noise, essentially means accepting a potential ~10% error each time a "capture" is performed; every capture becomes a "roll of the dice".
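As a sketch of point 1, averaging ~30 consecutive frames of the stored line data into a single low-noise spectrum (using the lineData array from the capture sketch above):

```matlab
% Sketch: average ~30 consecutive frames of the stored line data into
% one low-noise spectrum (~5 sec of frames at 6 fps). lineData is the
% nFrames x width x 3 array from the capture sketch above.
N = 30;
block   = double(lineData(1:N, :, :));
avgLine = squeeze(mean(block, 1));      % width x 3 averaged RGB line
specS   = mean(avgLine, 2);             % (R+G+B)/3 composite spectrum
plot(specS); xlabel('pixel (wavelength)'); ylabel('intensity');
```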


5 Comments

Great analysis. Based on this, I wonder if we should more strongly encourage people to do live capturing, and to have a time averaging default setting... Basically make it default to do an average of X frames. What do you think?

What is a good minimum of frames to average?




Yes, I'd agree that time averaging would help and, if possible, it should be the default to reduce this noise component. I do not have 30 fps data, only 6 fps data, but I suspect the noise likely looks Gaussian at any frame rate. Assuming that is true, it becomes a question of the number of captures and however long that takes. The impact is that the user would need to keep their system, and test configuration, constant for that period to assure the smallest DC drift. (This again points to the need for mechanical stability.) As for the number of data points ... the running average of 31 points (~5 sec at 6 fps) reduced the noise distribution by about half, but the effect is not linear. I'll think more about extracting a curve to show the trade-off, but having to take data for a few seconds seems likely.
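For ideal white Gaussian noise, the residual after an N-point average should fall as 1/sqrt(N); a quick synthetic check of that trade-off curve (simulated noise, not the measured camera data):

```matlab
% Sketch: residual noise vs. averaging-window size for synthetic
% unit-variance Gaussian noise; should track 1/sqrt(N).
x  = randn(100000, 1);                  % synthetic white noise
Ns = [1 2 4 8 16 31 64];
res = zeros(size(Ns));
for i = 1:numel(Ns)
    y = conv(x, ones(Ns(i), 1) / Ns(i), 'valid');
    res(i) = std(y);
end
semilogx(Ns, res, 'o-', Ns, 1 ./ sqrt(Ns), '--');
legend('simulated residual std', '1/sqrt(N)');
xlabel('averaging window N'); ylabel('relative residual noise');
```

If the measured 31-point reduction is only about half, rather than the ideal 1/sqrt(31) of roughly 0.18, that would suggest the residual is dominated by correlated drift rather than white noise.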



Excellent; so we'll think about timeframe plus # of samples. I think we could prompt people in the interface to "ensure stable reading" during a few seconds -- even as we get folks using more mechanically stable setups, this could be relevant if they've customized or added a lighting/sample-holding setup, or are trying for portable readings or something.

In terms of interface, we could have a checkbox that says "[ ] 5-second exposure for noise reduction" or something like that, and show a popup for the 5 seconds that says "Do not move: recording..."; what do you think?




Ideally, the user would select a "stable" ('smooth' or 'broad peak') region to monitor, and the software would then take samples (and average) while monitoring stability (low DC drift) and notify the user when the measurement is done, or has timed out with an unstable result. This is how most instruments that deal with measurement drift work. Just defaulting to blind measurements for 5 sec MIGHT give better results, but if the user is causing drift with an unstable setup, then the result is only an average of a lot of noise and gives a false indication of stability. The actual stability must be measured in order to tell the user it was, in fact, stable. Actually, this basic measurement (even without the averaging) should be a mandatory part of SWB.
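A sketch of that kind of monitored measurement loop; getRegion() is a hypothetical stand-in for reading the mean intensity of the user-selected region, and the threshold, sample count and timeout are placeholder values:

```matlab
% Sketch: average samples while monitoring DC drift in a user-selected
% "stable" region. getRegion() is a hypothetical stand-in for reading
% the mean intensity of that region; tuning values are placeholders.
driftTol = 2;                 % max allowed drift, intensity counts
nWanted  = 30;                % samples needed for the average
tMax     = 20;                % timeout, seconds
acc = [];
baseline = getRegion();       % hypothetical region reading
t0 = tic;
while numel(acc) < nWanted && toc(t0) < tMax
    v = getRegion();          % new reading of the monitored region
    if abs(v - baseline) < driftTol
        acc(end+1) = v;       % stable: accumulate the sample
    else
        acc = [];             % drifted: discard and start over
        baseline = v;
    end
end
if numel(acc) >= nWanted
    result = mean(acc);       % stable, averaged measurement
else
    error('Unstable reading: timed out.');
end
```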



Actually, you need several additional pieces for SWB: 1) average 3-5 parallel lines, 2) monitor stability and take data until it IS stable, or time out with an error, 3) time-average the 3-5 line pixel data, and 4) monitor the non-band dark field and subtract its average as the residual DC offset from ambient light.
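Pieces 1) and 4) might look roughly like this (row indices and the dark-field location are placeholders; the frame comes from a capture as in the earlier sketch):

```matlab
% Sketch of pieces 1) and 4): average 5 parallel rows across the band,
% then subtract the mean of a dark region outside the band as the
% ambient-light DC offset. Row indices are placeholders.
bandRows = 238:242;                     % rows crossing the spectrum
darkRows = 100:104;                     % rows well away from the band
frame = double(getsnapshot(vid));       % one RGB frame (see capture sketch)
spec = mean(mean(frame(bandRows, :, :), 1), 3);   % averaged 1 x width line
dark = mean(mean(mean(frame(darkRows, :, :))));   % scalar DC offset
spec = spec - dark;                     % remove ambient-light offset
plot(spec); xlabel('pixel (wavelength)'); ylabel('corrected intensity');
```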


