What I want to do
In a previous post about water quality measurements with DIY spectrometers, I stated that sensor sensitivity would be one of the challenges to face. The Spectral Workbench software, as currently implemented, uses a single row of the webcam video stream to create spectra. I proposed two things:
1) to average over a few lines of the picture
2) to average over time / increase the exposure time
My attempt and results
The workbench software seemed too intimidating, so I wrote a little Python script that demonstrates how averaging over space and time can help increase the signal-to-noise ratio. The script is available on GitHub if you want to give it a try yourself. I don't have the spectrometer here at the moment, but to show some results I ran the script on my webcam, facing the ceiling.
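To make the idea concrete, here is a minimal sketch of the approach, assuming OpenCV (cv2) and NumPy; the actual script on GitHub differs in its details:

```python
import cv2
import numpy as np

N_FRAMES = 100  # e.g. 25 fps * 4 s
N_ROWS = 10     # number of image rows to average over

cap = cv2.VideoCapture(0)  # first webcam

frames = []
for _ in range(N_FRAMES):
    ok, frame = cap.read()
    if not ok:
        break
    gray = frame.mean(axis=2)  # bin the RGB channels together
    frames.append(gray)
cap.release()

stack = np.array(frames)  # shape: (time, height, width)

# average over time first, then over a band of rows around the centre
mean_image = stack.mean(axis=0)
centre = mean_image.shape[0] // 2
band = mean_image[centre - N_ROWS // 2 : centre + N_ROWS // 2]
spectrum = band.mean(axis=0)  # one intensity value per pixel column
```

Averaging N roughly independent noisy samples reduces the noise standard deviation by a factor of sqrt(N), so 100 frames times a 10-row band should, in theory, cut the noise by roughly a factor of 30.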
Here's the part of the webcam stream evaluated to generate a spectrum (the RGB channels are binned together; the colors encode intensity): Due to the averaging, the noise level went down significantly.
I applied different methods to extract the actual spectrum from the greyscale/intensity pics:
Same, but zoomed in a bit:
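The post doesn't spell out which extraction methods were compared, but two common choices are a column-wise mean and a column-wise median; a small sketch, reusing the `band` array from above:

```python
import numpy as np

# 'band' is the time-averaged intensity band from the sketch above
spectrum_mean = band.mean(axis=0)          # simple column-wise mean
spectrum_median = np.median(band, axis=0)  # column-wise median, more robust
                                           # against hot pixels and outliers
```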
Questions and next steps
I'll test this in the next few days with the spectrometer attached, but I don't see any issues arising. Here, I used 100 frames (25 fps * 4 s) for the average. If the sample is really dim, why not expose for half a minute or longer? The only restriction is probably memory, as the raw data is stored in a NumPy array. I'd definitely recommend including this in the Spectral Workbench software.
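One way around the memory restriction (a suggestion, not part of the original script) is to accumulate a running sum instead of keeping every raw frame:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
acc = None
n = 0
while n < 750:  # 25 fps * 30 s
    ok, frame = cap.read()
    if not ok:
        break
    gray = frame.mean(axis=2)
    acc = gray if acc is None else acc + gray
    n += 1
cap.release()

mean_image = acc / n  # same result as stacking all frames, but constant memory
```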
The next step for me will be to apply this procedure to measuring the typically very low-intensity spectra of water samples. These can be used to determine water quality.
Amendment
Finally, I'm back in my office with the spectrometer. Here's a daylight spectrum: cloudy sky, with some trees on the side. The spectrum is not calibrated at all, but it looks smooth and as expected.
5 Comments
Very cool! I'd love to help write an API macro which does this on the site... it could tag the spectrum with "smoothed" as well.
cool
Actually, SW used to smooth at capture time. The other way to do it is not time-smoothing but averaging in data from the extra spatial dimension in each frame -- but time-smoothing is probably easier in SW, since it first asks you to choose a row and then records data over time.
(an hour later) ...OK, I published a macro adding smoothing using the API: http://publiclab.org/notes/warren/07-19-2013/smoothing-macro-using-spectral-workbench-api
Interesting experiments. Just be careful with the word "enhancement", as it is imprecise in this technical realm. Yes, averaging is useful (assuming the noise is Gaussian and random) in reducing the visual errors caused by that noise. However, it can only give you an average. Another way to think of averaging is as an LPF (low-pass filter) where you are removing (attenuating) the high-frequency information in the data. Yes, you can see the low-frequency info more easily, but you are also eliminating sharp spectral lines (or reducing their amplitude), which thereby reduces the information about those lines. Also realize that averaging does not ADD any new information -- i.e. it does not "enhance" the resolution of a measurement.
Cheers, Dave
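To illustrate the low-pass-filter point with a synthetic example (not from the comment itself): smoothing along the wavelength axis flattens the noise but also attenuates a narrow emission line.

```python
import numpy as np

x = np.arange(500)
line = 100.0 * np.exp(-0.5 * ((x - 250) / 1.5) ** 2)  # narrow emission line
noisy = line + np.random.normal(0, 5, x.size)          # add Gaussian noise

kernel = np.ones(15) / 15.0                            # 15-pixel moving average
smoothed = np.convolve(noisy, kernel, mode="same")

print(noisy.max(), smoothed.max())  # the line's peak amplitude drops markedly
```

Note that this applies to smoothing along the spectral axis; averaging repeated frames over time does not blur spectral lines, since each pixel is averaged only with itself.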
Hi, Philipp -- I did end up adding a "smooth" macro, which I've demonstrated in this post: http://publiclab.org/notes/warren/10-09-2013/trying-to-detect-emission-lines-in-flare-spectra-from-chalmette
It does not save the smoothed data yet, but we'll probably add that soon.