Introduction
[or how my effort to characterize the raspberry pi camera sensor made me design a spectrometer]
In a previous note I described my efforts to create a well-characterized multispectral camera. At the end of the post I mentioned that to accomplish this task I needed the spectral response curve of the sensor chip in the raspberry pi camera. Having this spectral response curve would allow for more sophisticated inverse modeling of vegetation characteristics.
The spectral response of most imaging sensors is determined by the dye formulation of the tiny colour filters in the Bayer array. In practice every imaging sensor is monochrome; it's only by adding this Bayer filter, a checkerboard of tiny red/green/blue filters alternately overlaying the pixels, that you can extract colour from your imaging sensor. Sadly, the spectral responses of most imaging sensors seem to be corporate secrets. This doesn't mean you can't measure them, though!
Measuring the spectral response of a sensor is generally done using a monochromator, a light source which emits one particular wavelength at a time, and a spectrometer, a device which measures the intensity of a light source as a function of wavelength. The monochromator emits light of a known wavelength, which is measured simultaneously by the spectrometer and the imaging sensor. The spectrometer provides a true intensity measurement at this wavelength, while the imaging sensor provides an intensity measurement for every Bayer filter colour. Cycling through all wavelengths yields the spectral response curves: the sensitivity of each Bayer filter colour across all wavelengths. Although this methodology is sound, finding a monochromator is rather hard. Yet an alternative approach exists. Below you can see the spectral response of a Canon 40D as measured by maxmax.com using a monochromator; this is what I'm aiming for.
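To make the bookkeeping of such a sweep concrete, here is a minimal sketch in Python. All numbers are synthetic stand-ins of my own invention; in a real run both arrays would come from the instruments.

```python
import numpy as np

# Synthetic stand-in for a monochromator sweep: at each wavelength the
# spectrometer reports the true source intensity and the camera reports
# the mean digital number (DN) for each Bayer channel.
wavelengths = np.arange(400, 701, 10)            # nm, sweep steps

# Stand-in spectrometer reading: true source intensity per wavelength.
true_intensity = np.full(wavelengths.shape, 1000.0)

# Stand-in camera readings, modeled as Gaussian sensitivity bumps per channel.
def bump(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

camera_dn = true_intensity[:, None] * np.stack(
    [bump(600, 40), bump(540, 40), bump(460, 40)], axis=1)

# Spectral response: camera signal normalized by the true intensity at each
# wavelength, then scaled so every channel peaks at 1.
response = camera_dn / true_intensity[:, None]
response /= response.max(axis=0)
```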
A monochromator uses a diffraction grating to split a known light source into its component wavelengths. The grating itself is not selective: at any given time it outputs all wavelengths at once, each at a slightly different angle. The monochromator then passes only the desired wavelength, as shown below (left image).
So, in theory we could use a diffraction grating to do all the work for us, without the intermediary and elusive monochromator! However, the transmission properties of a diffraction grating are wavelength dependent. This is why, in an ordinary monochromator/spectrometer setup, you need to measure the true intensity as well as the image sensor response simultaneously. The only way to calculate the spectral response curve of the sensor is to factor in the wavelength dependent transmission properties of the grating. For most classroom gratings these properties are not documented, but when ordering from an optical instrument builder they are!
In short, given a known light source (characterized using a spectrometer) and a cheap but characterized grating, it is possible to get a crude approximation of the spectral response of any imaging sensor using one image (well, two actually, as you need to calibrate the location of the spectrum relative to wavelength, using a CFL light for example)!
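A rough sketch of what that single-image analysis could look like, assuming the spectrum runs horizontally across the frame and two CFL mercury lines (~436 and ~546 nm) have been located by eye. The pixel positions, source spectrum, grating efficiency and image data below are all stand-ins, not measured values:

```python
import numpy as np

# Hypothetical pixel columns where two known CFL mercury lines were found.
px_known = np.array([210.0, 480.0])      # assumed peak pixel positions
wl_known = np.array([435.8, 546.1])      # nm, Hg emission lines in a CFL

# Linear pixel-to-wavelength calibration from the two reference peaks.
slope, offset = np.polyfit(px_known, wl_known, 1)

columns = np.arange(0, 1024)
wavelengths = slope * columns + offset

# Stand-ins for the characterized quantities: the lamp spectrum (from a
# spectrometer) and the grating's transmission efficiency (from its
# datasheet), both resampled onto the calibrated wavelength axis.
source = np.interp(wavelengths, [350, 800], [900.0, 1100.0])
grating_eff = np.interp(wavelengths, [350, 800], [0.6, 0.8])

# Placeholder camera signal: mean R/G/B per column, averaged over the rows
# the spectrum covers in the image.
camera_dn = np.random.rand(columns.size, 3)

# Factor out the source and the grating to leave the sensor response alone.
response = camera_dn / (source * grating_eff)[:, None]
```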
The accidental raspberry pi spectrometer
As mentioned above, a diffraction grating splits light into its component wavelengths. This is the basic principle of spectroscopy. However, the angle at which the diffracted light exits the grating depends on the groove density of the grating. To correctly align any sensor (preferably parallel) with the grating and register the diffracted light, a little math is required.
For a diffraction order m, an incident beam of light at angle θi, a wavelength λ and a groove spacing d (the inverse of the groove density), the diffracted beam will exit at an angle θd according to:

d [ sin(θi) − sin(θd) ] = m λ

or, solved for the exit angle:

θd = asin( sin(θi) − m λ / d )
Using this relationship we can calculate the incident angle at which the exiting light of a given wavelength will be orthogonal to the grating (or parallel to a sensor), or alternatively the angle at which the detector should be placed.
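For those who want to play with the numbers, a small sketch of both forms of the equation. The ~685 nm design wavelength in the example is my assumption, chosen near the red end of the visible range:

```python
import numpy as np

def exit_angle(wavelength_nm, incident_deg, lines_per_mm, order=1):
    """Diffracted exit angle from d [sin(ti) - sin(td)] = m * lambda."""
    d_nm = 1e6 / lines_per_mm                      # groove spacing in nm
    s = np.sin(np.radians(incident_deg)) - order * wavelength_nm / d_nm
    return np.degrees(np.arcsin(s))                # NaN if no such order exists

def orthogonal_incident_angle(wavelength_nm, lines_per_mm, order=1):
    """Incident angle for which the diffracted beam exits orthogonal
    to the grating (td = 0)."""
    d_nm = 1e6 / lines_per_mm
    return np.degrees(np.arcsin(order * wavelength_nm / d_nm))

# For a ~685 nm design wavelength this lands close to the angles quoted below:
print(orthogonal_incident_angle(685, 300))         # ~12 degrees
print(orthogonal_incident_angle(685, 1351))        # ~68 degrees
```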
Using the above equation I ran an analysis for a set of incident angles and wavelengths to extract the overall diffraction properties of a 300 lines/mm and a 1351 lines/mm grating (a professional Thorlabs grating and DVD grooves, respectively). The figures from this analysis are shown below.
The optimal grating angles, where the diffracted light exits orthogonal to the grating (going straight into a sensor), are calculated to be ~12 and ~67 degrees for 300 and 1351 lines/mm respectively.
I'll be using a 300 lines/mm grating at a 12 degree angle in my design. Some drafting in FreeCAD has rendered an initial design of a raspberry pi spectrometer. I have yet to laser cut the design, but I have all the parts, so that should happen some time next week. Rather soon I'll be able to put everything together and calculate the spectral response curves of the raspberry pi camera sensor, which is rather exciting!
4 Comments
Thanks for sharing your work. I've had the same trouble trying to find the spectral response curve for the OV5647 sensor, so this technique was very interesting.
If I can back out the spectral response I'll post it online for sure. So give it some time.
Uh, perhaps I am not fully understanding on my initial readthrough, but since we have a big need to characterize sensitivity in the webcams we use in our spectrometers on Spectral Workbench (for exposure calibration), could we use an exposure-calibrated spectrometer to characterize the webcam in a non-exposure-calibrated spectrometer?
There are three variables in this setup:
1) the light source's output, which needs to be characterized or idealized (you could use the sun as a reference)
2) the wavelength dependent transmission of your grating
3) the spectral response of your bayer array on the imaging sensor
You need at least two of these to solve for the third! In my setup I have 1 and 2 characterized, which would provide me with 3. This analysis can be done for any imaging sensor.
If you want to get at 2, the wavelength dependent transmission of the grating, you will need a calibrated sensor and access to a reliable (known) light source. I'm in a privileged position as I have access to lab grade light sources as well as an ASD FieldSpec Pro which can serve as a true reference. If my efforts are successful for the OV5647 camera chip I would be willing to invest some time in doing the same analysis for the chipset in the camera of the kit.
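In other words, the same relation rearranged to solve for the grating; a sketch with stand-in per-wavelength values of my own invention:

```python
import numpy as np

source = np.array([980.0, 1000.0, 1020.0])    # (1) known lamp spectrum
sensor_response = np.array([0.4, 0.9, 0.6])   # (3) calibrated sensor response
measured = np.array([310.0, 700.0, 480.0])    # raw signal seen through grating

# With (1) and (3) known, the grating's wavelength dependent transmission (2)
# falls out by division.
grating_transmission = measured / (source * sensor_response)
```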
Another approach to calibrate everything more easily has been discussed here. I think this approach is valid and probably easier to implement. It's a rough first cut, but might make a difference without any additional cost. Still, I think that at a certain point people should consider buying a professional grating: they aren't that expensive and are well characterized (my raspberry pi setup will run for <$150, which isn't too bad but not cheap anymore).
Within the context of calibration, I'm wondering if you couldn't mine the spectral workbench data. How large is the variability of, say, a known light source / camera combination (assuming that people tag it correctly)? This should tell you something about the variability across setups (precision), which should technically be the same (given the calibration routine in the workbench, I assume the accuracy will be reasonably consistent). If things aren't precise, then the error of the build probably messes things up more than what you could correct with any post processing and sensor/grating characterization.
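A minimal sketch of what such mining could look like, assuming spectra can be grouped by their light source and camera tags; the column names and values are hypothetical, not the actual Spectral Workbench export format:

```python
import pandas as pd

# Hypothetical records: fitted position of a known emission line per upload.
df = pd.DataFrame({
    "light_source": ["cfl", "cfl", "cfl", "cfl"],
    "camera":       ["ov5647", "ov5647", "ov5647", "ov5647"],
    "peak_546_nm":  [544.8, 546.3, 545.1, 547.0],
})

# Spread of a known emission line per (light source, camera) setup: a proxy
# for the precision achievable across different builds of the same design.
spread = df.groupby(["light_source", "camera"])["peak_546_nm"].std()
print(spread)
```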