Public Lab Research note


Automatic Gain Control vs. Inverse square law

by viechdokter | April 16, 2016 16:07 | #12989

After my first trials with my torch light and the problems with the "light rings" it was emitting, I tried again, this time focusing it better and keeping the light path straight. I took spectra of that LED torch light at many different distances.

The "inverse square law" states that the light intensity (flux) is inversely proportional to the distance of the light source squared. So intensity times distance squared should be constant.

Intensity * distance² = const.

The distance part is easy, but what about the intensity/flux? Red, green, blue? Average? Well, I took the CSV data sheet and summed up ALL values in ALL three channels.

Then I put the sums into the above equation. Well, there was no constant: one distance gave a "constant" five times higher than another.
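
For anyone who wants to repeat this check, here is a rough sketch in Python (not what I actually used). It assumes one Spectral Workbench CSV export per distance with R, G and B columns; the file names and distances below are only placeholders.

```python
# Sketch of the intensity * distance^2 check; file names, distances and
# column names (R, G, B) are placeholder assumptions.
import csv

measurements = {          # hypothetical CSV exports and distances in cm
    "torch_10cm.csv": 10.0,
    "torch_20cm.csv": 20.0,
    "torch_40cm.csv": 40.0,
}

def summed_intensity(path):
    """Sum every R, G and B value in the CSV as a crude total-flux estimate."""
    total = 0.0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            total += float(row["R"]) + float(row["G"]) + float(row["B"])
    return total

for path, distance in measurements.items():
    flux = summed_intensity(path)
    # With a linear sensor and the inverse square law, this product
    # should come out roughly the same for every distance.
    print(f"{path}: intensity * d^2 = {flux * distance ** 2:.1f}")
```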

I guess that's because of the Automatic Gain Control (AGC) of the webcam in my spectrometer.

And now my next thought: could that somehow be a way to find out more about the specifics of the webcam's AGC? Could I derive some AGC curve that, under certain circumstances, could just be subtracted from other curves to take away the AGC effect?
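
To make the idea a bit more concrete, here is a purely hypothetical sketch: if the lamp itself obeys the inverse square law, the expected intensity at each distance is known up to a scale factor, so the ratio of measured to expected intensity gives an empirical gain factor per distance that could later be divided out of (or, on a log scale, subtracted from) other readings. All numbers and names below are made up.

```python
# Hypothetical sketch of deriving an empirical AGC gain curve, assuming the
# lamp is stable and only the camera's gain control distorts the readings.
import numpy as np

distances = np.array([10.0, 20.0, 30.0, 40.0])          # cm (made up)
measured  = np.array([5200.0, 4900.0, 4100.0, 3600.0])  # summed camera values (made up)

# What a linear sensor should see, up to a constant: I ~ 1/d^2,
# normalised so the first distance has relative intensity 1.
expected = (distances[0] / distances) ** 2

# Empirical gain the camera applied at each distance, relative to the first.
gain = measured / (measured[0] * expected)

# A correction for a reading taken at, say, 25 cm could then be
# interpolated from this curve and divided out.
print("estimated relative AGC gain:", gain)
print("interpolated gain at 25 cm:", np.interp(25.0, distances, gain))
```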

Any ideas?


4 Comments

Hi! Yes indeed, see @stoft's work on gain control; we're trying to generalize his work for other light sources, and also develop it into a Spectral Workbench feature.



Ooops! Guess I tried to re-invent the bicycle here, huh? Looks like all my thoughts have been thought before - and many more... ;-)

Will read it in detail (which will probably take some time because stoft always does very good, thoughtful and deep stats) and if I still have any thoughts left I might ...



This is an interesting observation and thought experiment, and I'd agree that controlling a light source's intensity while monitoring the camera's response is a logical approach to look for a defined correlation. However, I think the "fly in the ointment" is that under normal spectrometer measurements there is very little light in the spectral band within the camera's image field -- especially when there is no clipping of the R/G/B channels. I've been guessing that any "AGC effect", when the image field is very near to black, would place the AGC control loop at one extreme with full gain. This suggests that the only intensity correlation available as a correction factor is at the end of the gain range, which is normally very non-linear. (Medium light and mid-range for the camera image as a whole, i.e. just low-light conditions, might be much more linear.)

The data suggests some randomness to the low-frequency "drift" I observed, which is what leads me to guess that the drift is just the near-DC noise component of the AGC effect. If that were correct, then there'd be no way to develop a correction from the signal. I'd considered using the background variation as a substitute, but all I've observed so far is just noise; that "signal" level is essentially zero, so noise is the only thing left.

However, the average of the noise from the dark field, taken at the same time as the signal data, can be subtracted from the signal as a background offset. This is easily done, and I've found it helps reduce the low-level residual noise of the spectral data.
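
For anyone who wants to try the dark-field offset described above, here is an illustrative sketch; the synthetic frame, row indices and array shapes are assumptions for the example only, not @stoft's actual processing.

```python
# Illustrative sketch of subtracting an averaged dark-field offset from the
# spectral band; the frame and row ranges are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
frame = rng.normal(2.0, 1.0, size=(480, 640))   # stand-in for a webcam frame
frame[200:210, :] += 50.0                        # pretend spectral band

signal_rows = frame[200:210, :]   # rows containing the spectrum
dark_rows   = frame[400:460, :]   # rows that see essentially no light

# Average the dark field (captured in the same frame as the signal) and
# subtract it from the per-column signal as a background offset.
background_offset = dark_rows.mean()
spectrum = signal_rows.mean(axis=0) - background_offset
```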
