I was going to post a new research note on a Rhodamine B concentration study yesterday, but decided to double-check some anomalies that had been occurring with both Rhodamine B and Fluorescein. I found a few glaring errors in that work and decided not to post any of it. I'm going to discuss the mistakes first, before discussing the importance of understanding the fundamentals of spectroscopy.
First mistake: I was taking the spectra of the blanks (i.e., the solvents in which the samples are dissolved), but I was not subtracting them from the samples, only using their wavelengths as my excitation values (see the sketch after this list).
Second, I was using the wrong bandwidth value for the spectrometer when inputting the numbers to remove Rayleigh/Raman scatter.
Third, although the Fluorescein standard dilution is correct, the excitation wavelength I was using is NOT. The Rhodamine B solution that I use is NOT standardized correctly according to Turner Industries' specifications; I misread a critical step in their process.
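A minimal sketch of that blank-subtraction fix, assuming two-column wavelength/intensity CSV exports; the file names are placeholders, not the actual spectrometer export format:

```python
import numpy as np

# Placeholder file names; the real export format may differ.
blank = np.loadtxt("ethanol_blank.csv", delimiter=",")       # wavelength, intensity
sample = np.loadtxt("rhodamine_b_sample.csv", delimiter=",")

# Point-by-point subtraction is only meaningful if both scans share
# the same wavelength axis.
assert np.allclose(blank[:, 0], sample[:, 0])
corrected = sample[:, 1] - blank[:, 1]   # remove the solvent's contribution
```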
So I spent all morning today redoing ALL my standards and solutions to a tee! I have been, and still am, re-evaluating my techniques and preps, and making the necessary adjustments to my procedures in order to perform more scientifically and accurately.
Below are the results from my corrections today. I decided to use the Rhodamine B test because it has a maximum absorption at 543nm and emission at 565nm.
The excitation wavelength that the Oregon Medical Laser Center used for this scan was 510nm; of course, I used 532nm. I believe that if I had matched their excitation wavelength, my emission line would have started at approximately the same place as the OMLC's. There is still a 2nm discrepancy at the peak wavelength, 567nm, which should have been 565nm.
Next, here are the specifications for the laser that I am using:
It does have great stability for a portable laser.
I wanted to touch on how to determine the bandwidth of the spectrometer that you are using, especially the home-brew type. I understood before that bandwidth is proportional to slit width, but instead of actually taking the time to do the calculations, I just assumed that since I was using a particular slit width, the bandwidth must be the same number. Wrong!
Below are the values transferred into the equation for determining the actual bandpass, using the information from my spectrometer's specs: 0.12mm slit width and a 361nm groove spacing (I upgraded the DVD piece to an 8.4GB disc), i.e., 2770 lines per mm, at 40 degrees.
As you can see at the end, I have a spectral bandwidth of 8.1nm. Now, this is for the wavelength set at 532nm (very important): if I am reading the literature correctly, you have to change this value at each excitation wavelength. For fluorescein I would have had to set it at either 490nm or 495nm, since those are the excitation wavelengths for that molecule, so I am using a 470nm blue LED instead (thanks to @stoft for his invaluable help and knowledge on all this).
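For anyone wanting to rerun this at other excitation wavelengths, here is a sketch of that bandpass estimate: slit width times reciprocal linear dispersion, with the diffraction angle from the grating equation. The slit, groove density and angle are the values above; the 3mm focal length is an assumption for a webcam-class lens, chosen because it lands near the 8.1nm figure, so treat it as illustrative only.

```python
import math

def spectral_bandpass(slit_mm, lines_per_mm, wavelength_nm,
                      incidence_deg, focal_mm, order=1):
    """Bandpass (nm) = slit width x reciprocal linear dispersion;
    diffraction angle from the grating equation m*lambda = d*(sin a + sin b)."""
    d_nm = 1e6 / lines_per_mm                        # groove spacing in nm
    sin_b = order * wavelength_nm / d_nm - math.sin(math.radians(incidence_deg))
    beta = math.asin(sin_b)                          # diffraction angle (rad)
    dispersion = d_nm * math.cos(beta) / (order * focal_mm)   # nm per mm
    return slit_mm * dispersion

# 0.12mm slit, 2770 lines/mm, 532nm at 40 degrees, assumed 3mm focal length
print(spectral_bandpass(0.12, 2770, 532, 40, focal_mm=3.0))   # ~8.0nm
```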
So using a 532nm laser for fluorescein was never going to produce the expected results because of its dominating wavelength. Perhaps a blue laser at 473nm might work to produce the right energy transfers.
References:
http://www.iss.com/resources/research/technical_notes/PC1_MeasQuantumYldVinci.html
http://www.rapidtables.com/math/number/PPM.htm#conversion
Comments
On the plots .... it seems odd that the 532nm source signal does not appear. Was it removed from the data? If so, how was its influence on the remaining plot data handled? There does appear to be some low-level noise in the data starting at about 525nm.
On the resolution calculation ..... yes, I did find the PowerPoint presentation it came from, and another page describing that calculation as well. However, I believe that calculation is specific to the optical parameters of an ideal slit, an ideal diffraction grating and an ideal focal plane.
With the PLab device, the "spectral bandpass" of the device is influenced by both 1) the slit being used as a collimator (only an approximation of collimated light) and 2) the detector being not an ideal focal plane but a CCD pixel array.
So, a simpler estimate for the theoretical device resolution (w/o additional collimation errors) is three (3) times the ratio of (the wavelength span) / (number of pixels) detecting that span. For the older 640x480 webcam, where the 400-800nm span was about 2/3 the width of the image, the theoretical resolution would be ( (800-400)/((2/3)x640) ) x 3 = 2.8nm, roughly 3nm. The '3' is required because, with quantized pixel detectors, it requires a minimum of 3 pixels to describe a "peak". [The idea is "how many side-by-side, separate, narrow, individual signal peaks can be detected?"]
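That pixel-limit estimate in runnable form; the geometry numbers are the ones quoted above:

```python
def pixel_resolution_nm(span_nm, pixels_across_span, min_pixels_per_peak=3):
    """Resolution floor set by pixel quantization: a peak needs
    roughly 3 pixels to be described at all."""
    return span_nm / pixels_across_span * min_pixels_per_peak

# 400-800nm spread across about 2/3 of a 640-pixel-wide image
print(pixel_resolution_nm(800 - 400, (2 / 3) * 640))   # ~2.8nm, i.e. ~3nm
```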
Another, more practical, approach is to remember that the FWHM of a narrow signal (i.e. a laser) being detected by a spectrometer also describes the resolution bandwidth. [This holds where the resolution is worse than the actual bandwidth of the source (laser), which, in the case of simple devices like the webcam-based PLab spectrometers, should be true.] I've not found a reputable measure of the BW of a pocket laser, but I suspect it is <<1nm. Assuming that is true, careful repeat measurements of the FWHM "smearing" of a pure laser source by the PLab spectrometer should validate the calculated estimates.
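A rough sketch of that FWHM check, assuming wavelength/intensity arrays from a laser scan; the result is quantized to the sample spacing, so it is only a coarse validation:

```python
import numpy as np

def fwhm_nm(wavelengths, intensity):
    """Width of the region at or above half the peak height."""
    half_max = intensity.max() / 2.0
    above = np.where(intensity >= half_max)[0]
    return wavelengths[above[-1]] - wavelengths[above[0]]

# Synthetic stand-in for a measured 532nm laser line (sigma = 1.2nm)
wl = np.linspace(500, 560, 601)
line = np.exp(-0.5 * ((wl - 532) / 1.2) ** 2)
print(fwhm_nm(wl, line))   # ~2.8nm (2.355 x sigma for a gaussian)
```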
Ok, all that said .... what happened to the effect of the slit? Right; yes, the slit does matter. If the slit were so small that it illuminated only two grating lines, the resolution would be very low (and there'd be very little light). If it is wider, illuminating many grating lines, the interference pattern improves the resolution, until the slit is so wide that the light is no longer collimated and the resolution becomes poor again.
An approximation is R = m x N, where R is the grating "resolving power", m is the diffraction order (assume 1 for most cases) and N is the number of grating lines illuminated by the slit. For a 0.12mm slit (and exactly the same illumination of the grating) and a 1350 lines/mm DVD grating, R = 1 x 0.12 x 1350 = 162. Then, the resolution BP of the slit + grating at 532nm would be BP = wavelength / R = 532 / 162 = 3.28nm. This can be calculated in reverse: start with the pixel-based resolution limit of the camera (3nm from above) and calculate the "matching" maximum slit width as BP = 532 / (Ws x 1350), so Ws (slit width) = 532 / (3.0 x 1350) = 0.13mm. This would be the slit width where the slit contributes as much to the bandwidth limit as the pixel resolution -- a wider slit will not help because of the pixel limit.
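Both directions of that calculation, as a sketch with the same numbers:

```python
def grating_bandpass(slit_mm, lines_per_mm, wavelength_nm, order=1):
    """BP = wavelength / R, with R = m * N and N = lines illuminated by the slit."""
    n_lines = slit_mm * lines_per_mm
    return wavelength_nm / (order * n_lines)

def matching_slit_mm(pixel_bp_nm, lines_per_mm, wavelength_nm, order=1):
    """Slit width at which the slit's bandpass equals the pixel resolution limit."""
    return wavelength_nm / (pixel_bp_nm * order * lines_per_mm)

print(grating_bandpass(0.12, 1350, 532))    # ~3.28nm
print(matching_slit_mm(3.0, 1350, 532))     # ~0.13mm
```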
It is important to remember that all of this is theoretical; practical considerations, such as the slit being used as the collimator and the relatively poor optics of the webcam, will overshadow all of this and probably further reduce the system resolution.
@stoft Hey Dave, I did remove the laser line from the plot. More precisely, when I subtracted the blank from the sample, part of the laser line was still there: the blank (ethanol) had a peak at 526nm and the laser line had a peak at 531nm?
I also re-averaged the RGB channels of the signal to try to get rid of some more noise; was that a mistake? It did work and made the signal much easier to work with, especially with the new value of the bandpass number. I see your point about the PowerPoint presentation's parameters being a perfect scenario; that was just the best example I've found so far, but your formula makes more sense for this type of setup.
So I have two DVD gratings I can use, the 4.7GB and the 8.4GB; right now I have the 8.4GB at 2770 lines/mm, so I just put the values into your formula: R = 1 x 0.12 x 2770 = 332.4, so BP = 532/332.4 = 1.60nm. I also noticed from the PowerPoint presentation that they were referring to a rotating diffraction grating; that would certainly be ideal, but I'm not sure how practical it would really be to construct.
So what real steps can we take to sidestep some of these webcam limitations?
Hmmm ... all of these signals are so close to each other, I suspect that separating them is not so trivial a task ..... while that application is capable of performing the math, it is not clear to me that the resulting plot necessarily represents what it appears to show. The potential trouble concerns removal of signal information (and, therefore, the extraction of other signal information) when signals have been combined -- as they are in the original spectrum. It is not always possible to separate signals this way without, among other errors, the resulting signal containing large uncertainties.
Admittedly, it does depend on the nature of each of the signals. So, to avoid letting such errors "creep in" and create illusions in the resulting data, it is generally best to plot the full, unprocessed original measurement data first. Then, performing single processing steps (and plotting each step's data on the same graph as the original) forces each manipulation of the original data to be shown -- which also forces the requirement of discussing why the result is, or is not, valid and what impact that processing has on the data: e.g. why is averaging valid? what information is thrown away before the next step? how has the measurement error been affected? etc. Without this, the reader (and maybe the author as well) is just taking the giant, hidden set of processing steps on faith -- a process which is very prone to critical "oops!".
Averaging has its value -- but it must be declared as part of the processing and its effect must be documented (e.g. plotted), since it throws away information. If the goal is only to look for very general "shapes" in spectra, averaging can be a reasonable filter. However, the type of filter can be important, as filters can actually create artifacts that look like real data. It's a similar protocol of maintaining the trail of evidence so the source is never lost or contaminated in a way which either obscures or destroys information.
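A toy sketch of that trade-off, with synthetic data and illustrative names only: a boxcar average quiets the noise but also flattens a narrow peak, which is exactly why each step belongs on the same plot as the original.

```python
import numpy as np

# Synthetic spectrum: a narrow emission peak plus detector noise.
wavelengths = np.linspace(400, 700, 601)            # 0.5nm spacing
raw = np.exp(-((wavelengths - 567) / 2.0) ** 2)     # narrow fake peak
raw = raw + np.random.normal(0.0, 0.02, raw.size)   # added noise

window = 9                                          # boxcar width, in samples
smoothed = np.convolve(raw, np.ones(window) / window, mode="same")

# The noise floor drops, but the peak height drops too (~30% here),
# so the window size belongs in the legend / processing notes.
print(raw.max(), smoothed.max())
```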
Numerically, 1.6nm is really good. However, that is just a theoretical value based on truly collimated light, and it assumes infinite pixel resolution. Step 2 is to look carefully at some data from the spectrometer in question and find the wavelength and pixel number range -- i.e. calc the ratio of (nm span) / (# pixels), which is the detector's spatial resolution of the spectra. e.g. [ (730nm - 420nm) / (pixel#650 - pixel#320) ] x 3 = [(730-420)/(650-320)] x 3 = (310/330) x 3 = 0.94 x 3 = 2.8nm CCD detector bandwidth limit.
The third significant bandwidth-limiting factor is the collimation error: light "rays" which are not parallel can still illuminate the camera, but they arrive at a different incident angle than the ideal 'collimated' "rays" -- so their diffraction angles will be slightly different from ideal -- which causes "smearing" of the spectral image at the CCD sensor -- which further lowers the resolution -- but is harder to estimate.
Resolution limits generally cannot be eliminated by simple post-processing. Averaging of multiple measurements can help some by reducing gaussian noise, but that is not really attacking the fundamental resolution limits. Fortunately, many signals of interest are relatively broad compared with the resolution. It is signals like those of a CFL or laser whose accurate measurement is most affected by the resolution limits.
@stoft hey Dave, I thought we never say "oops," but "Ah...Interesting!"
Spekwin does automatically include "averaged" in the legend, but I removed it; I did think about leaving it there, and I will include it if I use it again. I was experimenting with it as another way of taming some of the signal noise from this detector. I always work from a copy when it comes to the raw data file (CSV); learned that the hard way.
I have to make subfolders within the main folder I am working from (because of my unique problem). I see the value, though, in what you are saying about the trail-of-evidence aspect, and I was working along that train of thought yesterday: I broke down each step in the data processing in sequential order, and it helps considerably.
I also wanted to correct another error I made about the LED arrays I was talking about earlier: I was confusing LED arrays with photomultipliers; it was the photomultipliers in which wavelength can be selected. I just wanted to clear that up, sorry.
So let me ask you: can a polarizing filter be used to align the collimated light coming from the slit to the detector? And could that solve part of the bandwidth error problem?
@stoft Hey Dave, I thought you might find this interesting. I did an integration-time capture of my 405nm laser and the 532nm laser because I wanted to experiment with the wavenumber and transmission/reflection aspects of Spekwin32 and see if I could calculate the integration times of both lasers through an empty quartz cuvette.
The 405nm UV laser pointer has an integration time of 245.46 m/s and the 532nm laser is 189.41 m/s
@stoft Hey Dave, I'm including the two plots again, with data labels.
Unfortunately ..... while photomultipliers respond to many wavelengths, they do not differentiate between wavelengths (they just multiply the electron count in response to a photon striking a photocathode [so loss of wavelength information] -- but do provide high sensitivity) .... and a polarizing filter is non-directional other than polarization -- collimation requires optics such as a lens which focuses light from a point source ... alas, spectrometers are complicated devices .....
I do not follow the 'integration time' concept nor the 'm/s' units. Integration time for laser light would be "near instantaneous"; 245 m/s can't be measurements/sec or meters/sec, and 245 ms would be much too long. I'm also not following the plots and the units of the annotation. The webcam's AGC might have a measurable time response (e.g. 30 fps video), but then that would be a measurement of the camera's AGC response, not the laser ... so I'm baffled .....
@stoft Not to worry, Dave. I am experimenting with how to use the integration aspect of Spekwin32; I'm trying to see if I can measure the minimum integration time that the CMOS sensor supports and see how fast the detector can read out all of the pixel information.
I got the idea from another engineer on Hackaday, although he built his own CCD (you can check it out here if you want: http://hackaday.io/project/10738-ottervis-lgl-spectrophotometer/log/40940-recordings-of-laser-lines).
This is just an excerpt from the Spekwin32 help file on the Integration menu:
"After calling this menu item, use the mouse to zoom into the x axis range to be integrated. Left and right border of the zoom box are the integral boundaries. For each spectrum a separate message window is shown, containing the boundaries, the integral value of the spectrum and the average value within the selected range.
Hint: The oscillatory strength will be shown additionally, if the axis types wavenumbers and absorption coefficient are selected. Of course, the value of the oscillatory strength makes sense only for selection of the first electronic transition for integration. The following formula is used to calculate the oscillatory strength:"
I'm still workin' on it!
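The integral and in-range average that Spekwin32 reports can be sanity-checked by hand. A sketch, with assumed arrays standing in for an exported spectrum and an arbitrary zoom range:

```python
import numpy as np

# Assumed arrays; a real spectrum would come from the exported CSV.
wavelengths = np.linspace(400, 700, 601)
intensity = np.exp(-((wavelengths - 532) / 2.0) ** 2)

lo, hi = 520.0, 545.0                                  # zoom-box boundaries
sel = (wavelengths >= lo) & (wavelengths <= hi)
integral = np.trapz(intensity[sel], wavelengths[sel])  # area under the curve
average = intensity[sel].mean()                        # mean within the range
print(integral, average)
```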