
50 MSPS ADC with Spartan 3 FPGA - clock issues

Started by Unknown December 8, 2006
sp_mclaugh@yahoo.com wrote:
<snip>
> Yes, but assume that we have a pure 4.2 MHz sine wave, and we sample
> where the slew rate is fastest (at the zero crossings, if the sinusoid
> goes from -1 to +1). Call the difference between two such samples
> max_change. Then, with worst-case jitter, instead of seeing max_change
> between two samples, we see max_change * (t_sample + 2*t_jitter) /
> (t_sample). This assumes a first-order expansion around the fast-slew
> area. In other words, treat that area as having a constant slope (good
> approx for a sinusoid), so the amplitude between samples is linearly
> related to the time between samples. But, once we read the values into
> the FPGA, we treat them as if they were only separated by t_sample. If
> the change-per-unit-time increases, doesn't that directly translate to
> a change in maximum frequency? So... is my 4.305 MHz cutoff above
> correct?
The 4.2 MHz cutoff is the right one to design for because 1) your filter operates on the samples as if they were taken at ideal times, and 2) you probably won't have a "brick wall" filter anyway. You should have an analog filter on the front end if your input isn't guaranteed to be cleanly band-limited (such as the steps from a 27 MHz DAC) to help reduce any initial aliasing, but the analog filter doesn't need to be extreme; it just needs a good block between 45 and 55 MHz, since that range would alias back down to your ~5 MHz range of interest. A digital filter can clean up what's left, but in the digital realm you don't need to design for 4.305 MHz rather than your desired 4.2 MHz, though the difference is rather minor. <snip>
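To put numbers on the original poster's first-order argument, here is a minimal sketch of the "stretched sample interval" formula; the 250 ps per-edge jitter figure is only an assumption chosen so the result lands on the 4.305 MHz mentioned in the question, not a measured value:

```python
# Sketch of the first-order "stretched sample interval" argument from the
# quoted post.  The jitter value is an assumption for illustration only.
t_sample = 20e-9      # 50 MSPS -> 20 ns between samples
t_jitter = 250e-12    # assumed worst-case jitter per sample edge (hypothetical)
f_signal = 4.2e6      # highest frequency of interest

# Worst case: one sample lands t_jitter early and the next t_jitter late, so
# the real interval is (t_sample + 2*t_jitter) while the FPGA assumes t_sample.
stretch = (t_sample + 2 * t_jitter) / t_sample
f_effective = f_signal * stretch

print(f"stretch factor    : {stretch:.4f}")               # 1.0250
print(f"apparent max freq : {f_effective / 1e6:.3f} MHz") # ~4.305 MHz
```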
> So what happens between these two extremes (signal being either
> completely DC or completely high frequency - 4.2 MHz)? Surely if the
> signal was completely 1 Hz, we wouldn't expect to see jitter uniformly
> distributed from 0 to 25 MHz, correct? Shouldn't the maximum frequency
> of jitter-induced noise be a percent (>100%) of the maximum frequency
> of the input signal?
Again, the jitter has an effect on the 1 Hz measurement - a very small amount - but you would see a noise floor all the way out to 25 MHz from the jitter if the other system noise (including measurement noise) didn't swamp out those extremely small values. Imagine a 0.01% random noise source added to your signal: you will see that entire noise source in your spectrum, it's just very small and not worth worrying about in this application. You will have more jitter-induced error at higher frequencies than at lower frequencies. Happily, the higher frequencies in video produce less noticeable artifacts. If your noise floor at low frequencies were -40 dB, you might have objectionable results, especially if you're trying to process single frames. If the -40 dB noise floor is at the higher frequencies, the perceived color gets slightly off track in a composite signal, or you lose some precision in fast intensity changes for component video; the main luminance content is still very clean. <snip>
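One way to see how the jitter-induced error scales with signal frequency is the generic aperture-jitter estimate for a full-scale sine, SNR (dB) ≈ -20·log10(2·pi·f·sigma_t); this is a textbook rule of thumb rather than anything from the thread, and the 300 ps RMS figure is just the ballpark number quoted further down:

```python
import math

# Generic aperture-jitter estimate for a full-scale sine at frequency f with
# RMS sampling jitter sigma_t:  SNR (dB) ~= -20 * log10(2 * pi * f * sigma_t).
# sigma_t = 300 ps is only a ballpark figure (the estimate quoted below).
sigma_t = 300e-12

for f in (1.0, 100e3, 1e6, 4.2e6):
    snr_db = -20 * math.log10(2 * math.pi * f * sigma_t)
    print(f"{f / 1e6:8.3f} MHz -> jitter-limited SNR ~ {snr_db:6.1f} dB")
```

The error is tiny at 1 Hz and largest at 4.2 MHz, which is the point being made above.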
> Ah, now that does make sense to me. If my signal really *was* just a
> sinusoid (ie, a single tone), then maybe I could even develop some
> algorithm to pick out the min and max samples (where slew was lowest).
> Of course, that's not possible with my (real) video signal.
If you just picked out the min and max, you wouldn't gain any noise averaging from the other samples. If you have two independent jitter sources that individually induce 100 ps of RMS jitter, what would the two together do to your signal? You wouldn't end up with 200 ps RMS jitter; you'd end up with about 140 ps. Jitter is statistical in nature: if RMS jitter corresponds to 1 standard deviation, then having the two jitter values add directly is a 2-standard-deviation event, not a 1-standard-deviation one. By the same reasoning, averaging more samples with random noise reduces the overall noise, even if the samples taken at the slower slew rates didn't reduce the jitter-induced noise on their own. <snip>
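A minimal sketch of the root-sum-square behaviour described above, together with the related sqrt(N) benefit of averaging samples whose noise is assumed independent (an illustration, not a claim about this particular system):

```python
import math

# Independent jitter sources combine as root-sum-square, not a plain sum.
j1, j2 = 100e-12, 100e-12                    # two sources, 100 ps RMS each
combined = math.sqrt(j1**2 + j2**2)
print(f"combined RMS jitter: {combined * 1e12:.0f} ps")   # ~141 ps, not 200 ps

# The same statistics work in your favour when averaging: the RMS noise of an
# average of N samples with independent noise drops by sqrt(N).
sigma = 1.0                                  # per-sample noise (arbitrary units)
for n in (1, 4, 16, 64):
    print(f"average of {n:3d} samples -> RMS noise ~ {sigma / math.sqrt(n):.3f}")
```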
> The source of the jitter is beyond my knowledge, but this is certainly
> good to hear. I will definitely low-pass my signal as close as I can to
> 4.2 MHz (depending on how steep my filter is, which depends on how much
> FPGA real estate I have to spare).
There's no need to over-design. A "clean" signal can still have some noise (or some alias) and meet all your needs. If you can experiment with different cutoff frequencies or filter steepnesses, you might gain better insight into which qualities deliver "better" results at what cost. It's a superb opportunity for a learning experience.
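If you did want to run that experiment offline before spending FPGA resources, a quick comparison of tap count versus achieved attenuation could be sketched as below; scipy is assumed purely for convenience, and the tap counts and the 6 MHz test point are arbitrary illustration values:

```python
import numpy as np
from scipy import signal

fs = 50e6        # 50 MSPS sample rate
cutoff = 4.2e6   # desired passband edge

# Compare how much attenuation a windowed-sinc low-pass achieves at a test
# frequency (6 MHz, arbitrary) as the tap count grows.
for numtaps in (15, 31, 63):
    taps = signal.firwin(numtaps, cutoff, fs=fs)
    w, h = signal.freqz(taps, worN=2048, fs=fs)
    atten_db = 20 * np.log10(np.abs(h[np.searchsorted(w, 6e6)]))
    print(f"{numtaps:3d} taps -> response at 6 MHz: {atten_db:6.1f} dB")
```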
> One last question/comment. Wouldn't this be an ideal example of when to
> use dithering? ie, my LSB isn't really significant, so I shouldn't
> treat it as if it was. I've never used dithering before, but maybe I
> can use an LFSR (linear feedback shift register) or some other
> technique to add one LSB of randomness to the samples... ?
Dithering is useful if you're trying to avoid frequency spurs, typically related to the nonlinearity of the ADC you're using. If you want to digitize a 3 MHz sine wave and a 100 kHz sine wave superimposed, without 2.9 and 3.1 MHz components 80 dB below the main sine wave, then yes - dithering is helpful. For video you shouldn't notice any problems from the slight nonlinearity of today's converters; you'll already have noise in your system from the amplifiers, the converter, and the jitter-induced effects. This is another aspect that could add nicely to the learning experience, but keep in mind that the added dither has to be kept out of the frequency range of interest - for example, by feeding it through a bandpass filter with good stopbands up to 5 MHz and from 45-55 MHz (for aliasing), as well as a good rolloff by the time you reach 95 MHz. I wouldn't recommend it because of the stringent analog filter design needs, but seeing the difference is informative.
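For completeness, the LFSR mentioned in the question could generate the pseudo-random sequence roughly as sketched below; this produces only flat (unshaped) +/- 1 LSB dither, which is exactly what the band-limiting caveat above is about, and the 16-bit register, feedback mask, and seed are arbitrary illustrative choices:

```python
def lfsr16(seed=0xACE1):
    """Maximal-length 16-bit Galois LFSR (feedback mask 0xB400).

    A hypothetical sketch of the LFSR idea from the question, not anything
    from the actual design; yields one pseudo-random bit per step.
    """
    state = seed
    while True:
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= 0xB400
        yield lsb

# Example: turn the bit stream into a +/- 1 LSB dither value per sample.
gen = lfsr16()
dither = [1 if next(gen) else -1 for _ in range(8)]
print(dither)
```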
Nico Coesel wrote:
> "Gabor" <gabor@alacron.com> wrote:
<snip>
>> Quick calculation:
>> using 4.2 MHz full scale (of the ADC input range) sine wave
>> 4.2 MHz is about 26 Mradians/s
>> ADC input range corresponds to -1 to +1 of normalized sine
>> 1 LSB of 8-bit ADC is therefore 1/128 (normalized).
>> 1 / (26M * 128) is about 0.3 nS
>>
>> So for a 1 LSB sampling error, you could live with 300 pSec of
>> sampling jitter. My guess is that the threads you looked at
>> were concerned about significantly smaller acceptable jitter,
>> as would be the case in most networking applications where
>> the sampling rate and bandwidth are closer to the same
>> frequency.
>
> Isn't this calculation a bit crude? I suppose the spectrum of the
> jitter is also important.
The calculation is crude, sure. But what DO we know about the jitter from the oscillator and the jitter induced by switching within an FPGA? Almost nothing. There's little chance of being able to "count on" any particular jitter spectrum for anything beyond a first-order approximation. If the first-order effects are considered, the secondary issues are... secondary.
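For reference, Gabor's back-of-the-envelope arithmetic reproduces directly; a minimal sketch of the same calculation:

```python
import math

f_max = 4.2e6            # highest frequency of interest
bits = 8
lsb = 2.0 / 2**bits      # 1 LSB over a -1..+1 normalized range (= 1/128)

# Maximum slew of a full-scale sine is 2*pi*f in normalized amplitude per
# second -- about 26 M/s here, matching the "26 Mradians/s" in the quote.
max_slew = 2 * math.pi * f_max

# Time for the signal to move one LSB at that slew rate: the jitter budget
# for at most 1 LSB of sampling error.
t_jitter_max = lsb / max_slew
print(f"max slew      : {max_slew / 1e6:.1f} M/s (normalized)")
print(f"jitter budget : {t_jitter_max * 1e12:.0f} ps")    # ~300 ps
```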
One more thing: if you're doing your own ADC board (leaving the Spartan3 board to the "experts"), you would do yourself the best service by including your oscillator there and supplying that clock to the FPGA. If you don't have a global clock pin on the ribbon cable header, you can still use a twisted pair (signal and ground) to route the clock independently to the unused DIP header on the Digilent board.