Reply by John_H December 10, 2006
One more thing:  If you're doing your own ADC board (leaving the 
Spartan3 board to the "experts") you would do yourself the best service 
by including your oscillator there and supplying that clock to the FPGA. 
  If you don't have a global clock pin on the ribbon cable header, you 
can still use a twisted pair (signal and ground) to route the clock 
independently to the unused DIP header on the Digilent board.
Reply by John_H December 10, 2006
Nico Coesel wrote:
> "Gabor" <gabor@alacron.com> wrote:
<snip>
>> Quick calculation:
>> using 4.2 MHz full scale (of the ADC input range) sine wave
>> 4.2 MHz is about 26 Mradians/s
>> ADC input range corresponds to -1 to +1 of normalized sine
>> 1 LSB of 8-bit ADC is therefore 1/128 (normalized).
>> 1 / (26M * 128) is about 0.3 ns
>>
>> So for a 1 LSB sampling error, you could live with 300 ps of
>> sampling jitter. My guess is that the threads you looked at
>> were concerned about significantly smaller acceptable jitter,
>> as would be the case in most networking applications where
>> the sampling rate and bandwidth are closer to the same
>> frequency.
>
> Isn't this calculation a bit crude? I suppose the spectrum of the
> jitter is also important.
The calculation is crude, sure. But what DO we know about the jitter from the oscillator and the jitter induced by switching within an FPGA? Almost nothing. There's little chance to "count on" any kind of jitter spectrum for doing anything beyond a first order approximation. If the first order effects are considered, the secondary issues are... secondary.
Reply by John_H December 10, 2006
sp_mclaugh@yahoo.com wrote:
<snip>
> Yes, but assume that we have a pure 4.2 MHz sine wave, and we sample where the slew rate is fastest (at the zero crossings, if the sinusoid goes from -1 to +1). Call the difference between two such samples max_change. Then, with worst-case jitter, instead of seeing max_change between two samples, we see max_change * (t_sample + 2*t_jitter) / (t_sample). This assumes a first-order expansion around the fast-slew area. In other words, treat that area as having a constant slope (good approx for a sinusoid), so the amplitude between samples is linearly related to the time between samples. But, once we read the values into the FPGA, we treat them as if they were only separated by t_sample. If the change-per-unit-time increases, doesn't that directly translate to a change in maximum frequency? So... is my 4.305 MHz cutoff above correct?
The 4.2 MHz cutoff is the right one to design for because 1) your filter operates on the samples as if they were taken at ideal times, and 2) you probably won't have a "brick wall" filter anyway. You should have an analog filter on the front end if your input isn't guaranteed to be cleanly band-limited (such as the steps from a 27 MHz DAC) to help reduce any initial aliasing, but the analog filter doesn't need to be extreme; it just needs a good block between 45 and 55 MHz, since that range would alias back down to your ~5 MHz range of interest. A digital filter can clean up what's left. In the digital realm there's no need to design for 4.305 MHz rather than your desired 4.2 MHz, though the difference is rather minor.
<snip>
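To put rough numbers on that worst-case sample, here is a sketch assuming a full-scale 4.2 MHz tone, 250 ps of timing error, and a constant slope at the zero crossing:

```python
import math

f_sig = 4.2e6        # assumed full-scale sine frequency (Hz)
t_jitter = 250e-12   # assumed worst-case timing error (s)
n_bits = 8

slew_max = 2 * math.pi * f_sig      # max slope of a unit-amplitude sine (1/s)
amp_error = slew_max * t_jitter     # worst-case amplitude error, normalized to +/-1
lsb = 2.0 / (1 << n_bits)           # one LSB of the +/-1 range

print(f"amplitude error ~ {amp_error:.4f}  ({amp_error / lsb:.2f} LSB)")
```

So even the worst-case sample error is on the order of one LSB, and it shows up as amplitude noise at the ideal sample instants rather than as new frequency content, consistent with Gabor's 300 ps estimate.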
> So what happens between these two extremes (signal being either > completely DC or completely high frequency - 4.2 MHz)? Surely if the > signal was completely 1 Hz, we wouldn't expect to see jitter uniformly > distributed from 0 to 25 MHz, correct? Shouldn't the maximum frequency > of jitter-induced noise be a percent (>100%) of the maximum frequency > of the input signal?
Again, the jitter has an effect on the 1 Hz measurement - a very small amount - but you will see a noise floor all the way out to 25 MHz from the jitter if the other system noise (including measurement noise) didn't swamp out those extremely small values. Imagine a .01% random noise source added to your signal. You will see that entire noise source in your spectrum. It's just very small and not worth worrying about in this application. You will have more jitter-induced error at higher frequencies than at lower frequencies. Happily, the higher frequencies for video produce less noticeable artifacts. If your noise floor for low frequencies was -40 dB, you might have objectionable results, especially if you're trying to process single frames. If the -40dB noise floor is at the higher frequencies, you have the perceived color getting off track a bit in a composite signal or loss of precision in fast intensity changes for component video. The main luminance content is still very clean. <snip>
> Ah, now that does make sense to me. If my signal really *was* just a > sinusoid (ie, a single tone), then maybe I could even develop some > algorithm to pick out the min and max samples (where slew was lowest). > Of course, that's not possible with my (real) video signal.
If you just picked out the min and max, you wouldn't gain any noise averaging from the other samples. If you have two independent jitter sources that each induce 100 ps of RMS jitter, what do the two sources do to your signal? You don't end up with 200 ps of RMS jitter; you end up with about 140 ps. Jitter is statistical in nature. If RMS jitter corresponds to one standard deviation, having both sources hit their peaks at the same instant is a two-standard-deviation event, not a one-standard-deviation event. Averaging more samples with random errors reduces the overall noise by the same reasoning, even before you count the fact that the samples in the slower slew regions carry less jitter-induced noise of their own.
<snip>
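The 140 ps figure is just the root-sum-of-squares combination of independent sources; as a quick check:

```python
import math

def combine_rms(*jitters_ps):
    """Combine independent RMS jitter sources by root-sum-of-squares."""
    return math.sqrt(sum(j * j for j in jitters_ps))

print(combine_rms(100.0, 100.0))   # ~141.4 ps, not 200 ps
```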
> The source of the jitter is beyond my knowledge, but this is certainly > good to hear. I will definitely low-pass my signal as close as I can to > 4.2 MHz (depending on how steep my filter is, which depends on how much > FPGA real estate I have to spare).
There's no need to over-design. A "clean" signal can still have some noise (or some alias) and meet all your needs. If you could experiment with different cutoff frequencies or steepness, you might gain better insight into what qualities deliver "better" results at what cost. Superb opportunity for learning experience.
> One last question/comment. Wouldn't this be an ideal example of when to > use dithering? ie, my LSB isn't really significant, so I shouldn't > treat it as if it was. I've never used dithering before, but maybe I > can use an LFSR (linear feedback shift register) or some other > technique to add one LSB of randomness to the samples... ?
Dithering is useful if you're trying to avoid frequency spurs typically related to the nonlinearity of the ADC you're using. If you want to get a 3 MHz sinewave and a 100 kHz sinewave superimposed without 2.9 and 3.1 MHz components 80 dB below the main sinewave, then yes - dithering is helpful. For video you shouldn't notice any problems from the slight non-linearity of today's converters. You'll already have noise in your system from the amplifiers, the converter, and the jitter-induced effects. This is another aspect that could add nicely to the learning experience, but keep in mind that the added dither has to be kept out of the frequency range of interest - for example by feeding it through a bandpass filter with good stopbands from DC to 5 MHz and around 45-55 MHz (to avoid aliasing), as well as a good rolloff by the time you reach 95 MHz. I wouldn't recommend it because of the stringent analog filter design needs, but seeing the difference is informative.
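If you did want to experiment with it, the pseudo-random sequence itself is easy to generate. Here is a minimal Python sketch of the LFSR idea; the 16-bit Galois taps, the seed, and the one-LSB offset are illustrative choices, not anything specified in this thread. Note that dither normally has to be injected ahead of the converter to do its real job; adding randomness to already-quantized samples mostly just adds noise.

```python
def lfsr16_step(state: int) -> int:
    """One step of a 16-bit Galois LFSR (taps x^16 + x^14 + x^13 + x^11 + 1)."""
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= 0xB400
    return state

def dither_sample(sample: int, state: int, n_bits: int = 8):
    """Add 0 or 1 code of pseudo-random offset to one ADC sample (illustrative only)."""
    state = lfsr16_step(state)
    offset = state & 1                                  # 0 or 1 LSB from the LFSR output bit
    dithered = max(0, min((1 << n_bits) - 1, sample + offset))
    return dithered, state

state = 0xACE1                                          # any nonzero seed works
codes = [100, 101, 102, 250, 255]
out = []
for c in codes:
    d, state = dither_sample(c, state)
    out.append(d)
print(out)
```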
Reply by December 9, 2006
As for your previous post, I've got it printed up and sitting on my
workbench area. Lots of good info in there. I'll just respond to this
one for now.

John_H wrote:

> sp_mclaugh@yahoo.com wrote:
> > In the worst-case scenario, we would have an input signal with
> > a purely 4.2 MHz frequency component (would never happen for video, but
> > just for the argument). If two samples were taken, each experiencing
> > maximum sample clock jitter, but in opposite directions, then they
> > would be separated by (sample time + 2 * jitter). However, we would
> > treat them as if they were separated by only (sample time).
> >
> > Wouldn't this only introduce noise up to a frequency of:
> > 4.2 MHz * (sample time + 2 * jitter) / (sample time) ?
> >
> > ie, for 250 ps of jitter on a 20 ns clock, with a 4.2 MHz signal being
> > sampled, I could expect to see noise up to 4.305 MHz...?
>
> The jitter introduces amplitude errors, not frequency errors. Any
> amplitude or frequency error can induce problems in the other domain
> (which is why the ADC frequency error - phase, actually - induces the
> amplitude error). You're analyzing the signal as if it's in an ideal
> sampling domain so the errors will show up as amplitude noise.
Yes, but assume that we have a pure 4.2 MHz sine wave, and we sample where the slew rate is fastest (at the zero crossings, if the sinusoid goes from -1 to +1). Call the difference between two such samples max_change. Then, with worst-case jitter, instead of seeing max_change between two samples, we see max_change * (t_sample + 2*t_jitter) / (t_sample). This assumes a first-order expansion around the fast-slew area. In other words, treat that area as having a constant slope (good approx for a sinusoid), so the amplitude between samples is linearly related to the time between samples. But, once we read the values into the FPGA, we treat them as if they were only separated by t_sample. If the change-per-unit-time increases, doesn't that directly translate to a change in maximum frequency? So... is my 4.305 MHz cutoff above correct?
> > Or, instead of assuming an input with a purely 4.2 MHz component, go to > > the other extreme. Assume the input is a constant DC signal. The jitter > > on the sampling clock wouldn't cause any noise at all here, would it? > > The jitter won't induce noise on the DC signal, correct. Great > observation. You still get the benefit of the ADC noise being reduced > at DC.
So what happens between these two extremes (signal being either completely DC or completely high frequency - 4.2 MHz)? Surely if the signal was completely 1 Hz, we wouldn't expect to see jitter uniformly distributed from 0 to 25 MHz, correct? Shouldn't the maximum frequency of jitter-induced noise be a percent (>100%) of the maximum frequency of the input signal?
> If you were to only sample at 8.4 MS/s, your 4.2 MHz sinewave would have > maximum sample errors at the highest slew of the signal with maximum > deviations that constructively add to produce the maximum error.
Yes, I think we are talking about the same thing (compare to what I mentioned above). ie, the first sample is jittered so that it occurs too early, while the second occurs too late -- and all of this happening where slew is the highest.
> When > you have a 50 MS/s stream looking at the 4.2 MHz signal, your maximum > values are still the maximums but you throw many other samples in with > that same period. Each sample point will have similar noise power, but > weighted by the signal slew rate; the top and bottom of the sinusoid are > closer to DC for jitter analysis reasons so the noise power isn't > constant for all sample points but significantly reduced in the slower > slew regions. Filtering over the wider bandwidth allows the worst > sample errors to be filtered with the smallest sample errors leading to > an overall reduction in jitter-induced noise.
Ah, now that does make sense to me. If my signal really *was* just a sinusoid (ie, a single tone), then maybe I could even develop some algorithm to pick out the min and max samples (where slew was lowest). Of course, that's not possible with my (real) video signal.
> I would expect most of your jitter to be high-frequency since you're > coming from a crystal source with the induced noise coming from that > "ideal" signal getting phase distortions through various buffer stages > from the slight induced shifts of threshold point. Higher frequency > jitter is easier to remove from your overall system noise than low > frequency jitter that induces real phase shifts in your observed data.
The source of the jitter is beyond my knowledge, but this is certainly good to hear. I will definitely low-pass my signal as close as I can to 4.2 MHz (depending on how steep my filter is, which depends on how much FPGA real estate I have to spare).

One last question/comment. Wouldn't this be an ideal example of when to use dithering? ie, my LSB isn't really significant, so I shouldn't treat it as if it was. I've never used dithering before, but maybe I can use an LFSR (linear feedback shift register) or some other technique to add one LSB of randomness to the samples... ?
Reply by Nico Coesel December 9, 2006
"Gabor" <gabor@alacron.com> wrote:

>
>sp_mclaugh@yahoo.com wrote:
>> Hello,
>>
>> I'm in the middle of a project which involves digitizing and decoding
>> baseband NTSC composite video. Right off the top, I'll let everybody
>> know that this is part of an educational project (part of it for a
>> university project, though it's largely a hobbyist type project). I
>> realize that the project will be useless in a couple years, and that
>> there are pre-made devices out there, but I still want to do it.
>>
>> That being said, I think the hardest part of the whole project (for me)
>> is just getting the data into the FPGA (cleanly)! I know very little
>> about clock management, and I'm worried that I'm pushing the limits of
>> my setup. Let me briefly describe what I'm doing.
>>
>> The traditional way to sample NTSC video, as I understand it, is to use
>> dedicated chips to derive a "pixel clock" off of the hsync. This clock
>> then feeds the ADC, and perhaps the FPGA. I am not doing this. I am
>> using a fixed, free-running crystal oscillator clock (50 MHz Epson
>> SG-8002JF). For the record, that clock came on my Digilent Spartan 3
>> starter board, which I'm using for the project. I plan on sampling at
>> the full 50 MSPS, even though the video signal is band-limited to about
>> 4.2 MHz.
>>
>
>Quick calculation:
>using 4.2 MHz full scale (of the ADC input range) sine wave
>4.2 MHz is about 26 Mradians/s
>ADC input range corresponds to -1 to +1 of normalized sine
>1 LSB of 8-bit ADC is therefore 1/128 (normalized).
>1 / (26M * 128) is about 0.3 ns
>
>So for a 1 LSB sampling error, you could live with 300 ps of
>sampling jitter. My guess is that the threads you looked at
>were concerned about significantly smaller acceptable jitter,
>as would be the case in most networking applications where
>the sampling rate and bandwidth are closer to the same
>frequency.
Isn't this calculation a bit crude? I suppose the spectrum of the jitter is also important.

--
Reply to nico@nctdevpuntnl (punt=.)
Companies and shops can be found at www.adresboekje.nl
Reply by John_H December 9, 2006
sp_mclaugh@yahoo.com wrote:
> John_H wrote:
>
> Regarding the frequency range of noise due to sample clock jitter
> (sampling using an ADC much faster than required for a given
> band-limited signal):
>
>> Since the noise you'll see from the clock jitter will be spread across
>> the full 25 MHz bandwidth of your 50 MS/s data stream
>
> On a second reading, I was wondering if you could explain this a bit
> further. In the worst-case scenario, we would have an input signal with
> a purely 4.2 MHz frequency component (would never happen for video, but
> just for the argument). If two samples were taken, each experiencing
> maximum sample clock jitter, but in opposite directions, then they
> would be separated by (sample time + 2 * jitter). However, we would
> treat them as if they were separated by only (sample time).
>
> Wouldn't this only introduce noise up to a frequency of:
> 4.2 MHz * (sample time + 2 * jitter) / (sample time) ?
>
> ie, for 250 ps of jitter on a 20 ns clock, with a 4.2 MHz signal being
> sampled, I could expect to see noise up to 4.305 MHz...?
The jitter introduces amplitude errors, not frequency errors. Any amplitude or frequency error can induce problems in the other domain (which is why the ADC frequency error - phase, actually - induces the amplitude error). You're analyzing the signal as if it's in an ideal sampling domain so the errors will show up as amplitude noise.
> Or, instead of assuming an input with a purely 4.2 MHz component, go to > the other extreme. Assume the input is a constant DC signal. The jitter > on the sampling clock wouldn't cause any noise at all here, would it?
The jitter won't induce noise on the DC signal, correct. Great observation. You still get the benefit of the ADC noise being reduced at DC.
> Please excuse the simple question, this is probably something > elementary, but it's new to me! > > Sean
If you were to only sample at 8.4 MS/s, your 4.2 MHz sinewave would have maximum sample errors at the highest slew of the signal with maximum deviations that constructively add to produce the maximum error. When you have a 50 MS/s stream looking at the 4.2 MHz signal, your maximum values are still the maximums but you throw many other samples in with that same period. Each sample point will have similar noise power, but weighted by the signal slew rate; the top and bottom of the sinusoid are closer to DC for jitter analysis reasons so the noise power isn't constant for all sample points but significantly reduced in the slower slew regions. Filtering over the wider bandwidth allows the worst sample errors to be filtered with the smallest sample errors leading to an overall reduction in jitter-induced noise.

I would expect most of your jitter to be high-frequency since you're coming from a crystal source with the induced noise coming from that "ideal" signal getting phase distortions through various buffer stages from the slight induced shifts of threshold point. Higher frequency jitter is easier to remove from your overall system noise than low frequency jitter that induces real phase shifts in your observed data.
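A rough numerical sketch of this effect, assuming white Gaussian clock jitter of 100 ps RMS and an idealized FFT brick-wall filter at 4.2 MHz (neither of which is guaranteed for the real board):

```python
import numpy as np

fs, f0 = 50e6, 4.2e6          # sample rate and test tone frequency
rms_jitter = 100e-12          # assumed white Gaussian clock jitter, seconds RMS
t = np.arange(1 << 16) / fs
jit = np.random.normal(0.0, rms_jitter, t.size)

ideal = np.sin(2 * np.pi * f0 * t)            # samples taken at perfect instants
actual = np.sin(2 * np.pi * f0 * (t + jit))   # samples taken at jittered instants
err = actual - ideal                          # jitter-induced amplitude error

# Idealized brick-wall low-pass at 4.2 MHz via FFT, just to see the trend
E = np.fft.rfft(err)
freqs = np.fft.rfftfreq(err.size, d=1 / fs)
E[freqs > f0] = 0.0
err_lp = np.fft.irfft(E, err.size)

print("wideband error RMS :", np.sqrt(np.mean(err ** 2)))
print("filtered error RMS :", np.sqrt(np.mean(err_lp ** 2)))
print("expected ratio     :", np.sqrt(f0 / (fs / 2)))   # ~0.41 if the noise is flat
```

With flat jitter noise, the filtered error RMS comes out near sqrt(4.2/25) of the wideband error RMS, so most of the jitter-induced noise power lands above the video band and gets filtered away.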
Reply by John_H December 9, 2006
sp_mclaugh@yahoo.com wrote:
<snip>
> As did I ! I'm looking into the differential clock approach now, though > I fear that it won't be do-able. I *think* the Spartan 3 can do > differential output, using special features of the IOB's, but it seems > that some external setup/calibration components (resistors) are > required. It would be up to Digilent (producer of my starter board) to > have properly implemented these. There appear to be quite a few > "special" output modes (ie, LVPECL, etc) and I would be lucky for them > to have implemented exactly the one I need. Building my own PCB for the > Spartan is out of the question at this time (it would take me a year or > more to learn all the necessary skills). I could be mistaken - maybe > there is an easy way. That's just my current best-guess after a few > hours of research.
Driven differential signals don't need the resistor networks in the Spartan3. You can generate an LVDS signal from pins marked as complementary pairs without any passives involved; a 100 ohm differential termination at the differential ADC clock is still important. The ideal situation would have these signals routed next to each other with specific differential impedances, but I expect your best bet will be to find the complementary signals that don't have anything else routed between them and are roughly the same length. There might not be a lot to choose from.

If I recall, the Digilent Spartan3 board has a 40-pin header with one power and one ground (or a similarly abysmal path for return currents). The header you connect to might be responsible for introducing most of your system jitter, per Gabor's comments on return current. If you have many unused signals on that connector, driving them to output logic low with a strong IOSTANDARD will help. Changing them to hard-wired grounds would be better still. I believe the ribbon cable adds to the size of the crosstalk effects, so keeping that short will also help. But the differential clock is that much more attractive.

You might consider using a "dead bug" addition to your Digilent board. There are small differential drivers available. If you tack the chip upside down by the oscillator (imagine a bug with its legs in the air) you can wire the oscillator output right to the discrete differential driver input. Use a twisted pair to deliver this clock directly to a 2-pin header on your ADC board. If you're not designing the board and it already has only a single-ended input, you can tack a differential receiver onto your ADC board in the same way.

If you use this approach to deliver a very clean clock (making up for a poorly designed signal header), consider hot-gluing or epoxying the twisted pair to the board so you have a mechanical strain relief that keeps the wires from ripping off your tacked-on chip.
<snip>
> That's good to know. I wonder if I should still worry about routing the > clock through the FPGA's output header to drive the ADC. Perhaps there > would be added jitter due to other reasons, such as active switching > flip-flops near the driving IOB... ? I'm basically repeating this from > another post I've read, I don't know what order of noise we're talking > about here, and whether it's negligible compared to my poor oscillator.
If you're using "mild" I/O switching strengths, you'll be better off than using strong drives. If you look at the data sheet for SSO recommendations, you'll see which standards tend to be nasty and which "play nice." If you're dealing with inputs rather than outputs, things will be much better - it's the current surge from driving the outputs that cause the majority of the jitter-inducing crosstalk. <snip> <snip>
> Ah yes, a timing budget is something I will be doing. Of course, the > rest of my design isn't finished yet, so I don't yet know what type of > max setup times I'll need. I guess if I use input buffers (using > IOB's), the setup time to get the data into the FPGA will be > independent of the rest of my design, right? I've never touched any IOB > features before, but it seems easy (just set a single attribute, I > think...?).
If you arrange the design to register the ADC outputs directly in the FPGA's IOBs, you can find the setup and hold times in the Spartan3 data sheet without having to look at the Timing Analyzer report. Even when I specify register packing in IOBs and use input registers, I still use OFFSET IN (BEFORE) constraints on my input signals to get a very big warning if something didn't end up in the IOB like I planned.
> On the other hand, couldn't I avoid the issue altogether by using a DCM > to adjust my FPGA clock by the clock-to-out time of the ADC? That way, > the data is ready right on the rising edge of my FPGA clock. It seems > that I can make adjustments in increments of 1/256 of my clock > frequency.
The DCM gives you flexibility. But when you do your timing budget, you might find there's a better way to reduce the uncertainties rather than just shifting the clock by the reported delay. The shift might be close to optimal but the delay is specified as a worst case, not typical. When you have a "best clock scheme" figured out and the DCM isn't *between* the oscillator and the ADC, you might get better results with the DCM but not necessarily with any added phase shift. <snip>
> So in essence, by sampling at 50 MSPS rather than the minimum of 8.4 > MSPS, and then applying a low pass with cutoff around 4.2 MHz, I'm > getting rid of about (25-4.2)/25 * 100% = 83% of the noise due to jitter > on the ADC clock (assuming the noise content is uniformly distributed > from 0 to 25 MHz)... Does that calculation sound right (assumes ideal > filters, etc)? If so, what a pleasant surprise!
It *sounds* right but I haven't been performing these calculations myself recently so my view from 20,000 feet says it's pretty reasonable.
>> You seem on target with knowing much of what to look for in the design. >> I hope it's fun. > > I appreciate the kind words, though I think I'm right on the borderline > capability-wise. Let's hope I'm not right below that line - close > enough to waste a lot of time, but just too far to ever get it working! > But yes, it should be a fun project. > > The info you gave was very helpful, thanks! > > Regards, > > Sean
Reply by December 9, 2006
John_H wrote:

Regarding the frequency range of noise due to sample clock jitter
(sampling using an ADC much faster than required for a given
band-limited signal):

> Since the noise you'll see from the clock jitter will be spread across > the full 25 MHz bandwidth of your 50 MS/s data stream
On a second reading, I was wondering if you could explain this a bit further. In the worst-case scenario, we would have an input signal with a purely 4.2 MHz frequency component (would never happen for video, but just for the argument). If two samples were taken, each experiencing maximum sample clock jitter, but in opposite directions, then they would be separated by (sample time + 2 * jitter). However, we would treat them as if they were separated by only (sample time).

Wouldn't this only introduce noise up to a frequency of:
4.2 MHz * (sample time + 2 * jitter) / (sample time) ?

ie, for 250 ps of jitter on a 20 ns clock, with a 4.2 MHz signal being sampled, I could expect to see noise up to 4.305 MHz...?

Or, instead of assuming an input with a purely 4.2 MHz component, go to the other extreme. Assume the input is a constant DC signal. The jitter on the sampling clock wouldn't cause any noise at all here, would it?

Please excuse the simple question, this is probably something elementary, but it's new to me!

Sean
Reply by December 9, 2006
Wow! This newsgroup is like having a whole team of consultants or
professors. Barely took an hour to get two really helpful replies!

John_H wrote:
> o You wonder about the jitter on the clock.
>
> I liked Gabor's calculations that showed you wouldn't have much of a problem in data accuracy for your situation. The differential clock approach would make things cleaner overall.
As did I ! I'm looking into the differential clock approach now, though I fear that it won't be do-able. I *think* the Spartan 3 can do differential output, using special features of the IOB's, but it seems that some external setup/calibration components (resistors) are required. It would be up to Digilent (producer of my starter board) to have properly implemented these. There appear to be quite a few "special" output modes (ie, LVPECL, etc) and I would be lucky for them to have implemented exactly the one I need. Building my own PCB for the Spartan is out of the question at this time (it would take me a year or more to learn all the necessary skills). I could be mistaken - maybe there is an easy way. That's just my current best-guess after a few hours of research.
> o You were worried about bypassing the DCM.
>
> Your FPGA won't use a DCM unless you explicitly include it.
That's good to know. I wonder if I should still worry about routing the clock through the FPGA's output header to drive the ADC. Perhaps there would be added jitter due to other reasons, such as active switching flip-flops near the driving IOB... ? I'm basically repeating this from another post I've read, I don't know what order of noise we're talking about here, and whether it's negligible compared to my poor oscillator.
> o You're concerned about getting the right sampling point for the data.
>
> The clock-to-out times should be well specified for the ADC you choose. At 50 MHz, you'll probably have no issues but if your timing adds up to be tight, you might run a DCM from the same clock feeding the ADC or improve your timing budget through other means.
I think you're talking about the same thing I say a bit further down (offsetting the FPGA clock by the clock-to-out time), but correct me if I'm wrong.
> If you can use a > global clock I/O pair on the S3 part on your headers (I don't think I/O > is available for S3E global clocks, just input) you could even use the > clock as it appears on the FPGA pad/ball feeding the ADC as the input > to your global clock buffer with a little care.
As of even yesterday, anything about the internal clock distribution in the FPGA would have flown right over my head. However, earlier this afternoon, I was reading a bit about the global clock buffers, etc. It'll take me awhile to digest all the literature I've read from Xilinx, plus what you wrote. So I'll get back to you on that one. Though if you're in the spoon-feeding type of mood, my mouth is open.
> Put together a timing budget that shows what your times are from the > clock edge until data is ready at the FPGA pins and compare that with > what the FPGA needs in setup time relative to the 20 ns clock period. > It's the amount of slack in the budget that tells you if your > implementation is a breeze.
Ah yes, a timing budget is something I will be doing. Of course, the rest of my design isn't finished yet, so I don't yet know what type of max setup times I'll need. I guess if I use input buffers (using IOB's), the setup time to get the data into the FPGA will be independent of the rest of my design, right? I've never touched any IOB features before, but it seems easy (just set a single attribute, I think...?).

On the other hand, couldn't I avoid the issue altogether by using a DCM to adjust my FPGA clock by the clock-to-out time of the ADC? That way, the data is ready right on the rising edge of my FPGA clock. It seems that I can make adjustments in increments of 1/256 of my clock frequency.
> o Polyphase filtering is only part of what you can do.
>
> Since the noise you'll see from the clock jitter will be spread across the full 25 MHz bandwidth of your 50 MS/s data stream, you could either subsample your signal (aliasing all the noise into your slower rate baseband) or you can actively filter the signal before decimating with an FIR or other filter without taking up excessive resources. Good execution on this aspect of a video design is superb experience for digital design.
Good point! On my first reading, I got caught up on the "subsample" part for awhile, and kept thinking thoughts about running an ADC below the center frequency of a narrow band-pass signal. Then I realized that you were referring to the method I use to choose which samples to keep (ie, decimation, etc), and the "aliasing noise into..." part became clear.

Now, it turns out that I *was* going to include a low-pass block in my polyphase resampler, but I must confess, I wasn't thinking of cutting out noise due to clock jitter in my ADC. I knew that I had to band-limit my signal before decimation, but I figured that the only high-frequency information would be noise coming directly from the video source. Cutting out a large chunk of the noise caused by jitter in my sampling clock is a very welcome bonus!

So in essence, by sampling at 50 MSPS rather than the minimum of 8.4 MSPS, and then applying a low pass with cutoff around 4.2 MHz, I'm getting rid of about (25-4.2)/25 * 100% = 83% of the noise due to jitter on the ADC clock (assuming the noise content is uniformly distributed from 0 to 25 MHz)... Does that calculation sound right (assumes ideal filters, etc)? If so, what a pleasant surprise!
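As a quick check of that figure (assuming the jitter noise really is flat out to 25 MHz and the low-pass is ideal):

```python
import math

fs = 50e6            # sample rate
f_cut = 4.2e6        # low-pass cutoff
nyquist = fs / 2

noise_kept = f_cut / nyquist       # fraction of flat noise power that survives the filter
print(f"noise removed: {(1 - noise_kept) * 100:.0f}%")                 # ~83%
print(f"SNR improvement: {10 * math.log10(nyquist / f_cut):.1f} dB")   # ~7.7 dB
```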
> > You seem on target with knowing much of what to look for in the design. > I hope it's fun.
I appreciate the kind words, though I think I'm right on the borderline capability-wise. Let's hope I'm not right below that line - close enough to waste a lot of time, but just too far to ever get it working! But yes, it should be a fun project.

The info you gave was very helpful, thanks!

Regards,

Sean
Reply by December 8, 2006
Comments below.

Gabor wrote:

> Quick calculation:
> using 4.2 MHz full scale (of the ADC input range) sine wave
> 4.2 MHz is about 26 Mradians/s
> ADC input range corresponds to -1 to +1 of normalized sine
> 1 LSB of 8-bit ADC is therefore 1/128 (normalized).
> 1 / (26M * 128) is about 0.3 ns
>
> So for a 1 LSB sampling error, you could live with 300 ps of
> sampling jitter. My guess is that the threads you looked at
> were concerned about significantly smaller acceptable jitter,
> as would be the case in most networking applications where
> the sampling rate and bandwidth are closer to the same
> frequency.
Thanks, it's nice to have a concrete figure like that. I hadn't thought to work backwards and calculate what jitter I can live with (not yet knowing how much jitter I have).
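For reference, the same back-calculation in a few lines: one LSB of the normalized range divided by the worst-case slew of a full-scale 4.2 MHz sine.

```python
import math

f_sig = 4.2e6                      # highest signal frequency (Hz)
n_bits = 8

lsb = 2.0 / (1 << n_bits)          # one LSB of the normalized -1..+1 range (~1/128)
slew_max = 2 * math.pi * f_sig     # worst-case slope of a unit-amplitude sine (1/s)

t_jitter_max = lsb / slew_max      # timing error that moves the signal by 1 LSB
print(f"tolerable jitter ~ {t_jitter_max * 1e12:.0f} ps")   # ~300 ps
```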
> I would guess that your clock oscillator should have much > less than 300 ps jitter unless it is poorly bypassed (power > supply decoupling). You can run this through the FPGA > without a DCM. Additional jitter would then only come from > threshold jitter and ground bounce at the FPGA input, which > can be minimized by not using the adjacent IOB's or driving > the adjacent IOB's to ground.
OK, after spending *far* too long on Epson's web site (the eea.epson.com site is poorly organized, though epsontoyocom.co.jp is better), I found some jitter figures. It says that for a 15pF load and a 3.3V power source, I should expect 200 ps maximum cycle-to-cycle jitter, or 250 ps maximum peak-to-peak jitter. As you said, that is assuming a clean (isolated) power source. I'll describe the power source in a second. But first, let me paste two lines from Epson's data sheet that sound a bit ominous:

"Because we use a PLL technology, there are a few cases that the jitter value will increase when SG-8002 is connected to another PLL-oscillator. In our experience, we are unable to recommend these products for applications such as telecom carrier use or analog video clock use. Please be careful checking in advance for these applications (jitter specification is max 250 ps / CL = 5 pF)."

Perhaps they recommend against it because most commercial applications would need more than 8 bits of resolution (10 is usually used, I think, maybe 12 for professional video equipment). After reading that, do you still think that my application will be OK? And even if I run the clock through the FPGA?

I don't mind spending $20 or whatever on getting a better clock, if it sounds like the best solution. I want this system to perform reasonably well, and I'm willing to pay for it. The starter board even has an optional 8-pin clock socket, so it would be exceptionally easy to do. After reading the specs on that Epson clock, I know *why* they included that socket! :-)

Anyway, I'll now quickly describe the power supply (and decoupling of clock power input) on the Digilent starter board:

- All power first comes from a 5V AC-DC wall plug
- All power then goes through a LM1086CS-ADJ 3.3V regulator
- For the FPGA, 2.5V and 1.2V are generated from the 3.3V
- The 3.3V is used directly (shared) by a number of on-board components, including the crystal oscillator clock
- There appear to be 35 parallel 47nF capacitors between 3.3V and ground
- The only other isolation provided to the oscillator's power pin is another locally placed 47nF capacitor between power and ground

Does it sound like the clock power input is adequately isolated (clean)? I don't have a "gut feeling" one way or the other.
> I would worry more about accounting for off-board routing > and ground returns.
What do you think about the previous plan I mentioned. I'd use about 6" of standard ribbon cable (about the same grade as ATA cabling) to connect from a header on the Digilent starter board to the ADC breadboard.
> Using a differential clock from the > FPGA to the ADC board would help. If you don't have > an ADC that directly takes a differential clock you'll need > to add a receiver of some sort.
I've never used a differential clock before. I wonder if my Spartan can do that... Some initial searching did turn up some mention of differential output pins (being used mostly for DDR memory clocks). If I can't do it on-chip though, there's no point, because I have to get to the breadboard to mount any discrete chips. There's no extra space on the starter board. And I don't intend to build a custom PCB (with the FPGA) to replace the starter board.
> By this time you'll have > a significant delay built up on the data clock, so running > the clock back to the FPGA along with the data will help > you to properly sample the ADC data.
I understand why there would be delay, but can you explain the part about running the clock back to the FPGA? Since it's a fixed delay, couldn't I just use the DCM to delay the Spartan's clock by a fixed amount?
> HTH, > Gabor
Very much !! Thanks.