FPGARelated.com

DCM input clock

Started by Andyman November 11, 2003
Hi all,

I have a design which takes data from an external ADC.  The ADC
provides a 35 MHz clock.

Currently the design feeds the input clock through a DCM and uses the
180 degree phase shifted version to sample and reassemble the ADC
data.  This all then passes to our system clock domain via one of the
(wonderfully useful) Xilinx self-addressing asynchronous FIFOs.

As this is for a communications system we need to perform clock
recovery (actually this might be better described as carrier recovery)
and the ADC provides a means to tune the clock it uses to sample with.
Of course this means that the clock we are feeding into the DCM to
get our internal sampling clock (and driving the SAF - in essence a
block RAM) is going to be changing.  Each change could be on the order
of 200 Hz and performed a few thousand times a second.  The total
change from the clock the DCM locked with could be up to 100 kHz, but I
could bodge...ahem...design my way round this to make it significantly
less.

Although the above is working fine at the moment, I am worried about
the DCM losing its lock due to the varying input clock.  Can anyone
suggest how much the DCM will tolerate before throwing a wobbly?

Thanks,

Andy
Andy,

100 kHz out of 35 MHz is 0.1/35, or 1/350, or ~2860 ppm.  The spec in the
data sheet is +/- 100 ppm, but that was for an instantaneous change in
frequency (largest step size).  200 Hz out of 35 MHz is ~6 ppm.
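For reference, the ppm figures above work out as follows (a quick numeric check, using the 35 MHz clock and the deviations Andy quoted):

```python
# ppm sanity check for the numbers quoted above.
f_clk = 35e6        # ADC clock (Hz)
total_dev = 100e3   # worst-case total drift from the locked frequency (Hz)
step_dev = 200.0    # single tuning step (Hz)

ppm_total = total_dev / f_clk * 1e6
ppm_step = step_dev / f_clk * 1e6

print(round(ppm_total))    # ~2857 ppm, vs. the +/-100 ppm data sheet spec
print(round(ppm_step, 1))  # ~5.7 ppm per tuning step
```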

The DCM tracks phase changes by moving its taps.  The tap movement for
the CLK0/CLK180 (DLL) outputs is 6 * the 2's complement of the jitter
filter value, in input clocks per tap change (one tap is ~50 ps on V2,
~30 ps on V2P).

By changing the jitter filters to 0xFFFF, you can now track phase changes
much faster (roughly 6 clocks per tap, or 50 ps in six 35 MHz clocks, or
50 ps/172 ns).  This is roughly 34.99 MHz to 35.01 MHz (a 10 kHz instant
step allowed every 6 clock cycles).  With the default settings, it is ~256
times slower (or 10 kHz/256 ~ 40 Hz).
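The tap-rate arithmetic above can be sanity-checked numerically (a rough sketch; the ~50 ps tap size and the 6-clocks-per-tap fastest rate are the Virtex-II figures quoted above):

```python
# Tracking-rate estimate for the DLL outputs with the jitter filter
# opened up to 0xFFFF. Figures from the post above: ~50 ps per
# delay-line tap on Virtex-II, ~1 tap move per 6 input clocks at the
# fastest setting, and ~256x slower with the default filter.
f_in = 35e6                # input clock (Hz)
tap = 50e-12               # one delay-line tap (s)
clocks_per_tap = 6

t_per_tap = clocks_per_tap / f_in   # ~171 ns between tap moves
slew = tap / t_per_tap              # fractional frequency change per interval

print(slew * 1e6)          # ~292 ppm of 35 MHz
print(slew * f_in / 1e3)   # ~10.2 kHz step tolerated per 6 clocks
print(slew * f_in / 256)   # default filter, ~256x slower: ~40 Hz
```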

I think you need to set your jitter filter settings to allow more change.

I think you will find that this is the fastest tracking capability of any
general purpose 24 MHz to 420 MHz phase locked loop device in
existence......(and you can vary it by the jitter filter values).

That said, you can monitor the LOCKED signal, as the DCM will lose lock
if the tap runs off either end of the delay line for any reason.  As
well, if the CLOCK_IN_STOPPED status bit goes high, that is an indication
that the input clock has missing pulses (really bad) and the DCM may need
to be reset.

The DFS has a different state machine, but it also tracks frequency and
phase through tap movements, and it is faster than the DLL state machine
(no jitter filter at all), so the DLL is the limiting element.

Austin


