
Propagation delay sensitivity to temperature, voltage, and manufacturing

Started by folnar June 6, 2006
We currently have a Spartan 3 FPGA in our design, using two DCMs. One
drives an off-chip ADC and the other drives an FPGA register (which
latches the data coming back from the off-chip ADC). One clock is
manually phase-shifted at synthesis relative to the other to resolve
clock skew between the ADC's processing of the data and the latching of
the ADC output into the register. The clock signal to the ADC can be
phase-shifted to bring the overall timing within constraint.



This setup works so far, but the question is whether it will continue
to work as the FPGA temperature changes, across different chips, or
with slight variations in the supply voltages. The DCM driving the ADC
is physically placed far from the pin it drives; the routing from DCM
to pin goes through hex lines and combinational logic. Are there any
general rules for predicting the sensitivity of propagation delay to
temperature, voltage, and chip-to-chip variation?

This is what I had sent directly to the poster a few days ago. Maybe
it's of general interest:

First I have to warn you about using a DCM output as the ADC clock.
Experts agree that the ADC clock should be as jitter-free as possible,
and a DCM output has significant jitter, which costs you dearly by
raising the ADC noise floor. I shudder to think of the jitter (= noise)
you feed into the ADC.
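
To put a rough number on that warning: the classic aperture-jitter
bound says clock jitter alone limits the achievable SNR to
-20*log10(2*pi*f_in*t_j). A minimal sketch in Python (the 150 ps RMS
jitter figure is an assumed, plausible DCM number, not a datasheet
value):

import math

def jitter_limited_snr_db(f_in_hz, t_jitter_rms_s):
    # Classic aperture-jitter bound: SNR = -20*log10(2*pi*f_in*t_j)
    return -20.0 * math.log10(2.0 * math.pi * f_in_hz * t_jitter_rms_s)

# Example: a 10 MHz analog input sampled with 150 ps RMS clock jitter
# is limited to ~40.5 dB SNR -- about 6.4 effective bits, no matter
# how good the ADC itself is.
print(jitter_limited_snr_db(10e6, 150e-12))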

Regarding the more benevolent aspect, delay change: delay increases
with temperature. In some cases it is proportional to the absolute
(Kelvin) temperature; in other cases it is more stable.
I think you can measure it, and quite conveniently: build a ring
oscillator (feedback through a single inversion) out of equivalent
snippets of delay and LUTs, then measure its frequency as a function of
temperature. You get tremendous resolution and thus accuracy.
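
To make that measurement concrete, here is a minimal sketch of the
arithmetic (the stage count and frequencies below are made up for
illustration): one ring-oscillator period is two traversals of the
chain, since the single inversion must propagate around twice to return
to its starting state, so the per-stage delay is 1/(2*N*f).

def stage_delay_ps(freq_hz, n_stages):
    # One period = two traversals of the chain, so
    # per-stage delay = 1 / (2 * N * f), converted here to ps.
    return 1.0 / (2.0 * n_stages * freq_hz) * 1e12

# Hypothetical readings from an on-chip counter at two die temperatures:
n = 31                           # odd number of inverting stages
f_25c, f_85c = 98.0e6, 90.0e6    # assumed frequencies, Hz
d25 = stage_delay_ps(f_25c, n)
d85 = stage_delay_ps(f_85c, n)
tempco = (d85 - d25) / (85 - 25) # ps per degree C
print(f"{d25:.1f} ps @25C, {d85:.1f} ps @85C, {tempco:+.3f} ps/C")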

My advice: give the ADC a clean clock, then use the DCM to adjust the
phase. And minimize routing and gating.

Peter Alfke, Xilinx Applications

Folnar,

The DCM delay lines themselves operate from a bandgap reference whose
output is tuned to have the reverse slope of the delay-vs.-temperature
curve (a patented technique: a PTC bandgap to offset the delay).

What happens is that as the delay increases with temperature, the
voltage increases to cancel it.  And the same for cold (the voltage
goes down as the temperature goes down).
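
A toy first-order model of that cancellation (the constants are
invented for illustration; real silicon is not this linear): if the tap
delay grows with absolute temperature and shrinks with supply voltage,
a reference whose voltage tracks the Kelvin temperature holds the delay
constant.

def tap_delay_ps(t_kelvin, vdd, k=0.55):
    # Toy model, not the actual silicon behavior: delay rises with
    # absolute temperature and falls with supply voltage.
    return k * t_kelvin / vdd

T0, V0 = 298.0, 1.20               # assumed nominal temperature (K) and supply (V)
for t_k in (233.0, 298.0, 358.0):  # -40 C, 25 C, 85 C
    v_ptc = V0 * t_k / T0          # PTC reference: voltage tracks Kelvin temperature
    print(f"{t_k:.0f} K: uncompensated {tap_delay_ps(t_k, V0):.1f} ps, "
          f"compensated {tap_delay_ps(t_k, v_ptc):.1f} ps")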

Then the taps of the delay line are part of the feedback loop that is
actively trying to keep the skew from CLKIN to CLKFB identically 0.

As long as all clock signals use the global clocks (which are matched,
and have very good delay vs. temperature and voltage behavior),
everything should work to the specifications (basically +/- 100 ps over
process, voltage, and temperature for phase accuracy of a DCM).

The phase shift is always calculated as a fraction of a period (XXX
parts in 256), which is how many taps fit into one clock period.  As
the taps recalibrate, the tap from which the phase offset is taken is
re-solved as well, so that over voltage, temperature, and process the
phase shift is kept exact (to +/- 1 tap).
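
In other words, something like the following (the tap sizes here are
assumed round numbers for illustration, not Spartan-3 specifications):

def fixed_phase_shift_ps(phase_shift_setting, clk_period_ps):
    # FIXED-mode DCM shift is PHASE_SHIFT/256 of one clock period.
    return phase_shift_setting / 256.0 * clk_period_ps

def taps_for_shift(shift_ps, tap_ps):
    # Number of delay-line taps that realize the shift; the tap size
    # drifts with PVT, so the DCM re-solves this as it recalibrates.
    return round(shift_ps / tap_ps)

period_ps = 10000.0                           # 100 MHz clock
shift = fixed_phase_shift_ps(64, period_ps)   # 64/256 = quarter period
print(shift)                                  # 2500.0 ps
print(taps_for_shift(shift, 39.0))            # 64 taps when taps are fast
print(taps_for_shift(shift, 55.0))            # 45 taps when taps are slow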

Using the CLKFB externally also takes the IOB and any external
components out of the loop (or, more accurately, puts them inside the
feedback loop so they are accounted and corrected for).

It is extremely unlikely (probably impossible) that one DCM is +100 ps
while the other is -100 ps.  Since they are identical cells, their
deterministic errors tend to track together.  The random error is
always +/- one delay-line tap, so for two DCMs it becomes +/- 2
delay-line taps.  It is quite probable that one zigs when the other
zags, producing a two-tap difference.
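
As a sketch of that error budget (assuming a nominal ~50 ps tap, which
is an illustration, not a datasheet number):

def worst_case_phase_error_ps(n_dcms, tap_ps):
    # The deterministic error of identical cells on one die largely
    # tracks out; what remains is +/- one tap of random error per DCM.
    return n_dcms * tap_ps

# Two DCMs that zig and zag in opposite directions differ by ~2 taps
# (~100 ps), not the naive +/- 200 ps obtained by stacking each DCM's
# full +/- 100 ps spec against the other.
print(worst_case_phase_error_ps(2, 50.0))     # 100.0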

Clocks are not intended to go through doubles, hexes, or long lines, and
definitely not through CLB LUTs, so if you are doing that, you are all
on your own.

Austin
