
FFT on an FPGA

Started by Raymond August 17, 2006
In article <44e6e7e0$0$6994$9b4e6d93@newsspool1.arcor-online.net>,
Kolja Sulimma <news@sulimma.de> wrote:

> David M. Palmer wrote:
> > In article <SmjFg.4549$SZ3.3956@dukeread04>, Ray Andraka
> > <ray@andraka.com> wrote:
> >
> > Does anybody do the following thing?: At each step, the adders check
> > for whether the high order bit of any result is set. (Just an N-way OR
> > gate). If so, then the next step includes a right-shift (for all
> > elements), otherwise there is no right-shift. In the end, you get an
> > FFT result which must be scaled by 2^number-of-right-shifts, but has as
> > much resolution as is possible given the number of bits (for the
> > largest-magnitude components).
>
> See my previous post.
> For the last half of inputs you lose M-1 bits of resolution. This
> is not much better than just shifting all inputs by M-1 bits to the
> right at the beginning.
You only lose low-order bits if you need the high-order bits.
> Floating point with S bits in the significand is not better for summing
> up many small values than S-bit fixed point.
It all depends on what you are doing.

If you are searching for tones in noisy data, where the strengths of the tones and the noise aren't determined at design time, then shift-if-you-need-it is a good strategy because you don't care about the noise in frequency space, only the strongest signal. It gives you overall dynamic range between, but not within, measurements.

If you are trying to filter out an interfering frequency to see what's underneath it, then losing everything below 2^-n of the largest frequency tone is sub-optimal. If you use floating point with too few bits in the significand, then there will be significance loss in a complicated pattern in frequency space.

--
David M. Palmer  dmpalmer@email.com (formerly @clark.net, @ematic.com)
Yes, that is referred to as block floating point.  It is fairly common 
for hardware FFT engines.

David M. Palmer wrote:
> In article <SmjFg.4549$SZ3.3956@dukeread04>, Ray Andraka
> <ray@andraka.com> wrote:
>
>> Make no bones about it, floating point arithmetic is expensive in terms
>> of amount of logic. There are some shortcuts for FFTs that will
>> increase the dynamic range without going to a full floating point
>> implementation. The most common is to use block floating point which
>> rescales the entire FFT result by a common power of 2 selected so that
>> the largest signal does not overflow. Block floating point is typically
>> applied after each FFT stage, and is an effective way to limit the word
>> widths without resorting to floating point arithmetic.
>
> Does anybody do the following thing?: At each step, the adders check
> for whether the high order bit of any result is set. (Just an N-way OR
> gate). If so, then the next step includes a right-shift (for all
> elements), otherwise there is no right-shift. In the end, you get an
> FFT result which must be scaled by 2^number-of-right-shifts, but has as
> much resolution as is possible given the number of bits (for the
> largest-magnitude components).
>
> Sort of similar to floating point, except that there is only one
> exponent for all components.
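
To make the mechanics concrete, here is a minimal Python model of the scheme being discussed: a radix-2 FFT with one shared exponent, an overflow check (the N-way OR) after each stage, and a conditional right shift of the whole block. This is an illustrative sketch only; the names bfp_fft and WIDTH are invented here, and a real hardware engine would use guard bits and fixed-point twiddles rather than Python floats.

    # Minimal Python model of a block-floating-point radix-2 DIT FFT.
    # Illustrative only: bfp_fft and WIDTH are invented names, and a real
    # hardware engine would use guard bits rather than unbounded integers.
    import cmath

    WIDTH = 16                 # modeled register width, including sign
    LIMIT = 1 << (WIDTH - 1)   # |value| must stay below this

    def bfp_fft(re, im):
        """In-place FFT on integer arrays re/im (length a power of 2).

        Returns a block exponent e: the true spectrum is (re + j*im) * 2**e.
        """
        n = len(re)
        exponent = 0

        # standard in-place bit-reversal permutation
        j = 0
        for i in range(1, n):
            bit = n >> 1
            while j & bit:
                j ^= bit
                bit >>= 1
            j |= bit
            if i < j:
                re[i], re[j] = re[j], re[i]
                im[i], im[j] = im[j], im[i]

        size = 2
        while size <= n:
            half = size // 2
            for start in range(0, n, size):
                for k in range(half):
                    w = cmath.exp(-2j * cmath.pi * k / size)
                    a = complex(re[start + k], im[start + k])
                    b = complex(re[start + k + half], im[start + k + half]) * w
                    for idx, t in ((start + k, a + b), (start + k + half, a - b)):
                        re[idx] = int(round(t.real))
                        im[idx] = int(round(t.imag))
            # the "N-way OR" check: if any result reached the high-order
            # bits, shift the whole block right and bump the shared exponent
            while any(abs(x) >= LIMIT for x in re + im):
                re[:] = [x >> 1 for x in re]
                im[:] = [x >> 1 for x in im]
                exponent += 1
            size *= 2
        return exponent

The caller rescales the returned spectrum by 2**exponent. Unlike a fixed shift of the inputs, bits are only discarded when the data actually grows into the high-order positions, which is the scheme's whole appeal.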
David M. Palmer wrote:

[... asking about block floating point ...]
> It all depends on what you are doing.
>
> If you are searching for tones in noisy data, where the strengths of
> the tones and the noise aren't determined at design time, then
> shift-if-you-need-it is a good strategy because you don't care about
> the noise in frequency space, only the strongest signal. It gives you
> overall dynamic range between, but not within, measurements.
>
> If you are trying to filter out an interfering frequency to see what's
> underneath it, then losing everything below 2^-n of the largest
> frequency tone is sub-optimal. If you use floating point with too few
> bits in the significand, then there will be significance loss in a
> complicated pattern in frequency space.
Exactly. It is useful when the dynamic range within a data set is not large, but the dynamic range across possible input sets is large.
Ray Andraka wrote:

> I tried to allude to that fact, but reading over my post I see I missed
> the point. Our floating point FFT takes advantage of this fact. It
> uses a small fixed-point (4/8/16-point) kernel and normalizes the
> input set to a common exponent before passing it through the kernel, and
> then denormalizes the intermediate result before storing it. In essence
> it has a coarser granularity of floating point operators. Accuracy
> matches a full floating point implementation, but the implementation uses
> a fraction of the hardware of a full floating point implementation.
I'll be presenting a poster session paper on this at HPEC 2006 next month. I'll post the foils on my website after the conference.
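
As a rough software model of the flow described above (normalize a set to a common exponent, run it through a small fixed-point kernel, denormalize before storing), something like the following; every name here is invented for illustration, and a 4-point DFT stands in for the 4/8/16-point kernel options:

    # Rough model of the normalize / fixed-point-kernel / denormalize flow
    # described above. All names are invented for illustration; the real
    # design is hardware, with headroom bits sized for the kernel's growth.
    import math

    FRAC_BITS = 15                       # kernel fixed-point precision

    def normalize(block):
        """Scale a block of complex floats so they share one exponent."""
        top = max(max(abs(z.real), abs(z.imag)) for z in block) or 1.0
        e = math.frexp(top)[1]           # smallest e with top < 2**e
        scale = 2.0 ** (FRAC_BITS - e)
        ints = [complex(round(z.real * scale), round(z.imag * scale))
                for z in block]
        return ints, e

    def denormalize(vals, e):
        """Undo normalize(): restore the shared exponent."""
        return [z * 2.0 ** (e - FRAC_BITS) for z in vals]

    def dft4_fixed(x):
        """4-point DFT; twiddles are 0, +/-1, +/-j, so shift/add only."""
        a, b, c, d = x
        return [a + b + c + d,
                (a - c) - 1j * (b - d),
                a - b + c - d,
                (a - c) + 1j * (b - d)]

    def kernel_pass(block):
        """One coarse-grained 'floating point' operation on a 4-point set."""
        ints, e = normalize(block)       # one exponent for the whole set
        out = dft4_fixed(ints)           # pure fixed-point arithmetic inside
        return denormalize(out, e)       # restore scale before storing

Within one small set the dynamic range is modest, so sharing an exponent costs little precision, while the exponent itself carries the large set-to-set range; that is the trade the earlier posts describe, and it is why the accuracy can match full floating point at a fraction of the hardware.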