FPGARelated.com
Forums

Minimal-operation shift-and-add (or subtract)

Started by Tim Wescott September 1, 2016
There's a method that I know, but can't remember the name.  And now I 
want to tell someone to Google for it.

It basically starts with the notion of multiplying by shift-and-add, but 
uses the fact that if you shift and then either add or subtract, you can 
minimize "addition" operations.

I.e., 255 = 256 - 1, 244 = 256 - 16 + 4, etc.
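A quick sketch of the idea (Python just to show the arithmetic; the
function names are mine): recode the constant into signed digits so each
run of 1s costs one subtract at the bottom and one add just above it,
then multiply with one shift-and-add/subtract per nonzero digit:

```python
def recode(c):
    # Recode constant c into (shift, sign) pairs so a run of 1s becomes
    # one subtract and one add, e.g. 255 -> +256 - 1, 244 -> +256 - 16 + 4.
    digits = []
    shift = 0
    while c:
        if c & 1:
            if (c & 3) == 3:                 # two 1s in a row: run start,
                digits.append((shift, -1))   # subtract here...
                c += 1                       # ...carry creates the add above
            else:
                digits.append((shift, +1))
        c >>= 1
        shift += 1
    return digits

def mul_const(x, c):
    # Multiply using only shifts plus one add/subtract per signed digit.
    return sum(sign * (x << shift) for shift, sign in recode(c))
```

Multiplying by 255 this way takes two operations (shift, subtract)
instead of the eight adds of plain shift-and-add.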

-- 
Tim Wescott
Control systems, embedded software and circuit design
I'm looking for work!  See my website if you're interested
http://www.wescottdesign.com
On Thursday, September 1, 2016 at 21:53:33 UTC+2, Tim Wescott wrote:
> There's a method that I know, but can't remember the name.  And now I
> want to tell someone to Google for it.
>
> It basically starts with the notion of multiplying by shift-and-add, but
> uses the fact that if you shift and then either add or subtract, you can
> minimize "addition" operations.
>
> I.e., 255 = 256 - 1, 244 = 256 - 16 + 4, etc.

Booth?

-Lasse
On Thu, 01 Sep 2016 13:19:09 -0700, lasselangwadtchristensen wrote:

> On Thursday, September 1, 2016 at 21:53:33 UTC+2, Tim Wescott wrote:
>> [snip]
>
> Booth?
>
> -Lasse

That's it.  Thanks.

-- 
Tim Wescott
Control systems, embedded software and circuit design
I'm looking for work!  See my website if you're interested
http://www.wescottdesign.com
On 9/1/2016 4:24 PM, Tim Wescott wrote:
> On Thu, 01 Sep 2016 13:19:09 -0700, lasselangwadtchristensen wrote:
>> [snip]
>> Booth?
>
> That's it.  Thanks.

That is very familiar from college, but I don't recall the utility.  It
would be useful for multiplying by constants, but otherwise how would this
be used to advantage?  It would save add/subtract operations, but I can't
think of another situation where this would be useful.

If the algorithm is doing an add and shift, the add does not increase the
time or the hardware.  If building the full multiplier, an adder is
included for each stage, and it is either used or not used.  When done in
software, the same applies: it is easier to do the add than to skip over
it.

-- 
Rick C
On Thu, 01 Sep 2016 18:40:25 -0400, rickman wrote:

> On 9/1/2016 4:24 PM, Tim Wescott wrote:
>> [snip]
>
> That is very familiar from college, but I don't recall the utility.  It
> would be useful for multiplying by constants, but otherwise how would
> this be used to advantage?

I asked here and on comp.arch.embedded.  It's for a guy who's doing
assembly-language programming on a PIC12xxx -- for that guy, and for a
small range of constants (4 through 7), it can save time over a
full-blown multiplication algorithm.

-- 
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
I'm looking for work -- see my website!
On Friday, September 2, 2016 at 00:40:28 UTC+2, rickman wrote:
> On 9/1/2016 4:24 PM, Tim Wescott wrote:
>> [snip]
>
> If the algorithm is doing an add and shift, the add does not increase
> the time or the hardware.  If building the full multiplier, an adder is
> included for each stage, it is either used or not used.

You only need half the stages, so it is half the size*, and the critical
path through your adders is only half as long.

* You need a few multiplexors to choose between x1 and x2; subtract is
invert and carry-in.

-Lasse
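A sketch of the radix-4 (modified Booth) recoding described above, in
Python just for illustration (the function name is mine): one digit in
{-2, -1, 0, +1, +2} per *pair* of multiplier bits, so half as many
partial products and adder stages as plain shift-and-add:

```python
def booth4_digits(m, nbits):
    # Radix-4 (modified Booth) recoding of an nbits two's-complement
    # multiplier.  Each digit is chosen by looking at an overlapping
    # triplet of bits: bit[2i+1], bit[2i], bit[2i-1].
    table = {0b000: 0, 0b001: +1, 0b010: +1, 0b011: +2,
             0b100: -2, 0b101: -1, 0b110: -1, 0b111: 0}
    digits = []
    prev = 0                                 # bit to the right of the pair
    for i in range(0, nbits, 2):
        triplet = (((m >> i) & 0b11) << 1) | prev
        digits.append(table[triplet])
        prev = (m >> (i + 1)) & 1
    return digits   # m == sum(d * 4**k for k, d in enumerate(digits))
```

For example, m = 7 recodes to digits [-1, +2], i.e. 7 = 2*4 - 1: two
stages instead of three adds.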
> It basically starts with the notion of multiplying by shift-and-add, but
> uses the fact that if you shift and then either add or subtract, you can
> minimize "addition" operations.
>
> I.e., 255 = 256 - 1, 244 = 256 - 16 + 4, etc.
Canonical Signed Digit (CSD) representation.  The CSD form of any number
has at least 1/3 of the digits cleared.

244 = 011110100 = +000-0100

(Replace 01111 with +000-)
> Canonical Signed Digit (CSD) representation.  The CSD form of any
> number has at least 1/3 of the digits cleared.
>
> 244 = 011110100 = +000-0100

I meant:

244 = 011110100 = +000-0+00
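For the record, a small sketch (Python, function name mine) that produces
exactly this digit string -- a '-' goes at the bottom of each run of 1s
and the carry creates the '+' just above it, so no two nonzero digits are
ever adjacent:

```python
def csd(n, width):
    # Canonical Signed Digit string of n (MSB first), using '+', '-', '0'.
    out = []
    while n:
        if n & 1:
            if (n & 3) == 3:     # run of 1s: emit -1 here, carry upward
                out.append('-')
                n += 1
            else:
                out.append('+')
        else:
            out.append('0')
        n >>= 1
    out.extend('0' * (width - len(out)))   # pad to the requested width
    return ''.join(reversed(out))
```

csd(244, 9) gives '+000-0+00' (256 - 16 + 4) and csd(255, 9) gives
'+0000000-' (256 - 1).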
On 9/1/2016 7:09 PM, Tim Wescott wrote:
> On Thu, 01 Sep 2016 18:40:25 -0400, rickman wrote:
>> [snip]
>
> I asked here and on comp.arch.embedded.  It's for a guy who's doing
> assembly-language programming on a PIC12xxx -- for that guy, and for a
> small range of constants (4 through 7), it can save time over a
> full-blown multiplication algorithm.

Oh, sure, I use that all the time for constant multiplication.  No magic
involved.  In some cases (such as multiplying by 4) it is simpler to just
use a simple shift and add (which becomes trivial with a single bit set).
Even for multipliers with two bits set, it is easier to simply add the
shifted values.  In fact, the only case of multiplying by numbers in the
range of 4 to 7 where Booth's algorithm simplifies the work is 7.  Since
there are only four cases, I expect this could easily be implemented as
four special cases.

-- 
Rick C
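Spelled out, the four special cases look something like this (Python just
to show the arithmetic; on the PIC each would be a handful of shift/add
instructions):

```python
def times4(x): return x << 2               # single bit set: one shift
def times5(x): return (x << 2) + x         # two bits set: shift and add
def times6(x): return (x << 2) + (x << 1)  # two bits set: two shifts, one add
def times7(x): return (x << 3) - x         # the one case where the
                                           # shift-and-subtract trick wins
```

Only the multiply-by-7 case benefits from the subtract: it replaces two
adds (4x + 2x + x) with a single subtract (8x - x).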
On 9/1/2016 7:22 PM, lasselangwadtchristensen@gmail.com wrote:
> On Friday, September 2, 2016 at 00:40:28 UTC+2, rickman wrote:
>> [snip]
>
> you only need half the stages so it is half the size* and the critical
> path through your adders is only half as long

I think you need to look at the algorithm again.  The degenerate case is
a multiplier with alternating ones and zeros.  An add or subtract is
needed at each 1->0 or 0->1 transition.  Since every bit is a transition,
you still need an adder/subtractor for every bit.

Of course you could add logic to detect these cases and do fewer adder
ops, but then that is not Booth's algorithm anymore and is much more
complex.  Booth's algorithm looks at pairs of bits in the multiplier;
this would require looking at more bits.

If you are thinking in terms of constant multiplication, then again,
this is a modified method that combines Booth's with straight adds.
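To make the degenerate case concrete, a quick count (Python; helper name
is mine) of add/subtract operations under plain radix-2 Booth recoding,
where the recoded digit at position i is bit[i-1] - bit[i]:

```python
def booth2_ops(m, nbits):
    # Count add/subtract operations under radix-2 Booth recoding: an
    # operation is needed at every 0->1 or 1->0 transition in the
    # multiplier (scanning LSB to MSB, with an implicit 0 at each end).
    bit = lambda i: (m >> i) & 1 if i >= 0 else 0
    return sum(1 for i in range(nbits + 1) if bit(i - 1) != bit(i))
```

For 0b11111111 this gives 2 operations (the 255 = 256 - 1 win), but for
the alternating pattern 0b10101010 it gives 8 -- twice as many as the
four 1 bits that plain shift-and-add would cost.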
> * you need a few multiplexors to choose between x1 and x2, subtract is
> invert and carry in

Multiplexers are not low cost in any sense in many technologies, but it
doesn't matter.  Booth's algorithm doesn't use multiplexers.

-- 
Rick C