Linear Feedback Shift Registers for the Uninitiated, Part II: libgf2 and Primitive Polynomials

Jason Sachs · July 17, 2017

Last time, we looked at the basics of LFSRs and finite fields formed by the quotient ring \( GF(2)[x]/p(x) \).

LFSRs can be described by a list of binary coefficients, sometimes referred to as the polynomial, since they correspond directly to the characteristic polynomial of the quotient ring.

Today we’re going to look at how to perform certain practical calculations in these finite fields. I maintain a Python library on Bitbucket called libgf2 (which nobody else actually uses) that can help with these calculations.

A Quick Introduction to libgf2

In case you’re feeling iframe-deficient, here’s that animation of the LFSR based on \( H_{25} = GF(2)[x]/(x^5 + x^2 + 1) \):

The 25 in \( H_{25} \) denotes the polynomial coefficients in hex form: 0x25 = 0b100101 so there are taps on shift cells 2 (\( x^2 \)) and 0 (\( x^0 = 1 \)).
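Decoding that hex shorthand is mechanical: each set bit of the coefficient mask is one term of the polynomial. A quick Python 3 sketch (poly_terms is a made-up helper, not part of libgf2):

```python
# Decoding the hex shorthand (Python 3 sketch; poly_terms is a made-up helper,
# not part of libgf2): each set bit of the coefficient mask is one term.
def poly_terms(mask):
    """Return the exponents with nonzero coefficients, highest first."""
    return [k for k in range(mask.bit_length() - 1, -1, -1) if (mask >> k) & 1]

print(poly_terms(0x25))      # [5, 2, 0], i.e. x^5 + x^2 + 1
print(poly_terms(0x12345))   # the degree-16 polynomial behind H_12345
```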

We could just as easily show \( H_{12345}: \)

One of the questions we will answer today is how to tell whether these LFSRs are maximal-length. For \( H_{25} \) it’s easy: we can just look at the output and count to see that it has a period of \( 2^5 - 1 = 31 \). For \( H_{12345} \) we could do the same thing, but we’d be waiting a few hours to go through the full period \( 2^{16} - 1 = 65535 \).
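For sizes this small, the brute-force count is easy to sketch in code. Here’s a Python 3 period counter (a hand-rolled Galois LFSR step, not libgf2’s API):

```python
# Brute-force period counting (Python 3 sketch; hand-rolled, not libgf2's API).
def galois_step(state, poly):
    """One Galois LFSR step: multiply the state by x mod poly."""
    state <<= 1
    if state >> (poly.bit_length() - 1):   # degree-n term appeared: reduce
        state ^= poly
    return state

def period(poly):
    """Number of steps until state 1 recurs: 2**n - 1 iff poly is primitive."""
    state, count = galois_step(1, poly), 1
    while state != 1:
        state, count = galois_step(state, poly), count + 1
    return count

print(period(0x25))   # 31
```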

When you’re trying to do other calculations in normal arithmetic like finding the area of an 8x12 rectangle, or the number of different possible values that can be contained in an N-bit number, you don’t turn to counting. It’s slow and impractical. The same thing is true in finite fields; there are much easier ways of doing arithmetic like addition and multiplication and exponentiation.

So let’s use libgf2.gf2.GF2Element to help us out.

from libgf2.gf2 import GF2Element

e25 = GF2Element(1, 0x25)
e12345 = GF2Element(1, 0x12345)
e4567 = GF2Element(1, 0x4567)
print e25
print e12345
print e4567
print e25.coeffs
print e25.field
GF2Element(0b00001,0x25)
GF2Element(0b0000000000000001,0x12345)
GF2Element(0b00000000000001,0x4567)
1
GF2QuotientRing(0b100101)

The GF2Element class doesn’t contain much state, just a value coeffs and a finite field field. If you print out an instance of this class, the value is given in binary and the polynomial in hexadecimal. (Yeah, I know, inconsistent; whatever.) The coeffs value corresponds to the state of a Galois LFSR, and the field is the quotient ring that is isomorphic to the LFSR.

Here are many of the operations we can perform on a GF2Element:

# left-shift by k = multiplication by x^k
x3 = e25 << 3
print "x3           = ", x3
x22 = e25 << 22
print "x22          = ", x22
# right-shift by k = multiplication by x^(-k)
x1 = x3 >> 2
print "x1 = x3 >> 2 = ", x1
# addition
print "x1 + x3      = ", x1 + x3
print "x1 + 0x11    = ", x1 + 0x11    # look, we can pass in integer literals!
# subtraction (which behaves identically to addition)
print "x1 - x3      = ", x1 - x3
# multiplication
print "x1 * x3      = ", x1 * x3
print "x22 * x3     = ", x22 * x3
# inverse
print "x1 ^ -1      = ", x1.inv
# division
print "x3 / x1      = ", x3/x1
print "0x11 / x1    = ", 0x11 / x1
# raise to an integer power
print "x3 ** 4      = ", x3 ** 4
# equality and inequality
print "x1 == x3", x1 == x3
print "x1 != e25", x1 != e25
print "x1 == e25 << 32", x1 == e25 << 32

# Python automatically provides augmented assignment
# for these operators (although really it just creates a new value object)
# - addition and subtraction
# - multiplication 
# - left-shift and right-shift
c = e25 << 15
ca = [c]
print "c = %s, ca = %s" % (c, ca)
c <<= 11
c *= x3
c **= 3
print "c = %s, ca = %s" % (c, ca)
c >>= 25
print c
x3           =  GF2Element(0b01000,0x25)
x22          =  GF2Element(0b10101,0x25)
x1 = x3 >> 2 =  GF2Element(0b00010,0x25)
x1 + x3      =  GF2Element(0b01010,0x25)
x1 + 0x11    =  GF2Element(0b10011,0x25)
x1 - x3      =  GF2Element(0b01010,0x25)
x1 * x3      =  GF2Element(0b10000,0x25)
x22 * x3     =  GF2Element(0b11001,0x25)
x1 ^ -1      =  GF2Element(0b10010,0x25)
x3 / x1      =  GF2Element(0b00100,0x25)
0x11 / x1    =  GF2Element(0b11010,0x25)
x3 ** 4      =  GF2Element(0b01110,0x25)
x1 == x3 False
x1 != e25 True
x1 == e25 << 32 True
c = GF2Element(0b11111,0x25), ca = [GF2Element(0b11111,0x25)]
c = GF2Element(0b11001,0x25), ca = [GF2Element(0b11111,0x25)]
GF2Element(0b00001,0x25)

Some of these are fairly simple to implement (much easier than “regular” integer arithmetic, actually!), and I’ll go into more detail later in this article.

What does multiplication in this finite field really mean? Think about it for a second: \( x^3 (x^4 + x^2 + 1) = x^4 + x^3 + 1 \) is kind of bizarre; we’re multiplying polynomials and then doing some weird modulo operation. That’s abstract algebra for you: we’re just going to follow the rules and try to forget about finding real-world significance in the result.

If you do need to cling to significance and meaning, it might be easier if you think about each element as being a power of the generator \( x \); for example \( x^4 + x^2 + 1 \) is just another way of stating \( x^{22} \pmod {p(x)} \) and \( x^4 + x^3 + 1 \) is just another way of stating \( x^{25} \pmod{p(x)} \) — so if you multiply \( x^{22} \) by \( x^3 \), of course you’ll get \( x^{25} \).

Or think of the LFSR state: multiplication by some monomial \( x^d \) is equivalent to advancing the state \( S[k] \) forward to \( S[k+d] \). If we want to advance the LFSR by one step, we shift left and conditionally XOR; if we want to advance it forward by \( d \) steps, we need to multiply the state polynomial by \( x^d \) in this finite field. This vaguely smells like a z-transform, but in finite-field-land instead of the discrete-time continuous-value areas we typically work with in digital signal processing.
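To make that concrete, here’s a Python 3 sketch (mulmod and powmod are hand-rolled helpers, not libgf2’s API) showing that advancing a Galois LFSR \( d \) steps one at a time matches a single multiplication by \( x^d \):

```python
# Advancing an LFSR d steps vs. one multiplication by x^d
# (Python 3 sketch; mulmod/powmod are hand-rolled, not libgf2's API).
def mulmod(a, b, m):
    """a * b mod m over GF(2), all polynomials as coefficient bit vectors."""
    n, c = m.bit_length() - 1, 0
    while a:
        if a & 1:
            c ^= b          # accumulate b * (this power of x)
        b <<= 1             # multiply b by x...
        if b >> n:
            b ^= m          # ...reducing mod m when the degree-n term appears
        a >>= 1
    return c

def powmod(a, k, m):
    """a ** k mod m over GF(2), by repeated squaring."""
    c = 1
    while k:
        if k & 1:
            c = mulmod(c, a, m)
        a = mulmod(a, a, m)
        k >>= 1
    return c

m, d = 0x25, 22
s = 0b00001                 # LFSR state S[0] = 1
for _ in range(d):          # advance one step at a time
    s = mulmod(s, 0b10, m)
print(bin(s))                                        # 0b10101, matching x22 above
print(s == mulmod(0b00001, powmod(0b10, d, m), m))   # one jump of d steps: True
```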

Enough about multiplication.

In Part I we talked briefly about another fairly abstract aspect of finite fields: primitive polynomials. These are polynomials \( p(x) \) that, when used as the characteristic polynomial of a quotient ring, create a finite field that has \( x \) as a generator and allows \( x^k \) to cycle through all the nonzero elements in the finite field. It’s hard to ascribe any inherent significance to a polynomial that is primitive; it just follows certain rules, and once we understand those rules and we can identify primitive polynomials, we get maximal-length LFSRs for free.

Determining Whether a Polynomial is Primitive

In practice, telling whether a particular degree-\( N \) polynomial is primitive is easy: all you have to show is that the smallest positive value of \( m \) that produces \( x^m = 1 \) is \( m = 2^N - 1 \).

So first we try \( x^{2^N - 1} \):

print e25 << 31
print e12345 << 65535
print e4567 << 16383
GF2Element(0b00001,0x25)
GF2Element(0b0000000000000001,0x12345)
GF2Element(0b10000011100100,0x4567)

Because the results for the first two evaluations show that \( x^{2^N-1} = 1 \), the fields \( H_{25} \) and \( H_{12345} \) might have primitive polynomials; \( H_{4567} \) definitely does not. In fact, in \( H_{4567} \) the element \( x \) has an order of only 204:

for k in xrange(1,1000):
    if (e4567 << k) == e4567:
        print k
        break
204

Showing that a left shift by \( 2^N-1 \) yields 1 is necessary for a primitive polynomial, but not sufficient, because there might be a smaller value of \( m \) for which \( x^m = 1 \). But if there is, then \( m \) divides \( 2^N-1 \), so we just have to check some of its divisors. 31 is prime, so we know \( H_{25} \) has a primitive polynomial. But \( 2^{16} - 1 = 65535 = 3 \times 5 \times 17 \times 257 \), so we have to check certain divisors of 65535, basically its prime cofactors, which are the numbers you get if you leave out each of its prime factors one at a time:

for p in [3,5,17,257]:
    c = 65535/p
    print "x^%d = %s" % (c, e12345 << c)
x^21845 = GF2Element(0b1010101011110001,0x12345)
x^13107 = GF2Element(0b1111010011010111,0x12345)
x^3855 = GF2Element(0b0100001011110000,0x12345)
x^255 = GF2Element(0b0111000001000011,0x12345)

None of these are equal to 1, so that means there can’t be any period smaller than \( 2^{16}-1 \), and therefore \( H_{12345} \) corresponds to a maximal-length LFSR. Compare that with \( H_{12383} \) which has a period of \( 257 \times 17 = 4369 \):

ek = GF2Element(1,0x12383)
print ek << 65535
print ek << (257*17)
print ek << 257
print ek << 17
GF2Element(0b0000000000000001,0x12383)
GF2Element(0b0000000000000001,0x12383)
GF2Element(0b0101100000001010,0x12383)
GF2Element(0b0100011100000110,0x12383)

Not a maximal-length period, so it’s not a primitive polynomial.

In libgf2, there’s a function called checkPeriod which will do this for you; you give it polynomial coefficients in hexadecimal form and an expected period \( m \), and one of the following happens:

  • if \( x^m \neq 1 \) then it returns 0 — the polynomial is not primitive.
  • if \( x^c = 1 \) for some cofactor \( c = m/p \) for a prime factor \( p \), then it returns \( c \) — the polynomial is not primitive.
  • if \( x^m = 1 \), and \( x^c \neq 1 \) for all cofactors \( c \), then it returns \( m \) — the polynomial is primitive.

So the polynomial is primitive if checkPeriod(poly, m) == m.
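If you want to see what checkPeriod boils down to, here’s a self-contained Python 3 sketch (the names mulmod, powmod, prime_factors, and is_primitive are mine, not libgf2’s API; naive trial-division factoring is fine at these sizes):

```python
# What checkPeriod boils down to (Python 3 sketch; these helper names are
# mine, not libgf2's API; trial-division factoring is fine at these sizes).
def mulmod(a, b, m):
    """a * b mod m over GF(2), polynomials as coefficient bit vectors."""
    n, c = m.bit_length() - 1, 0
    while a:
        if a & 1:
            c ^= b
        b <<= 1
        if b >> n:
            b ^= m
        a >>= 1
    return c

def powmod(a, k, m):
    """a ** k mod m over GF(2), by repeated squaring."""
    c = 1
    while k:
        if k & 1:
            c = mulmod(c, a, m)
        a = mulmod(a, a, m)
        k >>= 1
    return c

def prime_factors(n):
    """Distinct prime factors of n, by trial division."""
    out, p = [], 2
    while p * p <= n:
        if n % p == 0:
            out.append(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        out.append(n)
    return out

def is_primitive(poly):
    """True iff x has order 2**n - 1 in GF(2)[x]/poly."""
    m = (1 << (poly.bit_length() - 1)) - 1
    if powmod(0b10, m, poly) != 1:               # x^m must be 1...
        return False
    return all(powmod(0b10, m // p, poly) != 1   # ...and no cofactor may give 1
               for p in prime_factors(m))

print([hex(p) for p in (0x25, 0x4567, 0x12345, 0x12383) if is_primitive(p)])
```

The last line picks out 0x25 and 0x12345, agreeing with the checkPeriod results below.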

from libgf2.gf2 import GF2Polynomial, checkPeriod

for poly in [0x25, 0x4567, 0x12345, 0x12383]:
    p = GF2Polynomial(poly)
    expectedPeriod = (1<<p.degree)-1
    print '0x%5x: checkPeriod(poly, %d) = %d' % (poly, expectedPeriod, checkPeriod(poly,expectedPeriod))
0x   25: checkPeriod(poly, 31) = 31
0x 4567: checkPeriod(poly, 16383) = 0
0x12345: checkPeriod(poly, 65535) = 65535
0x12383: checkPeriod(poly, 65535) = 21845

Other important requirements, necessary (but not sufficient) for primitive polynomials \( p(x) \) in \( GF(2)[x] \), derive from the fact that all primitive polynomials are irreducible, which means they cannot be factored into polynomials of smaller degree.

  • The coefficient \( a_0 = 1 \) is always present — for example \( x^4 + x \) is not primitive since it has a factor \( x \) and is therefore not irreducible;
  • There are always an odd number of nonzero coefficients — for example \( x^4 + 1 \) is not primitive since it has two nonzero coefficients. There are at least two ways of proving that this is the case:
    • One is that if you plug in \( x=1 \) and evaluate the polynomial, then you find that \( p(1) = 0 \), which means that \( x=1 \) is a root, which in turn means that \( x-1 \) is a factor of \( p(x) \) and therefore it’s not irreducible.
    • The other has to do with Galois LFSRs. If the state has 0 in its most significant bit, then on the next step the 0 shifts out, another 0 shifts into the least significant bit, and the number of 1 bits stays the same. If the state has 1 in its most significant bit, then on the next step the 1 shifts out, another 0 shifts into the least significant bit, and all the nonzero terms of the polynomial except the leading one have their bits flipped. So if the polynomial has an even number of nonzero coefficients, the LFSR loses a 1 and flips an odd number of bits, and the change in the number of state bits that are 1 is even. For example, for a polynomial in bitwise form of 10111, the state sequence starting with 0001 is 0001, 0010, 0100, 1000, 0111, 1110, 1011, 0001: every state has an odd number of 1 bits, so the LFSR never reaches states with an even number of 1 bits like 0011 or 1001, and therefore can’t cover all of the possible nonzero values. It’s not a maximal-length LFSR.
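We can watch that parity invariant in action with a few lines of Python 3 (a hand-rolled Galois LFSR step, not libgf2’s API), using the 10111 example:

```python
# Watching the parity invariant for the 10111 example
# (Python 3 sketch, not libgf2; bin(s).count("1") is just a popcount).
poly = 0b10111              # x^4 + x^2 + x + 1: four nonzero coefficients
state, states = 0b0001, []
for _ in range(8):
    states.append(state)
    state <<= 1             # one Galois LFSR step: shift left...
    if state >> 4:
        state ^= poly       # ...and reduce when the degree-4 term appears
print(["{:04b}".format(s) for s in states])
print(all(bin(s).count("1") % 2 == 1 for s in states))   # True: parity never changes
```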

Summary: To determine whether a polynomial is primitive:

  • use checkPeriod(poly, m) with m = (1<<n)-1 which returns m if poly is primitive.
  • Equivalently, check (perhaps using GF2Element(1, poly) << k to compute \( x^k \)) that \( x^m = 1 \) for \( m = 2^N-1 \) and that \( x^c \neq 1 \) for each of the cofactors \( c = m/p \), where \( p \) is a prime factor of \( m \).
  • (by inspection) \( p(x) \) must have a constant coefficient of 1, otherwise it is not irreducible and not primitive
  • (by inspection) \( p(x) \) must have an odd number of nonzero coefficients, otherwise it is not irreducible and not primitive

Finding Primitive Polynomials of Degree N

Actually finding a primitive polynomial of degree \( N \) is just a matter of testing each of the possibilities; there are faster ways to enumerate the primitive polynomials of degree N once you find one of them, but you can certainly use brute force, and it’s very easy to do so. As long as \( N \) is small, this is fairly fast, too.

The number of primitive polynomials of degree \( N \) is \( \phi(2^N - 1)/N \) where \( \phi(m) \) is Euler’s totient function. For \( N=6 \) the number of primitive polynomials is \( 63 \times \frac{2}{3} \times \frac{6}{7} \times \frac{1}{6} = 6 \).
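You can cross-check those counts against the totient formula directly. A Python 3 sketch (totient is a made-up helper using trial division, not part of libgf2):

```python
# Cross-checking the count phi(2**N - 1)/N (Python 3 sketch; totient is a
# made-up helper using trial division, not part of libgf2).
def totient(n):
    """Euler's totient: n times the product of (1 - 1/p) over prime factors p."""
    result, p = n, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:
        result -= result // n
    return result

print([totient((1 << N) - 1) // N for N in range(2, 9)])   # [1, 2, 2, 6, 6, 18, 16]
```

These match the counts from enumeration below.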

def enumeratePrimPoly(N):
    expectedPeriod = (1<<N) - 1
    for poly in xrange((1<<N)+1, 1<<(N+1), 2):
        if checkPeriod(poly, expectedPeriod) == expectedPeriod:
            yield poly
            
for N in [2,3,4,5,6,7,8]:            
    nN = 0
    for poly in enumeratePrimPoly(N):
        nN += 1
        print "%d %3x %s" % (N,poly, "{0:b}".format(poly)) 
    print "---- degree %d: there are %d primitive polynomials" % (N, nN)    
2   7 111
---- degree 2: there are 1 primitive polynomials
3   b 1011
3   d 1101
---- degree 3: there are 2 primitive polynomials
4  13 10011
4  19 11001
---- degree 4: there are 2 primitive polynomials
5  25 100101
5  29 101001
5  2f 101111
5  37 110111
5  3b 111011
5  3d 111101
---- degree 5: there are 6 primitive polynomials
6  43 1000011
6  5b 1011011
6  61 1100001
6  67 1100111
6  6d 1101101
6  73 1110011
---- degree 6: there are 6 primitive polynomials
7  83 10000011
7  89 10001001
7  8f 10001111
7  91 10010001
7  9d 10011101
7  a7 10100111
7  ab 10101011
7  b9 10111001
7  bf 10111111
7  c1 11000001
7  cb 11001011
7  d3 11010011
7  d5 11010101
7  e5 11100101
7  ef 11101111
7  f1 11110001
7  f7 11110111
7  fd 11111101
---- degree 7: there are 18 primitive polynomials
8 11d 100011101
8 12b 100101011
8 12d 100101101
8 14d 101001101
8 15f 101011111
8 163 101100011
8 165 101100101
8 169 101101001
8 171 101110001
8 187 110000111
8 18d 110001101
8 1a9 110101001
8 1c3 111000011
8 1cf 111001111
8 1e7 111100111
8 1f5 111110101
---- degree 8: there are 16 primitive polynomials

Want to find a degree-256 primitive polynomial? You can certainly do it with checkPeriod, although it takes a bit longer to run for larger numbers (we can speed it up by caching the cofactors for a given polynomial degree):

import time
import numpy as np
import libgf2.gf2

def parity65536(x):
    """ computes parity for numbers less than 65536 """
    x ^= x >> 8
    x ^= x >> 4
    x ^= x >> 2  
    x ^= x >> 1
    return x & 1

def primitives_hunt(n, factorize = libgf2.gf2.factorize, kmax=65536):
    p = (1 << n) - 1
    t0 = time.time()
    factors, multiplicity = factorize(p, returnMultiplicity=True)
    c = libgf2.gf2._calculateCofactors(factors, multiplicity)
    t0b = time.time()
    for k in xrange(1,kmax,2):
        poly = (1 << n) + k
        # save time: polynomials have to have an odd number of nonzero coefficients
        # (so aside from the leading term we want 
        # the parity of the remaining coefficients to be even)
        if parity65536(k) == 1:
            continue
        if checkPeriod(poly, p, c) == p:
            print "Aha, we found one:"
            print 'n={0:3d} {1:x}'.format(n,poly)
            break
    t1 = time.time()
    print "Elapsed time: %.3f seconds (factoring: %.3f s, polynomial search: %.3f s)" % (t1-t0, t0b-t0, t1-t0b)
    
for n in [16,32,48,64,80,96,112,128,144,152]:
    primitives_hunt(n)
Aha, we found one:
n= 16 1002d
Elapsed time: 0.008 seconds (factoring: 0.000 s, polynomial search: 0.008 s)
Aha, we found one:
n= 32 1000000af
Elapsed time: 0.093 seconds (factoring: 0.000 s, polynomial search: 0.093 s)
Aha, we found one:
n= 48 10000000000b7
Elapsed time: 0.299 seconds (factoring: 0.001 s, polynomial search: 0.299 s)
Aha, we found one:
n= 64 1000000000000001b
Elapsed time: 0.108 seconds (factoring: 0.004 s, polynomial search: 0.104 s)
Aha, we found one:
n= 80 1000000000000000000af
Elapsed time: 1.084 seconds (factoring: 0.067 s, polynomial search: 1.018 s)
Aha, we found one:
n= 96 10000000000000000000000dd
Elapsed time: 1.757 seconds (factoring: 0.083 s, polynomial search: 1.674 s)
Aha, we found one:
n=112 1000000000000000000000000014f
Elapsed time: 3.461 seconds (factoring: 0.273 s, polynomial search: 3.188 s)
Aha, we found one:
n=128 100000000000000000000000000000087
Elapsed time: 11.488 seconds (factoring: 9.791 s, polynomial search: 1.697 s)
Aha, we found one:
n=144 1000000000000000000000000000000000095
Elapsed time: 3.756 seconds (factoring: 0.840 s, polynomial search: 2.916 s)
Aha, we found one:
n=152 10000000000000000000000000000000000004d
Elapsed time: 7.669 seconds (factoring: 5.813 s, polynomial search: 1.856 s)

Prime factorization is the bottleneck here, and without trying to sidetrack this article, let’s see if we can speed it up a little by using the primefac module:

import primefac

def primefac_factorize(n, returnMultiplicity=False):
    fdict = primefac.factorint(n)
    if not returnMultiplicity:
        return [k**fdict[k] for k in sorted(fdict.keys())] 
    factors=[]
    multiplicities=[]
    for p in sorted(fdict.keys()):
        factors.append(p)
        multiplicities.append(fdict[p])
    return factors, multiplicities
for j in xrange(1,17):
    n = j*16
    primitives_hunt(n, factorize=primefac_factorize)
Aha, we found one:
n= 16 1002d
Elapsed time: 0.009 seconds (factoring: 0.000 s, polynomial search: 0.009 s)
Aha, we found one:
n= 32 1000000af
Elapsed time: 0.092 seconds (factoring: 0.000 s, polynomial search: 0.092 s)
Aha, we found one:
n= 48 10000000000b7
Elapsed time: 0.297 seconds (factoring: 0.001 s, polynomial search: 0.296 s)
Aha, we found one:
n= 64 1000000000000001b
Elapsed time: 0.116 seconds (factoring: 0.008 s, polynomial search: 0.108 s)
Aha, we found one:
n= 80 1000000000000000000af
Elapsed time: 1.045 seconds (factoring: 0.005 s, polynomial search: 1.040 s)
Aha, we found one:
n= 96 10000000000000000000000dd
Elapsed time: 1.690 seconds (factoring: 0.007 s, polynomial search: 1.683 s)
Aha, we found one:
n=112 1000000000000000000000000014f
Elapsed time: 3.431 seconds (factoring: 0.092 s, polynomial search: 3.338 s)
Aha, we found one:
n=128 100000000000000000000000000000087
Elapsed time: 1.775 seconds (factoring: 0.081 s, polynomial search: 1.693 s)
Aha, we found one:
n=144 1000000000000000000000000000000000095
Elapsed time: 3.068 seconds (factoring: 0.015 s, polynomial search: 3.053 s)
Aha, we found one:
n=160 1000000000000000000000000000000000000002d
Elapsed time: 3.053 seconds (factoring: 1.556 s, polynomial search: 1.497 s)
Aha, we found one:
n=176 1000000000000000000000000000000000000000000bd
Elapsed time: 6.787 seconds (factoring: 1.466 s, polynomial search: 5.321 s)
Aha, we found one:
n=192 100000000000000000000000000000000000000000000015d
Elapsed time: 10.905 seconds (factoring: 0.314 s, polynomial search: 10.590 s)
Aha, we found one:
n=208 100000000000000000000000000000000000000000000000001d3
Elapsed time: 17.109 seconds (factoring: 1.001 s, polynomial search: 16.108 s)
Aha, we found one:
n=224 1000000000000000000000000000000000000000000000000000001b5
Elapsed time: 23.060 seconds (factoring: 5.346 s, polynomial search: 17.715 s)
Aha, we found one:
n=240 1000000000000000000000000000000000000000000000000000000000129
Elapsed time: 18.134 seconds (factoring: 2.668 s, polynomial search: 15.466 s)
Aha, we found one:
n=256 10000000000000000000000000000000000000000000000000000000000000425
Elapsed time: 142.062 seconds (factoring: 90.840 s, polynomial search: 51.222 s)

If you really need an LFSR with more than 256 bits, I hope you know what you’re doing.

Implementation Details of Simple Arithmetic in libgf2

We’ll end today’s discussion with some more details about arithmetic. The libgf2 library manipulates polynomials in bit vector form, so the polynomial \( p(x) = x^7 + x^3 + 1 \) is represented in binary as the integer m = 0b10001001.

Let’s define \( a = x^4 + x + 1 \) (10011) and \( b = x^5 + x^2 + x + 1 \) (100111) just for some samples.

  • Addition and subtraction — these are just XORs of the corresponding bit vectors:

      a  10011
      b 100111
    ----------
    a+b 110100
    

    We’ll consider this operation and any other basic operation with \( n \)-bit binary numbers as taking some time \( t \), so addition and subtraction each take \( t \). For a fixed cap on bit size (for example, using fixed-length integer types like uint64_t), this is a constant, otherwise it’s \( O(n) \) for arbitrary-precision integer arithmetic.

  • Multiplication — we can use a variant of Russian Peasant Multiplication to multiply elements in \( GF(2)[x]/p(x) \). Here’s the code from libgf2:

    def _gf2mulmod(a,b,m):
        """
        Computes ``a * b mod m``.
        *NOTE*: Does *not* check whether `a` and `b` are both
        smaller in degree than `m`.
    
        Parameters
        ----------
        a, b, m : integer
            Polynomial coefficient bit vectors.
            Polynomials `a` and `b` should be smaller degree than `m`.
    
        Returns
        -------
        c : integer
            Polynomial coefficient bit vector of ``c = a * b mod m``.    
        """
        c = 0
        while a > 0:
            if (a & 1) != 0:
                c ^= b
            b <<= 1
            b2 = b ^ m
            if b2 < b:
                b = b2
            a >>= 1
        return c
    

    Essentially, we use one of the inputs (a) as an on/off gate for each bit, and at each step we update the state of an LFSR by shifting the other input (b) left, essentially multiplying the Galois field element by \( x \) each time. This has a time per loop iteration of \( 5t+t_b \) worst-case (\( 4.5t+t_b \) average), where \( t_b \) is the time it takes to check the least-significant bit of a number. The number of iterations is \( n \) worst-case, \( n/2 \) on average, so we’re at \( n(5t+t_b) \) worst-case, \( \frac{1}{2}n(4.5t+t_b) \) on average. This is \( O(n^2) \) for arbitrary-length numbers.

    In our example we can see what’s going on by adding some print statements, and I’ll also annotate the source code a bit:

def mulmod_print(a,b,m,fmt):
    c = 0
    while a > 0:
        if (a & 1) != 0:  # check next bit of a
            c ^= b 
        a >>= 1
        b <<= 1     # multiply b(x) by x
                    # (shift left and conditional XOR with characteristic polynomial)
        b2 = b ^ m      
        if b2 < b:  # Does XORing with m reduce the degree of b?
            b = b2  # If so, we needed to do it, otherwise forget about it.
        print fmt.format(a=a,b=b,c=c)
    return c

c = mulmod_print(0b10011, 0b100111, 0b10001001,
                 fmt="a={a:5b} b={b:07b} c={c:07b}")
a= 1001 b=1001110 c=0100111
a=  100 b=0010101 c=1101001
a=   10 b=0101010 c=1101001
a=    1 b=1010100 c=1101001
a=    0 b=0100001 c=0111101

  • Raising an element to a power — Let’s say we want to figure out \( x^{3141592653} \) in some finite field. We don’t actually multiply by \( x \) three billion times; that’s unnecessary. Russian Peasant Multiplication works here again; to compute \( a^k \) we keep squaring \( a \) repeatedly and multiplying the accumulated output \( c \) wherever \( k \) has a 1 bit. For example, if we want to compute \( a^{11} \), where 11 written in binary is 1011, this is just equal to \( (a^8 \cdot a^2 \cdot a^1) \).

    def _gf2powmod(a, k, m):
        """
        Computes `a` to the `k`th power, in the quotient ring GF(2)[x]/m(x).
        (see gf2.py for full docstring)  
        """
        c = 1
        while k > 0:
            if (k & 1) != 0:
                c = _gf2mulmod(c,a,m)
            a = _gf2mulmod(a,a,m)
            k >>= 1
        return c
    

    The inner loop runs approximately \( \log_2 k \) times, with two calls to _gf2mulmod worst-case and 1.5 calls on average, so the running time here is a worst-case of \( \log_2 k (2n(5t+t_b) + t) \) and an average of \( \log_2 k (\frac{3}{4}n(4.5t+t_b) + t) \). This is \( O(n^2 \log_2 k) \).

  • Shifting an element left (a << k) — this is just analogous to \( ax^k \), so for positive \( k \) we compute \( x^k \) and then multiply by \( a \); for negative \( k \) we compute \( (1/x)^{-k} \) and multiply by \( a \). The element \( 1/x \) is analogous to an LFSR starting with state bits 0000000...001 and running it backwards one step, which yields the polynomial coefficients as a bit vector shifted right one place. Running time is essentially the same as raising an element to a power, but with one additional multiply.

  • Shifting an element right (a >> k) — this is just a << (-k), analogous to \( ax^{-k} \).
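The \( 1/x \) trick is compact in code. A Python 3 sketch (mulmod is hand-rolled, not libgf2’s API): since \( p(x) \) has a constant term of 1, the bit vector of \( 1/x \) is just the polynomial’s coefficients with the constant bit cleared, shifted right one place.

```python
# The 1/x element and right-shifting (Python 3 sketch; mulmod is hand-rolled,
# not libgf2's API). Since p(x) has a constant term of 1, the bit vector of
# 1/x is p's coefficients with the constant bit cleared, shifted right once.
def mulmod(a, b, m):
    """a * b mod m over GF(2), polynomials as coefficient bit vectors."""
    n, c = m.bit_length() - 1, 0
    while a:
        if a & 1:
            c ^= b
        b <<= 1
        if b >> n:
            b ^= m
        a >>= 1
    return c

m = 0x25
x_inv = (m ^ 1) >> 1            # 0b10010, matching x1.inv in the earlier output
print(mulmod(0b10, x_inv, m))   # x * (1/x) = 1

a = 0b01000                     # x^3
for _ in range(2):              # a >> 2 is two multiplications by 1/x
    a = mulmod(a, x_inv, m)
print(bin(a))                   # 0b10: x^3 >> 2 = x, as in the earlier output
```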

Wrapup

We looked at the libgf2 library’s basic classes GF2QuotientRing and GF2Element, along with arithmetic operations to compute in the quotient ring \( GF(2)[x]/p(x) \).

We also saw how to determine whether a polynomial is a primitive polynomial using checkPeriod() — essentially, if it’s an \( n \)th degree polynomial, verify that \( x^m = 1 \) where \( m = 2^n-1 \) is the expected period, and verify that none of the prime factors \( p \) satisfy \( x^{m/p} = 1 \).

We showed how the basic arithmetic operations were implemented.

The next couple of articles look at some more interesting operations, among them multiplicative inverse and discrete logarithm.


© 2017 Jason M. Sachs, all rights reserved.


Previous post by Jason Sachs:
   Linear Feedback Shift Registers for the Uninitiated, Part I: Ex-Pralite Monks and Finite Fields
Next post by Jason Sachs:
   Linear Feedback Shift Registers for the Uninitiated, Part III: Multiplicative Inverse, and Blankinship's Algorithm

