# Block Wiedemann algorithm

The Block Wiedemann algorithm for computing kernel vectors of a matrix over a finite field is a generalisation of an algorithm due to Don Coppersmith.

## Coppersmith's algorithm

Let M be an $n \times n$ square matrix over some finite field F, let $x_{\mathrm{base}}$ be a random vector of length n, and let $x = M x_{\mathrm{base}}$. Consider the sequence of vectors $S = [x, Mx, M^2x, \ldots]$ obtained by repeatedly multiplying the vector by the matrix M; let y be any other vector of length n, and consider the sequence of finite-field elements $S_y = [y \cdot x,\; y \cdot Mx,\; y \cdot M^2x, \ldots]$.

We know that the matrix M has a minimal polynomial; by the Cayley–Hamilton theorem we know that this polynomial is of degree (which we will call $n_0$) no more than n. Say $\sum_{r=0}^{n_0} p_r M^r = 0$. Then $\sum_{r=0}^{n_0} y \cdot (p_r (M^r x)) = 0$; so the minimal polynomial of the matrix annihilates the sequence $S$ and hence $S_y$.
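As a concrete illustration of this annihilation (with an assumed toy matrix, not one from the text), take the nilpotent matrix

$$M = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad M^2 = 0,$$

whose minimal polynomial is $x^2$ (so $n_0 = 2$ and $p_0 = p_1 = 0$, $p_2 = 1$). For any vectors $x$ and $y$ we then have $\sum_{r=0}^{n_0} p_r S_y[t+r] = S_y[t+2] = y \cdot M^{t+2} x = 0$ for every $t$, so the minimal polynomial annihilates the sequence from every starting position.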

But the Berlekamp–Massey algorithm allows us to calculate relatively efficiently some sequence $q_0 \ldots q_L$ with $\sum_{i=0}^L q_i S_y[i+r] = 0 \;\; \forall r$. Our hope is that this sequence, which by construction annihilates $S_y$, actually annihilates $S$; so we have $\sum_{i=0}^L q_i M^i x = 0$. We then take advantage of the initial definition of $x$ to say $M \sum_{i=0}^L q_i M^i x_{\mathrm{base}} = 0$ and so $\sum_{i=0}^L q_i M^i x_{\mathrm{base}}$ is a hopefully non-zero kernel vector of $M$.
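The whole scalar procedure can be sketched end to end over a toy field. The matrix, the field GF(13), and the fixed choices of $x_{\mathrm{base}}$ and $y$ below are illustrative assumptions (in practice both vectors are random and the field is much larger):

```python
p = 13  # a small prime field GF(p); real applications use far larger fields

def mat_vec(M, v):
    """Multiply matrix M by vector v over GF(p)."""
    return [sum(r * c for r, c in zip(row, v)) % p for row in M]

def berlekamp_massey(s):
    """Shortest connection polynomial C (C[0] == 1) over GF(p) with
    sum(C[i] * s[n - i] for i in 0..L) == 0 for every n >= L."""
    C, B, L, m, b = [1], [1], 0, 1, 1
    for n in range(len(s)):
        d = sum(C[i] * s[n - i] for i in range(L + 1)) % p
        if d == 0:
            m += 1
            continue
        T = C[:]
        coef = d * pow(b, p - 2, p) % p            # d / b in GF(p)
        C = C + [0] * (m + len(B) - len(C))
        for i, Bi in enumerate(B):
            C[i + m] = (C[i + m] - coef * Bi) % p
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C, L

# M is singular over GF(13): every row is a multiple of (1, 2, 3)
M = [[1, 2, 3], [2, 4, 6], [3, 6, 9]]
n = 3
x_base = [1, 0, 0]   # stand-in for a random vector
y = [1, 0, 0]        # stand-in for a random projection vector
x = mat_vec(M, x_base)

# S_y[t] = y . (M^t x); 2n terms suffice for Berlekamp-Massey
S_y, v = [], x[:]
for _ in range(2 * n):
    S_y.append(sum(yi * vi for yi, vi in zip(y, v)) % p)
    v = mat_vec(M, v)

C, L = berlekamp_massey(S_y)
q = [C[L - i] for i in range(L + 1)]   # reversed: sum_i q[i] * S_y[r + i] == 0

# hopefully non-zero kernel vector: w = sum_i q[i] * M^i x_base
w, v = [0] * n, x_base[:]
for qi in q:
    w = [(wi + qi * vi) % p for wi, vi in zip(w, v)]
    v = mat_vec(M, v)

print(w, mat_vec(M, w))   # → [0, 2, 3] [0, 0, 0]
```

Here the recovered annihilator of $S_y$ happens to annihilate $S$ as well, so $w$ really does satisfy $Mw = 0$; with an unlucky choice of $y$ this can fail, and one retries with fresh random vectors.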

## The Block Wiedemann algorithm

The natural implementation of sparse matrix arithmetic on a computer makes it easy to compute the sequence S in parallel for a number of vectors equal to the width of a machine word; indeed, it will normally take no longer to compute the sequence for that many vectors than for one. If you have several processors, each can compute the sequence S for a different set of random vectors in parallel.
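Over GF(2) the word-parallel trick is especially direct: one machine word per matrix row holds the corresponding entry of 64 different vectors, so a single sparse multiply (a few XORs per row) advances all 64 sequences at once. The sparse-row representation and names below are illustrative assumptions:

```python
import random

W = 64  # machine-word width: number of GF(2) vectors processed in parallel

def sparse_block_mul(rows, X):
    """Multiply a sparse GF(2) matrix (rows = lists of the column indices of
    its 1-entries) by a block X of W packed column vectors. XOR-ing whole
    words multiplies by all W vectors simultaneously."""
    out = []
    for cols in rows:
        acc = 0
        for j in cols:
            acc ^= X[j]
        out.append(acc)
    return out

# tiny example: a 4x4 sparse matrix over GF(2)
rows = [[0, 2], [1], [0, 3], [2, 3]]
random.seed(1)
X = [random.getrandbits(W) for _ in range(4)]   # 64 random vectors, packed

# the sequence S = [X, MX, M^2 X, ...] for all 64 vectors at once
S = [X]
for _ in range(8):
    S.append(sparse_block_mul(rows, S[-1]))
```

Bit $k$ of each word traces out the sequence for the $k$-th vector, exactly as the single-vector multiply would.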

It turns out, by a generalisation of the Berlekamp–Massey algorithm to provide a sequence of small matrices, that you can take the sequence produced for a large number of vectors and generate a kernel vector of the original large matrix. You need to compute $y_i \cdot M^t x_j$ for some $i = 0 \ldots i_{\max}$, $j = 0 \ldots j_{\max}$, $t = 0 \ldots t_{\max}$, where $i_{\max}$, $j_{\max}$, $t_{\max}$ need to satisfy $t_{\max} > \frac{d}{i_{\max}} + \frac{d}{j_{\max}} + O(1)$ and $y_i$ are a series of vectors of length n; but in practice you can take $y_i$ as a sequence of unit vectors and simply write out the first $i_{\max}$ entries in your vectors at each time t.
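The data this first stage produces — the sequence of small $i_{\max} \times j_{\max}$ matrices $a_t$ with $a_t[i][j] = y_i \cdot M^t x_j$ — can be sketched as follows. The field GF(13), the sizes, and the random toy matrix are illustrative assumptions; with the $y_i$ taken as unit vectors, $a_t$ is simply the first $i_{\max}$ entries of each current vector:

```python
import random

p = 13                        # small stand-in field GF(p)
i_max, j_max, t_max = 2, 2, 8
n = 4                         # toy dimension; real matrices are huge and sparse

random.seed(0)
M = [[random.randrange(p) for _ in range(n)] for _ in range(n)]
X0 = [[random.randrange(p) for _ in range(n)] for _ in range(j_max)]  # start vectors

def mat_vec(M, v):
    """Multiply M by v over GF(p)."""
    return [sum(r * c for r, c in zip(row, v)) % p for row in M]

# with y_i the unit vectors, y_i . M^t x_j is just entry i of M^t x_j,
# so a[t] records the first i_max entries of each of the j_max vectors
a, X = [], [x[:] for x in X0]
for t in range(t_max):
    a.append([[X[j][i] for j in range(j_max)] for i in range(i_max)])
    X = [mat_vec(M, x) for x in X]   # advance every column vector one step
# a[0], a[1], ... is the matrix sequence fed to the generalised
# (matrix) Berlekamp-Massey step, which is not sketched here
```

The expensive part is the repeated sparse multiplies; the small matrices $a_t$ are all the later stages ever see.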

## References

Villard's 1997 research report 'A study of Coppersmith's block Wiedemann algorithm using matrix polynomials' (available at [http://citeseer.ist.psu.edu/cache/papers/cs/4204/ftp:zSzzSzftp.imag.frzSzpubzSzCALCUL_FORMELzSzRAPPORTzSz1997zSzRR975.pdf/villard97study.pdf]; the cover material is in French but the content is in English) is a reasonable description.

Thomé's paper 'Subquadratic computation of vector generating polynomials and improvement of the block Wiedemann algorithm' (available at [http://citeseer.ist.psu.edu/rd/13850609%2C564537%2C1%2C0.25%2CDownload/http://citeseer.ist.psu.edu/cache/papers/cs/27081/http:zSzzSzwww.lix.polytechnique.frzSzLabozSzEmmanuel.ThomezSzpubliszSzjsc.pdf/subquadratic-computation-of-vector.pdf] ) uses a more sophisticated FFT-based algorithm for computing the vector generating polynomials, and describes a practical implementation with $i_{\max} = j_{\max} = 4$ used to compute a kernel vector of a $484603 \times 484603$ matrix of entries modulo $2^{607}-1$, and hence to compute discrete logarithms in the field $GF(2^{607})$.

