Kernel (matrix)

In linear algebra, the kernel or null space (also nullspace) of a matrix A is the set of all vectors x for which Ax = 0. The kernel of a matrix with n columns is a linear subspace of n-dimensional Euclidean space.[1] The dimension of the null space of A is called the nullity of A.

When a matrix is viewed as a linear transformation, its null space is precisely the kernel of that mapping (i.e. the set of vectors that map to zero). For this reason, the kernel of a linear transformation between abstract vector spaces is sometimes referred to as the null space of the transformation.

Definition

The kernel of an m × n matrix A is the set

\text{Ker}(\mathbf{A}) = \left\{ \textbf{x}\in\textbf{R}^n : \mathbf{A}\textbf{x} = \textbf{0} \right\}\text{,}[2]

where 0 denotes the zero vector with m components. The matrix equation Ax = 0 is equivalent to a homogeneous system of linear equations:

\mathbf{A}\textbf{x}=\textbf{0} \;\;\Leftrightarrow\;\; \begin{aligned}
a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n &= 0 \\
a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n &= 0 \\
&\;\;\vdots \\
a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n &= 0.
\end{aligned}

From this viewpoint, the null space of A is the same as the solution set to the homogeneous system.

Example

Consider the matrix

\mathbf{A} = \begin{bmatrix}\,\,\,2 & 3 & 5 \\ -4 & 2 & 3\end{bmatrix}.

The null space of this matrix consists of all vectors (x, y, z) ∈ R^3 for which

\begin{bmatrix}\,\,\,2 & 3 & 5 \\ -4 & 2 & 3\end{bmatrix}\begin{bmatrix} x \\ y \\ z\end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}\text{.}

This can be written as a homogeneous system of linear equations involving x, y, and z:

\begin{align}
 2x + 3y + 5z &= 0, \\
-4x + 2y + 3z &= 0.
\end{align}

This system can be written in augmented matrix form as:

\left[\begin{array}{ccc|c} 2 & 3 & 5 & 0 \\ -4 & 2 & 3 & 0 \end{array}\right]\text{.}

Using Gauss–Jordan elimination, the augmented matrix reduces to:

\left[\begin{array}{ccc|c} 1 & 0 & 0.0625 & 0 \\ 0 & 1 & 1.625 & 0 \end{array}\right]\text{.}

Rewriting yields:

\begin{align}
x &= -0.0625z, \\
y &= -1.625z.
\end{align}

Now we can write the null space (the solution set of Ax = 0) in terms of the free variable z, where z ranges over all scalars:

\begin{bmatrix} x \\ y \\ z\end{bmatrix} = z \begin{bmatrix} -0.0625 \\ -1.625 \\ 1\end{bmatrix}.

The null space of A is precisely the set of solutions to these equations; in this case, it is a line through the origin in R^3.
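This worked example can be checked in exact arithmetic. The following sketch (Python, using the standard `fractions` module; the variable names are ours) multiplies A by the claimed null-space direction:

```python
from fractions import Fraction

# The example matrix A and the claimed null-space direction (-0.0625, -1.625, 1).
A = [[Fraction(2), Fraction(3), Fraction(5)],
     [Fraction(-4), Fraction(2), Fraction(3)]]
v = [Fraction(-1, 16), Fraction(-13, 8), Fraction(1)]

# A times v should be the zero vector in R^2.
Av = [sum(a * x for a, x in zip(row, v)) for row in A]
print(Av)  # [Fraction(0, 1), Fraction(0, 1)]
```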

Subspace properties

The null space of an m × n matrix A is a subspace of R^n. That is, the set Null(A) has the following three properties:

  1. Null(A) always contains the zero vector.
  2. If x ∈ Null(A) and y ∈ Null(A), then x + y ∈ Null(A).
  3. If x ∈ Null(A) and c is a scalar, then c x ∈ Null(A).

Here are the proofs:

  1. A0 = 0.
  2. If Ax = 0 and Ay = 0, then A(x + y) = Ax + Ay = 0 + 0 = 0.
  3. If Ax = 0 and c is a scalar, then A(cx) = c(Ax) = c0 = 0.
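These closure properties can also be observed numerically. A minimal Python sketch in exact arithmetic, reusing the example matrix from above (the helper `matvec` is ours):

```python
from fractions import Fraction

A = [[Fraction(2), Fraction(3), Fraction(5)],
     [Fraction(-4), Fraction(2), Fraction(3)]]

def matvec(M, x):
    # Plain matrix-vector product in exact arithmetic.
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

# x and y are both in Null(A): multiples of the direction (-1/16, -13/8, 1).
x = [Fraction(-1, 16), Fraction(-13, 8), Fraction(1)]
y = [2 * xi for xi in x]

xy = [a + b for a, b in zip(x, y)]   # x + y, property 2
cx = [Fraction(7) * xi for xi in x]  # c * x with c = 7, property 3
print(matvec(A, xy), matvec(A, cx))  # both are the zero vector
```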

Basis

The null space of a matrix is not affected by elementary row operations. This makes it possible to use row reduction to find a basis for the null space:

Input: An m × n matrix A.
Output: A basis for the null space of A.
  1. Use elementary row operations to put A in reduced row echelon form.
  2. Interpreting the reduced row echelon form as a homogeneous linear system, determine which of the variables x1, x2, ..., xn are free. Write equations for the dependent variables in terms of the free variables.
  3. For each free variable xi, choose the vector in the null space for which xi = 1 and the remaining free variables are zero. The resulting collection of vectors is a basis for the null space of A.
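The three steps above can be sketched as a short program. This is an illustrative exact-arithmetic implementation in Python (the function name `null_space_basis` is ours; as the section on numerical computation below warns, floating-point row reduction is not robust in practice):

```python
from fractions import Fraction

def null_space_basis(A):
    """Basis for the null space of A via reduced row echelon form,
    following the three steps above, in exact Fraction arithmetic."""
    R = [[Fraction(x) for x in row] for row in A]
    m, n = len(R), len(R[0])
    pivots, r = [], 0
    for c in range(n):
        # Step 1: find a pivot in column c at or below row r, then eliminate.
        p = next((i for i in range(r, m) if R[i][c] != 0), None)
        if p is None:
            continue
        R[r], R[p] = R[p], R[r]
        R[r] = [x / R[r][c] for x in R[r]]
        for i in range(m):
            if i != r and R[i][c] != 0:
                R[i] = [a - R[i][c] * b for a, b in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
        if r == m:
            break
    # Step 2: the non-pivot columns correspond to free variables.
    free = [c for c in range(n) if c not in pivots]
    basis = []
    # Step 3: one basis vector per free variable (set it to 1, others to 0).
    for f in free:
        v = [Fraction(0)] * n
        v[f] = Fraction(1)
        for row_i, pc in enumerate(pivots):
            v[pc] = -R[row_i][f]  # dependent variables from the pivot rows
        basis.append(v)
    return basis

print(null_space_basis([[2, 3, 5], [-4, 2, 3]]))
# [[Fraction(-1, 16), Fraction(-13, 8), Fraction(1, 1)]]
```

On the earlier 2 × 3 example this reproduces the single basis vector (-1/16, -13/8, 1) found by hand.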

For example, suppose that the reduced row echelon form of A is

\left[\begin{array}{rrrrrr}
1 & 0 & -3 & 0 &  2 & -8 \\
0 & 1 &  5 & 0 & -1 &  4 \\
0 & 0 &  0 & 1 &  7 & -9 \\
0 & 0 &  0 & 0 &  0 &  0
\end{array}\right]\text{.}

Then the solutions to the homogeneous system given in parametric form with x3, x5, and x6 as free variables are

\begin{align}
x_1 &= 3x_3 - 2x_5 + 8x_6 \\
x_2 &= -5x_3 + x_5 - 4x_6 \\
x_4 &= -7x_5 + 9x_6.
\end{align}

These equations can be rewritten as

\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \end{bmatrix} = x_3 \begin{bmatrix} 3 \\ -5 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} + x_5 \begin{bmatrix} -2 \\ 1 \\ 0 \\ -7 \\ 1 \\ 0 \end{bmatrix} + x_6 \begin{bmatrix} 8 \\ -4 \\ 0 \\ 9 \\ 0 \\ 1 \end{bmatrix} \text{.}

Therefore, the three vectors

\left[\!\! \begin{array}{r} 3 \\ -5 \\ 1 \\ 0 \\ 0 \\ 0 \end{array} \right],\;
\left[\!\! \begin{array}{r} -2 \\ 1 \\ 0 \\ -7 \\ 1 \\ 0 \end{array} \right],\;
\left[\!\! \begin{array}{r} 8 \\ -4 \\ 0 \\ 9 \\ 0 \\ 1 \end{array} \right]

are a basis for the null space of A.
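As a sanity check, one can verify that each of these three vectors is annihilated by the reduced row echelon matrix. A small Python sketch:

```python
# The reduced row echelon matrix from the example and the three claimed basis vectors.
R = [[1, 0, -3, 0,  2, -8],
     [0, 1,  5, 0, -1,  4],
     [0, 0,  0, 1,  7, -9],
     [0, 0,  0, 0,  0,  0]]
basis = [[ 3, -5, 1,  0, 0, 0],
         [-2,  1, 0, -7, 1, 0],
         [ 8, -4, 0,  9, 0, 1]]

# Each product R v must be the zero vector in R^4.
products = [[sum(r * x for r, x in zip(row, v)) for row in R] for v in basis]
print(products)  # [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
```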

Relation to the row space

Let A be an m by n matrix (i.e., A has m rows and n columns). The product of A and the n-dimensional vector x can be written in terms of the dot product of vectors as follows:

\mathbf{A}\textbf{x} = \begin{bmatrix} \textbf{a}_1 \cdot \textbf{x} \\ \textbf{a}_2 \cdot \textbf{x} \\ \vdots \\ \textbf{a}_m \cdot \textbf{x} \end{bmatrix}\text{.}

Here a1, ..., am denote the rows of the matrix A. It follows that x is in the null space of A if and only if x is orthogonal (or perpendicular) to each of the row vectors of A (because if the dot product of two vectors is equal to zero they are by definition orthogonal).

The row space of a matrix A is the span of the row vectors of A. By the above reasoning, the null space of A is the orthogonal complement to the row space. That is, a vector x lies in the null space of A if and only if it is perpendicular to every vector in the row space of A.

The dimension of the row space of A is called the rank of A, and the dimension of the null space of A is called the nullity of A. These quantities are related by the equation

\text{rank}(\mathbf{A}) + \text{nullity}(\mathbf{A}) = n\text{.}\,

The equation above is known as the rank-nullity theorem.
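The rank–nullity relation can be observed numerically. A Python sketch using NumPy on the earlier 2 × 3 example (the cutoff `1e-12` below is an arbitrary small tolerance of ours for deciding which singular values count as zero):

```python
import numpy as np

A = np.array([[ 2.0, 3.0, 5.0],
              [-4.0, 2.0, 3.0]])
n = A.shape[1]

rank = np.linalg.matrix_rank(A)           # dimension of the row space
s = np.linalg.svd(A, compute_uv=False)
nullity = n - int(np.sum(s > 1e-12))      # n minus the number of nonzero singular values

print(rank, nullity, rank + nullity == n)  # 2 1 True
```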

Nonhomogeneous equations

The null space also plays a role in the solution to a nonhomogeneous system of linear equations:

\begin{align}
a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n &= b_1 \\
a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n &= b_2 \\
&\;\;\vdots \\
a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n &= b_m.
\end{align}

If u and v are two solutions of the nonhomogeneous equation Ax = b, then

\mathbf{A}(\textbf{u}-\textbf{v}) = \mathbf{A}\textbf{u} - \mathbf{A}\textbf{v} = \textbf{b} - \textbf{b} = \textbf{0}\text{.}

Thus, the difference of any two solutions to the equation Ax = b lies in the null space of A.

It follows that any solution to the equation Ax = b can be expressed as the sum of a fixed solution v and an arbitrary element of the null space. That is, the solution set to the equation Ax = b is

\left\{ \textbf{v}+\textbf{x} \,:\, \textbf{x}\in\text{Null}(\mathbf{A})\, \right\}\text{,}

where v is any fixed vector satisfying Av = b. Geometrically, this says that the solution set to Ax = b is the translation of the null space of A by the vector v. See also Fredholm alternative.
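This structure of the solution set can be illustrated in Python with NumPy (the right-hand side b and the chosen null-space multiple below are arbitrary illustrative values):

```python
import numpy as np

A = np.array([[ 2.0, 3.0, 5.0],
              [-4.0, 2.0, 3.0]])
b = np.array([1.0, 2.0])

# v: one particular solution (the system is consistent, so lstsq solves it exactly).
v = np.linalg.lstsq(A, b, rcond=None)[0]
# x: an arbitrary element of the null space, a multiple of (-1/16, -13/8, 1).
x = 3.0 * np.array([-1/16, -13/8, 1.0])

# The translate v + x is again a solution of Ax = b.
print(np.allclose(A @ (v + x), b))  # True
```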

Left null space

The left null space of a matrix A consists of all vectors x such that x^T A = 0^T, where ^T denotes the transpose of a vector. The left null space of A is the same as the null space of A^T. It is the orthogonal complement to the column space of A, and is the cokernel of the associated linear transformation. The null space, the row space, the column space, and the left null space of A are the four fundamental subspaces associated to the matrix A.
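A small NumPy sketch illustrating the left null space as the null space of A^T (the 3 × 2 matrix B here is our own example, chosen so the left null space is nontrivial):

```python
import numpy as np

# A 3 x 2 matrix of rank 2, so its left null space has dimension 3 - 2 = 1.
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# Left null space of B = null space of B^T, read off from the SVD of B^T.
U, s, Vt = np.linalg.svd(B.T)
s_full = np.concatenate([s, np.zeros(Vt.shape[0] - len(s))])
left_null = Vt[s_full < 1e-12]  # rows whose singular value is (numerically) zero

for x in left_null:
    print(np.allclose(x @ B, 0))  # True for each basis vector found
```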

Null space of a transformation

If V and W are vector spaces, the null space (or kernel) of a linear transformation T : V → W is the set of all vectors in V that map to zero:

\ker(T) = \left\{\textbf{v}\in V : T(\textbf{v}) = \textbf{0} \right\}\text{.}

If we represent the linear transformation by a matrix, then the kernel of the transformation is precisely the null space of the matrix.

Numerical computation of null space

Algorithms based on row or column reduction (that is, Gaussian elimination), as presented in introductory linear algebra textbooks and in the preceding sections of this article, are not suitable for practical computation of the null space: in the presence of the rounding errors that are inevitable in all but textbook examples on integers, the computation may greatly amplify those errors and give completely wrong results. Software implementations should instead follow contemporary numerical analysis sources for an algorithm like the one below, which does not amplify rounding errors unnecessarily.

A state-of-the-art approach is based on the singular value decomposition (SVD), which can also be easily programmed using standard libraries such as LAPACK. The SVD of a matrix A computes unitary matrices U and V and a rectangular diagonal matrix S of the same size as A with nonnegative diagonal entries, such that

 \mathbf{U}\mathbf{S}\mathbf{V}^T = \mathbf{A}.

Denote the columns of V by

\textbf{v}_1, \ldots, \textbf{v}_n\text{,}

the diagonal entries of S by

s_1, \ldots, s_{\min(m,n)}\text{,}

and put

s_i = 0 \;\text{ for }\; i > \min(m,n)\text{.}

(The numbers si are called the singular values of A.) Then the columns vi of V such that the corresponding  s_i\,=\,0 form an orthonormal basis of the null space of A. This can be seen as follows. Since \mathbf{V}^T\textbf{v}_i = \textbf{e}_i, we have \mathbf{A}\textbf{v}_i = \mathbf{U}\mathbf{S}\mathbf{V}^T\textbf{v}_i = \mathbf{U}\mathbf{S}\textbf{e}_i = \textbf{0} whenever s_i = 0, because the i-th column of S then vanishes; so each such vi lies in the null space. Conversely, if Ax = 0, write x = Vy with y = V^T x; then USy = 0, hence Sy = 0 because U is unitary, so y_i = 0 whenever s_i ≠ 0. Thus x is a linear combination of exactly those columns vi with s_i = 0, and since the columns of V are orthonormal, these columns form an orthonormal basis of the null space.
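The SVD-based procedure can be sketched in Python with NumPy (LAPACK under the hood); the function name `null_space` and the default tolerance below are illustrative choices of ours, not prescribed by the text:

```python
import numpy as np

def null_space(A, rtol=None):
    """Orthonormal basis of Null(A) from the SVD: keep the columns v_i of V
    whose singular value falls below a small tolerance."""
    U, s, Vt = np.linalg.svd(A)
    n = A.shape[1]
    if rtol is None:
        rtol = max(A.shape) * np.finfo(A.dtype).eps  # illustrative default
    tol = rtol * (s[0] if s.size else 0.0)
    # Pad s: rows of Vt beyond min(m, n) have singular value 0 by convention.
    s_full = np.concatenate([s, np.zeros(n - len(s))])
    return Vt[s_full <= tol].T  # columns span the null space

A = np.array([[ 2.0, 3.0, 5.0],
              [-4.0, 2.0, 3.0]])
N = null_space(A)
print(N.shape)                # (3, 1): the null space is one-dimensional
print(np.allclose(A @ N, 0))  # True
```

On the earlier example this returns a single unit vector proportional to (-1/16, -13/8, 1).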

In a numerical computation, the singular values si are taken to be zero when they are less than some small tolerance. For example, the tolerance can be taken to be

\max(m,n)\,\varepsilon\,\max_i s_i\text{,}

where ε is the machine epsilon of the computer, that is, the smallest number such that in the floating-point arithmetic of the computer, 1\,+\,\varepsilon>1. For the IEEE 754 64-bit floating-point format,  \varepsilon\,\approx\, 2.2\cdot 10^{-16}.

Computation of the SVD of a matrix generally costs about the same as several matrix-matrix multiplications with matrices of the same size when a state-of-the-art implementation (accurate up to rounding precision) is used, such as in LAPACK. This is true even though, in theory, the SVD cannot be computed by a finite number of operations, so an iterative method with stopping tolerances based on rounding precision must be employed. The cost of the SVD approach is several times higher than computing the null space by reduction, but it should be acceptable whenever reliability is important. It is also possible to compute the null space by the QR decomposition, with both the numerical stability and the cost between those of the SVD and the reduction approaches. The computation of a null space basis using the QR decomposition is explained in more detail below.

Let A be an m × n matrix with m < n. Using the QR factorization of \mathbf{A}^T with column pivoting, we can find a factorization

\mathbf{A}^T \, \mathbf{P} = \mathbf{Q} \, \mathbf{R} = [\mathbf{Q}_1 \; \mathbf{Q}_2] \, \mathbf{R},

where P is a permutation matrix, Q is n × n, and R is n × m. The matrix \mathbf{Q}_1 is n × m and consists of the first m columns of Q; the matrix \mathbf{Q}_2 is n × (n − m) and consists of the last n − m columns of Q. Since \mathbf{A} \, \mathbf{Q}_2 = 0 and Q has orthonormal columns, the columns of \mathbf{Q}_2 span the null space of A whenever A has full row rank m.
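This QR-based construction can be sketched with NumPy (NumPy's `qr` does not pivot columns, so P is the identity here; the example matrix is the 2 × 3 matrix used earlier, which has full row rank):

```python
import numpy as np

# The 2 x 3 example matrix (full row rank, m < n); np.linalg.qr has no column
# pivoting, so the permutation P in the text is the identity in this sketch.
A = np.array([[ 2.0, 3.0, 5.0],
              [-4.0, 2.0, 3.0]])
m, n = A.shape

Q, R = np.linalg.qr(A.T, mode="complete")  # Q is n x n, R is n x m
Q2 = Q[:, m:]                              # last n - m columns of Q

print(np.allclose(A @ Q2, 0))  # True: the columns of Q2 lie in Null(A)
```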

Notes


  1. ^ Linear algebra, as discussed in this article, is a very well-established mathematical discipline for which there are many sources. Almost all of the material in this article can be found in Lay 2005, Meyer 2001, and Strang 2005.
  2. ^ This equation uses set-builder notation.


References

  • Axler, Sheldon Jay (1997), Linear Algebra Done Right (2nd ed.), Springer-Verlag, ISBN 0387982590 
  • Lay, David C. (August 22, 2005), Linear Algebra and Its Applications (3rd ed.), Addison Wesley, ISBN 978-0321287137 
  • Meyer, Carl D. (February 15, 2001), Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), ISBN 978-0898714548
  • Poole, David (2006), Linear Algebra: A Modern Introduction (2nd ed.), Brooks/Cole, ISBN 0-534-99845-3 
  • Anton, Howard (2005), Elementary Linear Algebra (Applications Version) (9th ed.), Wiley International 
  • Leon, Steven J. (2006), Linear Algebra With Applications (7th ed.), Pearson Prentice Hall 


Wikimedia Foundation. 2010.
