\( \newcommand{\nsp}[1]{{\mathcal N}\left(#1\right)} \newcommand{\col}[1]{{\mathcal C}\left(#1\right)} \newcommand{\row}[1]{{\mathcal R}\left(#1\right)} \newcommand{\transpose}[1]{#1^t} \newcommand{\vect}[1]{{\mathbf #1}} \newcommand{\twopartcol}[2]{\left[\begin{array}{c}#1\\\hline#2\end{array}\right]} \newcommand{\twopartrow}[2]{\left[\begin{array}{c|c}#1&#2\end{array}\right]} \)


Bringing a matrix to reduced row-echelon form via row operations is one of the most important computational processes taught in an introductory linear algebra course. For example, if we augment a square nonsingular matrix with an identity matrix of the same size and row-reduce the resulting \(n\times 2n\) matrix, we obtain the inverse in the final \(n\) columns: \[\left[A\vert I_n\right]\xrightarrow{\text{RREF}}\left[I_n\vert A^{-1}\right].\] What happens if \(A\) is singular, or perhaps not even square?
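This familiar computation can be sketched in SymPy, which row-reduces exactly over the rationals. The matrix \(A\) here is hypothetical example data, not taken from the text:

```python
from sympy import Matrix, eye

# A small nonsingular matrix (hypothetical example data).
A = Matrix([[2, 1],
            [5, 3]])

# Augment with the identity and row-reduce: [A | I] -> [I | A^{-1}].
M = A.row_join(eye(2))
R, _ = M.rref()

A_inv = R[:, 2:]          # the final n columns hold the inverse
assert A_inv == A.inv()   # agrees with SymPy's built-in inverse
```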

We depict this process for an arbitrary \(m\times n\) matrix \(A,\) \[M = [A\vert I_m] \xrightarrow{\text{RREF}} N = [B\vert J] = \left[\begin{array}{c|c}C&K\\\hline 0&L\end{array}\right].\] The columns are partitioned into the first \(n\) columns, followed by the last \(m\) columns. The rows are partitioned so that \(C\) has a leading one in each row. We refer to \(N\) as the extended echelon form of \(A\). While it is transparent that \(C\) contains information about \(A\), it is less obvious that \(L\) also contains significant information about \(A\).
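The partition above can be carried out mechanically. A minimal SymPy sketch, using a hypothetical singular \(3\times 4\) matrix so that \(L\) is nonempty:

```python
from sympy import Matrix, eye

# A rank-2 matrix: row 3 is the sum of rows 1 and 2 (hypothetical data).
A = Matrix([[1, 2, 0, 1],
            [2, 4, 1, 4],
            [3, 6, 1, 5]])
m, n = A.shape

# M = [A | I_m], row-reduced to N = [B | J].
N, _ = A.row_join(eye(m)).rref()
B, J = N[:, :n], N[:, n:]

# Split the rows: C holds the nonzero rows of B, L the matching rows of J.
r = B.rank()              # number of rows of C (each has a leading one)
C, K = B[:r, :], J[:r, :]
L = J[r:, :]
```

Here \(C\) is the reduced row-echelon form of \(A\) with its zero rows removed, and \(L\) is a \(1\times 3\) matrix recording the dependence among the rows of \(A\).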

Given a matrix \(A\), four associated subspaces are of special interest: the column space \(\col{A}\), the (right) null space \(\nsp{A}=\{\vect{x}\,\vert\,A\vect{x}=\vect{0}\}\), the row space \(\row{A}\) and the left nullspace \(\nsp{\transpose{A}}\). These are Strang's four subspaces, whose interesting properties and important relationships are described in the “Fundamental Theorem of Linear Algebra” (see [2.4] and [2.5]). Bases for all four of these subspaces can be obtained easily from \(C\) and \(L\). Additionally, we obtain a canonical description of the row operations that bring a matrix to reduced row-echelon form, and the result that row rank and column rank are equal. We only know of one other textbook, [2.3], besides our own [2.1], that describes this procedure. (See also, Lay's paper [2.2].) So, one purpose of this note is to make this approach better known.

Several informal observations about extended echelon form are key. First, as we perform the row operations necessary to bring \(M\) to reduced row-echelon form, the conversion of \(I_m\) to \(J\) records all of these row operations, since \(J\) is the product of the elementary matrices associated with the row operations. Indeed, \(B=JA\) (see Lemma 1.1). Second, \(J\) is nonsingular, which we can see from its row-equivalence with \(I_m\), or by recognizing it as a product of elementary matrices, each of which is nonsingular. Third, the entries of each row of \(L\) are the scalars in a relation of linear dependence on the rows of \(A\), so each row of \(L\) is an element of \(\nsp{\transpose{A}}\). We will see that the rows of \(L\) form a basis for \(\nsp{\transpose{A}}\).
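These three observations can be checked directly. A sketch in SymPy, again with a hypothetical rank-2 example matrix:

```python
from sympy import Matrix, eye, zeros

# Hypothetical rank-2 matrix: row 3 = row 1 + row 2.
A = Matrix([[1, 2, 0, 1],
            [2, 4, 1, 4],
            [3, 6, 1, 5]])
m, n = A.shape

N, _ = A.row_join(eye(m)).rref()
B, J = N[:, :n], N[:, n:]
r = B.rank()
L = J[r:, :]

assert B == J * A                 # J records the row operations: B = JA
assert J.det() != 0               # J is nonsingular
assert L * A == zeros(m - r, n)   # each row of L lies in the left null space
```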

For our last observation, note that there can be many different sequences of row operations that bring a matrix to reduced row-echelon form. If \(A\) does not have rank \(m\), then part of this variety is due to the many ways row operations can produce zero rows. However, because the reduced row-echelon form of a matrix is unique, we conclude that \(J\) is unique. Thus \(J\) can be interpreted as a canonical transformation of the rows of \(A\) to reduced row-echelon form.

Informally, the equivalence simply says that if we solve the system \(A\vect{x}=\vect{y}\) for \(\vect{x}\) by augmenting the coefficient matrix \(A\) with the vector \(\vect{y}\) and row-reducing, then we obtain a matrix in reduced row-echelon form representing an equivalent system having \(B\) as coefficient matrix and \(J\vect{y}\) as the vector of constants.
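This informal statement can be illustrated concretely: for a consistent right-hand side \(\vect{y}\), row-reducing \([A\vert\vect{y}]\) produces exactly \([B\vert J\vect{y}]\). A sketch, with a hypothetical example matrix and a \(\vect{y}\) chosen to be consistent:

```python
from sympy import Matrix, eye

# Hypothetical rank-2 matrix: row 3 = row 1 + row 2.
A = Matrix([[1, 2, 0, 1],
            [2, 4, 1, 4],
            [3, 6, 1, 5]])
m, n = A.shape

N, _ = A.row_join(eye(m)).rref()
B, J = N[:, :n], N[:, n:]

# A consistent right-hand side: y = A x0 for a chosen x0.
x0 = Matrix([1, 0, 1, 1])
y = A * x0

# Row-reducing [A | y] yields the equivalent system [B | J y].
R, _ = A.row_join(y).rref()
assert R == B.row_join(J * y)
```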

Because \(A\) and \(B\) are row-equivalent, and because \(B\) and \(C\) differ only in the zero rows of \(B\), it should be apparent that \(\nsp{A}=\nsp{C}\) and \(\row{A}=\row{C}\). While the individual rows of \(L\) are easily explained as elements of the left null space of \(A\), they together form a basis for the left null space. Less obvious to a student is that the null space of \(L\) is the column space of \(A\)!
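Both set equalities can be tested numerically. A SymPy sketch with a hypothetical example matrix; span equality is checked by membership in one direction plus a dimension count:

```python
from sympy import Matrix, eye, zeros

# Hypothetical rank-2 matrix: row 3 = row 1 + row 2.
A = Matrix([[1, 2, 0, 1],
            [2, 4, 1, 4],
            [3, 6, 1, 5]])
m, n = A.shape

N, _ = A.row_join(eye(m)).rref()
B, J = N[:, :n], N[:, n:]
r = B.rank()
C, L = B[:r, :], J[r:, :]

# N(A) = N(C): every null space basis vector of A is killed by C, and
# the dimensions agree.
for v in A.nullspace():
    assert C * v == zeros(r, 1)
assert len(A.nullspace()) == len(C.nullspace())

# C(A) = N(L): L annihilates every column of A, and the dimensions agree.
assert L * A == zeros(m - r, n)
assert len(A.columnspace()) == len(L.nullspace())
```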

We now establish these two results about \(L\) carefully. Our second purpose is to give proofs that establish these set equalities without ever exploiting the properties of the subspaces as vector spaces, in contrast to the arguments on dimension used in [2.2]. This gives us a fundamental result about the dimensions of these subspaces as a corollary. The equivalence of Lemma 1.1 is our primary tool in the proofs of the next two lemmas.

Our next lemma can be obtained by taking orthogonal complements of the subspaces in the conclusion of Lemma 1.2. Instead, we provide a direct proof.

So far, we have only employed definitions and matrix operations in the proofs of these results. Now consider the vector space structure of these subspaces, specifically their dimensions. We choose to define the rank of a matrix, \(r\), as the dimension of the row space. From this definition, it follows that the matrix \(C\) must have \(r\) rows, and consequently \(L\) has \(m-r\) rows. Recall that if a matrix with \(n\) columns has reduced row-echelon form with \(r\) pivot columns, then there is a natural basis of the null space with \(n-r\) vectors. Notice that \(L\) is in reduced row-echelon form with no zero rows. So the dimension of \(\nsp{L}\) is \(m-(m-r)=r\), and by Lemma 1.2, the dimension of \(\col{A}\) is \(r\). So we have an argument based only on definitions and matrix operations that says the two types of rank, the row rank and the column rank, are equal. In a similar vein, the dimension of \(\row{L}\) is \(m-r\), and so by Lemma 1.3 the dimension of \(\nsp{\transpose{A}}\) is \(m-r\).
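The dimension count in this paragraph can be verified on an example. A sketch in SymPy, with the same kind of hypothetical rank-2 matrix used above (\(m=3\), \(r=2\)):

```python
from sympy import Matrix, eye

# Hypothetical rank-2 matrix: row 3 = row 1 + row 2.
A = Matrix([[1, 2, 0, 1],
            [2, 4, 1, 4],
            [3, 6, 1, 5]])
m, n = A.shape

N, _ = A.row_join(eye(m)).rref()
B, J = N[:, :n], N[:, n:]
r = B.rank()              # row rank of A
L = J[r:, :]              # m - r rows, m columns, no zero rows

# dim N(L) = m - (m - r) = r ...
assert len(L.nullspace()) == r
# ... which equals the column rank of A (row rank = column rank).
assert len(A.columnspace()) == r
# And dim R(L) = m - r = dim N(A^t).
assert len(A.T.nullspace()) == m - r
```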

We summarize our results as a theorem and corollary.

So a student can easily obtain all four fundamental subspaces from extended echelon form as row spaces or null spaces of the matrices \(C\) and \(L\). Bases for these subspaces are easy to enumerate since both \(C\) and \(L\) are in reduced row-echelon form. From this theorem, we have the very important corollary about dimension.
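As a capstone, all four fundamental subspaces can be read off from \(C\) and \(L\) in a few lines. A SymPy sketch with a hypothetical example matrix:

```python
from sympy import Matrix, eye

# Hypothetical rank-2 matrix: row 3 = row 1 + row 2.
A = Matrix([[1, 2, 0, 1],
            [2, 4, 1, 4],
            [3, 6, 1, 5]])
m, n = A.shape

N, _ = A.row_join(eye(m)).rref()
B, J = N[:, :n], N[:, n:]
r = B.rank()
C, L = B[:r, :], J[r:, :]

row_space_basis  = [C[i, :] for i in range(r)]      # R(A) = R(C)
null_space_basis = C.nullspace()                    # N(A) = N(C)
col_space_basis  = L.nullspace()                    # C(A) = N(L)
left_null_basis  = [L[i, :] for i in range(m - r)]  # N(A^t) = R(L)

# The dimension corollary: r + (n - r) = n and r + (m - r) = m.
assert len(row_space_basis) + len(null_space_basis) == n
assert len(col_space_basis) + len(left_null_basis) == m
```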