The four fundamental subspaces of a matrix $A$, namely the ranges and nullspaces of $A$ and of its transpose $A^T$, are at the heart of linear algebra. We often need to compute a basis for the range or the nullspace of a matrix, whether for theoretical or practical purposes.

There are many ways to compute a basis for the range or the nullspace of $A$ or $A^T$. Some are better suited to computation, either because they are robust against floating-point errors or because they are efficient. Others are better suited to theory, either because they produce orthogonal bases or because the bases they produce are unique. It is good to have an overall picture of the approaches so that we can choose the one best suited to a given purpose.

## I. Gaussian elimination and Gauss-Jordan reduction

These are typically the methods introduced early in a linear algebra book. Later we learn more ways to compute a basis, but Gaussian elimination and Gauss-Jordan reduction remain useful nevertheless.

### Gaussian Elimination

Gaussian elimination takes a matrix $A_{m\times n}$ and applies a sequence of Type I, Type II, and Type III elementary row operations to reduce it to an upper triangular matrix $U$. This sequence of operations is encoded in a non-singular matrix $P$, and the decomposition takes the form
$$PA = U.$$
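As a quick sanity check of this decomposition, the sketch below recovers such a $P$ numerically from SciPy's LU factorization, which returns $A = P_m L U$ with $P_m$ a permutation matrix and $L$ unit lower triangular; rearranging gives $P = L^{-1} P_m^T$ so that $PA = U$. The example matrix is arbitrary, chosen only for illustration.

```python
import numpy as np
from scipy.linalg import lu

# An arbitrary example matrix (any m x n matrix would do).
A = np.array([[2.0, 4.0, 2.0],
              [1.0, 3.0, 2.0],
              [3.0, 5.0, 1.0]])

# scipy.linalg.lu factors A = Pm @ L @ U, with Pm a permutation
# matrix and L unit lower triangular.
Pm, L, U = lu(A)

# Rearranging: P = inv(L) @ Pm.T is non-singular and satisfies P @ A = U.
P = np.linalg.inv(L) @ Pm.T

assert np.allclose(P @ A, U)          # PA = U holds
assert np.allclose(np.triu(U), U)     # U is upper triangular
```

Here $P$ bundles together all the row exchanges and elimination steps; it is non-singular because it is a product of elementary matrices, each of which is invertible.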
Let us partition the matrix $P$ as
$$P = \begin{pmatrix} P_1 \\ P_2 \end{pmatrix},$$