# Does it really take 10¹⁴¹ years to compute the determinant of a 100×100 matrix?

Well, it depends on how you compute it. If you compute it directly from the definition of the determinant, it can indeed take more than 10¹⁴¹ years, as we’ll see.${}^\dagger$

Let’s recall the definition first. The determinant of a matrix $\mathbf A$ is $$\text{det}(\mathbf A) = \sum\limits_p \sigma(p) a_{1p_1} a_{2p_2} \dots a_{np_n},$$ where the sum is taken over the $n!$ permutations $p=(p_1,p_2,\dots,p_n)$ of the numbers $1,2,\dots,n$. The total number of multiplications in this definition is $n!\,(n-1)$.

Based on my quick estimations from this link, a modern CPU can perform approximately $6\times 10^{12}$ multiplications per second (which, frankly, sounds insane). Yet that is peanuts compared to what it takes to compute the determinant of even a moderately sized matrix from the definition above. For a $100\times 100$ matrix, the total number of multiplications is approximately $9\times 10^{159}$. If you do the math, it turns out that computing the determinant of a $100\times 100$ matrix by direct application of the definition **takes more than … 10¹⁴¹ years**! If that number didn’t impress you, let me tell you that it’s **well beyond the total number of atoms on earth**.
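As a quick sanity check on the multiplication count, here is a back-of-the-envelope calculation in Python (the formula $n!\,(n-1)$ comes straight from the definition above):

```python
import math

n = 100
# n!(n-1) multiplications when expanding the determinant by its definition
mults = math.factorial(n) * (n - 1)
print(f"multiplications: {mults:.1e}")  # ≈ 9.2e+159
```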

Hopefully I’ve got your attention. Now at least two questions should come to mind:

- How can the determinant of a $100\times 100$ matrix then be computed in **millisecond(s)**? (which is the case)
- What is the point of even knowing this definition, since it’s clearly not used in actual computations?
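To see the millisecond claim in practice, here is a quick timing sketch with NumPy (whose `np.linalg.det` calls an LAPACK LU factorization under the hood); the exact time depends on your hardware:

```python
import time
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))  # a random 100×100 matrix

t0 = time.perf_counter()
d = np.linalg.det(A)
elapsed = time.perf_counter() - t0
print(f"det = {d:.3e}, computed in {elapsed * 1e3:.3f} ms")
```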

The answer to the first question is that we can compute the determinant of the matrix through its QR or LU decomposition, which is computationally very efficient (at least for a $100 \times 100$ matrix). Specifically, we make use of the properties of determinants of upper/lower triangular or orthogonal matrices (showing this would be a simple exercise).
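As a sketch of the LU route: Gaussian elimination with partial pivoting reduces $\mathbf A$ to an upper-triangular matrix in $O(n^3)$ operations, and the determinant is then the product of the diagonal entries, with a sign flip for each row swap. A minimal NumPy implementation (illustrative, not optimized):

```python
import numpy as np

def det_via_lu(A):
    """Determinant via Gaussian elimination with partial pivoting, O(n^3)."""
    U = A.astype(float).copy()
    n = U.shape[0]
    sign = 1.0
    for k in range(n):
        # pivot: bring the largest entry in column k onto the diagonal
        p = k + np.argmax(np.abs(U[k:, k]))
        if p != k:
            U[[k, p]] = U[[p, k]]
            sign = -sign              # each row swap flips the sign
        if U[k, k] == 0.0:
            return 0.0                # singular matrix
        # eliminate the entries below the pivot
        U[k+1:, k:] -= (U[k+1:, k:k+1] / U[k, k]) * U[k:k+1, k:]
    return sign * np.prod(np.diag(U))

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
print(np.allclose(det_via_lu(A), np.linalg.det(A)))
```

Roughly $n^3$ multiplications instead of $n!\,(n-1)$ — for $n=100$, about a million operations rather than $10^{159}$.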

And the point of knowing the definition is that it remains useful for theoretical purposes. This stems from the fact that the determinant is a continuous function of the matrix entries, which allows us to show that the determinant and the *inverse of a matrix* vary continuously with the entries (p. 480 of Meyer), and which, among other things, leads to a closed form for the derivative of a determinant (see p. 471 of Meyer).
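For reference, a closed form of this kind is commonly known as Jacobi’s formula (whether this matches Meyer’s exact presentation is an assumption on my part): for a differentiable, invertible matrix-valued function $\mathbf A(t)$, $$\frac{d}{dt}\,\text{det}(\mathbf A(t)) = \text{det}(\mathbf A(t))\,\text{tr}\!\left(\mathbf A(t)^{-1}\,\frac{d\mathbf A(t)}{dt}\right).$$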

${}^\dagger$ This article is inspired by and derived from Exercise 6.1.21 in Carl D. Meyer’s book.