Linear Algebra

Magical generalizations thanks to Eigenvalues

Imagine taking a ubiquitous scalar function $f(x)$, like $\exp(x)$ or $\sin (x)$, generalizing it somehow to a matrix function $f(\mathbf A)$, and expecting some of the properties of the scalar function to hold for the matrix function as well. Sounds like a magic trick, right? But generalizations of this kind are possible, and showing some of them is precisely the point of this article! This article also strongly supports the statement that eigenvalues are like the chromosomes or genes of a matrix (a quasi-quote from Carl D. Meyer, p. 543), as these magical generalizations require nothing but the eigenvalues…

Imagine, say, defining matrix functions $\cos \mathbf A$ and $\sin \mathbf A$, and expecting one of the main properties of their scalar counterparts, namely $$\cos^2 x + \sin^2 x =1$$ to hold for the matrix functions as well: $$\cos^2 \mathbf A + \sin^2 \mathbf A =\mathbf I.$$

How on earth can you even hope to define functions $\cos \mathbf A$ and $\sin\mathbf A$ that would satisfy this property? Clearly, a naive entrywise extension of the kind $[\cos (\mathbf A)]_{ij} = \cos([\mathbf A]_{ij})$ wouldn’t work, as you can easily try for yourself.
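Here is a quick numerical sketch of that failure, assuming NumPy is available:

```python
# A quick check that the naive entrywise definition cos(A)_ij = cos(A_ij)
# does NOT satisfy cos^2 A + sin^2 A = I (where squaring is matrix multiplication).
import numpy as np

A = np.array([[1.0, 2.0],
              [0.5, 3.0]])

cos_naive = np.cos(A)   # entrywise cosine
sin_naive = np.sin(A)   # entrywise sine

lhs = cos_naive @ cos_naive + sin_naive @ sin_naive
print(np.allclose(lhs, np.eye(2)))  # False -- the naive extension fails
```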

It turns out that the key to achieving the desired generalization is … the eigenvalues. Suppose that we have a diagonalizable matrix $\mathbf A = \mathbf P \mathbf D \mathbf P^{-1}$, where $\mathbf D = \text{diag}(\lambda_1, \lambda_2, \dots, \lambda_n)$ holds the eigenvalues of $\mathbf A$ and the columns of $\mathbf P$ are the corresponding eigenvectors. We can then take any scalar function $f(x)$ and define a straightforward matrix generalization $f(\mathbf A)$ as $$f(\mathbf A) = \mathbf P f(\mathbf D) \mathbf P^{-1} = \mathbf P \,\text{diag}(f(\lambda_1), f(\lambda_2), \dots, f(\lambda_n))\, \mathbf P^{-1},$$ where $f(\lambda_i)$ is the scalar function applied to the $i$th eigenvalue $\lambda_i$.
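A minimal sketch of this definition in NumPy (the helper name `matrix_function` is my own, not a library API), which also verifies the trigonometric identity from above:

```python
# f(A) = P f(D) P^{-1}, applied through the eigenvalues of a diagonalizable A.
import numpy as np

def matrix_function(f, A):
    """Apply a scalar function f to a diagonalizable matrix A via its eigenvalues."""
    eigvals, P = np.linalg.eig(A)             # A = P @ diag(eigvals) @ inv(P)
    return P @ np.diag(f(eigvals)) @ np.linalg.inv(P)

A = np.array([[1.0, 2.0],
              [0.5, 3.0]])

cos_A = matrix_function(np.cos, A)
sin_A = matrix_function(np.sin, A)

# cos^2 A + sin^2 A is now the identity matrix (up to floating-point error).
print(np.allclose(cos_A @ cos_A + sin_A @ sin_A, np.eye(2)))  # True
```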

If you use the extension above on some well-known functions, you can see for yourself that the key properties of a good number of standard scalar functions hold for the matrix functions as well. One of the most insane examples is the infinite geometric series $$\sum\limits_{k=0}^\infty \mathbf A^k.$$ A well-known fact from algebra is that the scalar geometric series $\sum\limits_{k=0}^\infty x^k$ converges to $(1-x)^{-1}$ if $|x|<1$. How on earth would one expect this to hold for the matrix generalization as well? But it very much does, and under analogous assumptions! That is, if $\|\mathbf A\|<1$ (which in particular guarantees that every eigenvalue of $\mathbf A$ has absolute value less than 1), then $$\sum\limits_{k=0}^\infty \mathbf A^k = (\mathbf{I-A})^{-1}.$$ If you have a deeper understanding of math than I do, then none of these may come as a surprise to you… Still, this doesn’t mean that the generalization is not magical—it simply means that you are a magician yourself, so you can see the “trick” behind the curtain.
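You can also check this numerically with a sketch like the following (the matrix and the number of terms are arbitrary choices on my part):

```python
# Truncated matrix geometric (Neumann) series vs. the closed form (I - A)^{-1}.
import numpy as np

A = np.array([[0.2, 0.3],
              [0.1, 0.4]])          # eigenvalues 0.5 and 0.1, both well below 1

partial_sum = np.zeros_like(A)
term = np.eye(2)                     # A^0 = I
for _ in range(200):                 # truncate the infinite series
    partial_sum = partial_sum + term
    term = term @ A                  # next power of A

print(np.allclose(partial_sum, np.linalg.inv(np.eye(2) - A)))  # True
```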

If you didn’t find the generalizations above impressive, perhaps a few more will convince you of the strange power of this eigenvalue-based generalization:

  • $\exp \mathbf 0 = \mathbf I$
  • $\exp \mathbf A = \sum_{i=0}^{\infty} \frac{1}{i!} \mathbf A^i$
  • $\exp(\mathbf{A+B}) = \exp(\mathbf A)\exp(\mathbf B)$ (whenever $\mathbf{AB=BA}$; see the sketch after this list)
  • $(\sqrt{\mathbf A})^2 = \mathbf A$ (for nonnegative definite $\mathbf A$).
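As a sketch of the third property, here is a check with SciPy’s matrix exponential (`scipy.linalg.expm`), assuming SciPy is installed; the particular matrices are just illustrative:

```python
# exp(A + B) = exp(A) exp(B) holds when A and B commute, and generally fails otherwise.
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
B = 2.0 * A                          # B is a polynomial in A, so AB = BA

print(np.allclose(expm(A + B), expm(A) @ expm(B)))   # True: A and B commute

C = np.array([[0.0, 1.0],
              [0.0, 0.0]])           # A and C do not commute
print(np.allclose(expm(A + C), expm(A) @ expm(C)))   # False in general
```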

As you can see, the generalization above took nothing but the eigenvalues. This makes one understand better why people have spent so much time trying to understand these magical numbers.