  • Linear Algebra

    Since we already have SVD, do we need URV factorization?

    The SVD is a special case of the URV factorization, one with many additional nice properties. The latter decomposes a matrix $A_{m\times n}$ as $$A=URV^T,$$ where $U=(U_1|U_2)$ is an orthogonal matrix whose blocks $U_1$ and $U_2$ hold orthonormal bases for $R(A)$ and $N(A^T)$, respectively; and $V=(V_1|V_2)$ is another orthogonal matrix whose blocks $V_1$ and $V_2$ hold orthonormal bases for $R(A^T)$ and $N(A)$. Suppose that the rank of $A$ is $r$, in which case $U_1$ has $r$ columns…
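
    To see the four fundamental subspaces fall out of the factorization, one can assemble $U$ and $V$ from numerically computed orthonormal bases. A minimal numpy/scipy sketch (the rank-2 matrix is just a made-up example):

    ```python
    import numpy as np
    from scipy.linalg import orth, null_space

    # A made-up rank-2 example (row 2 = 2 * row 1).
    A = np.array([[1., 2., 3.],
                  [2., 4., 6.],
                  [1., 0., 1.]])
    r = np.linalg.matrix_rank(A)        # r = 2

    U1 = orth(A)          # orthonormal basis for R(A),   m x r
    U2 = null_space(A.T)  # orthonormal basis for N(A^T), m x (m-r)
    V1 = orth(A.T)        # orthonormal basis for R(A^T), n x r
    V2 = null_space(A)    # orthonormal basis for N(A),   n x (n-r)

    U = np.hstack([U1, U2])   # orthogonal, since N(A^T) = R(A)^perp
    V = np.hstack([V1, V2])   # orthogonal, since N(A) = R(A^T)^perp
    R = U.T @ A @ V           # hence A = U R V^T

    assert np.allclose(U @ R @ V.T, A)
    # U2^T A = 0 and A V2 = 0, so R is nonzero only in its leading r x r block:
    assert np.allclose(R[r:, :], 0) and np.allclose(R[:, r:], 0)
    ```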

  • Linear Algebra

    Drazin Inverse and what it means

    The Drazin inverse and the discussion around it (p. 400 of C. D. Meyer) made me truly grasp what a generalized inverse is, what connects linear operators to matrices, and the role of a change of basis. The Drazin inverse is a natural consequence of the core-nilpotent decomposition, according to which a matrix can be decomposed as $$A=Q\begin{pmatrix}C_{r\times r} & 0 \\ 0 & N\end{pmatrix}Q^{-1},$$ where $C$ is nonsingular and $N$ is nilpotent. Here, $r$ is not the rank of…
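
    To make this concrete, one can build $A$ from a chosen invertible $Q$, a nonsingular $C$, and a nilpotent $N$, then verify that $A^D = Q\begin{pmatrix}C^{-1} & 0 \\ 0 & 0\end{pmatrix}Q^{-1}$ satisfies the defining Drazin properties. A numpy sketch; all matrices here are made-up examples:

    ```python
    import numpy as np

    def blk(X, Y):
        """Block-diagonal 4x4 matrix from two 2x2 blocks."""
        return np.block([[X, np.zeros((2, 2))],
                         [np.zeros((2, 2)), Y]])

    C = np.array([[2., 1.],
                  [0., 3.]])   # nonsingular core block
    N = np.array([[0., 1.],
                  [0., 0.]])   # nilpotent: N^2 = 0, so the index is k = 2
    Q = np.array([[1., 0., 1., 0.],
                  [0., 1., 0., 1.],
                  [0., 0., 1., 1.],
                  [1., 0., 0., 1.]])   # any invertible change of basis
    Qi = np.linalg.inv(Q)

    A  = Q @ blk(C, N) @ Qi
    AD = Q @ blk(np.linalg.inv(C), np.zeros((2, 2))) @ Qi   # Drazin inverse

    k = 2   # index of A
    assert np.allclose(AD @ A @ AD, AD)    # A^D A A^D = A^D
    assert np.allclose(A @ AD, AD @ A)     # A A^D = A^D A
    assert np.allclose(np.linalg.matrix_power(A, k + 1) @ AD,
                       np.linalg.matrix_power(A, k))   # A^{k+1} A^D = A^k
    ```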

  • Computer Vision

    Taylor series of an image

    The Taylor series is the main reason that the field of computer vision exists*. This may seem like a bold statement, but think about it: classical computer vision problems like image alignment, optical flow, depth estimation (from stereo), and shape from motion were all first solved with Taylor series expansions. And this should not be very surprising. Most computer vision problems are just optimization problems, so it is only natural that the first computer vision researchers drew on the existing literature on optimization, which…
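
    Optical flow shows the idea most directly: linearizing brightness constancy with a first-order Taylor expansion, $I(x+u,\, y+v,\, t+1) \approx I(x, y, t) + I_x u + I_y v + I_t$, turns flow estimation into a small least-squares problem per pixel. A toy Lucas-Kanade-style sketch (not a production implementation):

    ```python
    import numpy as np

    def lucas_kanade_flow(I0, I1, x, y, win=7):
        """Estimate the flow (u, v) at pixel (x, y) from the Taylor
        linearization Ix*u + Iy*v + It ~ 0, solved in least squares
        over a (win x win) window."""
        Iy, Ix = np.gradient(I0)   # spatial derivatives (axis 0 = y)
        It = I1 - I0               # temporal derivative
        h = win // 2
        w = np.s_[y - h:y + h + 1, x - h:x + h + 1]
        M = np.stack([Ix[w].ravel(), Iy[w].ravel()], axis=1)
        b = -It[w].ravel()
        (u, v), *_ = np.linalg.lstsq(M, b, rcond=None)
        return u, v

    # Toy check: a smooth blob translated by (0.5, 0.3) pixels.
    yy, xx = np.mgrid[0:64, 0:64].astype(float)
    blob = lambda dx, dy: np.exp(-((xx - 32 - dx)**2 + (yy - 32 - dy)**2) / 50)
    print(lucas_kanade_flow(blob(0, 0), blob(0.5, 0.3), 32, 32))  # ~ (0.5, 0.3)
    ```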

  • Probability

    What’s the use of characteristic functions in probability?

    Many books and classes in probability mention mysterious entities called “characteristic functions”, with little or no motivation. For students like myself, concepts without proper motivation and excitement do not register in the brain; that’s why I almost completely forgot everything I had learned about characteristic functions from my first classes and books. The truth is that characteristic functions turn out to be incredibly useful in probability, for the following four reasons. 1) The characteristic function is the other side…
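
    One standard payoff, in the spirit of the reasons the post goes on to list: since $\varphi(t)=E\{e^{it\mathbf x}\}$, the moments drop out as derivatives, $E\{\mathbf x^n\}=(-i)^n\varphi^{(n)}(0)$. A small sympy sketch using the characteristic function of a zero-mean Gaussian, $\varphi(t)=e^{-\sigma^2 t^2/2}$:

    ```python
    import sympy as sp

    t, sigma = sp.symbols('t sigma', positive=True)
    phi = sp.exp(-sigma**2 * t**2 / 2)   # CF of a zero-mean Gaussian

    # E{x^n} = (-i)^n * (d^n/dt^n) phi(t) evaluated at t = 0
    def moment(n):
        return sp.simplify((-sp.I)**n * sp.diff(phi, t, n).subs(t, 0))

    print([moment(n) for n in range(1, 7)])
    # expected: [0, sigma**2, 0, 3*sigma**4, 0, 15*sigma**6]
    ```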

  • Probability

    An elegant trick to compute moments of Gaussian RV

    Assume we have a zero-mean Gaussian RV $\mathbf x$ with variance $\sigma^2$, and we want to compute its moments, i.e., $E\{\mathbf x^n\}$. The moments for odd $n$ are zero because the density function of our RV, $f(x)$, is an even function. But how do we compute the moments, i.e., $$E\{\mathbf x^n\} =\int\limits_{-\infty}^{\infty} x^n f(x)\, dx =\frac{1}{\sigma \sqrt{2\pi}}\int\limits_{-\infty}^{\infty} x^n e^{-x^2/2\sigma^2}\, dx,$$ for even $n$? Of course, this is not a very difficult integral (it can be computed through integration by parts, etc.), but…
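
    The trick itself is cut off above, but the closed form any method should reproduce for even $n$ is the standard $E\{\mathbf x^n\}=(n-1)!!\,\sigma^n$, which is easy to sanity-check numerically. A quick scipy sketch (the value of $\sigma$ is arbitrary):

    ```python
    import numpy as np
    from scipy.integrate import quad
    from scipy.special import factorial2

    sigma = 1.7
    f = lambda x: np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

    for n in (2, 4, 6, 8):
        numeric, _ = quad(lambda x: x**n * f(x), -np.inf, np.inf)
        closed = factorial2(n - 1) * sigma**n   # (n-1)!! * sigma^n
        print(n, numeric, closed)   # the two columns should agree
    ```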