Probability

What’s the use of characteristic functions in probability?

Many books and classes in probability mention some mysterious entities called “characteristic functions”, with little or no motivation. For students like myself, concepts without proper motivation and excitement do not register in the brain, which is why I almost completely forgot everything I had learned about characteristic functions from my first classes and books. The truth is that characteristic functions turn out to be incredibly useful in probability, for the following four reasons.

1) The characteristic function is the other side of the coin

A distribution is uniquely determined by its characteristic function, just as it is by its density function: they are two sides of the same coin. In fact, the characteristic function exists even for distributions whose density has no closed form (e.g. Lévy-stable distributions).
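
To make this concrete, here is a minimal sketch (in Python with NumPy; the variable names and the choice of a standard normal are mine) that estimates the characteristic function φ_X(t) = E[e^{itX}] directly from samples and compares it with the known closed form e^{-t²/2}:

```python
import numpy as np

# The characteristic function phi_X(t) = E[exp(i*t*X)] can be estimated
# directly from samples, even when the density is awkward to write down.
# Here we compare the empirical estimate with the closed form
# exp(-t**2 / 2) for a standard normal.
rng = np.random.default_rng(0)
samples = rng.standard_normal(100_000)

t_grid = np.linspace(-3, 3, 61)
empirical_cf = np.array([np.mean(np.exp(1j * t * samples)) for t in t_grid])
closed_form_cf = np.exp(-t_grid**2 / 2)

max_err = np.max(np.abs(empirical_cf - closed_form_cf))
print(f"max |empirical - closed form| = {max_err:.4f}")  # typically ~1e-2
```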

2) The characteristic function matters because moments matter

Moments are extremely useful quantities that succinctly summarize the shape of a distribution. Importantly, moments can be obtained from the moment generating function, which is a close relative of the characteristic function. Moments do more than just summarize a distribution. In real life we may be interested, for example, in finding a distribution that optimizes some criterion subject to additional constraints that involve moments. For example, we may want to find the maximum entropy distribution subject to some of its moments lying in a certain interval.
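
As a quick illustration of how moments fall out of the moment generating function (a sketch with SymPy; the choice of a standard normal is mine), recall that E[Xⁿ] = M⁽ⁿ⁾(0). Differentiating M(t) = e^{t²/2} recovers the first four moments 0, 1, 0, 3:

```python
import sympy as sp

# E[X**n] = n-th derivative of the MGF evaluated at 0.
# For a standard normal, M(t) = exp(t**2 / 2).
t = sp.symbols('t')
mgf = sp.exp(t**2 / 2)

for n in range(1, 5):
    moment = sp.diff(mgf, t, n).subs(t, 0)
    print(f"E[X^{n}] = {moment}")  # prints 0, 1, 0, 3
```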

Another closely related application of moments is Independent Component Analysis.