The EM algorithm was formally established by Arthur Dempster, Nan Laird, and Donald Rubin in their 1977 paper. The EM algorithm is useful in cases where we are analyzing a system with incomplete or missing data. The incomplete-data case occurs when we have a combination of data that we can observe and data that we cannot observe (i.e. "hidden data"). This means that we observe some data vector $x$, and we are unable to observe $z$, but we believe that $x$ was generated in conjunction with $z$.

It is important to note that our hypothesis is that although we only observe $x$, we believe that the underlying model contains $z$, which we do not observe. This means that we have that $p(x; \theta) = \sum_z p(x, z; \theta)$, where $p(x, z; \theta)$ is the pdf of the complete data $(x, z)$.

So, if we were using the MLE approach, we would want to maximize

$$\ell(\theta) = \log p(x; \theta) = \log \sum_z p(x, z; \theta).$$

In most instances, $\ell(\theta)$ will be difficult to maximize because we have a log of a sum. However, the EM algorithm provides a potential solution to this problem. Consider the following formulation:

$$\ell(\theta) = \log \sum_z q(z) \frac{p(x, z; \theta)}{q(z)} \geq \sum_z q(z) \log \frac{p(x, z; \theta)}{q(z)} = E_q\left[\log \frac{p(x, z; \theta)}{q(z)}\right],$$

where $q$ is some distribution function over $z$ and $E_q$ is the expectation under the probability measure $q$. The inequality follows from Jensen's inequality, since $\log$ is concave.

The fact that $\ell(\theta) \geq E_q\left[\log \frac{p(x, z; \theta)}{q(z)}\right]$ means that we have created a lower bound for $\ell(\theta)$. It is important to note that this lower bound holds for any choice of $q$, so we need to think carefully about our choice of $q$. In addition, we want the lower bound to be as tight as possible on $\ell(\theta)$. That is, if possible, we want to choose $q$ in such a way that $\ell(\theta) = E_q\left[\log \frac{p(x, z; \theta)}{q(z)}\right]$.
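A quick numerical check can make this bound concrete. The sketch below assumes a hypothetical two-component model (the mixing weights `pi` and means `mu` are illustrative, not from the text): it draws many random distributions $q$ over $z$ and verifies that the bound never exceeds the log-likelihood, with equality at the posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical joint model: z in {0, 1} with mixing weights pi,
# and x | z ~ Normal(mu[z], 1).  All values here are illustrative.
pi = np.array([0.4, 0.6])
mu = np.array([-1.0, 2.0])
x = 0.5  # a single observed data point

def normal_pdf(x, mean):
    return np.exp(-0.5 * (x - mean) ** 2) / np.sqrt(2 * np.pi)

joint = pi * normal_pdf(x, mu)   # p(x, z; theta) for z = 0, 1
log_lik = np.log(joint.sum())    # l(theta) = log sum_z p(x, z; theta)

# The lower bound E_q[log p(x,z;theta)/q(z)] holds for ANY distribution q.
for _ in range(100):
    q = rng.dirichlet([1.0, 1.0])  # a random distribution over z
    bound = np.sum(q * (np.log(joint) - np.log(q)))
    assert bound <= log_lik + 1e-12

# Equality is attained at the posterior q(z) = p(z | x; theta).
q_star = joint / joint.sum()
bound_star = np.sum(q_star * (np.log(joint) - np.log(q_star)))
print(np.isclose(bound_star, log_lik))  # True
```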

Given that $\log$ is strictly concave, one can show that Jensen's inequality holds with equality if and only if the expectation is taken with respect to a constant. That is, $\frac{p(x, z; \theta)}{q(z)} = c$ for some constant $c$ that does not depend on $z$. This means that $q(z) \propto p(x, z; \theta)$.

However, since $q$ is a proper distribution and its probabilities must sum to one, we have that $c = \sum_z p(x, z; \theta) = p(x; \theta)$, thus $q(z) = \frac{p(x, z; \theta)}{p(x; \theta)} = p(z \mid x; \theta)$. The process of finding $q$ is called the Expectation step. We are now going to focus on the Maximization step.
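In a concrete model the Expectation step is just Bayes' rule. As an illustration (assuming a hypothetical two-component Gaussian mixture with unit variances, not a model defined in the text), the posterior $q(z) = p(z \mid x; \theta)$ for each observation can be computed as:

```python
import numpy as np

# Hypothetical model: z in {0, 1} with weights pi, x | z ~ Normal(mu[z], 1).
pi = np.array([0.4, 0.6])
mu = np.array([-1.0, 2.0])
x = np.array([0.5, -0.3, 2.2])  # observed data points

def normal_pdf(x, mean):
    return np.exp(-0.5 * (x - mean) ** 2) / np.sqrt(2 * np.pi)

# E-step: q(z) = p(z | x; theta) = p(x, z; theta) / p(x; theta)
joint = pi[None, :] * normal_pdf(x[:, None], mu[None, :])  # p(x_i, z; theta)
q = joint / joint.sum(axis=1, keepdims=True)               # p(z | x_i; theta)

print(q.sum(axis=1))  # each row sums to one, as a proper distribution must
```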

The Maximization step involves finding a $\theta$ that maximizes the lower bound $E_q\left[\log \frac{p(x, z; \theta)}{q(z)}\right]$ at the $q$ chosen in the Expectation step. We then use this new value of $\theta$ as the starting point for the next Expectation step, and continue iterating between the Expectation and Maximization steps.

In summary, the EM algorithm is iterative and is implemented as follows:

Expectation step: Assume an initial value $\theta^{(t)}$ for the parameter. Find $q^{(t)}(z) = p(z \mid x; \theta^{(t)})$. This makes the lower bound tight at $\theta^{(t)}$.

Maximization step: Given the $q^{(t)}$ from the Expectation step, find $\theta^{(t+1)} = \operatorname{argmax}_\theta \, E_{q^{(t)}}\left[\log \frac{p(x, z; \theta)}{q^{(t)}(z)}\right]$. Use this estimated value of $\theta$ as the starting point for the next Expectation step.
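The two steps above can be sketched end to end. The example below is a minimal sketch, assuming a hypothetical two-component Gaussian mixture with known unit variances, so the Maximization step has a closed form (the responsibility-weighted means and weights); the data and model are illustrative, not from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data from a hypothetical two-component Gaussian mixture
# (unit variances, so only the means and weights are estimated).
true_mu = np.array([-2.0, 3.0])
z = rng.random(500) < 0.5
x = np.where(z, true_mu[1], true_mu[0]) + rng.standard_normal(500)

def normal_pdf(x, mean):
    return np.exp(-0.5 * (x - mean) ** 2) / np.sqrt(2 * np.pi)

pi, mu = np.array([0.5, 0.5]), np.array([-1.0, 1.0])  # initial values
for _ in range(100):
    # Expectation step: q(z_i) = p(z_i | x_i; theta)
    joint = pi[None, :] * normal_pdf(x[:, None], mu[None, :])
    q = joint / joint.sum(axis=1, keepdims=True)
    # Maximization step: closed-form argmax of the lower bound
    pi = q.mean(axis=0)
    mu = (q * x[:, None]).sum(axis=0) / q.sum(axis=0)

print(np.sort(mu))  # should land near the true means (-2, 3)
```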



One can also show that after each iteration, the value of $\theta$ that we obtain produces a value of $\ell(\theta)$ that is at least as large as the value of $\ell(\theta)$ at the previous iteration. This means that the procedure converges to at least a local optimum of the likelihood. Note, however, that this is a local guarantee: the procedure converges to a stationary point of the likelihood, which need not be the global MLE. The proofs of these claims can be found in Dempster, Laird, and Rubin (1977).

Disadvantages of the EM algorithm

The EM algorithm tends to find local optima, so one needs to try many different starting values to increase the chances that the global optimum is attained. In addition, convergence to the optimum can be slow, especially if the parameters we are trying to estimate are not separable.
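The multiple-starting-values strategy can be sketched as follows: run EM from several random initializations and keep the run with the highest final log-likelihood. The model, data, and helper `run_em` are all hypothetical (the same illustrative two-component, unit-variance mixture as above).

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2.0, 1.0, 250), rng.normal(3.0, 1.0, 250)])

def normal_pdf(x, mean):
    return np.exp(-0.5 * (x - mean) ** 2) / np.sqrt(2 * np.pi)

def run_em(x, mu0, n_iter=100):
    """EM for a two-component, unit-variance mixture from one starting point."""
    pi, mu = np.array([0.5, 0.5]), np.asarray(mu0, dtype=float)
    for _ in range(n_iter):
        joint = pi[None, :] * normal_pdf(x[:, None], mu[None, :])  # E-step
        q = joint / joint.sum(axis=1, keepdims=True)
        pi = q.mean(axis=0)                                        # M-step
        mu = (q * x[:, None]).sum(axis=0) / q.sum(axis=0)
    log_lik = np.log((pi * normal_pdf(x[:, None], mu)).sum(axis=1)).sum()
    return log_lik, pi, mu

# Several random starting values; keep the run with the highest log-likelihood.
starts = [rng.uniform(x.min(), x.max(), size=2) for _ in range(10)]
best = max((run_em(x, mu0) for mu0 in starts), key=lambda r: r[0])
print(np.sort(best[2]))  # estimated means of the best run
```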

In the next section, we will be looking at some numerical examples in order to see how one would implement the EM algorithm in practice.