What is Maximum Likelihood Estimation?


In machine learning, we often perform what is called parameter estimation: estimating the parameters of a model, such as the weights assigned to each feature of the input data.

For example, in a simple linear model we use the equation y = mx + c, where m and c are the parameters to be estimated. For different values of the parameters, we build different models that produce different predictions of the data.

Given different parameter values, we get different models. In the case of a linear model, each set of parameter values corresponds to a different line on the graph.
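As a minimal sketch of this idea (the parameter values and inputs here are made up for illustration), here is how two different choices of m and c give two different sets of predictions:

```python
import numpy as np

def linear_model(x, m, c):
    """Simple linear model: y = m * x + c."""
    return m * x + c

x = np.linspace(0, 5, 6)  # inputs 0, 1, 2, 3, 4, 5

# Two hypothetical parameter choices give two different lines,
# and therefore two different sets of predictions.
print(linear_model(x, m=2.0, c=1.0))
print(linear_model(x, m=0.5, c=3.0))
```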

Maximum likelihood estimation (MLE) is a technique for estimating these parameter values.

MLE Parameter Estimation


Whenever we create a model with certain parameters, the outputs of the model (its predictions) can themselves be described by a probability distribution.

What MLE does is try to make the distribution implied by the model as close as possible to the distribution of the observed data. Intuitively, this makes the model more accurate, as it becomes more representative of the actual data.

For example, suppose we are given a set of observed training data points.

We want to find out which of several candidate distributions has the highest probability of generating those points. Each candidate has different parameter values, and so each curve sits at a different position on the graph.

Just by visual inspection, we could pick out the curve with the correct parameters, the one most likely to have produced those data points. But of course a machine has no visual inspection, only maths.

Calculating the MLE


We want to calculate the total probability of observing all of the data, that is, the joint probability of all the data points.

For a single data point x, under an assumed Gaussian distribution with mean \mu and standard deviation \sigma, the probability density of observing that point is:

P(x; \mu, \sigma) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right)

For 3 data points x_1, x_2, x_3, assuming they are drawn independently, the joint probability is the product of the individual densities:

P(x_1, x_2, x_3; \mu, \sigma) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left( -\frac{(x_1 - \mu)^2}{2\sigma^2} \right) \times \frac{1}{\sigma \sqrt{2\pi}} \exp\left( -\frac{(x_2 - \mu)^2}{2\sigma^2} \right) \times \frac{1}{\sigma \sqrt{2\pi}} \exp\left( -\frac{(x_3 - \mu)^2}{2\sigma^2} \right)

This can be extended to n points:

P(x_1, \ldots, x_n; \mu, \sigma) = \prod_{i=1}^{n} \frac{1}{\sigma \sqrt{2\pi}} \exp\left( -\frac{(x_i - \mu)^2}{2\sigma^2} \right)
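As a quick sketch of this product (the data values and parameters below are made up for illustration), the joint likelihood can be computed in Python as:

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Density of a point under a Gaussian with mean mu and std sigma."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

# Hypothetical observed data points.
data = np.array([9.0, 9.5, 11.0])

# Joint likelihood: the product of the individual densities,
# assuming the points were drawn independently.
likelihood = np.prod(gaussian_pdf(data, mu=10.0, sigma=1.0))
print(likelihood)
```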

To calculate the MLE of the parameters, we need to find the parameter values that give the maximum value of this joint probability. In practice, we first take the natural log of the likelihood: the log turns the product into a sum, which is much easier to differentiate, and because the log is a monotonically increasing function, the log-likelihood reaches its maximum at the same parameter values as the likelihood itself. To find the maximum, we differentiate with respect to each parameter, set the derivatives to zero, and solve for the parameters.
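For the Gaussian case, setting the derivatives of the log-likelihood to zero gives closed-form solutions: the MLE of \mu is the sample mean, and the MLE of \sigma^2 is the mean squared deviation. Here is a minimal sketch (with made-up data) checking this numerically:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical observed data points.
data = np.array([9.0, 9.5, 11.0, 10.2, 9.8])

def neg_log_likelihood(params):
    """Negative Gaussian log-likelihood; minimizing it maximizes the likelihood."""
    mu, sigma = params
    if sigma <= 0:
        return np.inf  # sigma must be positive
    return -np.sum(-np.log(sigma * np.sqrt(2 * np.pi))
                   - (data - mu) ** 2 / (2 * sigma ** 2))

result = minimize(neg_log_likelihood, x0=[0.0, 1.0], method="Nelder-Mead")
print(result.x)                        # numerical MLE of (mu, sigma)
print(data.mean(), data.std(ddof=0))   # closed-form MLE for comparison
```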

Extending the MLE to the least squares method


When the distribution is assumed to be Gaussian, the process of finding the MLE turns out to be equivalent to the least squares method.

For least squares estimation, we want to find the line that minimizes the total squared distance between the data points and the regression line.

When the data distribution is assumed to be Gaussian, the probability of the data is highest when the data points lie close to the mean value.

Since the Gaussian density is symmetric around the mean and falls off with the squared distance from it, maximizing the likelihood is equivalent to minimizing the squared distance between the data points and the mean, which is exactly what least squares does.
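A minimal sketch (with synthetic data, made up for illustration) showing that minimizing the squared residuals and maximizing the Gaussian likelihood recover the same line:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic data from a hypothetical "true" line y = 2x + 1 plus Gaussian noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, size=x.size)

def sum_squared_residuals(params):
    """Least squares objective: total squared distance to the line."""
    m, c = params
    return np.sum((y - (m * x + c)) ** 2)

def neg_gaussian_log_likelihood(params):
    """Negative log-likelihood with Gaussian errors (sigma fixed at 1);
    identical to the squared residuals up to constants and scale."""
    m, c = params
    residuals = y - (m * x + c)
    return np.sum(residuals ** 2 / 2.0 + np.log(np.sqrt(2.0 * np.pi)))

ls = minimize(sum_squared_residuals, x0=[0.0, 0.0])
mle = minimize(neg_gaussian_log_likelihood, x0=[0.0, 0.0])
print(ls.x)   # least squares estimate of (m, c)
print(mle.x)  # Gaussian MLE of (m, c): effectively the same line
```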
