

Using a Genetic Algorithm to Minimize an OLS Regression in R

A genetic algorithm allows you to optimize parameters by using an algorithm that mimics biological evolution. It runs through several generations of candidate values, trying to find the values that minimize [or maximize, depending on the problem] its fitness or evaluation function, which is simply any function that returns a value computed from the parameters the algorithm is optimizing.

There is a lot of literature on how genetic algorithms work, and I would recommend reading it if you want the technical details. Genetic algorithms are typically demonstrated with the knapsack problem [Numb3rs Scene Youtube], where you maximize survival points by finding the combination of survival items, weighing under a specified amount, that fits in a knapsack. This R-bloggers site has a good demonstration of that example and code. However, I find it more interesting to use a genetic algorithm on something more familiar to analytics and statistics, and that’s the ordinary least squares (OLS) regression.

OLS minimizes the sum of squared errors (SSE) to find the best-fit line, or regression line, for a data set. The solution is derived using matrix calculus, and it is computationally efficient, easy to understand, and ubiquitous.

Since OLS is essentially an algorithm that uses calculus to minimize the SSE, we can use a genetic algorithm to accomplish the same task. R’s GA (genetic algorithm) package allows you to use either binary or real numbers as parameters for the fitness function. Traditionally, genetic algorithms use binary parameters [see the knapsack problem], but for this problem real numbers are much more useful, since the regression coefficients are real numbers.

The GA creates a vector of real numbers between -100 and 100, then uses that vector to evaluate a regression equation in the fitness function. The fitness function returns the SSE. Since the GA seeks to maximize the fitness function, the SSE is given a negative sign, so the lowest SSE corresponds to the maximum fitness value. The GA has a population of 500 vectors, which are evaluated with the fitness function; the best solutions are generally kept, child vectors are created from them, and the process is repeated over 500 generations. The result is an SSE that is very close to the OLS solution, and parameter estimates that match up as well.

I’ve included two different linear models. The first has only two variables, both of which play significant roles in the OLS regression; the second includes every variable, not all of which are significant. You can run the code a few times and see how the GA solutions differ. The first model’s GA estimates will be a lot closer to the OLS estimates than the second model’s.

All of this is rather academic for well-behaved linear regression problems, since genetic algorithms are computationally expensive and take forever relative to the standard OLS procedure.

The full annotated R code follows:
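A minimal sketch of that setup, assuming the GA package and the built-in mtcars data set in place of the original data (the data set, the mpg ~ wt + hp model, and the omission of the two-model comparison are illustrative assumptions, not the post’s actual script):

# Sketch, not the original script: minimize the SSE of a linear model with
# a genetic algorithm. mtcars and the mpg ~ wt + hp model are assumed here.
library(GA)

X <- cbind(1, mtcars$wt, mtcars$hp)        # design matrix: intercept column plus covariates
y <- mtcars$mpg

# Fitness function: negative SSE, so maximizing fitness minimizes the SSE
sse_fitness <- function(b) {
  e <- y - X %*% b
  -sum(e^2)
}

ga_fit <- ga(type = "real-valued",         # coefficients are real numbers
             fitness = sse_fitness,
             lower = rep(-100, ncol(X)),   # search range for each coefficient
             upper = rep(100, ncol(X)),
             popSize = 500,                # 500 candidate coefficient vectors
             maxiter = 500)                # 500 generations

ga_fit@solution                            # GA coefficient estimates
coef(lm(mpg ~ wt + hp, data = mtcars))     # OLS estimates for comparison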


OLS Derivation

Ordinary Least Squares (OLS) is a great, low-computing-power way to obtain estimates for the coefficients in a linear regression model. I wanted to detail the derivation of the solution, since it can be confusing for anyone not familiar with matrix calculus.

First, the initial matrix equation is set up below. X is a matrix of the data’s p covariates plus the regression constant. [The constant appears as a column of ones if you were to look at the data in the X matrix.] Y is the column matrix of the target variable, β is the column matrix of unknown coefficients, and e is a column matrix of the residuals.

\mathbf{Y = X} \boldsymbol{\beta} + \boldsymbol{e}
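Written out for n observations and p covariates, the matrices are:

\mathbf{X} = \begin{bmatrix} 1 & x_{11} & \cdots & x_{1p} \\ 1 & x_{21} & \cdots & x_{2p} \\ \vdots & \vdots & & \vdots \\ 1 & x_{n1} & \cdots & x_{np} \end{bmatrix}, \quad \mathbf{Y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}, \quad \boldsymbol{\beta} = \begin{bmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_p \end{bmatrix}, \quad \boldsymbol{e} = \begin{bmatrix} e_1 \\ e_2 \\ \vdots \\ e_n \end{bmatrix}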

Before manipulating the equation, it is important to note that you are not solving for X or Y, but for β, and you will do this by minimizing the sum of squared errors (SSE) of the residuals. So the equation can be rewritten by moving the error term to the left side of the equation.

\boldsymbol{e}  =  \mathbf{Y - X} \boldsymbol{\beta}

The SSE can be written as the product of the transposed residual column vector and its original column vector. [This is actually how you would obtain the sum of squares for any vector.]

\mathrm{SSE} = \boldsymbol{e}'\boldsymbol{e}
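For a residual vector with n elements, this product is exactly the familiar sum of squares:

\boldsymbol{e}'\boldsymbol{e} = \begin{bmatrix} e_1 & e_2 & \cdots & e_n \end{bmatrix} \begin{bmatrix} e_1 \\ e_2 \\ \vdots \\ e_n \end{bmatrix} = e_1^2 + e_2^2 + \cdots + e_n^2 = \sum_{i=1}^{n} e_i^2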

Since you transpose and multiply one side of the equation, you have to follow suit on the other side, yielding

\boldsymbol{e'e}  =  (\mathbf{Y - X} \boldsymbol{\beta})'(\mathbf{Y - X} \boldsymbol{\beta})

The transpose operator can be distributed throughout the quantity on the right side, so the right side can be multiplied out.

\boldsymbol{e'e}  =  (\mathbf{Y' - \boldsymbol{\beta}'X'})(\mathbf{Y - X} \boldsymbol{\beta})

You can multiply out the right side; and because Y’Xβ is a scalar (a 1 × 1 matrix), it equals its own transpose, Y’Xβ = (Y’Xβ)’ = β’X’Y, which lets you combine the two middle terms and simplify.

\boldsymbol{e'e}  =  (\mathbf{Y'Y - Y'X\boldsymbol{\beta} - \boldsymbol{\beta}'X'Y} + \boldsymbol{\beta'\mathbf{X'X}\beta})
\boldsymbol{e'e}  =  (\mathbf{Y'Y - \boldsymbol{\beta}'X'Y - \boldsymbol{\beta}'X'Y} + \boldsymbol{\beta'\mathbf{X'X}\beta})
\boldsymbol{e'e}  =  (\mathbf{Y'Y - 2\boldsymbol{\beta}'X'Y} + \boldsymbol{\beta'\mathbf{X'X}\beta})

To minimize the SSE, you have to take the partial derivative with respect to β. Any term without a β in it goes to zero. Using the scalar-transpose step from before, you can see how the middle term yields -2X’Y using differentiation rules from Calc 1. The last term is a bit trickier, but it differentiates to +2X’Xβ (the rule is shown after the result below).

\frac{\delta\boldsymbol{e'e}}{\delta\boldsymbol{\beta}}   =  \frac{\delta\mathbf{Y'Y}}{\delta\boldsymbol{\beta}} - \frac{\delta\, 2\boldsymbol{\beta}'\mathbf{X'Y}}{\delta\boldsymbol{\beta}} + \frac{\delta\boldsymbol{\beta'\mathbf{X'X}\beta}}{\delta\boldsymbol{\beta}}

\frac{\delta\boldsymbol{e'e}}{\delta\boldsymbol{\beta}}   =  - 2\mathbf{X'Y} + 2\boldsymbol{\mathbf{X'X}\beta}
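The last term uses the quadratic-form rule from matrix calculus; because X’X is symmetric, the factor (A + A’) collapses to 2X’X:

\frac{\delta(\boldsymbol{\beta}'\mathbf{A}\boldsymbol{\beta})}{\delta\boldsymbol{\beta}} = (\mathbf{A} + \mathbf{A'})\boldsymbol{\beta}, \qquad \mathbf{A} = \mathbf{X'X} = (\mathbf{X'X})' \implies \frac{\delta(\boldsymbol{\beta'\mathbf{X'X}\beta})}{\delta\boldsymbol{\beta}} = 2\mathbf{X'X}\boldsymbol{\beta}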

To find the minimum (it will never be a maximum if you have all the requirements for OLS fulfilled), the derivative of the SSE is set to zero.

0  =  - 2\mathbf{X'Y} + 2\mathbf{X'X}\boldsymbol{\beta}

0  =  \mathbf{- X'Y} + \mathbf{X'X}\boldsymbol{\beta}

Using some basic linear algebra and multiplying both sides by the inverse of (X’X)…

(\mathbf{X'X})^{-1}\mathbf{X'X}\boldsymbol{\beta} = (\mathbf{X'X})^{-1}\mathbf{X'Y}

…yields the solution for β

\boldsymbol{\beta} = (\mathbf{X'X})^{-1}\mathbf{X'Y}
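As a quick numerical check (a sketch assuming the built-in mtcars data set, which is not part of the original post), the closed-form solution can be computed directly in R and compared with lm():

# Sketch only: verify the closed-form OLS solution on an assumed data set
X <- cbind(1, mtcars$wt, mtcars$hp)            # design matrix with an intercept column
y <- mtcars$mpg

beta_hat <- solve(t(X) %*% X) %*% t(X) %*% y   # (X'X)^{-1} X'Y
beta_hat
coef(lm(mpg ~ wt + hp, data = mtcars))         # lm() gives the same estimates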

References:

The Mathematical Derivation of Least Squares. Psychology 8815. Retrieved from: http://isites.harvard.edu/fs/docs/icb.topic515975.files/OLSDerivation.pdf

Chatterjee, S., & Hadi, A. (2012). Regression analysis by example. Hoboken, NJ: John Wiley & Sons, Inc.