# Eigenvalues and eigenvectors: a full information guide [LA4]

In this series of posts, I'll be writing about some basics of Linear Algebra (LA) so we can learn together. The books I'll be using as material are:

Cabral, M. & Goldfeld, P. (2012). Curso de Álgebra Linear: Fundamentos e Aplicações. Third edition.

Anton, H. & Rorres, C. (2012). Álgebra Linear com Aplicações. Tenth edition.

You should be familiar with the topics covered in the previous posts of this series before jumping into eigenvalues and eigenvectors.

The topics we’ll see in this post are:

• The determinant of a matrix
• Eigenvalues and Eigenvectors
• Diagonalization
• Calculating eigenvalues and eigenvectors in R

# 1. The determinant of a matrix

We use determinants a lot in Linear Algebra, especially in the calculation of eigenvalues and eigenvectors. What is a determinant then?

The determinant is a function that associates each square real matrix A with a number, denoted det(A). It generalizes the notions of area (in ℝ²) and volume (in ℝ³).

## Properties of determinants

a) Scalar multiplication: if we multiply one column of a matrix by a scalar k, the determinant is multiplied by k.

b) Additivity in a column: if one column of a matrix is a sum of two vectors, the determinant is the sum of the determinants of the two matrices obtained by using each vector in that column (the other columns kept fixed).

c) If the columns (or rows) are linearly dependent, the determinant is zero.

d) The determinant of the identity matrix is one.

e) If we swap two columns (or rows), the sign of the determinant changes.

f) If A is a square matrix, then det(Aᵀ) = det(A).

g) If A and B are n × n square matrices, then det(AB) = det(A) det(B).
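As a quick numerical sketch of properties (e), (f), and (g), we can check them with base R's `det()` on two arbitrary 2 × 2 matrices (the matrices below are just illustrative, not from the book):

```r
# Numerical check of determinant properties (e), (f), (g) with base R's det()
A <- matrix(c(2, 1, 0, 3), nrow = 2)  # arbitrary 2x2 matrix (column-major fill)
B <- matrix(c(1, 4, 2, 5), nrow = 2)  # another arbitrary 2x2 matrix

# (e) swapping two columns flips the sign of the determinant
det(A[, c(2, 1)]) == -det(A)                 # TRUE

# (f) the determinant of the transpose equals the determinant
det(t(A)) == det(A)                          # TRUE

# (g) det(AB) = det(A) * det(B), up to floating-point rounding
all.equal(det(A %*% B), det(A) * det(B))     # TRUE
```

`all.equal()` is used for property (g) because `det()` is computed via an LU factorization, so the product may differ from `det(A %*% B)` by a tiny rounding error.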

## a) 2x2 matrices:

Consider a 2 × 2 matrix

A = | a  b |
    | c  d |

We define the determinant (det) of this matrix as:

det(A) = ad − bc

So the determinant of a 2 × 2 matrix is easy to calculate: we multiply the elements of the main diagonal and subtract the product of the elements of the other diagonal.
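As a quick sanity check of the ad − bc formula, we can compare it against base R's `det()` on an arbitrary 2 × 2 matrix (the numbers below are just illustrative):

```r
# ad - bc versus base R's det() for a 2x2 matrix
A <- matrix(c(2, 1, 7, 5), nrow = 2)     # column-major fill: A = [2 7; 1 5]

A[1, 1] * A[2, 2] - A[1, 2] * A[2, 1]    # 3 (main diagonal minus other diagonal)
det(A)                                   # 3
```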

## b) 3x3 matrices:

Consider a 3 × 3 matrix

A = | a11  a12  a13 |
    | a21  a22  a23 |
    | a31  a32  a33 |

Using the Laplace (cofactor) expansion along the first row, we have

det(A) = a11 (a22 a33 − a23 a32) − a12 (a21 a33 − a23 a31) + a13 (a21 a32 − a22 a31)

where each 2 × 2 determinant (a minor of A) is multiplied by the corresponding entry and a sign, forming a cofactor.
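The cofactor expansion can be sketched in R and compared with the built-in `det()` (the 3 × 3 matrix below is just an illustrative example):

```r
# Laplace (cofactor) expansion along the first row of a 3x3 matrix,
# compared against base R's det()
A <- matrix(c(1, 4, 7, 2, 5, 8, 3, 6, 10), nrow = 3)  # column-major fill

# minor(M, i, j): determinant of M with row i and column j deleted
minor <- function(M, i, j) det(M[-i, -j, drop = FALSE])

laplace_det <- A[1, 1] * minor(A, 1, 1) -
               A[1, 2] * minor(A, 1, 2) +
               A[1, 3] * minor(A, 1, 3)

all.equal(laplace_det, det(A))  # TRUE: the expansion agrees with det()
```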

## c) n x n matrices:

We calculate the determinant of an n × n matrix A by cofactor expansion along any row i:

det(A) = Σⱼ (−1)^(i+j) aᵢⱼ Mᵢⱼ,  j = 1, …, n

where Mᵢⱼ is the determinant of the (n−1) × (n−1) matrix obtained by deleting row i and column j of A. To know more, read the Wikipedia article on determinants.

# 2. Eigenvalues and Eigenvectors

If A is an n × n matrix, then a non-null vector v in ℝⁿ is called an eigenvector of A (or of the linear transformation T) if Av is a scalar multiple of v, that is, Av = λv for some scalar λ. The scalar λ is called an eigenvalue of A (or of T), and we say that v is an eigenvector associated with λ.

Example: The vector

v = | 1 |
    | 2 |

is an eigenvector of

A = | 3   0 |
    | 8  −1 |

associated with the eigenvalue λ = 3, because

Av = | 3   0 | | 1 | = | 3 | = 3v
     | 8  −1 | | 2 |   | 6 |

Geometrically, multiplication by A stretched the vector v by a factor of 3.

To calculate eigenvalues and eigenvectors, notice that Av = λv can be rewritten as Av = λIdv, or as (λId − A)v = 0, where Id is the identity matrix.

For λ to be an eigenvalue of A, this equation must have some non-null solution v. This occurs if, and only if, the matrix of coefficients λId − A has a null determinant. Therefore, we have the following result:

If A is an n × n matrix, then λ is an eigenvalue of A if, and only if, λ satisfies the equation det(λId − A) = 0. This is called the characteristic equation of A.

Example: Following the last example, using the equation det(λId − A) = 0 with the matrix A, we have

det(λId − A) = | λ − 3     0   | = (λ − 3)(λ + 1) = 0
               |  −8     λ + 1 |

The solutions of this equation show that the eigenvalues of A are λ = 3 and λ = −1.

The polynomial p(λ) = (λ − 3)(λ + 1) is called the characteristic polynomial of A. In general, the characteristic polynomial of an n × n matrix A is a polynomial of degree n of the form

p(λ) = det(λId − A) = λⁿ + c₁λⁿ⁻¹ + … + cₙ

Since a polynomial of degree n has at most n distinct roots, the characteristic equation

λⁿ + c₁λⁿ⁻¹ + … + cₙ = 0

has at most n distinct solutions, so A has at most n distinct eigenvalues.
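We can evaluate the characteristic polynomial numerically in R with `det()` and `diag()`. Below is a minimal sketch using an illustrative 2 × 2 matrix chosen so that its eigenvalues are 3 and −1:

```r
# Eigenvalues are the roots of the characteristic polynomial det(lambda*I - A) = 0.
# Illustrative 2x2 matrix with eigenvalues 3 and -1 (column-major fill):
A <- matrix(c(3, 8, 0, -1), nrow = 2)

# Characteristic polynomial of M evaluated at a given lambda
char_poly <- function(lambda, M) det(lambda * diag(nrow(M)) - M)

char_poly(3, A)   # 0: lambda = 3 is an eigenvalue
char_poly(-1, A)  # 0: lambda = -1 is an eigenvalue
eigen(A)$values   # 3 -1
```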

Example: Calculating the eigenvalues of an upper-triangular matrix.

Given, for example, a 4 × 4 upper-triangular matrix A,

A = | a11  a12  a13  a14 |
    |  0   a22  a23  a24 |
    |  0    0   a33  a34 |
    |  0    0    0   a44 |

we have

det(λId − A) = (λ − a11)(λ − a22)(λ − a33)(λ − a44)

because the determinant of a triangular matrix is the product of its diagonal entries. Therefore, the characteristic equation is

(λ − a11)(λ − a22)(λ − a33)(λ − a44) = 0

and the eigenvalues are

λ = a11, λ = a22, λ = a33, λ = a44

Notice that the eigenvalues are exactly the entries of the main diagonal of the upper-triangular matrix A. That gives us the following theorem:

If A is an n × n triangular matrix (upper triangular, lower triangular, or diagonal), then the eigenvalues of A are the entries of the main diagonal of A.

Example: Given the matrix

A = |  1/2    0     0  |
    |  −1    2/3    0  |
    |   5    −8   −1/4 |

we can see by the last theorem that the eigenvalues of this matrix are λ = 1/2, λ = 2/3, and λ = −1/4.

# 3. Diagonalization

In this section, we will find a basis of ℝⁿ formed by eigenvectors of a given n × n matrix A.

We say that a square matrix A is diagonalizable if there is an invertible matrix P such that

P⁻¹AP = D

with D diagonal.

If A is diagonalizable, then AP = PD. Note that if

P = [ v1 | v2 | … | vn ]

(the columns of P are the vectors v1, …, vn) and D is the diagonal matrix

D = diag(λ1, λ2, …, λn)

then

AP = [ Av1 | Av2 | … | Avn ]  and  PD = [ λ1v1 | λ2v2 | … | λnvn ]

Thus Avᵢ = λᵢvᵢ for each i. Therefore, the columns of P are eigenvectors of A, with the corresponding eigenvalues on the diagonal of D.

Eigenvectors associated with different eigenvalues are linearly independent. That is, if

Avₖ = λₖvₖ,  k = 1, …, p

with the λₖ all distinct, then {v1, …, vp} is linearly independent.

If an n × n matrix A has n distinct eigenvalues, then A is diagonalizable.
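A minimal sketch of diagonalization in R, using an illustrative 2 × 2 matrix (not from the book) with two distinct eigenvalues:

```r
# Diagonalization sketch: if A has distinct eigenvalues, the matrix P whose
# columns are eigenvectors of A satisfies solve(P) %*% A %*% P = D (diagonal)
A <- matrix(c(3, 8, 0, -1), nrow = 2)  # illustrative matrix, eigenvalues 3 and -1

e <- eigen(A)
P <- e$vectors                # columns are (normalized) eigenvectors of A
D <- solve(P) %*% A %*% P     # should equal diag(e$values), up to rounding

round(D, 10)                  # diagonal matrix with the eigenvalues on the diagonal
```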

# 4. Calculating eigenvalues and eigenvectors in R

Calculating eigenvalues and eigenvectors in R is an easy thing to do. First, we have to create a square matrix. You could use a correlation/covariance matrix, for example, but we will use the matrix from the previous example instead:

We will create the matrix:

```r
A <- matrix(c(1/2, 0, 0, -1, 2/3, 0, 5, -8, -1/4), nrow = 3, byrow = TRUE)
print(A)
```

We named our matrix `A`; note the `byrow = TRUE` argument, so the values fill the matrix row by row. Printing it gives:

```
##      [,1]       [,2]  [,3]
## [1,]  0.5  0.0000000  0.00
## [2,] -1.0  0.6666667  0.00
## [3,]  5.0 -8.0000000 -0.25
```

R already has a function called `eigen()`, which takes a matrix and returns both its eigenvalues and its eigenvectors.

We will store the results in a variable called `e`:

```r
e <- eigen(A)
```

To get the eigenvalues, type `e$values`. The result is the vector of eigenvalues of A (2/3, 1/2, and −1/4; `eigen()` returns them sorted in decreasing order of absolute value).

And to get the eigenvectors, type `e$vectors`. The result is a matrix whose i-th column is a normalized eigenvector associated with the i-th entry of `e$values`.

# Contact

Stay in touch via LinkedIn or via email: rafavsbastos@gmail.com

Master's Candidate and Bachelor in Psychology. Works as a researcher and consultant.
