
The matrix equation $A\mathbf{x} = \mathbf{b}$ involves a matrix $A$ acting on a vector $\mathbf{x}$ to produce another vector $\mathbf{b}$. In general, the way $A$ acts on $\mathbf{x}$ is complicated, but there are certain cases where the action maps $\mathbf{x}$ to the same vector, multiplied by a scalar factor $\lambda$, so that $A\mathbf{x} = \lambda\mathbf{x}$. Such non-zero vectors $\mathbf{x}$ are called eigenvectors, and the scalars $\lambda$ are called eigenvalues.

Eigenvalues and eigenvectors have immense applications in the physical sciences, especially quantum mechanics, among other fields.

  1. The determinant of a matrix satisfies $\det(A) = 0$ when $A$ is non-invertible. When this occurs, the null space of $A$ becomes non-trivial; in other words, there are non-zero vectors $\mathbf{x}$ that satisfy the homogeneous equation $A\mathbf{x} = \mathbf{0}$. [1]
  2. As mentioned in the introduction, the action of $A$ on $\mathbf{x}$ is simple, and the result only differs from $\mathbf{x}$ by a multiplicative constant $\lambda$ called the eigenvalue. Vectors $\mathbf{x}$ satisfying $A\mathbf{x} = \lambda\mathbf{x}$ are the eigenvectors associated with that eigenvalue. [2]
    • We can set the equation to zero and obtain the homogeneous equation $(A - \lambda I)\mathbf{x} = \mathbf{0}$. Here, $I$ is the identity matrix.
  3. In order for $(A - \lambda I)\mathbf{x} = \mathbf{0}$ to have non-trivial solutions, the null space of $A - \lambda I$ must be non-trivial as well.
    • The only way this can happen is if $\det(A - \lambda I) = 0$. This is the characteristic equation.
  4. The characteristic equation $\det(A - \lambda I) = 0$ yields a polynomial of degree $n$ in $\lambda$ for $n \times n$ matrices.
    • Consider the matrix $A = \begin{pmatrix} 1 & 4 \\ 3 & 2 \end{pmatrix}$. Then $\det(A - \lambda I) = \det\begin{pmatrix} 1-\lambda & 4 \\ 3 & 2-\lambda \end{pmatrix} = (1-\lambda)(2-\lambda) - 12 = 0$.
    • Notice that the polynomial seems backwards: the quantities in parentheses should be variable minus number, rather than the other way around. This is easy to deal with, because multiplying both factors by $-1$ reverses the order inside each parenthesis without changing the product, so $(1-\lambda)(2-\lambda) - 12 = (\lambda-1)(\lambda-2) - 12 = \lambda^{2} - 3\lambda - 10 = 0$.
  5. Solve the characteristic polynomial. This is, in general, a difficult step in finding eigenvalues, as there exists no general solution formula for quintic or higher-degree polynomial equations. However, we are dealing with a matrix of dimension 2, so the quadratic is easily factored: $\lambda^{2} - 3\lambda - 10 = (\lambda - 5)(\lambda + 2) = 0$, giving the eigenvalues $\lambda = 5$ and $\lambda = -2$.
  6. Let's substitute $\lambda = 5$ first. [3] This gives $A - 5I = \begin{pmatrix} -4 & 4 \\ 3 & -3 \end{pmatrix}$.
    • The rows of the resulting matrix are obviously linearly dependent, as they must be since $\det(A - 5I) = 0$. We are on the right track here.
  7. Row-reduce the resulting matrix. With larger matrices, it may not be so obvious that the rows are linearly dependent, and so we must row-reduce. Here, however, we can immediately perform the row operation $R_{2} \to R_{2} + \tfrac{3}{4}R_{1}$ to obtain a row of 0's: $\begin{pmatrix} -4 & 4 \\ 0 & 0 \end{pmatrix}$. [4]
    • The matrix above says that $-4x + 4y = 0$. Simplify to $x = y$, and reparameterize with $y = t$, as it is a free variable: $\begin{pmatrix} x \\ y \end{pmatrix} = t\begin{pmatrix} 1 \\ 1 \end{pmatrix}$.
  8. The previous step has led us to the basis of the null space of $A - 5I$; in other words, the eigenspace of $A$ with eigenvalue 5: $E_{5} = \operatorname{Span}\left\{ \begin{pmatrix} 1 \\ 1 \end{pmatrix} \right\}$.
    • Performing steps 6 to 8 with $\lambda = -2$ results in the following eigenvector associated with eigenvalue $-2$: $\begin{pmatrix} -4/3 \\ 1 \end{pmatrix}$.
    • These are the eigenvectors associated with their respective eigenvalues. For a basis of the entire eigenspace of $A$ (an eigenbasis), we write $\left\{ \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \begin{pmatrix} -4/3 \\ 1 \end{pmatrix} \right\}$. A quick numerical check of these results follows these steps.
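If you want to double-check a result like the one above, a short NumPy sketch can do it numerically. This is only a sketch for checking your work, not part of the hand method; it assumes the 2 x 2 example matrix worked through in the steps, and the rescaling simply mirrors setting the free variable to 1 by hand.

import numpy as np

# The 2 x 2 example matrix from the steps above.
A = np.array([[1, 4],
              [3, 2]], dtype=float)

# np.linalg.eig returns the eigenvalues and a matrix whose columns are
# corresponding eigenvectors (scaled to unit length).
eigenvalues, eigenvectors = np.linalg.eig(A)

for value, vector in zip(eigenvalues, eigenvectors.T):
    # Rescale so the last entry is 1, mirroring the hand calculation
    # where the free variable y was set to 1.
    scaled = vector / vector[-1]
    print(f"eigenvalue {value:+.0f}, eigenvector {np.round(scaled, 3)}")
    # Check the defining equation A v = lambda v.
    assert np.allclose(A @ vector, value * vector)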

Community Q&A

  • Question
    Why do we replace y with 1 and not any other number while finding eigenvectors?
    Community Answer
    For simplicity. Eigenvectors are only defined up to a multiplicative constant, so any non-zero value of the free variable gives a valid eigenvector; setting it to 1 is usually the simplest choice.
  • Question
    How do you find the eigenvectors of a 3x3 matrix?
    Alphabet
    Community Answer
    First, find the solutions x of det(A - xI) = 0, where I is the identity matrix and x is a variable. The solutions x are your eigenvalues. Let's say that a, b, c are your eigenvalues. Now solve the systems [A - aI | 0], [A - bI | 0], [A - cI | 0]. The bases of the solution sets of these systems are the eigenvectors. (A short numerical example is sketched after this Q&A section.)
  • Question
    Is an eigenspace the same as an eigenvector?
    StrangelyQuiet
    Top Answerer
    No. An eigenvector is a single vector, whereas an eigenspace is the set of all eigenvectors that share a given eigenvalue, together with the zero vector; it forms a subspace.
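Building on the answer about 3 x 3 matrices above, here is a purely illustrative numerical sketch. The matrix below is not from the article; np.linalg.eig simply carries out the same procedure numerically, returning one eigenvector per eigenvalue.

import numpy as np

# An illustrative 3 x 3 matrix (not from the article); substitute your own.
A = np.array([[2, 1, 0],
              [1, 3, 1],
              [0, 1, 2]], dtype=float)

# The eigenvalues are the roots of det(A - xI) = 0.  np.linalg.eig finds them
# numerically, and the columns of the second return value are eigenvectors,
# i.e. basis vectors of the solution sets of [A - aI | 0], [A - bI | 0], ...
values, vectors = np.linalg.eig(A)

for value, vector in zip(values, vectors.T):
    print(f"eigenvalue {value:.3f}, eigenvector {np.round(vector, 3)}")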

      Tips

      • The determinant of a triangular matrix is easy to find: it is simply the product of the diagonal elements. For such a matrix the eigenvalues are just the diagonal entries themselves, so they are immediately found, and finding eigenvectors for these matrices then becomes much easier. [5] (A short numerical illustration appears after these tips.)
        • Beware, however, that row-reducing to row-echelon form and obtaining a triangular matrix does not give you the eigenvalues, as row-reduction changes the eigenvalues of the matrix in general.
      • We can diagonalize a matrix $A$ through a similarity transformation $A = PDP^{-1}$, where $P$ is an invertible change-of-basis matrix and $D$ is a matrix with only diagonal elements. An $n \times n$ matrix is diagonalizable exactly when it has $n$ linearly independent eigenvectors; having $n$ distinct eigenvalues is enough to guarantee this.
        • In our case, $P = \begin{pmatrix} 1 & -4/3 \\ 1 & 1 \end{pmatrix}$ and $D = \begin{pmatrix} 5 & 0 \\ 0 & -2 \end{pmatrix}$.
        • There are a few things of note here. First, the diagonal elements of $D$ are the eigenvalues that we found. Second, the columns of $P$ are the corresponding eigenvectors, i.e. the eigenbasis of $A$. Third, $D$ is similar to $A$ in the sense that they have the same determinant, eigenvalues, and trace.
        • When diagonalizing, the eigenvectors in $P$ must line up with their corresponding eigenvalues in $D$; in other words, you must be consistent with the ordering. In the example above, you cannot switch the columns of $P$ without also switching the positions of the diagonal elements in $D$. (A numerical check of this factorization also appears after these tips.)
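To see the triangular-matrix shortcut from the first tip in action, here is a small NumPy illustration; the matrix is an arbitrary example, not taken from the article.

import numpy as np

# An illustrative lower-triangular matrix (not from the article).
T = np.array([[2, 0, 0],
              [5, 3, 0],
              [1, 4, 7]], dtype=float)

# Both the determinant and the eigenvalues can be read off the diagonal.
print(np.linalg.det(T))               # approximately 42 (= 2 * 3 * 7)
print(np.sort(np.linalg.eig(T)[0]))   # [2. 3. 7.]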
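And to check the diagonalization from the second tip numerically, a minimal sketch might look like this; it uses the same 2 x 2 example matrix, eigenvalues, and eigenvectors as in the steps above.

import numpy as np

A = np.array([[1, 4],
              [3, 2]], dtype=float)

# P holds the eigenvectors as columns, in the same order as the eigenvalues
# on the diagonal of D.
P = np.array([[1.0, -4/3],
              [1.0,  1.0]])
D = np.diag([5.0, -2.0])

# A should equal P D P^(-1) if the diagonalization is correct.
print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True

# Swapping the columns of P without also swapping the diagonal of D breaks it.
P_swapped = P[:, ::-1]
print(np.allclose(A, P_swapped @ D @ np.linalg.inv(P_swapped)))   # False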

      About This Article

      Thanks to all authors for creating a page that has been read 126,672 times.
