Section 2.4 Cholesky Decomposition
An LU decomposition of a matrix is obtained by repeated row operations and produces a result with a certain symmetry: the “L” matrix is lower triangular and the “U” matrix is upper triangular, so \(\matrixentry{L}{ij}=0=\conjugate{\matrixentry{U}{ji}}\) for \(i\lt j\text{,}\) which should be reminiscent of the definition of the adjoint of a matrix (Definition AM). If we begin with a positive definite matrix, then we can do better. Since a positive definite matrix is Hermitian, we can perform row operations together with the identical column operations, maintaining the symmetry of the entries. We arrive at a decomposition of the form \(\adjoint{U}U\text{,}\) where \(U\) is upper triangular.
Subsection 2.4.1 The Cholesky Decomposition
Recall that a Hermitian matrix \(A\) is positive definite if \(\innerproduct{\vect{x}}{A\vect{x}}\gt 0\) for all \(\vect{x}\neq\zerovector\text{.}\) This is just the variant of positive semi-definiteness (Definition 1.8.1) where we replace the inequality by a strict inequality.
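As a small concrete illustration (not drawn from the text), the real symmetric matrix below is positive definite, since the quadratic form can be rewritten as a sum of squares that vanishes only at \(\vect{x}=\zerovector\text{:}\)
\begin{align*}
A&=\begin{bmatrix}2&1\\1&2\end{bmatrix}\text{,}&
\innerproduct{\vect{x}}{A\vect{x}}&=2x_1^2+2x_1x_2+2x_2^2=x_1^2+x_2^2+\left(x_1+x_2\right)^2\gt 0\text{.}
\end{align*}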
Theorem 2.4.1. Cholesky Decomposition.
Suppose that \(A\) is a positive definite matrix. Then there exists a unique upper triangular matrix, \(U\text{,}\) with positive diagonal entries such that \(A=\adjoint{U}U\text{.}\)
Proof.
Checkpoint 2.4.2.
Prove that the upper triangular matrix \(U\) in the conclusion of Theorem 2.4.1 is unique.
Look at the technique used to establish uniqueness for the \(LU\) decomposition. How does the requirement that the diagonal entries of \(U\) be positive play a role in the proof?
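As a quick numerical illustration of the theorem (a sketch assuming NumPy and SciPy are available; \(\mathtt{scipy.linalg.cholesky}\) returns the upper triangular factor by default), we can build a positive definite matrix and confirm both conclusions:

```python
import numpy as np
from scipy.linalg import cholesky

# B^H B is positive semi-definite for any B; adding the identity
# shifts every eigenvalue up by 1, making the sum positive definite.
rng = np.random.default_rng(7)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B.conj().T @ B + np.eye(4)

U = cholesky(A)  # upper triangular factor, A = U^H U
print(np.allclose(U.conj().T @ U, A))  # True
print(np.all(np.diag(U).real > 0))     # True: positive diagonal entries
```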
Subsection 2.4.2 Computing a Cholesky Decomposition
To create an LU decomposition, we used row operations to “zero out” entries below the diagonal of a matrix \(A\text{.}\) If we represent these row operations as elementary matrices, we can accumulate their net effect in a lower triangular matrix that operates on the left of \(A\text{.}\) For a Cholesky decomposition, we do the same thing, but also perform the analogous column operation, which can be represented as the adjoint of the same elementary matrix, applied from the right.
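To see why the symmetry survives, note that if \(A\) is Hermitian and \(E\) is any elementary matrix, then the result of the paired operations is again Hermitian:
\begin{equation*}
\adjoint{\left(EA\adjoint{E}\right)}=\adjoint{\left(\adjoint{E}\right)}\adjoint{A}\adjoint{E}=EA\adjoint{E}\text{.}
\end{equation*}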
Here is the same idea, but expressed as intermediate steps leading to the eventual Cholesky decomposition. Recall that Hermitian matrices necessarily have real diagonal entries. Suppose that \(A\) is an \(n\times n\) positive definite matrix.
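To fix notation for what follows, here is one standard way the first step can be written (a sketch; the partition of \(A\) into the top left entry \(a\text{,}\) the remainder of the first column \(\vect{y}\text{,}\) and the trailing submatrix \(B\) is an assumption consistent with the formulas below):
\begin{align*}
A&=\begin{bmatrix}a&\adjoint{\vect{y}}\\\vect{y}&B\end{bmatrix}=\adjoint{U_1}A_1U_1\text{,}&
U_1&=\begin{bmatrix}\sqrt{a}&\frac{1}{\sqrt{a}}\adjoint{\vect{y}}\\0&I_{n-1}\end{bmatrix}\text{,}&
A_1&=\begin{bmatrix}1&0\\0&B-\frac{1}{a}\vect{y}\adjoint{\vect{y}}\end{bmatrix}\text{.}
\end{align*}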
The only obstacle to this computation is the square root of the entry in the top left corner of \(A\text{,}\) which must be strictly positive for the square root to produce a positive real number. If we apply the positive definite condition with \(\vect{x}=\vect{e}_1\) (the first column of the identity matrix), then we have \(a=\matrixentry{A}{11}=\innerproduct{\vect{e}_1}{A\vect{e}_1}\gt 0\text{.}\)
Can we repeat this decomposition on the \((n-1)\times(n-1)\) matrix \(B-\frac{1}{a}\vect{y}\adjoint{\vect{y}}\text{?}\) As before, we just need a strictly positive entry in the upper left corner of this slightly smaller matrix. To see this, employ the positive definite condition for \(A\) with \(\vect{x}=\inverse{U_1}\vect{e}_2\text{,}\) using the version of \(A\) that defines \(A_1\) (see Checkpoint 2.4.3). What is the result after \(n\) iterations?
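In outline (a sketch, with each \(U_k\) understood as acting on the trailing submatrix, padded out by an identity block), after \(n\) iterations the accumulated factors give
\begin{equation*}
A=\adjoint{U_1}\adjoint{U_2}\cdots\adjoint{U_n}\,U_n\cdots U_2U_1=\adjoint{\left(U_n\cdots U_2U_1\right)}\left(U_n\cdots U_2U_1\right)=\adjoint{U}U\text{,}
\end{equation*}
where \(U=U_n\cdots U_2U_1\text{.}\)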
Here we have used the observation that a product of upper triangular matrices is again upper triangular, and you should notice the appearance of the positive diagonal entries. So we have our desired factorization.
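The discussion above translates directly into a recursive procedure. Here is a minimal sketch in Python with NumPy (the function name and error handling are my own, and no attention is paid to numerical stability):

```python
import numpy as np

def cholesky_recursive(A):
    """Return upper triangular U with positive diagonal and A = U^H U.

    A is assumed to be Hermitian positive definite; the top left entry
    encountered at each step must be strictly positive, exactly as the
    positive definite condition guarantees.
    """
    A = np.asarray(A, dtype=complex)
    n = A.shape[0]
    if n == 0:
        return np.zeros((0, 0), dtype=complex)
    a = A[0, 0].real               # real, since A is Hermitian
    if a <= 0:
        raise ValueError("matrix is not positive definite")
    y = A[1:, 0]                   # remainder of the first column
    B = A[1:, 1:]
    U = np.zeros((n, n), dtype=complex)
    U[0, 0] = np.sqrt(a)
    U[0, 1:] = y.conj() / np.sqrt(a)
    # Recurse on the (n-1) x (n-1) matrix B - (1/a) y y^*
    U[1:, 1:] = cholesky_recursive(B - np.outer(y, y.conj()) / a)
    return U

# Quick check on a small positive definite matrix
A = np.array([[4.0, 2.0], [2.0, 3.0]])
U = cholesky_recursive(A)
print(np.allclose(U.conj().T @ U, A))  # True
```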
Checkpoint 2.4.3.
In the discussion of a recursive algorithm for computing a Cholesky decomposition in Subsection 2.4.2, verify that the matrix \(A_1\) has a strictly positive value in the second diagonal entry.
See the suggestion in the discussion, but comment on how we know \(U_1\) is invertible.