Section 6.1: Inner Products and Norms
Definition. Let $V$ be a vector space over $F \in \{\mathbb{R}, \mathbb{C}\}$. An inner product on $V$ is a function that assigns, to every ordered pair of vectors $x$ and $y$ in $V$, a scalar in $F$, denoted $\langle x, y \rangle$, such that for all $x$, $y$, and $z$ in $V$ and all $c$ in $F$, the following hold:

1. $\langle x + z, y \rangle = \langle x, y \rangle + \langle z, y \rangle$
2. $\langle cx, y \rangle = c\langle x, y \rangle$
3. $\overline{\langle x, y \rangle} = \langle y, x \rangle$, where the bar denotes complex conjugation.
4. $\langle x, x \rangle > 0$ if $x \neq 0$.

Note that if $z$ is a complex number, then the statement $z \geq 0$ means that $z$ is real and non-negative. Notice that if $F = \mathbb{R}$, (3) is just $\langle x, y \rangle = \langle y, x \rangle$.

Definition. Let $A \in M_{m \times n}(F)$. We define the conjugate transpose or adjoint of $A$ to be the $n \times m$ matrix $A^*$ such that $(A^*)_{i,j} = \overline{A_{j,i}}$ for all $i, j$.

Theorem 6.1. Let $V$ be an inner product space. Then for $x, y, z \in V$ and $c \in F$,

1. $\langle x, y + z \rangle = \langle x, y \rangle + \langle x, z \rangle$
2. $\langle x, cy \rangle = \overline{c}\langle x, y \rangle$
3. $\langle x, 0 \rangle = \langle 0, x \rangle = 0$
4. $\langle x, x \rangle = 0 \iff x = 0$
5. If $\langle x, y \rangle = \langle x, z \rangle$ for all $x \in V$, then $y = z$.

Proof. (Also see handout by Dan Hadley.)

1. $\langle x, y + z \rangle = \overline{\langle y + z, x \rangle} = \overline{\langle y, x \rangle + \langle z, x \rangle} = \overline{\langle y, x \rangle} + \overline{\langle z, x \rangle} = \langle x, y \rangle + \langle x, z \rangle$
2. $\langle x, cy \rangle = \overline{\langle cy, x \rangle} = \overline{c\langle y, x \rangle} = \overline{c}\,\overline{\langle y, x \rangle} = \overline{c}\langle x, y \rangle$
3. $\langle x, 0 \rangle = \langle x, 0x \rangle = \overline{0}\langle x, x \rangle = 0$, and $\langle 0, x \rangle = \langle 0x, x \rangle = 0\langle x, x \rangle = 0$.
4. If $x = 0$ then by (3) of this theorem, $\langle x, x \rangle = 0$. If $\langle x, x \rangle = 0$ then by (4) of the definition, it must be that $x = 0$.
5. Suppose $\langle x, y \rangle = \langle x, z \rangle$ for all $x \in V$. Then $\langle x, y \rangle - \langle x, z \rangle = 0$, so $\langle x, y - z \rangle = 0$, for all $x \in V$. Thus $\langle y - z, y - z \rangle = 0$, and we have $y - z = 0$ by (4). So $y = z$.

Definition. Let $V$ be an inner product space. For $x \in V$, we define the norm or length of $x$ by $\|x\| = \sqrt{\langle x, x \rangle}$.

Theorem 6.2. Let $V$ be an inner product space over $F$. Then for all $x, y \in V$ and $c \in F$, the following are true.

1. $\|cx\| = |c|\,\|x\|$.
2. $\|x\| = 0$ if and only if $x = 0$. In any case, $\|x\| \geq 0$.
3. (Cauchy–Schwarz Inequality) $|\langle x, y \rangle| \leq \|x\|\,\|y\|$.
4. (Triangle Inequality) $\|x + y\| \leq \|x\| + \|y\|$.
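The axioms and the two inequalities of Theorem 6.2 can be spot-checked numerically. A quick sketch in NumPy, using the standard inner product $\langle x, y \rangle = \sum_i x_i\overline{y_i}$ on $\mathbb{C}^n$ (the helper names `inner` and `norm` are mine, not from the text):

```python
import numpy as np

def inner(x, y):
    # Standard inner product on C^n: <x, y> = sum_i x_i * conj(y_i).
    # np.vdot conjugates its *first* argument, so <x, y> = vdot(y, x).
    return np.vdot(y, x)

def norm(x):
    # ||x|| = sqrt(<x, x>); <x, x> is real and non-negative.
    return np.sqrt(inner(x, x).real)

rng = np.random.default_rng(0)
x, y, z = [rng.normal(size=3) + 1j * rng.normal(size=3) for _ in range(3)]
c = 2 - 3j

# Axioms 1-4 (up to floating-point error):
assert np.isclose(inner(x + z, y), inner(x, y) + inner(z, y))
assert np.isclose(inner(c * x, y), c * inner(x, y))
assert np.isclose(inner(x, y), np.conj(inner(y, x)))
assert inner(x, x).real > 0

# Theorem 6.2: Cauchy-Schwarz and triangle inequalities.
assert abs(inner(x, y)) <= norm(x) * norm(y) + 1e-12
assert norm(x + y) <= norm(x) + norm(y) + 1e-12
```

The same helpers also confirm the conjugate-linearity in the second slot proved in Theorem 6.1(2).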
Proof. (Also see notes by Chris Lynd.)

1. $\|cx\|^2 = \langle cx, cx \rangle = c\overline{c}\langle x, x \rangle = |c|^2\|x\|^2$.
2. $\|x\| = 0$ iff $\sqrt{\langle x, x \rangle} = 0$ iff $\langle x, x \rangle = 0$ iff $x = 0$. If $x \neq 0$ then $\langle x, x \rangle > 0$ and so $\sqrt{\langle x, x \rangle} > 0$.
3. If $y = 0$ the result is true. So assume $y \neq 0$. Finished in class.
4. Done in class.

Definition. Let $V$ be an inner product space. $x, y \in V$ are orthogonal (or perpendicular) if $\langle x, y \rangle = 0$. A subset $S \subseteq V$ is called orthogonal if for all distinct $x, y \in S$, $\langle x, y \rangle = 0$. $x \in V$ is a unit vector if $\|x\| = 1$, and a subset $S \subseteq V$ is orthonormal if $S$ is orthogonal and $\|x\| = 1$ for all $x \in S$.

Section 6.2

Definition. Let $V$ be an inner product space. Then $S \subseteq V$ is an orthonormal basis of $V$ if it is an ordered basis and orthonormal.

Theorem 6.3. Let $V$ be an inner product space and $S = \{v_1, v_2, \ldots, v_k\}$ be an orthogonal subset of $V$ such that $v_i \neq 0$ for all $i$. If $y \in \operatorname{Span}(S)$, then
$$y = \sum_{i=1}^{k} \frac{\langle y, v_i \rangle}{\|v_i\|^2}\, v_i.$$

Proof. Let $y = \sum_{i=1}^{k} a_i v_i$. Then
$$\langle y, v_j \rangle = \left\langle \sum_{i=1}^{k} a_i v_i, v_j \right\rangle = \sum_{i=1}^{k} a_i \langle v_i, v_j \rangle = a_j \langle v_j, v_j \rangle = a_j \|v_j\|^2.$$
So $a_j = \dfrac{\langle y, v_j \rangle}{\|v_j\|^2}$.

Corollary 1. If $S$ also is orthonormal, then $y = \sum_{i=1}^{k} \langle y, v_i \rangle v_i$.

Corollary 2. If $S$ also is orthogonal and all vectors in $S$ are non-zero, then $S$ is linearly independent.

Proof. Suppose $\sum_{i=1}^{k} a_i v_i = 0$. Then for all $j$, $a_j = \dfrac{\langle 0, v_j \rangle}{\|v_j\|^2} = 0$. So $S$ is linearly independent.

Theorem 6.4. (Gram–Schmidt) Let $V$ be an inner product space and $S = \{w_1, w_2, \ldots, w_n\} \subseteq V$ be a linearly independent set. Define $S' = \{v_1, v_2, \ldots, v_n\}$, where $v_1 = w_1$ and for $2 \leq k \leq n$,
$$v_k = w_k - \sum_{i=1}^{k-1} \frac{\langle w_k, v_i \rangle}{\|v_i\|^2}\, v_i.$$
Then $S'$ is an orthogonal set of non-zero vectors and $\operatorname{Span}(S') = \operatorname{Span}(S)$.
Proof. The proof is by induction on $n$. Base case, $n = 1$: $S = \{w_1\}$, $S' = \{v_1\} = \{w_1\}$. Let $n > 1$ and $S_n = \{w_1, w_2, \ldots, w_n\}$. For $S_{n-1} = \{w_1, \ldots, w_{n-1}\}$ and $S'_{n-1} = \{v_1, \ldots, v_{n-1}\}$, we have by induction that $\operatorname{span}(S'_{n-1}) = \operatorname{span}(S_{n-1})$ and $S'_{n-1}$ is orthogonal. To show $S'_n$ is orthogonal, we just have to show that for all $j \in [n-1]$, $\langle v_n, v_j \rangle = 0$. We have
$$v_n = w_n - \sum_{k=1}^{n-1} \frac{\langle w_n, v_k \rangle}{\|v_k\|^2}\, v_k,$$
so
$$\langle v_n, v_j \rangle = \langle w_n, v_j \rangle - \left\langle \sum_{k=1}^{n-1} \frac{\langle w_n, v_k \rangle}{\|v_k\|^2} v_k,\; v_j \right\rangle = \langle w_n, v_j \rangle - \sum_{k=1}^{n-1} \frac{\langle w_n, v_k \rangle}{\|v_k\|^2} \langle v_k, v_j \rangle = \langle w_n, v_j \rangle - \frac{\langle w_n, v_j \rangle}{\|v_j\|^2}\langle v_j, v_j \rangle = 0.$$

We now show $\operatorname{span}(S_n) = \operatorname{span}(S'_n)$. We know $\dim(\operatorname{span}(S_n)) = n$, since $S_n$ is linearly independent. We know $\dim(\operatorname{span}(S'_n)) = n$, by Corollary 2 to Theorem 6.3. We know $\operatorname{span}(S_{n-1}) = \operatorname{span}(S'_{n-1})$. So we just need to show $v_n \in \operatorname{span}(S_n)$:
$$v_n = w_n - \sum_{j=1}^{n-1} a_j v_j$$
for some constants $a_1, a_2, \ldots, a_{n-1}$. For all $j \in [n-1]$, $v_j \in \operatorname{span}(S_{n-1})$, since $\operatorname{span}(S'_{n-1}) = \operatorname{span}(S_{n-1})$ and $v_j \in S'_{n-1}$. Therefore $v_n \in \operatorname{span}(S_n)$.

Theorem 6.5. Let $V$ be a non-zero finite dimensional inner product space. Then $V$ has an orthonormal basis $\beta$. Furthermore, if $\beta = \{v_1, v_2, \ldots, v_n\}$ and $x \in V$, then
$$x = \sum_{i=1}^{n} \langle x, v_i \rangle v_i.$$

Proof. Start with a basis of $V$. Apply Gram–Schmidt (Theorem 6.4) to get an orthogonal set $\beta'$. Produce $\beta$ from $\beta'$ by normalizing $\beta'$: multiply each $x \in \beta'$ by $1/\|x\|$. By Corollary 2 to Theorem 6.3, $\beta$ is linearly independent, and since it has $n$ vectors, it must be a basis of $V$. By Corollary 1 to Theorem 6.3, if $x \in V$, then $x = \sum_{i=1}^{n} \langle x, v_i \rangle v_i$.

Corollary 1. Let $V$ be a finite dimensional inner product space with an orthonormal basis $\beta = \{v_1, v_2, \ldots, v_n\}$. Let $T$ be a linear operator on $V$, and $A = [T]_\beta$. Then for any $i, j$, $(A)_{i,j} = \langle T(v_j), v_i \rangle$.

Proof. By Theorem 6.5, for $x \in V$,
$$x = \sum_{i=1}^{n} \langle x, v_i \rangle v_i.$$
So
$$T(v_j) = \sum_{i=1}^{n} \langle T(v_j), v_i \rangle v_i.$$
Hence
$$[T(v_j)]_\beta = \left(\langle T(v_j), v_1 \rangle, \langle T(v_j), v_2 \rangle, \ldots, \langle T(v_j), v_n \rangle\right)^t.$$
So $(A)_{i,j} = \langle T(v_j), v_i \rangle$.
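The Gram–Schmidt formula of Theorem 6.4 translates directly into code. A NumPy sketch (the function name is mine; it implements exactly the subtraction step $v_k = w_k - \sum_{i<k} \frac{\langle w_k, v_i\rangle}{\|v_i\|^2} v_i$):

```python
import numpy as np

def gram_schmidt(ws):
    """Orthogonalize linearly independent vectors ws (Theorem 6.4):
    v_k = w_k - sum_{i<k} (<w_k, v_i> / ||v_i||^2) v_i."""
    vs = []
    for w in ws:
        v = np.asarray(w, dtype=complex)
        for u in vs:
            # <w, u> under <x, y> = sum x_i conj(y_i) is np.vdot(u, w).
            v = v - (np.vdot(u, w) / np.vdot(u, u)) * u
        vs.append(v)
    return vs

ws = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
vs = gram_schmidt(ws)

# The output is pairwise orthogonal:
for i in range(3):
    for j in range(i + 1, 3):
        assert abs(np.vdot(vs[i], vs[j])) < 1e-12
```

Since the resulting set is orthogonal, Theorem 6.3 recovers each original $w$ from its coefficients $\langle w, v_i\rangle / \|v_i\|^2$, confirming that the span is preserved.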
Definition. Let $S$ be a non-empty subset of an inner product space $V$. We define $S^\perp$ (read "$S$ perp") to be the set of all vectors in $V$ that are orthogonal to every vector in $S$; that is, $S^\perp = \{x \in V : \langle x, y \rangle = 0 \text{ for all } y \in S\}$. $S^\perp$ is called the orthogonal complement of $S$.

Theorem 6.6. Let $W$ be a finite dimensional subspace of an inner product space $V$, and let $y \in V$. Then there exist unique vectors $u \in W$ and $z \in W^\perp$ such that $y = u + z$. Furthermore, if $\{v_1, v_2, \ldots, v_k\}$ is an orthonormal basis for $W$, then
$$u = \sum_{i=1}^{k} \langle y, v_i \rangle v_i.$$

Proof. Let $y \in V$. Let $\{v_1, v_2, \ldots, v_k\}$ be an orthonormal basis for $W$; this exists by Theorem 6.5. Let $u = \sum_{i=1}^{k} \langle y, v_i \rangle v_i$. We will show $y - u \in W^\perp$. It suffices to show that for all $i \in [k]$, $\langle y - u, v_i \rangle = 0$:
$$\langle y - u, v_i \rangle = \left\langle y - \sum_{j=1}^{k} \langle y, v_j \rangle v_j,\; v_i \right\rangle = \langle y, v_i \rangle - \sum_{j=1}^{k} \langle y, v_j \rangle \langle v_j, v_i \rangle = \langle y, v_i \rangle - \langle y, v_i \rangle = 0.$$
Thus $y - u \in W^\perp$.

Suppose $x \in W \cap W^\perp$. Then $x \in W$ and $x \in W^\perp$. So $\langle x, x \rangle = 0$, and we have that $x = 0$.

Suppose $r \in W$ and $s \in W^\perp$ are such that $y = r + s$. Then $r + s = u + z$, so $r - u = z - s$. This shows that both $r - u$ and $z - s$ are in both $W$ and $W^\perp$. But $W \cap W^\perp = \{0\}$, so it must be that $r - u = 0$ and $z - s = 0$, which implies that $r = u$ and $z = s$, and we see that the representation of $y$ is unique.

Corollary 1. In the notation of Theorem 6.6, the vector $u$ is the unique vector in $W$ that is closest to $y$. That is, for any $x \in W$, $\|y - x\| \geq \|y - u\|$, and we get equality in the previous inequality if and only if $x = u$.

Proof. Let $y \in V$, $u = \sum_{i=1}^{k} \langle y, v_i \rangle v_i$, and $z = y - u \in W^\perp$. Let $x \in W$. Then $\langle y - u, w \rangle = 0$ for every $w \in W$, since $z = y - u \in W^\perp$; in particular, $u - x \in W$ is orthogonal to $z$. By Exercise 6.1, number 10, if $a$ is orthogonal to $b$ then $\|a + b\|^2 = \|a\|^2 + \|b\|^2$. So we have
$$\|y - x\|^2 = \|(u + z) - x\|^2 = \|(u - x) + z\|^2 = \|u - x\|^2 + \|z\|^2 \geq \|z\|^2 = \|y - u\|^2.$$
Now suppose $\|y - x\| = \|y - u\|$. Then $\|u - x\|^2 + \|z\|^2 = \|z\|^2$, and thus $\|u - x\|^2 = 0$, $\|u - x\| = 0$, $u - x = 0$, and so $u = x$.
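Theorem 6.6 and its corollary can be illustrated concretely: project $y$ onto $W$ using an orthonormal basis, and check that the remainder lies in $W^\perp$ and that $u$ is the closest point. A NumPy sketch (helper name and the choice of $W$ are mine):

```python
import numpy as np

def project(y, onb):
    """Orthogonal projection of y onto W = span(onb), where onb is an
    orthonormal basis of W (Theorem 6.6): u = sum <y, v_i> v_i."""
    return sum(np.vdot(v, y) * v for v in onb)

# W = the xy-plane in R^3, with orthonormal basis {e1, e2}.
onb = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
y = np.array([3.0, 4.0, 5.0])

u = project(y, onb)
z = y - u
# y = u + z with u in W and z in W-perp:
assert np.allclose(u, [3.0, 4.0, 0.0])
for v in onb:
    assert abs(np.vdot(v, z)) < 1e-12
```

By Corollary 1, any other point of $W$ is at least as far from $y$ as $u$ is.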
Theorem 6.7. Suppose that $S = \{v_1, v_2, \ldots, v_k\}$ is an orthonormal set in an $n$-dimensional inner product space $V$. Then

(a) $S$ can be extended to an orthonormal basis $\{v_1, \ldots, v_k, v_{k+1}, \ldots, v_n\}$.

(b) If $W = \operatorname{Span}(S)$, then $S_1 = \{v_{k+1}, v_{k+2}, \ldots, v_n\}$ is an orthonormal basis for $W^\perp$.

(c) If $W$ is any subspace of $V$, then $\dim(V) = \dim(W) + \dim(W^\perp)$.

Proof. (a) Extend $S$ to an ordered basis $S' = \{v_1, v_2, \ldots, v_k, w_{k+1}, w_{k+2}, \ldots, w_n\}$. Apply Gram–Schmidt to $S'$; the first $k$ vectors do not change, and the result spans $V$. Then normalize to obtain $\beta = \{v_1, \ldots, v_k, v_{k+1}, \ldots, v_n\}$.

(b) $S_1 = \{v_{k+1}, v_{k+2}, \ldots, v_n\}$ is an orthonormal set, hence linearly independent. Also, it is a subset of $W^\perp$. It must span $W^\perp$, since if $x \in W^\perp$,
$$x = \sum_{i=1}^{n} \langle x, v_i \rangle v_i = \sum_{i=k+1}^{n} \langle x, v_i \rangle v_i.$$

(c) $\dim(V) = n = k + (n - k) = \dim(W) + \dim(W^\perp)$.

Section 6.3:

Theorem 6.8. Let $V$ be a finite dimensional inner product space over $F$, and let $g : V \to F$ be a linear functional. Then there exists a unique vector $y \in V$ such that $g(x) = \langle x, y \rangle$ for all $x \in V$.

Proof. Let $\beta = \{v_1, v_2, \ldots, v_n\}$ be an orthonormal basis for $V$ and let $y = \sum_{i=1}^{n} \overline{g(v_i)}\, v_i$. Then for $1 \leq j \leq n$,
$$\langle v_j, y \rangle = \left\langle v_j, \sum_{i=1}^{n} \overline{g(v_i)}\, v_i \right\rangle = \sum_{i=1}^{n} g(v_i)\langle v_j, v_i \rangle = g(v_j)\langle v_j, v_j \rangle = g(v_j),$$
and by linearity we have $g(x) = \langle x, y \rangle$ for all $x \in V$. To show $y$ is unique, suppose $g(x) = \langle x, y' \rangle$ for all $x$. Then $\langle x, y \rangle = \langle x, y' \rangle$ for all $x$. By Theorem 6.1(5), we have $y = y'$.

Example. (2b) Let $V = \mathbb{C}^2$, $g(z_1, z_2) = z_1 - 2z_2$. Then $V$ is an inner product space with the standard inner product $\langle (x_1, x_2), (y_1, y_2) \rangle = x_1\overline{y_1} + x_2\overline{y_2}$, and $g$ is a linear functional on $V$. Find a vector $y \in V$ such that $g(x) = \langle x, y \rangle$ for all $x \in V$.

Sol: We need to find $(y_1, y_2) \in \mathbb{C}^2$ such that $g(z_1, z_2) = \langle (z_1, z_2), (y_1, y_2) \rangle$ for all $(z_1, z_2) \in \mathbb{C}^2$. That is:
$$z_1 - 2z_2 = z_1\overline{y_1} + z_2\overline{y_2}. \quad (1)$$
Using the standard ordered basis $\{(1, 0), (0, 1)\}$ for $\mathbb{C}^2$, the proof of Theorem 6.8 gives $y = \sum_{i=1}^{n} \overline{g(v_i)}\, v_i$. So
$$(y_1, y_2) = \overline{g(1, 0)}(1, 0) + \overline{g(0, 1)}(0, 1) = (1, 0) - 2(0, 1) = (1, -2).$$
Check (1): LHS $= z_1 - 2z_2$, and for $y_1 = 1$ and $y_2 = -2$, we have RHS $= z_1 - 2z_2$.
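The computation in Example (2b) is easy to replicate numerically. A NumPy sketch of the construction $y = \sum_i \overline{g(v_i)}\,v_i$ from the proof of Theorem 6.8 (variable names are mine):

```python
import numpy as np

def g(z):
    # The functional from Example (2b): g(z1, z2) = z1 - 2 z2.
    return z[0] - 2 * z[1]

# Theorem 6.8: y = sum conj(g(v_i)) v_i over an orthonormal basis.
basis = [np.array([1.0 + 0j, 0.0]), np.array([0.0, 1.0 + 0j])]
y = sum(np.conj(g(v)) * v for v in basis)
assert np.allclose(y, [1.0, -2.0])

# Check g(x) = <x, y>, where <x, y> = sum x_i conj(y_i) = vdot(y, x).
x = np.array([2.0 + 1j, 3.0 - 1j])
assert np.isclose(g(x), np.vdot(y, x))
```

The conjugation on $g(v_i)$ matters only when $g$ takes non-real values on the basis; here $g(0,1) = -2$ is real, so $y = (1, -2)$ either way.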
Theorem 6.9. Let $V$ be a finite dimensional inner product space and let $T$ be a linear operator on $V$. There exists a unique function $T^* : V \to V$ such that $\langle T(x), y \rangle = \langle x, T^*(y) \rangle$ for all $x, y \in V$. Furthermore, $T^*$ is linear.

Proof. Let $y \in V$. Define $g : V \to F$ by $g(x) = \langle T(x), y \rangle$ for $x \in V$.

Claim: $g$ is linear. $g(ax + z) = \langle T(ax + z), y \rangle = \langle aT(x) + T(z), y \rangle = a\langle T(x), y \rangle + \langle T(z), y \rangle = a\,g(x) + g(z)$.

By Theorem 6.8, there is a unique $y' \in V$ such that $g(x) = \langle x, y' \rangle$. So we have $\langle T(x), y \rangle = \langle x, y' \rangle$ for all $x \in V$. We define $T^* : V \to V$ by $T^*(y) = y'$. So $\langle T(x), y \rangle = \langle x, T^*(y) \rangle$ for all $x \in V$.

Claim: $T^*$ is linear. For $cy + z \in V$, $T^*(cy + z)$ equals the unique vector $w$ such that $\langle T(x), cy + z \rangle = \langle x, w \rangle$ for all $x$. But
$$\langle T(x), cy + z \rangle = \overline{c}\langle T(x), y \rangle + \langle T(x), z \rangle = \overline{c}\langle x, T^*(y) \rangle + \langle x, T^*(z) \rangle = \langle x, cT^*(y) + T^*(z) \rangle.$$
Since $w$ is unique, $T^*(cy + z) = cT^*(y) + T^*(z)$.

Claim: $T^*$ is unique. Let $U : V \to V$ be linear such that
$$\langle T(x), y \rangle = \langle x, U(y) \rangle \quad \text{for all } x, y \in V.$$
Then $\langle x, T^*(y) \rangle = \langle x, U(y) \rangle$ for all $x, y \in V$, which gives $T^*(y) = U(y)$ for all $y \in V$. So $T^* = U$.

Definition. $T^*$ is called the adjoint of the linear operator $T$ and is defined to be the unique operator on $V$ satisfying $\langle T(x), y \rangle = \langle x, T^*(y) \rangle$ for all $x, y \in V$. For $A \in M_n(F)$, we have the earlier definition of $A^*$, the adjoint of $A$: the conjugate transpose.

Fact. $\langle x, T(y) \rangle = \langle T^*(x), y \rangle$ for all $x, y \in V$.

Proof. $\langle x, T(y) \rangle = \overline{\langle T(y), x \rangle} = \overline{\langle y, T^*(x) \rangle} = \langle T^*(x), y \rangle$.

Theorem 6.10. Let $V$ be a finite dimensional inner product space and let $\beta$ be an orthonormal basis for $V$. If $T$ is a linear operator on $V$, then $[T^*]_\beta = [T]_\beta^*$.

Proof. Let $A = [T]_\beta$ and $B = [T^*]_\beta$, with $\beta = \{v_1, v_2, \ldots, v_n\}$. By the corollary to Theorem 6.5,
$$(B)_{i,j} = \langle T^*(v_j), v_i \rangle = \overline{\langle v_i, T^*(v_j) \rangle} = \overline{\langle T(v_i), v_j \rangle} = \overline{(A)_{j,i}} = (A^*)_{i,j}.$$
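For matrices with the standard inner product, the defining property of the adjoint becomes $\langle Ax, y \rangle = \langle x, A^*y \rangle$ with $A^*$ the conjugate transpose. A quick NumPy sketch checking this on random complex vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A_star = A.conj().T  # the conjugate transpose / adjoint

x = rng.normal(size=3) + 1j * rng.normal(size=3)
y = rng.normal(size=3) + 1j * rng.normal(size=3)

# Defining property of the adjoint: <A x, y> = <x, A* y>, where
# <u, v> = sum u_i conj(v_i) = np.vdot(v, u).
assert np.isclose(np.vdot(y, A @ x), np.vdot(A_star @ y, x))
```

The identities of the upcoming matrix corollaries, such as $(A^*)^* = A$ and $(AB)^* = B^*A^*$, can be checked the same way.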
Corollary 2. Let $A$ be an $n \times n$ matrix. Then $(L_A)^* = L_{A^*}$.

Proof. Use $\beta$, the standard ordered basis. By Theorem 2.16,
$$[L_A]_\beta = A \quad (2) \qquad \text{and} \qquad [L_{A^*}]_\beta = A^* \quad (3).$$
By Theorem 6.10, $[(L_A)^*]_\beta = [L_A]_\beta^*$, which equals $A^*$ by (2), and $A^* = [L_{A^*}]_\beta$ by (3). Therefore $[(L_A)^*]_\beta = [L_{A^*}]_\beta$, so $(L_A)^* = L_{A^*}$.

Theorem 6.11. Let $V$ be an inner product space, and $T, U$ linear operators on $V$. Then

1. $(T + U)^* = T^* + U^*$.
2. $(cT)^* = \overline{c}T^*$ for $c \in F$.
3. $(TU)^* = U^*T^*$ (composition).
4. $(T^*)^* = T$.
5. $I^* = I$.

Proof.

1. $\langle (T + U)(x), y \rangle = \langle x, (T + U)^*(y) \rangle$, and
$$\langle (T + U)(x), y \rangle = \langle T(x), y \rangle + \langle U(x), y \rangle = \langle x, T^*(y) \rangle + \langle x, U^*(y) \rangle = \langle x, (T^* + U^*)(y) \rangle.$$
And $(T + U)^*$ is unique, so it must equal $T^* + U^*$.

2. $\langle cT(x), y \rangle = \langle x, (cT)^*(y) \rangle$, and
$$\langle cT(x), y \rangle = c\langle T(x), y \rangle = c\langle x, T^*(y) \rangle = \langle x, \overline{c}T^*(y) \rangle.$$

3. $\langle TU(x), y \rangle = \langle x, (TU)^*(y) \rangle$, and
$$\langle TU(x), y \rangle = \langle T(U(x)), y \rangle = \langle U(x), T^*(y) \rangle = \langle x, U^*(T^*(y)) \rangle = \langle x, (U^*T^*)(y) \rangle.$$
4. $\langle T^*(x), y \rangle = \langle x, (T^*)^*(y) \rangle$ by definition, and $\langle T^*(x), y \rangle = \langle x, T(y) \rangle$ by the Fact. Hence $(T^*)^* = T$.

5. $\langle I(x), y \rangle = \langle x, y \rangle = \langle x, I(y) \rangle$ for all $x, y \in V$. Therefore $I^*(y) = I(y)$ for all $y \in V$, and we have $I^* = I$.

Corollary 1. Let $A$ and $B$ be $n \times n$ matrices. Then

1. $(A + B)^* = A^* + B^*$.
2. $(cA)^* = \overline{c}A^*$ for $c \in F$.
3. $(AB)^* = B^*A^*$.
4. $(A^*)^* = A$.
5. $I^* = I$.

Proof. Use Theorem 6.11 and Corollary 2 to Theorem 6.10, or use the computations below.

Example. (Exercise 5b) Let $A$ and $B$ be $m \times n$ matrices and $C$ an $n \times p$ matrix. Then

1. $(A + B)^* = A^* + B^*$.
2. $(cA)^* = \overline{c}A^*$ for $c \in F$.
3. $(AC)^* = C^*A^*$.
4. $(A^*)^* = A$.
5. $I^* = I$.

Proof.

1. $((A + B)^*)_{i,j} = \overline{(A + B)_{j,i}} = \overline{(A)_{j,i} + (B)_{j,i}} = \overline{(A)_{j,i}} + \overline{(B)_{j,i}}$, and
$$(A^* + B^*)_{i,j} = (A^*)_{i,j} + (B^*)_{i,j} = \overline{(A)_{j,i}} + \overline{(B)_{j,i}}.$$

2. Let $c \in F$. $((cA)^*)_{i,j} = \overline{(cA)_{j,i}} = \overline{c(A)_{j,i}} = \overline{c}\,\overline{(A)_{j,i}}$, and
$$(\overline{c}A^*)_{i,j} = \overline{c}(A^*)_{i,j} = \overline{c}\,\overline{(A)_{j,i}}.$$
3.
$$((AC)^*)_{i,j} = \overline{(AC)_{j,i}} = \overline{\sum_{k=1}^{n} (A)_{j,k}(C)_{k,i}} = \sum_{k=1}^{n} \overline{(A)_{j,k}}\,\overline{(C)_{k,i}} = \sum_{k=1}^{n} (A^*)_{k,j}(C^*)_{i,k} = \sum_{k=1}^{n} (C^*)_{i,k}(A^*)_{k,j} = (C^*A^*)_{i,j}.$$

Fall: The following was not covered.

For $x, y \in F^n$, let $\langle x, y \rangle_n$ denote the standard inner product of $x$ and $y$ in $F^n$. Recall that if $x$ and $y$ are regarded as column vectors, then $\langle x, y \rangle_n = y^*x$.

Lemma 1. Let $A \in M_{m \times n}(F)$, $x \in F^n$, and $y \in F^m$. Then $\langle Ax, y \rangle_m = \langle x, A^*y \rangle_n$.

Proof. $\langle Ax, y \rangle_m = y^*(Ax) = (y^*A)x = (A^*y)^*x = \langle x, A^*y \rangle_n$.

Lemma 2. Let $A \in M_{m \times n}(F)$. Then $\operatorname{rank}(A^*A) = \operatorname{rank}(A)$.

Proof. $A^*A$ is an $n \times n$ matrix. By the Dimension Theorem, $\operatorname{rank}(A^*A) + \operatorname{nullity}(A^*A) = n$. We also have $\operatorname{rank}(A) + \operatorname{nullity}(A) = n$. We will show that the nullspace of $A$ equals the nullspace of $A^*A$; that is, $A^*Ax = 0$ if and only if $Ax = 0$. Clearly $Ax = 0$ implies $A^*Ax = 0$. Conversely, if $A^*Ax = 0$, then
$$0 = \langle A^*Ax, x \rangle_n = \langle Ax, (A^*)^*x \rangle_m = \langle Ax, Ax \rangle_m,$$
so $Ax = 0$.

Corollary 1. If $A$ is an $m \times n$ matrix such that $\operatorname{rank}(A) = n$, then $A^*A$ is invertible.

Theorem 6.12. Let $A \in M_{m \times n}(F)$ and $y \in F^m$. Then there exists $x_0 \in F^n$ such that $(A^*A)x_0 = A^*y$ and $\|Ax_0 - y\| \leq \|Ax - y\|$ for all $x \in F^n$. Furthermore, if $\operatorname{rank}(A) = n$, then $x_0 = (A^*A)^{-1}A^*y$.

Proof. Define $W = \{Ax : x \in F^n\} = R(L_A)$. By the corollary to Theorem 6.6, there is a unique vector $u = Ax_0$ in $W$ that is closest to $y$. Then $\|Ax_0 - y\| \leq \|Ax - y\|$ for all $x \in F^n$. Also by Theorem 6.6, $z = y - u$ is in $W^\perp$. So $Ax_0 - y = -z$ is in $W^\perp$, and hence
$$\langle Ax, Ax_0 - y \rangle_m = 0 \quad \text{for all } x \in F^n.$$
By Lemma 1, $\langle x, A^*(Ax_0 - y) \rangle_n = 0$ for all $x \in F^n$. So $A^*(Ax_0 - y) = 0$, and we see that $x_0$ is a solution for $x$ in $A^*Ax = A^*y$. If, in addition, we know that $\operatorname{rank}(A) = n$, then by Lemma 2 we have $\operatorname{rank}(A^*A) = n$, and $A^*A$ is therefore invertible. So $x_0 = (A^*A)^{-1}A^*y$.

Fall: not covered until here.

Section 6.4:

Lemma. Let $T$ be a linear operator on a finite-dimensional inner product space $V$. If $T$ has an eigenvector, then so does $T^*$.

Proof. Let $v$ be an eigenvector of $T$ corresponding to the eigenvalue $\lambda$. For all $x \in V$, we have
$$0 = \langle 0, x \rangle = \langle (T - \lambda I)(v), x \rangle = \langle v, (T - \lambda I)^*(x) \rangle = \langle v, (T^* - \overline{\lambda}I)(x) \rangle.$$
So $v$ is orthogonal to $(T^* - \overline{\lambda}I)(x)$ for all $x$. Thus $v \perp R(T^* - \overline{\lambda}I)$, and so the nullity of $T^* - \overline{\lambda}I$ is not $0$: there exists $x \neq 0$ such that $(T^* - \overline{\lambda}I)(x) = 0$. Thus $x$ is an eigenvector of $T^*$ corresponding to the eigenvalue $\overline{\lambda}$.

Theorem 6.14 (Schur). Let $T$ be a linear operator on a finite-dimensional inner product space $V$. Suppose that the characteristic polynomial of $T$ splits. Then there exists an orthonormal basis $\beta$ for $V$ such that the matrix $[T]_\beta$ is upper triangular.

Proof. The proof is by mathematical induction on the dimension $n$ of $V$. The result is immediate if $n = 1$. So suppose that the result is true for linear operators on $(n - 1)$-dimensional inner product spaces whose characteristic polynomials split. Since the characteristic polynomial of $T$ splits, $T$ has an eigenvector, so by the lemma we can assume that $T^*$ has a unit eigenvector $z$. Suppose that $T^*(z) = \lambda z$ and that $W = \operatorname{span}(\{z\})$. We show that $W^\perp$ is $T$-invariant. If $y \in W^\perp$ and $x = cz \in W$, then
$$\langle T(y), x \rangle = \langle T(y), cz \rangle = \langle y, T^*(cz) \rangle = \langle y, cT^*(z) \rangle = \langle y, c\lambda z \rangle = \overline{c\lambda}\langle y, z \rangle = \overline{c\lambda}(0) = 0.$$
So $T(y) \in W^\perp$. By Theorem 5.21, the characteristic polynomial of $T_{W^\perp}$ divides the characteristic polynomial of $T$ and hence splits. By Theorem 6.7(c), $\dim(W^\perp) = n - 1$, so we may apply the induction hypothesis to $T_{W^\perp}$ and obtain an orthonormal basis $\gamma$ of $W^\perp$ such that $[T_{W^\perp}]_\gamma$ is upper triangular. Clearly, $\beta = \gamma \cup \{z\}$ is an orthonormal basis for $V$ such that $[T]_\beta$ is upper triangular.

Definition. Let $V$ be an inner product space, and let $T$ be a linear operator on $V$. We say that $T$ is normal if $T^*T = TT^*$.
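A small numerical illustration of normality and of the lemma above (the rotation matrix is my own example, not from the text): a $90°$ rotation of $\mathbb{R}^2$ is normal but not self-adjoint, and the eigenvalues of its adjoint are the conjugates of its eigenvalues.

```python
import numpy as np

# A normal (but not self-adjoint) matrix: rotation by 90 degrees.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
A_star = A.conj().T
assert np.allclose(A @ A_star, A_star @ A)  # A is normal
assert not np.allclose(A, A_star)           # but not self-adjoint

# The lemma of Section 6.4: if lambda is an eigenvalue of T,
# then conj(lambda) is an eigenvalue of T*.
evals = np.linalg.eigvals(A)          # the rotation has eigenvalues +i, -i
evals_star = np.linalg.eigvals(A_star)
for lam in evals:
    assert np.min(np.abs(evals_star - np.conj(lam))) < 1e-10
```

Note the rotation has no real eigenvalues, which is consistent with its characteristic polynomial $t^2 + 1$ splitting only over $\mathbb{C}$.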
An $n \times n$ real or complex matrix $A$ is normal if $AA^* = A^*A$.

Theorem 6.15. Let $V$ be an inner product space, and let $T$ be a normal operator on $V$. Then the following statements are true.

(a) $\|T(x)\| = \|T^*(x)\|$ for all $x \in V$.

(b) $T - cI$ is normal for every $c \in F$.

(c) If $x$ is an eigenvector of $T$, then $x$ is also an eigenvector of $T^*$. In fact, if $T(x) = \lambda x$, then $T^*(x) = \overline{\lambda}x$.
(d) If $\lambda_1$ and $\lambda_2$ are distinct eigenvalues of $T$ with corresponding eigenvectors $x_1$ and $x_2$, then $x_1$ and $x_2$ are orthogonal.

Proof. (a) For any $x \in V$, we have
$$\|T(x)\|^2 = \langle T(x), T(x) \rangle = \langle T^*T(x), x \rangle = \langle TT^*(x), x \rangle = \langle T^*(x), T^*(x) \rangle = \|T^*(x)\|^2.$$

(b)
$$(T - cI)(T - cI)^* = (T - cI)(T^* - \overline{c}I) = TT^* - \overline{c}T - cT^* + c\overline{c}I = T^*T - \overline{c}T - cT^* + \overline{c}cI = (T^* - \overline{c}I)(T - cI) = (T - cI)^*(T - cI).$$

(c) Suppose that $T(x) = \lambda x$ for some $x \in V$. Let $U = T - \lambda I$. Then $U(x) = 0$, and $U$ is normal by (b). Thus (a) implies that
$$0 = \|U(x)\| = \|U^*(x)\| = \|(T^* - \overline{\lambda}I)(x)\| = \|T^*(x) - \overline{\lambda}x\|.$$
Hence $T^*(x) = \overline{\lambda}x$. So $x$ is an eigenvector of $T^*$.

(d) Let $\lambda_1$ and $\lambda_2$ be distinct eigenvalues of $T$ with corresponding eigenvectors $x_1$ and $x_2$. Then, using (c), we have
$$\lambda_1\langle x_1, x_2 \rangle = \langle \lambda_1 x_1, x_2 \rangle = \langle T(x_1), x_2 \rangle = \langle x_1, T^*(x_2) \rangle = \langle x_1, \overline{\lambda_2}x_2 \rangle = \lambda_2\langle x_1, x_2 \rangle.$$
Since $\lambda_1 \neq \lambda_2$, we conclude that $\langle x_1, x_2 \rangle = 0$.

Definition. Let $T$ be a linear operator on an inner product space $V$. We say that $T$ is self-adjoint (Hermitian) if $T = T^*$. An $n \times n$ real or complex matrix $A$ is self-adjoint (Hermitian) if $A = A^*$.

Theorem 6.16. Let $T$ be a linear operator on a finite-dimensional complex inner product space $V$. Then $T$ is normal if and only if there exists an orthonormal basis for $V$ consisting of eigenvectors of $T$.

Proof. The characteristic polynomial of $T$ splits over $\mathbb{C}$. By Schur's Theorem there is an orthonormal basis $\beta = \{v_1, v_2, \ldots, v_n\}$ such that $A = [T]_\beta$ is upper triangular.

($\Rightarrow$) Assume $T$ is normal. $T(v_1) = A_{1,1}v_1$; therefore $v_1$ is an eigenvector with associated eigenvalue $A_{1,1}$. Assume $v_1, \ldots, v_{k-1}$ are all eigenvectors of $T$, with corresponding eigenvalues $\lambda_1, \ldots, \lambda_{k-1}$.

Claim: $v_k$ is also an eigenvector. By Theorem 6.15, $T(v_j) = \lambda_j v_j$ implies $T^*(v_j) = \overline{\lambda_j}v_j$. Also, since $A$ is upper triangular, $T(v_k) = A_{1,k}v_1 + A_{2,k}v_2 + \cdots + A_{k,k}v_k$. We also know, by the Corollary to Theorem 6.5, that $A_{i,j} = \langle T(v_j), v_i \rangle$. So for $j < k$,
$$A_{j,k} = \langle T(v_k), v_j \rangle = \langle v_k, T^*(v_j) \rangle = \langle v_k, \overline{\lambda_j}v_j \rangle = \lambda_j\langle v_k, v_j \rangle = 0.$$
So $T(v_k) = A_{k,k}v_k$, and we have that $v_k$ is an eigenvector of $T$. By induction, $\beta$ is a set of eigenvectors of $T$.

($\Leftarrow$) If $\beta$ is an orthonormal basis of eigenvectors, then $T$ is diagonalizable by Theorem 5.1: $D = [T]_\beta$ is diagonal. Hence $[T^*]_\beta = [T]_\beta^* = D^*$ is also diagonal. Diagonal matrices commute, so $[T]_\beta[T^*]_\beta = [T^*]_\beta[T]_\beta$. Hence for every $x = a_1v_1 + a_2v_2 + \cdots + a_nv_n \in V$ we have $TT^*(x) = T^*T(x)$, and hence $TT^* = T^*T$.

Lemma. Let $T$ be a self-adjoint operator on a finite-dimensional inner product space $V$. Then

(a) Every eigenvalue of $T$ is real.

(b) Suppose that $V$ is a real inner product space. Then the characteristic polynomial of $T$ splits.

Proof. (a) Let $\lambda$ be an eigenvalue of $T$, so $T(x) = \lambda x$ for some $x \neq 0$. Since $T$ is self-adjoint, it is normal, and by Theorem 6.15(c),
$$\lambda x = T(x) = T^*(x) = \overline{\lambda}x,$$
so $(\lambda - \overline{\lambda})x = 0$. But $x \neq 0$, so $\lambda - \overline{\lambda} = 0$, and we have $\lambda = \overline{\lambda}$: $\lambda$ is real.

(b) Let $\dim(V) = n$ and let $\beta$ be an orthonormal basis for $V$. Set $A = [T]_\beta$. Then
$$A^* = [T]_\beta^* = [T^*]_\beta = [T]_\beta = A,$$
so $A$ is self-adjoint. Define $T_A : \mathbb{C}^n \to \mathbb{C}^n$ by $T_A(x) = Ax$. Notice that $[T_A]_\gamma = A$, where $\gamma$ is the standard ordered basis, which is orthonormal. So $T_A$ is self-adjoint, and by (a) its eigenvalues are real. We know that over $\mathbb{C}$ the characteristic polynomial of $T_A$ factors into linear factors $t - \lambda$, and since each $\lambda$ is in $\mathbb{R}$, we know it also factors over $\mathbb{R}$. But $T_A$, $A$, and $T$ all have the same characteristic polynomial.

Theorem 6.17. Let $T$ be a linear operator on a finite-dimensional real inner product space $V$. Then $T$ is self-adjoint if and only if there exists an orthonormal basis $\beta$ for $V$ consisting of eigenvectors of $T$.

Proof. ($\Rightarrow$) Assume $T$ is self-adjoint. By the lemma, the characteristic polynomial of $T$ splits. Now by Schur's Theorem, there exists an orthonormal basis $\beta$ for $V$ such that $A = [T]_\beta$ is upper triangular. But
$$A^* = [T]_\beta^* = [T^*]_\beta = [T]_\beta = A.$$
So $A$ and $A^*$ are both upper triangular, thus diagonal, and we see that $\beta$ is a set of eigenvectors of $T$.

($\Leftarrow$) Assume there is an orthonormal basis $\beta$ of $V$ of eigenvectors of $T$.
We know $D = [T]_\beta$ is diagonal with the eigenvalues on the diagonal. $D^*$ is diagonal and equal to $D$, since $D$ is real. But then $[T]_\beta = D = D^* = [T]_\beta^* = [T^*]_\beta$. So $T = T^*$.

Fall: The following was covered but will not be included on the final.

Section 6.5:

Definition. Let $T$ be a linear operator on a finite-dimensional inner product space $V$ (over $F$). If $\|T(x)\| = \|x\|$ for all $x \in V$, we call $T$ a unitary operator if $F = \mathbb{C}$ and an orthogonal operator if $F = \mathbb{R}$.

Theorem 6.18. Let $T$ be a linear operator on a finite-dimensional inner product space $V$. Then the following statements are equivalent.

1. $TT^* = T^*T = I$.
2. $\langle T(x), T(y) \rangle = \langle x, y \rangle$ for all $x, y \in V$.
3. If $\beta$ is an orthonormal basis for $V$, then $T(\beta)$ is an orthonormal basis for $V$.
4. There exists an orthonormal basis $\beta$ for $V$ such that $T(\beta)$ is an orthonormal basis for $V$.
5. $\|T(x)\| = \|x\|$ for all $x \in V$.

Proof. (1) $\Rightarrow$ (2): $\langle T(x), T(y) \rangle = \langle x, T^*T(y) \rangle = \langle x, y \rangle$.

(2) $\Rightarrow$ (3): Let $v_i, v_j \in \beta$ with $i \neq j$. Then $0 = \langle v_i, v_j \rangle = \langle T(v_i), T(v_j) \rangle$, so $T(\beta)$ is orthogonal. By Corollary 2 to Theorem 6.3, any orthogonal subset of non-zero vectors is linearly independent, and since $T(\beta)$ has $n$ vectors, it must be a basis of $V$. Also, $1 = \|v_i\|^2 = \langle v_i, v_i \rangle = \langle T(v_i), T(v_i) \rangle$. So $T(\beta)$ is an orthonormal basis of $V$.

(3) $\Rightarrow$ (4): By Gram–Schmidt, there is an orthonormal basis $\beta$ for $V$. By (3), $T(\beta)$ is orthonormal.

(4) $\Rightarrow$ (5): Let $\beta = \{v_1, v_2, \ldots, v_n\}$ be an orthonormal basis for $V$ such that $T(\beta)$ is orthonormal. Let $x = a_1v_1 + a_2v_2 + \cdots + a_nv_n$. Then
$$\|x\|^2 = |a_1|^2\langle v_1, v_1 \rangle + |a_2|^2\langle v_2, v_2 \rangle + \cdots + |a_n|^2\langle v_n, v_n \rangle = |a_1|^2 + |a_2|^2 + \cdots + |a_n|^2,$$
and
$$\|T(x)\|^2 = |a_1|^2\langle T(v_1), T(v_1) \rangle + |a_2|^2\langle T(v_2), T(v_2) \rangle + \cdots + |a_n|^2\langle T(v_n), T(v_n) \rangle = |a_1|^2 + |a_2|^2 + \cdots + |a_n|^2.$$
Therefore $\|T(x)\| = \|x\|$.

(5) $\Rightarrow$ (1): We are given $\|T(x)\| = \|x\|$ for all $x$. We know $\langle x, x \rangle = 0$ if and only if $x = 0$, and $\langle T(x), T(x) \rangle = 0$ if and only if $T(x) = 0$. Therefore $T(x) = 0$ if and only if $x = 0$. So $N(T) = \{0\}$, and therefore $T$ is invertible. We have $\langle x, x \rangle = \langle T(x), T(x) \rangle = \langle x, T^*T(x) \rangle$. Therefore $T^*T(x) = x$ for all $x$, which implies that $T^*T = I$. But since $T$ is invertible, it must be that $T^* = T^{-1}$, and we have $TT^* = T^*T = I$.

Lemma. Let $U$ be a self-adjoint operator on a finite-dimensional inner product space $V$. If $\langle x, U(x) \rangle = 0$ for all $x \in V$, then $U = T_0$ (where $T_0(x) = 0$ for all $x$).

Proof. $U = U^*$, so $U$ is normal. If $F = \mathbb{C}$, by Theorem 6.16 there is an orthonormal basis $\beta$ of eigenvectors of $U$. If $F = \mathbb{R}$, by Theorem 6.17 there is an orthonormal basis $\beta$ of eigenvectors of $U$. Let $x \in \beta$. Then $U(x) = \lambda x$ for some $\lambda$, and
$$0 = \langle x, U(x) \rangle = \langle x, \lambda x \rangle = \overline{\lambda}\langle x, x \rangle.$$
Therefore $\overline{\lambda} = 0$, so $\lambda = 0$ and $U(x) = 0$ for every $x \in \beta$; hence $U = T_0$.

Corollary 1. Let $T$ be a linear operator on a finite-dimensional real inner product space $V$.
Then $V$ has an orthonormal basis of eigenvectors of $T$ with corresponding eigenvalues of absolute value $1$ if and only if $T$ is both self-adjoint and orthogonal.
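Before the proof, the equivalent conditions of Theorem 6.18 can be spot-checked on a concrete unitary matrix; the QR factorization of a full-rank complex matrix supplies one. A NumPy sketch (my own example):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Q, _ = np.linalg.qr(M)   # Q has orthonormal columns: Q* Q = I

# Condition (1): Q* Q = I (and Q Q* = I since Q is square).
assert np.allclose(Q.conj().T @ Q, np.eye(3))

x = rng.normal(size=3) + 1j * rng.normal(size=3)
y = rng.normal(size=3) + 1j * rng.normal(size=3)

# Conditions (2) and (5): Q preserves inner products and norms.
assert np.isclose(np.vdot(y, x), np.vdot(Q @ y, Q @ x))
assert np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))
```

Condition (3) is visible here too: the columns of $Q$ are the images of the standard basis, and they form an orthonormal set.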
Proof. ($\Rightarrow$) Suppose $V$ has an orthonormal basis $\{v_1, v_2, \ldots, v_n\}$ such that for all $i$, $T(v_i) = \lambda_iv_i$ and $|\lambda_i| = 1$. Then by Theorem 6.17, $T$ is self-adjoint. We'll show $TT^* = T^*T = I$; then by Theorem 6.18, $\|T(x)\| = \|x\|$ for all $x \in V$, so $T$ is orthogonal. Since $T$ is self-adjoint and the $\lambda_i$ are real,
$$TT^*(v_i) = T(T(v_i)) = T(\lambda_iv_i) = \lambda_iT(v_i) = \lambda_i^2v_i = v_i.$$
So $TT^* = I$. Similarly, $T^*T = I$.

($\Leftarrow$) Assume $T$ is self-adjoint. Then by Theorem 6.17, $V$ has an orthonormal basis $\{v_1, v_2, \ldots, v_n\}$ such that $T(v_i) = \lambda_iv_i$ for all $i$. If $T$ is also orthogonal, we have
$$|\lambda_i|\,\|v_i\| = \|\lambda_iv_i\| = \|T(v_i)\| = \|v_i\|,$$
so $|\lambda_i| = 1$.

Corollary 2. Let $T$ be a linear operator on a finite-dimensional complex inner product space $V$. Then $V$ has an orthonormal basis of eigenvectors of $T$ with corresponding eigenvalues of absolute value $1$ if and only if $T$ is unitary.

Definition. A square matrix $A$ is called an orthogonal matrix if $A^tA = AA^t = I$, and unitary if $A^*A = AA^* = I$. We say $B$ is unitarily equivalent to $D$ if there exists a unitary matrix $Q$ such that $D = Q^*BQ$.

Theorem 6.19. Let $A$ be a complex $n \times n$ matrix. Then $A$ is normal if and only if $A$ is unitarily equivalent to a diagonal matrix.

Proof. ($\Rightarrow$) Assume $A$ is normal. There is an orthonormal basis $\beta = \{v_1, v_2, \ldots, v_n\}$ for $F^n$ consisting of eigenvectors of $A$, by Theorem 6.16. So $A$ is similar to a diagonal matrix $D$ by Theorem 5.1, where the matrix $S$ with column $i$ equal to $v_i$ is the invertible matrix of similarity: $S^{-1}AS = D$. Since its columns form an orthonormal set, $S$ is unitary.

($\Leftarrow$) Suppose $A = PDP^*$, where $P$ is unitary and $D$ is diagonal. Then $A^* = PD^*P^*$, so
$$AA^* = (PDP^*)(PD^*P^*) = PDD^*P^* \qquad \text{and} \qquad A^*A = PD^*DP^*.$$
But $D^*D = DD^*$, so $AA^* = A^*A$.

Theorem 6.20. Let $A$ be a real $n \times n$ matrix. Then $A$ is symmetric if and only if $A$ is orthogonally equivalent to a real diagonal matrix.

Theorem 6.21. Let $A \in M_n(F)$ be a matrix whose characteristic polynomial splits over $F$.

1. If $F = \mathbb{C}$, then $A$ is unitarily equivalent to a complex upper triangular matrix.
2. If $F = \mathbb{R}$, then $A$ is orthogonally equivalent to a real upper triangular matrix.

Proof.
(1) By Schur's Theorem there is an orthonormal basis $\beta = \{v_1, v_2, \ldots, v_n\}$ such that $[L_A]_\beta = N$, where $N$ is a complex upper triangular matrix. Let $\beta'$ be the standard ordered basis; then $[L_A]_{\beta'} = A$. Let $Q = [I]_{\beta}^{\beta'}$, the change of coordinates matrix whose columns are the vectors of $\beta$. Then $N = Q^{-1}AQ$. We know that $Q$ is unitary since its columns are an orthonormal set of vectors, and so $Q^*Q = I$.

Section 6.6

Definition. If $V = W_1 \oplus W_2$, then a linear operator $T$ on $V$ is the projection on $W_1$ along $W_2$ if, whenever $x = x_1 + x_2$ with $x_1 \in W_1$ and $x_2 \in W_2$, we have $T(x) = x_1$. In this case, $R(T) = W_1 = \{x \in V : T(x) = x\}$ and $N(T) = W_2$. We refer to $T$ as a projection.

Let $V$ be an inner product space, and let $T : V \to V$ be a projection. We say that $T$ is an orthogonal projection if $R(T)^\perp = N(T)$ and $N(T)^\perp = R(T)$.
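For a subspace $W$ with orthonormal basis given by the columns of a matrix $B$, the orthogonal projection onto $W$ has matrix $P = BB^*$. A NumPy sketch (the subspace is my own example) checking that $P$ is idempotent and self-adjoint:

```python
import numpy as np

# Orthonormal basis for a 2-dimensional subspace W of C^3 (columns of B).
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]], dtype=complex)
P = B @ B.conj().T   # matrix of the orthogonal projection onto W

# An orthogonal projection satisfies P^2 = P = P*.
assert np.allclose(P @ P, P)
assert np.allclose(P, P.conj().T)
```

For any $y$, the vector $Py$ lies in $W$ and $y - Py$ lies in $W^\perp$, recovering the decomposition of Theorem 6.6.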
Theorem 6.24. Let $V$ be an inner product space, and let $T$ be a linear operator on $V$. Then $T$ is an orthogonal projection if and only if $T$ has an adjoint $T^*$ and $T^2 = T = T^*$.

Compare Theorem 6.24 to Theorem 6.9, where $V$ is finite-dimensional; this is the version that does not assume finite dimension.

Theorem (The Spectral Theorem). Suppose that $T$ is a linear operator on a finite-dimensional inner product space $V$ over $F$ with the distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_k$. Assume that $T$ is normal if $F = \mathbb{C}$ and that $T$ is self-adjoint if $F = \mathbb{R}$. For each $i$ ($1 \leq i \leq k$), let $W_i$ be the eigenspace of $T$ corresponding to the eigenvalue $\lambda_i$, and let $T_i$ be the orthogonal projection of $V$ on $W_i$. Then the following statements are true.

(a) $V = W_1 \oplus W_2 \oplus \cdots \oplus W_k$.

(b) If $W_i'$ denotes the direct sum of the subspaces $W_j$ for $j \neq i$, then $W_i^\perp = W_i'$.

(c) $T_iT_j = \delta_{i,j}T_i$ for $1 \leq i, j \leq k$.

(d) $I = T_1 + T_2 + \cdots + T_k$.

(e) $T = \lambda_1T_1 + \lambda_2T_2 + \cdots + \lambda_kT_k$.

Proof. Assume $F = \mathbb{C}$.

(a) $T$ is normal. By Theorem 6.16 there exists an orthonormal basis of eigenvectors of $T$. By Theorem 5.10, $V = W_1 \oplus W_2 \oplus \cdots \oplus W_k$.

(b) Let $x \in W_i$ and $y \in W_j$, $i \neq j$. Then $\langle x, y \rangle = 0$, and so $W_i' \subseteq W_i^\perp$. But from (a),
$$\dim(W_i') = \sum_{j \neq i} \dim(W_j) = \dim(V) - \dim(W_i).$$
By Theorem 6.7(c), we know also that $\dim(W_i^\perp) = \dim(V) - \dim(W_i)$. Hence $W_i' = W_i^\perp$.

(c) $T_i$ is the orthogonal projection of $V$ on $W_i$. Let $x \in V$ and write $x = w_1 + w_2 + \cdots + w_k$ with $w_i \in W_i$. Then $T_i(x) = w_i$, so $T_i(T_i(x)) = T_i(w_i) = w_i = T_i(x)$, and for $j \neq i$, $T_j(T_i(x)) = T_j(w_i) = 0$, since $w_i \in W_i \subseteq W_j' = W_j^\perp = N(T_j)$.

(d) $(T_1 + T_2 + \cdots + T_k)(x) = T_1(x) + T_2(x) + \cdots + T_k(x) = w_1 + w_2 + \cdots + w_k = x$.

(e) For all $i$, $T_i(x) = w_i \in W_i$, so $T(T_i(x)) = \lambda_iT_i(x)$. Hence
$$T(x) = T(w_1) + T(w_2) + \cdots + T(w_k) = \lambda_1w_1 + \lambda_2w_2 + \cdots + \lambda_kw_k = (\lambda_1T_1 + \lambda_2T_2 + \cdots + \lambda_kT_k)(x).$$
Definition. The set $\{\lambda_1, \lambda_2, \ldots, \lambda_k\}$ of eigenvalues of $T$ is called the spectrum of $T$, the sum $I = T_1 + T_2 + \cdots + T_k$ is called the resolution of the identity operator induced by $T$, and the sum $T = \lambda_1T_1 + \lambda_2T_2 + \cdots + \lambda_kT_k$ is called the spectral decomposition of $T$.

Corollary 1. If $F = \mathbb{C}$, then $T$ is normal if and only if $T^* = g(T)$ for some polynomial $g$.

Corollary 2. If $F = \mathbb{C}$, then $T$ is unitary if and only if $T$ is normal and $|\lambda| = 1$ for every eigenvalue $\lambda$ of $T$.

Corollary 3. If $F = \mathbb{C}$ and $T$ is normal, then $T$ is self-adjoint if and only if every eigenvalue of $T$ is real.

Corollary 4. Let $T$ be as in the spectral theorem with spectral decomposition $T = \lambda_1T_1 + \lambda_2T_2 + \cdots + \lambda_kT_k$. Then each $T_j$ is a polynomial in $T$.
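The spectral decomposition can be computed concretely for a small self-adjoint matrix. A NumPy sketch (my own example matrix, using `numpy.linalg.eigh` for the orthonormal eigenbasis) verifying parts (c)–(e) of the Spectral Theorem:

```python
import numpy as np

# A self-adjoint matrix with distinct eigenvalues 1 and 3; the eigenspaces
# are spanned by (1, 1)/sqrt(2) and (1, -1)/sqrt(2).
T = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])
evals, V = np.linalg.eigh(T)   # orthonormal eigenbasis in the columns of V

# Orthogonal projections T_i onto the eigenspaces: T_i = v_i v_i*.
projs = [np.outer(V[:, i], V[:, i].conj()) for i in range(2)]

# (c) T_i T_j = delta_ij T_i, (d) I = sum T_i, (e) T = sum lambda_i T_i.
assert np.allclose(projs[0] @ projs[1], 0.0)
assert np.allclose(projs[0] + projs[1], np.eye(2))
assert np.allclose(evals[0] * projs[0] + evals[1] * projs[1], T)
```

The last line is exactly the spectral decomposition $T = \lambda_1 T_1 + \lambda_2 T_2$; the middle line is the resolution of the identity.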
Orthogonal Projection Given any nonzero vector v, it is possible to decompose an arbitrary vector u into a component that points in the direction of v and one that points in a direction orthogonal to v
Math 333 - Practice Exam 2 with Some Solutions
Math 333 - Practice Exam 2 with Some Solutions (Note that the exam will NOT be this long) Definitions (0 points) Let T : V W be a transformation Let A be a square matrix (a) Define T is linear (b) Define
4: EIGENVALUES, EIGENVECTORS, DIAGONALIZATION
4: EIGENVALUES, EIGENVECTORS, DIAGONALIZATION STEVEN HEILMAN Contents 1. Review 1 2. Diagonal Matrices 1 3. Eigenvectors and Eigenvalues 2 4. Characteristic Polynomial 4 5. Diagonalizability 6 6. Appendix:
MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets.
MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets. Norm The notion of norm generalizes the notion of length of a vector in R n. Definition. Let V be a vector space. A function α
Matrix Representations of Linear Transformations and Changes of Coordinates
Matrix Representations of Linear Transformations and Changes of Coordinates 01 Subspaces and Bases 011 Definitions A subspace V of R n is a subset of R n that contains the zero element and is closed under
[1] Diagonal factorization
8.03 LA.6: Diagonalization and Orthogonal Matrices [ Diagonal factorization [2 Solving systems of first order differential equations [3 Symmetric and Orthonormal Matrices [ Diagonal factorization Recall:
Mathematics Course 111: Algebra I Part IV: Vector Spaces
Mathematics Course 111: Algebra I Part IV: Vector Spaces D. R. Wilkins Academic Year 1996-7 9 Vector Spaces A vector space over some field K is an algebraic structure consisting of a set V on which are
13 MATH FACTS 101. 2 a = 1. 7. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions.
3 MATH FACTS 0 3 MATH FACTS 3. Vectors 3.. Definition We use the overhead arrow to denote a column vector, i.e., a linear segment with a direction. For example, in three-space, we write a vector in terms
Notes on Orthogonal and Symmetric Matrices MENU, Winter 2013
Notes on Orthogonal and Symmetric Matrices MENU, Winter 201 These notes summarize the main properties and uses of orthogonal and symmetric matrices. We covered quite a bit of material regarding these topics,
MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix.
MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix. Nullspace Let A = (a ij ) be an m n matrix. Definition. The nullspace of the matrix A, denoted N(A), is the set of all n-dimensional column
MATH 551 - APPLIED MATRIX THEORY
MATH 55 - APPLIED MATRIX THEORY FINAL TEST: SAMPLE with SOLUTIONS (25 points NAME: PROBLEM (3 points A web of 5 pages is described by a directed graph whose matrix is given by A Do the following ( points
4 MT210 Notebook 4 3. 4.1 Eigenvalues and Eigenvectors... 3. 4.1.1 Definitions; Graphical Illustrations... 3
MT Notebook Fall / prepared by Professor Jenny Baglivo c Copyright 9 by Jenny A. Baglivo. All Rights Reserved. Contents MT Notebook. Eigenvalues and Eigenvectors................................... Definitions;
Linear Algebra Notes for Marsden and Tromba Vector Calculus
Linear Algebra Notes for Marsden and Tromba Vector Calculus n-dimensional Euclidean Space and Matrices Definition of n space As was learned in Math b, a point in Euclidean three space can be thought of
LINEAR ALGEBRA. September 23, 2010
LINEAR ALGEBRA September 3, 00 Contents 0. LU-decomposition.................................... 0. Inverses and Transposes................................. 0.3 Column Spaces and NullSpaces.............................
LINEAR ALGEBRA W W L CHEN
LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008 This chapter is available free to all individuals, on understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,
Finite Dimensional Hilbert Spaces and Linear Inverse Problems
Finite Dimensional Hilbert Spaces and Linear Inverse Problems ECE 174 Lecture Supplement Spring 2009 Ken Kreutz-Delgado Electrical and Computer Engineering Jacobs School of Engineering University of California,
by the matrix A results in a vector which is a reflection of the given
Eigenvalues & Eigenvectors Example Suppose Then So, geometrically, multiplying a vector in by the matrix A results in a vector which is a reflection of the given vector about the y-axis We observe that
Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University
Linear Algebra Done Wrong Sergei Treil Department of Mathematics, Brown University Copyright c Sergei Treil, 2004, 2009, 2011, 2014 Preface The title of the book sounds a bit mysterious. Why should anyone
Linear algebra and the geometry of quadratic equations. Similarity transformations and orthogonal matrices
MATH 30 Differential Equations Spring 006 Linear algebra and the geometry of quadratic equations Similarity transformations and orthogonal matrices First, some things to recall from linear algebra Two
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +
Introduction to Matrix Algebra
Psychology 7291: Multivariate Statistics (Carey) 8/27/98 Matrix Algebra - 1 Introduction to Matrix Algebra Definitions: A matrix is a collection of numbers ordered by rows and columns. It is customary
Factorization Theorems
Chapter 7 Factorization Theorems This chapter highlights a few of the many factorization theorems for matrices While some factorization results are relatively direct, others are iterative While some factorization
5. Orthogonal matrices
L Vandenberghe EE133A (Spring 2016) 5 Orthogonal matrices matrices with orthonormal columns orthogonal matrices tall matrices with orthonormal columns complex matrices with orthonormal columns 5-1 Orthonormal
3. Let A and B be two n n orthogonal matrices. Then prove that AB and BA are both orthogonal matrices. Prove a similar result for unitary matrices.
Exercise 1 1. Let A be an n n orthogonal matrix. Then prove that (a) the rows of A form an orthonormal basis of R n. (b) the columns of A form an orthonormal basis of R n. (c) for any two vectors x,y R
Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University
Linear Algebra Done Wrong Sergei Treil Department of Mathematics, Brown University Copyright c Sergei Treil, 2004, 2009, 2011, 2014 Preface The title of the book sounds a bit mysterious. Why should anyone
17. Inner product spaces Definition 17.1. Let V be a real vector space. An inner product on V is a function
17. Inner product spaces Definition 17.1. Let V be a real vector space. An inner product on V is a function, : V V R, which is symmetric, that is u, v = v, u. bilinear, that is linear (in both factors):
Notes on Symmetric Matrices
CPSC 536N: Randomized Algorithms 2011-12 Term 2 Notes on Symmetric Matrices Prof. Nick Harvey University of British Columbia 1 Symmetric Matrices We review some basic results concerning symmetric matrices.
Orthogonal Projections and Orthonormal Bases
CS 3, HANDOUT -A, 3 November 04 (adjusted on 7 November 04) Orthogonal Projections and Orthonormal Bases (continuation of Handout 07 of 6 September 04) Definition (Orthogonality, length, unit vectors).
1 Sets and Set Notation.
LINEAR ALGEBRA MATH 27.6 SPRING 23 (COHEN) LECTURE NOTES Sets and Set Notation. Definition (Naive Definition of a Set). A set is any collection of objects, called the elements of that set. We will most
Section 5.3. Section 5.3. u m ] l jj. = l jj u j + + l mj u m. v j = [ u 1 u j. l mj
Section 5. l j v j = [ u u j u m ] l jj = l jj u j + + l mj u m. l mj Section 5. 5.. Not orthogonal, the column vectors fail to be perpendicular to each other. 5..2 his matrix is orthogonal. Check that
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a
160 CHAPTER 4. VECTOR SPACES
160 CHAPTER 4. VECTOR SPACES 4. Rank and Nullity In this section, we look at relationships between the row space, column space, null space of a matrix and its transpose. We will derive fundamental results
Recall that two vectors in are perpendicular or orthogonal provided that their dot
Orthogonal Complements and Projections Recall that two vectors in are perpendicular or orthogonal provided that their dot product vanishes That is, if and only if Example 1 The vectors in are orthogonal
MA106 Linear Algebra lecture notes
MA106 Linear Algebra lecture notes Lecturers: Martin Bright and Daan Krammer Warwick, January 2011 Contents 1 Number systems and fields 3 1.1 Axioms for number systems......................... 3 2 Vector
1 Norms and Vector Spaces
008.10.07.01 1 Norms and Vector Spaces Suppose we have a complex vector space V. A norm is a function f : V R which satisfies (i) f(x) 0 for all x V (ii) f(x + y) f(x) + f(y) for all x,y V (iii) f(λx)
University of Lille I PC first year list of exercises n 7. Review
University of Lille I PC first year list of exercises n 7 Review Exercise Solve the following systems in 4 different ways (by substitution, by the Gauss method, by inverting the matrix of coefficients
15.062 Data Mining: Algorithms and Applications Matrix Math Review
.6 Data Mining: Algorithms and Applications Matrix Math Review The purpose of this document is to give a brief review of selected linear algebra concepts that will be useful for the course and to develop
Linear Algebra: Determinants, Inverses, Rank
D Linear Algebra: Determinants, Inverses, Rank D 1 Appendix D: LINEAR ALGEBRA: DETERMINANTS, INVERSES, RANK TABLE OF CONTENTS Page D.1. Introduction D 3 D.2. Determinants D 3 D.2.1. Some Properties of
MAT 242 Test 2 SOLUTIONS, FORM T
MAT 242 Test 2 SOLUTIONS, FORM T 5 3 5 3 3 3 3. Let v =, v 5 2 =, v 3 =, and v 5 4 =. 3 3 7 3 a. [ points] The set { v, v 2, v 3, v 4 } is linearly dependent. Find a nontrivial linear combination of these
The Determinant: a Means to Calculate Volume
The Determinant: a Means to Calculate Volume Bo Peng August 20, 2007 Abstract This paper gives a definition of the determinant and lists many of its well-known properties Volumes of parallelepipeds are
18.06 Problem Set 4 Solution Due Wednesday, 11 March 2009 at 4 pm in 2-106. Total: 175 points.
806 Problem Set 4 Solution Due Wednesday, March 2009 at 4 pm in 2-06 Total: 75 points Problem : A is an m n matrix of rank r Suppose there are right-hand-sides b for which A x = b has no solution (a) What
Systems of Linear Equations
Systems of Linear Equations Beifang Chen Systems of linear equations Linear systems A linear equation in variables x, x,, x n is an equation of the form a x + a x + + a n x n = b, where a, a,, a n and
Numerical Analysis Lecture Notes
Numerical Analysis Lecture Notes Peter J. Olver 6. Eigenvalues and Singular Values In this section, we collect together the basic facts about eigenvalues and eigenvectors. From a geometrical viewpoint,
The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression
The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression The SVD is the most generally applicable of the orthogonal-diagonal-orthogonal type matrix decompositions Every
ISOMETRIES OF R n KEITH CONRAD
ISOMETRIES OF R n KEITH CONRAD 1. Introduction An isometry of R n is a function h: R n R n that preserves the distance between vectors: h(v) h(w) = v w for all v and w in R n, where (x 1,..., x n ) = x
Notes on Linear Algebra. Peter J. Cameron
Notes on Linear Algebra Peter J. Cameron ii Preface Linear algebra has two aspects. Abstractly, it is the study of vector spaces over fields, and their linear maps and bilinear forms. Concretely, it is
More than you wanted to know about quadratic forms
CALIFORNIA INSTITUTE OF TECHNOLOGY Division of the Humanities and Social Sciences More than you wanted to know about quadratic forms KC Border Contents 1 Quadratic forms 1 1.1 Quadratic forms on the unit
Review Jeopardy. Blue vs. Orange. Review Jeopardy
Review Jeopardy Blue vs. Orange Review Jeopardy Jeopardy Round Lectures 0-3 Jeopardy Round $200 How could I measure how far apart (i.e. how different) two observations, y 1 and y 2, are from each other?
Linear Maps. Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007)
MAT067 University of California, Davis Winter 2007 Linear Maps Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007) As we have discussed in the lecture on What is Linear Algebra? one of
Similar matrices and Jordan form
Similar matrices and Jordan form We ve nearly covered the entire heart of linear algebra once we ve finished singular value decompositions we ll have seen all the most central topics. A T A is positive
Chapter 17. Orthogonal Matrices and Symmetries of Space
Chapter 17. Orthogonal Matrices and Symmetries of Space Take a random matrix, say 1 3 A = 4 5 6, 7 8 9 and compare the lengths of e 1 and Ae 1. The vector e 1 has length 1, while Ae 1 = (1, 4, 7) has length
Solving Linear Systems, Continued and The Inverse of a Matrix
, Continued and The of a Matrix Calculus III Summer 2013, Session II Monday, July 15, 2013 Agenda 1. The rank of a matrix 2. The inverse of a square matrix Gaussian Gaussian solves a linear system by reducing
16.3 Fredholm Operators
Lectures 16 and 17 16.3 Fredholm Operators A nice way to think about compact operators is to show that set of compact operators is the closure of the set of finite rank operator in operator norm. In this
Solutions to Math 51 First Exam January 29, 2015
Solutions to Math 5 First Exam January 29, 25. ( points) (a) Complete the following sentence: A set of vectors {v,..., v k } is defined to be linearly dependent if (2 points) there exist c,... c k R, not
1 2 3 1 1 2 x = + x 2 + x 4 1 0 1
(d) If the vector b is the sum of the four columns of A, write down the complete solution to Ax = b. 1 2 3 1 1 2 x = + x 2 + x 4 1 0 0 1 0 1 2. (11 points) This problem finds the curve y = C + D 2 t which
Notes on Determinant
ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without
DATA ANALYSIS II. Matrix Algorithms
DATA ANALYSIS II Matrix Algorithms Similarity Matrix Given a dataset D = {x i }, i=1,..,n consisting of n points in R d, let A denote the n n symmetric similarity matrix between the points, given as where
SF2940: Probability theory Lecture 8: Multivariate Normal Distribution
SF2940: Probability theory Lecture 8: Multivariate Normal Distribution Timo Koski 24.09.2015 Timo Koski Matematisk statistik 24.09.2015 1 / 1 Learning outcomes Random vectors, mean vector, covariance matrix,
Solution to Homework 2
Solution to Homework 2 Olena Bormashenko September 23, 2011 Section 1.4: 1(a)(b)(i)(k), 4, 5, 14; Section 1.5: 1(a)(b)(c)(d)(e)(n), 2(a)(c), 13, 16, 17, 18, 27 Section 1.4 1. Compute the following, if
Math 215 HW #6 Solutions
Math 5 HW #6 Solutions Problem 34 Show that x y is orthogonal to x + y if and only if x = y Proof First, suppose x y is orthogonal to x + y Then since x, y = y, x In other words, = x y, x + y = (x y) T
1 0 5 3 3 A = 0 0 0 1 3 0 0 0 0 0 0 0 0 0 0
Solutions: Assignment 4.. Find the redundant column vectors of the given matrix A by inspection. Then find a basis of the image of A and a basis of the kernel of A. 5 A The second and third columns are
CONTROLLABILITY. Chapter 2. 2.1 Reachable Set and Controllability. Suppose we have a linear system described by the state equation
Chapter 2 CONTROLLABILITY 2 Reachable Set and Controllability Suppose we have a linear system described by the state equation ẋ Ax + Bu (2) x() x Consider the following problem For a given vector x in
Problem Set 5 Due: In class Thursday, Oct. 18 Late papers will be accepted until 1:00 PM Friday.
Math 312, Fall 2012 Jerry L. Kazdan Problem Set 5 Due: In class Thursday, Oct. 18 Late papers will be accepted until 1:00 PM Friday. In addition to the problems below, you should also know how to solve
Methods for Finding Bases
Methods for Finding Bases Bases for the subspaces of a matrix Row-reduction methods can be used to find bases. Let us now look at an example illustrating how to obtain bases for the row space, null space,
Linear Algebra I. Ronald van Luijk, 2012
Linear Algebra I Ronald van Luijk, 2012 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents 1. Vector spaces 3 1.1. Examples 3 1.2. Fields 4 1.3. The field of complex numbers. 6 1.4.
Lecture 18 - Clifford Algebras and Spin groups
Lecture 18 - Clifford Algebras and Spin groups April 5, 2013 Reference: Lawson and Michelsohn, Spin Geometry. 1 Universal Property If V is a vector space over R or C, let q be any quadratic form, meaning
Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain
Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain 1. Orthogonal matrices and orthonormal sets An n n real-valued matrix A is said to be an orthogonal
Math 312 Homework 1 Solutions
Math 31 Homework 1 Solutions Last modified: July 15, 01 This homework is due on Thursday, July 1th, 01 at 1:10pm Please turn it in during class, or in my mailbox in the main math office (next to 4W1) Please
LEARNING OBJECTIVES FOR THIS CHAPTER
CHAPTER 2 American mathematician Paul Halmos (1916 2006), who in 1942 published the first modern linear algebra book. The title of Halmos s book was the same as the title of this chapter. Finite-Dimensional
NUMERICAL METHODS FOR LARGE EIGENVALUE PROBLEMS
NUMERICAL METHODS FOR LARGE EIGENVALUE PROBLEMS Second edition Yousef Saad Copyright c 2011 by the Society for Industrial and Applied Mathematics Contents Preface to Classics Edition Preface xiii xv 1
MAT 242 Test 3 SOLUTIONS, FORM A
MAT Test SOLUTIONS, FORM A. Let v =, v =, and v =. Note that B = { v, v, v } is an orthogonal set. Also, let W be the subspace spanned by { v, v, v }. A = 8 a. [5 points] Find the orthogonal projection

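The Gram-Schmidt procedure of Theorem 6.4 can be sketched in code. Below is a minimal illustration over R^n with the standard dot product; the function names (`inner`, `gram_schmidt`) are chosen for this sketch and are not from the notes. Each step computes v_k = w_k minus the sum over i < k of ( w_k, v_i / v_i ^2 ) v_i, exactly as in the theorem.

```python
def inner(x, y):
    """Standard inner product on R^n (real dot product)."""
    return sum(a * b for a, b in zip(x, y))

def gram_schmidt(S):
    """Given a linearly independent list S = [w_1, ..., w_n] of vectors
    in R^n, return an orthogonal list [v_1, ..., v_n] with the same span,
    following Theorem 6.4:
        v_k = w_k - sum_{i=1}^{k-1} (<w_k, v_i> / ||v_i||^2) v_i
    """
    V = []
    for w in S:
        v = list(w)
        for u in V:
            # coefficient <w_k, v_i> / ||v_i||^2 from the theorem
            c = inner(w, u) / inner(u, u)
            v = [vj - c * uj for vj, uj in zip(v, u)]
        V.append(v)
    return V

# Example: orthogonalize two vectors in R^3.
basis = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
```

Here v_1 = w_1 = (1, 1, 0) and v_2 = (1/2, -1/2, 1), and one can check directly that v_1, v_2 = 0, as the theorem guarantees. Normalizing each v_k by its norm would then give an orthonormal set, as in Corollary 1.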