Ex parte Perronnin et al., Appeal 2016-006538 (P.T.A.B. Feb. 23, 2017), Application 12/109,496

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
Commissioner for Patents, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 12/109,496   FILING DATE: 04/25/2008   FIRST NAMED INVENTOR: Florent Perronnin
ATTORNEY DOCKET NO.: 20070981USNP-XER1867US01   CONFIRMATION NO.: 3575
CORRESPONDENT: FAY SHARPE / XEROX - ROCHESTER, 1228 Euclid Avenue, 5th Floor, The Halle Building, Cleveland, OH 44115
EXAMINER: ROSTAMI, MOHAMMAD S   ART UNIT: 2154   MAIL DATE: 02/24/2017   DELIVERY MODE: PAPER

BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte FLORENT PERRONNIN and GUILLAUME BOUCHARD

Appeal 2016-006538
Application 12/109,496
Technology Center 2100

Before ERIC S. FRAHM, LARRY J. HUME, and CATHERINE SHIANG, Administrative Patent Judges.

FRAHM, Administrative Patent Judge.

DECISION ON APPEAL

STATEMENT OF THE CASE

Introduction

Appellants appeal under 35 U.S.C. § 134(a) from a Final Rejection of claims 1-21 and 23, all the claims pending in the application. We have jurisdiction under 35 U.S.C. § 6(b). We affirm-in-part.

Appellants' Disclosed Invention

Appellants disclose a system and method of clustering for use in data storage and information management, whereby the clustering of objects uses a nonnegative sparse similarity matrix and various combinations of mathematical calculations to factorize the matrices and allocate objects based on factor matrices generated by factorization of the nonnegative sparse similarity matrix (Spec. ¶¶ 1-3; Title; Abs.; The Figure).

Exemplary Claims

An understanding of the invention can be derived from a reading of exemplary claims 1, 3, and 8, which are reproduced below with emphases added to contested limitations:

1. A clustering method comprising:
    constructing a nonnegative sparse similarity matrix for a set of objects, the constructing including one of:
        constructing an ε graph defining the nonnegative sparse similarity matrix, the ε graph including nodes for object pairs conditional upon a similarity measure of the object pair exceeding a threshold ε,
        constructing a K-nearest neighbors (K-NN) directed graph defining the nonnegative sparse similarity matrix, the K-NN directed graph including a node for first and second objects of the set of objects conditional upon the second object being one of K nearest neighbors of the first object, and
        constructing an adjacency matrix having matrix elements corresponding to object pairs, the matrix element values being nonnegative values indicative of similarity of the corresponding object pairs, and deriving a commute time matrix from the adjacency matrix;
    performing nonnegative factorization of the nonnegative sparse similarity matrix; and
    allocating objects of the set of objects to clusters based on factor matrices generated by the nonnegative factorization of the nonnegative sparse similarity matrix;
    wherein the constructing, performing, and allocating are performed by a processor executing software or firmware.

3. The clustering method as set forth in claim 1, wherein the constructing comprises: constructing an ε graph defining the nonnegative sparse similarity matrix, the ε graph including nodes for object pairs conditional upon a similarity measure of the object pair exceeding a threshold ε.

8. The clustering method as set forth in claim 1, wherein the performing comprises: optimizing factor matrices A and B to minimize a Frobenius norm between the nonnegative sparse similarity matrix and a matrix product AxB.
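As technical background only (not part of the Board's decision), the ε-graph and K-NN constructions recited in claim 1 (the former also recited in claim 3) can be sketched in a few lines of NumPy. The Gaussian similarity measure and all function names below are illustrative assumptions rather than Appellants' disclosed implementation, and the commute-time construction is omitted.

import numpy as np

def pairwise_similarity(X):
    # Gaussian similarity between all object pairs; an illustrative choice
    # of similarity measure, since the claims do not mandate a specific one.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    S = np.exp(-d2)
    np.fill_diagonal(S, 0.0)  # no self-similarity
    return S

def epsilon_graph(X, eps):
    # Epsilon graph: an object pair enters the graph only when its similarity
    # exceeds the threshold eps, so the resulting similarity matrix is both
    # nonnegative and sparse.
    S = pairwise_similarity(X)
    return np.where(S > eps, S, 0.0)

def knn_graph(X, k):
    # Directed K-NN graph: entry (i, j) is retained only when object j is
    # one of the k most similar neighbors of object i.
    S = pairwise_similarity(X)
    out = np.zeros_like(S)
    for i in range(S.shape[0]):
        nearest = np.argsort(-S[i])[:k]
        out[i, nearest] = S[i, nearest]
    return out

Either construction yields the nonnegative sparse similarity matrix on which the factorization step of claim 1 operates.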
The Examiner's Rejections

(1) The Examiner rejected claims 1, 3, 4, and 7 under 35 U.S.C. § 112(d) as failing to further limit the subject matter claimed, i.e., the subject matter recited in independent claim 1. Final Act. 4-6.

(2) The Examiner rejected claims 1-5, 8, 11, 16-18, 20, 21, and 23-29 under 35 U.S.C. § 103(a) as being unpatentable over the combination of Geshwind (US 2006/0155751 A1; published Jul. 13, 2006) and Koren (US 2009/0083258 A1; published Mar. 26, 2009). Final Act. 6-19.

(3) The Examiner rejected claim 7 under 35 U.S.C. § 103(a) as being unpatentable over the combination of Geshwind, Koren, and Rifkin (US 2006/0235812 A1; published Oct. 19, 2006). Final Act. 19-21.

(4) The Examiner rejected claims 9, 10, and 12-14 under 35 U.S.C. § 103(a) as being unpatentable over the combination of Geshwind, Koren, and Tamayo (US 2005/0246354 A1; published Nov. 3, 2005). Final Act. 21-24.

Issues on Appeal¹

Based on Appellants' arguments in the Appeal Brief (App. Br. 5-23) and the Reply Brief (Reply Br. 2-14), in light of the Examiner's response to Appellants' arguments in the Appeal Brief (Ans. 4-14), the following four principal issues are presented on appeal:

(1) Did the Examiner err in rejecting claims 1, 3, 4, and 7 under 35 U.S.C. § 112(d) as failing to further limit the subject matter claimed, i.e., the subject matter recited in independent claim 1?

(2) Did the Examiner err in rejecting claims 1-5, 7-12, 15-18, 20, 21, and 23-29 because the combination (i) is improper, and/or (ii) fails to teach or suggest the limitations at issue in representative independent claim 1?

(3) Did the Examiner err in rejecting claim 8 because the Official Notice that a Frobenius norm is equivalent to, or a synonym of, a Hilbert-Schmidt norm was improperly taken or fails to support the proposition relied upon as teaching?

(4) Did the Examiner err in rejecting claims 13 and 14 for obviousness over Geshwind, Koren, and Tamayo because none of the applied references, taken individually or in combination, discloses initializing as recited?

¹ Independent claims 1, 11, and 23 (and claims 2-5, 8, 16-18, 20, 21, and 24-29, which depend respectively therefrom) contain the same disputed limitations pertaining to clustering by constructing a nonnegative sparse similarity matrix. Appellants do not present separate arguments for claims 2-5, 16-18, 20, 21, and 24-29, and rely on the arguments presented as to claims 1, 11, and 23 (see App. Br. 8-19). We select claim 1 as representative of claims 1-5, 11, 16-18, 20, 21, and 23-29. Claim 8, separately argued, is discussed separately. Because (i) claims 7, 9, 10, and 12 each ultimately depend from representative independent claim 1, and (ii) Appellants rely on the arguments presented for claim 1 as to the patentability of claims 7, 9, 10, and 12, the outcome for the obviousness rejections of claims 7, 9, 10, and 12 will stand or fall with the outcome as to representative claim 1.
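As further technical background bearing on issues (2) and (3), the "performing" and "allocating" steps of claim 1, together with the Frobenius-norm objective recited in claim 8, can be sketched as follows. The Lee-Seung multiplicative-update rule shown is one standard way to minimize ‖S − AB‖F subject to nonnegativity; it is an illustrative assumption, not necessarily Appellants' disclosed method, and allocate_to_clusters is a hypothetical helper.

import numpy as np

def nonnegative_factorization(S, rank, iters=200, eps=1e-9):
    # Optimize nonnegative factor matrices A (n x rank) and B (rank x n) to
    # minimize the Frobenius norm of S - A @ B (cf. claim 8).
    n = S.shape[0]
    rng = np.random.default_rng(0)
    A = rng.random((n, rank))
    B = rng.random((rank, n))
    for _ in range(iters):
        # Multiplicative updates keep every entry nonnegative; eps guards
        # against division by zero.
        A *= (S @ B.T) / (A @ B @ B.T + eps)
        B *= (A.T @ S) / (A.T @ A @ B + eps)
    return A, B

def allocate_to_clusters(A):
    # One common heuristic for the allocating step of claim 1: assign each
    # object to the cluster with the largest weight in its row of A.
    return A.argmax(axis=1)

For example, allocate_to_clusters(nonnegative_factorization(S, rank=3)[0]) assigns each of the n objects to one of three clusters based on the factor matrix A.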
ANALYSIS

We have reviewed the Examiner's rejections (Final Act. 4-24) in light of Appellants' contentions in the Appeal Brief (App. Br. 5-23) and the Reply Brief (Reply Br. 2-14) that the Examiner has erred, as well as the Examiner's response to Appellants' arguments in the Appeal Brief (Ans. 3-14). We provide the following for emphasis with regard to each of the four issues before us on appeal.

Rejection of Claims 1, 3, 4, and 7 Under 35 U.S.C. § 112(d)

At the outset, we note our agreement with Appellants (App. Br. 8) that claim 1 is independent, and therefore is not properly rejected as failing to further limit itself. Accordingly, we do not sustain the Examiner's rejection of claim 1.

We turn next to whether claims 3, 4, and 7 further limit claim 1. It is axiomatic that a dependent claim cannot be broader than the claim from which it depends. See 35 U.S.C. § 112(d) ("[A] claim in dependent form shall [1] contain a reference to a claim previously set forth and then [2] specify a further limitation of the subject matter claimed."); see also Intamin Ltd. v. Magnetar Techs., Corp., 483 F.3d 1328, 1335 (Fed. Cir. 2007) ("An independent claim impliedly embraces more subject matter than its narrower dependent claim."); AK Steel Corp. v. Sollac & Ugine, 344 F.3d 1234, 1242 (Fed. Cir. 2003) ("Under the doctrine of claim differentiation, dependent claims are presumed to be of narrower scope than the independent claims from which they depend."); Pfizer, Inc. v. Ranbaxy Labs. Ltd., 457 F.3d 1284, 1292 (Fed. Cir. 2006).

In the instant case, claims 3, 4, and 7 each recite a specific type of construction operation as the "one of" the constructions used in the method of claim 1, and claim 1 covers each such single type of construction. Furthermore, because a dependent claim narrows the claim from which it depends, it must "incorporate . . . all the limitations of the claim to which it refers." 35 U.S.C. § 112(d).² In the instant case, claims 3, 4, and 7 must incorporate all the limitations of claim 1, and the construction used in claim 1, when modified by any of claims 3, 4, and 7, must be a specific one of the three listed in claim 1. Because claim 1 requires only one of the three constructions listed in claim 1, and claims 3, 4, and 7 each recite a specific single type of construction from that group of three, claims 3, 4, and 7 further limit claim 1.

In view of the foregoing, we agree with Appellants' arguments (App. Br. 8-9) that claims 3, 4, and 7 are narrower than claim 1. Accordingly, we do not sustain the Examiner's rejection of claims 3, 4, and 7 under 35 U.S.C. § 112(d).

² "One or more claims may be presented in dependent form, referring back to and further limiting another claim or claims in the same application." 37 C.F.R. § 1.75(c); see also MPEP §§ 608.01(i), (n). "Claims in dependent form shall be construed to include all the limitations of the claim incorporated by reference into the dependent claim." 37 C.F.R. § 1.75(c).
Obviousness Rejections of Claims 1-5, 7-12, 15-18, 20, 21, and 23-29

We disagree with Appellants' contentions as to representative independent claim 1 and separately argued dependent claim 8. With regard to claims 1 and 8, we adopt as our own (1) the findings and reasons set forth by the Examiner in the action from which this appeal is taken (Final Act. 6-10), and (2) the reasons set forth by the Examiner in the Examiner's Answer in response to Appellants' Appeal Brief (Ans. 4-13). We provide the following comments concerning Appellants' arguments and certain teachings and suggestions of the references.

We note that each reference cited by the Examiner must be read, not in isolation, but for what it fairly teaches in combination with the prior art as a whole. See In re Merck & Co., 800 F.2d 1091, 1097 (Fed. Cir. 1986) (one cannot show non-obviousness by attacking references individually where the rejections are based on combinations of references). In this light, Appellants' arguments as to representative independent claim 1 (App. Br. 10-15) concerning the individual shortcomings in the teachings of Geshwind and Koren are not persuasive, and are not convincing of the non-obviousness of the claimed invention set forth in representative independent claim 1.

With regard to representative independent claim 1, the Examiner has relied upon the combination of Geshwind and Koren as teaching or suggesting the clustering method of constructing a nonnegative sparse similarity matrix for a set of objects using one of the construction operations recited in claim 1. The Examiner relies upon Geshwind as disclosing clustering and construction of a nonnegative sparse similarity matrix (see Final Act. 6-9), and upon Koren as disclosing clustering, factorization of a matrix, using a nonnegative sparse similarity matrix for objects, and allocating objects to matrices using the sparse similarity matrix (see Final Act. 9; Ans. 4-8, citing Koren ¶¶ 78, 79, and 83). We agree with the Examiner's findings regarding Geshwind and Koren. We also agree with the Examiner's conclusion that the combination is properly made (see Final Act. 9-10; Ans. 8-10) and that the combination teaches and/or suggests the recited limitations of representative independent claim 1. Specifically, we agree with the Examiner's determination that

    because neighborhood based methods are intuitive and relatively simple to implement, without a need to present many parameters or to conduct an extensive training stage. They also allow for presenting a user with similar items that he or she has rated, and giving the user an opportunity to change previous ratings in accordance with his or her present tastes, with the understanding that this will affect subsequent ratings.

Ans. 9. We also agree with the Examiner's determination that it would have been obvious to combine the teachings of Geshwind and Koren "because Koren's system would have allowed Geshwind to facilitate a system for performing nonnegative factorization of the nonnegative sparse similarity matrix; allocating objects of the set of objects to clusters based on factor matrices generated by the nonnegative factorization of the nonnegative sparse similarity matrix" (Final Act. 9-10).
In addition, the portions of Koren cited by the Examiner (see Ans. 4-5) strongly suggest that modifying Geshwind with the teachings of Koren would provide the benefits of (i) improving prediction accuracy (Koren ¶ 90); and (ii) alleviating computational complexity (Koren ¶ 86).

In view of the foregoing, we sustain the Examiner's obviousness rejection of representative independent claim 1, as well as claims 2-5, 8, 11, 16-18, 20, 21, and 23-29 grouped therewith. We also sustain the Examiner's obviousness rejections of (i) claim 7 over the combination of Geshwind, Koren, and Rifkin; and (ii) claims 9, 10, and 12 over the combination of Geshwind, Koren, and Tamayo for the same reasons as provided as to claim 1.

Obviousness Rejection of Claim 8: Official Notice

With regard to the obviousness rejection of claim 8, the Examiner relies upon Official Notice that it is well known to those of ordinary skill in the art that a Frobenius norm is equivalent to, or a synonym of, a Hilbert-Schmidt norm (Ans. 10). The Examiner is correct, as evidenced by:

(1) James E. Gentle, Matrix Algebra: Theory, Computations, and Applications in Statistics, p. 132 (2007):³ "The Frobenius norm is also often called the 'usual norm', which emphasizes the fact that it is one of the most useful matrix norms. Other names sometimes used to refer to the Frobenius norm are Hilbert-Schmidt norm and Schur norm;" and

(2) Carl D. Meyer, Matrix Analysis and Applied Linear Algebra, Chapter 5.2, Matrix Norms, p. 279 (2000):⁴ "This is one of the simplest notions of a matrix norm, and it is called the Frobenius (p. 662) norm (older texts refer to it as the Hilbert-Schmidt norm or the Schur norm)."

Copies of these two documents are attached to this Decision on a form PTO-892. In view of the foregoing, Appellants' arguments (App. Br. 10; Reply Br. 8) that Gilbert merely discloses a Hilbert-Schmidt norm and fails to disclose the equivalent of the Frobenius norm recited in claim 8 are unpersuasive. For this reason, we sustain the Examiner's obviousness rejection of claim 8.

³ See also http://saba.kntu.ac.ir/eecd/sedghizadeh/Ebooks/Matrix_Analysis.pdf, last viewed on February 20, 2017.
⁴ See also http://www.matrixanalysis.com/page279.pdf, last viewed on February 20, 2017.

Obviousness Rejection of Claims 13 and 14

We agree with Appellants' contentions as to claims 13 and 14 (App. Br. 20-21; Reply Br. 12-13) that none of the applied references Geshwind, Koren, and Tamayo teaches or suggests initialization as recited, and that Tamayo teaches random initialization. Accordingly, we do not sustain the Examiner's obviousness rejection of claims 13 and 14.

CONCLUSIONS

(1) Appellants have shown the Examiner erred in rejecting claims 1, 3, 4, and 7 under 35 U.S.C. § 112(d) as failing to further limit the subject matter claimed, i.e., the subject matter recited in independent claim 1.

(2) The Examiner did not err in rejecting claims 1-5, 7-12, 15-18, 20, 21, and 23-29 under 35 U.S.C. § 103(a) because the combination (i) is proper, and (ii) teaches or suggests the limitations at issue in representative independent claim 1.

(3) The Examiner did not err in rejecting claim 8 under 35 U.S.C. § 103(a) because the Official Notice that a Frobenius norm is equivalent to, or a synonym of, a Hilbert-Schmidt norm was properly taken, as evidenced by the documents cited to Appellants in this Decision (see supra accompanying form PTO-892 and Evidence Appendix).
(4) Appellants have established that the Examiner erred in rejecting claims 13 and 14 as being unpatentable under 35 U.S.C. § 103(a) because none of the applied references, taken singly or in combination, discloses initialization as recited in claims 13 and/or 14.

DECISION

We affirm the Examiner's rejections of claims 1-5, 7-12, 15-18, 20, 21, and 23-29 under 35 U.S.C. § 103(a).

We reverse the Examiner's rejections of (i) claims 1, 3, 4, and 7 under 35 U.S.C. § 112(d); and (ii) claims 13 and 14 under 35 U.S.C. § 103(a).

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED-IN-PART

EVIDENCE APPENDIX

James E. Gentle, Matrix Algebra: Theory, Computations, and Applications in Statistics, p. 132 (2007).

Carl D. Meyer, Matrix Analysis and Applied Linear Algebra, Chapter 5.2, Matrix Norms, p. 279 (2000).

[Notice of References Cited (form PTO-892), Application/Control No. 12/109,496, Appeal No. 2016-006538, Administrative Patent Judge Eric S. Frahm, Art Unit 2100: the two non-patent documents listed above are cited as references U and V; a copy of each reference is furnished with this decision.]

[Title page, copyright page (Springer, 2007, ISBN 978-0-387-70872-0), and table of contents of the Gentle reference omitted.]
Excerpt from the Gentle reference, p. 132 (Chapter 3, Basic Properties of Matrices):

It is easy to see that this measure has the consistency property (Exercise 3.27), as a norm must. The Frobenius norm is sometimes called the Euclidean matrix norm and denoted by ‖·‖E, although the L2 matrix norm is more directly based on the Euclidean vector norm, as we mentioned above. We will usually use the notation ‖·‖F to denote the Frobenius norm. Occasionally we use ‖·‖ without the subscript to denote the Frobenius norm, but usually the symbol without the subscript indicates that any norm could be used in the expression. The Frobenius norm is also often called the "usual norm", which emphasizes the fact that it is one of the most useful matrix norms. Other names sometimes used to refer to the Frobenius norm are Hilbert-Schmidt norm and Schur norm.

A useful property of the Frobenius norm that is obvious from the definition is

    ‖A‖F = √tr(AᵀA) = √⟨A, A⟩;

that is,

    • the Frobenius norm is the norm that arises from the matrix inner product (see page 74).

From the commutativity of an inner product, we have ‖Aᵀ‖F = ‖A‖F. We have seen that the L2 matrix norm also has this property. Similar to defining the angle between two vectors in terms of the inner product and the norm arising from the inner product, we define the angle between two matrices A and B of the same size and shape as

    angle(A, B) = cos⁻¹(⟨A, B⟩ / (‖A‖F ‖B‖F)).   (3.233)

If Q is an n × m orthogonal matrix, then

    ‖Q‖F = √m   (3.234)

(see equation (3.169)). If A and B are orthogonally similar (see equation (3.191)), then ‖A‖F = ‖B‖F; that is, the Frobenius norm is an orthogonally invariant norm. To see this, let A = QᵀBQ, where Q is an orthogonal matrix. Then

    ‖A‖²F = tr(AᵀA) = tr(QᵀBᵀQQᵀBQ) = tr(BᵀBQQᵀ) = tr(BᵀB) = ‖B‖²F.
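The identities quoted above are easy to check numerically. A minimal NumPy sketch, added here for context and not part of the attached evidence:

import numpy as np

rng = np.random.default_rng(1)
A = rng.random((4, 4))
B = rng.random((4, 4))

# The Frobenius norm equals sqrt(tr(A^T A)), the norm arising from the
# matrix inner product.
assert np.isclose(np.linalg.norm(A, "fro"), np.sqrt(np.trace(A.T @ A)))

# Orthogonal invariance: if A = Q^T B Q with Q orthogonal, ||A||_F = ||B||_F.
Q, _ = np.linalg.qr(rng.random((4, 4)))
assert np.isclose(np.linalg.norm(Q.T @ B @ Q, "fro"), np.linalg.norm(B, "fro"))

# Equation (3.234): an n x m matrix with orthonormal columns has
# Frobenius norm sqrt(m).
Qnm, _ = np.linalg.qr(rng.random((5, 3)))  # reduced QR: 5 x 3 factor
assert np.isclose(np.linalg.norm(Qnm, "fro"), np.sqrt(3))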
[Front matter and table of contents of the Meyer reference omitted.]

Excerpt from the Meyer reference, § 5.2 (Matrix Norms), p. 279:

Because C^(m×n) is a vector space of dimension mn, magnitudes of matrices A ∈ C^(m×n) can be "measured" by employing any vector norm on C^(mn). For example, by stringing out the entries of

    A = (  2  −1 )
        ( −4  −2 )

into a four-component vector, the euclidean norm on C⁴ can be applied to write

    ‖A‖ = [2² + (−1)² + (−4)² + (−2)²]^(1/2) = 5.
This is one of the simplest notions of a matrix norm, and it is called the Frobenius (p. 662) norm (older texts refer to it as the Hilbert-Schmidt norm or the Schur norm). There are several useful ways to describe the Frobenius matrix norm.

Frobenius Matrix Norm. The Frobenius norm of A ∈ C^(m×n) is defined by the equations

    ‖A‖²F = Σ_{i,j} |a_ij|² = Σ_i ‖A_i*‖²₂ = Σ_j ‖A_*j‖²₂ = trace(A*A).   (5.2.1)

The Frobenius matrix norm is fine for some problems, but it is not well suited for all applications. So, similar to the situation for vector norms, alternatives need to be explored. But before trying to develop different recipes for matrix norms, it makes sense to first formulate a general definition of a matrix norm. The goal is to start with the defining properties for a vector norm given in (5.1.9) on p. 275 and ask what, if anything, needs to be added to that list. Matrix multiplication distinguishes matrix spaces from more general vector spaces, but the three vector-norm properties (5.1.9) say nothing about products. So, an extra property that relates ‖AB‖ to ‖A‖ and ‖B‖ is needed. The Frobenius norm suggests the nature of this extra property. The CBS inequality insures that

    ‖Ax‖²₂ = Σ_i |A_i* x|² ≤ Σ_i ‖A_i*‖²₂ ‖x‖²₂ = ‖A‖²F ‖x‖²₂.

That is,

    ‖Ax‖₂ ≤ ‖A‖F ‖x‖₂,   (5.2.2)

and we express this by saying that the Frobenius matrix norm ‖·‖F and the euclidean vector norm ‖·‖₂ are compatible. The compatibility condition (5.2.2) implies that for all conformable matrices A and B,

    ‖AB‖²F = Σ_j ‖[AB]_*j‖²₂ = Σ_j ‖A B_*j‖²₂ ≤ Σ_j ‖A‖²F ‖B_*j‖²₂ = ‖A‖²F Σ_j ‖B_*j‖²₂ = ‖A‖²F ‖B‖²F
        ⟹ ‖AB‖F ≤ ‖A‖F ‖B‖F.

This suggests that the submultiplicative property ‖AB‖ ≤ ‖A‖ ‖B‖ should be added to (5.1.9) to define a general matrix norm.
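Meyer's 2 × 2 example and the two inequalities above can likewise be verified numerically; the following NumPy check is an added illustration, not part of the attached excerpt:

import numpy as np

# Stringing out the entries of A and taking the euclidean norm gives the
# Frobenius norm, here equal to 5 as in Meyer's example.
A = np.array([[2.0, -1.0], [-4.0, -2.0]])
assert np.isclose(np.linalg.norm(A.ravel(), 2), 5.0)
assert np.isclose(np.linalg.norm(A, "fro"), 5.0)

rng = np.random.default_rng(0)

# Compatibility (5.2.2): ||Ax||_2 <= ||A||_F ||x||_2.
x = rng.random(2)
assert np.linalg.norm(A @ x, 2) <= np.linalg.norm(A, "fro") * np.linalg.norm(x, 2)

# Submultiplicativity: ||AB||_F <= ||A||_F ||B||_F.
B = rng.random((2, 2))
assert np.linalg.norm(A @ B, "fro") <= np.linalg.norm(A, "fro") * np.linalg.norm(B, "fro")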
So in my opinion, the book sometimes is too abstract without clear explanation. However, overall, this is a very good text that will take you far in the application of linear algebra in a wide variety of problems. Yes No This book provides many good chapters with excellent, clear, explanations and numerously well-constructed examples. It can be used as a text for a basic introductory level student as well as a book serving those who need to learn advanced topics in modern matrix/linear algebra. The student is introduced to the discipline with the usual gaussian elimination techniques as well as basic subspaces but is quickly lead to linear transformation, norms, inner products, orthogonal projections and eigenvaules and eigenvectors. As with many books, the chapters lead to the study of Jordan Form. This is where this book excels over many others because of its detailed explanations and examples in the preceding chapters. My own experience of trying to understand the Jordan Form has lead me in search for numerous other books on: invariant subspaces, complimentary subspaces, nilpotent matrices, range-null decomposition and projection operators. The foundation on successfully understanding the Jordan Form is a good understanding of these preceding subjects. Other books either skimp over the basic details of these abstract subjects (books that are too advanced) or failed to link them in a way that culminate in a coherent proof of the Jordan Form. This book handles this difficult proof in a clear logically manner, even with illustrations, because of the way these preceding abstract subjects are introduced to the student. Yes No This book contains a comprehensive treatment on the topic of matrix analysis and applied linear algebra. The concepts are clearly introduced and developed. It is rich with detailed proofs that are easy to follow. Results are summarized and clearly grouped and marked for reference. As a researcher and a practitioner, I found this book quite useful in explaining mathematical concepts without the need for a classroom instructor. Besides, this book comes with a CD that contains a PDF version which makes it quite useful to port as a Read more Read more Read more Read more Search Matrix analysis and applied linear algebra: Carl D. Meyer: 9780898714548: Amazon.com: Books https://www.amazon.com/Matrix-analysis-applied-linear-algebra/dp/0898714540/ref=sr_1_1?ie=UTF8&qid=1487179595&sr=8-1&keywords=matrix+analysis+and+applied+linear+algebra[2/22/2017 11:47:00 AM] Comment 7 people found this helpful. Was this review helpful to you? Report abuse But the answers of exercises are perfect for studying By Yuchen Wang on July 22, 2015 Format: Textbook Binding Verified Purchase Comment Was this review helpful to you? Report abuse Five Stars By Amazon Customer on October 19, 2015 Format: Textbook Binding Verified Purchase Comment Was this review helpful to you? Report abuse Good depth and breadth By engineering guy on September 26, 2010 Format: Textbook Binding Verified Purchase Comment 5 people found this helpful. Was this review helpful to you? Report abuse Five Stars By Constantine K. on March 22, 2015 Format: Textbook Binding Verified Purchase Comment Was this review helpful to you? Report abuse Set up an Amazon Giveaway reference. It is very rich with problem sets that add insight, both theoretically and practically. It is accompanied by a solutions manual which strengthens comprehension. I highly recommend this book. 
I think it deserves to be a model to follow for authorship in the digital age. Yes No I have not read it yet. But the answers of exercises are perfect for studying. Yes No Best of matrix analisys's book Yes No This is a good overall book that goes beyond your basic linear algebra texts such as Leon. If you're on the fence about buying it, just google Carl Meyer. His website has a digital copy of the text, so you can check it out and decide if it's worth buying. I commend the author for making the digital copies available, which is uncommon for an author to do. So if you find the text useful, please buy it! Oh, by the way, the textbook includes a cd with searchable pdf copies of the book and solutions manual. That alone sets this text apart from most others. Yes No Very good condition. Yes No See all verified purchase reviews (newest first) Write a customer review Amazon Giveaway allows you to run promotional giveaways in order to create buzz, reward your audience, and attract new followers and customers. Learn more about Amazon Giveaway This item: Matrix analysis and applied linear algebra Set up a giveaway Matrix analysis and applied linear algebra: Carl D. Meyer: 9780898714548: Amazon.com: Books https://www.amazon.com/Matrix-analysis-applied-linear-algebra/dp/0898714540/ref=sr_1_1?ie=UTF8&qid=1487179595&sr=8-1&keywords=matrix+analysis+and+applied+linear+algebra[2/22/2017 11:47:00 AM] What Other Items Do Customers Buy After Viewing This Item? Applied Linear Algebra and Matrix Analysis (Undergraduate Texts in Mathematics) Paperback 3 $49.95 Thomas S. Shores› Matrix Methods, Third Edition: Applied Linear Algebra Hardcover 9 $76.92 Richard Bronson› Introduction to Topology: Third Edition (Dover Books on Mathematics) Paperback 60 $8.86 Bert Mendelson› Matrix Analysis for Scientists and Engineers Paperback 6 $48.00 Alan J. Laub› Copy with citationCopy as parenthetical citation