- Book Name: Linear Algebra with Applications by Gareth Williams
- Author: Gareth Williams
- Pages: 599
- Size: 41 MB

Linear Algebra with Applications PDF Free Download
This book contains mathematics with interesting applications integrated into the main body of the text. My approach is to develop the mathematics first and then provide the application. I believe that this makes for the clearest text presentation. However, some instructors may prefer to look ahead with the class to an application and use it to motivate the mathematics. Historically, mathematics has developed through interplay with applications. For example, the long-term behavior of a Markov chain model of population movement between U.S. cities and suburbs can be used to motivate eigenvalues and eigenvectors. This type of approach can be very instructive but should not be overdone.
Chapter 1 Linear Equations and Vectors The reader is led from solving systems of two linear equations to solving general systems. The Gauss-Jordan method of forward elimination is used; it is a clean, uncomplicated algorithm for the small systems encountered. (The Gauss method, which uses forward elimination to arrive at the echelon form and then back substitution to get the reduced echelon form, can easily be substituted if preferred, based on the discussion in Section 7.1. The examples then in fact become useful exercises for checking mastery of the method.) Solutions in many variables lead to the vector space Rn. Concepts of linear independence, basis, and dimension are discussed. They are illustrated within the framework of subspaces of solutions to specific homogeneous systems. I have tried to make this an informal introduction to these ideas, which will be followed in Chapter 4 by a more in-depth discussion. The significance of these concepts to the larger picture will then be apparent right from the outset. Exercises at this stage require a brief explanation involving simple vectors. The aim is to get students to understand the ideas without having to work through a haze of arithmetic. In the following sections, the course then becomes a natural, beautiful buildup of ideas. The dot product leads to the concepts of angle, vector magnitude, distance, and the geometry of Rn. (This section on the dot product can be deferred to just before Section 4.6, which is on orthonormal vectors, if desired.) The chapter closes with three optional applications. Fitting a polynomial of degree n – 1 to n data points leads to a system of linear equations that has a unique solution. The analyses of electrical networks and traffic flow give rise to systems that have unique solutions and many solutions, respectively. The model for traffic flow is similar to that of electrical networks, but has fewer restrictions, leading to more freedom and thus many solutions in place of a unique solution.
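For readers who want to experiment with the elimination algorithm described above, here is a minimal sketch of Gauss-Jordan reduction in Python with NumPy. The function name and the small 2x2 example system are my own illustrative choices, not the book's notation.

```python
import numpy as np

def gauss_jordan(aug):
    """Reduce an augmented matrix [A | b] to reduced row echelon form."""
    a = aug.astype(float).copy()
    rows, cols = a.shape
    r = 0
    for c in range(cols - 1):           # last column is the right-hand side
        pivot = np.argmax(np.abs(a[r:, c])) + r
        if np.isclose(a[pivot, c], 0):
            continue                    # no pivot in this column
        a[[r, pivot]] = a[[pivot, r]]   # swap the pivot row into place
        a[r] /= a[r, c]                 # scale the pivot to 1
        for i in range(rows):           # eliminate the column everywhere else
            if i != r:
                a[i] -= a[i, c] * a[r]
        r += 1
        if r == rows:
            break
    return a

# Example: x + 2y = 5, 3x + 4y = 11  ->  x = 1, y = 2
print(gauss_jordan(np.array([[1, 2, 5], [3, 4, 11]])))
```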
Chapter 2 Matrices and Linear Transformations Matrices were used in the first chapter to handle systems of equations. This application motivates the algebraic development of the theory of matrices in this chapter. A beautiful application of matrices in archaeology that illustrates the usefulness of matrix multiplication, transpose, and symmetric matrices is included in this chapter. The reader can anticipate, for physical reasons, why the product of a matrix and its transpose has to be symmetric and can then arrive at the result mathematically. This is mathematics at its best! A derivation of the general result that the set of solutions to a homogeneous system of linear equations forms a subspace builds on the discussion of specific systems in Chapter 1. A discussion of dilations, reflections, and rotations leads to matrix transformations and an early introduction of linear transformations on Rn. Matrix representations of linear transformations with respect to standard bases of Rn are derived and applied. A self-contained illustration of the role of linear transformations in computer graphics is presented. The chapter closes with three optional sections on applications that should have broad appeal. The Leontief Input-Output Model in Economics is used to analyze the interdependence of industries. (Wassily Leontief received a Nobel Prize in 1973 for his work in this area.) A Markov chain model is used in demography and genetics, and digraphs are used in communication and sociology. Instructors who cannot fit these sections into their formal class schedule should encourage readers to browse through them. All discussions are self-contained. These sections can be given as out-of-class projects or as reading assignments.
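The symmetry result mentioned above is easy to see numerically as well as algebraically: (AA^T)^T = (A^T)^T A^T = AA^T. A quick illustration in NumPy, with an arbitrary matrix of my own choosing:

```python
import numpy as np

# Any rectangular matrix will do; these entries are arbitrary.
A = np.array([[1, 0, 2],
              [3, 1, 1]])

S = A @ A.T                  # product of A with its transpose (2x2 here)
print(S)
print(np.allclose(S, S.T))   # True: (A A^T)^T = A A^T, so S is symmetric
```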
Chapter 3 Determinants and Eigenvectors Determinants and their properties are introduced as quickly and painlessly as possible. Some proofs are included for the sake of completeness, but they can be skipped if the instructor so desires. The chapter closes with an introduction to eigenvalues, eigenvectors, and eigenspaces. The student will see applications in demography and weather prediction and a discussion of the Leslie model used for predicting births and deaths of animals. The importance of eigenvalues to the implementation of the Google search engine is discussed. Some instructors may wish to discuss diagonalization of matrices from Section 5.3 at this time.
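As a small taste of the kind of computation eigenvalues enable, the long-term state of a two-state Markov chain (city versus suburb populations, as in the earlier motivation) is the eigenvector of the transition matrix belonging to eigenvalue 1. The transition probabilities below are invented purely for illustration:

```python
import numpy as np

# Hypothetical transition matrix: each column says where this year's
# city/suburb population goes next year (made-up probabilities).
P = np.array([[0.96, 0.01],
              [0.04, 0.99]])

vals, vecs = np.linalg.eig(P)
k = np.argmin(np.abs(vals - 1))          # pick the eigenvalue closest to 1
steady = vecs[:, k] / vecs[:, k].sum()   # normalize to a probability vector
print(steady)                            # long-term city/suburb proportions
```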
Chapter 4 General Vector Spaces The structure of the abstract vector space is based on that of Rn. The concepts of subspace, linear dependence, basis, and dimension are defined rigorously and are extended to spaces of matrices and functions. The section on rank brings together many of the earlier concepts. The reader will see that matrix inverse, determinant, rank, and uniqueness of solutions are all related. This chapter also includes an introduction to linear transformations between general vector spaces. Topics such as kernel, range, and the rank/nullity theorem are presented. Linear transformations, kernel, and range are used to give the reader a geometrical picture of the sets of solutions to systems of linear equations, both homogeneous and nonhomogeneous.
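The rank/nullity theorem mentioned above (for an m x n matrix A, rank(A) + nullity(A) = n, where the nullity is the dimension of the solution space of Ax = 0) can be checked numerically. The matrix below is my own example, chosen so that its third row is the sum of the first two:

```python
import numpy as np
from scipy.linalg import null_space

# Row 3 = row 1 + row 2, so the rank is 2 and the kernel has dimension 4 - 2 = 2.
A = np.array([[1, 2, 0, 1],
              [0, 1, 1, 3],
              [1, 3, 1, 4]])

rank = np.linalg.matrix_rank(A)
nullity = null_space(A).shape[1]      # number of basis vectors of the kernel
print(rank, nullity, rank + nullity)  # 2 2 4  (= number of columns)
```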
Chapter 5 Coordinate Representations The reader will see that every finite-dimensional vector space is isomorphic to Rn. This implies that every such vector space is, in a mathematical sense, “the same as” Rn. These isomorphisms are defined by the bases of the space. Different bases also lead to different matrix representations of a linear transformation. The central role of eigenvalues and eigenvectors in finding diagonal representations is discussed. These techniques are used to arrive at the normal modes of oscillating systems.
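The diagonalization idea can be sketched in a few lines: if the columns of P are independent eigenvectors of A, then P⁻¹AP is the diagonal matrix of eigenvalues, i.e. the matrix of the transformation with respect to the eigenvector basis. The 2x2 matrix here is an illustration of mine, not an example from the book:

```python
import numpy as np

# A 2x2 matrix with distinct eigenvalues 3 and 1.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, P = np.linalg.eig(A)     # columns of P are eigenvectors of A
D = np.linalg.inv(P) @ A @ P   # change of basis to the eigenvector basis
print(np.round(D, 10))         # diagonal matrix of the eigenvalues (order may vary)
```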
Chapter 6 Inner Product Spaces The axioms of inner products are presented and inner products are used (as was the dot product earlier in Rn) to define norms of vectors, angles between vectors, and distances in general vector spaces. These ideas are used to approximate functions by polynomials. The importance of such approximations to computer software is discussed. I could not resist including a discussion of the use of vector space theory to detect errors in codes. The Hamming code, whose elements are vectors over a finite field, is introduced. The reader is also introduced to non-Euclidean geometry, leading to a self-contained discussion of the special relativity model of space-time. Having developed the general inner product space, the reader finds that the framework is not appropriate for the mathematical description of space-time. The positive definite axiom is discarded, opening up the door first for the pseudo inner product that is used in special relativity, and later for one that describes gravity in general relativity. It is appropriate at this time to discuss the importance of first mastering standard mathematical structures, such as inner product spaces, and then to indicate that mathematical research often involves changing the axioms of such standard structures. The chapter closes with a discussion of the use of a pseudoinverse to determine least squares curves for given data.
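A minimal sketch of the least squares idea that closes the chapter: the pseudoinverse of the design matrix gives the best-fit line through data that no single line passes through exactly. The data points below are invented for illustration:

```python
import numpy as np

# Invented data points (x, y) that do not lie on one line.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 1.9, 3.2, 3.9])

A = np.column_stack([np.ones_like(x), x])  # design matrix for y = c0 + c1*x
c = np.linalg.pinv(A) @ y                  # pseudoinverse gives the least squares coefficients
print(c)                                   # intercept and slope of the best-fit line
```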
Chapter 7 Numerical Methods This chapter on numerical methods is important to the practitioner of linear algebra in today’s computing environment. I have included Gaussian elimination, LU decomposition, and the Jacobi and Gauss-Seidel iterative methods. The merits of the various methods for solving linear systems are discussed. In addition to discussing the standard topics of round-off error, pivoting, and scaling, I felt it important and well within the scope of the course to introduce the concept of ill-conditioning. It is very interesting to return to some of the systems of equations that arose earlier in the course and find out how dependable the solutions are! The matrix of coefficients of a least squares problem, for example, is very often a Vandermonde matrix, leading to an ill-conditioned system. An iterative method for finding dominant eigenvalues and eigenvectors follows; this discussion leads very naturally into the techniques used by geographers to measure the relative accessibility of nodes in a network, and the connectivity of the road network of Cuba is found. The chapter closes with a discussion of Singular Value Decomposition that is more complete than the discussion usually given in introductory linear algebra books.
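Two of these ideas are easy to sketch in code. The first block is a basic power iteration for the dominant eigenpair (a common form of the kind of iterative method the chapter describes; the matrix is my own example with dominant eigenvalue 5). The last line illustrates ill-conditioning by printing the very large condition number of a Vandermonde matrix on equally spaced points:

```python
import numpy as np

def power_method(A, iterations=100):
    """Iterate x -> Ax / ||Ax|| to approximate the dominant eigenpair of A."""
    x = np.array([1.0, 0.0])
    for _ in range(iterations):
        x = A @ x
        x /= np.linalg.norm(x)
    eigenvalue = x @ A @ x        # Rayleigh quotient of the normalized iterate
    return eigenvalue, x

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])        # made-up matrix; eigenvalues are 5 and 2
print(power_method(A))            # approximately 5 and the eigenvector [1, 1]/sqrt(2)

# Ill-conditioning: a 10x10 Vandermonde matrix on equally spaced points has a
# huge condition number, so least squares systems built from it are fragile.
print(np.linalg.cond(np.vander(np.linspace(0.0, 1.0, 10))))
```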
Chapter 8 Linear Programming This final chapter gives the student a brief introduction to the ideas of linear programming. The field, developed by George Dantzig and his associates at the U.S. Department of the Air Force in 1947, is now widely used in industry and has its foundation in linear algebra. Problems are described by systems of linear inequalities. The reader sees how small systems can be solved geometrically, while large systems are solved with the simplex algorithm, which uses row operations on matrices.
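Here is a tiny sketch of how such a problem might be set up in code, using SciPy's linprog solver as a stand-in for the hand computations in the chapter (SciPy's default solver is not the textbook simplex tableau). The objective and constraints are invented:

```python
from scipy.optimize import linprog

# Invented problem: maximize 3x + 2y subject to
#   x + y <= 4,  x + 3y <= 6,  x >= 0,  y >= 0.
# linprog minimizes, so we negate the objective coefficients.
c = [-3, -2]
A_ub = [[1, 1],
        [1, 3]]
b_ub = [4, 6]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(result.x, -result.fun)   # optimal corner point (4, 0) and maximum value 12
```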