The Horrors Persist, But so do I

📐 Why Everyone Should Learn Linear Algebra

I'm currently in my first year at Laurier studying Linear Algebra for my Computer Science undergrad. I'm unsure what I'll be doing in the future; to motivate myself through finals, I wrote this post on why Linear Algebra is something everyone should learn.

Defining Linear Algebra

| Term | Definition |
| --- | --- |
| Algebra | Exploring the relationships between unknown numbers and equations (usually x or y) |
| Linear | Describing something in a line-like or straightforward fashion |
| Linear Algebra | Exploring the relationships between linear equations and their transformations |

In this subject, you won’t see x-squared or 1/x or pi/y. x + 5, x/2, or 2x are fair game, since multiplying by a constant and adding are the only operations that scale linearly.

Something that scales linearly can be visualized with a graph: plot it for all values of x and the result is a straight line.
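A toy sketch of that "scales linearly" idea in code (my own example, not from the course):

```python
# A linear function plays nicely with addition; a nonlinear one doesn't.
def linear(x):
    return 2 * x    # multiplication by a constant: linear

def squared(x):
    return x ** 2   # squaring: not linear

print(linear(3 + 4) == linear(3) + linear(4))      # True: 14 == 14
print(squared(3 + 4) == squared(3) + squared(4))   # False: 49 != 25
```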

Applications

Before defining LinAlg’s real-world applications, let’s get the following three concepts out of the way:

  • Vectors: there are no apparent real-world applications for vectors, which are directed line segments (arrows) in 2D or 3D space¹

  • Matrices: matrices are tables of numbers. What else are tables of numbers? Spreadsheets, which we’ll explore later

  • Systems of Equations: systems of equations also have no apparent real-world applications besides being transformed into matrices/spreadsheets¹

In the abstract, LinAlg allows us to translate spreadsheets of data into visualizations in 2D and 3D space, and vice versa!

This is useful for:

🧑‍🏫 mathematicians: who want to derive visual insights from complex sets of numbers and equations

🧑‍🔬 physicists: who want to convert visualizations into complex sets of numbers and equations, and either manipulate/transform them, or do any other computation that modern spreadsheets give us

The Stock Market

Most of you won’t be mathematicians or physicists, so we’ll use the stock market as something more relatable.

Here’s the setup: a matrix of stock owners, where each column is a person and each row is how many shares they hold in a company: AAPL, GOOG, and MSFT.

We also have a transformation matrix, where each row is a linear operation to apply to those holdings.

To run the holdings through the transformation matrix, we multiply the two matrices: each entry of the result is a row of the transformation matrix dotted with a column of the holdings. Just like that, we’ve transformed our matrix of stocks owned by Nate and Tom using the rows (operations) in our transformation matrix! That’s the power of spreadsheets with linear equations.
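Here’s a minimal numpy sketch of the idea; the share counts and the transformation below are made up for illustration:

```python
import numpy as np

# Hypothetical holdings: one column per owner, one row per stock.
holdings = np.array([[10, 5],    # AAPL shares held by Nate, Tom
                     [ 2, 8],    # GOOG
                     [ 7, 1]])   # MSFT

# A made-up transformation: each row is one linear operation on the holdings.
transform = np.array([[2, 0, 0],   # double the AAPL position
                      [0, 1, 1],   # merge GOOG and MSFT into one position
                      [0, 0, 1]])  # keep MSFT as-is

print(transform @ holdings)        # the transformed holdings, one column per owner
```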

Conclusion

Why should everybody learn LinAlg? Because spreadsheet formulas are really cool for transforming large amounts of data, and it is fun to visualize said transformations with matrices.

As for the rest, again, I'm not sure why the hell we'd ever need linear systems of equations or vectors other than as easy examples for students to visualize.

WLU Intro to Linear Algebra (MA123)

Systems of Linear Equations

  • The coefficients of a system of equations can be written as a matrix
    • The augmented matrix includes the right-hand sides (the answers) of the equations
    • The coefficient matrix contains just the left-hand sides (the questions)
  • A system has either 0, 1, or infinitely many solutions, according to the rules referenced in the rank section
  • A homogeneous system is one where every equation equals 0
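For example (numbers made up), the system x + 2y = 5 and 3x - y = 1 splits into the two matrices like so:

```python
import numpy as np

# The made-up system  x + 2y = 5,  3x - y = 1  split into its two matrices:
coefficients = np.array([[1,  2],     # the "questions"
                         [3, -1]])
augmented    = np.array([[1,  2, 5],  # the questions plus the "answers"
                         [3, -1, 1]])
print(coefficients)
print(augmented)
```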

Row Operations and RREF

  • A leading one is the first nonzero entry of a row when that entry is a 1 (it is prefixed only by zeroes)
  • REF (row echelon form) is where all leading ones have only zeroes below them
  • RREF (reduced row echelon form) is where all leading ones have only zeroes above and below them
  • A free variable corresponds to a column without a leading one

To get to RREF, we use row operations, which are agreed-upon rules for manipulating a matrix without changing its solutions:

  1. Make a leading one in the first row by multiplying the row by a constant
  2. Add multiples of that row to the rows below it to zero out the rest of its column, scaling first if you have to
  3. Do the same for the second row and column, then the third
  4. Now, zero out all values above each leading one
  5. Repeat for every column in the coefficient matrix
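If you’d rather not grind through the row operations by hand, sympy will do them for you; a sketch with a made-up augmented matrix:

```python
from sympy import Matrix

A = Matrix([[2, 4, -2,  2],
            [1, 1,  1,  4],
            [0, 1, -1, -1]])

rref_matrix, pivot_columns = A.rref()   # applies the row operations for us
print(rref_matrix)      # Matrix([[1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 2]])
print(pivot_columns)    # (0, 1, 2): the columns that got leading ones
```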

Rank: Solutions of a System

Before solving a system, we should determine whether the system is actually solvable.

| Solution Type | Case | Example |
| --- | --- | --- |
| No solution | The rank of the coefficient matrix < the rank of the augmented matrix (obviously, since 0 ≠ 1) | The augmented matrix [1 0; 0 1], whose bottom row reads 0 = 1 |
| Unique solution | Both matrices have the same rank, and the rank = the number of variables | A 2 x 2 matrix of full rank |
| Infinite solutions | Same as above, except the ranks are less than the number of variables | A 2 x 3 matrix (more variables than equations) |
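A sketch of that classification in code, using the no-solution example from the table:

```python
import numpy as np

A = np.array([[1.0], [0.0]])   # coefficient matrix of the system x = 0, 0x = 1
b = np.array([[0.0], [1.0]])   # the right-hand sides (the second row says 0 = 1)

rank_A  = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.hstack([A, b]))
n_vars  = A.shape[1]

if rank_A < rank_Ab:
    print("no solution")
elif rank_A == n_vars:
    print("unique solution")
else:
    print("infinitely many solutions")
```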

To solve a system with infinitely many solutions, express the leading-one variables in terms of the free variables. For example, this matrix in RREF:

[1  0  1  0  |  1]
[0  1 -1  3  |  7]
[0  0  0  0  |  0]

Has equations that are:

x + z      = 1
y - z + 3w = 7

Therefore:

x = 1 - z
y = 7 + z - 3w
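sympy reaches the same answer, parameterizing x and y by the free variables z and w:

```python
from sympy import Matrix, linsolve, symbols

x, y, z, w = symbols("x y z w")

aug = Matrix([[1, 0,  1, 0, 1],    # the augmented matrix from above
              [0, 1, -1, 3, 7],
              [0, 0,  0, 0, 0]])
print(linsolve(aug, x, y, z, w))   # x = 1 - z, y = 7 + z - 3w, with z and w free
```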

Vectors

  • A row vector is a matrix with one row
  • A column vector is a matrix with one column
  • To find where two lines (written as vector equations) intersect, set their equations equal to each other

Vector Operations

  • To add two column vectors, add their corresponding entries
  • To multiply a vector by a constant, scale each entry by it (the other forms of vector multiplication are the cross and dot products)
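In numpy (toy numbers):

```python
import numpy as np

u = np.array([1, 2])
v = np.array([3, -1])

print(u + v)    # [4 1]: matching entries added together
print(3 * u)    # [3 6]: every entry scaled by the constant
```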

The Dot Product

The dot product measures how much two vectors line up with each other.

  • To compute the dot product, multiply the corresponding entries and add their products. Geometrically, dividing the dot product by one vector’s length gives the magnitude of the other vector projected onto it
  • If two vectors are orthogonal, their dot product = 0
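A sketch, with two made-up vectors that happen to be orthogonal:

```python
import numpy as np

u = np.array([1, 2, 3])
v = np.array([4, -2, 0])

print(np.dot(u, v))   # 1*4 + 2*(-2) + 3*0 = 0, so u and v are orthogonal
```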

Cosine Law

The cosine law relates the dot product to the angle between two vectors: cos θ = (u · v) / (||u|| ||v||). This gives the cosine of the angle, so you must take the inverse cosine to find the actual angle (in radians).
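For example, the angle between two made-up vectors that sit 45° apart:

```python
import numpy as np

u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])

cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
theta = np.arccos(cos_theta)        # the inverse cosine gives the angle in radians
print(theta, np.degrees(theta))     # ~0.785 radians, 45 degrees
```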

Length, Distance, and Direction

  • To determine the distance between two vectors, subtract one from the other and take the norm of the difference: ||u - v||

Or, using the cosine law: ||u - v||² = ||u||² + ||v||² - 2 ||u|| ||v|| cos θ

  • To find the magnitude/length/norm of a vector, apply the Pythagorean theorem to its components: ||v|| = √(v₁² + ... + vₙ²)

  • To find the unit vector in the direction of another vector, divide that vector by its norm
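All three in numpy:

```python
import numpy as np

u = np.array([3.0, 4.0])
v = np.array([0.0, 1.0])

print(np.linalg.norm(u))       # length/norm: sqrt(3**2 + 4**2) = 5.0
print(np.linalg.norm(u - v))   # distance: the norm of the difference
print(u / np.linalg.norm(u))   # [0.6 0.8]: the unit vector in u's direction
```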

The Cross Product

The cross product of two 3D vectors gets us a vector that is orthogonal to both of them.
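For example, crossing the x and y unit vectors gives the z unit vector:

```python
import numpy as np

u = np.array([1, 0, 0])
v = np.array([0, 1, 0])

w = np.cross(u, v)
print(w)                            # [0 0 1]
print(np.dot(w, u), np.dot(w, v))   # 0 0: w is orthogonal to both inputs
```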

Projection

  • I know we discussed the geometric interpretation of the dot product as the “projection”, but there’s also orthogonal projection, which produces the component of one vector that lies along the direction of another vector
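A sketch of the standard formula, proj_v(u) = ((u · v) / (v · v)) v:

```python
import numpy as np

def project(u, v):
    """Orthogonal projection of u onto v."""
    return (np.dot(u, v) / np.dot(v, v)) * v

u = np.array([2.0, 3.0])
v = np.array([1.0, 0.0])
print(project(u, v))   # [2. 0.]: the part of u that points along v
```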

Linear Combinations

  • A linear combination is a vector expressed in terms of other vectors (a sum of scalar multiples of them). This is just reordering systems of equations
  • An upper triangular matrix is one where all entries under the main diagonal are 0
  • A lower triangular matrix is one where all entries above the main diagonal are 0

Linear Independence

  • A linearly independent vector enlarges the span of a vector set when you add it
  • Linear dependence can be determined by row reducing the matrix whose columns are the vectors: if any column ends up without a leading one (a free variable), the vectors are linearly dependent
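In code, that check looks like this (a sketch with a deliberately dependent set: the third column is the sum of the first two):

```python
import numpy as np

# Stack the vectors as columns; they're independent iff the rank equals the
# number of vectors (every column gets a leading one, so no free variables).
vectors = np.array([[1.0, 0.0, 1.0],
                    [0.0, 1.0, 1.0],
                    [0.0, 0.0, 0.0]])
print(np.linalg.matrix_rank(vectors) == vectors.shape[1])  # False: dependent
```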

Lines

  • The vector equation of a line is like y = mx + b except even more useless!
  • It helps us determine, using a point vector and direction vector, the equation of an infinite line
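A tiny sketch of that equation, with a made-up point and direction:

```python
import numpy as np

point     = np.array([1.0, 2.0])   # a position vector: somewhere on the line
direction = np.array([3.0, 1.0])   # a direction vector: where the line points

# The vector equation of a line: every point on it is point + t * direction.
for t in [-1.0, 0.0, 2.0]:
    print(point + t * direction)
```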

Planes

  • The vector equation of a plane uses the same format (a position vector + a direction vector), except with a second direction vector added
    • This format is useful for finding where two planes, or a plane and a line, intersect!

  • The normal/standard/scalar equation of a plane uses a normal vector, a vector orthogonal to the plane, to denote the plane’s orientation
    • No position vector required!
    • This equation is useless for determining if two planes or a plane and a line intersect
    • To convert the normal equation form into a vector equation: …
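One standard way to do that conversion (my own sketch, not necessarily the course’s method): find a point on the plane, plus two direction vectors orthogonal to the normal:

```python
import numpy as np

# Made-up plane in normal form: n . x = d
n = np.array([1.0, 2.0, 2.0])
d = 6.0

point = n * d / np.dot(n, n)     # a point on the plane (the closest one to the origin)

# Cross any vector that isn't parallel to n with n to get an in-plane direction,
# then cross again for a second, independent one.
helper = np.array([1.0, 0.0, 0.0])
dir1 = np.cross(n, helper)
dir2 = np.cross(n, dir1)

print(point, dir1, dir2)
print(np.dot(point, n))                  # 6.0: the point really is on the plane
print(np.dot(dir1, n), np.dot(dir2, n))  # 0.0 0.0: both directions lie in the plane
```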

Matrices

  • Matrices are indexed using i and j, (whoever thought that was a good idea probably invented Cheezits ☠️🔫)
  • A matrix is considered symmetric when both triangles reflect each other across the main diagonal (that is, A = Aᵀ)

Operations

  • We can add/subtract matrices of the same width/height (entry by entry)
  • A transposed matrix is a matrix flipped across its main diagonal (rows become columns)

Multiplication

  • A matrix A can be multiplied by a matrix B if B has the same number of rows as A has columns
    • To multiply two matrices, treat the first matrix’s rows as inputs and the second matrix’s columns as operations: entry (i, j) of the result is row i of A dotted with column j of B
  • To square a matrix, we multiply it by itself
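A quick numpy sketch of the shape rule and of squaring:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])    # 2 x 3
B = np.array([[1, 0],
              [0, 1],
              [1, 1]])       # 3 x 2: B has as many rows as A has columns

print(A @ B)   # 2 x 2 result; entry (i, j) is row i of A dotted with column j of B

S = np.array([[1, 1],
              [0, 1]])
print(S @ S)   # squaring a (square) matrix: multiply it by itself
```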

Inverse

  • The inverse of a matrix undoes the transformation that multiplying by the matrix applies
  • A matrix only ever has one inverse or none
    • A matrix is called singular if it is not invertible
    • A matrix is called non-singular if it is invertible
  • A matrix is singular if its determinant = 0 (meaning the transformation squashes all area down to 0)
  • Only square matrices are invertible
  • Multiplying a matrix by its inverse results in no transformation; the identity matrix is what results
  • To determine the inverse of a matrix, we put it side by side with the identity matrix ([A | I]) and use row operations to make the left half equal the identity; the right half then holds the inverse
  • A permutation matrix contains exactly one 1 in each row and each column (and zeroes everywhere else). It can be inverted by taking its transpose
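A numpy sketch of a few of these facts, with made-up matrices:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

print(np.linalg.det(A))     # 1.0: nonzero, so A is non-singular (invertible)
A_inv = np.linalg.inv(A)
print(A @ A_inv)            # the identity matrix: the transformations cancel out

# A permutation matrix's inverse is just its transpose:
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
print(np.array_equal(P @ P.T, np.eye(3)))   # True
```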

Properties of Inverted Matrices

  • A constant times a matrix, inverted, is the same as the reciprocal of that constant times the inverted matrix: (cA)⁻¹ = (1/c)A⁻¹
  • The inverse of two multiplied matrices is the same as inverting both matrices and reversing their order: (AB)⁻¹ = B⁻¹A⁻¹
  • Taking a power of a matrix then inverting it = taking the inverse of the matrix to that power: (Aᵏ)⁻¹ = (A⁻¹)ᵏ
  • Taking the transpose of a matrix then inverting it = taking the inverse of the matrix, then transposing it: (Aᵀ)⁻¹ = (A⁻¹)ᵀ
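All four properties check out numerically (a toy verification):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])
B = np.array([[1.0, 2.0], [0.0, 1.0]])
inv = np.linalg.inv

print(np.allclose(inv(3 * A), (1 / 3) * inv(A)))              # (cA)^-1 = (1/c)A^-1
print(np.allclose(inv(A @ B), inv(B) @ inv(A)))               # (AB)^-1 = B^-1 A^-1
print(np.allclose(inv(A @ A @ A), inv(A) @ inv(A) @ inv(A)))  # (A^3)^-1 = (A^-1)^3
print(np.allclose(inv(A.T), inv(A).T))                        # (A^T)^-1 = (A^-1)^T
```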

Determinants

Area of a 2 x 2 Matrix

The determinant of a 2 x 2 matrix is the (signed) area of the parallelogram spanned by its two column vectors (the same number the 2D cross product of those vectors gives).

Bigger (Sub)matrices

  • We make a submatrix by removing a row and a column, written using ~ notation

  • The determinant of a submatrix is called a minor of that matrix
  • To find the determinant of a matrix bigger than 2 x 2, pick any row or column and add up each entry times its cofactor (its minor with an alternating ± sign)
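A sketch of cofactor expansion along the first row, checked against numpy’s built-in determinant:

```python
import numpy as np

def det_cofactor(M):
    """Determinant via cofactor expansion along the first row (a teaching sketch)."""
    n = M.shape[0]
    if n == 1:
        return M[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(M, 0, axis=0), j, axis=1)  # drop row 0, column j
        total += (-1) ** j * M[0, j] * det_cofactor(minor)     # entry times cofactor
    return total

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [5.0, 6.0, 0.0]])
print(det_cofactor(A), np.linalg.det(A))   # both 1.0 (up to floating point)
```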


  1. Please correct me if I’m wrong 
