I'm currently in my first year at Laurier studying Linear Algebra for my Computer Science undergrad. I'm unsure what I'll be doing in the future; to motivate myself through finals, I wrote this post on why Linear Algebra is something everyone should learn.
Defining Linear Algebra
| Term | Definition |
| --- | --- |
| Algebra | Exploring the relationships between unknown numbers and equations (usually x or y) |
| Linear | Describing something in a line-like or straightforward fashion |
| Linear Algebra | Exploring the relationships between equations that are linearly transformed |
In this subject, you won't see x², x/2, or π/y. x + 5 or 2x are fair game, since multiplication and addition are the only operations that scale linearly.
Something that scales linearly can be visualized with a graph, where for all values of x, the resulting line is straight.
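One way to sanity-check "scales linearly" in code: equal steps in x should always produce equal-sized steps in the output. A quick sketch (the helper name is my own):

```python
# A function scales linearly when its slope is constant:
# every step of 1 in x changes the output by the same amount.
def slope_is_constant(f, xs):
    diffs = [f(x + 1) - f(x) for x in xs]
    return all(d == diffs[0] for d in diffs)

xs = range(-5, 5)
print(slope_is_constant(lambda x: 2 * x, xs))  # 2x -> True
print(slope_is_constant(lambda x: x + 5, xs))  # x + 5 -> True
print(slope_is_constant(lambda x: x * x, xs))  # x squared -> False
```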
Applications
Before defining LinAlg's real-world applications, let's get the following three concepts out of the way:

Vectors: there are no apparent real-world applications for vectors, which are terminating lines in 2D or 3D space^{1}

Matrices: matrices are tables of numbers. What else are tables of numbers? Spreadsheets, which we'll explore later

Systems of Equations: systems of equations also have no apparent real-world applications besides being transformed into matrices/spreadsheets^{1}
In the abstract, LinAlg allows us to translate spreadsheets of data into visualizations in 2D and 3D space, and vice versa!
This is useful for:
🧑‍🏫 mathematicians: who want to derive visual insights from complex sets of numbers and equations
🧑‍🔬 physicists: who want to convert visualizations into complex sets of numbers and equations, and either manipulate/transform them, or do any other computation that modern spreadsheets give us
The Stock Market
Most of you won't be mathematicians or physicists, so we'll use the stock market as something more relatable.
Here's a transformation matrix where the columns are people
And we have three stock owners:
The columns represent the stock that the owners have in the companies AAPL, GOOG, and MSFT.
When running them through the transformation matrix, we multiply the stock values by the numbers in the matrix. So, the new values will be:
Just like that, we've transformed our matrix of stocks owned by Nate and Tom using the three rows (operations) in our transformation matrix! That's the power of spreadsheets with linear equations.
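Since the matrices themselves aren't shown above, here's a minimal Python sketch with invented share counts and prices, showing one row of a transformation (a pricing row) acting on each owner's column of holdings:

```python
# Invented numbers, just for illustration.
owners = ["Nate", "Tom"]
shares = {"Nate": [10, 4, 6], "Tom": [2, 8, 5]}  # AAPL, GOOG, MSFT shares
prices = [150, 120, 300]                         # made-up $ per share (one matrix row)

# Multiply each owner's column by the pricing row and sum: a dot product per owner.
for owner in owners:
    value = sum(s * p for s, p in zip(shares[owner], prices))
    print(owner, value)
```

Each extra row in the transformation matrix would add another output number per owner, which is exactly what a spreadsheet formula row does.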
Conclusion
Why should everybody learn LinAlg? Because spreadsheet formulas are really cool for transforming large amounts of data, and it is fun to visualize said transformations with matrices.
As for the rest, again, I'm not sure why the hell we'd ever need linear systems of equations or vectors other than as easy examples for students to visualize.
WLU Intro to Linear Algebra (MA123)
Systems of Linear Equations
- The coefficients of a system of equations can be written as a matrix
- The augmented matrix includes the answers (right-hand sides) of the equations
- The coefficient matrix holds only the questions (left-hand sides)
- A system has either 0, 1, or infinitely many solutions, according to the rules referenced in the rank section
- A homogeneous system is one where the right-hand side of every equation is 0
Row Operations and RREF
- A leading one is the first nonzero entry in a row, scaled to equal 1 (it is prefixed only by zeroes)
- REF (row echelon form) is where every leading one has all zeroes below it
- RREF (reduced row echelon form) is where every leading one has all zeroes above and below it
- A free variable is a variable whose column has no leading one
To get to RREF, we use row operations, which are the agreed-upon rules for manipulating matrices:
1. Make a leading one in the first row by multiplying it by a constant
2. Add a multiple of that row to the rows below it to zero out the rest of the first column
3. Do the same for the second and third rows, moving one column to the right each time
4. Now, zero out all values above each leading one
5. Repeat for every column in the coefficient matrix
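The steps above can be sketched as a small Python routine, using exact fractions to avoid rounding (the function name is mine):

```python
from fractions import Fraction

def rref(matrix):
    """Row-reduce a matrix to RREF using row operations."""
    m = [[Fraction(x) for x in row] for row in matrix]
    rows, cols = len(m), len(m[0])
    pivot_row = 0
    for col in range(cols):
        # Find a row at or below pivot_row with a nonzero entry in this column.
        pivot = next((r for r in range(pivot_row, rows) if m[r][col] != 0), None)
        if pivot is None:
            continue  # no leading one here: this column is a free variable
        m[pivot_row], m[pivot] = m[pivot], m[pivot_row]
        # Scale the row so its leading entry becomes 1.
        lead = m[pivot_row][col]
        m[pivot_row] = [x / lead for x in m[pivot_row]]
        # Zero out every other entry in this column, above and below.
        for r in range(rows):
            if r != pivot_row and m[r][col] != 0:
                factor = m[r][col]
                m[r] = [a - factor * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return m

# Augmented matrix for: x + 2y = 5 and 3x + 4y = 6
print(rref([[1, 2, 5], [3, 4, 6]]))  # x = -4, y = 9/2
```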
Rank: Solutions of a System
Before solving a system, we should determine whether the system is actually solvable.
| Case | Solution Type | Example |
| --- | --- | --- |
| No Solution | The rank of the coefficient matrix < the rank of the augmented matrix (obviously, since a row then reads 0 = 1) | A row of [0 0 \| 1] |
| Unique Solution | Both matrices have the same rank, and the rank = the number of variables | A rank-2 2 x 2 coefficient matrix |
| Infinite Solutions | Same as above, except the rank is less than the number of variables | A 2 x 3 coefficient matrix (3 variables, rank at most 2) |
To solve an infinite system: express the leading-one variables in terms of the free variables. For example, this matrix in RREF:

```
[1 0  1 0 | 1]
[0 1 -1 3 | 7]
[0 0  0 0 | 0]
```

has the equations:

```
x + z = 1
y - z + 3w = 7

# Therefore
x = 1 - z
y = 7 + z - 3w
```
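The back-substitution above turns directly into code: pick any values for the free variables, and the rest follow.

```python
# x and y are pinned down by the free variables z and w,
# which can take any value at all.
def solution(z, w):
    x = 1 - z
    y = 7 + z - 3 * w
    return (x, y, z, w)

# Two of the infinitely many solutions:
print(solution(0, 0))  # (1, 7, 0, 0)
print(solution(2, 1))  # (-1, 6, 2, 1)
```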
Vectors
 A row vector is a matrix with one row
 A column vector is a matrix with one column
- To find where two vector equations (lines) intersect, set their equations equal to each other
Vector Operations
- To add two column vectors, add their matching rows
- To multiply a vector by a constant, multiply each row by the constant (the other forms of vector multiplication are the cross and dot products)
The Dot Product
The dot product measures how much two vectors line up with each other.
- To compute the dot product, multiply the vectors' matching rows by each other and add the products; the magnitude of one vector projected onto another is the geometric interpretation of the dot product
 If two vectors are orthogonal, their dot product = 0
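Computing the dot product is literally multiply-and-add:

```python
def dot(u, v):
    # Multiply matching components and add up the products.
    return sum(a * b for a, b in zip(u, v))

print(dot([1, 2, 3], [4, 5, 6]))  # 1*4 + 2*5 + 3*6 = 32
print(dot([1, 0], [0, 1]))        # orthogonal, so the dot product is 0
```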
Cosine Law
The cosine law, cos θ = (u · v) / (‖u‖ ‖v‖), determines the angle between two vectors in radians (that means you must take the inverse cosine to find the actual angle)
Length, Distance, and Direction
- To determine the distance between two vectors, take the norm of their difference (subtract the first from the second), or use the cosine law
- To find the magnitude/length/norm of a vector, apply the Pythagorean theorem to the vector's components
 To find the unit vector in the direction of another vector, divide that vector by its norm
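All three of these are one-liners in Python (math.sqrt handles the Pythagorean part):

```python
import math

def norm(v):
    # Pythagorean theorem across all components.
    return math.sqrt(sum(x * x for x in v))

def distance(u, v):
    # Length of the difference vector.
    return norm([a - b for a, b in zip(u, v)])

def unit(v):
    # Divide each component by the norm.
    n = norm(v)
    return [x / n for x in v]

print(norm([3, 4]))              # 5.0
print(distance([1, 1], [4, 5]))  # 5.0
print(unit([3, 4]))              # [0.6, 0.8]
```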
The Cross Product
The cross product of 2 3D vectors gets us a vector that is orthogonal to both vectors.
Projection
- I know we discussed the geometric interpretation of the dot product being the "projection", but there's also orthogonal projection, which gives the part of one vector that points in the direction of another vector
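A sketch of orthogonal projection, assuming the usual formula proj_b(a) = ((a · b) / (b · b)) b:

```python
def project(a, b):
    # Orthogonal projection of a onto b: scale b by (a . b) / (b . b).
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    scale = dot(a, b) / dot(b, b)
    return [scale * x for x in b]

print(project([2, 3], [1, 0]))  # [2.0, 0.0]: only the x-part of [2, 3] survives
```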
Linear Combinations
- A linear combination is a vector expressed in terms of other vectors. This is just reordering systems of equations
- An upper triangular matrix is where all entries below the main diagonal are 0
- A lower triangular matrix is where all entries above the main diagonal are 0
Linear Independence
- A linearly independent vector changes the span of a vector set
- To check for linear dependence, row-reduce the vectors as the columns of a matrix: a vector whose column ends up without a leading one (a free variable) depends on the others
Lines
- The vector equation of a line is like y = mx + b, except even more useless!
- It helps us determine, using a point vector and a direction vector, the equation of an infinite line
Planes
- The vector equation of a plane uses the same format (a position vector + a direction vector), except with a second direction vector added, to denote the plane
 This format is useful for finding where two planes or a plane and a line intersect!
- The normal/standard/scalar equation of a plane uses a normal vector, a vector orthogonal to the plane, to denote the plane's orientation
 No position vector required!
- This equation is less useful for determining if two planes or a plane and a line intersect
- To convert the normal equation form into a vector equation: …
Matrices
- Matrices are indexed using i (row) and j (column) (whoever thought that was a good idea probably invented Cheezits ⚠️🚫)
 A matrix is considered symmetric when both triangles reflect each other across the diagonal
Operations
 We can add/subtract matrices of the same width/height
- A transposed matrix is a matrix flipped across its diagonal (rows become columns)
Multiplication
- A matrix A can be multiplied by a matrix B if B has the same number of rows as A has columns
- To multiply two matrices, treat the first matrix's rows as inputs, and the second matrix's columns as operations
 To square a matrix, we multiply it by itself
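The rows-as-inputs, columns-as-operations idea can be sketched in plain Python, straight from the definition:

```python
def matmul(a, b):
    # Each output cell (i, j) is row i of a dotted with column j of b.
    assert len(b) == len(a[0]), "b needs as many rows as a has columns"
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

A = [[1, 2],
     [3, 4]]
print(matmul(A, A))  # squaring A: [[7, 10], [15, 22]]
```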
Inverse
- The inverse of a matrix reverses the operations of a multiplication matrix
- A matrix has either exactly one inverse or none at all
 A matrix is called singular if it is uninvertible
 A matrix is called nonsingular if it is invertible
 A matrix is singular if its determinant = 0 (meaning its area is reduced to 0)
 Only square matrices are invertible
- Multiplying a matrix by its inverse results in no transformation: the identity matrix is what results
- To determine the inverse of a matrix, we put it side by side with the identity ([A | I]) and use row operations to turn the left half into the identity; the right half becomes the inverse
- A permutation matrix contains exactly one 1 in each row and each column (with zeroes elsewhere). It can be inverted by taking its transpose
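Here's a sketch of the [A | I] method in Python, using exact fractions (the same row-operation process described above; the function name and error message are my own):

```python
from fractions import Fraction

def inverse(a):
    """Invert a square matrix by row-reducing [A | I] until the left half is I."""
    n = len(a)
    # Glue the identity matrix onto the right side of A.
    aug = [[Fraction(x) for x in row] + [Fraction(1 if i == j else 0) for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        # Find a usable pivot; if the column has none, A is singular.
        pivot = next((r for r in range(col, n) if aug[r][col] != 0), None)
        if pivot is None:
            raise ValueError("matrix is singular (determinant is 0)")
        aug[col], aug[pivot] = aug[pivot], aug[col]
        lead = aug[col][col]
        aug[col] = [x / lead for x in aug[col]]
        for r in range(n):
            if r != col:
                factor = aug[r][col]
                aug[r] = [x - factor * y for x, y in zip(aug[r], aug[col])]
    # The right half is now the inverse.
    return [row[n:] for row in aug]

print(inverse([[1, 2], [3, 4]]))  # [[-2, 1], [3/2, -1/2]]
```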
Properties of Inverted Matrices
- A constant multiplied by a matrix, then inverted, is the same as the reciprocal of that constant multiplied by the inverse: (cA)⁻¹ = (1/c)A⁻¹
- The inverse of two multiplied matrices is the same as inverting both matrices and reversing their order: (AB)⁻¹ = B⁻¹A⁻¹
- Taking the exponent of a matrix then inverting it = taking the inverse of the matrix to that exponent: (Aⁿ)⁻¹ = (A⁻¹)ⁿ
- Taking the transpose of a matrix then inverting it = taking the inverse of the matrix, then transposing it: (Aᵀ)⁻¹ = (A⁻¹)ᵀ
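We can sanity-check the reversed-order rule on a concrete 2 x 2 example (the helper functions and the matrices are invented for illustration):

```python
def inv2(m):
    # Closed-form 2 x 2 inverse: swap a and d, negate b and c, divide by det.
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mul2(x, y):
    # 2 x 2 matrix multiplication.
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[2, 0], [1, 2]]
# (AB)^-1 equals B^-1 A^-1 -- note the reversed order:
print(inv2(mul2(A, B)))        # [[-1.0, 0.5], [1.25, -0.5]]
print(mul2(inv2(B), inv2(A)))  # the same matrix
```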
Determinants
The determinant of a square matrix measures the area (or volume) spanned by its n column vectors.
Note: The transpose of a matrix just flips it across its diagonal, so there's no change in the area or determinant.
Scaling Determinants
Say we have a 2 x 2 matrix with a determinant, or area, of 4. What if we scale the matrix by 2? We might think we just multiply 4 x 2, but that isn't our true determinant, since we haven't considered both dimensions, only one.
```
A:           2A:
| a b |  ->  | 2a 2b |
| c d |  ->  | 2c 2d |

det(A)  = ad - bc
det(2A) = (2a)(2d) - (2b)(2c)
        = 4ad - 4bc
        = 4(ad - bc)
        = 4det(A)
        = 2²det(A)
```
Each dimension introduced multiplies our determinant by the scale factor again, so in general det(cA) = cⁿ det(A).
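You can check the 2² scaling on any 2 x 2 matrix:

```python
def det2(m):
    # The 2 x 2 determinant formula: ad - bc.
    (a, b), (c, d) = m
    return a * d - b * c

A = [[1, 2], [3, 4]]
A2 = [[2 * x for x in row] for row in A]  # every entry scaled by 2

print(det2(A))   # -2
print(det2(A2))  # -8, i.e. 2² times det(A)
```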
Area of a 2 x 2 Matrix
But why?
Remember that each column is a vector, so if AC = [0, 1] and BD = [1, 0], we get a matrix of:

```
| 0 1 |
| 1 0 |
```

These are our basis vectors, and the area between them is 1. The formula gives 0 * 0 - 1 * 1 = -1 (an area of 1, but with the orientation flipped). If we flipped the order of the vectors, we would get +1.
Okay, but why do we subtract?
Because of the checkerboard pattern. Keep reading, and I'll explain it in a bit.
Area of a 3 x 3 Matrix
There are 3 ways to take the determinant of a 3 x 3 matrix.
- Cofactor expansion, which is the formal, longer method
- Sarrus' rule, which is a shorter method based on cofactor expansion (3 x 3 only)
- Upper triangular form, which uses row operations and works for any square matrix
Cofactor Expansion
To do cofactor expansion, we make a submatrix by removing a row and a column, using ~ notation:
The determinant of a submatrix is called a minor of that matrix
The determinant of a matrix bigger than 2 x 2 is the sum, along any single row or column, of each entry times its cofactor
How is a cofactor different than a minor?
Only by its sign: a cofactor is the minor with the checkerboard pattern applied:
For a 3 x 3 matrix, the signs alternate in a "checkerboard" pattern:

```
+ - +
- + -
+ - +
```

where the sign at position (i, j) is (-1)^(i + j):
- Position (1,1): (-1)^(1+1) = (-1)^2 = +1
- Position (1,2): (-1)^(1+2) = (-1)^3 = -1
- Position (1,3): (-1)^(1+3) = (-1)^4 = +1
The checkerboard pattern is used for consistency of expansion, which is fancy math lingo for "if we were to do cofactor expansion on any row or column, the determinant should be the same." Try this for yourself! Keep all the signs positive, and you'll see it no longer gets you the determinant.
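Cofactor expansion along the first row translates almost directly into a recursive function:

```python
def det(m):
    """Determinant by cofactor expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # Submatrix: remove row 0 and column j.
        sub = [row[:j] + row[j + 1:] for row in m[1:]]
        # Checkerboard sign: + for even columns, - for odd ones.
        sign = (-1) ** j
        total += sign * m[0][j] * det(sub)
    return total

print(det([[2, 1, 3], [4, 5, 6], [7, 8, 9]]))  # -9
```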
Sarrusā Rule
Here's the idea: copy the first two columns to the right of the matrix, then add the products of the three down-right diagonals and subtract the products of the three down-left diagonals.
NOTE: Sarrus' rule doesn't work for 4 x 4 (or larger) matrices
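Sarrus' rule as code, writing out the three added and three subtracted diagonal products:

```python
def sarrus(m):
    # Add the three down-right diagonals, subtract the three down-left ones.
    (a, b, c), (d, e, f), (g, h, i) = m
    return (a * e * i + b * f * g + c * d * h) - (c * e * g + a * f * h + b * d * i)

print(sarrus([[2, 1, 3], [4, 5, 6], [7, 8, 9]]))  # -9, same as cofactor expansion
```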
Area of a 4 x 4 Matrix
The Upper Triangular method works for any square matrix!
```
# Converting an original matrix into an upper triangular matrix using row operations
| 2 1 3 |      | 2 1  3   |
| 4 5 6 |  ->  | 0 3  0   |
| 7 8 9 |      | 0 0 -1.5 |

det = 2 * 3 * -1.5 = -9
```
Think about why this works. If we attempt Sarrus' rule on an upper triangular matrix, we end up just getting the main diagonal, since there's nothing else to add or subtract! Same with cofactor expansion!
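A sketch of the method in Python. It assumes no zero pivots ever show up; a full version would swap rows when needed (flipping the determinant's sign):

```python
def det_triangular(matrix):
    """Determinant via row operations: reduce to upper triangular form,
    then multiply the diagonal entries. (Adding a multiple of one row to
    another row never changes the determinant.)"""
    m = [[float(x) for x in row] for row in matrix]
    n = len(m)
    for col in range(n):
        if m[col][col] == 0:
            # A full version would swap rows here (flipping the sign);
            # this sketch just assumes pivots are nonzero.
            raise ValueError("zero pivot: row swap needed")
        for row in range(col + 1, n):
            factor = m[row][col] / m[col][col]
            m[row] = [a - factor * b for a, b in zip(m[row], m[col])]
    product = 1.0
    for i in range(n):
        product *= m[i][i]
    return product

print(det_triangular([[2, 1, 3], [4, 5, 6], [7, 8, 9]]))  # -9.0
```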
Inverting Determinants
Recall our identity: det(A) · det(A⁻¹) = det(I) = 1
So, the inverse of a determinant of 4 is just 1/4!
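A quick numeric check that det(A⁻¹) = 1/det(A), using the closed-form 2 x 2 inverse (helpers invented for illustration):

```python
def det2(m):
    # The 2 x 2 determinant: ad - bc.
    (a, b), (c, d) = m
    return a * d - b * c

def inv2(m):
    # Closed-form 2 x 2 inverse: swap a and d, negate b and c, divide by det.
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2, 0], [1, 2]]
print(det2(A))        # 4
print(det2(inv2(A)))  # 0.25, which is 1/4
```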