Matrix Operations

CSC 213 - Operating Systems and Parallel Algorithms - Weinman



Summary:
A brief exposition on two simple matrix operations: addition and multiplication.

Matrix Addition

If A and B are two M×N matrices, the matrix addition C=A+B is defined as
Ci,j = Ai,j + Bi,j
where Ci,j denotes the entry of C at the ith row and the jth column.
Does it matter in which order the entries are computed? Not really. We could compute C2,5 and C4,1 in either order we wish. So long as all the entries of C are computed, the algorithmic order is largely irrelevant to a user. The equation does suggest an algorithm that simply walks along the indices of C (and of A and B), calculates the appropriate sum, and stores the result appropriately.
Because each of the M×N values in C must be calculated, the matrix sum requires O(MN) or O(N^2) operations (when M = N). The "big O" in this case means there may be some constant (i.e., matrix-size-independent) multiplier in front of the MN product; for instance, incrementing the loop variables and storing each result add a constant amount of work to every sum.

Matrix Multiplication

If A is an M×N matrix and B is an N×P matrix, the definition of the matrix product C=AB is

Ci,j = Σ(k=1 to N) Ai,kBk,j = Ai,1B1,j + Ai,2B2,j + ... + Ai,NBN,j.
Because all M×P entries of C must be calculated, and each of these requires a sum over N values, a similar simple analysis shows that multiplying two matrices using a direct method requires O(MNP) operations, or O(N^3) when N = M = P.
Like addition, the order in which the elements Ci,j are computed makes no mathematical difference. Furthermore, because of the commutativity of addition, we can even add the Ai,kBk,j products in any order we wish. Mathematically, all orders are equivalent. The big question is ...
Is there any reason to choose one ordering of i, j, and k values over another?
Copyright © 2012 Jerod Weinman.
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.