Chapter 1

Introduction

The workhorse of scientific computation is matrix multiplication. In many applications one would like to multiply large matrices, ten thousand by ten thousand or even larger. The standard algorithm for multiplying $n \times n$ matrices uses on the order of $n^3$ arithmetic operations, whereas addition of matrices only uses $n^2$. For a $10{,}000 \times 10{,}000$ matrix this means $10^{12}$ arithmetic operations for multiplication compared with $10^8$ for addition.
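The $n^3$ count comes from the classical triple loop: each of the $n^2$ output entries is an inner product of length $n$. A minimal Python sketch (illustrative only, not the book's notation) makes the comparison with addition concrete:

```python
def matmul_standard(A, B):
    """Multiply two n x n matrices with the classical triple loop.

    Performs n^3 multiplications (and about as many additions),
    so on the order of n^3 arithmetic operations in total.
    """
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C


def mat_add(A, B):
    """Matrix addition: one operation per entry, n^2 in total."""
    n = len(A)
    return [[A[i][j] + B[i][j] for j in range(n)] for i in range(n)]
```

For $n = 10{,}000$ the inner loop body of `matmul_standard` runs $10^{12}$ times, while `mat_add` touches each of the $10^8$ entries once, matching the counts above.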

Wouldn’t it be great if all matrix operations were as easy as addition? As “pie in the sky” as this wish sounds, it might not be far from reality. After reviewing the standard algorithm for comparison, §1.1 begins with Strassen’s algorithm for multiplying $2 \times 2$ matrices using seven multiplications. As shocking as this algorithm may be already, it has an even more stunning consequence: $n \times n$ matrices can be multiplied by performing on the order of $n^{2.81}$ arithmetic operations. This algorithm is implemented in practice for multiplication of large matrices. More recent advances have brought the number of operations needed even closer to the $n^2$ of addition.
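Strassen's seven multiplications for the $2 \times 2$ case, presented in §1.1, can be sketched as follows. The same formulas apply when the entries are themselves matrix blocks; applying them recursively replaces 8 block multiplications by 7 at each level, which is what yields the exponent $\log_2 7 \approx 2.81$.

```python
def strassen_2x2(A, B):
    """Multiply 2x2 matrices with Strassen's seven multiplications
    (the standard algorithm uses eight)."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    # The seven products.
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # The entries of AB are recovered using only additions.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```

The extra additions do not matter asymptotically: additions cost only $n^2$ per block, so the recursion is dominated by the multiplication count.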

To better understand Strassen’s algorithm, and to investigate if it can be improved, it helps to introduce the language of tensors, which is done in §1.2. In particular, the rank and border rank of a tensor are the standard measures of its complexity.

The problem of determining the complexity of matrix multiplication can be rephrased as the problem of decomposing a particular tensor (the matrix multiplication operator) according to its rank. Tensor decomposition arises in numerous application areas: locating the area causing epileptic seizures in a brain, determining the compounds in a solution using fluorescence spectroscopy, and data mining, to name a few. In each case, researchers compile


http://dx.doi.org/10.1090/gsm/128/01