Why Matrix Computations?

As of a few minutes ago, enrollment in matrix computations this semester stands at 35. We’re capped at 40 due to room capacity. More students would mean more work. I still hope a few more sign up, because this is an awesome topic to study. Why? It’s a one-two combo:

  1. Linear algebra is some beautiful math, right up there with complex analysis. It’s also fantastically useful. Want to understand continuum mechanics? Nonlinear dynamics and bifurcations? Statistics? Control theory? Signal processing? Modern graph theory? Hey, I’ve never even figured out how anyone could effectively use calculus without understanding linear algebra!

  2. Computers let us engineer math, including continuous mathematics. In one way, this is old stuff: numerical methods comprise some of the oldest algorithms in computer science, and scientific computing was the chief purpose of many of the pioneering machines. These days, different forces drive the computing industry: games and big data analysis are arguably hotter topics than climate simulation. But good luck producing 3D graphics, computer sound, or search engines without a solid numerical base!

So why learn numerical linear algebra? Not because I’m the highest-energy lecturer in the world, and not because matrix computations has suddenly become the most fashionable sub-field in computer science. But linear algebra is the raw structural material of numerical computation. We know how to make linear algebra computations strong and lightweight, and we know how to compose factorizations to build fantastic things. The body of numerical knowledge is tremendous, but matrix computations are the bones that support that body.
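
To make "composing factorizations" a little more concrete, here is one tiny, purely illustrative sketch: a least-squares fit built out of a QR factorization plus a triangular solve. The NumPy library and the particular toy problem are my choices for the example; nothing in the course is tied to them.

    # Illustrative only: fit a quadratic by least squares, composed from
    # a QR factorization and a small triangular solve.
    import numpy as np

    rng = np.random.default_rng(0)

    # Noisy samples of y = 1 + 2t + 3t^2
    t = np.linspace(0.0, 1.0, 50)
    y = 1.0 + 2.0 * t + 3.0 * t**2 + 0.05 * rng.standard_normal(t.shape)

    # Design matrix for the model y = c0 + c1*t + c2*t^2
    A = np.column_stack([np.ones_like(t), t, t**2])

    # Thin QR factorization A = Q R, then solve the small triangular
    # system R c = Q^T y for the coefficients.
    Q, R = np.linalg.qr(A)
    c = np.linalg.solve(R, Q.T @ y)

    print(c)  # should land close to [1, 2, 3]
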

See you in class tomorrow. Get the book; it’s a good one. Oh, and the first homework is up.
