Matrix Multiplication

Trivial as it may seem, there is a lot we can learn about parallelisation from matrix multiplication. Precisely because the problem is so simple, researchers around the world have tried their hand at it and come up with remarkable optimisations and implementations. Matrix multiplication is also the building block of dense linear algebra libraries.

This will be a three-post series covering the following optimisations and their parallelised versions:

  1. Naive Implementation
  2. Blocking Implementation
  3. Tiled Matrix Multiplication

The naive matrix multiplication code was adapted from a Cornell University CS5220 course assignment, but the original code did not work as-is. I partially rewrote it to suit my needs and removed the redundant parts.
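
For reference, here is a minimal sketch of the naive triple-loop version. This is not the exact CS5220 code; the function name, row-major storage, and i-j-k loop order are my own choices for illustration:

    /* Naive matrix multiplication sketch: C += A * B for square
       n x n matrices. Row-major storage; classic i-j-k loop order. */
    void naive_matmul(int n, const double *A, const double *B, double *C)
    {
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                for (int k = 0; k < n; ++k)
                    C[i * n + j] += A[i * n + k] * B[k * n + j];
    }

Every element of C reads a full row of A and a full column of B, so for large n the strided accesses to B miss the cache constantly; that is exactly the behaviour the later optimisations attack.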

There are several other optimisations one can apply to matrix multiplication, such as copy optimisation. I will cover these in future posts.
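
To give a flavour of copy optimisation before that post arrives, here is a rough sketch of the idea: copy a tile of the matrix into a small contiguous buffer so the inner loops stream through memory sequentially. The tile size and packing routine below are illustrative, not a tuned version:

    /* Copy optimisation sketch: pack a BLOCK x BLOCK tile of B
       (row-major) into a contiguous scratch buffer. A multiply
       kernel can then read buf sequentially instead of striding
       through B. BLOCK is an illustrative, untuned value. */
    #define BLOCK 32

    void pack_tile(int n, const double *B, int k0, int j0, double *buf)
    {
        for (int k = 0; k < BLOCK; ++k)
            for (int j = 0; j < BLOCK; ++j)
                buf[k * BLOCK + j] = B[(k0 + k) * n + (j0 + j)];
    }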

I use a MacBook Air with an i5 processor, 4 GB of RAM, and an L1 cache size of 64 KB. The maximum number of threads that can run on my CPU is 8. All experimental setup and results are based on this system configuration.
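
If you want to check the thread count on your own machine, OpenMP's runtime API reports it (compile with -fopenmp):

    /* Print the number of OpenMP threads available by default. */
    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        printf("max OpenMP threads: %d\n", omp_get_max_threads());
        return 0;
    }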

I have referred to Tile Reduction: the First Step Towards Tile Aware Parallelization in OpenMP to explain the tiled implementation.
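
As a preview of that post, here is a minimal sketch of tiled matrix multiplication with the outer tile loop parallelised via OpenMP. The tile size, the default scheduling, and the assumption that n is a multiple of TILE are all simplifications for illustration:

    /* Tiled matrix multiplication sketch: C += A * B, row-major,
       assuming n is a multiple of TILE for brevity. Each thread
       owns distinct row tiles of C, so the parallel loop is
       race-free. */
    #include <omp.h>

    #define TILE 32

    void tiled_matmul(int n, const double *A, const double *B, double *C)
    {
        #pragma omp parallel for
        for (int ii = 0; ii < n; ii += TILE)
            for (int jj = 0; jj < n; jj += TILE)
                for (int kk = 0; kk < n; kk += TILE)
                    for (int i = ii; i < ii + TILE; ++i)
                        for (int j = jj; j < jj + TILE; ++j) {
                            double sum = C[i * n + j];
                            for (int k = kk; k < kk + TILE; ++k)
                                sum += A[i * n + k] * B[k * n + j];
                            C[i * n + j] = sum;
                        }
    }

Working on one tile at a time keeps the active data small enough to stay in cache, which is the whole point of blocking and tiling.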

All the code for this series can be found here.

Thank you.
