Lab 8: CS61kaChow

Deadline: Thursday, May 1, 11:59:59 PM PT

Lab Slides

Make sure you've finished the setup in Lab 0 before starting this project.

For this lab, remember that the verb form of convolution is "convolve", not "convolute". Convolute means to complicate things. In case you forget, here's a helpful quote:

"Convolute is what I do on exams, convolve is what you do on exams."

-Babak Ayazifar, EE120 Professor

Introduction

Lab 8 is a modified version of Project 4, the parallelism project from past 61C semesters. As this is a lab designed to be completed within the 2 hour lab session, we have provided the naive convolution code in compute_naive.c as well as a skeleton for compute_optimized.c.

One of the major goals of this lab is to explore how to speed up our programs. However, as with any real-world performance tests, there are many different factors that can affect the performance of any particular program.

For this lab, one of the biggest factors is the load on the hive machines. Heavy hive machine load may significantly affect execution time and speedup, and can drastically misrepresent the performance of your program even when your code is correct.

As a result, we recommend checking Hivemind to choose which hive machine to ssh into. You will want to select a hive machine that has a low overall load, CPU usage, and number of current users.

Setup: Git

You must complete this project on the hive machines (not your local machine). See Lab 0 if you need to set up the hive machines again.

In your labs directory on the hive machine, pull any changes you may have made in past labs:

git pull origin main

Still in your labs directory on the hive machine, pull the files for this lab with:

git pull starter main

If you run into any git errors, please check out the common errors page.

Setup: Testing

The starter code does not come with any provided tests. To download the staff tests, run the following command:

python3 tools/create_tests.py

Background

This section is a bit long, so feel free to skip it for now, but refer back to it as needed; it provides helpful background information for the tasks in this project.

Convolutions

For background information about what a convolution is and why it is useful, see Appendix: (Optional) Convolutions.

Application: Video Processing

For this lab, we will be applying convolutions to a real-world application: video processing. Convolutions can blur, sharpen, or apply other effects to videos. This is possible because each frame in a video can be treated as a matrix of red, green, and blue values that together determine the color of each pixel. For simplicity, we're only working with grayscale videos, so there's only one value per pixel (as opposed to one value each for red, green, and blue). As such, we can perform any matrix operation on each video frame, one of them being convolution.

When we convolve a matrix with an image, the matrix we use has a major impact on the outcome, ranging from sharpening to blurring the image. The matrices that we provide in this lab will blur or sharpen your video frames. For each pixel, we compute a weighted average using the pixel itself and the pixels near it. Averaging many pixels together smooths out the differences between their values, resulting in a blur. This is referred to as a "Gaussian blur" and is exactly how your phone blurs photos.
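For intuition, a classic 3x3 Gaussian blur kernel is [[1, 2, 1], [2, 4, 2], [1, 2, 1]] scaled by 1/16 so the weights sum to 1: the center pixel is weighted most heavily and its neighbors less. (This is just an illustration; the kernels used in this lab may differ.)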

Vectors

In this lab, a vector is represented as an array of int32_ts, or an int32_t *.

Matrices

In this lab, we provide a type matrix_t defined as follows:

typedef struct {
  uint32_t rows;
  uint32_t cols;
  int32_t *data;
} matrix_t;

In matrix_t, rows represents the number of rows in the matrix, cols represents the number of columns in the matrix, and data is a 1D array of the matrix stored in row-major format (similar to project 2). For example, the matrix [[1, 2, 3], [4, 5, 6]] would be stored as [1, 2, 3, 4, 5, 6].
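For example, given a matrix_t *m, the element at row i and column j can be read as follows (a quick illustration with a hypothetical helper, not staff-provided code):

// Row-major layout: element (i, j) lives at offset i * cols + j in data.
int32_t matrix_get(matrix_t *m, uint32_t i, uint32_t j) {
  return m->data[i * m->cols + j];
}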

bin files

Matrices are stored in .bin files as a consecutive sequence of 4-byte integers (identical to project 2). The first and second integers in the file indicate the number of rows and columns in the matrix, respectively. The rest of the integers store the elements in the matrix in row-major order.

To view matrix files, you can run xxd -e matrix_file.bin, replacing matrix_file.bin with the matrix file you want to read. The output should look something like this:

00000000: 00000003 00000003 00000001 00000002  ................
00000010: 00000003 00000004 00000005 00000006  ................
00000020: 00000007 00000008 00000009           ............

The left-most column gives the byte offset within the file (e.g. the third row starts at byte 0x20 of the file). The dots on the right display the bytes in the file as ASCII; these bytes don't correspond to printable ASCII characters, so only dot placeholders appear.

The actual contents of the file are listed in 4-byte blocks, with 4 blocks displayed per row. The first row has the numbers 3 (row count), 3 (column count), 1 (first element), and 2 (second element). This is a 3x3 matrix with elements [1, 2, 3, 4, 5, 6, 7, 8, 9].
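As an illustration, here is a minimal sketch of how a matrix could be loaded from such a file, using the matrix_t definition above (a hypothetical helper with error handling omitted; the staff I/O code may differ):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

matrix_t *read_matrix(const char *path) {
  FILE *f = fopen(path, "rb");
  matrix_t *m = malloc(sizeof(matrix_t));
  // The first two 4-byte integers are the row and column counts.
  fread(&m->rows, sizeof(uint32_t), 1, f);
  fread(&m->cols, sizeof(uint32_t), 1, f);
  // The remaining integers are the elements in row-major order.
  m->data = malloc(sizeof(int32_t) * m->rows * m->cols);
  fread(m->data, sizeof(int32_t), m->rows * m->cols, f);
  fclose(f);
  return m;
}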

Task

In this lab, we provide a type task_t defined as follows:

typedef struct {
  char *path;
} task_t;

Each task represents a convolution operation (we'll go into what convolution is later), and is uniquely identified by its path. The path member of the task_t struct is the relative path to the folder containing the task.

Testing Framework

Tests in this lab are located in the tests directory. The starter code does not contain any tests, but it contains a script tools/create_tests.py which will create the tests directory and generate tests. If you would like to make a custom test, please add it to tools/custom_tests.py. Feel free to use the tests we provide in tools/staff_tests.py as examples.

Once you define a custom test in custom_tests.py, you can run the test using the make commands provided in the task that you're currently working on, and the test will be generated for you based on the parameters you specify.

Task 1: Naive Convolutions

Take a look at the provided naive convolution code in compute_naive.c.

Conceptual Overview: 2D Convolutions

Note: For this lab, we will only be covering discrete convolutions since the input vectors and matrices only have values at discrete indexes. This is in contrast to continuous convolutions where the input would have values for all real numbers. If you have seen convolutions in a different class or in some other context, the process we describe below may differ slightly from what you are used to.

Convolution is a special way of multiplying two vectors or two matrices together in order to determine how well they overlap. This leads to many different applications that you'll explore in this lab, but first, here are the mechanics for how convolution is done:

A convolution takes two vectors or two matrices, matrix A and matrix B. We will assume that matrix B is always smaller than matrix A.

  1. You begin by flipping matrix B in both dimensions. Note that flipping matrix B in both dimensions is NOT the same as transposing it: flipping an MxN matrix results in an MxN matrix, while transposing results in an NxM matrix.

  2. Once matrix B is flipped horizontally and vertically, overlap it with the top left corner of matrix A. Perform an element-wise multiplication where the matrices overlap, then add all of the results together to get a single value. This is the top left entry of your resultant matrix.
  3. Slide matrix B to the right by 1 and repeat this process. This continues until any part of matrix B no longer overlaps with matrix A. When this happens, move matrix B back to the first column of matrix A and down by 1 row.
  4. Repeat the entire process until reaching the bottom right corner of matrix A. You have now convolved matrices A and B.

You can assume that the height and width of matrix B are less than or equal to the height and width of matrix A.

Note: The output matrix has different dimensions from its input matrices. We'd recommend working out some examples to see how the dimensions of the output matrix relate to those of the input matrices; the sketch below makes the relationship explicit.
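To make the procedure concrete, here is a minimal sketch of a naive 2D convolution in C, assuming the output dimensions implied by the sliding procedure above (the provided compute_naive.c may be organized differently):

// Convolve matrix a with matrix b, flipping b in both dimensions.
// Sliding b over a while the two fully overlap gives output dimensions
// (a->rows - b->rows + 1) x (a->cols - b->cols + 1).
void convolve(matrix_t *a, matrix_t *b, matrix_t *out) {
  uint32_t out_rows = a->rows - b->rows + 1;
  uint32_t out_cols = a->cols - b->cols + 1;
  for (uint32_t i = 0; i < out_rows; i++) {
    for (uint32_t j = 0; j < out_cols; j++) {
      int32_t sum = 0;
      for (uint32_t bi = 0; bi < b->rows; bi++) {
        for (uint32_t bj = 0; bj < b->cols; bj++) {
          // Index b backwards in both dimensions to implement the flip.
          sum += a->data[(i + bi) * a->cols + (j + bj)] *
                 b->data[(b->rows - 1 - bi) * b->cols + (b->cols - 1 - bj)];
        }
      }
      out->data[i * out_cols + j] = sum;
    }
  }
}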

Your Task

Fill in this naive convolution quiz which will test your understanding of the naive implementation.

You may retake the quiz an unlimited number of times to achieve 100% before the deadline; the quiz should give instant feedback on correct/incorrect answers.

Task 2: Optimization

You will implement all optimizations in compute_optimized.c.

Task 2.1: SIMD

Helpful resources: Lab 7, Discussion 9

Optimize the naive solution using SIMD instructions in compute_optimized.c by filling in all relevant blanks. For this lab, we're using 32-bit integers, so each 256-bit AVX vector can store 8 integers and perform 8 operations at once. Take a look at the provided wrapper functions for the Intel vector intrinsics in vector.h to see what each function does.

As a reminder, you can use the Intel Intrinsics Guide as a reference to look up the relevant instructions. We use the __m256i type to hold 8 integers in a YMM register, and the _mm256_* intrinsics to operate on them. Make sure you use the unaligned versions of the intrinsics, unless your code aligns the memory so that the aligned versions are safe.
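As an illustration of the pattern (not the exact structure of the skeleton), an inner loop that accumulates 8 products at a time could look like the following sketch, written with raw AVX2 intrinsics rather than the vector.h wrappers:

#include <immintrin.h>
#include <stdint.h>

// Dot product of two int32_t arrays using 256-bit AVX2 vectors.
int32_t dot(int32_t *x, int32_t *y, int n) {
  __m256i acc = _mm256_setzero_si256();
  int i;
  for (i = 0; i + 8 <= n; i += 8) {
    // Unaligned loads, since x + i and y + i may not be 32-byte aligned.
    __m256i vx = _mm256_loadu_si256((__m256i *)(x + i));
    __m256i vy = _mm256_loadu_si256((__m256i *)(y + i));
    acc = _mm256_add_epi32(acc, _mm256_mullo_epi32(vx, vy));
  }
  // Horizontal sum: spill the accumulator and add its 8 lanes.
  int32_t tmp[8];
  _mm256_storeu_si256((__m256i *)tmp, acc);
  int32_t sum = 0;
  for (int k = 0; k < 8; k++) sum += tmp[k];
  // Tail loop: handle the remaining n % 8 elements one at a time.
  for (; i < n; i++) sum += x[i] * y[i];
  return sum;
}

Note that this ignores the flip of matrix B; one common approach is to flip B once up front so that the inner loop becomes a straight dot product like the one above.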

Task 2.2: OpenMP

Helpful resources: Lab 7, Discussion 10

Optimize your solution from task 2.1 using OpenMP directives in compute_optimized.c by filling in all relevant blanks. You can find more information on OpenMP directives on the OpenMP summary card.
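For example, a single directive over the outer loop of the convolution might look like the sketch below; this is only one possible placement, and you should experiment with what the skeleton's blanks call for:

// Iterations over output rows are independent, so they can run in parallel.
// Variables declared inside the loop body are automatically private.
#pragma omp parallel for
for (uint32_t i = 0; i < out_rows; i++) {
  for (uint32_t j = 0; j < out_cols; j++) {
    // ... SIMD inner loop from Task 2.1 ...
  }
}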

Task 2.3: Testing and Debugging

To run the staff-provided tests for this task (note: for this task, you should use make task_2 instead of make task_1):

make task_2 TEST=tests/test_tiny
make task_2 TEST=tests/test_small
make task_2 TEST=tests/test_large

If you don't have the tests, make sure you've pulled the starter and run python3 tools/create_tests.py (see Setup: Testing)!

For debugging, you can use vec_print() provided in vector.h to print out vectors. It is also possible to print out vectors directly in CGDB. To do so, type print *(type_in_vector *)&variable_name@num_elems_in_vector in CGDB.

For example, if you'd like to print the values inside a 256-bit wide vector, you could use the following command:

(gdb) print *(int32_t *)&my_vector@8
$1 = {123, 123, 123, 123, 123, 123, 123, 123}

While cgdb and valgrind may not be as helpful as they are for project 1, they are still helpful for fixing any errors you may encounter. When you run a test through make, it prints out the actual command used to run the test. For example, one possible make command could print out the following:

Command: ./convolve_naive_optimized tests/my_custom_test/input.txt

If you would like to use cgdb, add cgdb before the command printed. Similarly, for valgrind, add valgrind before the command printed. For example, the commands would be

cgdb --args ./convolve_naive_optimized tests/my_custom_test/input.txt
valgrind ./convolve_naive_optimized tests/my_custom_test/input.txt

Task 3: Feedback Form

Please fill out this short form. Any feedback you provide won't affect your grade, so feel free to be honest and constructive!

Benchmarks

Execution time and speedup may vary depending on hive machine load. Heavy hive machine load may significantly affect your program's performance. We recommend checking Hivemind to choose which hive machine to ssh into.

There is one type of benchmark we test:

  • Optimized: uses your compute_optimized.c and the staff compute_naive.c
    • These benchmarks are run with a limit of 4 threads for OpenMP.

Speedup Requirements

Name         Folder Name           Target Speedup
Random       test_ag_random        7.25x
Increasing   test_ag_increasing    7.14x
Decreasing   test_ag_decreasing    7.70x

Performance Score Calculation

The score for the performance portion of the autograder is calculated using the following equation: score = log(x) / log(t), where x is the speedup a submission achieves on a specific benchmark and t is the target speedup for that benchmark.
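For example, a 5x speedup on a benchmark with a 7.25x target would score log(5) / log(7.25) ≈ 0.81 on that benchmark. (The base of the logarithm doesn't matter, since it cancels in the ratio.)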

Submission and Grading

Submit your code to the Lab 8 Gradescope assignment. Make sure that you have only modified compute_optimized.c.

The score you see on Gradescope will be your final score for this lab. By default, we will rate limit submissions to 4 submissions for any given 2 hour period. We may adjust this limit as the deadline approaches and if there is significant load on the autograder, but we will always allow at least one submission per hour.

Appendix: (Optional) Convolutions

The content in this appendix is not in scope for this course.

Convolutions are a mathematical operation that have wide-ranging applications in signal processing and analysis, computer vision, image and audio processing, machine learning, probability, and statistics. Common applications include edge detection in images, adding blur (bokeh) in images, adding reverberation in audio, and creating convolutional neural networks.

This appendix will only give an overview of the 1D discrete time convolution. Discrete time means that a function is only defined at distinct, equally spaced integer indexes. Also, the explanations below move quickly and can get complicated, so it is okay not to fully understand convolutions for this project. For a much deeper understanding of convolutions and how they interact with signals, be sure to take EE120.

With that, let’s jump in 😈

Mathematically, a convolution is a type of function multiplication that measures how much two functions overlap. In real-world applications, though, a convolution describes how a system responds to a signal. A signal can be any mathematical function, and a system can be anything that takes in this function and produces an output. This output is also in the form of a function.

We can describe signals and systems like so:

x(t) -> H -> y(t)

Where x(t) and y(t) are functions, and H is the system.

Important qualities of a system that we’ll need for convolution are linearity and time-invariance.

Linearity: One aspect of linearity is that if you feed a signal into a system that has been multiplied by some constant, it will respond with the original output signal multiplied by the same constant.

αx(t) = x̂(t) -> H -> ŷ(t) = αy(t)

Additionally, linearity means that if you pass in a sum of signals, the system will output a sum of their corresponding outputs.

x_1(t) + x_2(t) = x̂(t) -> H -> ŷ(t) = y_1(t) + y_2(t)

Putting the two together:

αx_1(t) + βx_2(t) = x̂(t) -> H -> ŷ(t) = αy_1(t) + βy_2(t)

Time-Invariance: Time-Invariance means that if you feed a time shifted input into a system, it will respond with an equally time shifted output.

x(t - T) = x̂(t) -> H -> ŷ(t) = y(t - T)

Going forward, the systems we examine will only be linear and time-invariant (LTI).

In order to determine how a system will respond to any signal, an impulse can be passed into the system. The impulse function used is known as the Kronecker Delta, and it is defined like so:

x(t) = 1 if t = 0, and x(t) = 0 for all t ≠ 0

Going forward, this function will be denoted as: 𝛿(t).

An important property of 𝛿(t) is that when it is multiplied by another function x(t), the result is a new function that is 0 everywhere except for t = 0 where it equals x(0). This is due to 𝛿(t) being 0 everywhere except for t = 0 where 𝛿(t) = 1. This can be written as:

x(t)𝛿(t) = x(0)𝛿(t)

This property still holds when 𝛿(t) is shifted by some value T, so more generally:

x(t)𝛿(t - T) = x(T)𝛿(t-T)

This means that we can collect a single value of a function at any integer time value t just by multiplying the function with a shifted 𝛿(t). Knowing this, we can reconstruct x(t) by adding together the functions containing a single point. That means x(t) can be represented as a sum of scaled delta functions like so:

∑x(T)𝛿(t-T) from T = -∞ to ∞ = … + x(-1)𝛿(t + 1) + x(0)𝛿(t) + x(1)𝛿(t - 1) + …
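For example, a signal with x(0) = 2, x(1) = 7, and x(t) = 0 everywhere else can be written as x(t) = 2𝛿(t) + 7𝛿(t - 1).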

This is a very useful property of the delta function because instead of passing x(t) itself into a system, we can pass in this sum and gain more insight about the output, but first we must investigate how a system responds to 𝛿(t).

When 𝛿(t) is fed into the system, the output is known as the impulse response.

𝛿(t) -> H -> h(t)

There are a few methods for finding h(t), one of which is physically testing the system with an impulse and measuring the output. Knowing this, let's see what happens when we pass in the scaled sum of delta functions. We know that because the system is LTI, passing in a sum of scaled inputs will result in a sum of corresponding scaled outputs. Additionally, inputs shifted in time will give the corresponding outputs shifted in time by the same value. This means that:

x(t) =  … + x(-1)𝛿(t + 1) + x(0)𝛿(t) + x(1)𝛿(t - 1) + … -> H ->  … + x(-1)h(t + 1) + x(0)h(t) + x(1)h(t - 1) + … = y(t)

The output can then be expressed as ∑x(T)h(t-T) from T = -∞ to ∞. This is defined as the convolution of x(t) and h(t), denoted with the operator *. In other words, the system responds to x(t) by convolving it with the impulse response. Convolution is also a commutative operation, although the proof of this is left as an exercise for the reader.

The sum also shows why one of the vectors in the convolution procedure from Task 1 must be flipped before the two vectors can be multiplied directly. x(T) is evaluated starting from -∞, while h(t - T) starts from +∞, so without flipping you would need two pointers, one starting at the beginning of the first vector and the other at the end of the second, moving in opposite directions. Flipping h(t - T) into h(T - t) means that it too is evaluated from -∞ to +∞, allowing the two vectors to be multiplied directly to find the output at t. Since convolution is commutative, it doesn't matter which vector is flipped.
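As a quick worked example of the flip: to convolve x = [1, 2, 3] with h = [4, 5] using the sliding procedure from Task 1, flip h to get [5, 4] and slide it along x, giving [1*5 + 2*4, 2*5 + 3*4] = [13, 22].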

As a final point, notice that the input sum ∑x(T)𝛿(t-T) from T = -∞ to ∞ is itself a convolution, one between x(t) and 𝛿(t), and it equals x(t). That means that 𝛿(t) is the identity for convolution: x(t)*𝛿(t) = x(t) for any x(t).