Step 4: 1D Thread Coarsening using GPU Registers
I want to start this post with a brief analysis of the kernel in step 4 (keeping the actual hardware in mind). Compiling the code with the flag --ptxas-options=-v (an example command is shown after the list below) reports that we are using 8192 bytes (8 KB) of shared memory. As I am using 32x32 blocks, there are 1024 threads per block. Below are the specifications for the RTX 3090:
Max Threads per Block: 1024
Max Threads per SM: 1536
Max Shared Memory per Block: 48 KB
Max Shared Memory per SM: 100 KB
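For reference, the shared memory number above comes from a compile command along these lines (the file name here is just a placeholder, not necessarily the one in the repository); ptxas then reports the registers and shared memory (smem) used by each kernel:
nvcc --ptxas-options=-v tiled_xgemm.cu -o tiled_xgemm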
As the whole block gets assigned to an SM, the program runs on the hardware as follows:
Shared memory: We are using 8 KB per block, plus roughly 1 KB per block of CUDA runtime usage, for a total of 9 KB per block. The hardware supports 100 KB per SM, so based on shared memory alone we could have up to 11 blocks running per SM (at a time).
Threads: We are using 1024 threads per block, and a maximum of 1536 threads per SM is supported. This leads to just 1 block running per SM (at a time).
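Putting the two limits together: blocks per SM = min(floor(100 KB / 9 KB), floor(1536 / 1024)) = min(11, 1) = 1, i.e., the thread limit (not shared memory) is the bottleneck here.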
What this means is that our code runs only 1 block per SM at a time. In short, a larger portion of the calculations runs in sequence. Wouldn't it be better if:
We manually serialize a portion of the code? This way we can avoid the cost of letting hardware handle it automatically and also make sure that more blocks get assigned to an SM.
Even though shared memory accesses are not that costly (compared to global memory), can we still reduce the number of shared memory accesses to make the code even faster?
We can achieve both of these goals by using a thread to compute multiple elements of the output matrix C and utilizing registers wisely (so that memory accesses are even faster).
1D Thread Coarsening
The strategy here is to use one thread to compute multiple elements along a column of the output matrix C. Consider a simple 4x4 matrix multiplication. To solve this in parallel, let's define a 1x1 grid (i.e., just 1 block) and a 1x8 block (i.e., a 1D block with 8 threads in the x direction). Even though the block is 1D, we can still distribute the threads to cover the 2D space (see Figure 1).
Figure 1: 1 thread computing 2 elements of C along the column.
Just as before, we now load tiles of matrices A and B into shared memory (in multiple phases). The difference here is that a tile of A is 4x2 and a tile of B is 2x4. This is because we still need a 4x4 output but have just 8 threads at our disposal. The nice thing about using a 1D block is that we can redistribute the threads along the 4x2 and 2x4 dimensions, ensuring coalesced global memory accesses (see Figure 2).
Figure 2: Loading tiles into shared memory
Once the tiles are in shared memory, the kernel in step 4 performed standard matrix multiplication on these tiles. In this kernel, however, 1 thread computes 2 elements, so we can use registers to hold some data and reduce the number of shared memory accesses (remember that registers are private to a thread, so this was not possible in the previous kernels). We do this by creating another loop (call it k) inside each phase. This loop k retrieves an element of B along the column and stores it in a register. Then a final loop (call it c) calculates the 2 elements of C assigned to the thread using the required elements of A. Figure 3 shows the whole process for Phase 0 (the same thing is done in Phase 1, but with different matrix tiles).
Figure 3: Moving elements of B into thread registers
This might look a bit overwhelming, but just consider the elements calculated by thread[0,0], i.e., C[0,0] and C[1,0]:
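Here is a rough sketch (with illustrative variable names, not taken from the repository) of what thread[0,0] does for these two elements during Phase 0, first the step 4 way and then with a register:
// Tiled version with no registers (step 4 style): sh_B[0][0] and sh_B[1][0] are each read twice
C[0][0] += sh_A[0][0]*sh_B[0][0] + sh_A[0][1]*sh_B[1][0];
C[1][0] += sh_A[1][0]*sh_B[0][0] + sh_A[1][1]*sh_B[1][0];
// Tiled version with registers: each element of B is read from shared memory once and then reused
for (int k = 0; k < 2; k++)
{
float B_val_register = sh_B[k][0]; // 1 shared memory read, stored in a register
value[0] += sh_A[0][k] * B_val_register; // contributes to C[0,0]
value[1] += sh_A[1][k] * B_val_register; // contributes to C[1,0]
}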
Now, let's put everything we have discussed so far into code for general matrix multiplication. Defining the grid and block dimensions requires careful analysis now. We need to tie the tile sizes and the number of elements each thread computes together for compatibility. I decided to go with 64x8 tiles for A and 8x64 tiles for B. Based on this, in my code, each thread computes 8 elements.
// Coarsening Factor
#define COARSE_FACTOR 8
// Tiles of A
#define tiles_A_rows 64
#define tiles_A_cols 8
// Tiles of B
#define tiles_B_cols 64
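// Note: a tile of B has tiles_A_cols (= 8) rows, so no separate tiles_B_rows is needed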
As each thread in a block copies 1 element from global to shared memory, the block is 1D with 512 threads. As each thread computes 8 elements along a column of C, a block is assigned a 64x64 tile of C. We can use this to define a grid that spans the whole matrix C.
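Here is a minimal sketch of what the launch configuration can look like on the host side (the names dim_block and dim_grid are illustrative, not necessarily the ones used in the repository):
// 512 threads per block: 1 thread per element of the 64x8 (or 8x64) tile copied in each phase
dim3 dim_block(tiles_A_rows * tiles_A_cols);
// Each block computes a 64x64 tile of C, so the grid spans the whole output matrix
dim3 dim_grid(ceil(C_n_cols / (float)tiles_B_cols), ceil(C_n_rows / (float)tiles_A_rows));
coarse_1d_mat_mul_kernel<<<dim_grid, dim_block>>>(d_A_ptr, d_B_ptr, d_C_ptr, C_n_rows, C_n_cols, A_n_cols);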
💡
As each block has 512 threads and an SM can hold a maximum of 1536 threads, more than 1 block can be assigned to each SM! This results in much better latency hiding.
We can now start defining the kernel function. As the block is 1D and threads will get distributed differently (based on what they're doing), we can do the bookkeeping beforehand.
__global__ void coarse_1d_mat_mul_kernel(float *d_A_ptr, float *d_B_ptr, float *d_C_ptr, int C_n_rows, int C_n_cols, int A_n_cols)
{
// Details regarding this thread
const int by = blockIdx.y;
const int bx = blockIdx.x;
const int tx = threadIdx.x;
// 1D -> 2D while loading A
const int A_view_ty = tx / tiles_A_cols;
const int A_view_tx = tx % tiles_A_cols;
// 1D -> 2D while loading B
const int B_view_ty = tx / tiles_B_cols;
const int B_view_tx = tx % tiles_B_cols;
// Working on C[row,col]
const int row = tiles_A_rows*by + COARSE_FACTOR * (tx / tiles_B_cols);
const int col = tiles_B_cols*bx + (tx % tiles_B_cols);
// .
// .
// .
}
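To make this mapping concrete, take thread tx = 70 in block (bx, by): when loading A it maps to (A_view_ty, A_view_tx) = (70/8, 70%8) = (8, 6); when loading B it maps to (B_view_ty, B_view_tx) = (70/64, 70%64) = (1, 6); and for the output it works on column col = 64*bx + 6 and the 8 consecutive rows starting at row = 64*by + 8.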
The next step is to allocate the shared memory and load the tiles of A and B. The thread-to-element mapping in this case is very similar to before. The only difference is that we need to use the block indices carefully and load the correct tiles into shared memory.
__global__ void coarse_1d_mat_mul_kernel(float *d_A_ptr, float *d_B_ptr, float *d_C_ptr, int C_n_rows, int C_n_cols, int A_n_cols)
{
// Details regarding this thread
const int by = blockIdx.y;
const int bx = blockIdx.x;
const int tx = threadIdx.x;
// 1D -> 2D while loading A
const int A_view_ty = tx / tiles_A_cols;
const int A_view_tx = tx % tiles_A_cols;
// 1D -> 2D while loading B
const int B_view_ty = tx / tiles_B_cols;
const int B_view_tx = tx % tiles_B_cols;
// Working on C[row,col]
const int row = tiles_A_rows*by + COARSE_FACTOR * (tx / tiles_B_cols);
const int col = tiles_B_cols*bx + (tx % tiles_B_cols);
// Allocating shared memory for a 64x8 tile of A and an 8x64 tile of B
__shared__ float sh_A[tiles_A_rows][tiles_A_cols];
__shared__ float sh_B[tiles_A_cols][tiles_B_cols];
// Tile loading (phase by phase) is shown in the full kernel below
// .
// .
// .
}
Once the tiles are in shared memory, we define another loop k that puts an element of B into a thread register. Then the final loop can just perform the standard dot product by retrieving the elements of A one by one. After all the calculations, we store the results back into matrix C.
__global__ void coarse_1d_mat_mul_kernel(float *d_A_ptr, float *d_B_ptr, float *d_C_ptr, int C_n_rows, int C_n_cols, int A_n_cols)
{
// Details regarding this thread
const int by = blockIdx.y;
const int bx = blockIdx.x;
const int tx = threadIdx.x;
// 1D -> 2D while loading A
const int A_view_ty = tx / tiles_A_cols;
const int A_view_tx = tx % tiles_A_cols;
// 1D -> 2D while loading B
const int B_view_ty = tx / tiles_B_cols;
const int B_view_tx = tx % tiles_B_cols;
// Working on C[row,col]
const int row = tiles_A_rows*by + COARSE_FACTOR * (tx / tiles_B_cols);
const int col = tiles_B_cols*bx + (tx % tiles_B_cols);
// Allocating shared memory for a 64x8 tile of A and an 8x64 tile of B
__shared__ float sh_A[tiles_A_rows][tiles_A_cols];
__shared__ float sh_B[tiles_A_cols][tiles_B_cols];
// Phases
const int phases = ceil((float)A_n_cols/tiles_A_cols);
// Parallel mat mul
float value[COARSE_FACTOR] = {0.0f};
for (int phase = 0; phase < phases; phase++)
{
// Load Tiles into shared memory
if ((by*tiles_A_rows + A_view_ty < C_n_rows) && ((phase*tiles_A_cols + A_view_tx) < A_n_cols))
sh_A[A_view_ty][A_view_tx] = d_A_ptr[(by*tiles_A_rows + A_view_ty)*A_n_cols + (phase*tiles_A_cols + A_view_tx)];
else
sh_A[A_view_ty][A_view_tx] = 0.0f;
if (((phase*tiles_A_cols + B_view_ty) < A_n_cols) && (bx*tiles_B_cols + B_view_tx < C_n_cols))
sh_B[B_view_ty][B_view_tx] = d_B_ptr[(phase*tiles_A_cols + B_view_ty)*C_n_cols + (bx*tiles_B_cols + B_view_tx)];
else
sh_B[B_view_ty][B_view_tx] = 0.0f;
__syncthreads();
for (int k = 0; k < tiles_A_cols; k++)
{
float B_val_register = sh_B[k][B_view_tx];
// Dot product
for (int c = 0; c < COARSE_FACTOR; c++)
value[c] += sh_A[B_view_ty*COARSE_FACTOR+c][k] * B_val_register;
}
__syncthreads();
}
// Store the computed elements back into matrix C (with boundary checks)
for (int c = 0; c < COARSE_FACTOR; c++)
{
if ((row+c < C_n_rows) && (col < C_n_cols))
d_C_ptr[(row+c)*C_n_cols + col] = value[c];
}
}
💡
Note that we are performing boundary checks at every step, so this is valid for all matrix sizes!
Benchmark
Figure 4: cuBLAS vs Shared Memory vs 1D Thread Coarsening
Figure 4 shows the GFLOPS for the shared memory kernel (with tile size set to 32x32) and the 1D thread coarsening kernel (where each thread computes 8 elements) against NVIDIA's SGEMM implementation. As we saw earlier, the shared memory version achieved around 12% of what cuBLAS can do for large matrices. With 1D thread coarsening, the kernel reaches around 37% of cuBLAS. That is a big performance jump, and one reason is that we are now spending more time performing calculations (and less time accessing memory). So, why not keep going and make each thread compute even more elements by storing elements of both A and B in registers?
References
Header File: xGeMM/include/04_coarse_1d_xgemm.cuh at master · tgautam03/xGeMM
Source File: xGeMM/src/04_coarse_1d_xgemm.cu at master · tgautam03/xGeMM
Benchmark File: xGeMM/test/04_benchmark_coarse_1d.cu at master · tgautam03/xGeMM