Accelerating MultiLayerQG on GPUs #373
Open

mpudig wants to merge 13 commits into FourierFlows:main from mpudig:MultiLayerQG_GPU
Commits (13, all by mpudig):

- 4f7d797 First attempt at writing kernel and workspace following Greg suggesti…
- 114977c Change wait to depend on GPU device as per example video on youtube.
- 76c7728 Rewrote kernel for more recent version of KernelAbstractions syntax. A…
- e8cd9d7 Remove GPU warning for more than two layers.
- 633797b Added KernelAbstractions to dependency list.
- 9166539 Fix typos.
- 04298c4 Fix more typos.
- b3161d8 Some simple tests changing the workgroup size showed 8 was often bett…
- 880f4f6 Fix typos.
- 63f6d9b Merge branch 'FourierFlows:main' into MultiLayerQG_GPU
- a04c5b0 Changes following suggestions of Navid and Greg.
- 5bc7c5f Merge branch 'MultiLayerQG_GPU' of https://github.com/mpudig/Geophysi…
- 102000e Fix typo.
---
I rewrote the kernel in a more general form and added `Val`. The code has sped up slightly, but the 16-thread CPU still outperforms the GPU. Compare these benchmarks to what I showed here.

GPU:

```julia
nlayers = 12; nx = 512; prob = MultiLayerQG.Problem(nlayers, GPU(); nx);

@btime stepforward!(prob)
  668.165 ms (2533 allocations: 191.19 KiB)
```

CPU with 16 threads:

```julia
nlayers = 12; nx = 512; prob = MultiLayerQG.Problem(nlayers, CPU(); nx);

@btime stepforward!(prob)
  444.419 ms (113 allocations: 5.61 KiB)
```
---
Are you sure you are timing the GPU properly?
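A common pattern for timing GPU work with BenchmarkTools is to synchronize inside the timed expression, so the measurement includes the GPU execution rather than just the asynchronous kernel launch. A minimal sketch, assuming CUDA.jl and the `prob` constructed above:

```julia
using BenchmarkTools, CUDA

# Without CUDA.@sync, @btime can return as soon as the kernels are
# launched; synchronizing makes the timing reflect the actual GPU work.
@btime CUDA.@sync stepforward!($prob)
```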
---
```julia
julia> nlayers = 12; nx = 512; prob = MultiLayerQG.Problem(nlayers, GPU(); nx); @benchmark CUDA.@sync CUDA.@time stepforward!(prob)
```

Seems to be roughly the same as above? Unless I'm misunderstanding what this benchmark is doing...
---
That's disappointing...

The first thing I would try to figure out is whether this function is indeed the bottleneck. It might be better, in fact, to simply benchmark this function in isolation.

I don't know if it matters, but I saw the workgroup is (8, 8). Usually we use (16, 16).

I would also check 3 layers first, perhaps. The double inner loop gets slower with more layers, and perhaps the computational costs scale differently on CPU vs GPU. That might give a clue.

The loop is over `k, m`, which are the slowest (last) indices in `a` and `b`. That could be an issue. If you can benchmark this operation in isolation, then you can experiment with new arrays where `k, m` are the fastest / first indices in `a` and `b`. That experiment would tell you how much slowdown the current layout is incurring.

Maybe the `@unroll` is not working for some reason. When I've seen stuff like this before, people have used matrix / linear algebra via `StaticArrays`, rather than explicit loops as used here. If you are just testing the kernel in isolation, transforming `a`, `b` to arrays of `StaticVector` could also be something to experiment with.

In general with performance engineering, one has to really be persistent and creative and test, test, test. To make this easier you want to extract the function you're trying to optimize and work with a very idealized test case that also allows you to change the data structure (rather than working with a FourierFlows script). Think of this as research. If you find that you need to rearrange memory differently, then we can come back to GeophysicalFlows and see whether that is feasible or not.
---
@glwagner @navidcy Apologies for the slow uptake!

As you suggested, I tested the `streamfunctionfrompv` function in isolation, as this was indeed the bottleneck. I tried the suggestions you proposed, and writing `a`, `b` in the kernel as 2d arrays of `StaticVector` led, by far, to the best performance improvement.

When `a`, `b` are defined as such, and `M` (which will be `S` or `S⁻¹` depending on the inversion direction) is a 2d array of `StaticArray`, the kernel is very elegant:
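A minimal sketch of what such a kernel might look like, assuming `M[i, j]` is an `SMatrix` and `a[i, j]` an `SVector` (names are illustrative, not the exact code from this PR):

```julia
using KernelAbstractions

# With static arrays, the whole vertical inversion at wavenumber
# (i, j) collapses to a single matrix-vector product: no inner
# loops over layers.
@kernel function streamfunctionfrompv_kernel!(b, M, a)
    i, j = @index(Global, NTuple)
    @inbounds b[i, j] = M[i, j] * a[i, j]
end
```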
I benchmarked this new kernel and structure of `a`, `b`, and compared it to what this PR previously proposed (what I call "current" in the attached figure). The results are pretty impressive: three orders of magnitude speed-up in some cases! (These tests were done on a V100 GPU.) With the previous code, the complexity scaled as `nlayers^4` (matrix-vector multiply + the double loop), whereas without the double loop it scales as `nlayers^2`-ish. The proposed change is also faster for the `nlayers = 2` case, which currently hardcodes the inversion step.

I'm not an expert on writing code for GPUs: how feasible do you think it would be to write all the variables required to step the model forward (`qh`, `ψh`, `q`, `ψ`, etc.) as arrays of `StaticVector`? It seems like it would require changing the FFTW structure.

Let me know what you think about this. I'd be happy to try to reorganise the code and implement it if you think it's a good idea. This sort of acceleration would be a huge boon for my research, and obviously would benefit others who want to run these sorts of layered QG simulations as well.
---
Not true. You are an expert. That is why this PR was able to succeed.
---
exactly!!
---
I don't exactly understand what the intent is. Can you spell out your design a little more clearly?

I believe that FFTs act on a `CuArray` of floats. If I understand you correctly, you would like to use `StaticVector` for the vertical direction, is that right? However, this would mean that we do not have a `CuArray` of floats, we have a `CuArray` of `StaticVector`. So I don't think this is compatible with FFTs.

I think the simplest solution is to keep the `CuArray` representation of all the arrays, but either 1) allocate new `CuArray`s of `StaticVector` as temporary variables for performing the inversion, or 2) build the `StaticVector` inside the inversion kernel.

Number 2 actually seems easier to try right now. It is something like:
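A rough sketch of option 2, assuming `qh` and `ψh` are 3d arrays with the layer index last and `S⁻¹` is a 2d array of `SMatrix` (all names illustrative):

```julia
using KernelAbstractions, StaticArrays

# Assemble an SVector from the layer dimension inside the kernel,
# do the inversion as one matrix-vector product, then write the
# result back layer by layer.
@kernel function invert_kernel!(ψh, S⁻¹, qh, ::Val{nlayers}) where {nlayers}
    i, j = @index(Global, NTuple)
    @inbounds begin
        qvec = SVector{nlayers}(ntuple(k -> qh[i, j, k], Val(nlayers)))
        ψvec = S⁻¹[i, j] * qvec
        for k in 1:nlayers
            ψh[i, j, k] = ψvec[k]
        end
    end
end
```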
There may be many creative ways to do this, though. Possibly, it is possible to reinterpret the memory occupied by a `CuArray` of `SVector` as a simple `CuArray` with a different shape. There is a function `reinterpret` for this. Then perhaps you can perform FFTs.
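A small illustration of the `reinterpret` idea, with hypothetical sizes; the reinterpreted view shares memory with the original, so no copy is made:

```julia
using CUDA, StaticArrays

nlayers, nx, ny = 3, 8, 8

# A 2d CuArray of SVectors, one vector per horizontal grid point.
a = CuArray(rand(SVector{nlayers, Float64}, nx, ny))

# View the same memory as a plain nlayers × nx × ny CuArray of Float64;
# horizontal FFTs could then act along dims 2 and 3.
flat = reshape(reinterpret(Float64, a), nlayers, nx, ny)
```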