Scaling matrix size #2
Hi, thanks for your interest in my work. Your observations are correct. The pre-compute step is the most intensive step, and it scales quadratically. Once the pre-compute is done, you can solve large grids efficiently, but the pre-compute has to be done once. In the current implementation, there is nothing you can change to make it fit on a 24 GB GPU. Even on larger GPUs you will run out of memory very soon. Unfortunately, I am not sure if I will have the time to look into it more closely and find a solution. Do you want to solve the Poisson equation for your 2300 x 800 grid very often, or just once? One thing you should keep in mind: the neural networks serve as basis functions; however, they are trained on much smaller grids. I am not sure how well they can approximate the solution of your much larger grid.
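To make the scaling concrete, here is a rough illustration. It is only an estimate and assumes "quadratic" refers to the total number of grid points N, which the actual implementation may or may not match exactly:

```python
# Rough scaling illustration (assumption: pre-compute cost grows ~ N**2,
# where N is the total number of grid points).
n_small = 400 * 400      # 160K points, the largest grid reported as tried
n_large = 2300 * 800     # ~1.84M points, the grid asked about in this thread
print(n_large / n_small)          # ~11.5x more points
print((n_large / n_small) ** 2)   # ~132x more work/memory if cost ~ N**2
```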
Many thanks for your input. I would be solving the equation many times with the same PDE and boundary-condition domains, which is why the approach suits me. But indeed, that's an important caveat: the neural network is trained on much smaller grids and might not produce a precise enough approximation at this higher resolution.
My method uses 700 basis functions; for a grid with almost 2 million points (2300 x 800) this could be too few. The maximum I tried was 160K points (400 x 400), and the accuracy already started to degrade.
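For a sense of the raw storage involved, a back-of-envelope estimate. This assumes the pre-compute materializes a dense (points x basis) matrix for the basis values, and another for their Laplacians, in float32; the actual implementation may differ:

```python
# Back-of-envelope: dense (points x basis) matrix sizes, float32 = 4 bytes.
n_basis = 700
for n_points in (400 * 400, 600 * 600, 2300 * 800):
    gb = n_points * n_basis * 4 / 1e9
    print(n_points, f"{gb:.2f} GB per dense matrix")
# 160000   0.45 GB
# 360000   1.01 GB
# 1840000  5.15 GB
```

A single such matrix fits in 24 GB even at the largest grid, which suggests the out-of-memory failure reported in this thread comes from the autograd intermediates held while the Laplacians are computed, rather than from the final matrices themselves.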
Ok, thanks for your help |
Hello,

Thanks for this library; I like your approach to the problem. But I was testing and it seems to be using more GPU memory than I expected?

If I simply change the `grid_num` in the `examples/testrun.py` file to 600, `precompute()` runs out of memory on a 24 GB GPU card [in `grad()` called from `calculate_laplace()`, or for larger matrices, within `PINN.h()`]. I actually need a rectangular grid of about 2300 x 800 for my problem (which corresponds in size to a square of about 1350), so it's not even close. I'm surprised by this. If I were able to `precompute` on a larger GPU, would `solve()` need less memory, so I could perhaps continue to use the 24 GB card?

Are there any other settings I should change to reduce the memory needed for `precompute()`? Would they sacrifice performance?

Also, there is a note in the README about adding support for interior boundary conditions in the future, but it seems this is already supported. Is that correct?

Many thanks
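One generic way to bound the peak memory of an autograd-based Laplacian is to evaluate it over the grid in chunks. This is not a setting the library exposes; `laplacian_chunked`, `model`, and `pts` below are hypothetical names, and the code is only a sketch of the chunking idea in PyTorch:

```python
import torch

def laplacian_chunked(model, pts, chunk=65536):
    """Laplacian of a scalar-output network over `pts`, computed chunk by
    chunk so the autograd graph only ever covers `chunk` points at a time."""
    out = []
    for i in range(0, pts.shape[0], chunk):
        x = pts[i:i + chunk].detach().requires_grad_(True)
        u = model(x).sum()                   # scalar, so grad() gives du/dx per point
        (g,) = torch.autograd.grad(u, x, create_graph=True)
        lap = torch.zeros_like(x[:, 0])
        for d in range(x.shape[1]):          # accumulate d2u/dx_d^2 per dimension
            (g2,) = torch.autograd.grad(g[:, d].sum(), x, retain_graph=True)
            lap = lap + g2[:, d]
        out.append(lap.detach())
    return torch.cat(out)

# Usage sketch: pts is an (N, 2) tensor of grid coordinates.
# lap = laplacian_chunked(net, pts, chunk=32768)
```

The trade-off is extra forward passes, one per chunk, so the pre-compute gets somewhat slower while peak memory scales with `chunk` instead of with the full grid.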