
Scaling matrix size #2

Open

jtbr opened this issue May 31, 2024 · 5 comments


jtbr commented May 31, 2024

Hello, thanks for this library; I like your approach to the problem. However, in testing it seems to use more GPU memory than I expected.

If I simply change the grid_num in the examples/testrun.py file to 600, precompute() runs out of memory on a 24 GB GPU [in grad() called from calculate_laplace(), or, for larger matrices, within PINN.h()]. I actually need a rectangular grid of about 2300 x 800 for my problem (roughly equivalent in size to a 1350 x 1350 square), so it's not even close. This surprised me. If I were able to run precompute() on a larger GPU, would solve() need less memory, so that I could perhaps continue to use the 24 GB card?
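For reference, this is roughly how I'm judging peak memory of a single precompute() run. Only the torch.cuda calls are standard PyTorch; run_precompute is a hypothetical stand-in for whatever examples/testrun.py does, not a library API:

```python
import torch

def report_peak_gpu_memory(fn, *args, **kwargs):
    """Run fn and print the peak CUDA memory it allocated, in GB."""
    torch.cuda.empty_cache()
    torch.cuda.reset_peak_memory_stats()
    result = fn(*args, **kwargs)
    torch.cuda.synchronize()
    peak_gb = torch.cuda.max_memory_allocated() / 1024**3
    print(f"peak GPU memory: {peak_gb:.2f} GB")
    return result

# Hypothetical usage; run_precompute(grid_num) stands in for the precompute()
# call made in examples/testrun.py and is not part of the library's API.
# report_peak_gpu_memory(run_precompute, 600)
```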

Are there any other settings I should change to reduce the memory needed for precompute()? Would they sacrifice performance?

Also, there is a note in the README about adding support for interior boundary conditions in the future, but it seems this is already supported. Is that correct?

Many thanks


jtbr commented Jun 4, 2024

I ran a test to see how memory scales with grid_size. 400 x 400 is about the largest I could fit on my 24 GB GPU. precompute() does appear to use by far the most memory, so if I could manage the precompute, the solver would work well for me. But even with an 80 GB GPU it doesn't look like I could run precompute() at my problem size. See the image below. I'm not sure if there's any way to reduce memory usage during precompute().
[image: peak GPU memory vs. grid size]
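A sweep along these lines can be scripted as below. The grid sizes and the run_precompute stand-in are illustrative (not the library's API); only the torch.cuda calls are standard PyTorch:

```python
import torch

# Hypothetical sweep: run_precompute(grid_num) is a stand-in for however
# examples/testrun.py drives the precompute step.
for grid_num in (100, 200, 300, 400, 500, 600):
    torch.cuda.empty_cache()
    torch.cuda.reset_peak_memory_stats()
    try:
        run_precompute(grid_num)
        torch.cuda.synchronize()
        peak_gb = torch.cuda.max_memory_allocated() / 1024**3
        print(f"grid_num={grid_num}: peak {peak_gb:.2f} GB")
    except torch.cuda.OutOfMemoryError:  # requires PyTorch >= 1.13
        print(f"grid_num={grid_num}: out of memory")
        break
```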

matthiasnwt (Owner) commented

Hi, thanks for your interest in my work. Your observations are correct. The pre-compute step is the most memory-intensive step, and it scales quadratically. Once the pre-compute is done, you can solve large grids efficiently, but the pre-compute has to be done once.

In the current implementation, there is nothing you can change to make it fit on a 24 GB GPU. Even on larger GPUs you will run out of memory very quickly.
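As a rough, purely illustrative back-of-envelope check, assuming memory really does grow quadratically with the number of grid points and taking ~24 GB at 400 x 400 as the baseline from the measurements above:

```python
points_tested = 400 * 400        # ~160k points, roughly fills a 24 GB GPU
points_target = 2300 * 800       # ~1.84M points
ratio = points_target / points_tested   # ~11.5x more points
estimate_gb = 24 * ratio ** 2           # quadratic growth -> ~3,200 GB
print(f"estimated precompute memory: {estimate_gb:,.0f} GB")
```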

Unfortunately, I am not sure if I will have the time to look into it more closely and find a solution.

Do you want to solve the Poisson equation on your 2300 x 800 grid many times, or just once?

One thing you should keep in mind: the neural network provides the basis functions, but it is trained on much smaller grids. I am not sure how well it can approximate the solution on your much larger grid.


jtbr commented Jun 4, 2024

Many thanks for your input. I would be solving the equation many times with the same PDE and boundary-condition domains, which is why the approach suits me. But indeed, that's an important caveat: the neural network is trained on much smaller grids and might not produce a sufficiently precise approximation at this higher resolution.

matthiasnwt (Owner) commented

My method uses 700 basis functions; for a grid with almost 2 million points (2300 x 800) this could be too few. The maximum I tried was 160K points (400 x 400), and the accuracy already started to degrade.

Solving the Poisson equation for a grid with millions of points is an interesting research question. I think it would be possible to scale up my approach, but training and pre-compute will always scale quadratically with my approach; only the inference cost is constant. I think you need to look into a different method.
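To put the basis-count concern in numbers (illustrative only, using the 700 basis functions and the grid sizes mentioned above):

```python
n_basis = 700
for label, points in [("400 x 400 (largest tested)", 400 * 400),
                      ("2300 x 800 (requested)", 2300 * 800)]:
    print(f"{label}: {points / n_basis:,.0f} grid points per basis function")
# -> roughly 230 vs. 2,600 points per basis function
```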


jtbr commented Jun 11, 2024

Ok, thanks for your help
