
GPU memory leak ? #10

Open
behrica opened this issue Nov 7, 2019 · 2 comments

@behrica

behrica commented Nov 7, 2019

I use bert-sklearn in a benchmark scenario,
so I repeatedly construct and use BertClassifiers, like this:

m1 = BertClassifier( bert_model="biobert-base-cased")
m1.fit(..)
m1.predict(..)
m1.save(..)

....

m2 = BertClassifier( )
m2.fit(..)
m2.predict(..)
m2.save(..)

Doing so fails with an "out of GPU memory" error when using the second classifier.
Executing the code with only one model at a time works.

So I suppose there is a GPU memory leak somewhere. Or do I need to do something special to free the memory?

@charles9n
Owner

Hi there,

As it stands in your snippet, m1 is still hanging onto GPU memory. So what I would try is either:

  1. do a del m1 after the m1.save(..) to release GPU memory, or
  2. if for some reason you absolutely need m1 around, push the BERT model back onto the CPU with m1.model.to("cpu") after the m1.save(..) (see the sketch after this list).
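
Here is a minimal sketch of both options. It assumes the fitted classifier keeps its underlying PyTorch module on .model (as above); the tiny X/y lists and the save file name are just stand-ins for your benchmark data, and the gc.collect() / torch.cuda.empty_cache() calls are optional extras to hand cached memory back to the driver sooner:

import gc
import torch
from bert_sklearn import BertClassifier

# dummy stand-ins for your benchmark texts and labels
X = ["an example sentence", "another example sentence"] * 4
y = [0, 1] * 4

m1 = BertClassifier(bert_model="biobert-base-cased")
m1.fit(X, y)
m1.predict(X)
m1.save("m1.bin")          # placeholder file name

# Option 1: drop the reference so the GPU tensors can be garbage collected
del m1
gc.collect()
torch.cuda.empty_cache()   # optional: return cached blocks to the driver

# Option 2 (if you still need m1 around): move the underlying BERT module
# back to the CPU instead of deleting it
# m1.model.to("cpu")
# torch.cuda.empty_cache()

# the second classifier now has the GPU to itself
m2 = BertClassifier()
m2.fit(X, y)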

@charles9n
Owner

Also, I don't know if you are running in a Jupyter notebook or a script, but I have noticed the memory utilization to be better and more predictable in a script. Not sure if that applies to you or not.
