[Fix] loading model on machine with different accelerator #1509
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #1509      +/-   ##
==========================================
+ Coverage   88.33%   88.41%   +0.08%
==========================================
  Files          38       38
  Lines        5099     5103       +4
==========================================
+ Hits         4504     4512       +8
+ Misses        595      591       -4
```

☔ View full report in Codecov by Sentry.
Great addition, thank you!
Can be merged after the tests pass.
Happy to help (a bit) with such a great library!
Hi @McOffsky Thank you! The pyright type checker required a change on line 2737; I fixed it quickly, so save your efforts for a later contribution. Best,
Update: waiting for the rerun of the tests to finish before merging.
Model Benchmark
🔬 Background
During my experiments with NeuralProphet I found that utils.load() restores the trainer using the config retrieved from the saved model, including the accelerator setting. This caused an error on a CPU-only machine when loading a model that had been fitted on a GPU.
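A minimal sketch of the failure mode; the file name and the comments are illustrative assumptions, not taken from the PR:

```python
from neuralprophet import utils

# Hypothetical scenario: the model was fitted with a GPU accelerator,
# saved, and copied to a CPU-only machine. Before this fix, utils.load()
# restored the trainer with the accelerator setting stored in the model,
# so loading on the CPU machine raised an error even though the tensors
# themselves could have been mapped to the CPU.
model = utils.load("model_fitted_on_gpu.np")
```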
🔮 Key changes
NeuralProphet.restore_trainer() now takes an optional argument, accelerator: str = None. If it is not provided, the value stored in the model is used.
utils.load() now creates the torch.device(map_location) in a separate variable, and that variable is used in torch.load(). The original map_location value is additionally passed to NeuralProphet.restore_trainer() as accelerator; see the sketch below.
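A sketch of the patched loading path, paraphrased from the change description above; apart from the accelerator argument and the use of torch.device(map_location), the function body and surrounding code are assumptions, not the actual diff:

```python
from typing import Optional

import torch


def load(path: str, map_location: Optional[str] = None):
    """Load a saved NeuralProphet model, remapping it to the current machine."""
    # Create the torch.device in a separate variable instead of passing
    # the raw string straight into torch.load().
    map_location_device = torch.device(map_location) if map_location else None
    model = torch.load(path, map_location=map_location_device)
    # Pass the original string on, so the trainer is rebuilt for the
    # accelerator of the current machine rather than the one stored in
    # the model (e.g. "cpu" when loading a GPU-fitted model).
    model.restore_trainer(accelerator=map_location)
    return model


# Matching method on the model side (body assumed):
# def restore_trainer(self, accelerator: str = None):
#     if accelerator is None:
#         accelerator = ...  # fall back to the value stored in the model
#     ...  # rebuild the trainer with this accelerator
```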
📋 Review Checklist
Please make sure to follow our best practices in the Contributing guidelines.