RuntimeError: index_select() and issue about DataParallel #2
Haha, I am very excited to be here again since I have solved some of the problems. Here I would like to share my solution for the DataParallel issue and my experience with the new PyTorch 0.4.0 on Windows 10. First, I solved the DataParallel problem:
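One common fix (not necessarily the exact change applied here) is to call custom methods such as loss() through .module on the wrapped model. A minimal sketch, with ToyCapsNet and its loss() signature as placeholders rather than the tutorial's real CapsNet:

```python
import torch
import torch.nn as nn

# Toy stand-in for the tutorial's CapsNet: the real class defines forward()
# plus a custom loss() method; only that structure matters for the fix.
class ToyCapsNet(nn.Module):
    def __init__(self):
        super(ToyCapsNet, self).__init__()
        self.fc = nn.Linear(10, 10)

    def forward(self, x):
        return self.fc(x)

    def loss(self, output, target):
        return ((output - target) ** 2).mean()

model = nn.DataParallel(ToyCapsNet())  # wrap the model as in the data parallelism tutorial

x = torch.randn(4, 10)
target = torch.randn(4, 10)
output = model(x)                      # forward() is proxied by the wrapper

# model.loss(...) raises AttributeError: 'DataParallel' object has no attribute
# 'loss', because DataParallel only proxies forward(); call custom methods on
# the wrapped module instead.
loss = model.module.loss(output, target)
print(loss.item())
```

The wrapped model still handles the forward pass across GPUs; only the custom-method lookup has to go through .module.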
Secondly, I tested this code on the officially released PyTorch 0.4.0 on Windows 10, and there are a few things to pay attention to (a combined sketch follows this list):
(1) A special multiprocessing error on Windows (see the PyTorch Windows FAQ), so all top-level code should be put under an if __name__ == '__main__': guard.
(2) An error about 'torch.sparse'; according to a similar question, it works well after replacing that call.
(3) A UserWarning to use tensor.item() instead of .data[0], so it is OK after being revised accordingly.
(These tests are based on Windows 10 + Python 3.6 + PyTorch 0.4.0.)
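A minimal combined sketch of the three adjustments, assuming the tutorial builds one-hot targets with a torch.sparse-based eye call and reads scalar losses with .data[0]; the one_hot and train names below are placeholders, not the tutorial's functions:

```python
import torch

# (2) The torch.sparse-based eye call errors on 0.4.0; plain torch.eye builds
#     the same dense identity for one-hot targets (assuming that is what was used).
def one_hot(labels, num_classes=10):
    return torch.eye(num_classes).index_select(dim=0, index=labels)

def train():
    target = one_hot(torch.tensor([3, 1, 4]))
    loss = target.sum()  # stand-in for the tutorial's margin + reconstruction loss
    # (3) 0.4.0 warns that .data[0] is deprecated; read Python scalars with .item().
    print('train loss: %.4f' % loss.item())

# (1) On Windows, DataLoader worker processes re-import the script, so the
#     entry point must sit under this guard, as the Windows FAQ recommends.
if __name__ == '__main__':
    train()
```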
First of all, thanks! It is definitely an easy-to-follow CapsNet tutorial for me as a beginner, but I found an error after running the code:
I solved this issue the same way as gram-ai/capsule-networks#13: in the Decoder class, ".data" should be removed.
Then I successfully trained on a single GPU according to this tutorial, but when I tried to train the net on two GPUs according to the PyTorch data parallelism tutorial, it produced an error:
AttributeError: 'DataParallel' object has no attribute 'loss'
I'm confused; if there is any good solution, please tell me. Thanks!
(I use Python 2.7.12 and PyTorch 0.3.0.post4.)