GPU Utilization? #2
Comments
Hi, I have the same question: how can I change it to train on a GPU?
Bump!
Any updates here? Did anyone find out how to run on GPU?
I changed both _CPU and the device in the abstract filter class (hard-coded as cpu), but this crashes my kernel.
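For anyone attempting the same change, here is a minimal sketch of replacing a hard-coded cpu device with a configurable one. `AbstractFilter` below is an illustrative stand-in for the class in filters.py, not the repo's actual implementation:

```python
import torch

# Hypothetical sketch: instead of hard-coding "cpu" inside the filter class,
# take the device as a constructor argument and move inputs onto it.
class AbstractFilter:
    def __init__(self, device=None):
        # Default to CUDA when available, otherwise fall back to CPU.
        if device is None:
            device = "cuda" if torch.cuda.is_available() else "cpu"
        self.device = torch.device(device)

    def apply(self, x):
        # Keep every tensor on the same device to avoid
        # "expected CPU tensor / got CUDA tensor" runtime errors.
        return x.to(self.device) * 2.0  # placeholder for the real filtering op

f = AbstractFilter("cpu")
y = f.apply(torch.ones(3))
print(y.device)  # cpu
```

Note that this only helps if every operation the filter calls actually has a GPU kernel; if a custom op is CPU-only, moving tensors to CUDA will still fail.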
I had the same problem... I can't change filters.py to the CUDA type.
I don't think GPU is supported in the PyTorch version.
Has anyone tried to merge this implementation: https://github.com/HapeMask/crfrnn_layer ? The author implemented a GPU version, but I don't have a GPU to debug with.
Any updates here?
As an alternative, this repo provides an implementation that runs on GPU and supports batch size > 1:
I've been running inference with the provided pre-trained model, but I've noticed that it only runs on the CPU. I attempted to convert the code to run on a GPU; however, I get numerous runtime errors about CPU tensors vs. GPU tensors. I see that several C++ source files are included. Does this mean that this implementation of CRF-as-RNN cannot run on a GPU because the code is compiled for the CPU? Or am I missing something in my conversion of your code?
Thanks!
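The usual PyTorch conversion pattern looks like the sketch below. Note, though, that if the bundled C++ sources only implement CPU kernels, moving tensors with `.to(device)` will not be enough, because the custom op itself has no GPU kernel. The `Linear` model here is a stand-in, not the actual CRF-as-RNN network:

```python
import torch

# Standard device-conversion pattern: pick a device, then move BOTH the
# model parameters and the inputs onto it. Mixing devices is what produces
# the "CPU tensor vs GPU tensor" runtime errors described above.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(4, 2)    # stand-in for the real network
model = model.to(device)         # moves parameters and buffers
x = torch.ones(1, 4).to(device)  # inputs must be moved too

y = model(x)
print(y.shape)  # torch.Size([1, 2])
```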