Inference GPU requirements #797

Answered by fg-mindee
kumar-rajwnai asked this question in Q&A

Hello @kumar-rajwnai 👋

My apologies for the late reply!
This looks more like a bug report; would you mind sharing details about your environment? (You can paste the results of running this script: https://github.com/mindee/doctr/blob/main/scripts/collect_env.py)

Apart from that, some early answers/remarks:

  • all of us are able to run the model without a beefy GPU, especially for inference, so there is definitely an issue here
  • it looks like you want to use AMP; I would suggest removing .half(), which only switches the weights to fp16, and instead using a context with torch.cuda.amp.autocast as described here: https://pytorch.org/docs/stable/notes/amp_examples.html
  • there is a very high probability your OOM comes from you…
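The autocast suggestion above can be sketched as follows. This is a minimal, hypothetical example using a stand-in `nn.Linear` module rather than an actual doctr predictor; `torch.autocast` selects mixed-precision dtypes per operation during the forward pass instead of permanently casting the weights the way `.half()` does, and it falls back to CPU (bfloat16) when no GPU is available:

```python
import torch

# Stand-in model; a doctr predictor (any nn.Module) would be used the same way.
model = torch.nn.Linear(16, 4)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device).eval()

x = torch.randn(2, 16, device=device)

# Instead of model.half() (which only casts the weights to fp16),
# let autocast choose fp16/bf16 per-op inside the context.
with torch.inference_mode(), torch.autocast(device_type=device):
    out = model(x)

print(out.shape)  # torch.Size([2, 4])
```

Combined with `torch.inference_mode()`, this avoids holding autograd state during inference, which is often where unexpected GPU memory usage comes from.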

Replies: 1 comment

Answer selected by fg-mindee
Category: Q&A
Labels: module: models (Related to doctr.models), framework: pytorch (Related to PyTorch backend)
2 participants