This repository has been archived by the owner on Oct 31, 2023. It is now read-only.

lr=0.0025 for single-GPU training still causes NaN within a few iterations #1347

Open
suxi1111 opened this issue Dec 15, 2022 · 0 comments


❓ Questions and Help

Help! When I train the network on COCO with the tutorial's single-GPU setting of lr=0.0025, the loss still blows up to NaN within a few iterations:

iteration : 1, losses : 39.65670394897461
iteration : 2, losses : 22.218917846679688
iteration : 3, losses : 68.60948944091797
iteration : 4, losses : 1266.863037109375
iteration : 5, losses : 332.03045654296875
iteration : 6, losses : 1436176.25
iteration : 7, losses : nan
iteration : 8, losses : nan
iteration : 9, losses : nan
iteration : 10, losses : nan
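
For context, here is a minimal sketch of where lr=0.0025 typically comes from (the linear scaling rule applied to a single-GPU batch of 2 instead of the multi-GPU reference batch) and of two common mitigations for this kind of early loss explosion: a longer linear warmup and gradient clipping. This is not the repository's actual trainer; the model, loss, warmup length, clip norm, and reference batch size below are illustrative assumptions.

```python
# Sketch only: plain PyTorch, assumed values, stand-in model and loss.
import torch
import torch.nn as nn

# Linear scaling rule: the reference schedule assumes a reference batch size,
# so a smaller single-GPU batch gets a proportionally smaller base LR.
REFERENCE_LR = 0.02          # assumed 8-GPU, 16-images-per-batch recipe
REFERENCE_BATCH = 16
SINGLE_GPU_BATCH = 2
base_lr = REFERENCE_LR * SINGLE_GPU_BATCH / REFERENCE_BATCH   # -> 0.0025

model = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 1))  # stand-in model
optimizer = torch.optim.SGD(model.parameters(), lr=base_lr,
                            momentum=0.9, weight_decay=1e-4)

WARMUP_ITERS = 500           # assumed; a longer warmup often helps when the loss explodes early
WARMUP_FACTOR = 1.0 / 3

def set_warmup_lr(iteration):
    """Linearly ramp the LR from base_lr * WARMUP_FACTOR up to base_lr."""
    if iteration < WARMUP_ITERS:
        alpha = iteration / WARMUP_ITERS
        factor = WARMUP_FACTOR * (1 - alpha) + alpha
    else:
        factor = 1.0
    for group in optimizer.param_groups:
        group["lr"] = base_lr * factor

def train_step(iteration, images, targets):
    set_warmup_lr(iteration)
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(images), targets)   # stand-in for the detection losses
    loss.backward()
    # Clip gradients so one bad batch cannot blow up the weights into NaN.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)
    optimizer.step()
    return loss.item()

# Tiny usage example with random data.
for it in range(1, 11):
    x, y = torch.randn(SINGLE_GPU_BATCH, 10), torch.randn(SINGLE_GPU_BATCH, 1)
    print(f"iteration : {it}, losses : {train_step(it, x, y)}")
```

The point of the sketch is only that lr=0.0025 already assumes the batch size is 2; if the loss still diverges at that rate, lengthening the warmup or clipping gradients are the usual first knobs to try.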
