I realize that the shard size was changed from 300MB back to 1GB. In doing so, it seems the main-wds.py script has broken.
When trying to run ./run single or ./run multi, the check_shards() function looks for 490 shards, but the number of shards is now smaller.
I also tried running the main-wds.py file myself, but got this error:
No such file or directory: './shards/imagenet-train-000488.tar'
I found that the error stems from the way the train shards are being loaded. The number 490 is hardcoded in the parser arg on this line:
parser.add_argument('--trainshards', default='./shards/imagenet-train-{000000..000490}.tar', help='path/URL for ImageNet shards',)
I changed that to match the number of training shards generated at 1GB/shard (146 in my case), but this is a stop-gap. A programmatic fix might be a better idea.
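As a rough sketch of what I mean by a programmatic fix: the shard pattern could be derived from whatever files actually exist in the shards directory, instead of hardcoding the final index. The helper name (discover_train_shards) and the default directory/prefix below are just my assumptions, not anything from main-wds.py:

```python
import argparse
import glob
import os

def discover_train_shards(shard_dir="./shards", prefix="imagenet-train-"):
    """Return a WebDataset brace pattern covering every shard found on disk.

    Assumes shards are numbered contiguously from 000000, so the last
    index can be derived from the file count rather than hardcoded.
    """
    shards = sorted(glob.glob(os.path.join(shard_dir, prefix + "*.tar")))
    if not shards:
        raise FileNotFoundError(f"no {prefix}*.tar shards found in {shard_dir}")
    last = len(shards) - 1
    return os.path.join(shard_dir, f"{prefix}{{000000..{last:06d}}}.tar")

# The arg could then default to None and be resolved at runtime, e.g.:
parser = argparse.ArgumentParser()
parser.add_argument('--trainshards', default=None,
                    help='path/URL for ImageNet shards (auto-detected when omitted)')
```

check_shards() could reuse the same count so the two never drift apart.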
If you think it might help, I can fix this in a Pull Request. Let me know, and thanks again for all your amazing work on this!