Hi, thank you for sharing this efficient pruning method.
I ran step 1 of the 2 pruning steps (training & masked retraining). When I train with train.py, batch size 4, and 25 epochs, it runs epochs 0-24 over and over again. When will it stop by itself, and what does this repetition mean?
Hi hhb1224,
The first step runs 4 outer iterations over rho (set by --rou-num in cfg/darknet_admm.yaml), and each iteration uses a different rho value. Starting from 0.0001, rho increases 10x per iteration, so the penalty on the pruning loss also grows 10x each time. That is why the 0-24 epoch loop repeats: each rho value drives one full training pass, and training stops after the last rho in the schedule. For more details on the ADMM algorithm, please see the paper "A Systematic DNN Weight Pruning Framework using Alternating Direction Method of Multipliers": https://arxiv.org/pdf/1804.03294.pdf
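For intuition, here is a minimal sketch (not the repository's actual code) of the rho schedule and the ADMM penalty term it scales. The names `rho_schedule`, `admm_penalty`, and the per-layer (Z, U) dictionary are hypothetical and used only for illustration; the real settings come from cfg/darknet_admm.yaml:

```python
import torch

# Sketch only: actual values are read from cfg/darknet_admm.yaml (--rou-num, initial rho).
def rho_schedule(initial_rho=1e-4, rou_num=4):
    """Rho used in each outer ADMM iteration: 1e-4, 1e-3, 1e-2, 1e-1."""
    return [initial_rho * (10 ** i) for i in range(rou_num)]

def admm_penalty(weights, aux_vars, rho):
    """Augmented-Lagrangian penalty (rho / 2) * ||W - Z + U||^2, summed over pruned layers.
    `aux_vars` maps each layer name to its (Z, U) tensors (hypothetical structure)."""
    penalty = 0.0
    for name, w in weights.items():
        z, u = aux_vars[name]
        penalty = penalty + (rho / 2) * torch.norm(w - z + u) ** 2
    return penalty

# Each rho value drives one full training pass (here, 25 epochs), so the epoch
# counter restarts once per rho; training stops after the last rho in the schedule.
for rho in rho_schedule():
    print(f"run one 0-24 epoch pass with rho = {rho}")
    # total_loss = detection_loss + admm_penalty(model_weights, admm_aux, rho)
```

With --rou-num set to 4 and 25 epochs per pass, you would see the 0-24 epoch loop four times before step 1 finishes.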