Training of LR stage #15
Hi, I retrained the model with
Thanks for the response.
The training images are generated with the settings linked below. Another difference is that we generate the training dataset offline to speed up training. Since the degradation space of BSRGAN is quite large, generating the images online and training the model with a small batch size may cause problems. You may try to first synthesize the LR images offline, which would make the model training easier.
FeMaSR/options/train_FeMaSR_LQ_stage.yml, Lines 13 to 17 at 497d3ee
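For reference, synthesizing the LR images offline can be as simple as looping over the HR images and applying the BSRGAN degradation once per image. This is only a minimal sketch, assuming the degradation_bsrgan() helper from the BSRGAN repository is importable; the import path, folder names, and scale factor are placeholders, not code from this repo:

```python
import glob
import os

import cv2
import numpy as np
from utils_blindsr import degradation_bsrgan  # assumed import path (BSRGAN repo)

hr_dir = 'datasets/HR'          # placeholder input folder of HR images
lq_dir = 'datasets/LQ_bsrgan'   # placeholder output folders
gt_dir = 'datasets/GT_bsrgan'
scale = 4
os.makedirs(lq_dir, exist_ok=True)
os.makedirs(gt_dir, exist_ok=True)

for hr_path in sorted(glob.glob(os.path.join(hr_dir, '*.png'))):
    name = os.path.basename(hr_path)
    img = cv2.cvtColor(cv2.imread(hr_path), cv2.COLOR_BGR2RGB).astype(np.float32) / 255.
    # degradation_bsrgan random-crops an aligned (LQ, HQ) patch pair,
    # so both are saved to keep the training pairs consistent.
    img_lq, img_gt = degradation_bsrgan(img, sf=scale)
    cv2.imwrite(os.path.join(lq_dir, name),
                cv2.cvtColor((img_lq * 255.).round().astype(np.uint8), cv2.COLOR_RGB2BGR))
    cv2.imwrite(os.path.join(gt_dir, name),
                cv2.cvtColor((img_gt * 255.).round().astype(np.uint8), cv2.COLOR_RGB2BGR))
```

Running this once before training fixes the degradation realizations, which is what makes the offline setting easier to fit with a small batch size.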
Does the offline preprocessing for training-set generation include any other augmentation, such as 0.5~1.0 scaling before passing the images to the degradation model described in the manuscript?
No, resizing at the beginning would further enlarge the degradation space. This might also be the problem in the current online mode; you can try to set the option at
FeMaSR/options/train_FeMaSR_LQ_stage.yml, Line 22 at 497d3ee
In fact, we did not verify whether such random scaling improves or degrades performance in offline mode either; we released the same settings as in the paper so that our results can be reproduced. Since random scaling is already performed inside the degradation pipeline, there is no need to add it again at the beginning. In a word, keeping a proper degradation space eases the difficulty of model training; otherwise, you may need much more computational resources, similar to the training of BSRGAN.
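To make the point concrete, the step under discussion is an extra random rescale of the HR image before it enters the BSRGAN pipeline. The sketch below shows that step only; the flag name use_random_resize is hypothetical and is not the actual option at line 22 of train_FeMaSR_LQ_stage.yml:

```python
import random

import cv2


def maybe_random_resize(img_hr, use_random_resize=False, scale_range=(0.5, 1.0)):
    """Optionally rescale the HR image before the BSRGAN degradation.

    Stacking this on top of the BSRGAN pipeline multiplies every
    (blur, downsample, noise, JPEG) combination by a continuous scale
    factor, i.e. it further enlarges the degradation space. Disabling it
    keeps the space closer to the released setting.
    """
    if not use_random_resize:
        return img_hr
    s = random.uniform(*scale_range)
    h, w = img_hr.shape[:2]
    return cv2.resize(img_hr, (int(w * s), int(h * s)), interpolation=cv2.INTER_CUBIC)
```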
Hi there, I cannot train the network to converge to the metric values reported in the manuscript for the SR stage. All settings and experiments are for x4 SR.
Setting A. As you commented in #11, I changed the corresponding code and retrained the network for the pretraining stage. The network converged as expected, and the validation PSNR was around 24.5 dB on the DIV2K validation set, which seemed reasonable. I then further trained the network for the SR stage, but could not reproduce the results reported in the paper: the best PSNR/SSIM/LPIPS was 21.85/0.5813/0.3724 at 350K iterations.
Setting B. To localize the problem, I trained the network for the SR stage with the default options file and the HRP pretrained weights from this repo. However, it also converged to numbers very similar to Setting A.
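For reference, numbers like these can be computed over a folder of SR outputs and ground-truth images with a short evaluation loop. A sketch, assuming the pyiqa package; the directory names are placeholders:

```python
import glob
import os

import pyiqa
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
metrics = {name: pyiqa.create_metric(name, device=device)
           for name in ('psnr', 'ssim', 'lpips')}

sr_dir = 'results/DIV2K_val_SR'   # placeholder: SR outputs
gt_dir = 'datasets/DIV2K_val_HR'  # placeholder: ground-truth images
scores = {name: [] for name in metrics}

for sr_path in sorted(glob.glob(os.path.join(sr_dir, '*.png'))):
    gt_path = os.path.join(gt_dir, os.path.basename(sr_path))
    for name, metric in metrics.items():
        # full-reference metrics take (distorted, reference) image paths or tensors
        scores[name].append(metric(sr_path, gt_path).item())

for name, vals in scores.items():
    print(f'{name}: {sum(vals) / len(vals):.4f}')
```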
Would you mind giving me any suggestions or guidance about this issue?
Some information that may help:
Setting A:
Setting B:
Fullfile: https://drive.google.com/drive/folders/1MLPoIYXvWODhevk8ICSAmPHj0DP0PF-k?usp=sharing