
Issues with Training Outcomes After Reducing Model Channel Count to 50% of the Original Paper Specification #16

Open
Damehou opened this issue Aug 16, 2024 · 0 comments

Hello,
I've been working on replicating a model from a paper, but I reduced the channel count to 50% of the paper's specification to improve computational efficiency. However, training now produces noticeable grid-like artifacts in the output, and the overall quality of the results is significantly worse than what the paper reports.

Modifications Made:

  • Reduced the channel count across all layers to 50% of the original specifications.

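For concreteness, the change I made is roughly equivalent to applying a uniform width multiplier to every layer. This is only an illustrative sketch; the helper name and channel counts below are hypothetical, not taken from the repository's actual code:

```python
# Hypothetical sketch: scale every layer's channel count by a uniform
# width multiplier (0.5 here). Rounding to a multiple of 8, with a
# floor of 8, keeps layer widths hardware-friendly and avoids odd
# widths that can interact badly with grouped convs or group norm.

def scaled_channels(base_channels: int, width_mult: float = 0.5) -> int:
    """Scale a channel count, rounded to a multiple of 8, minimum 8."""
    return max(8, int(round(base_channels * width_mult / 8)) * 8)

original = [64, 128, 256, 512]  # example channel counts, not the paper's
reduced = [scaled_channels(c) for c in original]
print(reduced)  # [32, 64, 128, 256]
```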
Observed Issues:

  • The model produces grid-like artifacts in the output.
  • There is a significant degradation in the quality of the results compared to the original model.

Questions:

  1. Is it normal to see such a significant drop in performance and appearance of artifacts when reducing the model’s channel count?
  2. Are there recommended strategies to mitigate these issues while still operating under reduced computational resources?
Any guidance or recommendations on how to address these issues would be greatly appreciated.
Thank you!