change "plot_loss_history" because of safety #1571
Conversation
Could you point out an example where the array sizes vary during training?
Before recompiling with L-BFGS, I attempted to modify the loss function by redefining the pde function. However, the array returned by this new pde function had a different length, so the loss graph was not generated: the length of the arrays inside loss_train changes during training, and the call np.sum(loss_history.loss_train, axis=1) results in an error.
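A minimal sketch of the failure mode (the loss values are illustrative; the ragged structure is the point):

```python
import numpy as np

# Loss components recorded at two steps; the second entry has an extra
# term because the pde function was changed mid-training, so the list
# is ragged rather than a regular 2D array.
loss_train = [
    np.array([0.9, 0.4]),
    np.array([0.5, 0.2, 0.1]),
]

# np.sum(..., axis=1) needs a regular 2D array; on this ragged list
# recent NumPy raises a ValueError about an inhomogeneous shape.
try:
    np.sum(loss_train, axis=1)
except ValueError as e:
    print(e)
```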
My modification of the loss function was merely out of curiosity. However, since the change does not noticeably affect performance, and I may experiment with altering the loss function again in the future, I opened a pull request.
The new code was written after discovering that np.sum(loss_history.loss_train, axis=1) does not work when the number of loss terms changes during PDE training. The original function assumes a regular 2D NumPy array, which is not the case when the array sizes vary during training. The list comprehension instead sums each inner array individually, which handles this ragged structure. The performance impact is minimal because the data set is small (we store a loss every 1000 epochs, so the number of lists is not large).
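A sketch of the list-comprehension approach, using an illustrative stand-in for the LossHistory object rather than the exact library code:

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative stand-in for a LossHistory-like object whose loss_train
# entries are ragged (the number of loss terms changed mid-training).
class LossHistory:
    def __init__(self):
        self.steps = [0, 1000, 2000]
        self.loss_train = [
            np.array([0.9, 0.4]),
            np.array([0.5, 0.2, 0.1]),
            np.array([0.3, 0.1, 0.05]),
        ]

loss_history = LossHistory()

# Sum each recorded step's loss components individually; this works
# even when the inner arrays have different lengths.
loss_train = np.array([np.sum(losses) for losses in loss_history.loss_train])

plt.semilogy(loss_history.steps, loss_train, label="Train loss")
plt.xlabel("# Steps")
plt.legend()
plt.show()
```

Summing each step separately costs one Python-level loop, but since a loss is stored only every 1000 epochs, the list stays small and the overhead is negligible.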