Legendre nnls merge with main #307
base: main
Conversation
vector (in tdata) so that NNLS can be used instead of BVLS. Fixed bug when a ValueError exception occurs in rebinning
negative legendre terms. Refactored a little to clean up. For archetypes with legendre terms, NNLS is now actually used with an additional -1 * copy of each legendre term, so that the faster NNLS gives the same result as BVLS (see the sketch after the commit list).
because on CPU, the additional computation time for the larger array sizes dominates any time savings of NNLS vs BVLS
negative legendre terms on the GPU in lieu of BVLS into per_camera_coeff_with_least_square_batch. Moved prior_on_coeffs into zscan.py, called from within per_camera_coeff_with_least_square_batch. Pass prior_sigma, a scalar, instead of the array from fitz to archetypes. Bug fix for the case where only some object types, not all, have an archetype.
terms with 0 coefficients and make the coeffs negative for negative terms. Refactored to remove the last bit of the BVLS-NNLS trick from archetypes.py and zfind.py by passing binned instead of tdata to per_camera_coeff_with_least_square_batch.
involve using a list of bands instead of ncam to be more general.
because both the results and the timing were different.
…re_nnls_merge_with_main Resolved conflicts
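To make the NNLS-for-BVLS trick described in the commits above concrete, here is a minimal standalone sketch. This is not the redrock implementation: scipy's `nnls` and `lsq_linear` stand in for the actual CPU/GPU solvers, and the variable names and toy data are mine. The idea is that plain NNLS forces every coefficient to be non-negative, so each Legendre column is mirrored with a -1 * copy; a negative Legendre coefficient then shows up as a positive coefficient on the mirrored column, and the pairs are recombined into signed values afterwards.

```python
import numpy as np
from scipy.optimize import nnls, lsq_linear

# Toy setup: one archetype column plus nleg Legendre columns on a common grid.
rng = np.random.default_rng(0)
nwave, nleg = 500, 3
x = np.linspace(-1, 1, nwave)
legendre = np.polynomial.legendre.legvander(x, nleg - 1)    # shape (nwave, nleg)
archetype = np.exp(-0.5 * ((x - 0.1) / 0.2) ** 2)[:, None]  # shape (nwave, 1)
A = np.hstack([archetype, legendre])
flux = A @ np.array([2.0, 0.5, -0.3, 0.1]) + 0.01 * rng.standard_normal(nwave)

# Reference answer: BVLS with the archetype coefficient bounded >= 0 and the
# Legendre coefficients left unbounded.
lower = np.array([0.0] + [-np.inf] * nleg)
upper = np.full(1 + nleg, np.inf)
bvls = lsq_linear(A, flux, bounds=(lower, upper), method='bvls')

# The trick: append -1 * each Legendre column and solve with plain NNLS,
# which constrains every coefficient to be >= 0.
A_mirrored = np.hstack([A, -legendre])
c_all, _ = nnls(A_mirrored, flux)

# Recombine the +/- pairs back into signed Legendre coefficients.
coeff = c_all[:1 + nleg].copy()
coeff[1:] -= c_all[1 + nleg:]

print("BVLS      :", np.round(bvls.x, 4))
print("NNLS trick:", np.round(coeff, 4))
print("agree     :", np.allclose(coeff, bvls.x, atol=1e-6))
```

As the commits note, the payoff is on the GPU, where batched NNLS is faster than BVLS; on the CPU the extra mirrored columns make the arrays larger, and that additional computation can eat up the savings.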
@sbailey When this was initially pushed out there were conflicts, so I created a new branch (legendre_nnls_merge_with_main, instead of the original legendre_nnls) to resolve them.
After attempting to resolve the conflicts, and realizing I needed to update to a newer desi environment, everything ran happily but the results did not match those from my earlier branch. After a lot of playing around with various branches in various desi environments, I have finally satisfied myself that everything is ok with my branch to merge into main. There were major output differences between running in environment 23.1 vs 24.4, AND major differences between my legendre_nnls and legendre_nnls_merge_with_main branches running under 24.4. I have finally concluded that all of these are due to template changes and code changes unrelated to anything in my updates. If I run from the main branch without archetypes, I now get identical output to legendre_nnls_merge_with_main. If I run with archetypes, I get np.allclose agreement between them (see the comparison sketch after this comment):
FYI, here are the differences in my legendre_nnls branch with the exact same code, run in environment 23.1 vs 24.4:
And the difference between legendre_nnls and legendre_nnls_merge_with_main is even bigger:
Keep in mind that this is without archetypes, so it does not touch anything I recently did. Clearly one file had a 9e99 flag set in one code version vs the other. With --archetypes:
But again, all of these differences seem to come from the templates and code independent of anything I did, and the main branch seems to agree with my legendre_nnls_merge_with_main branch. I did notice that there were changes in the templates between the various DESI environments, and between the main/merged code versus the older branch. Finally, there is still a significant slowdown in the new version in the fine redshift scan that I am investigating; it is independent of archetypes and of any code I touched.
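For reference, the kind of output comparison described above can be done along these lines. This is a rough sketch, assuming two redrock output FITS catalogs readable with astropy; the file names, HDU name, and column list are my assumptions rather than anything taken from this PR:

```python
import numpy as np
from astropy.table import Table

# Hypothetical paths to the two runs being compared (placeholders).
cat_a = Table.read("redrock-main.fits", hdu="REDSHIFTS")
cat_b = Table.read("redrock-merge.fits", hdu="REDSHIFTS")

# Compare the key per-target quantities column by column.
for col in ("Z", "ZERR", "ZWARN", "CHI2"):
    same = np.allclose(np.asarray(cat_a[col], dtype=float),
                       np.asarray(cat_b[col], dtype=float),
                       rtol=1e-6, atol=0.0, equal_nan=True)
    print(f"{col}: allclose = {same}")
```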
Created this branch to sort out the differences between main and legendre_nnls. Many changes had been made to main since legendre_nnls was branched off, and some conflicts had to be manually resolved, particularly with bands in archetypes.py.