
Inference Semantics #113

Open
davidthomas5412 opened this issue Nov 7, 2016 · 2 comments

@davidthomas5412 (Collaborator) commented Nov 7, 2016

The code for comparing the 'true' ray-traced reduced shear (g_i_true) of a background galaxy to its Pangloss-predicted ellipticity (e_i_predicted) still confuses me.

  1. How I think it should be:
  • The method lens_by_map extracts g_i_true from the Hilbert et al ray traced pangloss.KAPPA_FILE, pangloss.GAMMA_1_FILE, pangloss.GAMMA_2_FILE.
  • We compute g_i_predicted and then use it to compute e_i_predicted in the method lens_by_halos.
  • Compare g_i_true with e_i_predicted.
  2. How it is now:
  • The method lens_by_halos computes g_i_predicted and then uses it to compute e_i_predicted.
  • Compare g_i_predicted with e_i_predicted.

Lines 919-926 in background.py on the 'wl' branch are a good reference. What am I missing?
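
For concreteness, here is a minimal, self-contained sketch of the quantities involved, using the standard reduced-shear and ellipticity relations; the numbers are made up and the helper functions are illustrative, not the actual pangloss API:

```python
import numpy as np

# Reduced shear from the convergence and shear maps: g = (gamma1 + i*gamma2) / (1 - kappa)
def reduced_shear(kappa, gamma1, gamma2):
    return (gamma1 + 1j * gamma2) / (1.0 - kappa)

# Lensed complex ellipticity in the |g| <= 1 regime:
# e = (e_int + g) / (1 + conj(g) * e_int)
def lensed_ellipticity(e_int, g):
    return (e_int + g) / (1.0 + np.conj(g) * e_int)

# g_true would be read off the ray-traced KAPPA/GAMMA maps at the galaxy position
# (the values here are invented):
g_true = reduced_shear(0.05, 0.02, -0.01)

# g_predicted would come from lens_by_halos, and e_predicted from applying it to
# the intrinsic ellipticity:
g_predicted = 0.018 - 0.012j
e_int = 0.10 + 0.05j
e_predicted = lensed_ellipticity(e_int, g_predicted)

# Comparison 1 (what I expected): g_true vs e_predicted.
# Comparison 2 (what the code does): g_predicted vs e_predicted, both from lens_by_halos.
print(abs(g_true - e_predicted), abs(g_predicted - e_predicted))
```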

@drphilmarshall (Owner)

Good: the names of things are important. There are several different
comparisons that we might want to do.

A. Compare the shapes of background galaxies lensed by a realistic density
field (as captured by the MS maps) with the shapes of background galaxies
lensed by a set of halos extracted from that density field (i.e. each with
known Mhalo). This is the test that Spencer was doing, and it answers the
question: how good can the halo model possibly be? The comparison can be
done either in g or in e, but either way we should:

  • Get g_true for each background galaxy with lens_by_maps, apply to e_int
    to get e_true
  • Get g_predicted for each background galaxy with lens_by_halos, apply to
    e_int to get e_predicted
  • Compare g or e true with g or e predicted, using correlation functions to
    summarize the data in each case (see the sketch after this list)
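
Here is a minimal toy sketch of comparison A, using Gaussian random stand-ins for the map and halo shears and a simple residual statistic in place of a full correlation-function summary (the real comparison would use the lens_by_maps / lens_by_halos outputs):

```python
import numpy as np

np.random.seed(0)
n = 1000

def lensed_ellipticity(e_int, g):
    # e = (e_int + g) / (1 + conj(g) * e_int), valid for |g| <= 1
    return (e_int + g) / (1.0 + np.conj(g) * e_int)

# Toy stand-ins: in pangloss these would come from lens_by_maps and lens_by_halos.
e_int = np.random.normal(0.0, 0.2, n) + 1j * np.random.normal(0.0, 0.2, n)
g_true = np.random.normal(0.0, 0.03, n) + 1j * np.random.normal(0.0, 0.03, n)
g_predicted = g_true + np.random.normal(0.0, 0.01, n) + 1j * np.random.normal(0.0, 0.01, n)

e_true = lensed_ellipticity(e_int, g_true)            # maps applied to intrinsic shapes
e_predicted = lensed_ellipticity(e_int, g_predicted)  # halos applied to intrinsic shapes

# Crude stand-in for the correlation-function comparison:
print("mean |e_true - e_predicted| =", np.mean(np.abs(e_true - e_predicted)))
print("mean |g_true - g_predicted| =", np.mean(np.abs(g_true - g_predicted)))
```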

B. Compare observed (fixed) and model-predicted galaxy shapes, in order to
infer the halo masses. This is what we are now trying to do. Here, the
comparison must be between e_obs (which is fixed) and e_predicted (which we
compute using the halos). However, we are still doing functional tests, so
we make the e_obs ourselves, using our realistic density field from the MS
maps.

  • Get g_true for each background galaxy with lens_by_maps, apply to e_int
    to get e_true and add noise to get e_obs
  • For each choice of halo masses, compute g_predicted for each background
    galaxy with lens_by_halos
  • Compare g_predicted with e_obs using the log likelihood function (a
    sketch follows this list)
  • See whether the inferred Mhalo values match the true halo masses in the
    MS catalog.
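
A minimal sketch of the case-B likelihood, assuming Gaussian shape noise with dispersion sigma_e so that each observed ellipticity is treated as a noisy estimate of the predicted reduced shear (the actual pangloss noise model may differ):

```python
import numpy as np

def log_likelihood(e_obs, g_predicted, sigma_e=0.25):
    """Gaussian log likelihood of observed ellipticities e_obs (complex array,
    from lens_by_maps + noise) given the halo-model reduced shears g_predicted
    (complex array, from lens_by_halos)."""
    resid = e_obs - g_predicted
    # Two independent components (e1, e2) per galaxy, each with dispersion sigma_e:
    return (-0.5 * np.sum((resid.real ** 2 + resid.imag ** 2) / sigma_e ** 2)
            - resid.size * np.log(2.0 * np.pi * sigma_e ** 2))

# In the inference loop, g_predicted is recomputed by lens_by_halos for each trial
# set of halo masses, and the Mhalo values that maximize this likelihood are then
# compared to the true masses in the MS catalog.
```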


@davidthomas5412 (Collaborator, Author)

The names in my original post are fictitious and exist solely to simplify communication. My confusion was not with the names, but with the 'calculate_log_likelihood' method, which currently computes neither A nor B. A simple way to see this is that 'calculate_log_likelihood' only ever compares variables computed with lens_by_halos or with lens_by_map, but never compares variables across both (see lines 920-928 in background.py).

It seems like the conclusion is that we should (i) write a method that computes B, and (ii) use this method in our inference notebooks.
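
For example, the new method could look something like the sketch below (the names log_likelihood_B, e_obs, and g_predicted are hypothetical, not the existing background.py attributes); the point is just that its two inputs come from different lensing methods:

```python
import numpy as np

def log_likelihood_B(e_obs, g_predicted, sigma_e=0.25):
    """Case-B comparison: e_obs derived from lens_by_map (+ added noise),
    g_predicted derived from lens_by_halos, assuming Gaussian shape noise."""
    resid = e_obs - g_predicted
    return (-0.5 * np.sum((resid.real ** 2 + resid.imag ** 2) / sigma_e ** 2)
            - resid.size * np.log(2.0 * np.pi * sigma_e ** 2))

# Toy usage with made-up arrays standing in for the catalog columns:
e_obs = np.array([0.06 + 0.01j, -0.02 + 0.03j])        # from lens_by_map + noise
g_predicted = np.array([0.05 + 0.00j, -0.01 + 0.02j])  # from lens_by_halos
print(log_likelihood_B(e_obs, g_predicted))
```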
