Instrumental resolution broadening #245
Conversation
I also like this idea. It might be nice if we could reuse … Currently the functions take the widths and their units separately, which is nicer than having to input redundant …
I guess initially we could just implement the … With …, ignoring the 2D issue for now, we could probably add 1D Lorentzian kernels in …
I think …
Good point! I like the use of …
Since in principle for neutron spectrometer resolutions the broadening width is a function of Q and ω, the most general width argument we could specify would be a function or lambda which takes … One thing about the polynomial interpolation: should we consider how to specify the range/limits in which the interpolation is valid? The polynomial would only be an approximation to the resolution function within some range; but if you have e.g. modes which are very high in energy, they might be outside the range of validity of the polynomial. Can we assume that users would check this themselves, or should we have another parameter, e.g. …?
Internally we could pass around that new Polynomial object instead of the coefficient list. This has optional domain and window attributes which could correspond to those input range limits. Already I find it necessary to apply a floor to output values that get too small, otherwise we have sub-bin peak widths that create increasingly tall peaks without a perceivable difference in width. Arguably it's on the user to choose an appropriate energy range, anyway...
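A minimal sketch of the `numpy.polynomial.Polynomial` idea mentioned above: the coefficients, the 0–100 meV validity range, and the 0.1 meV floor are all made-up placeholder values, not anything from Euphonic.

```python
import numpy as np
from numpy.polynomial import Polynomial

# Hypothetical cubic fit of resolution width sigma(energy) in meV.
# Setting domain == window keeps the coefficients in "plain" form
# (no internal remapping) while recording the range in which the
# fit is trusted.
sigma_fit = Polynomial([0.5, 0.02, -1e-4, 2e-7],
                       domain=[0., 100.], window=[0., 100.])

energies = np.linspace(0., 100., 5)
widths = sigma_fit(energies)

# Floor the widths so sub-bin values cannot produce spuriously tall
# peaks; 0.1 meV is an arbitrary stand-in for a bin-derived limit.
widths = np.maximum(widths, 0.1)
```

The `domain` attribute only records the range; checking that the requested energies actually lie inside it would still be up to the caller (or an extra validation step).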
Even more general would be a 4x4 covariance matrix, but we aren't ready for that yet 😅 We should also consider whether that sort of thing would live in Euphonic or another code... But anyway here we are broadening a pre-calculated spectrum, and Euphonic spectra only go up to 2D. We don't have that 4-D input coordinate at the point this is applied; such a routine would need to be part of the StructureFactor methods to calculate 1d averages and S(|q|,ω) maps.
Right, I do worry that adding more and more options to … Re: …
A neat benefit of working with Polynomial is that in principle one can also use Chebyshev. But in practice these resolution functions are so smooth that it will make no difference...
Ah of course, I hadn't thought of that. In that case perhaps it is better to have …
Yeah, FWHM is more consistent with Euphonic in general. (And PyChop, actually...) It's just that the usual TOSCA function is in σ / cm⁻¹ and that's what I've been testing with 😅
Codecov Report
```
@@            Coverage Diff             @@
##           master     #245      +/-   ##
==========================================
- Coverage   96.53%   95.78%   -0.76%
==========================================
  Files          28       28
  Lines        3807     3958     +151
  Branches      752      798      +46
==========================================
+ Hits         3675     3791     +116
- Misses         77       98      +21
- Partials      55       69      +14
==========================================
```
Well, the other benefit to having a Spectrum-specific broadening tool is that it can handle the bin-width issue: spectral data is scaled so the overall intensity is unaffected by bin width, whereas the lower-level broadening methods are designed to bin from raw data (which makes sense for an adaptive DOS). It's a bit of a headache keeping the units straight, so it's nice to abstract that away. It could be a pure function instead of a method. Maybe start with pure functions, then access from command-line tools, then after a while we can review and decide what would make most sense as a method?
(force-pushed from 1f319a6 to e1004a7)
Another API aspect I'm not sure about is limiting the functional form to a Polynomial object. Although the type-hint says that, really the only features that matter are:

- it can be called with energy values to obtain widths;
- it can be rescaled (e.g. by multiplying its coefficients) for unit handling.

The former property could be satisfied by any function handle. The latter requires a suitable object and is e.g. also satisfied by the Chebyshev polynomial object. (For typical smooth resolution functions, the least-squares Polynomial and minimax Chebyshev approaches will give very similar results, but maybe in some cases the Chebyshev truncation workflow is preferred?) The second property is also not really needed: it can be handled by scaling the output instead of the object. Multiplying the coefficients is more computationally efficient, but I don't suppose that avoiding one array multiplication is really a big deal. It's easy to foresee the utility of allowing an external, more complex, resolution function to be passed as a handle. I suppose a balance between YAGNI and future-proofing would be to have …
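To illustrate the "any callable works" point: a numpy `Chebyshev` fit is a drop-in replacement for a `Polynomial` one, since both expose the same callable interface. The resolution curve and helper function below are synthetic, made up just for this sketch.

```python
import numpy as np
from numpy.polynomial import Chebyshev, Polynomial

# Synthetic smooth resolution curve (FWHM vs energy), for illustration.
energy = np.linspace(0., 50., 101)
fwhm = 1.0 + 0.05 * energy + 1e-3 * energy**2

# Least-squares Polynomial fit and Chebyshev fit of the same data.
poly_fit = Polynomial.fit(energy, fwhm, deg=3)
cheb_fit = Chebyshev.fit(energy, fwhm, deg=3)

def widths_at(energies, width_function):
    """Accept any callable mapping energy -> width."""
    return width_function(energies)

# Both objects satisfy the same interface, so either can be passed.
w_poly = widths_at(energy, poly_fit)
w_cheb = widths_at(energy, cheb_fit)
```

For a curve this smooth the two fits agree to numerical precision, which is the point made above: the type hint could be loosened to "callable" without changing behaviour.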
(force-pushed from af4ca42 to 03fc2e6)
An example of a more generic interface to form a discussion basis:

```python
spectrum1d.broaden(x_width: Union[Quantity, Callable[[Quantity], Quantity]],
                   **kwargs) -> Spectrum1D

spectrum2d.broaden(x_width: Union[Quantity, Callable[[Quantity], Quantity]],
                   y_width: Union[Quantity, Callable[[Quantity], Quantity]],
                   **kwargs) -> Spectrum2D
```

which type-dispatches to fixed- or variable-width implementations depending on inputs and also passes on the corresponding **kwargs. Pros:
Cons:
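A unit-less sketch of how that type dispatch could work, with plain floats standing in for pint Quantities; the helper names (`broaden_1d`, `_fixed_width_broaden`, `_variable_width_broaden`) are invented for illustration and are not existing Euphonic API.

```python
from typing import Callable, Union

import numpy as np

Width = Union[float, Callable[[np.ndarray], np.ndarray]]

def _fixed_width_broaden(x, y, fwhm):
    """Convolve with a single Gaussian kernel of the given FWHM."""
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
    kernel = np.exp(-0.5 * ((x - x[len(x) // 2]) / sigma)**2)
    kernel /= kernel.sum()
    return np.convolve(y, kernel, mode='same')

def _variable_width_broaden(x, y, fwhms):
    """Naive reference implementation: one Gaussian per data point."""
    dx = x[1] - x[0]
    sigmas = fwhms / (2 * np.sqrt(2 * np.log(2)))
    out = np.zeros_like(y)
    for xi, yi, s in zip(x, y, sigmas):
        out += yi * np.exp(-0.5 * ((x - xi) / s)**2) / (s * np.sqrt(2 * np.pi))
    return out * dx

def broaden_1d(x, y, x_width: Width):
    """Dispatch on the type of x_width, as in the proposed interface."""
    if callable(x_width):
        return _variable_width_broaden(x, y, x_width(x))
    return _fixed_width_broaden(x, y, x_width)
```

With a constant-valued callable the two code paths agree, which is a handy self-consistency check for whichever implementation ends up behind the dispatch.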
I think the proposed syntax looks good!
Or you can add more keyword arguments, e.g. … Or you can force users to use the …
Right, if that is available and people are trying to get that clever with their resolution functions then they might as well use this and load it with the desired off-diagonal terms. I think I'd prefer that to implementing keywords for every permutation!
I definitely prefer this new proposed interface, the more consistent use of …
Yeah, and as long as we document how to use it clearly, it shouldn't be too much trouble for users!
I do worry about this, it's ok for now as we can just tell users to look at the docs for the …
I think it's still relevant: the 'fast approximate' polynomial broadening still uses a convolve in the end, and we probably still want to emit the same warning in the case of irregular grids, right? We could in the future evaluate a function for each bin (like in calculate_dos), which would be slow but at least more correct, and I don't think adding the polynomial broadening to …
Something I've noticed in there is a logical pathway such that …
There are two issues here:
Ah, looking at the help text I see that the existing behaviour is actually documented.
I'm not sure that scaling the adaptive broadening is more useful than adding a fixed width to it, but I suppose that both are legitimate choices. Hmm, how best to handle this... 🤔 I would like to allow instrumental resolution functions to be used with adaptive broadening, but it seems incompatible with this behaviour. And we just ticked a major version number, so we shouldn't be breaking backward compatibility. (Although I'm also keen to switch that default implementation from 'reference' to 'fast'...)
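For what it's worth, "adding a fixed width" to Gaussian broadening has a standard meaning: successive Gaussian convolutions compose with variances adding, so widths combine in quadrature. This is a general identity, offered as one possible way to merge adaptive and instrumental widths, not necessarily how the API question above gets resolved.

```python
import numpy as np

def combined_sigma(adaptive_sigma, instrument_sigma):
    """Convolving two Gaussians gives a Gaussian whose variance is
    the sum of the input variances, so widths add in quadrature.
    (General identity; the function name is illustrative only.)"""
    return np.sqrt(np.asarray(adaptive_sigma)**2
                   + np.asarray(instrument_sigma)**2)
```

One nice property of the quadrature rule is that a zero instrumental width leaves the adaptive widths untouched, unlike multiplicative scaling.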
I was just about to reply with this haha. The conversion from mode gradients to widths scales with the cell size and an estimate of the q-point spacing (which is based on the number of q-points; because of symmetry weighting I don't think this is completely correct, but I believe it is what CASTEP implements, if I remember rightly). I wasn't sure this was totally correct or would cover all cases, so I wanted to make it easy to change this scaling, so I put it in the …
Yeah, when you think of the feature as "broadening" it seems weird to need two steps. But if you think of "improved DOS convergence" and "instrumental resolution" as different features it makes sense!
This might be a terrible idea, but we could make it so …
I suppose the natural endpoint then would be to have separate …

- Euphonic v1.x: …
- Euphonic v2.x: …
- Euphonic v3.x: …
This looks like a good path to me, or at least the best we're going to get!
Ok, I'll crack on with that scheme. Is there somewhere we should be recording this kind of API …?
Create some issues and slap a Euphonic 2.0/3.0 label on them? If we end up with lots we can think about organising it a bit more...
- range cutoff has been eliminated, changing results
This is not ideal; in the case of very large broadening it will tend to overestimate peak intensities. However, evaluating the kernel with "correct" scaling derived from bin widths can drastically overestimate peak heights at narrow broadening width. There are better ways of doing this; the width range can be constrained, or histogram integrals can be worked out properly. However, this is not generally done and can lead to results that look strange compared to other implementations (including the Scipy Gaussian broadening) that use simple normalisation. Hopefully we can revisit this some time.
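The trade-off described above is easy to demonstrate on a coarse grid: with a sub-bin width, the "correct" analytic Gaussian scaling overshoots both the total weight and the peak height, while simple sum-normalisation keeps the discrete kernel's total at one. The grid and width values here are illustrative only.

```python
import numpy as np

dx = 1.0                        # deliberately coarse bin width
x = np.arange(-10., 10. + dx, dx)
sigma = 0.3                     # kernel narrower than one bin

gauss = np.exp(-0.5 * (x / sigma)**2)

# Analytic scaling derived from the bin width: each bin gets
# dx * pdf(x). On this grid the discrete sum exceeds 1 and the
# peak alone is taller than the total weight should be.
analytic = gauss * dx / (sigma * np.sqrt(2 * np.pi))

# Simple normalisation: divide by the discrete sum, so the total
# weight is exactly 1 whatever the width.
summed = gauss / gauss.sum()
```

This is the "simple normalisation" used by, e.g., Scipy-style Gaussian filters, and is why the sum-normalised choice looks more familiar even though it mis-scales peak heights for very wide kernels.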
- Although we have reverted to normalised kernels, increasing the calculation range has slightly changed the normalisation magnitude
It's quite common for band structure calcs to end up with slightly mismatched q-sampling segments, so we allow it until a better option is made available.
The kernel range has slightly changed, which in turn affects normalisation. When plotted, the change to the test example is indistinguishable.
Justification same as 1d. The differences are actually a little bit larger in these cases, with a max of around 2%. Visually we can see nothing too wacky is happening.
In this data we can clearly see how the extended tail replaces zeros that previously lay outside the truncation region.
Accept multiple arguments and try to set up polynomials.
- Have checked that this doesn't break existing …
- Initial manual checks also look ok
Test files are a bit big, but this makes them easy to visually debug.
Error messages were very confusing and misleading, suggesting that there were problems with defining and importing things. Actually it was just normal problems...
(force-pushed from 9b04b2c to 368b2c9)
For new broadening methods we default to the new Chebyshev-log(energy) fits. But test data is for the old cubic fit... First we make sure that the option to select method is still working and existing tests pass.
(force-pushed from de627ae to 26f5dca)
I have updated the parametrisation of kernel width from nominal error: a wider-ranging, more robust version has been developed and is included in a manuscript to be submitted soon (:crossed_fingers:). As the existing cubic fit was used for test data and for adaptive broadening, at first we will allow them to exist in parallel and have adaptive broadening default to the cubic fit. We should drop the inferior fit in the next major version; will create a milestoned Issue for that.
This logic case only applies when there is no adaptive broadening, so backward compatibility is not an issue.
Very few changes needed! Most tests are for self-consistency or against exact broadening. Added one new reference dataset for spectrum2d.
Comparing the new 2D test data (Chebyshev polynomial log-energy fit) to the existing data (cubic fit), the difference is reassuringly small.

```python
from pathlib import Path

import matplotlib.pyplot as plt

from euphonic import Spectrum2D
from euphonic.plot import plot_2d_to_axis

fig, axes = plt.subplots(ncols=2, nrows=2)

data_dir = Path.home() / 'src/euphonic/tests_and_analysis/test/data/spectrum2d'
spectra = {
    'Unbroadened': Spectrum2D.from_json_file(data_dir / 'synthetic_x.json'),
    'cubic': Spectrum2D.from_json_file(
        data_dir / 'synthetic_x_poly_broadened.json'),
    'cheby-log': Spectrum2D.from_json_file(
        data_dir / 'synthetic_x_poly_broadened_cheby.json')}

diff = Spectrum2D(spectra['cheby-log'].x_data,
                  spectra['cheby-log'].y_data,
                  spectra['cheby-log'].z_data - spectra['cubic'].z_data)
spectra['diff'] = diff

fig.suptitle(
    f"Maximum value in broadened data: {spectra['cubic'].z_data.max():.4f} \n"
    f"Maximum difference in broadened data: {diff.z_data.max():.3e}")

for ax, (label, spectrum) in zip(axes.flatten(), spectra.items()):
    plot_2d_to_axis(spectrum, ax)
    ax.set_title(label)

fig.tight_layout()
fig.savefig('euphonic_cheby_sanity_check.png')
```
Slight correction to cheby-log parametrisation gives slight difference in corresponding test ref
The safe range for cheby-log fit is a bit smaller than the data range; it goes a bit crazy at the lowest two points!
This implements energy-dependent resolution broadening with 1-D polynomial functions.
The idea is that while Euphonic does not contain detailed instrument models, users can obtain a polynomial fit to energy-dependent broadening (derived empirically or calculated using something like PyChop or McStas) and provide the polynomial coefficients instead of a single broadening "width". We could provide information in tutorials to help with this without overcomplicating the Euphonic library.
It re-uses the interpolated broadening machinery used for adaptive DOS broadening. As this resolution broadening is energy-dependent, it would normally be performed after other calculation and binning steps.
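The interpolated machinery referenced above can be pictured as mixing a small set of fixed-width convolutions with log-spaced kernel widths, splitting each point's intensity between the two bracketing kernels. This is a simplified sketch of that idea, not Euphonic's actual implementation; the √2 spacing and 4σ kernel extent are arbitrary choices made for the example.

```python
import numpy as np

def interp_broaden(x, y, widths, spacing=np.sqrt(2)):
    """Approximate variable-width Gaussian broadening by combining a
    few fixed-width convolutions (illustrative sketch only)."""
    dx = x[1] - x[0]
    w_min, w_max = widths.min(), widths.max()
    n_kernels = max(2, int(np.ceil(np.log(w_max / w_min)
                                   / np.log(spacing))) + 1)
    kernel_widths = w_min * spacing**np.arange(n_kernels)

    # Split each point's intensity between its two bracketing kernel
    # widths, linearly in log(width).
    idx = np.clip(np.searchsorted(kernel_widths, widths) - 1,
                  0, n_kernels - 2)
    frac = (np.log(widths) - np.log(kernel_widths[idx])) / np.log(spacing)

    out = np.zeros_like(y)
    for k, sigma in enumerate(kernel_widths):
        weight = np.zeros_like(y)
        weight[idx == k] += (1 - frac[idx == k]) * y[idx == k]
        weight[idx + 1 == k] += frac[idx + 1 == k] * y[idx + 1 == k]
        if not weight.any():
            continue
        kx = np.arange(-4 * sigma, 4 * sigma + dx, dx)
        kernel = np.exp(-0.5 * (kx / sigma)**2)
        kernel /= kernel.sum()
        out += np.convolve(weight, kernel, mode='same')
    return out
```

The cost is a handful of FFT-friendly fixed-width convolutions instead of one Gaussian per bin, which is what makes the "fast approximate" approach practical for energy-dependent resolution.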
There are some obvious places for further improvement and consideration:
But for now we should discuss the API. How much of this stuff should live in `euphonic.broadening`, on `Spectrum` classes, or elsewhere?

I also note a convention difference: `euphonic.broadening` works with "width" as σ (standard deviation), while the outer functions treat width as FWHM. It's quite possible users will have a mixture of both and the conversion is simple, but the internals could be more clear and consistent about this.
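For reference, the σ-vs-FWHM mismatch is a one-line conversion for Gaussians (the helper names below are illustrative, not existing Euphonic API):

```python
import numpy as np

# For a Gaussian, FWHM = 2 * sqrt(2 * ln 2) * sigma ≈ 2.35482 * sigma.
SIGMA_TO_FWHM = 2 * np.sqrt(2 * np.log(2))

def sigma_to_fwhm(sigma):
    """Convert standard deviation to full width at half maximum."""
    return sigma * SIGMA_TO_FWHM

def fwhm_to_sigma(fwhm):
    """Convert full width at half maximum to standard deviation."""
    return fwhm / SIGMA_TO_FWHM
```

Keeping the conversion in one named constant (rather than repeating the factor at call sites) would make it easier to audit which internal code paths expect σ and which expect FWHM.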