fitting overhaul #305
```python
klass = utils.import_class(self.fit_module, self.fit_class_name)
self.fit_klass = klass
self.bounds = bounds if bounds is not None else klass.get_default_bounds()
```
hmmm... should the bounds replace the defaults completely, or just overwrite specific ones 🤔? Same in the fitting code. Same question for constants etc.
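One possible middle ground, sketched for illustration (this continues the quoted snippet above and assumes `get_default_bounds()` returns a dict keyed by parameter name, which isn't confirmed in this PR):

```python
# Overlay user-supplied bounds on the class defaults instead of replacing
# them wholesale; parameters not mentioned keep their default bounds.
defaults = klass.get_default_bounds()
self.bounds = {**defaults, **(bounds or {})}
```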
ndscan/fitting/fitting.py (outdated)
```python
param_bounds: Optional[Dict[str, Tuple[float, float]]] = None,
fixed_params: Optional[Dict[str, float]] = None,
initial_values: Optional[Dict[str, float]] = None,
scale_factors: Optional[Dict[str, float]] = None,
```
We could allow fit classes to specify default scale factors. This isn't so useful for general purpose fits like the sinusoid but might be for more specialised fit classes. I haven't bothered to add it for an initial implementation as it's easy to add later.
```
    parameters. If not specified, the defaults from the fit class are used.
initial_values: dictionary specifying initial parameter values to use in
    the fit. These override the values found by the fit's parameter estimator.
scale_factors: dictionary specifying scale factors for parameters. The
```
I did briefly consider having a mechanism that allows the user to get default scale factors from the axis units (e.g. if I'm scanning a time in us, then 1e6 is probably a good scale factor), but this felt like more effort than it's worth! We could also do something like the default scale factor estimation in `oitg.fitting`, but AFAICT this can do nasty things if the initial parameter guess is close to (but not equal to) zero -- although I might be wrong, I haven't fully understood that code!
Super quick first pass on basic style issues -- I'll do another pass to look at the design 👍
```python
import numpy as np
import logging
from typing import Any, Callable, Dict, List, Iterable, Optional, Set, Tuple, Union
import collections
from enum import Enum
import dataclasses
```
Suggested change:

```python
import collections
import dataclasses
from enum import Enum
import logging
from typing import Any, Callable, Dict, List, Iterable, Optional, Set, Tuple, Union
import numpy as np
```
I don't know about ndscan, but generally we'd arrange the imports in 3 groups:
- standard library
- third party
- internal
and it's generally considered good practice to alphabetise within the groups, to discourage deliberate ordering of imports to mask dependency issues.
Personally, I don't see the point of differentiating between third-party packages coming from the standard library and from other sources; it just adds an artificial visual distinction where there is no actual semantic difference (e.g. ARTIQ is, for `ndscan.experiment`, in effect the "standard library").

But yes, please do try to keep things roughly grouped and in alphabetical order. (I'm sure there are places where that's not quite followed in ndscan; I've long outsourced caring about mechanical stylistic issues to tools, but I don't currently have one for imports in the CI setup, although I wouldn't be opposed in principle to adding something like isort.)
```python
def __post_init__(self):
    if type(self) == NamedFit:
        self.kind = AnalysisType.NAMED_FIT
```
Please move this to the `__post_init__` of `NamedFit`, without the conditional. Unless there was some reason for it to be here?
I'm actually unsure about this whole construct, but I'll think about that later.
> I'm actually unsure about this whole construct, but I'll think about that later.
Yeah, this was a bit of an experiment.
raise ValueError(f"Duplicate analysis result channel name '{name}' " + | ||
f"in analysis for axes [{axes}]") |
raise ValueError(f"Duplicate analysis result channel name '{name}' " + | |
f"in analysis for axes [{axes}]") | |
raise ValueError(f"Duplicate analysis result channel name '{name}' " | |
f"in analysis for axes [{axes}]") |
No need for the +
here.
Please keep to the existing coding style. Implicit string concatenation is evil; I'm still hoping the Python core devs will at some point see the light and remove it (e.g. as suggested by Guido himself as well).
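(The usual argument against it, shown for illustration: a forgotten comma silently merges adjacent elements instead of raising an error.)

```python
# A missing comma after "p" concatenates the two strings at parse time:
channels = [
    "p"
    "p_err",
]
assert channels == ["pp_err"]  # one element, no error raised
```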
```python
from scipy import optimize


# TODO: type annotations! (What's the best way of annotating np arrays?)
```
This is something I find annoying too -- I'd like to be able to say what dimensions I expect the array to be. I will send you a link to how I've handled this in an OxIonics project, but afaik there's nothing built into numpy for this. You'd just have to use `np.ndarray`.
I've had a look at this as well in the context of NAC3, and came to the same conclusion; this isn't nicely supported in NumPy at all. There is some support for type annotations now (e.g. you can do `ndarray[int32, 2]`), but IIRC it isn't even clearly defined what the index types should even be.
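For reference, a minimal sketch of what `numpy.typing` does offer (NumPy >= 1.21): the dtype can be annotated, but the shape still can't.

```python
import numpy as np
import numpy.typing as npt


def residuals(x: npt.NDArray[np.float64],
              y: npt.NDArray[np.float64]) -> npt.NDArray[np.float64]:
    # NDArray parametrises the dtype only; dimensionality is unchecked.
    return y - np.sin(x)
```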
@hartytp could you explain the design in [...]? I note [...]. The docstring for [...]
@lochsh all good questions!
This was essentially just trying to mimic the structure that was already used in ndscan, but replacing dictionaries with objects to improve discoverability. Some background... previously the code in default analysis looked like this (see `ndscan/experiment/default_analysis.py`, lines 392 to 403 at d96251c).

Since the online fits are done in applets, which only communicate with the master through datasets, all the scan data gets serialised and stuffed inside a dataset, which happens inside [...]
Coming back to your questions: because the current [...]

Edit: putting it another way, I think this is an instance where I'm struggling because the structure/intent of what's going on here is rather implicit. The move to dataclasses here was trying to make it all a bit more explicit.
Maybe the docstring here could be improved (any suggestions?). The intention here is that this dataclass describes the fit object, giving all the information we need to perform the fit later on. The fit object should be any class available in the Python path that inherits from `ndscan.fitting.FitBase`.

One might think "why not pass the fit object in directly rather than just describe it?". The motivation here is that we need the applets to be able to do their own local fitting in separate processes. So we need to give them enough information to instantiate the fit object for themselves.
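For illustration, the rehydration step this enables on the applet side might look roughly like the following (the helper name is hypothetical; the field names mirror the snippet at the top of this thread):

```python
import importlib


def resolve_fit_class(fit_module: str, fit_class_name: str):
    """Look up the fit class in a separate (e.g. applet) process.

    The rest of the description (bounds, fixed params, initial values, ...)
    is then enough to construct and run the fit locally.
    """
    module = importlib.import_module(fit_module)
    return getattr(module, fit_class_name)
```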
Agreed, these names could be clearer! I think @dnadlinger was considering renaming [...]. I struggled a bit with the naming of the [...]. As to [...]
Thinking about this more, I wonder if a better design would be to scrap the [...] (see `ndscan/plots/model/__init__.py`, lines 252 to 258 at 6fce114) [...] `Analysis` subclasses.

Side note: right now a lot of the serialisation in [...]
I agree that cleaning up fitting routines and having a few robust, high-quality fit functions that work under a wide variety of conditions is a good idea. Fitting data is a very general task; is there a reason to make this specific to ndscan? People might want to use these classes/functions for data analysis in other contexts, too. For example, they might be used in Jupyter notebooks for general-purpose fitting. Having a dependency on ndscan (which probably requires artiq) seems like a lot of overhead in this case. Could [...]
(force-pushed from 9a07b6e to 5973bef)
@lochsh I thought more about your feedback. Since I am not aware of any need for a case where [...]

an only marginally cursed compatibility layer added to make this compatible with existing code.
… be consistent with the rest of ndscan
…a derived parameter function
…ify bounds for all parameters, handle (again) the lack of derived parameter functions in some cases
There are now only two test failures: [...]
I think this PR is now substantively complete in its first iteration. The main outstanding item is deciding what new tests we want and implementing them. Other than that, it's waiting for reviews.
Thanks for raising that @AUTProgram! This is one of the design choices I made here that I'm not totally sure about. I completely agree that we should expect our user-written fit code to be used outside of artiq experiments, e.g. in data post-analysis notebooks as you say. I feel that the [...]

So, what are our options? [...]
I'm not totally sure which I prefer. (1) is probably the path of least resistance, though, even if it's not my preference. In the end, I think this is going to depend somewhat on what the fate of this PR turns out to be. I'm not sure if it's likely to end up getting merged into upstream. If not, and we push some version of it onto our fork only, that changes some of my thinking about the best place to put this sort of code.
Regarding dependencies, code in the other subpackages shouldn't have dependencies into [...]
ack. It's easy to move some classes around to avoid this. I just wasn't sure where the best place for them was.
I do largely agree with that. The flipside is that sometimes it's useful to build a strawman implementation to get a clear feel for how the design choices will play out in detail.
Am I wrong in thinking that this PR could be cast into that form by moving the bits from [...]
agreed, that would be great. Re goodness of fit, that feels like something which would naturally be built on top of this PR, right? (e.g. add a [...]
Interesting. I'm not fully sure I know what you have in mind here. Could you provide a link to relevant code / a pointer in the right direction?
Added Rabi flop fits, which serve the purpose of the [...]
Anyway, I'm going to leave this here for now, pending feedback.
Good call! Implemented; that's much cleaner than the way we were doing things.
```diff
@@ -413,17 +433,9 @@ def describe_online_analyses(
             if isinstance(v, ResultChannel)
         ]

         analysis_identifier = self.analysis_identifier
```
🚨 Potentially contentious... 🚨

If no analysis identifier is given, we attempt to generate a unique identifier by mangling together the fit type with the result channels (there is more info we could in principle add here -- even the entire fit description -- but this is pretty good). The generated identifier mangles the names of the result channels fed into the fit (in practice this means the channels fed into `y` and `y_err`, since only 1D fits are supported at present).

Result channels are given a name in the fragment where they are originally defined, but can be renamed in the context they are used in (e.g. subscans rename result channels by appending a prefix to avoid collisions). In the original code, the local names within the scan context were used, which seems sensible. However, a side effect is that if the user doesn't explicitly give us an analysis identifier, we cannot generate one until `describe_online_analyses`.

This commit introduces a change in behaviour by generating the analysis identifier using the original channel names. The motivation was that (as far as I can tell, but I may be wrong) the result channels need to be defined in the `DefaultAnalysis.__init__` method, which does not have access to the scan context. I wanted to generate `p` and `chi2` result channels for the analysis, and it seemed sensible to name them based on the analysis identifier, which then required a change in behaviour.

The new behaviour is more likely to cause an accidental collision, but it's not obvious to me how much of a practical concern this is (what real use-case would this affect?). So, I think it's fine, but I also haven't got my head fully around the details so may be missing something.

NB in many cases the result channels defined in the original fragment will just be the readout `p` and `p_err`, so there isn't much value in mangling that into the name since they don't tell us much...

In any case, the tests now pass, but that may say more about the test coverage than the correctness of the design here.
In hindsight, a better design here would probably have been to use an `ExportedResult` to export `chi2` or `p` rather than the implementation here. That way we could revert all of the changes around `analysis_identifier`. It's also a bit more explicit and more consistent with the rest of the interface. It requires a little bit of special casing in `DefaultAnalysis` to handle, but is probably the better design overall. TODO!
nit pick: would `FitResult` be a more descriptive name than `ExportedResult`?
```python
y_fit = self.evaluate(self._x)[1]
chi_2 = np.sum(np.power((self._y - y_fit) / self._y_err, 2))
p = stats.chi2.sf(chi_2, n)
```
Todo: check the standard definition of `p` (is it cdf or sf? i.e. if all points lie on the curve, is p 1 or 0).
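For what it's worth, a quick check of the convention used in the snippet above: `sf` is the survival function (1 − cdf), so a perfect fit (chi² = 0) gives p = 1 and a terrible fit gives p → 0.

```python
from scipy import stats

assert stats.chi2.sf(0.0, 10) == 1.0   # all points on the curve -> p = 1
assert stats.chi2.sf(1e6, 10) < 1e-12  # wildly bad fit -> p ~ 0
assert stats.chi2.cdf(0.0, 10) == 0.0  # the cdf convention would give 0 instead
```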
Just saw I missed this in the above (although I don't really have the bandwidth for any work on this still):

I'm not sure whether this is based on a misunderstanding of the architecture, or I am just misunderstanding what you mean here. The point of this is precisely to accommodate other online analysis types (such as ones based on custom code loaded at runtime, as proposed here), but of course, the applet somehow needs to know how to deal with them. Since the design of ndscan is such that the only interface between experiment and applets or other postprocessing code is through the well-defined schema and associated dataset contents (doing anything different would pretty much be going against the ARTIQ model), there needs to be some registry that maps string descriptions to the code handling this on the applet side. The if statement (+ fallthrough warning) you linked is precisely that registry. For instance, there is no immediate need for an oitg.fitting compatibility layer; we can just keep the old [...]
```diff
@@ -127,7 +170,7 @@ def required_axes(self) -> Set[ParamHandle]:

     def describe_online_analyses(
         self, context: AnnotationContext
-    ) -> Tuple[List[Dict[str, Any]], Dict[str, Dict[str, Any]]]:
+    ) -> Tuple[List[Dict[str, Any]], Dict[str, FitDescription]]:
```
This probably shouldn't be a `FitDescription`, as there might be other online analysis types.
ack. I wasn't sure quite how to name this class. Maybe better to have an `AnalysisSchema` (or `AnalysisDescription`, or bikeshed any other name) base class to go alongside `Analysis`? With the idea being that the `Analysis` object lives experiment-side, while `AnalysisSchema` is broadcast to the applets etc. Then we have a natural `FitAnalysisSchema` object here to go with the new `FitAnalysis` class (`OnlineFit` replacement).
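A rough sketch of what that split might look like, using the names floated above (everything here is a proposal, not existing ndscan API):

```python
import dataclasses
from typing import Dict, Optional, Tuple


@dataclasses.dataclass
class AnalysisSchema:
    """Serialisable analysis description, broadcast to applets."""
    kind: str  # dispatch key for the applet-side registry


@dataclasses.dataclass
class FitAnalysisSchema(AnalysisSchema):
    """Everything an applet needs to re-instantiate the fit locally."""
    fit_module: str
    fit_class_name: str
    param_bounds: Optional[Dict[str, Tuple[float, float]]] = None
```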
```diff
         self._recompute_in_progress = False
         self.updated.emit()


-def _run_fit(fit_type, xs, ys, y_errs, constants, initial_values):
+def _run_fit(fit_obj):
```
Does this actually work? I kept the interface painfully scalar since I figured that the arguments would need to be transferred out of process to the executor somehow, but perhaps the fit object is pickled/unpickled just fine?
yes! The only things that don't work are the fit result properties that I dynamically create during `__init__` (and, in any case, I'm not sure those are really good design anyway).

...although I'm still not convinced that this is the right way of handling it. I was thinking of changing this to pass the `FitDescription` into the executor and then have the code here do exactly the same thing as the applet.
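For context, a self-contained sketch of the pickling behaviour in question (module-level classes and their instance attributes round-trip through a `ProcessPoolExecutor` fine; properties created dynamically at `__init__` time are a different story, as noted above):

```python
from concurrent.futures import ProcessPoolExecutor


class DummyFit:
    """Stand-in for a fit object; module-level classes pickle cleanly."""
    def __init__(self, xs, ys):
        self.xs, self.ys = xs, ys

    def fit(self):
        return sum(self.ys) / len(self.ys)


def _run_fit(fit_obj):
    # fit_obj is pickled into the worker process and unpickled there.
    return fit_obj.fit()


if __name__ == "__main__":
    with ProcessPoolExecutor() as executor:
        print(executor.submit(_run_fit, DummyFit([0, 1], [2.0, 4.0])).result())
        # -> 3.0
```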
```diff
@@ -114,7 +115,7 @@ def d(key):
     for i in range(len(timestamps)):
         prev = 0.0 if i == 0 else timestamps[i - 1]
         cur = timestamps[i]
-        self.assertGreater(cur, prev)
+        self.assertGreaterEqual(cur, prev)
```
Did you actually run into an issue here? I used strict inequality on purpose to reject the case where e.g. all timestamps are just set to the same value by accident, since the granularity of time.monotonic should be considerably better than microseconds.
I did, but it's possible that it was unrelated and I didn't dig into it properly (e.g. it could have been an artiq version conflict?). This is a bit sloppy anyway; I'll revert this change and open an issue if I hit it again.
```python
def default(self, obj):
    if isinstance(obj, numpy.integer):
        return int(obj)
    if isinstance(obj, numpy.floating):
        return float(obj)
    if isinstance(obj, numpy.ndarray):
        return obj.tolist()
    if dataclasses.is_dataclass(obj):
```
I don't dislike this (although the `dump_json` docstring would need adapting), but I wonder whether we can come up with some sort of best practices for using dataclasses/… in PYON. (I used JSON here instead of PYON on purpose to make it easier to load the result files from e.g. Mathematica or Julia, but that could be adapted.)
> but I wonder whether we can come up with some sort of best practices for using dataclasses/… in PYON

You mean best practices within ndscan, or best practices for general pyon? If we do go down the route of replacing the dictionary schema with something like dataclasses, I was wondering about having a `Schema` (or `SerialisableDataClass` or whatever) base class which helps to formalise some of this. That could then provide `to_dict` and `from_dict` (or `describe` and `decode` or whatever) methods to help keep things uniform.
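A minimal sketch of that idea, with all names taken from the suggestion above and the implementation details assumed:

```python
import dataclasses
from typing import Any, Dict, Type, TypeVar

T = TypeVar("T", bound="Schema")


@dataclasses.dataclass
class Schema:
    """Base class giving schema dataclasses a uniform (de)serialisation API."""

    def to_dict(self) -> Dict[str, Any]:
        # asdict() recurses into nested dataclasses, lists and dicts.
        return dataclasses.asdict(self)

    @classmethod
    def from_dict(cls: Type[T], data: Dict[str, Any]) -> T:
        # Naive inverse: fine for flat schemata; nested dataclass fields
        # would need per-field reconstruction in a real implementation.
        return cls(**data)
```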
```
Assumes the values of `y` are independent and normally distributed.
Returns the tuple (Chi-Squared, p-value)
```
I haven't had a closer look at this at all, but it might be valuable to offer some guidance here to users regarding the fact that our data is pretty much never actually normally distributed when working with ion readout data.
mmm... I'm not sure that I have much intelligent to say here (but that probably mainly speaks to my limited knowledge of statistics) beyond "if the errors are well-approximated by a normal distribution then the Chi2 p-value is probably a decent estimator for the fit significance."
A couple of other comments here:

- I was thinking of perhaps adding an additional `get_significance` (returns a number between 0 and 1) method to `FitBase`, which would default to returning the Chi-Squared p-value. That leaves the option open for people to do something cleverer if desired (see the sketch after this list).
- I was also thinking of splitting `FitBase.fit` into two parts: a fit initialisation part and a part that does the fitting. The latter would default to least-squares fitting but allows people to do some other form of parameter estimation if desired.
- I was thinking of also letting `ExportedResult` take a result channel and a fit significance parameter. If the fit significance is above the threshold then the linked dataset is updated. This could also be done through a post-fit callback inside the `Analysis` class.
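A sketch of the first suggestion as a free function (the argument names and degrees-of-freedom bookkeeping are assumptions; on `FitBase` this would just read the fitted data from `self`):

```python
import numpy as np
from scipy import stats


def get_significance(y, y_fit, y_err, num_free_params) -> float:
    """Default significance in [0, 1]: the Chi-Squared p-value.

    Subclasses could override this with something cleverer.
    """
    n_dof = len(y) - num_free_params
    chi_2 = np.sum(((y - y_fit) / y_err) ** 2)
    return float(stats.chi2.sf(chi_2, n_dof))
```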
```python
from concurrent.futures import ProcessPoolExecutor
from pyqtgraph import SignalProxy
from qasync import QtCore
from typing import Any, Dict
from ...utils import FIT_OBJECTS
from ...experiment import FitDescription
```
As mentioned in the discussion thread, I'd be keen to keep the clean separation between `experiment` and other packages. `experiment` is allowed to import ARTIQ, including the whole dependency on the coredevice stuff (pulling in the compiler dependencies like llvmlite, etc.), whereas the rest isn't (well, except for `artiq.{dashboard, gui}` for the dashboard, and `artiq.applets.simple` for the applet).

I'm not opposed to documenting the structure of the metadata (schemata/…) more explicitly through dataclasses, but imho those should be in some sort of extra package. Currently, `ndscan.utils` is the dumping ground for shared things, but for the schema stuff, an extra package may be more appropriate.
👍 I wonder if it's worth having a schemata package then?
Thanks. I think it's a little of me misunderstanding the architecture and a little of me not expressing myself clearly. I agree that other analysis types can be supported through the current design in principle. But it's not something that a user can easily do at present, since it needs modification to [...]
Thinking about what we do with this PR...
There is a pretty well-contained piece on starting to implement this. This work isn't required for the rest of this PR; it's just here because I found myself needing to write documentation about the schema to understand how things fitted together, and a simple dataclass felt like a good way of doing that. Since there isn't an objection to doing this in principle, I'll open a discussion thread to discuss it further.

Other than that, this feels like the big thing we need to decide on: should the functionality added by this PR be implemented by adding another option to the registry (or replacing the old one and adding the compatibility layer, as I've opted for here, if we feel that's a less crufty approach)? Or should we add a more flexible interface for things that want to add content to plots?
Some further thoughts on this...

**Validation**

I had a look at the NIST scanning framework as prior art. My initial thoughts: [...]

**Reacting to fit results**

[...]

**Split fit from model**

[...]

Anyway, as David points out, mostly this is stuff I can play around with by implementing a new analysis class, so I'll do that on our fork. The one thing we need to agree on here is how to handle things on the applet side.
Oh, one other comment here: [...]
Thanks for the feedback everyone. Here is how I suggest we proceed:

**Fitting library**

On reflection, it makes more sense for this to live outside of [...]

**Interfaces**

It sounds like there is consensus that a more concretely defined interface between the experiment-side and the applet-side of the code would be useful. I have some thoughts about how this could look and will spin up a proof-of-concept PR. One thing I'd like is to build in explicit support for custom implementations of the interface classes ([...])

**Fitting framework**

The original aim of this PR was to extend the current [...]. Another thing on my radar is a potential extension of [...].

Once the work on fitting code / interfaces is done, this becomes something that can be done entirely in user code. I think the best path forward might be for me to start by getting something up and running in our fragment library. Once we're basically happy with that, we can have a discussion about where this falls between: not of general interest, should stay in our codebase; general interest but not suitable for merging into [...]
@dnadlinger what would you think about a minimal change here to allow custom applet-side analyses? Something like:

```diff
diff --git a/ndscan/plots/model/__init__.py b/ndscan/plots/model/__init__.py
index e9d5e16..dcf28f0 100644
--- a/ndscan/plots/model/__init__.py
+++ b/ndscan/plots/model/__init__.py
@@ -25,6 +25,8 @@ situations.)
 import logging
 import numpy
+import importlib
+import inspect
 from qasync import QtCore
 from typing import Any, Callable, Dict, List, Optional
 from .online_analysis import OnlineNamedFitAnalysis
@@ -164,6 +166,24 @@ class SinglePointModel(Model):
         raise NotImplementedError


+def import_class(module_name: str, class_name: str):
+    """
+    Imports a named class from a module in the python path or raises an exception.
+    """
+    try:
+        module = importlib.import_module(module_name)
+    except ImportError:
+        raise ValueError(f'Cannot import module "{module_name}"')
+
+    module_classes = [
+        name for name, obj in inspect.getmembers(module) if inspect.isclass(obj)
+    ]
+    if class_name not in module_classes:
+        raise ValueError(f'Class "{class_name}" not in module "{module_name}"')
+
+    return getattr(module, class_name)
+
+
 class ScanModel(Model):
     points_rewritten = QtCore.pyqtSignal(dict)
     points_appended = QtCore.pyqtSignal(dict)
@@ -251,6 +271,11 @@ class ScanModel(Model):
         kind = schema["kind"]
         if kind == "named_fit":
             self._online_analyses[name] = OnlineNamedFitAnalysis(schema, self)
+        elif kind == "custom":
+            module_name = schema["module_name"]
+            module_class = schema["module_class"]
+            cls = import_class(module_name, module_class)
+            self._online_analyses[name] = cls(schema, self)
         else:
             logger.warning("Ignoring unsupported online analysis type: '%s'", kind)
```

Edit: this also needs custom annotation item handling (see `ndscan/plots/model/__init__.py`, line 249 at 7116e4a).
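For concreteness, the schema entry that this `elif` branch would dispatch on might look like the following (the module/class values are purely illustrative):

```python
schema = {
    "kind": "custom",
    # Any importable module/class on the applet's Python path:
    "module_name": "my_fragments.analyses",
    "module_class": "MyOnlineAnalysis",
    # ... plus whatever extra fields the custom analysis needs.
}
```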
- `ndscan.fitting` module with `Sinusoid` as an example fit class

**Example of the new framework**

[...]

**To do before merge:**

- [...] `oitg.fitting` and a bit of special casing in ndscan
- [...] `ndscan.fitting` (how should we annotate arrays?)

One other nice consequence of this is that the error channels this creates are properly configured. In contrast, with `CustomAnalysis` there is no way of telling ndscan that one channel is an error bar for another.

**Why replace `oitg.fitting`?**

I've replaced `oitg.fitting.FitBase` with a new class, `ndscan.fitting.FitBase`, which is a complete rewrite from scratch. While the idea behind `oitg.fitting.FitBase` is great, I've always found the implementation rather convoluted and unergonomic. The implementation here is (hopefully!) simpler and easier to use while still providing all of the functionality we want.

I also found the quality of the fits in `oitg.fitting` to be a bit variable. e.g. why do we have so many functions which all essentially fit sinusoids, but only one has a decent heuristic? AFAICT this is partly because the way the package is structured (exporting functions rather than the classes) makes inheritance / code reuse harder.

Given how important online fits are to ndscan, it always felt odd to me that ndscan relies on oitg to provide this functionality. Moving the fit base class into ndscan removes this coupling. In so doing, it makes it easier to break backward compatibility between the old and new fit functions. We also allow the fit classes to provide metadata such as default annotations.
Edit: another thing I don't like about the current oitg code is the way it handles parameter scaling. It normalises the parameters by their initial (estimated) values. This has a habit of blowing up if the initial guess is small. I've chosen to go down a different route here and allow the fit functions to define how each parameter scales with the `x`/`y` units, which should be more robust; see the sketch below.
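A sketch of what that contract could look like (the method name and signature here are guesses for illustration, not the PR's actual API):

```python
# In the real code this would derive from ndscan.fitting.FitBase.
class Sinusoid:
    @staticmethod
    def get_scale_factors(x_scale: float, y_scale: float) -> dict:
        # Each parameter declares how it scales with the axis units, so
        # rescaling never depends on a possibly-near-zero initial guess.
        return {
            "a": y_scale,          # amplitude scales with the y units
            "omega": 1 / x_scale,  # angular frequency scales as 1/x
            "phi": 1.0,            # phase is dimensionless
            "offset": y_scale,
        }
```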
**Using arbitrary modules**

My expectation is that `ndscan.fitting` will grow to contain a small number of high-quality basic fitting functions (probably porting a subset of `oitg.fitting`), which can be supplemented by user code, but the bar for getting code into this should be high.

After this PR we allow users to supplement these inbuilt fits with their own library of fit classes derived from `ndscan.fitting.FitBase`. One nice feature of this is that if your applet is run from an environment with your fragment library in it, the applets can run any fit code defined alongside the fragments. This makes it really easy to define small modifications to fit functions (e.g. to do a bit of pre/post processing, add a derived result, etc.) by creating derived classes.

**Exporting results**
CustomeAnalysis
to just duplicate the online fit!Related issues
#251
#141
Re #272 (comment): the new `NamedFit` object gives an example of how this kind of additional structure might look...