From ac248308b37d74e399c6ea37d950e88a932beb0d Mon Sep 17 00:00:00 2001 From: Remco de Boer <29308176+redeboer@users.noreply.github.com> Date: Thu, 29 Jun 2023 09:51:32 +0200 Subject: [PATCH 1/3] DOC: improve amplitude analysis tutorial (#489) * DOC: explain rest frame of decaying particle * DOC: remove `reaction_info` definition * DOC: show amplitude model expressions * DOC: remove `max_complexity` argument * FIX: fix typo "show(s)" * MAINT: avoid using `_` and `__` IPython variables * MAINT: merge Markdown cells (apart from headings) * MAINT: merge `src = aslatex` lines * MAINT: rename `intensity` to `intensity_func` * MAINT: write MyST cross-references where possible MyST references work better with `jupyterlab-myst` than reStructuredText Co-authored-by: Lena Poepping --- .cspell.json | 2 + docs/amplitude-analysis.ipynb | 276 +++++++++++++++++++++------------- docs/conf.py | 1 + 3 files changed, 171 insertions(+), 108 deletions(-) diff --git a/.cspell.json b/.cspell.json index 54b417d7..d40af59a 100644 --- a/.cspell.json +++ b/.cspell.json @@ -62,6 +62,7 @@ "ampform", "arange", "asdot", + "aslatex", "atfi", "autoupdate", "axhline", @@ -138,6 +139,7 @@ "nbformat", "nbins", "nbody", + "nbsp", "nbsphinx", "ncalls", "ncols", diff --git a/docs/amplitude-analysis.ipynb b/docs/amplitude-analysis.ipynb index f49a4cbf..52ff4bf4 100644 --- a/docs/amplitude-analysis.ipynb +++ b/docs/amplitude-analysis.ipynb @@ -61,7 +61,9 @@ }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "tags": [] + }, "source": [ "# Amplitude analysis" ] @@ -70,17 +72,17 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "While TensorWaves can handle {doc}`general mathematical expressions `, it was originally created to perform **amplitude analysis** / **Partial Wave Analysis** (PWA), that is, to fit amplitude models to four-momenta data samples.\n", + "While TensorWaves can handle [general mathematical expressions](./usage.ipynb), it was originally created to perform **amplitude analysis** / **Partial Wave Analysis** (PWA), that is, to fit amplitude models to four-momenta data samples.\n", "\n", "This notebook shows how to do an amplitude analysis with the [ComPWA](https://github.com/ComPWA) packages [QRules](https://qrules.rtfd.io), [AmpForm](https://AmpForm.rtfd.io), and [TensorWaves](https://tensorwaves.rtfd.io). The ComPWA workflow generally consists of three stages:\n", "\n", - "1. {ref}`Create an amplitude model ` with {mod}`qrules` and {mod}`ampform`.\n", - "2. {ref}`Generate hit-and-miss data samples ` with this amplitude model.\n", - "3. {ref}`Fit model to the data samples `.\n", + "1. [Create an amplitude model](compwa-step-1) with {mod}`qrules` and {mod}`ampform`.\n", + "2. [Generate hit-and-miss data samples](compwa-step-2) with this amplitude model.\n", + "3. [Fit model to the data samples](compwa-step-3).\n", "\n", ":::{note}\n", "\n", - "This notebook show several tricks that _can_ be helpful when doing an amplitude analysis. Most are optional though―they only serve to illustrate some tips that can be adopted and worked out further for specific analyses.\n", + "This notebook shows several tricks that _can_ be helpful when doing an amplitude analysis. 
**Most steps are optional** though—they only serve to illustrate some tips that can be adopted and worked out further for specific analyses.\n", "\n", ":::" ] @@ -88,7 +90,8 @@ { "cell_type": "markdown", "metadata": { - "jp-MarkdownHeadingCollapsed": true + "jp-MarkdownHeadingCollapsed": true, + "tags": [] }, "source": [ "(compwa-step-1)=\n", @@ -101,7 +104,7 @@ "jp-MarkdownHeadingCollapsed": true }, "source": [ - "Whether {ref}`generating data ` or {ref}`fitting a model `, TensorWaves takes mathematical expressions as input. When that expression is an amplitude model, it is most convenient to formulate it with {mod}`qrules` and {mod}`ampform`." + "Whether [generating data](compwa-step-2) or [fitting a model](compwa-step-3), TensorWaves takes mathematical expressions as input. When that expression is an amplitude model, it is most convenient to formulate it with {mod}`qrules` and {mod}`ampform`." ] }, { @@ -117,7 +120,7 @@ "class: dropdown\n", "---\n", "\n", - "As {ref}`compwa-step-3` serves to illustrate usage only, we make the amplitude model here a bit simpler by not allowing $\\omega$ resonances (which are narrow and therefore hard to fit). For this reason, we can also limit the {class}`~qrules.settings.InteractionType` to {attr}`~qrules.settings.InteractionType.STRONG`.\n", + "As [](compwa-step-3) serves to illustrate usage only, we make the amplitude model here a bit simpler by not allowing $\\omega$ resonances (which are narrow and therefore hard to fit). For this reason, we can also limit the {class}`~qrules.settings.InteractionType` to {attr}`~qrules.settings.InteractionType.STRONG`.\n", "\n", ":::" ] @@ -187,17 +190,51 @@ "cell_type": "code", "execution_count": null, "metadata": { - "tags": [ - "full-width" - ] + "tags": [] }, "outputs": [], "source": [ "import ampform\n", "\n", "model_builder = ampform.get_builder(reaction)\n", - "model = model_builder.formulate()\n", - "dict(model.parameter_defaults)" + "model_no_dynamics = model_builder.formulate()\n", + "model_no_dynamics.intensity" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "jupyter": { + "source_hidden": true + }, + "tags": [ + "full-width", + "hide-input" + ] + }, + "outputs": [], + "source": [ + "from ampform.io import aslatex\n", + "from IPython.display import Math\n", + "\n", + "Math(aslatex(model_no_dynamics.amplitudes))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "jupyter": { + "source_hidden": true + }, + "tags": [ + "hide-input" + ] + }, + "outputs": [], + "source": [ + "Math(aslatex(model_no_dynamics.parameter_defaults))" ] }, { @@ -214,7 +251,9 @@ { "cell_type": "code", "execution_count": null, - "metadata": {}, + "metadata": { + "tags": [] + }, "outputs": [], "source": [ "from ampform.dynamics.builder import (\n", @@ -239,13 +278,21 @@ "cell_type": "code", "execution_count": null, "metadata": { + "jupyter": { + "source_hidden": true + }, "tags": [ - "full-width" + "hide-input" ] }, "outputs": [], "source": [ - "sorted(model.parameter_defaults, key=str)" + "sorted_parameter_defaults = {\n", + " symbol: model.parameter_defaults[symbol]\n", + " for symbol in sorted(model.parameter_defaults, key=str)\n", + "}\n", + "src = aslatex(sorted_parameter_defaults)\n", + "Math(src)" ] }, { @@ -272,13 +319,8 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "In the next steps, we use this {class}`~ampform.helicity.HelicityModel` as a template for a computational function to {ref}`generate data ` and to {ref}`perform a fit `." 
- ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ + "In the next steps, we use this {class}`~ampform.helicity.HelicityModel` as a template for a computational function to [generate data](compwa-step-2) and to [perform a fit](compwa-step-3).\n", + "\n", ":::{tip}\n", "\n", "See more advanced examples on {doc}`AmpForm's usage page `.\n", @@ -307,9 +349,9 @@ ":::\n", "::::\n", "\n", - "In this section, we use the {class}`~ampform.helicity.HelicityModel` that we created with {mod}`ampform` in {ref}`the previous step ` to generate a data sample via hit & miss Monte Carlo. We do this with the {mod}`.data` module.\n", + "In this section, we use the {class}`~ampform.helicity.HelicityModel` that we created with {mod}`ampform` in [the previous step](compwa-step-1) to generate a data sample via hit & miss Monte Carlo. We do this with the {mod}`.data` module.\n", "\n", - "First, we {func}`~pickle.load` the {class}`~ampform.helicity.HelicityModel` that was created in the previous step. This does not have to be done if the model has been generated in the same script or notebook, but can be useful if the model was generated elsewhere." + "**Optionally**, we can {func}`~pickle.load` the {class}`~ampform.helicity.HelicityModel` that was created in the previous step. This does not have to be done if the model has been generated in the same script or notebook (like in this notebook), but can be useful if the model was generated elsewhere." ] }, { @@ -323,7 +365,7 @@ "from ampform.helicity import HelicityModel\n", "\n", "with open(\"helicity_model.pickle\", \"rb\") as model_file:\n", - " model: HelicityModel = pickle.load(model_file)" + " imported_model: HelicityModel = pickle.load(model_file)" ] }, { @@ -339,19 +381,20 @@ }, "outputs": [], "source": [ - "reaction_info = model.reaction_info\n", - "initial_state = next(iter(reaction_info.initial_state.values()))\n", + "initial_state, *_ = imported_model.reaction_info.initial_state.values()\n", "print(\"Initial state:\")\n", "print(\" \", initial_state.name)\n", "print(\"Final state:\")\n", - "for i, p in reaction_info.final_state.items():\n", - " print(f\" {i}: {p.name}\")" + "for i, p in imported_model.reaction_info.final_state.items():\n", + " print(f\" {i}: {p.name}\")\n", + "del initial_state" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ + "(compwa-step-2.1)=\n", "### 2.1 Generate phase space sample" ] }, @@ -361,7 +404,7 @@ "source": [ "The {class}`~qrules.transition.ReactionInfo` class defines the constraints of the phase space. As such, we have enough information to generate a **phase-space sample** for this particle reaction. We do this with a {class}`.TFPhaseSpaceGenerator` class, which is a {class}`.DataGenerator` for a {obj}`.DataSample` of **four-momenta** arrays (using {obj}`tensorflow ` and the [`phasespace`](https://phasespace.readthedocs.io) package as a back-end). We also need to construct a {class}`.RealNumberGenerator` that can generate random numbers. {class}`.TFUniformRealNumberGenerator` is the natural choice here.\n", "\n", - "As opposed to the main {ref}`amplitude-analysis:Step 2: Generate data` of the main usage example page, we will generate a **deterministic** data sample. This can be done by feeding a {class}`.RealNumberGenerator` with a specific {attr}`~.RealNumberGenerator.seed` and giving that generator to the {meth}`.TFPhaseSpaceGenerator.generate` method:" + "As opposed to the main [](compwa-step-2) of the main usage example page, we will generate a **deterministic** data sample. 
This can be done by feeding a {class}`.RealNumberGenerator` with a specific {attr}`~.RealNumberGenerator.seed` and giving that generator to the {meth}`.TFPhaseSpaceGenerator.generate` method:"
   ]
  },
  {
@@ -392,8 +435,8 @@
    "\n",
    "rng = TFUniformRealNumberGenerator(seed=0)\n",
    "phsp_generator = TFPhaseSpaceGenerator(\n",
-    "    initial_state_mass=reaction_info.initial_state[-1].mass,\n",
-    "    final_state_masses={i: p.mass for i, p in reaction_info.final_state.items()},\n",
+    "    initial_state_mass=reaction.initial_state[-1].mass,\n",
+    "    final_state_masses={i: p.mass for i, p in reaction.final_state.items()},\n",
    ")\n",
    "phsp_momenta = phsp_generator.generate(100_000, rng)"
   ]
@@ -430,13 +473,47 @@
    "raw_mimetype": "text/restructuredtext"
   },
   "source": [
-    "The resulting phase space sample is a {obj}`dict` of final state IDs to an {obj}`~numpy.array` of four-momenta. In the last step, we converted this sample in such a way that it is rendered as an understandable {class}`pandas.DataFrame`."
+    "The resulting phase space sample is a {obj}`dict` of final state IDs to an {obj}`~numpy.array` of four-momenta. In the last step, we converted this sample in such a way that it is rendered as an understandable {class}`pandas.DataFrame`.\n",
+    "\n",
+    ":::{admonition} Four-momentum arrays\n",
+    "{doc}`Kinematic expressions ` from AmpForm that involve four-momenta should be formatted as $\\left(E, p_x, p_y, p_z\\right)$. In addition, the shape of input arrays should be `(n, 4)` with `n` the number of events.\n",
+    ":::\n",
+    "\n",
+    ":::{warning}\n",
+    "When using the helicity formalism, the sum of the four-momenta should be in the rest frame, that is $\\sum_i p_i = \\left(m_A, 0, 0, 0\\right)$ with $m_A$ the mass of the decaying particle $A$. This is because the helicity formalism boosts through the decay chain starting from particle $A$. **Take care to boost your experimental data into the rest frame**, for instance with the {doc}`kinematics classes provided by AmpForm `.\n",
+    ":::"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "tags": []
+   },
+   "outputs": [],
+   "source": [
+    "import numpy as np\n",
+    "\n",
+    "p = np.array(list(phsp_momenta.values()))\n",
+    "p.shape"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "tags": []
+   },
+   "outputs": [],
+   "source": [
+    "p.sum(axis=0).round(decimals=14)"
+   ]
+  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
+    "(compwa-step-2.2)=\n",
    "### 2.2 Generate intensity-based sample"
   ]
  },
@@ -448,7 +525,7 @@
    "\n",
    "An intensity-based sample is generated over a phase space sample using the {class}`.IntensityDistributionGenerator`. Its usage is similar to {class}`.TFPhaseSpaceGenerator`, but now you have to provide a {obj}`.Function` as well as a {obj}`.DataTransformer` that is used to transform the four-momentum phase space sample to a data sample that can be understood by the {obj}`.Function`.\n",
    "\n",
-    "Now, recall that in {ref}`compwa-step-1`, we used the helicity formalism to mathematically express the reaction in terms of an amplitude model. TensorWaves needs to convert this {obj}`~ampform.helicity.HelicityModel` to a {obj}`.Function` object that can perform fast computations. This can be done with {func}`.create_parametrized_function`:"
+    "Now, recall that in [](compwa-step-1), we used the helicity formalism to mathematically express the reaction in terms of an amplitude model. 
TensorWaves needs to convert this {obj}`~ampform.helicity.HelicityModel` to a {obj}`.Function` object that can perform fast computations. This can be done with {func}`.create_parametrized_function`:"
   ]
  },
  {
@@ -464,10 +541,9 @@
    "from tensorwaves.function.sympy import create_parametrized_function\n",
    "\n",
    "unfolded_expression = model.expression.doit()\n",
-    "intensity = create_parametrized_function(\n",
+    "intensity_func = create_parametrized_function(\n",
    "    expression=unfolded_expression,\n",
    "    parameters=model.parameter_defaults,\n",
-    "    max_complexity=200,\n",
    "    backend=\"numpy\",\n",
    ")"
   ]
@@ -478,26 +554,16 @@
   "source": [
    ":::{tip}\n",
    "\n",
-    "We made use of {func}`.fast_lambdify` here by specifying `max_complexity`. See {ref}`usage/faster-lambdify:Specifying complexity`.\n",
+    "If {func}`.create_parametrized_function` takes a long time, have a look at {doc}`usage/faster-lambdify`.\n",
+    "\n",
+    ":::\n",
    "\n",
-    ":::"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
    ":::{seealso}\n",
    "\n",
    "{ref}`usage/basics:Hit & miss`\n",
    "\n",
-    ":::"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
+    ":::\n",
+    "\n",
    "A problem is that {class}`.ParametrizedBackendFunction` takes a {obj}`.DataSample` with kinematic variables for the helicity formalism as input, not a set of four-momenta. We therefore need to construct a {class}`.DataTransformer` to transform these four-momenta to function variables. In this case, we work with the helicity formalism, so we construct a {class}`.SympyDataTransformer`:"
   ]
  },
@@ -522,14 +588,9 @@
    "\n",
    "{ref}`usage:Generate and transform data`\n",
    "\n",
-    ":::"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "That's it, now we have enough info to create an intensity-based data sample. Notice how the structure of the output data is the same as the {ref}`phase-space sample we generated previously `:"
+    ":::\n",
+    "\n",
+    "That's it: now we have enough info to create an intensity-based data sample. Notice how the structure of the output data is the same as the [phase-space sample we generated previously](compwa-step-2.1):"
   ]
  },
@@ -548,12 +609,12 @@
    ")\n",
    "\n",
    "weighted_phsp_generator = TFWeightedPhaseSpaceGenerator(\n",
-    "    initial_state_mass=reaction_info.initial_state[-1].mass,\n",
-    "    final_state_masses={i: p.mass for i, p in reaction_info.final_state.items()},\n",
+    "    initial_state_mass=reaction.initial_state[-1].mass,\n",
+    "    final_state_masses={i: p.mass for i, p in reaction.final_state.items()},\n",
    ")\n",
    "data_generator = IntensityDistributionGenerator(\n",
    "    domain_generator=weighted_phsp_generator,\n",
-    "    function=intensity,\n",
+    "    function=intensity_func,\n",
    "    domain_transformer=helicity_transformer,\n",
    ")\n",
    "data_momenta = data_generator.generate(10_000, rng)\n",
@@ -583,6 +644,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
+    "(compwa-step-2.3)=\n",
    "### 2.3 Visualize kinematic variables"
   ]
  },
@@ -608,6 +670,10 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
+    "::::{note}\n",
+    "Check the remark about four-momentum format and rest frame of the decaying particle [here](compwa-step-2.1).\n",
+    "::::\n",
+    "\n",
    "The {obj}`.DataSample` is a mapping of kinematic variable names to a 1-dimensional array of values. 
The numbers you see here are final state IDs as defined in the {class}`~qrules.transition.ReactionInfo` member of the {class}`~ampform.helicity.HelicityModel`:"
   ]
  },
  {
@@ -621,7 +687,7 @@
    },
    "outputs": [],
    "source": [
-    "for state_id, particle in reaction_info.final_state.items():\n",
+    "for state_id, particle in reaction.final_state.items():\n",
    "    print(f\"ID {state_id}:\", particle.name)"
   ]
  },
@@ -634,13 +700,8 @@
    "class: dropdown\n",
    "---\n",
    "By default, {mod}`tensorwaves` only generates invariant masses of the {class}`Topologies ` that are of relevance to the decay problem. In this case, we only have resonances $f_0 \\to \\pi^0\\pi^0$. If you are interested in more invariant mass combinations, you can do so with the method {meth}`~ampform.kinematics.HelicityAdapter.register_topology`.\n",
-    "````"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
+    "````\n",
+    "\n",
    "The {obj}`.DataSample` can easily be converted to a {class}`pandas.DataFrame`:"
   ]
  },
@@ -686,7 +747,7 @@
    "from matplotlib import cm\n",
    "\n",
    "resonances = sorted(\n",
-    "    reaction_info.get_intermediate_particles(),\n",
+    "    reaction.get_intermediate_particles(),\n",
    "    key=lambda p: p.mass,\n",
    ")\n",
    "evenly_spaced_interval = np.linspace(0, 1, len(resonances))\n",
@@ -711,7 +772,7 @@
    "source": [
    ":::{seealso}\n",
    "\n",
-    "{ref}`amplitude-analysis:Extract intensity components`\n",
+    "[](#extract-intensity-components)\n",
    "\n",
    ":::"
   ]
  },
@@ -748,7 +809,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "In the {ref}`next step `, we illustrate how to {meth}`~.Minuit2.optimize` the intensity model to these data samples."
+    "In the [next step](compwa-step-3), we illustrate how to {meth}`~.Minuit2.optimize` the intensity model to these data samples."
   ]
  },
@@ -766,7 +827,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "As explained in the {ref}`previous step `, a {class}`.ParametrizedFunction` can compute a list of intensities (real numbers) for an input {obj}`.DataSample`. At this stage, we want to optimize the parameters of this {class}`.ParametrizedFunction`, so that it matches the distribution of our data sample. This is what we call 'fitting'."
+    "As explained in the [previous step](compwa-step-2), a {class}`.ParametrizedFunction` can compute a list of intensities (real numbers) for an input {obj}`.DataSample`. At this stage, we want to optimize the parameters of this {class}`.ParametrizedFunction`, so that it matches the distribution of our data sample. This is what we call 'fitting'."
   ]
  },
@@ -796,6 +857,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
+    "(compwa-step-3.1)=\n",
    "### 3.1 Prepare parametrized function"
   ]
  },
@@ -803,11 +865,11 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "In principle, we can use the same {class}`.ParametrizedFunction` as the one that we created in {ref}`amplitude-analysis:2.2 Generate intensity-based sample`. However, when fitting such a function to a data distribution, an {class}`.Optimizer` will evaluate this function numerous times, so it is smart to apply some optimizations to the underlying expression tree before-hand.\n",
+    "In principle, we can use the same {class}`.ParametrizedFunction` as the one that we created in [](compwa-step-2.2). 
However, when fitting such a function to a data distribution, an {class}`.Optimizer` will evaluate this function numerous times, so it is smart to apply some optimizations to the underlying expression tree before-hand.\n", "\n", ":::{tip}\n", "\n", - "The sections below illustrate some tricks for how to simplify the expression tree underneath a {class}`.ParametrizedFunction`. **Most of this can also be achieved with {func}`.create_cached_function`**, which is illustrated in {doc}`/usage/caching` and under {ref}`compwa-create_cached_function`. But note that it is still smart to {ref}`cast complex-valued data `.\n", + "The sections below illustrate some tricks for how to simplify the expression tree underneath a {class}`.ParametrizedFunction`. **Most of this can also be achieved with {func}`.create_cached_function`**, which is illustrated in {doc}`/usage/caching` and under [](compwa-create_cached_function). But note that it is still smart to [cast complex-valued data](#cast-complex-valued-data).\n", "\n", ":::" ] @@ -887,7 +949,7 @@ "source": [ ":::{admonition} Complex-valued parameters\n", "\n", - "If initial parameter values are {obj}`complex`, the parameter is split into a real and an imaginary part during the fit. See also {ref}`amplitude-analysis:Covariance matrix`.\n", + "If initial parameter values are {obj}`complex`, the parameter is split into a real and an imaginary part during the fit. See also [](#covariance-matrix).\n", "\n", ":::" ] @@ -946,8 +1008,8 @@ }, "outputs": [], "source": [ - "old = __\n", - "new = _\n", + "old = sp.count_ops(unfolded_expression)\n", + "new = sp.count_ops(optimized_expression)\n", "assert old > new" ] }, @@ -1001,8 +1063,8 @@ }, "outputs": [], "source": [ - "old = ___\n", - "new = _\n", + "old = sp.count_ops(unfolded_expression)\n", + "new = sp.count_ops(optimized_expression)\n", "assert old > new" ] }, @@ -1047,8 +1109,8 @@ }, "outputs": [], "source": [ - "old = __\n", - "new = _\n", + "old = sp.count_ops(unfolded_expression)\n", + "new = sp.count_ops(optimized_expression)\n", "assert old > new" ] }, @@ -1077,10 +1139,7 @@ "\n", "def safe_downcast_to_real(data: DataSample) -> DataSample:\n", " # using isrealobj instead of real_if_close to keep same array backend\n", - " return {\n", - " key: array.real if np.isrealobj(array) else array\n", - " for key, array in data.items()\n", - " }" + " return {k: v.real if np.isrealobj(v) else v for k, v in data.items()}" ] }, { @@ -1122,7 +1181,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Note that the intensities computed by the optimized function are indeed the same as the original intensity function that was created in {ref}`amplitude-analysis:2.2 Generate intensity-based sample` and that it is much faster!" + "Note that the intensities computed by the optimized function are indeed the same as the original intensity function that was created in [](compwa-step-2.2) and that it is much faster!" 
] }, { @@ -1134,7 +1193,7 @@ "# JIT-compile functions and test equality\n", "np.testing.assert_array_almost_equal(\n", " optimized_function(data_real),\n", - " intensity(data_real),\n", + " intensity_func(data_real),\n", " decimal=13,\n", ")" ] @@ -1153,7 +1212,7 @@ "metadata": {}, "outputs": [], "source": [ - "%timeit -n1 intensity(data)\n", + "%timeit -n1 intensity_func(data)\n", "%timeit -n1 optimized_function(data_real)" ] }, @@ -1221,9 +1280,9 @@ "(compwa-create_cached_function)=\n", "#### Simplified procedure: {func}`.create_cached_function`\n", "\n", - "As noted under {ref}`amplitude-analysis:3.1 Prepare parametrized function`, most of what is described in this section can be achieved with the function {func}`.create_cached_function`. The idea is described on {doc}`/usage/caching`, but in this section, we show how this translates to amplitude analysis.\n", + "As noted under [](compwa-step-3.1), most of what is described in this section can be achieved with the function {func}`.create_cached_function`. The idea is described on {doc}`/usage/caching`, but in this section, we show how this translates to amplitude analysis.\n", "\n", - "First, note that {func}`.create_cached_function` works with mappings and iterable of {class}`sympy.Symbol ` and not with the {obj}`str` mappings that we defined in {ref}`amplitude-analysis:Determine free parameters`. We can convert the parameter names back to {class}`~sympy.core.symbol.Symbol`s as follows:" + "First, note that {func}`.create_cached_function` works with mappings and iterable of {class}`sympy.Symbol ` and not with the {obj}`str` mappings that we defined in [](#determine-free-parameters). We can convert the parameter names back to {class}`~sympy.core.symbol.Symbol`s as follows:" ] }, { @@ -1259,7 +1318,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "This gives us all the information we need to create a cached function, convert the data samples to cached data samples and construct an {class}`.Estimator` with these transformed items (compare {ref}`amplitude-analysis:3.2 Define estimator`):" + "This gives us all the information we need to create a cached function, convert the data samples to cached data samples and construct an {class}`.Estimator` with these transformed items (compare [](compwa-step-3.2)):" ] }, { @@ -1290,7 +1349,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Note that, just like in {ref}`amplitude-analysis:Create computational function`, the computed intensities of both the original intensity function and the cached function are indeed the same:" + "Note that, just like in [](#create-computational-function), the computed intensities of both the original intensity function and the cached function are indeed the same:" ] }, { @@ -1301,7 +1360,7 @@ "source": [ "np.testing.assert_array_almost_equal(\n", " cached_function(cached_data),\n", - " intensity(data_real),\n", + " intensity_func(data_real),\n", " decimal=13,\n", ")" ] @@ -1320,7 +1379,7 @@ "metadata": {}, "outputs": [], "source": [ - "%timeit -n1 intensity(data_real)\n", + "%timeit -n1 intensity_func(data_real)\n", "%timeit -n1 cached_function(cached_data)" ] }, @@ -1328,6 +1387,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ + "(compwa-step-3.2)=\n", "### 3.2 Define estimator" ] }, @@ -1337,7 +1397,7 @@ "source": [ "To perform a fit, you need to define an {class}`.Estimator`. This is a measure for the discrepancy between the {class}`.ParametrizedFunction` and the data distribution to which you fit it. 
In PWA, we usually use an **unbinned negative log likelihood estimator** ({class}`.UnbinnedNLL`).\n", "\n", - "Generally, the {class}`.ParametrizedFunction` is not normalized with regards to the data sample, while a log likelihood estimator requires a normalized function. This is where the {ref}`phase space data ` comes into play again: the {class}`.ParametrizedFunction` is evaluated over the phase space data, so that its output can be used as a normalization factor.\n", + "Generally, the {class}`.ParametrizedFunction` is not normalized with regards to the data sample, while a log likelihood estimator requires a normalized function. This is where the [phase space data](compwa-step-2.1) comes into play again: the {class}`.ParametrizedFunction` is evaluated over the phase space data, so that its output can be used as a normalization factor.\n", "\n", "```{margin}\n", "If you want to correct for the efficiency of the detector, you should use a *detector-reconstructed* phase space sample.\n", @@ -1371,6 +1431,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ + "(compwa-step-3.3)=\n", "### 3.3 Optimize fit parameters" ] }, @@ -1380,7 +1441,7 @@ "source": [ "Starting the fit itself is quite simple: just create an {mod}`.optimizer` instance of your choice and call its {meth}`~.Optimizer.optimize` method to start the fitting process. The {meth}`~.Optimizer.optimize` method requires a mapping of parameter names to their initial values. **Only the parameters listed in the mapping are optimized.**\n", "\n", - "Let's have a look at our {ref}`first guess for the parameter values `. Recall that a {class}`.ParametrizedFunction` object computes the intensity for a certain {obj}`.DataSample`. This can be seen nicely when we use these intensities as weights on the phase space sample and plot it together with the original data sample. Here, we look at the invariant mass distribution projection of the final states `1` and `2`, which, {ref}`as we saw before `, is the final state particle pair $\\pi^0\\pi^0$.\n", + "Let's have a look at our [first guess for the parameter values](#determine-free-parameters). Recall that a {class}`.ParametrizedFunction` object computes the intensity for a certain {obj}`.DataSample`. This can be seen nicely when we use these intensities as weights on the phase space sample and plot it together with the original data sample. Here, we look at the invariant mass distribution projection of the final states `1` and `2`, which, [as we saw before](compwa-step-2.3), is the final state particle pair $\\pi^0\\pi^0$.\n", "\n", "Don't forget to use {meth}`~.ParametrizedFunction.update_parameters` first!" ] @@ -1402,9 +1463,8 @@ "import numpy as np\n", "from matplotlib import cm\n", "\n", - "reaction_info = model.reaction_info\n", "resonances = sorted(\n", - " reaction_info.get_intermediate_particles(),\n", + " reaction.get_intermediate_particles(),\n", " key=lambda p: p.mass,\n", ")\n", "\n", @@ -1562,7 +1622,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "In {ref}`amplitude-analysis:3.3 Optimize fit parameters`, we initialized {obj}`.Minuit2` with a {class}`.Loadable` callback. Such callback classes offer the possibility to {meth}`~.Loadable.load_latest_parameters`, so you can pick up the optimize process in case it crashes or if you pause it. Loading the latest parameters goes as follows:" + "In [](compwa-step-3.3), we initialized {obj}`.Minuit2` with a {class}`.Loadable` callback. 
Such callback classes offer the possibility to {meth}`~.Loadable.load_latest_parameters`, so you can pick up the optimize process in case it crashes or if you pause it. Loading the latest parameters goes as follows:" ] }, { @@ -1585,7 +1645,7 @@ "source": [ ":::{seealso} \n", "\n", - "{ref}`usage/basics:Callbacks` and {ref}`this example ` of a (custom) plotting callback.\n", + "{ref}`usage/basics:Callbacks` and [this example](./usage.ipynb#optimize-parameters) of a (custom) plotting callback.\n", "\n", ":::" ] @@ -1777,7 +1837,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Just like in {ref}`amplitude-analysis:2.2 Generate intensity-based sample`, these _intensity components_ can each be expressed in a computational backend. We do not have to parametrize this function now that we have already optimized the parameters, so we can just substitute the {class}`~sympy.core.symbol.Symbol`s in all expression beforehand and use {func}`.create_function` instead:" + "Just like in [](compwa-step-2.2), these _intensity components_ can each be expressed in a computational backend. We do not have to parametrize this function now that we have already optimized the parameters, so we can just substitute the {class}`~sympy.core.symbol.Symbol`s in all expression beforehand and use {func}`.create_function` instead:" ] }, { @@ -1837,7 +1897,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The result is a {class}`.PositionalArgumentFunction` that can be plotted just like in {ref}`amplitude-analysis:Plot optimized model`:" + "The result is a {class}`.PositionalArgumentFunction` that can be plotted just like in [](#plot-optimized-model):" ] }, { @@ -1911,8 +1971,8 @@ " for name, sub_expression in model.components.items()\n", " if name.startswith(\"I\")\n", "}\n", - "initial_state_mass = reaction_info.initial_state[-1].mass\n", - "final_state_masses = {i: p.mass for i, p in reaction_info.final_state.items()}\n", + "initial_state_mass = reaction.initial_state[-1].mass\n", + "final_state_masses = {i: p.mass for i, p in reaction.final_state.items()}\n", "\n", "masses = []\n", "for sub_function in sub_intensity_functions.values():\n", diff --git a/docs/conf.py b/docs/conf.py index 89f4fff3..6cbefabc 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -357,6 +357,7 @@ def get_tensorflow_url() -> str: "smartquotes", "substitution", ] +myst_heading_anchors = 4 BINDER_LINK = ( f"https://mybinder.org/v2/gh/ComPWA/{REPO_NAME}/{BRANCH}?filepath=docs/usage" ) From 75ea9a20fede9924e393fdba62997720c9c3137e Mon Sep 17 00:00:00 2001 From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com> Date: Fri, 7 Jul 2023 03:02:45 +0200 Subject: [PATCH 2/3] DX!: switch to Ruff as linter (#492) * MAINT: implement updates from pre-commit hooks * MAINT: update pip constraints and pre-commit * MAINT: upgrade to Jupyter Lab v4 --------- Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: GitHub Co-authored-by: Remco de Boer <29308176+redeboer@users.noreply.github.com> --- .constraints/py3.10.txt | 125 ++++------ .constraints/py3.7.txt | 65 +++-- .constraints/py3.8.txt | 123 ++++------ .constraints/py3.9.txt | 125 ++++------ .cspell.json | 5 - .flake8 | 60 ----- .gitpod.yml | 6 +- .mypy.ini | 33 --- .pre-commit-config.yaml | 54 ++-- .pydocstyle | 11 - .pylintrc | 44 ---- .vscode/extensions.json | 13 +- .vscode/settings.json | 15 +- README.md | 2 +- benchmarks/.pydocstyle | 4 - benchmarks/ampform.py | 19 +- benchmarks/expression.py | 9 +- codecov.yml 
| 2 +- docs/.pydocstyle | 4 - docs/_relink_references.py | 10 +- docs/amplitude-analysis.ipynb | 17 +- .../analytic-continuation.ipynb | 6 +- docs/conf.py | 25 +- docs/usage.ipynb | 11 +- docs/usage/basics.ipynb | 5 +- docs/usage/binned-fit.ipynb | 4 +- docs/usage/caching.ipynb | 7 +- docs/usage/chi-squared.ipynb | 4 +- docs/usage/faster-lambdify.ipynb | 4 +- docs/usage/unbinned-fit.ipynb | 4 +- pyproject.toml | 230 +++++++++++++++++- pyrightconfig.json | 28 --- pytest.ini | 41 ---- setup.cfg | 35 +-- setup.py | 2 - src/tensorwaves/data/__init__.py | 11 +- src/tensorwaves/data/_attrs.py | 5 +- src/tensorwaves/data/_data_sample.py | 20 +- src/tensorwaves/data/phasespace.py | 13 +- src/tensorwaves/data/rng.py | 9 +- src/tensorwaves/data/transform.py | 5 +- src/tensorwaves/estimator.py | 14 +- src/tensorwaves/function/__init__.py | 34 +-- src/tensorwaves/function/_backend.py | 9 +- src/tensorwaves/function/sympy/__init__.py | 13 +- src/tensorwaves/function/sympy/_printer.py | 3 +- src/tensorwaves/interface.py | 11 +- src/tensorwaves/optimizer/__init__.py | 4 +- src/tensorwaves/optimizer/_parameter.py | 8 +- src/tensorwaves/optimizer/callbacks.py | 33 ++- src/tensorwaves/optimizer/minuit.py | 13 +- src/tensorwaves/optimizer/scipy.py | 3 +- tests/.pydocstyle | 4 - tests/conftest.py | 1 - tests/data/test_data.py | 5 +- tests/data/test_phasespace.py | 8 +- tests/data/test_rng.py | 4 +- tests/data/test_transform.py | 2 +- tests/function/test_ampform.py | 1 - tests/function/test_backend.py | 1 - tests/function/test_function.py | 1 - tests/function/test_sympy.py | 6 +- tests/optimizer/__init__.py | 6 +- tests/optimizer/test_fit_simple_model.py | 17 +- tests/optimizer/test_gradient.py | 8 +- tests/optimizer/test_minuit.py | 7 +- tests/optimizer/test_parameter.py | 1 - tests/optimizer/test_scipy.py | 6 +- tests/test_estimator.py | 7 +- tests/test_interface.py | 1 - typings/.pydocstyle | 4 - 71 files changed, 638 insertions(+), 812 deletions(-) delete mode 100644 .flake8 delete mode 100644 .mypy.ini delete mode 100644 .pydocstyle delete mode 100644 .pylintrc delete mode 100644 benchmarks/.pydocstyle delete mode 100644 docs/.pydocstyle delete mode 100644 pyrightconfig.json delete mode 100644 pytest.ini delete mode 100644 tests/.pydocstyle delete mode 100644 typings/.pydocstyle diff --git a/.constraints/py3.10.txt b/.constraints/py3.10.txt index c19aac83..1bb1c18f 100644 --- a/.constraints/py3.10.txt +++ b/.constraints/py3.10.txt @@ -6,18 +6,15 @@ # absl-py==1.4.0 accessible-pygments==0.0.4 -aiofiles==22.1.0 -aiosqlite==0.19.0 alabaster==0.7.13 ampform==0.14.6 -anyio==3.7.0 -aquirdturtle-collapsible-headings==3.1.0 +anyio==3.7.1 argon2-cffi==21.3.0 argon2-cffi-bindings==21.2.0 arrow==1.2.3 -astroid==2.15.5 asttokens==2.2.1 astunparse==1.6.3 +async-lru==2.0.2 attrs==23.1.0 babel==2.12.1 backcall==0.2.0 @@ -30,7 +27,7 @@ cffi==1.15.1 cfgv==3.3.1 chardet==5.1.0 charset-normalizer==3.1.0 -click==8.1.3 +click==8.1.4 cloudpickle==2.2.1 colorama==0.4.6 comm==0.1.3 @@ -41,101 +38,85 @@ debugpy==1.6.7 decorator==5.1.1 defusedxml==0.7.1 deprecated==1.2.14 -dill==0.3.6 distlib==0.3.6 dm-tree==0.1.8 docutils==0.19 -exceptiongroup==1.1.1 +exceptiongroup==1.1.2 execnet==1.9.0 executing==1.2.0 fastjsonschema==2.17.1 filelock==3.12.2 -flake8==6.0.0 ; python_version >= "3.8.0" -flake8-blind-except==0.2.1 ; python_version >= "3.8.0" -flake8-bugbear==23.6.5 ; python_version >= "3.8.0" -flake8-builtins==2.1.0 ; python_version >= "3.8.0" -flake8-comprehensions==3.13.0 ; python_version >= "3.8.0" 
-flake8-future-annotations==1.1.0 ; python_version >= "3.8.0" -flake8-plugin-utils==1.3.2 -flake8-pytest-style==1.7.2 ; python_version >= "3.8.0" -flake8-rst-docstrings==0.3.0 ; python_version >= "3.8.0" -flake8-type-ignore==0.1.0.post2 ; python_version >= "3.8.0" -flake8-use-fstring==1.4 ; python_version >= "3.8.0" flatbuffers==23.5.26 fonttools==4.40.0 fqdn==1.5.1 gast==0.4.0 -google-auth==2.20.0 -google-auth-oauthlib==0.4.6 +google-auth==2.21.0 +google-auth-oauthlib==1.0.0 google-pasta==0.2.0 graphviz==0.20.1 greenlet==2.0.2 -grpcio==1.54.2 +grpcio==1.56.0 h5py==3.9.0 hepunits==2.3.2 identify==2.5.24 idna==3.4 imagesize==1.4.1 -iminuit==2.21.3 +iminuit==2.22.0 importlib-metadata==6.7.0 iniconfig==2.0.0 -ipykernel==6.23.2 +ipykernel==6.24.0 ipympl==0.9.3 ipython==8.14.0 ipython-genutils==0.2.0 -ipywidgets==8.0.6 +ipywidgets==8.0.7 isoduration==20.11.0 -isort==5.12.0 -jax==0.4.12 -jaxlib==0.4.12 +jax==0.4.13 +jaxlib==0.4.13 jedi==0.18.2 jinja2==3.1.2 json5==0.9.14 jsonpointer==2.4 -jsonschema==4.17.3 +jsonschema==4.18.0 +jsonschema-specifications==2023.6.1 jupyter==1.0.0 jupyter-cache==0.6.1 -jupyter-client==8.2.0 +jupyter-client==8.3.0 jupyter-console==6.6.3 jupyter-core==5.3.1 jupyter-events==0.6.3 -jupyter-server==2.6.0 -jupyter-server-fileid==0.9.0 +jupyter-lsp==2.2.0 +jupyter-server==2.7.0 jupyter-server-terminals==0.4.4 -jupyter-server-ydoc==0.8.0 -jupyter-ydoc==0.2.4 -jupyterlab==3.6.4 +jupyterlab==4.0.2 jupyterlab-code-formatter==2.2.1 -jupyterlab-myst==1.2.0 +jupyterlab-myst==2.0.1 jupyterlab-pygments==0.2.2 jupyterlab-server==2.23.0 -jupyterlab-widgets==3.0.7 -keras==2.11.0 +jupyterlab-widgets==3.0.8 +keras==2.13.1 kiwisolver==1.4.4 -lazy-object-proxy==1.9.0 libclang==16.0.0 livereload==2.6.3 llvmlite==0.40.1 markdown==3.4.3 markdown-it-py==2.2.0 markupsafe==2.1.3 -matplotlib==3.7.1 +matplotlib==3.7.2 matplotlib-inline==0.1.6 -mccabe==0.7.0 mdit-py-plugins==0.3.5 mdurl==0.1.2 mistune==3.0.1 ml-dtypes==0.2.0 mpmath==1.3.0 -mypy==1.4.0 +mypy==1.4.1 mypy-extensions==1.0.0 myst-nb==0.17.2 myst-parser==0.18.1 nbclassic==1.0.0 -nbclient==0.5.13 +nbclient==0.6.8 nbconvert==7.6.0 nbformat==5.9.0 -nbmake==1.2.1 +nbmake==1.4.1 nest-asyncio==1.5.6 nodeenv==1.8.0 notebook==6.5.4 @@ -146,40 +127,34 @@ oauthlib==3.2.2 opt-einsum==3.3.0 overrides==7.3.1 packaging==23.1 -pandas==2.0.2 +pandas==2.0.3 pandocfilters==1.5.0 parso==0.8.3 -particle==0.22.1 +particle==0.23.0 pathspec==0.11.1 -pep8-naming==0.13.3 ; python_version >= "3.8.0" pexpect==4.8.0 phasespace==1.8.0 pickleshare==0.7.5 -pillow==9.5.0 -platformdirs==3.7.0 +pillow==10.0.0 +platformdirs==3.8.1 pluggy==1.2.0 pre-commit==3.3.3 prometheus-client==0.17.0 -prompt-toolkit==3.0.38 -protobuf==3.19.6 +prompt-toolkit==3.0.39 +protobuf==4.23.4 psutil==5.9.5 ptyprocess==0.7.0 pure-eval==0.2.2 py-cpuinfo==9.0.0 pyasn1==0.5.0 pyasn1-modules==0.3.0 -pycodestyle==2.10.0 pycparser==2.21 -pydantic==1.10.9 +pydantic==1.10.11 pydata-sphinx-theme==0.13.3 -pydocstyle==6.3.0 -pyflakes==3.0.1 pygments==2.15.1 -pylint==2.17.4 -pyparsing==3.1.0 -pyproject-api==1.5.2 -pyrsistent==0.19.3 -pytest==7.3.2 +pyparsing==3.0.9 +pyproject-api==1.5.3 +pytest==7.4.0 pytest-benchmark==4.0.0 pytest-cov==4.1.0 pytest-mock==3.11.1 @@ -193,13 +168,15 @@ pyzmq==25.1.0 qrules==0.9.8 qtconsole==5.4.3 qtpy==2.3.1 +referencing==0.29.1 requests==2.31.0 requests-oauthlib==1.3.1 -restructuredtext-lint==1.4.0 rfc3339-validator==0.1.4 rfc3986-validator==0.1.1 +rpds-py==0.8.8 rsa==4.9 -scipy==1.10.1 +ruff==0.0.277 +scipy==1.11.1 send2trash==1.8.2 six==1.16.0 sniffio==1.3.0 @@ -221,33 
+198,31 @@ sphinxcontrib-jsmath==1.0.1 sphinxcontrib-qthelp==1.0.3 sphinxcontrib-serializinghtml==1.1.5 sphobjinv==2.3.1 -sqlalchemy==2.0.16 +sqlalchemy==2.0.18 stack-data==0.6.2 sympy==1.12 tabulate==0.9.0 -tensorboard==2.11.2 -tensorboard-data-server==0.6.1 -tensorboard-plugin-wit==1.8.1 -tensorflow==2.11.1 ; python_version < "3.11.0" -tensorflow-estimator==2.11.0 +tensorboard==2.13.0 +tensorboard-data-server==0.7.1 +tensorflow==2.13.0 +tensorflow-estimator==2.13.0 tensorflow-io-gcs-filesystem==0.32.0 tensorflow-probability==0.18.0 termcolor==2.3.0 terminado==0.17.1 tinycss2==1.2.1 tomli==2.0.1 -tomlkit==0.11.8 tornado==6.3.2 -tox==4.6.3 +tox==4.6.4 tqdm==4.65.0 traitlets==5.9.0 types-docutils==0.20.0.1 types-pkg-resources==0.1.3 types-pyyaml==6.0.12.10 types-requests==2.31.0.1 -types-setuptools==68.0.0.0 +types-setuptools==68.0.0.1 types-urllib3==1.26.25.13 -typing-extensions==4.6.3 +typing-extensions==4.5.0 tzdata==2023.3 uri-template==1.3.0 urllib3==1.26.16 @@ -255,13 +230,11 @@ virtualenv==20.23.1 wcwidth==0.2.6 webcolors==1.13 webencodings==0.5.1 -websocket-client==1.6.0 +websocket-client==1.6.1 werkzeug==2.3.6 wheel==0.40.0 -widgetsnbextension==4.0.7 +widgetsnbextension==4.0.8 wrapt==1.15.0 -y-py==0.5.9 -ypy-websocket==0.8.2 zipp==3.15.0 # The following packages are considered to be unsafe in a requirements file: diff --git a/.constraints/py3.7.txt b/.constraints/py3.7.txt index f798594c..d5b1c4be 100644 --- a/.constraints/py3.7.txt +++ b/.constraints/py3.7.txt @@ -10,12 +10,10 @@ aiofiles==22.1.0 aiosqlite==0.19.0 alabaster==0.7.13 ampform==0.14.6 -anyio==3.7.0 -aquirdturtle-collapsible-headings==3.1.0 +anyio==3.7.1 argon2-cffi==21.3.0 argon2-cffi-bindings==21.2.0 arrow==1.2.3 -astroid==2.15.5 astunparse==1.6.3 attrs==23.1.0 babel==2.12.1 @@ -28,8 +26,9 @@ cachetools==5.3.1 certifi==2023.5.7 cffi==1.15.1 cfgv==3.3.1 +chardet==5.1.0 charset-normalizer==3.1.0 -click==8.1.3 +click==8.1.4 cloudpickle==2.2.1 colorama==0.4.6 coverage==7.2.7 @@ -38,12 +37,11 @@ debugpy==1.6.7 decorator==5.1.1 defusedxml==0.7.1 deprecated==1.2.14 -dill==0.3.6 distlib==0.3.6 dm-tree==0.1.8 docutils==0.19 entrypoints==0.4 -exceptiongroup==1.1.1 +exceptiongroup==1.1.2 execnet==1.9.0 fastjsonschema==2.17.1 filelock==3.12.2 @@ -51,28 +49,27 @@ flatbuffers==23.5.26 fonttools==4.38.0 fqdn==1.5.1 gast==0.4.0 -google-auth==2.20.0 +google-auth==2.21.0 google-auth-oauthlib==0.4.6 google-pasta==0.2.0 graphviz==0.20.1 greenlet==2.0.2 -grpcio==1.54.2 +grpcio==1.56.0 h5py==3.8.0 hepunits==2.3.2 identify==2.5.24 idna==3.4 imagesize==1.4.1 iminuit==2.18.0 -importlib-metadata==4.13.0 ; python_version < "3.8.0" +importlib-metadata==6.7.0 ; python_version < "3.8.0" importlib-resources==5.12.0 iniconfig==2.0.0 ipykernel==6.16.2 ipympl==0.9.3 ipython==7.34.0 ipython-genutils==0.2.0 -ipywidgets==8.0.6 +ipywidgets==8.0.7 isoduration==20.11.0 -isort==5.11.5 jax==0.3.25 jaxlib==0.3.25 jedi==0.18.2 @@ -90,15 +87,14 @@ jupyter-server==1.24.0 jupyter-server-fileid==0.9.0 jupyter-server-ydoc==0.8.0 jupyter-ydoc==0.2.4 -jupyterlab==3.6.4 +jupyterlab==3.6.5 jupyterlab-code-formatter==2.2.1 jupyterlab-myst==1.2.0 jupyterlab-pygments==0.2.2 jupyterlab-server==2.23.0 -jupyterlab-widgets==3.0.7 +jupyterlab-widgets==3.0.8 keras==2.11.0 kiwisolver==1.4.4 -lazy-object-proxy==1.9.0 libclang==16.0.0 livereload==2.6.3 llvmlite==0.39.1 @@ -107,12 +103,11 @@ markdown-it-py==2.2.0 markupsafe==2.1.3 matplotlib==3.5.3 matplotlib-inline==0.1.6 -mccabe==0.7.0 mdit-py-plugins==0.3.5 mdurl==0.1.2 mistune==3.0.1 mpmath==1.3.0 -mypy==1.4.0 +mypy==1.4.1 
mypy-extensions==1.0.0 myst-nb==0.17.2 myst-parser==0.18.1 @@ -120,7 +115,7 @@ nbclassic==1.0.0 nbclient==0.5.13 nbconvert==7.6.0 nbformat==5.8.0 -nbmake==1.2.1 +nbmake==1.2.1 ; python_version < "3.8.0" nest-asyncio==1.5.6 nodeenv==1.8.0 notebook==6.5.4 @@ -133,34 +128,32 @@ packaging==23.1 pandas==1.3.5 pandocfilters==1.5.0 parso==0.8.3 -particle==0.22.1 +particle==0.23.0 pathspec==0.11.1 pexpect==4.8.0 phasespace==1.8.0 pickleshare==0.7.5 pillow==9.5.0 pkgutil-resolve-name==1.3.10 -platformdirs==3.7.0 +platformdirs==3.8.1 pluggy==1.2.0 pre-commit==2.21.0 prometheus-client==0.17.0 -prompt-toolkit==3.0.38 +prompt-toolkit==3.0.39 protobuf==3.19.6 psutil==5.9.5 ptyprocess==0.7.0 -py==1.11.0 py-cpuinfo==9.0.0 pyasn1==0.5.0 pyasn1-modules==0.3.0 pycparser==2.21 -pydantic==1.10.9 +pydantic==1.10.11 pydata-sphinx-theme==0.13.3 -pydocstyle==6.3.0 pygments==2.15.1 -pylint==2.17.4 pyparsing==3.1.0 +pyproject-api==1.5.3 pyrsistent==0.19.3 -pytest==7.3.2 +pytest==7.4.0 pytest-benchmark==4.0.0 pytest-cov==4.1.0 pytest-mock==3.11.1 @@ -179,6 +172,7 @@ requests-oauthlib==1.3.1 rfc3339-validator==0.1.4 rfc3986-validator==0.1.1 rsa==4.9 +ruff==0.0.277 scipy==1.7.3 send2trash==1.8.2 singledispatchmethod==1.0 @@ -202,13 +196,13 @@ sphinxcontrib-jsmath==1.0.1 sphinxcontrib-qthelp==1.0.3 sphinxcontrib-serializinghtml==1.1.5 sphobjinv==2.3.1 -sqlalchemy==1.4.48 +sqlalchemy==1.4.49 sympy==1.10.1 tabulate==0.9.0 tensorboard==2.11.2 tensorboard-data-server==0.6.1 tensorboard-plugin-wit==1.8.1 -tensorflow==2.11.0 ; python_version < "3.11.0" +tensorflow==2.11.0 tensorflow-estimator==2.11.0 tensorflow-io-gcs-filesystem==0.32.0 tensorflow-probability==0.18.0 @@ -216,32 +210,31 @@ termcolor==2.3.0 terminado==0.17.1 tinycss2==1.2.1 tomli==2.0.1 -tomlkit==0.11.8 tornado==6.2 -tox==3.28.0 ; python_version < "3.8.0" +tox==4.6.4 tqdm==4.65.0 traitlets==5.9.0 -typed-ast==1.5.4 +typed-ast==1.5.5 types-docutils==0.20.0.1 types-pkg-resources==0.1.3 types-pyyaml==6.0.12.10 types-requests==2.31.0.1 -types-setuptools==68.0.0.0 +types-setuptools==68.0.0.1 types-urllib3==1.26.25.13 -typing-extensions==4.6.3 +typing-extensions==4.7.1 uri-template==1.3.0 urllib3==1.26.16 -virtualenv==20.21.1 ; python_version < "3.8.0" +virtualenv==20.23.1 wcwidth==0.2.6 webcolors==1.13 webencodings==0.5.1 -websocket-client==1.6.0 +websocket-client==1.6.1 werkzeug==2.2.3 wheel==0.40.0 -widgetsnbextension==4.0.7 +widgetsnbextension==4.0.8 wrapt==1.15.0 y-py==0.5.9 -ypy-websocket==0.8.2 +ypy-websocket==0.8.2 ; python_version < "3.8.0" zipp==3.15.0 # The following packages are considered to be unsafe in a requirements file: diff --git a/.constraints/py3.8.txt b/.constraints/py3.8.txt index 737edec6..a8d573af 100644 --- a/.constraints/py3.8.txt +++ b/.constraints/py3.8.txt @@ -6,18 +6,15 @@ # absl-py==1.4.0 accessible-pygments==0.0.4 -aiofiles==22.1.0 -aiosqlite==0.19.0 alabaster==0.7.13 ampform==0.14.6 -anyio==3.7.0 -aquirdturtle-collapsible-headings==3.1.0 +anyio==3.7.1 argon2-cffi==21.3.0 argon2-cffi-bindings==21.2.0 arrow==1.2.3 -astroid==2.15.5 asttokens==2.2.1 astunparse==1.6.3 +async-lru==2.0.2 attrs==23.1.0 babel==2.12.1 backcall==0.2.0 @@ -30,7 +27,7 @@ cffi==1.15.1 cfgv==3.3.1 chardet==5.1.0 charset-normalizer==3.1.0 -click==8.1.3 +click==8.1.4 cloudpickle==2.2.1 colorama==0.4.6 comm==0.1.3 @@ -41,102 +38,86 @@ debugpy==1.6.7 decorator==5.1.1 defusedxml==0.7.1 deprecated==1.2.14 -dill==0.3.6 distlib==0.3.6 dm-tree==0.1.8 docutils==0.19 -exceptiongroup==1.1.1 +exceptiongroup==1.1.2 execnet==1.9.0 executing==1.2.0 fastjsonschema==2.17.1 
filelock==3.12.2 -flake8==6.0.0 ; python_version >= "3.8.0" -flake8-blind-except==0.2.1 ; python_version >= "3.8.0" -flake8-bugbear==23.6.5 ; python_version >= "3.8.0" -flake8-builtins==2.1.0 ; python_version >= "3.8.0" -flake8-comprehensions==3.13.0 ; python_version >= "3.8.0" -flake8-future-annotations==1.1.0 ; python_version >= "3.8.0" -flake8-plugin-utils==1.3.2 -flake8-pytest-style==1.7.2 ; python_version >= "3.8.0" -flake8-rst-docstrings==0.3.0 ; python_version >= "3.8.0" -flake8-type-ignore==0.1.0.post2 ; python_version >= "3.8.0" -flake8-use-fstring==1.4 ; python_version >= "3.8.0" flatbuffers==23.5.26 fonttools==4.40.0 fqdn==1.5.1 gast==0.4.0 -google-auth==2.20.0 -google-auth-oauthlib==0.4.6 +google-auth==2.21.0 +google-auth-oauthlib==1.0.0 google-pasta==0.2.0 graphviz==0.20.1 greenlet==2.0.2 -grpcio==1.54.2 +grpcio==1.56.0 h5py==3.9.0 hepunits==2.3.2 identify==2.5.24 idna==3.4 imagesize==1.4.1 -iminuit==2.21.3 +iminuit==2.22.0 importlib-metadata==6.7.0 importlib-resources==5.12.0 iniconfig==2.0.0 -ipykernel==6.23.2 +ipykernel==6.24.0 ipympl==0.9.3 ipython==8.12.2 ipython-genutils==0.2.0 -ipywidgets==8.0.6 +ipywidgets==8.0.7 isoduration==20.11.0 -isort==5.12.0 -jax==0.4.12 -jaxlib==0.4.12 +jax==0.4.13 +jaxlib==0.4.13 jedi==0.18.2 jinja2==3.1.2 json5==0.9.14 jsonpointer==2.4 -jsonschema==4.17.3 +jsonschema==4.18.0 +jsonschema-specifications==2023.6.1 jupyter==1.0.0 jupyter-cache==0.6.1 -jupyter-client==8.2.0 +jupyter-client==8.3.0 jupyter-console==6.6.3 jupyter-core==5.3.1 jupyter-events==0.6.3 -jupyter-server==2.6.0 -jupyter-server-fileid==0.9.0 +jupyter-lsp==2.2.0 +jupyter-server==2.7.0 jupyter-server-terminals==0.4.4 -jupyter-server-ydoc==0.8.0 -jupyter-ydoc==0.2.4 -jupyterlab==3.6.4 +jupyterlab==4.0.2 jupyterlab-code-formatter==2.2.1 -jupyterlab-myst==1.2.0 +jupyterlab-myst==2.0.1 jupyterlab-pygments==0.2.2 jupyterlab-server==2.23.0 -jupyterlab-widgets==3.0.7 -keras==2.11.0 +jupyterlab-widgets==3.0.8 +keras==2.13.1 kiwisolver==1.4.4 -lazy-object-proxy==1.9.0 libclang==16.0.0 livereload==2.6.3 llvmlite==0.40.1 markdown==3.4.3 markdown-it-py==2.2.0 markupsafe==2.1.3 -matplotlib==3.7.1 +matplotlib==3.7.2 matplotlib-inline==0.1.6 -mccabe==0.7.0 mdit-py-plugins==0.3.5 mdurl==0.1.2 mistune==3.0.1 ml-dtypes==0.2.0 mpmath==1.3.0 -mypy==1.4.0 +mypy==1.4.1 mypy-extensions==1.0.0 myst-nb==0.17.2 myst-parser==0.18.1 nbclassic==1.0.0 -nbclient==0.5.13 +nbclient==0.6.8 nbconvert==7.6.0 nbformat==5.9.0 -nbmake==1.2.1 +nbmake==1.4.1 nest-asyncio==1.5.6 nodeenv==1.8.0 notebook==6.5.4 @@ -147,41 +128,35 @@ oauthlib==3.2.2 opt-einsum==3.3.0 overrides==7.3.1 packaging==23.1 -pandas==2.0.2 +pandas==2.0.3 pandocfilters==1.5.0 parso==0.8.3 -particle==0.22.1 +particle==0.23.0 pathspec==0.11.1 -pep8-naming==0.13.3 ; python_version >= "3.8.0" pexpect==4.8.0 phasespace==1.8.0 pickleshare==0.7.5 -pillow==9.5.0 +pillow==10.0.0 pkgutil-resolve-name==1.3.10 -platformdirs==3.7.0 +platformdirs==3.8.1 pluggy==1.2.0 pre-commit==3.3.3 prometheus-client==0.17.0 -prompt-toolkit==3.0.38 -protobuf==3.19.6 +prompt-toolkit==3.0.39 +protobuf==4.23.4 psutil==5.9.5 ptyprocess==0.7.0 pure-eval==0.2.2 py-cpuinfo==9.0.0 pyasn1==0.5.0 pyasn1-modules==0.3.0 -pycodestyle==2.10.0 pycparser==2.21 -pydantic==1.10.9 +pydantic==1.10.11 pydata-sphinx-theme==0.13.3 -pydocstyle==6.3.0 -pyflakes==3.0.1 pygments==2.15.1 -pylint==2.17.4 -pyparsing==3.1.0 -pyproject-api==1.5.2 -pyrsistent==0.19.3 -pytest==7.3.2 +pyparsing==3.0.9 +pyproject-api==1.5.3 +pytest==7.4.0 pytest-benchmark==4.0.0 pytest-cov==4.1.0 pytest-mock==3.11.1 @@ -195,12 
+170,14 @@ pyzmq==25.1.0 qrules==0.9.8 qtconsole==5.4.3 qtpy==2.3.1 +referencing==0.29.1 requests==2.31.0 requests-oauthlib==1.3.1 -restructuredtext-lint==1.4.0 rfc3339-validator==0.1.4 rfc3986-validator==0.1.1 +rpds-py==0.8.8 rsa==4.9 +ruff==0.0.277 scipy==1.10.1 send2trash==1.8.2 six==1.16.0 @@ -223,33 +200,31 @@ sphinxcontrib-jsmath==1.0.1 sphinxcontrib-qthelp==1.0.3 sphinxcontrib-serializinghtml==1.1.5 sphobjinv==2.3.1 -sqlalchemy==2.0.16 +sqlalchemy==2.0.18 stack-data==0.6.2 sympy==1.12 tabulate==0.9.0 -tensorboard==2.11.2 -tensorboard-data-server==0.6.1 -tensorboard-plugin-wit==1.8.1 -tensorflow==2.11.1 ; python_version < "3.11.0" -tensorflow-estimator==2.11.0 +tensorboard==2.13.0 +tensorboard-data-server==0.7.1 +tensorflow==2.13.0 +tensorflow-estimator==2.13.0 tensorflow-io-gcs-filesystem==0.32.0 tensorflow-probability==0.18.0 termcolor==2.3.0 terminado==0.17.1 tinycss2==1.2.1 tomli==2.0.1 -tomlkit==0.11.8 tornado==6.3.2 -tox==4.6.3 +tox==4.6.4 tqdm==4.65.0 traitlets==5.9.0 types-docutils==0.20.0.1 types-pkg-resources==0.1.3 types-pyyaml==6.0.12.10 types-requests==2.31.0.1 -types-setuptools==68.0.0.0 +types-setuptools==68.0.0.1 types-urllib3==1.26.25.13 -typing-extensions==4.6.3 +typing-extensions==4.5.0 tzdata==2023.3 uri-template==1.3.0 urllib3==1.26.16 @@ -257,13 +232,11 @@ virtualenv==20.23.1 wcwidth==0.2.6 webcolors==1.13 webencodings==0.5.1 -websocket-client==1.6.0 +websocket-client==1.6.1 werkzeug==2.3.6 wheel==0.40.0 -widgetsnbextension==4.0.7 +widgetsnbextension==4.0.8 wrapt==1.15.0 -y-py==0.5.9 -ypy-websocket==0.8.2 zipp==3.15.0 # The following packages are considered to be unsafe in a requirements file: diff --git a/.constraints/py3.9.txt b/.constraints/py3.9.txt index 52d26698..99cea984 100644 --- a/.constraints/py3.9.txt +++ b/.constraints/py3.9.txt @@ -6,18 +6,15 @@ # absl-py==1.4.0 accessible-pygments==0.0.4 -aiofiles==22.1.0 -aiosqlite==0.19.0 alabaster==0.7.13 ampform==0.14.6 -anyio==3.7.0 -aquirdturtle-collapsible-headings==3.1.0 +anyio==3.7.1 argon2-cffi==21.3.0 argon2-cffi-bindings==21.2.0 arrow==1.2.3 -astroid==2.15.5 asttokens==2.2.1 astunparse==1.6.3 +async-lru==2.0.2 attrs==23.1.0 babel==2.12.1 backcall==0.2.0 @@ -30,7 +27,7 @@ cffi==1.15.1 cfgv==3.3.1 chardet==5.1.0 charset-normalizer==3.1.0 -click==8.1.3 +click==8.1.4 cloudpickle==2.2.1 colorama==0.4.6 comm==0.1.3 @@ -41,102 +38,86 @@ debugpy==1.6.7 decorator==5.1.1 defusedxml==0.7.1 deprecated==1.2.14 -dill==0.3.6 distlib==0.3.6 dm-tree==0.1.8 docutils==0.19 -exceptiongroup==1.1.1 +exceptiongroup==1.1.2 execnet==1.9.0 executing==1.2.0 fastjsonschema==2.17.1 filelock==3.12.2 -flake8==6.0.0 ; python_version >= "3.8.0" -flake8-blind-except==0.2.1 ; python_version >= "3.8.0" -flake8-bugbear==23.6.5 ; python_version >= "3.8.0" -flake8-builtins==2.1.0 ; python_version >= "3.8.0" -flake8-comprehensions==3.13.0 ; python_version >= "3.8.0" -flake8-future-annotations==1.1.0 ; python_version >= "3.8.0" -flake8-plugin-utils==1.3.2 -flake8-pytest-style==1.7.2 ; python_version >= "3.8.0" -flake8-rst-docstrings==0.3.0 ; python_version >= "3.8.0" -flake8-type-ignore==0.1.0.post2 ; python_version >= "3.8.0" -flake8-use-fstring==1.4 ; python_version >= "3.8.0" flatbuffers==23.5.26 fonttools==4.40.0 fqdn==1.5.1 gast==0.4.0 -google-auth==2.20.0 -google-auth-oauthlib==0.4.6 +google-auth==2.21.0 +google-auth-oauthlib==1.0.0 google-pasta==0.2.0 graphviz==0.20.1 greenlet==2.0.2 -grpcio==1.54.2 +grpcio==1.56.0 h5py==3.9.0 hepunits==2.3.2 identify==2.5.24 idna==3.4 imagesize==1.4.1 -iminuit==2.21.3 +iminuit==2.22.0 
importlib-metadata==6.7.0 importlib-resources==5.12.0 iniconfig==2.0.0 -ipykernel==6.23.2 +ipykernel==6.24.0 ipympl==0.9.3 ipython==8.14.0 ipython-genutils==0.2.0 -ipywidgets==8.0.6 +ipywidgets==8.0.7 isoduration==20.11.0 -isort==5.12.0 -jax==0.4.12 -jaxlib==0.4.12 +jax==0.4.13 +jaxlib==0.4.13 jedi==0.18.2 jinja2==3.1.2 json5==0.9.14 jsonpointer==2.4 -jsonschema==4.17.3 +jsonschema==4.18.0 +jsonschema-specifications==2023.6.1 jupyter==1.0.0 jupyter-cache==0.6.1 -jupyter-client==8.2.0 +jupyter-client==8.3.0 jupyter-console==6.6.3 jupyter-core==5.3.1 jupyter-events==0.6.3 -jupyter-server==2.6.0 -jupyter-server-fileid==0.9.0 +jupyter-lsp==2.2.0 +jupyter-server==2.7.0 jupyter-server-terminals==0.4.4 -jupyter-server-ydoc==0.8.0 -jupyter-ydoc==0.2.4 -jupyterlab==3.6.4 +jupyterlab==4.0.2 jupyterlab-code-formatter==2.2.1 -jupyterlab-myst==1.2.0 +jupyterlab-myst==2.0.1 jupyterlab-pygments==0.2.2 jupyterlab-server==2.23.0 -jupyterlab-widgets==3.0.7 -keras==2.11.0 +jupyterlab-widgets==3.0.8 +keras==2.13.1 kiwisolver==1.4.4 -lazy-object-proxy==1.9.0 libclang==16.0.0 livereload==2.6.3 llvmlite==0.40.1 markdown==3.4.3 markdown-it-py==2.2.0 markupsafe==2.1.3 -matplotlib==3.7.1 +matplotlib==3.7.2 matplotlib-inline==0.1.6 -mccabe==0.7.0 mdit-py-plugins==0.3.5 mdurl==0.1.2 mistune==3.0.1 ml-dtypes==0.2.0 mpmath==1.3.0 -mypy==1.4.0 +mypy==1.4.1 mypy-extensions==1.0.0 myst-nb==0.17.2 myst-parser==0.18.1 nbclassic==1.0.0 -nbclient==0.5.13 +nbclient==0.6.8 nbconvert==7.6.0 nbformat==5.9.0 -nbmake==1.2.1 +nbmake==1.4.1 nest-asyncio==1.5.6 nodeenv==1.8.0 notebook==6.5.4 @@ -147,40 +128,34 @@ oauthlib==3.2.2 opt-einsum==3.3.0 overrides==7.3.1 packaging==23.1 -pandas==2.0.2 +pandas==2.0.3 pandocfilters==1.5.0 parso==0.8.3 -particle==0.22.1 +particle==0.23.0 pathspec==0.11.1 -pep8-naming==0.13.3 ; python_version >= "3.8.0" pexpect==4.8.0 phasespace==1.8.0 pickleshare==0.7.5 -pillow==9.5.0 -platformdirs==3.7.0 +pillow==10.0.0 +platformdirs==3.8.1 pluggy==1.2.0 pre-commit==3.3.3 prometheus-client==0.17.0 -prompt-toolkit==3.0.38 -protobuf==3.19.6 +prompt-toolkit==3.0.39 +protobuf==4.23.4 psutil==5.9.5 ptyprocess==0.7.0 pure-eval==0.2.2 py-cpuinfo==9.0.0 pyasn1==0.5.0 pyasn1-modules==0.3.0 -pycodestyle==2.10.0 pycparser==2.21 -pydantic==1.10.9 +pydantic==1.10.11 pydata-sphinx-theme==0.13.3 -pydocstyle==6.3.0 -pyflakes==3.0.1 pygments==2.15.1 -pylint==2.17.4 -pyparsing==3.1.0 -pyproject-api==1.5.2 -pyrsistent==0.19.3 -pytest==7.3.2 +pyparsing==3.0.9 +pyproject-api==1.5.3 +pytest==7.4.0 pytest-benchmark==4.0.0 pytest-cov==4.1.0 pytest-mock==3.11.1 @@ -194,13 +169,15 @@ pyzmq==25.1.0 qrules==0.9.8 qtconsole==5.4.3 qtpy==2.3.1 +referencing==0.29.1 requests==2.31.0 requests-oauthlib==1.3.1 -restructuredtext-lint==1.4.0 rfc3339-validator==0.1.4 rfc3986-validator==0.1.1 +rpds-py==0.8.8 rsa==4.9 -scipy==1.10.1 +ruff==0.0.277 +scipy==1.11.1 send2trash==1.8.2 six==1.16.0 sniffio==1.3.0 @@ -222,33 +199,31 @@ sphinxcontrib-jsmath==1.0.1 sphinxcontrib-qthelp==1.0.3 sphinxcontrib-serializinghtml==1.1.5 sphobjinv==2.3.1 -sqlalchemy==2.0.16 +sqlalchemy==2.0.18 stack-data==0.6.2 sympy==1.12 tabulate==0.9.0 -tensorboard==2.11.2 -tensorboard-data-server==0.6.1 -tensorboard-plugin-wit==1.8.1 -tensorflow==2.11.1 ; python_version < "3.11.0" -tensorflow-estimator==2.11.0 +tensorboard==2.13.0 +tensorboard-data-server==0.7.1 +tensorflow==2.13.0 +tensorflow-estimator==2.13.0 tensorflow-io-gcs-filesystem==0.32.0 tensorflow-probability==0.18.0 termcolor==2.3.0 terminado==0.17.1 tinycss2==1.2.1 tomli==2.0.1 -tomlkit==0.11.8 tornado==6.3.2 
-tox==4.6.3 +tox==4.6.4 tqdm==4.65.0 traitlets==5.9.0 types-docutils==0.20.0.1 types-pkg-resources==0.1.3 types-pyyaml==6.0.12.10 types-requests==2.31.0.1 -types-setuptools==68.0.0.0 +types-setuptools==68.0.0.1 types-urllib3==1.26.25.13 -typing-extensions==4.6.3 +typing-extensions==4.5.0 tzdata==2023.3 uri-template==1.3.0 urllib3==1.26.16 @@ -256,13 +231,11 @@ virtualenv==20.23.1 wcwidth==0.2.6 webcolors==1.13 webencodings==0.5.1 -websocket-client==1.6.0 +websocket-client==1.6.1 werkzeug==2.3.6 wheel==0.40.0 -widgetsnbextension==4.0.7 +widgetsnbextension==4.0.8 wrapt==1.15.0 -y-py==0.5.9 -ypy-websocket==0.8.2 zipp==3.15.0 # The following packages are considered to be unsafe in a requirements file: diff --git a/.cspell.json b/.cspell.json index d40af59a..7ee889ff 100644 --- a/.cspell.json +++ b/.cspell.json @@ -24,14 +24,11 @@ "*particle*.*ml", ".constraints/*.txt", ".editorconfig", - ".flake8*", ".gitignore", ".gitpod.*", ".mypy.ini", ".pre-commit-config.yaml", ".prettierignore", - ".pydocstyle*", - ".pylintrc", ".readthedocs.yml", ".vscode/*", ".vscode/.gitignore", @@ -242,8 +239,6 @@ "optimizable", "permalinks", "pycompwa", - "pydocstyle", - "pylint", "pytest", "setuptools", "spflueger", diff --git a/.flake8 b/.flake8 deleted file mode 100644 index abe9a537..00000000 --- a/.flake8 +++ /dev/null @@ -1,60 +0,0 @@ -[flake8] -application-import-names = - tensorwaves -filename = - ./docs/*.py - ./src/*.py - ./tests/*.py -exclude = - **/__pycache__ - **/_build - /typings/** -ignore = - # False positive with attribute docstrings - B018 - # https://github.com/psf/black#slices - E203 - # allowed by black - E231 - # expected 2 blank lines before function -- handled by black - E302 - # https://github.com/psf/black#line-length - E501 - # should be possible to use {} in latex strings - FS003 - # block quote ends without a blank line (black formatting) - RST201 - # missing pygments - RST299 - # unexpected indentation (related to google style docstring) - RST301 - # enforce type ignore with mypy error codes (combined --extend-select=TI100) - TI1 - # https://github.com/psf/black#line-breaks--binary-operators - W503 -extend-select = - TI100 -per-file-ignores = - # printer methods - src/tensorwaves/function/sympy.py: N802 - # imported but unused - src/tensorwaves/optimizer/__init__.py: F401 -radon-max-cc = 8 -radon-no-assert = True -rst-roles = - attr - cite - class - doc - download - file - func - meth - mod - ref -rst-directives = - automethod - deprecated - envvar - exception - seealso diff --git a/.gitpod.yml b/.gitpod.yml index fec23f2a..153517d2 100644 --- a/.gitpod.yml +++ b/.gitpod.yml @@ -13,17 +13,17 @@ github: vscode: extensions: + - charliermarsh.ruff - christian-kohler.path-intellisense - davidanson.vscode-markdownlint - eamodio.gitlens - editorconfig.editorconfig - esbenp.prettier-vscode - executablebookproject.myst-highlight + - garaioag.garaio-vscode-unwanted-recommendations - github.vscode-github-actions - github.vscode-pull-request-github - - ms-python.flake8 - - ms-python.isort - - ms-python.pylint + - ms-python.mypy-type-checker - ms-python.python - ms-python.vscode-pylance - ms-vscode.live-server diff --git a/.mypy.ini b/.mypy.ini deleted file mode 100644 index 8fbe90e0..00000000 --- a/.mypy.ini +++ /dev/null @@ -1,33 +0,0 @@ -[mypy] -disallow_incomplete_defs = True -disallow_untyped_defs = True -exclude = _build -show_error_codes = True -warn_unused_configs = True - -[mypy-benchmarks.*,tests.*] -check_untyped_defs = True -disallow_incomplete_defs = False -disallow_untyped_defs = 
False -[mypy-typings.*] -ignore_errors = True - -; External packages that miss stubs or type hints -[mypy-IPython.*] -ignore_missing_imports = True -[mypy-iminuit.*] -ignore_missing_imports = True -[mypy-numba.*] -ignore_missing_imports = True -[mypy-numpy.*] -ignore_missing_imports = True -[mypy-phasespace.*] -ignore_missing_imports = True -[mypy-scipy.*] -ignore_missing_imports = True -[mypy-sphinx.*] -ignore_missing_imports = True -[mypy-tensorflow.*] -ignore_missing_imports = True -[mypy-tqdm.*] -ignore_missing_imports = True diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index e83cec93..6bf055eb 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -2,9 +2,7 @@ ci: autoupdate_commit_msg: "MAINT: update pip constraints and pre-commit" autoupdate_schedule: quarterly # already done by requirements-cron.yml skip: - - flake8 - mypy - - pylint - pyright - taplo @@ -42,7 +40,7 @@ repos: - id: trailing-whitespace - repo: https://github.com/ComPWA/repo-maintenance - rev: 0.0.182 + rev: 0.0.189 hooks: - id: check-dev-files args: @@ -65,11 +63,12 @@ repos: - id: nbqa-black additional_dependencies: - black>=22.1.0 - - id: nbqa-flake8 - - id: nbqa-isort - id: nbqa-pyupgrade args: - --py37-plus + - id: nbqa-ruff + args: + - --fix - repo: https://github.com/psf/black rev: 23.3.0 @@ -97,20 +96,6 @@ repos: .*\.py )$ - - repo: local - hooks: - - id: flake8 - name: flake8 - entry: flake8 - language: system - types: - - python - - - repo: https://github.com/pycqa/isort - rev: 5.12.0 - hooks: - - id: isort - - repo: https://github.com/igorshubovych/markdownlint-cli rev: v0.35.0 hooks: @@ -151,40 +136,29 @@ repos: metadata.vscode - repo: https://github.com/pre-commit/mirrors-prettier - rev: v3.0.0-alpha.9-for-vscode + rev: v3.0.0 hooks: - id: prettier - - repo: https://github.com/pycqa/pydocstyle - rev: 6.3.0 - hooks: - - id: pydocstyle - - - repo: local - hooks: - - id: pylint - name: pylint - entry: pylint - args: - - --rcfile=.pylintrc - - --score=no - language: system - require_serial: true - types: - - python - - repo: https://github.com/ComPWA/mirrors-pyright - rev: v1.1.315 + rev: v1.1.316 hooks: - id: pyright - repo: https://github.com/asottile/pyupgrade - rev: v3.7.0 + rev: v3.8.0 hooks: - id: pyupgrade args: - --py37-plus + - repo: https://github.com/astral-sh/ruff-pre-commit + rev: v0.0.277 + hooks: + - id: ruff + args: + - --fix + - repo: https://github.com/ComPWA/mirrors-taplo rev: v0.8.0 hooks: diff --git a/.pydocstyle b/.pydocstyle deleted file mode 100644 index 285e8c55..00000000 --- a/.pydocstyle +++ /dev/null @@ -1,11 +0,0 @@ -[pydocstyle] -convention = google -add_ignore = - D101, # class docstring - D102, # method docstring - D103, # function docstring - D105, # magic method docstring - D107, # init docstring - D203, # conflicts with D211 - D213, # multi-line docstring should start at the second line - D407, # missing dashed underline after section diff --git a/.pylintrc b/.pylintrc deleted file mode 100644 index 6bc2cee5..00000000 --- a/.pylintrc +++ /dev/null @@ -1,44 +0,0 @@ -# To see other available options: -# pylint --generate-rcfile > .pylintrc_new -# and compare the output - -[BASIC] -good-names-rgxs= - ^[a-z]\d*$, # e.g. 
x, y, m1, m2 (physics symbols) - -[DESIGN] -max-args=7 # default: 5 - -[MASTER] -ignore= - conf.py, - sympy, -ignore-patterns= - .*\.pyi - -[MESSAGES CONTROL] -disable= - duplicate-code, # https://github.com/PyCQA/pylint/issues/214 - invalid-unary-operand-type, # conflicts with attrs.field - line-too-long, # handled by black and isort - logging-fstring-interpolation, - missing-class-docstring, # pydocstyle - missing-function-docstring, # pydocstyle - missing-module-docstring, # pydocstyle - no-member, # conflicts with attrs.field - not-an-iterable, # conflicts with attrs.field - not-callable, # conflicts with attrs.field - redefined-builtin, # flake8-built - too-few-public-methods, # data containers (attrs) and interface classes - unspecified-encoding, # http://pylint.pycqa.org/en/latest/whatsnew/2.10.html - unsubscriptable-object, # conflicts with attrs.field - unsupported-assignment-operation, # conflicts with attrs.field - unsupported-membership-test, # conflicts with attrs.field - unused-import, # https://www.flake8rules.com/rules/F401 - wrong-import-order, # handled by isort - -[SIMILARITIES] -ignore-imports=yes # https://stackoverflow.com/a/30007053 - -[VARIABLES] -init-import=yes diff --git a/.vscode/extensions.json b/.vscode/extensions.json index acd2eae9..85a15a7b 100644 --- a/.vscode/extensions.json +++ b/.vscode/extensions.json @@ -1,16 +1,16 @@ { "recommendations": [ + "charliermarsh.ruff", "christian-kohler.path-intellisense", "davidanson.vscode-markdownlint", "eamodio.gitlens", "editorconfig.editorconfig", "esbenp.prettier-vscode", "executablebookproject.myst-highlight", + "garaioag.garaio-vscode-unwanted-recommendations", "github.vscode-github-actions", "github.vscode-pull-request-github", - "ms-python.flake8", - "ms-python.isort", - "ms-python.pylint", + "ms-python.mypy-type-checker", "ms-python.python", "ms-python.vscode-pylance", "ms-vscode.live-server", @@ -22,5 +22,12 @@ "tamasfe.even-better-toml", "tyriar.sort-lines", "yzhang.markdown-all-in-one" + ], + "unwantedRecommendations": [ + "bungcip.better-toml", + "ms-python.flake8", + "ms-python.isort", + "ms-python.pylint", + "travisillig.vscode-json-stable-stringify" ] } diff --git a/.vscode/settings.json b/.vscode/settings.json index d6749d52..536485e5 100644 --- a/.vscode/settings.json +++ b/.vscode/settings.json @@ -23,11 +23,11 @@ "[yaml]": { "editor.defaultFormatter": "esbenp.prettier-vscode" }, - "cSpell.enabled": true, "coverage-gutters.coverageFileNames": ["coverage.xml"], "coverage-gutters.coverageReportFileName": "**/htmlcov/index.html", "coverage-gutters.showGutterCoverage": false, "coverage-gutters.showLineCoverage": true, + "cSpell.enabled": true, "editor.formatOnSave": true, "editor.rulers": [88], "files.watcherExclude": { @@ -36,12 +36,9 @@ "**/.git/**": true, "**/.tox/**": true }, - "flake8.importStrategy": "fromEnvironment", "git.rebaseWhenSync": true, "github-actions.workflows.pinned.refresh.enabled": true, "github-actions.workflows.pinned.workflows": [".github/workflows/ci.yml"], - "isort.check": true, - "isort.importStrategy": "fromEnvironment", "json.schemas": [ { "fileMatch": ["*particle*.json"], @@ -53,25 +50,25 @@ } ], "livePreview.defaultPreviewPath": "docs/_build/html", - "pylint.importStrategy": "fromEnvironment", + "mypy-type-checker.importStrategy": "fromEnvironment", "python.analysis.autoImportCompletions": false, - "python.analysis.diagnosticMode": "workspace", "python.analysis.inlayHints.pytestParameters": true, "python.analysis.typeCheckingMode": "strict", "python.formatting.provider": 
"black", "python.linting.banditEnabled": false, "python.linting.enabled": true, "python.linting.flake8Enabled": false, - "python.linting.mypyEnabled": true, - "python.linting.pydocstyleEnabled": true, + "python.linting.mypyEnabled": false, + "python.linting.pydocstyleEnabled": false, "python.linting.pylamaEnabled": false, "python.linting.pylintEnabled": false, "python.testing.pytestArgs": ["--color=no", "--no-cov"], "python.testing.pytestEnabled": true, "python.testing.unittestEnabled": false, "rewrap.wrappingColumn": 88, + "ruff.enable": true, + "ruff.organizeImports": true, "search.exclude": { - "*/.pydocstyle": true, ".constraints/*.txt": true, "benchmarks/**/__init__.py": true, "tests/**/__init__.py": true diff --git a/README.md b/README.md index d398cf51..0b7b2c45 100644 --- a/README.md +++ b/README.md @@ -21,7 +21,7 @@ [![Spelling checked](https://img.shields.io/badge/cspell-checked-brightgreen.svg)](https://github.com/streetsidesoftware/cspell/tree/master/packages/cspell) [![code style: prettier](https://img.shields.io/badge/code_style-prettier-ff69b4.svg?style=flat-square)](https://github.com/prettier/prettier) [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) -[![Imports: isort](https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336)](https://pycqa.github.io/isort) +[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/charliermarsh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff) TensorWaves is a fitter package that optimizes mathematical models to data samples. The models can be any _symbolic_ mathematical expression that is then converted to any diff --git a/benchmarks/.pydocstyle b/benchmarks/.pydocstyle deleted file mode 100644 index 26d0703b..00000000 --- a/benchmarks/.pydocstyle +++ /dev/null @@ -1,4 +0,0 @@ -; ignore all pydocstyle errors in this folder - -[pydocstyle] -add_ignore = D diff --git a/benchmarks/ampform.py b/benchmarks/ampform.py index af4f15a7..ad7662f9 100644 --- a/benchmarks/ampform.py +++ b/benchmarks/ampform.py @@ -1,4 +1,3 @@ -# pylint: disable=import-outside-toplevel from __future__ import annotations from pprint import pprint @@ -16,17 +15,18 @@ TFWeightedPhaseSpaceGenerator, ) from tensorwaves.function.sympy import create_parametrized_function -from tensorwaves.interface import ( - DataSample, - FitResult, - ParameterValue, - ParametrizedFunction, -) if TYPE_CHECKING: from ampform.helicity import HelicityModel from qrules.combinatorics import StateDefinition + from tensorwaves.interface import ( + DataSample, + FitResult, + ParameterValue, + ParametrizedFunction, + ) + def formulate_amplitude_model( formalism: str, @@ -72,7 +72,6 @@ def generate_data( backend: str, transform: bool = False, ) -> tuple[DataSample, DataSample]: - # pylint: disable=too-many-locals reaction = model.reaction_info final_state = reaction.final_state expressions = model.kinematic_variables @@ -196,8 +195,8 @@ def test_fit(self, backend, benchmark, model, size): def print_data_sample(data: DataSample, sample_size: int) -> None: """Print a `.DataSample`, so it can be pasted into the expected sample.""" - print() - pprint( + print() # noqa: T201 + pprint( # noqa: T203 { i: np.round(four_momenta[:sample_size], decimals=11).tolist() for i, four_momenta in data.items() diff --git a/benchmarks/expression.py b/benchmarks/expression.py index 0c87a09d..be6532eb 100644 --- a/benchmarks/expression.py +++ b/benchmarks/expression.py @@ -1,15 +1,18 @@ -# 
pylint: disable=invalid-name, redefined-outer-name from __future__ import annotations +from typing import TYPE_CHECKING + import numpy as np import pytest import sympy as sp from tensorwaves.estimator import UnbinnedNLL from tensorwaves.function.sympy import create_parametrized_function -from tensorwaves.interface import DataSample, Function from tensorwaves.optimizer import Minuit2, ScipyMinimizer +if TYPE_CHECKING: + from tensorwaves.interface import DataSample, Function + def gaussian(x: sp.Symbol, mu: sp.Symbol, sigma: sp.Symbol) -> sp.Expr: return sp.exp(-(((x - mu) / sigma) ** 2) / 2) @@ -108,7 +111,7 @@ def test_data(backend, benchmark, size): @pytest.mark.parametrize("backend", ["jax", "numpy", "numba", "tf"]) @pytest.mark.parametrize("optimizer_type", [Minuit2, ScipyMinimizer]) @pytest.mark.parametrize("size", [1_000]) -def test_fit( # pylint: disable=too-many-locals +def test_fit( backend: str, benchmark, optimizer_type: (type[Minuit2] | type[ScipyMinimizer]), diff --git a/codecov.yml b/codecov.yml index bad6e15b..6875c21e 100644 --- a/codecov.yml +++ b/codecov.yml @@ -10,7 +10,7 @@ coverage: project: default: # basic - target: 90% # can't go below this percentage + target: 85% # can't go below this percentage threshold: 3% # allow drops by this percentage base: auto # advanced diff --git a/docs/.pydocstyle b/docs/.pydocstyle deleted file mode 100644 index 26d0703b..00000000 --- a/docs/.pydocstyle +++ /dev/null @@ -1,4 +0,0 @@ -; ignore all pydocstyle errors in this folder - -[pydocstyle] -add_ignore = D diff --git a/docs/_relink_references.py b/docs/_relink_references.py index 3bc179ef..4155d269 100644 --- a/docs/_relink_references.py +++ b/docs/_relink_references.py @@ -1,5 +1,3 @@ -# pylint: disable=import-error, import-outside-toplevel -# pyright: reportMissingImports=false """Abbreviated the annotations generated by sphinx-autodoc. It's not necessary to generate the full path of type hints, because they are rendered as @@ -7,12 +5,17 @@ See also https://github.com/sphinx-doc/sphinx/issues/5868. 
""" +# pyright: reportMissingImports=false from __future__ import annotations +from typing import TYPE_CHECKING + import sphinx.domains.python from docutils import nodes from sphinx.addnodes import pending_xref -from sphinx.environment import BuildEnvironment + +if TYPE_CHECKING: + from sphinx.environment import BuildEnvironment __TARGET_SUBSTITUTIONS = { "DataSample": "tensorwaves.interface.DataSample", @@ -66,7 +69,6 @@ def _new_type_to_xref( env: BuildEnvironment | None = None, suppress_prefix: bool = False, ) -> pending_xref: - # pylint: disable=unused-argument target = __TARGET_SUBSTITUTIONS.get(target, target) reftype = __REF_TYPE_SUBSTITUTIONS.get(target, "class") assert env is not None diff --git a/docs/amplitude-analysis.ipynb b/docs/amplitude-analysis.ipynb index 52ff4bf4..f964af05 100644 --- a/docs/amplitude-analysis.ipynb +++ b/docs/amplitude-analysis.ipynb @@ -46,8 +46,6 @@ "%config InlineBackend.figure_formats = ['svg']\n", "import os\n", "\n", - "from IPython.display import display # noqa: F401\n", - "\n", "STATIC_WEB_PAGE = {\"EXECUTE_NB\", \"READTHEDOCS\"}.intersection(os.environ)" ] }, @@ -531,11 +529,7 @@ { "cell_type": "code", "execution_count": null, - "metadata": { - "tags": [ - "skip-flake8" - ] - }, + "metadata": {}, "outputs": [], "source": [ "from tensorwaves.function.sympy import create_parametrized_function\n", @@ -931,9 +925,8 @@ " par_name for par_name in initial_parameters if par_name not in parameter_names\n", "}\n", "if len(not_in_model) != 0:\n", - " raise ValueError(\n", - " f\"Parameters {', '.join(sorted(not_in_model))} do not exist in model\"\n", - " )" + " msg = f\"Parameters {', '.join(sorted(not_in_model))} do not exist in model\"\n", + " raise ValueError(msg)" ] }, { @@ -1890,7 +1883,7 @@ "difference = np.average(\n", " function_from_amplitude_sum(phsp) - function_from_intensity(phsp)\n", ")\n", - "assert np.round(difference, decimals=15) == 0" + "assert np.round(difference, decimals=0) == 0" ] }, { @@ -2025,7 +2018,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.8.16" + "version": "3.8.17" } }, "nbformat": 4, diff --git a/docs/amplitude-analysis/analytic-continuation.ipynb b/docs/amplitude-analysis/analytic-continuation.ipynb index ff4eacad..7f5dc5a3 100644 --- a/docs/amplitude-analysis/analytic-continuation.ipynb +++ b/docs/amplitude-analysis/analytic-continuation.ipynb @@ -46,8 +46,6 @@ "%config InlineBackend.figure_formats = ['svg']\n", "import os\n", "\n", - "from IPython.display import display # noqa: F401\n", - "\n", "STATIC_WEB_PAGE = {\"EXECUTE_NB\", \"READTHEDOCS\"}.intersection(os.environ)" ] }, @@ -86,7 +84,7 @@ "import graphviz\n", "import matplotlib.pyplot as plt\n", "import qrules\n", - "from IPython.display import Math\n", + "from IPython.display import Math, display\n", "\n", "from tensorwaves.data import (\n", " SympyDataTransformer,\n", @@ -299,7 +297,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.8.16" + "version": "3.8.17" } }, "nbformat": 4, diff --git a/docs/conf.py b/docs/conf.py index 6cbefabc..79a7adf3 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -4,6 +4,7 @@ documentation: https://www.sphinx-doc.org/en/master/usage/configuration.html """ +import contextlib import os import re import shutil @@ -56,19 +57,17 @@ def fetch_logo(url: str, output_path: str) -> None: LOGO_PATH = "_static/logo.svg" -try: +with contextlib.suppress(requests.exceptions.ConnectionError): fetch_logo( 
url="https://raw.githubusercontent.com/ComPWA/ComPWA/04e5199/doc/images/logo.svg", output_path=LOGO_PATH, ) -except requests.exceptions.ConnectionError: - pass if os.path.exists(LOGO_PATH): html_logo = LOGO_PATH # -- Generate API ------------------------------------------------------------ sys.path.insert(0, os.path.abspath(".")) -from _relink_references import relink_references # noqa: E402 +from _relink_references import relink_references relink_references() shutil.rmtree("api", ignore_errors=True) @@ -85,12 +84,15 @@ def fetch_logo(url: str, output_path: str) -> None: "--separate", ] ), - shell=True, + shell=True, # noqa: S602 ) # -- Convert sphinx object inventory ----------------------------------------- if not os.path.exists("tensorflow.inv"): - subprocess.call("sphobjinv convert -o zlib tensorflow.txt", shell=True) + subprocess.call( + "sphobjinv convert -o zlib tensorflow.txt", # noqa: S607 + shell=True, # noqa: S602 + ) # -- General configuration --------------------------------------------------- @@ -254,7 +256,7 @@ def get_version(package_name: str) -> str: if not line: continue line_segments = tuple(line.split("==")) - if len(line_segments) != 2: + if len(line_segments) != 2: # noqa: PLR2004 continue _, installed_version, *_ = line_segments installed_version = installed_version.strip() @@ -273,16 +275,15 @@ def get_minor_version(package_name: str) -> str: return installed_version matches = re.match(r"^([0-9]+\.[0-9]+).*$", installed_version) if matches is None: - raise ValueError( - f"Could not find documentation for {package_name} v{installed_version}" - ) + msg = f"Could not find documentation for {package_name} v{installed_version}" + raise ValueError(msg) return matches[1] def get_scipy_url() -> str: url = f"https://docs.scipy.org/doc/scipy-{get_version('scipy')}/" r = requests.get(url) - if r.status_code != 200: + if r.status_code != 200: # noqa: PLR2004 return "https://docs.scipy.org/doc/scipy" return url @@ -290,7 +291,7 @@ def get_scipy_url() -> str: def get_tensorflow_url() -> str: url = f"https://www.tensorflow.org/versions/r{get_minor_version('tensorflow')}/api_docs/python" r = requests.get(url + "/tf") - if r.status_code != 200: + if r.status_code != 200: # noqa: PLR2004 url = "https://www.tensorflow.org/api_docs/python" return url diff --git a/docs/usage.ipynb b/docs/usage.ipynb index 0819c10b..6c6c1b29 100644 --- a/docs/usage.ipynb +++ b/docs/usage.ipynb @@ -46,8 +46,6 @@ "%config InlineBackend.figure_formats = ['svg']\n", "import os\n", "\n", - "from IPython.display import display # noqa: F401\n", - "\n", "STATIC_WEB_PAGE = {\"EXECUTE_NB\", \"READTHEDOCS\"}.intersection(os.environ)" ] }, @@ -408,7 +406,7 @@ }, "outputs": [], "source": [ - "from IPython.display import Image\n", + "from IPython.display import Image, display\n", "\n", "with open(\"fit-animation.gif\", \"rb\") as f:\n", " display(Image(data=f.read(), format=\"png\"))" @@ -506,9 +504,10 @@ "import numpy as np\n", "\n", "sample_size = 1_000_000\n", + "rng = np.random.default_rng(0)\n", "data = {\n", - " \"x\": np.random.uniform(-50, +50, sample_size),\n", - " \"y\": np.random.uniform(0.1, 2.0, sample_size),\n", + " \"x\": rng.uniform(-50, +50, sample_size),\n", + " \"y\": rng.uniform(0.1, 2.0, sample_size),\n", "}" ] }, @@ -866,7 +865,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.8.13" + "version": "3.8.17" } }, "nbformat": 4, diff --git a/docs/usage/basics.ipynb b/docs/usage/basics.ipynb index 5999051e..6bd0f2d7 100644 --- 
a/docs/usage/basics.ipynb +++ b/docs/usage/basics.ipynb @@ -46,8 +46,6 @@ "%config InlineBackend.figure_formats = ['svg']\n", "import os\n", "\n", - "from IPython.display import display # noqa: F401\n", - "\n", "STATIC_WEB_PAGE = {\"EXECUTE_NB\", \"READTHEDOCS\"}.intersection(os.environ)" ] }, @@ -116,6 +114,7 @@ "import numpy as np\n", "import pandas as pd\n", "import sympy as sp\n", + "from IPython.display import display\n", "from matplotlib import MatplotlibDeprecationWarning\n", "from sympy.plotting import plot3d\n", "\n", @@ -1308,7 +1307,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.8.16" + "version": "3.8.17" } }, "nbformat": 4, diff --git a/docs/usage/binned-fit.ipynb b/docs/usage/binned-fit.ipynb index 6847548e..a35ab46c 100644 --- a/docs/usage/binned-fit.ipynb +++ b/docs/usage/binned-fit.ipynb @@ -46,8 +46,6 @@ "%config InlineBackend.figure_formats = ['svg']\n", "import os\n", "\n", - "from IPython.display import display # noqa: F401\n", - "\n", "STATIC_WEB_PAGE = {\"EXECUTE_NB\", \"READTHEDOCS\"}.intersection(os.environ)" ] }, @@ -337,7 +335,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.8.13" + "version": "3.8.17" } }, "nbformat": 4, diff --git a/docs/usage/caching.ipynb b/docs/usage/caching.ipynb index 5cfadad6..e3ea5282 100644 --- a/docs/usage/caching.ipynb +++ b/docs/usage/caching.ipynb @@ -46,8 +46,6 @@ "%config InlineBackend.figure_formats = ['svg']\n", "import os\n", "\n", - "from IPython.display import display # noqa: F401\n", - "\n", "STATIC_WEB_PAGE = {\"EXECUTE_NB\", \"READTHEDOCS\"}.intersection(os.environ)" ] }, @@ -169,6 +167,7 @@ "outputs": [], "source": [ "import graphviz\n", + "from IPython.display import display\n", "\n", "\n", "class SymbolIdentifiable(sp.Symbol):\n", @@ -273,7 +272,7 @@ "outputs": [], "source": [ "visualize_free_symbols(top_expression, free_symbols)\n", - "for symbol, expr in sub_expressions.items():\n", + "for expr in sub_expressions.values():\n", " dot = sp.dotprint(expr, styles=dot_style, bgcolor=\"none\")\n", " display(graphviz.Source(dot))" ] @@ -549,7 +548,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.8.13" + "version": "3.8.17" } }, "nbformat": 4, diff --git a/docs/usage/chi-squared.ipynb b/docs/usage/chi-squared.ipynb index 97d4e397..a49665f6 100644 --- a/docs/usage/chi-squared.ipynb +++ b/docs/usage/chi-squared.ipynb @@ -46,8 +46,6 @@ "%config InlineBackend.figure_formats = ['svg']\n", "import os\n", "\n", - "from IPython.display import display # noqa: F401\n", - "\n", "STATIC_WEB_PAGE = {\"EXECUTE_NB\", \"READTHEDOCS\"}.intersection(os.environ)" ] }, @@ -283,7 +281,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.8.13" + "version": "3.8.17" } }, "nbformat": 4, diff --git a/docs/usage/faster-lambdify.ipynb b/docs/usage/faster-lambdify.ipynb index 9b0b0f8d..3ba8eb4a 100644 --- a/docs/usage/faster-lambdify.ipynb +++ b/docs/usage/faster-lambdify.ipynb @@ -46,8 +46,6 @@ "%config InlineBackend.figure_formats = ['svg']\n", "import os\n", "\n", - "from IPython.display import display # noqa: F401\n", - "\n", "STATIC_WEB_PAGE = {\"EXECUTE_NB\", \"READTHEDOCS\"}.intersection(os.environ)" ] }, @@ -422,7 +420,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.8.13" + "version": "3.8.17" } }, "nbformat": 4, diff --git a/docs/usage/unbinned-fit.ipynb b/docs/usage/unbinned-fit.ipynb 
index 91afe9f0..92456964 100644 --- a/docs/usage/unbinned-fit.ipynb +++ b/docs/usage/unbinned-fit.ipynb @@ -46,8 +46,6 @@ "%config InlineBackend.figure_formats = ['svg']\n", "import os\n", "\n", - "from IPython.display import display # noqa: F401\n", - "\n", "STATIC_WEB_PAGE = {\"EXECUTE_NB\", \"READTHEDOCS\"}.intersection(os.environ)" ] }, @@ -360,7 +358,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.8.13" + "version": "3.8.17" } }, "nbformat": 4, diff --git a/pyproject.toml b/pyproject.toml index 92d691dd..3c8c81b1 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -31,22 +31,230 @@ target-version = [ "py39", ] -[tool.isort] -profile = "black" -src_paths = [ +[tool.coverage.run] +branch = true +source = ["src"] + +[tool.mypy] +disallow_incomplete_defs = true +disallow_untyped_defs = true +exclude = "_build" +show_error_codes = true +warn_unused_configs = true + +[[tool.mypy.overrides]] +check_untyped_defs = true +disallow_incomplete_defs = false +disallow_untyped_defs = false +module = ["benchmarks.*", "tests.*"] + +[[tool.mypy.overrides]] +ignore_errors = true +module = ["typings.*"] + +[[tool.mypy.overrides]] +ignore_missing_imports = true +module = ["IPython.*"] + +[[tool.mypy.overrides]] +ignore_missing_imports = true +module = ["iminuit.*"] + +[[tool.mypy.overrides]] +ignore_missing_imports = true +module = ["numba.*"] + +[[tool.mypy.overrides]] +ignore_missing_imports = true +module = ["numpy.*"] + +[[tool.mypy.overrides]] +ignore_missing_imports = true +module = ["phasespace.*"] + +[[tool.mypy.overrides]] +ignore_missing_imports = true +module = ["scipy.*"] + +[[tool.mypy.overrides]] +ignore_missing_imports = true +module = ["sphinx.*"] + +[[tool.mypy.overrides]] +ignore_missing_imports = true +module = ["tensorflow.*"] + +[[tool.mypy.overrides]] +ignore_missing_imports = true +module = ["tqdm.*"] + +[tool.nbqa.addopts] +black = ["--line-length=85"] +ruff = [ + "--extend-ignore=B018", + "--extend-ignore=C90", + "--extend-ignore=D", + "--extend-ignore=N806", + "--extend-ignore=N816", + "--extend-ignore=PLR09", + "--extend-ignore=PLR2004", + "--extend-ignore=PLW0602", + "--extend-ignore=PLW0603", + "--extend-ignore=S301", + "--line-length=85", +] + +[tool.pyright] +exclude = [ + "**/.git", + "**/.ipynb_checkpoints", + "**/.mypy_cache", + "**/.pytest_cache", + "**/.tox", + "**/__pycache__", + "**/_build", +] +reportGeneralTypeIssues = false +reportMissingParameterType = false +reportMissingTypeArgument = false +reportMissingTypeStubs = false +reportPrivateImportUsage = false +reportUnboundVariable = false +reportUnknownArgumentType = false +reportUnknownLambdaType = false +reportUnknownMemberType = false +reportUnknownParameterType = false +reportUnknownVariableType = false +reportUnnecessaryComparison = true +reportUnnecessaryIsInstance = false +reportUnusedClass = true +reportUnusedFunction = true +reportUnusedImport = true +reportUnusedVariable = true +typeCheckingMode = "strict" + +[tool.pytest.ini_options] +addopts = ''' +--color=yes +--doctest-continue-on-failure +--doctest-modules +--durations=3 +--ignore-glob=*/.ipynb_checkpoints/* +--ignore=docs/abbreviate_signature.py +--ignore=docs/conf.py +-k "not benchmark" +-m "not slow"''' +doctest_optionflags = "NORMALIZE_WHITESPACE" +filterwarnings = [ + "error", + "ignore: `np.bool8` is a deprecated alias for `np.bool_`.*:DeprecationWarning", + "ignore:.* is deprecated and will be removed in Pillow 10.*:DeprecationWarning", + "ignore:.*Using or importing the 
ABCs.*:DeprecationWarning", + "ignore:.*the imp module is deprecated in favour of importlib.*:DeprecationWarning", + "ignore:Passing a schema to Validator.iter_errors is deprecated.*:DeprecationWarning", + "ignore:Please use `spmatrix` from the `scipy.sparse` namespace.*:DeprecationWarning", + "ignore:The .* argument to NotebookFile is deprecated.*:pytest.PytestRemovedIn8Warning", + "ignore:The distutils package is deprecated and slated for removal.*:DeprecationWarning", + "ignore:divide by zero encountered in divide:RuntimeWarning", + "ignore:divide by zero encountered in true_divide:RuntimeWarning", + "ignore:invalid value encountered in .*:RuntimeWarning", + "ignore:numpy.ufunc size changed, may indicate binary incompatibility.*:RuntimeWarning", + "ignore:unclosed .*:ResourceWarning", +] +markers = ["slow: marks tests as slow (select with '-m slow')"] +norecursedirs = [ + "_build", + "docs/api", + "tests/output", +] +testpaths = [ + "benchmarks", "src", "tests", ] -[tool.nbqa.addopts] -black = [ - "--line-length=85", +[tool.ruff] +extend-exclude = ["typings"] +extend-select = [ + "A", + "B", + "BLE", + "C4", + "C90", + "D", + "EM", + "ERA", + "FA", + "I", + "ICN", + "INP", + "ISC", + "N", + "NPY", + "PGH", + "PIE", + "PL", + "Q", + "RET", + "RSE", + "RUF", + "S", + "SIM", + "T20", + "TCH", + "TID", + "TRY", + "UP", + "YTT", +] +ignore = [ + "C408", + "D101", + "D102", + "D103", + "D105", + "D107", + "D203", + "D213", + "D407", + "D416", + "E501", + "RUF012", + "S307", + "SIM108", ] -flake8 = [ - "--extend-ignore=E402,F821", +show-fixes = true +src = [ + "src", + "tests", ] +target-version = "py37" +task-tags = ["cspell"] -[tool.nbqa.skip_celltags] -flake8 = [ - "skip-flake8", +[tool.ruff.per-file-ignores] +"benchmarks/*" = [ + "D", + "PLR0913", + "PLR2004", + "S101", ] +"docs/*" = [ + "E402", + "INP001", + "S101", + "S113", + "T201", +] +"docs/conf.py" = ["PLW2901"] +"setup.py" = ["D100"] +"tests/*" = [ + "D", + "INP001", + "PGH001", + "PLR0913", + "PLR2004", + "S101", +] + +[tool.ruff.pydocstyle] +convention = "google" diff --git a/pyrightconfig.json b/pyrightconfig.json deleted file mode 100644 index 371379e5..00000000 --- a/pyrightconfig.json +++ /dev/null @@ -1,28 +0,0 @@ -{ - "exclude": [ - "**/__pycache__", - "**/_build", - "**/.git", - "**/.ipynb_checkpoints", - "**/.mypy_cache", - "**/.pytest_cache", - "**/.tox" - ], - "reportGeneralTypeIssues": false, - "reportMissingParameterType": false, - "reportMissingTypeArgument": false, - "reportMissingTypeStubs": false, - "reportPrivateImportUsage": false, - "reportUnboundVariable": false, - "reportUnknownArgumentType": false, - "reportUnknownLambdaType": false, - "reportUnknownMemberType": false, - "reportUnknownParameterType": false, - "reportUnknownVariableType": false, - "reportUnnecessaryComparison": true, - "reportUnnecessaryIsInstance": false, - "reportUnusedClass": true, - "reportUnusedFunction": true, - "reportUnusedImport": true, - "reportUnusedVariable": true -} diff --git a/pytest.ini b/pytest.ini deleted file mode 100644 index a54430af..00000000 --- a/pytest.ini +++ /dev/null @@ -1,41 +0,0 @@ -[coverage:run] -branch = True -source = src - -[pytest] -addopts = - --color=yes - --doctest-continue-on-failure - --doctest-modules - --durations=3 - --ignore-glob=*/.ipynb_checkpoints/* - --ignore=docs/abbreviate_signature.py - --ignore=docs/conf.py - -k "not benchmark" - -m "not slow" -doctest_optionflags = NORMALIZE_WHITESPACE -filterwarnings = - error - ignore:.* is deprecated and will be removed in Pillow 10.*:DeprecationWarning 
- ignore:.*Using or importing the ABCs.*:DeprecationWarning - ignore:.*the imp module is deprecated in favour of importlib.*:DeprecationWarning - ignore:Passing a schema to Validator.iter_errors is deprecated.*:DeprecationWarning - ignore:Please use `spmatrix` from the `scipy.sparse` namespace.*:DeprecationWarning - ignore:The .* argument to NotebookFile is deprecated.*:pytest.PytestRemovedIn8Warning - ignore:The distutils package is deprecated and slated for removal.*:DeprecationWarning - ignore:divide by zero encountered in divide:RuntimeWarning - ignore:divide by zero encountered in true_divide:RuntimeWarning - ignore:invalid value encountered in .*:RuntimeWarning - ignore:numpy.ufunc size changed, may indicate binary incompatibility.*:RuntimeWarning - ignore:unclosed .*:ResourceWarning - ignore: `np.bool8` is a deprecated alias for `np.bool_`.*:DeprecationWarning -markers = - slow: marks tests as slow (select with '-m slow') -norecursedirs = - _build - docs/api - tests/output -testpaths = - benchmarks - src - tests diff --git a/setup.cfg b/setup.cfg index 964c2752..f1884775 100644 --- a/setup.cfg +++ b/setup.cfg @@ -60,7 +60,6 @@ numba = numba tf = tensorflow >=2.4 # tensorflow.experimental.numpy - tensorflow !=2.12; python_version <"3.11.0" # slow benchmarks tensorflow = %(tf)s phsp = @@ -108,26 +107,12 @@ test = %(test-types)s ampform >=0.13 # https://github.com/ComPWA/ampform/issues/208 nbmake - nbmake !=1.3.* # https://github.com/ComPWA/tensorwaves/actions/runs/4115773410/jobs/7104945298#step:3:84 - nbmake !=1.4.* # https://github.com/ComPWA/tensorwaves/actions/runs/4115722644/jobs/7104829138 + nbmake <1.3; python_version <"3.8.0" pytest-benchmark pytest-cov pytest-xdist format = black - isort -flake8 = - flake8 >=4; python_version >="3.8.0" # extend-select - flake8-blind-except; python_version >="3.8.0" - flake8-bugbear; python_version >="3.8.0" - flake8-builtins; python_version >="3.8.0" - flake8-comprehensions; python_version >="3.8.0" - flake8-future-annotations; python_version >="3.8.0" - flake8-pytest-style; python_version >="3.8.0" - flake8-rst-docstrings; python_version >="3.8.0" - flake8-type-ignore; python_version >="3.8.0" - flake8-use-fstring; python_version >="3.8.0" - pep8-naming; python_version >="3.8.0" mypy = %(jax)s %(pwa)s @@ -139,28 +124,26 @@ mypy = types-requests types-setuptools lint = - %(flake8)s %(mypy)s - pydocstyle - pylint >=2.5 # good-names-rgxs + ruff sty = %(format)s %(lint)s pre-commit >=1.4.0 +jupyter = + %(doc)s + jupyterlab + jupyterlab-code-formatter + jupyterlab-myst + ypy-websocket <0.8.3; python_version <"3.8.0" dev = %(all)s %(doc)s + %(jupyter)s %(sty)s %(test)s - aquirdturtle-collapsible-headings - jupyterlab - jupyterlab-code-formatter - jupyterlab-myst sphinx-autobuild tox >=1.9 # for skip_install, use_develop - tox !=4.*; python_version <"3.8.0" # https://github.com/ComPWA/tensorwaves/actions/runs/4114638663/jobs/7102281592#step:3:97 - virtualenv <20.22.0; python_version <"3.8.0" # importlib-metadata conflict - ypy-websocket <0.8.3 # https://github.com/ComPWA/tensorwaves/actions/runs/4350354717/jobs/7600982077#step:3:78 [options.packages.find] where = src diff --git a/setup.py b/setup.py index b3b0f20b..93296978 100644 --- a/setup.py +++ b/setup.py @@ -1,5 +1,3 @@ -# noqa: D100 - from setuptools import setup setup( diff --git a/src/tensorwaves/data/__init__.py b/src/tensorwaves/data/__init__.py index 009e4dc7..5381e730 100644 --- a/src/tensorwaves/data/__init__.py +++ b/src/tensorwaves/data/__init__.py @@ -1,4 +1,3 @@ -# pylint: 
disable=too-many-arguments """The `.data` module takes care of data generation.""" from __future__ import annotations @@ -23,12 +22,12 @@ ) # pyright: reportUnusedImport=false -from .phasespace import ( # noqa:F401 - TFPhaseSpaceGenerator, - TFWeightedPhaseSpaceGenerator, +from .phasespace import ( + TFPhaseSpaceGenerator, # noqa: F401 + TFWeightedPhaseSpaceGenerator, # noqa: F401 ) -from .rng import NumpyUniformRNG, TFUniformRealNumberGenerator # noqa:F401 -from .transform import IdentityTransformer, SympyDataTransformer # noqa:F401 +from .rng import NumpyUniformRNG, TFUniformRealNumberGenerator # noqa: F401 +from .transform import IdentityTransformer, SympyDataTransformer # noqa: F401 _LOGGER = logging.getLogger(__name__) diff --git a/src/tensorwaves/data/_attrs.py b/src/tensorwaves/data/_attrs.py index e6631cea..f2f0d0f5 100644 --- a/src/tensorwaves/data/_attrs.py +++ b/src/tensorwaves/data/_attrs.py @@ -1,8 +1,9 @@ from __future__ import annotations -from typing import Iterable +from typing import TYPE_CHECKING, Iterable -from tensorwaves.interface import DataTransformer +if TYPE_CHECKING: + from tensorwaves.interface import DataTransformer def to_tuple(items: Iterable[DataTransformer]) -> tuple[DataTransformer, ...]: diff --git a/src/tensorwaves/data/_data_sample.py b/src/tensorwaves/data/_data_sample.py index 7afcc04e..b8fcd871 100644 --- a/src/tensorwaves/data/_data_sample.py +++ b/src/tensorwaves/data/_data_sample.py @@ -1,12 +1,14 @@ """Helper functions for modifying `.DataSample` instances.""" from __future__ import annotations -from typing import Any, Callable +from typing import TYPE_CHECKING, Any, Callable import numpy as np -from tqdm.auto import tqdm -from tensorwaves.interface import DataSample +if TYPE_CHECKING: + from tqdm.auto import tqdm + + from tensorwaves.interface import DataSample def get_number_of_events(four_momenta: DataSample) -> int: @@ -31,9 +33,8 @@ def _determine_merge_method( return np.concatenate if rank > 1: return np.vstack - raise NotImplementedError( - f"Cannot find a merge method for data samples of rank {rank}" - ) + msg = f"Cannot find a merge method for data samples of rank {rank}" + raise NotImplementedError(msg) def _merge_events( @@ -42,9 +43,8 @@ def _merge_events( merge_method: Callable[[tuple[np.ndarray, np.ndarray]], np.ndarray], ) -> DataSample: if len(sample1) and len(sample2) and set(sample1) != set(sample2): - raise ValueError( - "Keys of data sets are not matching", set(sample2), set(sample1) - ) + msg = "Keys of data sets are not matching" + raise ValueError(msg, set(sample2), set(sample1)) if get_number_of_events(sample1) == 0: return sample2 return {i: merge_method((array, sample2[i])) for i, array in sample1.items()} @@ -59,5 +59,5 @@ def finalize_progress_bar(progress_bar: tqdm) -> None: remainder = progress_bar.total - progress_bar.n else: remainder = 0 - progress_bar.update(n=remainder) # pylint crashes if total is set directly + progress_bar.update(n=remainder) progress_bar.close() diff --git a/src/tensorwaves/data/phasespace.py b/src/tensorwaves/data/phasespace.py index ef675238..863ae1b7 100644 --- a/src/tensorwaves/data/phasespace.py +++ b/src/tensorwaves/data/phasespace.py @@ -1,4 +1,3 @@ -# pylint: disable=import-outside-toplevel """Implementations of a `.DataGenerator` for four-momentum samples.""" from __future__ import annotations @@ -65,11 +64,12 @@ def generate(self, size: int, rng: RealNumberGenerator) -> DataSample: phsp_momenta = self.__phsp_generator.generate(self.__bunch_size, rng) weights = 
phsp_momenta.get("weights") if weights is None: - raise ValueError( + msg = ( "DataSample returned by" f" {type(self.__phsp_generator).__name__} doesn't contain" ' "weights"' ) + raise ValueError(msg) hit_and_miss_randoms = rng(self.__bunch_size) bunch = select_events(phsp_momenta, selector=weights > hit_and_miss_randoms) momentum_pool = merge_events(momentum_pool, bunch) @@ -119,11 +119,12 @@ def generate(self, size: int, rng: RealNumberGenerator) -> DataSample: of weights. The four-momenta are arrays of shape :math:`n \times 4`. """ if not isinstance(rng, TFUniformRealNumberGenerator): - raise TypeError( - f"{type(self).__name__} requires a " - f"{TFUniformRealNumberGenerator.__name__}, but got a " - f"{type(rng).__name__}" + msg = ( + f"{type(self).__name__} requires a" + f" {TFUniformRealNumberGenerator.__name__}, but got a" + f" {type(rng).__name__}" ) + raise TypeError(msg) weights, particles = self.__phsp_gen.generate(n_events=size, seed=rng.generator) phsp_momenta = { f"p{label}": _to_numpy(momenta)[:, [3, 0, 1, 2]] diff --git a/src/tensorwaves/data/rng.py b/src/tensorwaves/data/rng.py index 57855aad..c429442b 100644 --- a/src/tensorwaves/data/rng.py +++ b/src/tensorwaves/data/rng.py @@ -1,4 +1,3 @@ -# pylint:disable=import-outside-toplevel """Implementations of `.RealNumberGenerator`.""" from __future__ import annotations @@ -36,7 +35,8 @@ def seed(self, value: float | None) -> None: generator_seed: float | int | None = self.seed if generator_seed is not None: if not float(generator_seed).is_integer(): - raise ValueError("NumPy generator seed has to be integer") + msg = "NumPy generator seed has to be integer" + raise ValueError(msg) generator_seed = int(generator_seed) self.generator: np.random.Generator = np.random.default_rng(seed=generator_seed) @@ -73,7 +73,7 @@ def seed(self, value: float | None) -> None: self.generator = _get_tensorflow_rng(self.seed) -def _get_tensorflow_rng(seed: SeedLike = None) -> tf.random.Generator: +def _get_tensorflow_rng(seed: SeedLike | None = None) -> tf.random.Generator: """Get or create a `tf.random.Generator`. 
https://github.com/zfit/phasespace/blob/5998e2b/phasespace/random.py#L15-L41
@@ -89,4 +89,5 @@
         return tf.random.Generator.from_seed(seed=seed)
     if isinstance(seed, tf.random.Generator):
         return seed
-    raise TypeError(f"Cannot create a tf.random.Generator from a {type(seed).__name__}")
+    msg = f"Cannot create a tf.random.Generator from a {type(seed).__name__}"
+    raise TypeError(msg)
diff --git a/src/tensorwaves/data/transform.py b/src/tensorwaves/data/transform.py
index a75c5faa..44da1f80 100644
--- a/src/tensorwaves/data/transform.py
+++ b/src/tensorwaves/data/transform.py
@@ -8,8 +8,6 @@
 from tensorwaves.function import PositionalArgumentFunction
 from tensorwaves.function.sympy import (
     _get_free_symbols,  # pyright: ignore[reportPrivateUsage]
-)
-from tensorwaves.function.sympy import (
     _lambdify_normal_or_fast,  # pyright: ignore[reportPrivateUsage]
 )
 from tensorwaves.interface import DataSample, DataTransformer, Function
@@ -58,9 +56,10 @@ class SympyDataTransformer(DataTransformer):

     def __init__(self, functions: Mapping[str, Function]) -> None:
         if any(not isinstance(f, Function) for f in functions.values()):
-            raise TypeError(
+            msg = (
                 f"Not all values in the mapping are an instance of {Function.__name__}"
             )
+            raise TypeError(msg)
         self.__functions = dict(functions)

     @property
diff --git a/src/tensorwaves/estimator.py b/src/tensorwaves/estimator.py
index dca58023..44b618ce 100644
--- a/src/tensorwaves/estimator.py
+++ b/src/tensorwaves/estimator.py
@@ -6,8 +6,6 @@

 from typing import TYPE_CHECKING, Callable, Iterable, Mapping

-import numpy as np
-
 from tensorwaves.data.transform import SympyDataTransformer
 from tensorwaves.function._backend import find_function, raise_missing_module_error
 from tensorwaves.function.sympy import create_parametrized_function, prepare_caching
@@ -20,6 +18,7 @@
 )

 if TYPE_CHECKING:
+    import numpy as np
     import sympy as sp


@@ -44,6 +43,7 @@ def create_cached_function(
         backend: The computational backend in which to express the input
             :code:`expression`.
+        free_parameters: Symbols in the expression that change and should not be cached.
         use_cse: See :func:`.create_parametrized_function`.

     Returns:
@@ -75,7 +75,6 @@ def gradient_creator(
     function: Callable[[Mapping[str, ParameterValue]], ParameterValue],
     backend: str,
 ) -> Callable[[Mapping[str, ParameterValue]], dict[str, ParameterValue]]:
-    # pylint: disable=import-outside-toplevel
     if backend == "jax":
         try:
             import jax
@@ -90,7 +89,8 @@
     def raise_gradient_not_implemented(
         parameters: Mapping[str, ParameterValue]
     ) -> dict[str, ParameterValue]:
-        raise NotImplementedError(f"Gradient not implemented for back-end {backend}.")
+        msg = f"Gradient not implemented for back-end {backend}."
+        raise NotImplementedError(msg)

     return raise_gradient_not_implemented
@@ -117,7 +117,7 @@ class ChiSquared(Estimator):
     .. seealso:: :doc:`/usage/chi-squared`
     """

-    def __init__(
+    def __init__(  # noqa: PLR0913
         self,
         function: ParametrizedFunction,
         domain: DataSample,
@@ -149,7 +149,7 @@ def gradient(
         return self.__gradient(parameters)


-class UnbinnedNLL(Estimator):  # pylint: disable=too-many-instance-attributes
+class UnbinnedNLL(Estimator):
     r"""Unbinned negative log likelihood estimator.

     The **log likelihood** :math:`\log\mathcal{L}` for a given function
@@ -184,7 +184,7 @@ class UnbinnedNLL(Estimator):
     .. 
seealso:: :doc:`/usage/unbinned-fit` """ - def __init__( # pylint: disable=too-many-arguments + def __init__( # noqa: PLR0913 self, function: ParametrizedFunction, data: DataSample, diff --git a/src/tensorwaves/function/__init__.py b/src/tensorwaves/function/__init__.py index 601703d1..da288da1 100644 --- a/src/tensorwaves/function/__init__.py +++ b/src/tensorwaves/function/__init__.py @@ -2,10 +2,9 @@ from __future__ import annotations import inspect -from typing import Callable, Iterable, Mapping +from typing import TYPE_CHECKING, Callable, Iterable, Mapping import attrs -import numpy as np from attrs import field, frozen from tensorwaves.interface import ( @@ -15,12 +14,16 @@ ParametrizedFunction, ) +if TYPE_CHECKING: + import numpy as np + def _all_str( _: PositionalArgumentFunction, __: attrs.Attribute, value: Iterable[str] ) -> None: if not all(isinstance(s, str) for s in value): - raise TypeError(f"Not all arguments are of type {str.__name__}") + msg = f"Not all arguments are of type {str.__name__}" + raise TypeError(msg) def _all_unique( @@ -33,16 +36,16 @@ def _all_unique( n_occurrences = argument_names.count(arg_name) if n_occurrences > 1: duplicate_arguments.append(arg_name) - raise ValueError( - f"There are duplicate argument names: {sorted(set(duplicate_arguments))}" - ) + msg = f"There are duplicate argument names: {sorted(set(duplicate_arguments))}" + raise ValueError(msg) def _validate_arguments( instance: PositionalArgumentFunction, _: attrs.Attribute, value: Callable ) -> None: if not callable(value): - raise TypeError("Function is not callable") + msg = "Function is not callable" + raise TypeError(msg) n_args = len(instance.argument_order) signature = inspect.signature(value) if len(signature.parameters) != n_args: @@ -50,10 +53,11 @@ def _validate_arguments( parameter = next(iter(signature.parameters.values())) if parameter.kind == parameter.VAR_POSITIONAL: return - raise ValueError( - f"Lambdified function expects {len(signature.parameters)}" - f" arguments, but {n_args} sorted arguments were provided." + msg = ( + f"Lambdified function expects {len(signature.parameters)} arguments, but" + f" {n_args} sorted arguments were provided." ) + raise ValueError(msg) def _to_tuple(argument_order: Iterable[str]) -> tuple[str, ...]: @@ -121,10 +125,11 @@ def update_parameters(self, new_parameters: Mapping[str, ParameterValue]) -> Non if over_defined: sep = "\n " parameter_listing = f"{sep}".join(sorted(self.__parameters)) - raise ValueError( - f"Parameters {over_defined} do not exist in function" - f" arguments. Expecting one of:{sep}{parameter_listing}" + msg = ( + f"Parameters {over_defined} do not exist in function arguments." 
+ f" Expecting one of:{sep}{parameter_listing}" ) + raise ValueError(msg) self.__parameters.update(new_parameters) @@ -143,6 +148,7 @@ def _lambdifygenerated(x, y): """ if isinstance(function, (PositionalArgumentFunction, ParametrizedBackendFunction)): return inspect.getsource(function.function) - raise NotImplementedError( + msg = ( f"Cannot get source code for {Function.__name__} type {type(function).__name__}" ) + raise NotImplementedError(msg) diff --git a/src/tensorwaves/function/_backend.py b/src/tensorwaves/function/_backend.py index 6e7e2053..d7859bcb 100644 --- a/src/tensorwaves/function/_backend.py +++ b/src/tensorwaves/function/_backend.py @@ -17,9 +17,8 @@ def find_function(function_name: str, backend: str) -> Callable: module_dict = module.__dict__ if function_name in module_dict: return module_dict[function_name] - raise ValueError( - f'Could not find function "{function_name}" in backend "{backend}"' - ) + msg = f'Could not find function "{function_name}" in backend "{backend}"' + raise ValueError(msg) def get_backend_modules(backend: str | tuple | dict) -> str | tuple | dict: @@ -29,7 +28,6 @@ def get_backend_modules(backend: str | tuple | dict) -> str | tuple | dict: :code:`modules` argument. Several back-ends can be specified by passing a `tuple` or dict`. """ - # pylint: disable=import-outside-toplevel if isinstance(backend, str): if backend == "jax": try: @@ -48,7 +46,6 @@ def get_backend_modules(backend: str | tuple | dict) -> str | tuple | dict: # returning only np.__dict__ does not work well with conditionals if backend in {"tensorflow", "tf"}: try: - # pylint: disable=import-error, no-name-in-module import tensorflow as tf import tensorflow.experimental.numpy as tnp # pyright: ignore[reportMissingImports] from tensorflow.python.ops.numpy_ops import np_config @@ -63,7 +60,6 @@ def get_backend_modules(backend: str | tuple | dict) -> str | tuple | dict: def jit_compile(backend: str) -> Callable[[Callable], Callable]: - # pylint: disable=import-outside-toplevel backend = backend.lower() if backend == "jax": try: @@ -74,7 +70,6 @@ def jit_compile(backend: str) -> Callable[[Callable], Callable]: if backend == "numba": try: - # pylint: disable=import-error import numba # pyright: ignore[reportMissingImports] except ImportError: # pragma: no cover raise_missing_module_error("numba", extras_require="numba") diff --git a/src/tensorwaves/function/sympy/__init__.py b/src/tensorwaves/function/sympy/__init__.py index b8c8f327..6a682d37 100644 --- a/src/tensorwaves/function/sympy/__init__.py +++ b/src/tensorwaves/function/sympy/__init__.py @@ -1,4 +1,3 @@ -# pylint: disable=import-outside-toplevel """Lambdify `sympy` expression trees to a `.Function`.""" from __future__ import annotations @@ -13,12 +12,13 @@ jit_compile, raise_missing_module_error, ) -from tensorwaves.interface import ParameterValue if TYPE_CHECKING: # pragma: no cover import sympy as sp from sympy.printing.printer import Printer + from tensorwaves.interface import ParameterValue + _LOGGER = logging.getLogger(__name__) @@ -170,7 +170,7 @@ def _lambdify_normal_or_fast( ) -def lambdify( # pylint: disable=too-many-return-statements +def lambdify( # noqa: C901, PLR0911 expression: sp.Expr, symbols: Sequence[sp.Symbol], backend: str, @@ -191,7 +191,7 @@ def lambdify( # pylint: disable=too-many-return-statements """ def jax_lambdify() -> Callable: - from ._printer import JaxPrinter # pylint: disable=import-outside-toplevel + from ._printer import JaxPrinter return jit_compile(backend="jax")( _sympy_lambdify( @@ -215,7 
+215,6 @@ def numba_lambdify() -> Callable: def tensorflow_lambdify() -> Callable: try: - # pylint: disable=import-error import tensorflow.experimental.numpy as tnp # pyright: ignore[reportMissingImports] except ImportError: # pragma: no cover raise_missing_module_error("tensorflow", extras_require="tf") @@ -279,7 +278,7 @@ def _sympy_lambdify( ) -def fast_lambdify( # pylint: disable=too-many-locals +def fast_lambdify( # noqa: PLR0913 expression: sp.Expr, symbols: Sequence[sp.Symbol], backend: str, @@ -517,7 +516,7 @@ def recursive_split(sub_expression: sp.Basic) -> sp.Expr: remaining_symbols = free_symbols - set(symbol_mapping) symbol_mapping.update({s: s for s in remaining_symbols}) remainder = progress_bar.total - progress_bar.n - progress_bar.update(n=remainder) # pylint crashes if total is set directly + progress_bar.update(n=remainder) progress_bar.close() return top_expression, symbol_mapping diff --git a/src/tensorwaves/function/sympy/_printer.py b/src/tensorwaves/function/sympy/_printer.py index 3f546012..3c8d8c2c 100644 --- a/src/tensorwaves/function/sympy/_printer.py +++ b/src/tensorwaves/function/sympy/_printer.py @@ -1,10 +1,9 @@ -# pylint: disable=abstract-method protected-access too-many-ancestors from __future__ import annotations import re from typing import TYPE_CHECKING, Any, Callable, Iterable, TypeVar -from sympy.printing.numpy import NumPyPrinter # noqa: E402 +from sympy.printing.numpy import NumPyPrinter if TYPE_CHECKING: # pragma: no cover import sympy as sp diff --git a/src/tensorwaves/interface.py b/src/tensorwaves/interface.py index 637a5f48..96d8ea00 100644 --- a/src/tensorwaves/interface.py +++ b/src/tensorwaves/interface.py @@ -13,9 +13,9 @@ from IPython.lib.pretty import PrettyPrinter -InputType = TypeVar("InputType") # pylint: disable=invalid-name +InputType = TypeVar("InputType") """The argument type of a :meth:`.Function.__call__`.""" -OutputType = TypeVar("OutputType") # pylint: disable=invalid-name +OutputType = TypeVar("OutputType") """The return type of a :meth:`.Function.__call__`.""" @@ -99,7 +99,7 @@ def gradient( @frozen -class FitResult: # pylint: disable=too-many-instance-attributes +class FitResult: minimum_valid: bool = field(validator=instance_of(bool)) execution_time: float = field(validator=instance_of(float)) function_calls: int = field(validator=instance_of(int)) @@ -132,9 +132,8 @@ def _check_parameter_errors( return for par_name in value: if par_name not in self.parameter_values: - raise ValueError( - f'No parameter value exists for parameter error "{par_name}"' - ) + msg = f'No parameter value exists for parameter error "{par_name}"' + raise ValueError(msg) def _repr_pretty_(self, p: PrettyPrinter, cycle: bool) -> None: class_name = type(self).__name__ diff --git a/src/tensorwaves/optimizer/__init__.py b/src/tensorwaves/optimizer/__init__.py index 5deacb9d..bdc43872 100644 --- a/src/tensorwaves/optimizer/__init__.py +++ b/src/tensorwaves/optimizer/__init__.py @@ -11,11 +11,11 @@ # pyright: reportUnusedImport=false from . import callbacks, minuit -from .minuit import Minuit2 +from .minuit import Minuit2 # noqa: F401 try: from . 
import scipy - from .scipy import ScipyMinimizer + from .scipy import ScipyMinimizer # noqa: F401 __all__ += [ "scipy", diff --git a/src/tensorwaves/optimizer/_parameter.py b/src/tensorwaves/optimizer/_parameter.py index c2422636..ca063bab 100644 --- a/src/tensorwaves/optimizer/_parameter.py +++ b/src/tensorwaves/optimizer/_parameter.py @@ -1,8 +1,9 @@ from __future__ import annotations -from typing import Mapping +from typing import TYPE_CHECKING, Mapping -from tensorwaves.interface import ParameterValue +if TYPE_CHECKING: + from tensorwaves.interface import ParameterValue class ParameterFlattener: @@ -40,10 +41,11 @@ def flatten(self, parameters: Mapping[str, ParameterValue]) -> dict[str, float]: for par_name, value in parameters.items(): if isinstance(value, complex): if par_name not in self.__complex_to_real_imag_name: - raise ValueError( + msg = ( f"Parameter '{par_name}' has was not registered upon" f" constructing the {type(self).__name__}" ) + raise ValueError(msg) name_pair = self.__complex_to_real_imag_name[par_name] real_name, imag_name = name_pair flattened_parameters[real_name] = value.real diff --git a/src/tensorwaves/optimizer/callbacks.py b/src/tensorwaves/optimizer/callbacks.py index eb66c4e9..375740b9 100644 --- a/src/tensorwaves/optimizer/callbacks.py +++ b/src/tensorwaves/optimizer/callbacks.py @@ -1,18 +1,20 @@ -# pylint: disable=consider-using-with """Collection of loggers that can be inserted into an optimizer as callback.""" from __future__ import annotations import csv from abc import ABC, abstractmethod from datetime import datetime -from pathlib import Path -from typing import IO, Any, Iterable +from typing import IO, TYPE_CHECKING, Any, Iterable import numpy as np import yaml from tensorwaves.function._backend import raise_missing_module_error -from tensorwaves.interface import Estimator, Optimizer, ParameterValue + +if TYPE_CHECKING: + from pathlib import Path + + from tensorwaves.interface import Estimator, Optimizer, ParameterValue class Loadable(ABC): @@ -105,9 +107,8 @@ def __init__( iteration_step_size: int = 1, ) -> None: if function_call_step_size <= 0 and iteration_step_size <= 0: - raise ValueError( - "either function call or interaction step size should > 0." - ) + msg = "either function call or interaction step size should > 0." 
+            raise ValueError(msg)
         self.__function_call_step_size = function_call_step_size
         self.__iteration_step_size = iteration_step_size
         self.__latest_function_call: int | None = None
@@ -121,16 +122,17 @@ def __del__(self) -> None:

     def on_optimize_start(self, logs: dict[str, Any] | None = None) -> None:
         if logs is None:
-            raise ValueError(
-                f"{type(self).__name__} requires logs on optimize start"
-                " to determine header names"
+            msg = (
+                f"{type(self).__name__} requires logs on optimize start to determine"
+                " header names"
             )
+            raise ValueError(msg)
         if self.__function_call_step_size > 0:
             self.__latest_function_call = 0
         if self.__iteration_step_size > 0:
             self.__latest_iteration = 0
         _close_stream(self.__stream)
-        self.__stream = open(self.__filename, "w", newline="")
+        self.__stream = open(self.__filename, "w", newline="")  # noqa: SIM115
         self.__writer = csv.DictWriter(
             self.__stream,
             fieldnames=list(self.__log_to_rowdict(logs)),
@@ -206,7 +208,7 @@ def cast_non_numeric(value: str) -> complex | float | int | str:
             if float_value.is_integer():
                 return int(float_value)
             return float_value
-        return complex_value
+        return complex_value  # noqa: TRY300
     except ValueError:
         return value
@@ -239,7 +241,6 @@ def __init__(
         self.__stream: Any | None = None

     def on_optimize_start(self, logs: dict[str, Any] | None = None) -> None:
-        # pylint: disable=import-outside-toplevel, no-member
         try:
             import tensorflow as tf
         except ImportError:  # pragma: no cover
@@ -263,7 +264,6 @@ def on_iteration_end(
     def on_function_call_end(
         self, function_call: int, logs: dict[str, Any] | None = None
     ) -> None:
-        # pylint: disable=import-outside-toplevel, no-member
         try:
             import tensorflow as tf
         except ImportError:  # pragma: no cover
@@ -296,7 +296,7 @@ def __del__(self) -> None:

     def on_optimize_start(self, logs: dict[str, Any] | None = None) -> None:
         _close_stream(self.__stream)
-        self.__stream = open(self.__filename, "w")
+        self.__stream = open(self.__filename, "w")  # noqa: SIM115

     def on_optimize_end(self, logs: dict[str, Any] | None = None) -> None:
         if logs is None:
@@ -336,7 +336,7 @@ def __dump_to_yaml(self, logs: dict[str, Any]) -> None:
     @staticmethod
     def load_latest_parameters(filename: Path | str) -> dict:
         with open(filename) as stream:
-            fit_stats = yaml.load(stream, Loader=yaml.Loader)
+            fit_stats = yaml.load(stream, Loader=yaml.Loader)  # noqa: S506
         return fit_stats["parameters"]


@@ -348,7 +348,6 @@ def _cast_value(value: Any) -> ParameterValue:


 class _IncreasedIndent(yaml.Dumper):
-    # pylint: disable=too-many-ancestors
     def increase_indent(self, flow: bool = False, indentless: bool = False) -> None:
         return super().increase_indent(flow, False)
diff --git a/src/tensorwaves/optimizer/minuit.py b/src/tensorwaves/optimizer/minuit.py
index 223a4a12..75f1bddc 100644
--- a/src/tensorwaves/optimizer/minuit.py
+++ b/src/tensorwaves/optimizer/minuit.py
@@ -44,15 +44,16 @@ def __init__(
         self.__callback = callback
         self.__use_gradient = use_analytic_gradient
         if minuit_modifier is not None and not callable(minuit_modifier):
-            raise TypeError(
+            msg = (
                 "minuit_modifier has to be a callable that takes a"
-                f" {iminuit.Minuit.__module__}.{iminuit.Minuit.__name__} "
-                "instance. See constructor signature."
+                f" {iminuit.Minuit.__module__}.{iminuit.Minuit.__name__} instance. See"
+                " constructor signature."
             )
+            raise TypeError(msg)
         self.__minuit_modifier = minuit_modifier
         self.__migrad_args = {} if migrad_args is None else migrad_args

-    def optimize(  # pylint: disable=too-many-locals
+    def optimize(
         self,
         estimator: Estimator,
         initial_parameters: Mapping[str, ParameterValue],
@@ -113,7 +114,7 @@ def wrapped_gradient(pars: list) -> Iterable[float]:
             name=tuple(flattened_parameters),
         )
         minuit.errors = tuple(
-            0.1 * abs(x) if abs(x) != 0.0 else 0.1
+            0.1 * abs(x) if abs(x) != 0.0 else 0.1  # noqa: PLR2004
             for x in flattened_parameters.values()
         )
         minuit.errordef = (
@@ -134,7 +135,7 @@ def wrapped_gradient(pars: list) -> Iterable[float]:
             parameter_values[name] = par_state.value
             parameter_errors[name] = par_state.error

-        assert minuit.fmin is not None
+        assert minuit.fmin is not None  # noqa: S101
         fit_result = FitResult(
             minimum_valid=minuit.valid,
             execution_time=end_time - start_time,
diff --git a/src/tensorwaves/optimizer/scipy.py b/src/tensorwaves/optimizer/scipy.py
index c023cf26..c3c293a8 100644
--- a/src/tensorwaves/optimizer/scipy.py
+++ b/src/tensorwaves/optimizer/scipy.py
@@ -37,12 +37,11 @@ def __init__(
         self.__method = method
         self.__minimize_options = scipy_options

-    def optimize(  # pylint: disable=too-many-locals
+    def optimize(  # noqa: C901
         self,
         estimator: Estimator,
         initial_parameters: Mapping[str, ParameterValue],
     ) -> FitResult:
-        # pylint: disable=import-outside-toplevel
         try:
             from scipy.optimize import minimize
         except ImportError:  # pragma: no cover
diff --git a/tests/.pydocstyle b/tests/.pydocstyle
deleted file mode 100644
index 26d0703b..00000000
--- a/tests/.pydocstyle
+++ /dev/null
@@ -1,4 +0,0 @@
-; ignore all pydocstyle errors in this folder
-
-[pydocstyle]
-add_ignore = D
diff --git a/tests/conftest.py b/tests/conftest.py
index 390f4d58..9f24eba2 100644
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -10,7 +10,6 @@

 @pytest.fixture(scope="session")
 def pdg() -> "ParticleCollection":
-    # pylint: disable=import-outside-toplevel
     from qrules.particle import load_pdg

     return load_pdg()
diff --git a/tests/data/test_data.py b/tests/data/test_data.py
index a536cab9..0c6ca3d6 100644
--- a/tests/data/test_data.py
+++ b/tests/data/test_data.py
@@ -1,4 +1,3 @@
-# pylint: disable=import-outside-toplevel
 from __future__ import annotations

 from typing import TYPE_CHECKING
@@ -6,9 +5,6 @@
 import numpy as np
 import pytest

-from tensorwaves.data import (
-    _generate_without_progress_bar,  # pyright: ignore[reportPrivateUsage]
-)
 from tensorwaves.data import (
     IdentityTransformer,
     IntensityDistributionGenerator,
     NumpyUniformRNG,
     TFPhaseSpaceGenerator,
     TFUniformRealNumberGenerator,
+    _generate_without_progress_bar,  # pyright: ignore[reportPrivateUsage]
     finalize_progress_bar,
 )
 from tensorwaves.function.sympy import create_function
diff --git a/tests/data/test_phasespace.py b/tests/data/test_phasespace.py
index 44492058..3027e496 100644
--- a/tests/data/test_phasespace.py
+++ b/tests/data/test_phasespace.py
@@ -117,7 +117,7 @@ def test_generate(
         phsp_momenta = phsp_generator.generate(sample_size, rng)
         assert set(phsp_momenta) == set(expected_sample)
         n_events = len(next(iter(expected_sample.values())))
-        for i in expected_sample:  # pylint: disable=consider-using-dict-items
+        for i in expected_sample:
             expected_momenta = expected_sample[i]
             momenta = phsp_momenta[i]
             assert len(expected_momenta) == n_events
@@ -152,8 +152,8 @@ def test_generate_deterministic(self, pdg: "ParticleCollection"):
         assert list(phsp_momenta) == ["weights", "p0", "p1", "p2"]
         weights = phsp_momenta.get("weights", [])  # type: ignore[var-annotated]
         del phsp_momenta["weights"]
-        print("Expected values, get by running pytest with the -s flag")
-        pprint(
+        print("Expected values, get by running pytest with the -s flag")  # noqa: T201
+        pprint(  # noqa: T203
             {
                 i: np.round(four_momenta, decimals=10).tolist()
                 for i, four_momenta in phsp_momenta.items()
@@ -184,7 +184,7 @@ def test_generate_deterministic(self, pdg: "ParticleCollection"):
         }
         n_events = len(next(iter(expected_sample.values())))
         assert set(phsp_momenta) == set(expected_sample)
-        for i in expected_sample:  # pylint: disable=consider-using-dict-items
+        for i in expected_sample:
             expected_momenta = expected_sample[i]
             momenta = phsp_momenta[i]
             assert len(expected_momenta) == n_events
diff --git a/tests/data/test_rng.py b/tests/data/test_rng.py
index ac7f1687..c558fc89 100644
--- a/tests/data/test_rng.py
+++ b/tests/data/test_rng.py
@@ -1,10 +1,10 @@
-# pylint:disable=import-outside-toplevel
 import pytest

 from tensorwaves.data.rng import (
+    NumpyUniformRNG,
+    TFUniformRealNumberGenerator,
     _get_tensorflow_rng,  # pyright: ignore[reportPrivateUsage]
 )
-from tensorwaves.data.rng import NumpyUniformRNG, TFUniformRealNumberGenerator


 class TestNumpyUniformRNG:
diff --git a/tests/data/test_transform.py b/tests/data/test_transform.py
index d0b56369..7b08f7f9 100644
--- a/tests/data/test_transform.py
+++ b/tests/data/test_transform.py
@@ -22,7 +22,7 @@ def test_identity_chain(self, extend: bool):
         rng = np.random.default_rng(seed=0)
         data = {"x": rng.uniform(size=100), "y": rng.uniform(size=100)}
         transformed_data = chained_transform(data)
-        for key in data:  # pylint: disable=consider-using-dict-items
+        for key in data:
             np.testing.assert_allclose(data[key], transformed_data[key], rtol=1e-13)
         if extend:
             assert set(transformed_data) == {"x", "y", "v", "w"}
diff --git a/tests/function/test_ampform.py b/tests/function/test_ampform.py
index fa990ad8..a7e8c794 100644
--- a/tests/function/test_ampform.py
+++ b/tests/function/test_ampform.py
@@ -1,4 +1,3 @@
-# pylint: disable=import-outside-toplevel
 import numpy as np
 import pytest
diff --git a/tests/function/test_backend.py b/tests/function/test_backend.py
index 4055e624..97c377fd 100644
--- a/tests/function/test_backend.py
+++ b/tests/function/test_backend.py
@@ -1,4 +1,3 @@
-# pylint: disable=import-error, import-outside-toplevel
 from tensorwaves.function._backend import find_function
diff --git a/tests/function/test_function.py b/tests/function/test_function.py
index 412a2281..7bca3c3c 100644
--- a/tests/function/test_function.py
+++ b/tests/function/test_function.py
@@ -1,4 +1,3 @@
-# pylint: disable=redefined-outer-name
 from textwrap import dedent

 import numpy as np
diff --git a/tests/function/test_sympy.py b/tests/function/test_sympy.py
index 4143ea92..b61c7756 100644
--- a/tests/function/test_sympy.py
+++ b/tests/function/test_sympy.py
@@ -1,5 +1,3 @@
-# cspell:ignore lambdifygenerated
-# pylint: disable=redefined-outer-name
 from __future__ import annotations

 import logging
@@ -10,10 +8,9 @@
 import pytest
 import sympy as sp

+# pyright: ignore[reportPrivateUsage]
 from tensorwaves.function.sympy import (
     _collect_constant_sub_expressions,  # pyright: ignore[reportPrivateUsage]
-)
-from tensorwaves.function.sympy import (
     create_function,
     extract_constant_sub_expressions,
     fast_lambdify,
@@ -117,6 +114,7 @@ def test_fast_lambdify(backend: str, max_complexity: int, use_cse: bool):
     if 0 < max_complexity <= 4:
         repr_start = ""
     else:
+        # cspell:ignore lambdifygenerated
         repr_start = " None:
     def __call__(self, parameters: Mapping[str, ParameterValue]) -> ParameterValue:
         x = parameters["x"]
-        y = parameters["y"]  # pylint: disable=invalid-name
+        y = parameters["y"]
         return self.__a * x * x - self.__b * x * y + self.__c * y

     def true_gradient(
diff --git a/tests/optimizer/test_minuit.py b/tests/optimizer/test_minuit.py
index c8ed3027..d568b64a 100644
--- a/tests/optimizer/test_minuit.py
+++ b/tests/optimizer/test_minuit.py
@@ -1,16 +1,17 @@
-# pylint: disable=unsubscriptable-object
 from __future__ import annotations

-from typing import Callable, Mapping
+from typing import TYPE_CHECKING, Callable, Mapping

 import pytest
-from pytest_mock import MockerFixture

 from tensorwaves.interface import Estimator, ParameterValue
 from tensorwaves.optimizer.minuit import Minuit2

 from . import CallbackMock, assert_invocations

+if TYPE_CHECKING:
+    from pytest_mock import MockerFixture
+

 class Polynomial1DMinimaEstimator(Estimator):
     def __init__(self, polynomial: Callable) -> None:
diff --git a/tests/optimizer/test_parameter.py b/tests/optimizer/test_parameter.py
index 35d97e3c..4c1ec2be 100644
--- a/tests/optimizer/test_parameter.py
+++ b/tests/optimizer/test_parameter.py
@@ -1,4 +1,3 @@
-# pylint: disable=redefined-outer-name
 import pytest

 from tensorwaves.optimizer._parameter import ParameterFlattener
diff --git a/tests/optimizer/test_scipy.py b/tests/optimizer/test_scipy.py
index 92355993..c7116b86 100644
--- a/tests/optimizer/test_scipy.py
+++ b/tests/optimizer/test_scipy.py
@@ -1,15 +1,17 @@
 from __future__ import annotations

-from typing import Callable, Mapping
+from typing import TYPE_CHECKING, Callable, Mapping

 import pytest
-from pytest_mock import MockerFixture

 from tensorwaves.interface import Estimator, ParameterValue
 from tensorwaves.optimizer.scipy import ScipyMinimizer

 from . import CallbackMock, assert_invocations

+if TYPE_CHECKING:
+    from pytest_mock import MockerFixture
+

 class Polynomial1DMinimaEstimator(Estimator):
     def __init__(self, polynomial: Callable) -> None:
diff --git a/tests/test_estimator.py b/tests/test_estimator.py
index 935be3bd..6fe2f42a 100644
--- a/tests/test_estimator.py
+++ b/tests/test_estimator.py
@@ -1,8 +1,7 @@
-# pylint: disable=invalid-name import-error redefined-outer-name
-# pylint: disable=invalid-name too-many-locals unsubscriptable-object
 from __future__ import annotations

 import math
+from typing import TYPE_CHECKING

 import numpy as np
 import pytest
@@ -13,9 +12,11 @@
 from tensorwaves.estimator import ChiSquared, UnbinnedNLL, create_cached_function
 from tensorwaves.function import ParametrizedBackendFunction, PositionalArgumentFunction
 from tensorwaves.function.sympy import create_parametrized_function
-from tensorwaves.interface import DataSample, ParameterValue
 from tensorwaves.optimizer.minuit import Minuit2

+if TYPE_CHECKING:
+    from tensorwaves.interface import DataSample, ParameterValue
+

 class TestChiSquared:
     @pytest.mark.parametrize("backend", ["jax", "numpy", "tensorflow"])
diff --git a/tests/test_interface.py b/tests/test_interface.py
index 56d55d72..74869737 100644
--- a/tests/test_interface.py
+++ b/tests/test_interface.py
@@ -1,4 +1,3 @@
-# pylint: disable=eval-used
 import pytest

 from IPython.lib.pretty import pretty
diff --git a/typings/.pydocstyle b/typings/.pydocstyle
deleted file mode 100644
index 26d0703b..00000000
--- a/typings/.pydocstyle
+++ /dev/null
@@ -1,4 +0,0 @@
-; ignore all pydocstyle errors in this folder
-
-[pydocstyle]
-add_ignore = D

From d4abb887cf81d7116ea4314e8373a97a431e8ab2 Mon Sep 17 00:00:00 2001
From: Remco de Boer <29308176+redeboer@users.noreply.github.com>
Date: Fri, 7 Jul 2023 03:24:22 +0200
Subject: [PATCH 3/3] MAINT: verify installation on Python 3.11 (#484)

* MAINT: ignore deprecation error in pytest

* MAINT: update pip constraints and pre-commit

---------

Co-authored-by: GitHub
---
 .constraints/py3.11.txt | 239 ++++++++++++++++++++++++++++++++++++++++
 pyproject.toml          |   2 +
 setup.cfg               |   1 +
 3 files changed, 242 insertions(+)
 create mode 100644 .constraints/py3.11.txt

diff --git a/.constraints/py3.11.txt b/.constraints/py3.11.txt
new file mode 100644
index 00000000..1360fe1d
--- /dev/null
+++ b/.constraints/py3.11.txt
@@ -0,0 +1,239 @@
+#
+# This file is autogenerated by pip-compile with Python 3.11
+# by the following command:
+#
+#    pip-compile --extra=dev --no-annotate --output-file=.constraints/py3.11.txt --strip-extras setup.py
+#
+absl-py==1.4.0
+accessible-pygments==0.0.4
+alabaster==0.7.13
+ampform==0.14.6
+anyio==3.7.1
+argon2-cffi==21.3.0
+argon2-cffi-bindings==21.2.0
+arrow==1.2.3
+asttokens==2.2.1
+astunparse==1.6.3
+async-lru==2.0.2
+attrs==23.1.0
+babel==2.12.1
+backcall==0.2.0
+beautifulsoup4==4.12.2
+black==23.3.0
+bleach==6.0.0
+cachetools==5.3.1
+certifi==2023.5.7
+cffi==1.15.1
+cfgv==3.3.1
+chardet==5.1.0
+charset-normalizer==3.1.0
+click==8.1.4
+cloudpickle==2.2.1
+colorama==0.4.6
+comm==0.1.3
+contourpy==1.1.0
+coverage==7.2.7
+cycler==0.11.0
+debugpy==1.6.7
+decorator==5.1.1
+defusedxml==0.7.1
+deprecated==1.2.14
+distlib==0.3.6
+dm-tree==0.1.8
+docutils==0.19
+execnet==1.9.0
+executing==1.2.0
+fastjsonschema==2.17.1
+filelock==3.12.2
+flatbuffers==23.5.26
+fonttools==4.40.0
+fqdn==1.5.1
+gast==0.4.0
+google-auth==2.21.0
+google-auth-oauthlib==1.0.0
+google-pasta==0.2.0
+graphviz==0.20.1
+greenlet==2.0.2
+grpcio==1.56.0
+h5py==3.9.0
+hepunits==2.3.2
+identify==2.5.24
+idna==3.4
+imagesize==1.4.1
+iminuit==2.22.0
+importlib-metadata==6.7.0
+iniconfig==2.0.0
+ipykernel==6.24.0
+ipympl==0.9.3
+ipython==8.14.0
+ipython-genutils==0.2.0
+ipywidgets==8.0.7
+isoduration==20.11.0
+jax==0.4.13
+jaxlib==0.4.13
+jedi==0.18.2
+jinja2==3.1.2
+json5==0.9.14
+jsonpointer==2.4
+jsonschema==4.18.0
+jsonschema-specifications==2023.6.1
+jupyter==1.0.0
+jupyter-cache==0.6.1
+jupyter-client==8.3.0
+jupyter-console==6.6.3
+jupyter-core==5.3.1
+jupyter-events==0.6.3
+jupyter-lsp==2.2.0
+jupyter-server==2.7.0
+jupyter-server-terminals==0.4.4
+jupyterlab==4.0.2
+jupyterlab-code-formatter==2.2.1
+jupyterlab-myst==2.0.1
+jupyterlab-pygments==0.2.2
+jupyterlab-server==2.23.0
+jupyterlab-widgets==3.0.8
+keras==2.13.1
+kiwisolver==1.4.4
+libclang==16.0.0
+livereload==2.6.3
+llvmlite==0.40.1
+markdown==3.4.3
+markdown-it-py==2.2.0
+markupsafe==2.1.3
+matplotlib==3.7.2
+matplotlib-inline==0.1.6
+mdit-py-plugins==0.3.5
+mdurl==0.1.2
+mistune==3.0.1
+ml-dtypes==0.2.0
+mpmath==1.3.0
+mypy==1.4.1
+mypy-extensions==1.0.0
+myst-nb==0.17.2
+myst-parser==0.18.1
+nbclassic==1.0.0
+nbclient==0.6.8
+nbconvert==7.6.0
+nbformat==5.9.0
+nbmake==1.4.1
+nest-asyncio==1.5.6
+nodeenv==1.8.0
+notebook==6.5.4
+notebook-shim==0.2.3
+numba==0.57.1
+numpy==1.24.3
+oauthlib==3.2.2
+opt-einsum==3.3.0
+overrides==7.3.1
+packaging==23.1
+pandas==2.0.3
+pandocfilters==1.5.0
+parso==0.8.3
+particle==0.23.0
+pathspec==0.11.1
+pexpect==4.8.0
+phasespace==1.8.0
+pickleshare==0.7.5
+pillow==10.0.0
+platformdirs==3.8.1
+pluggy==1.2.0
+pre-commit==3.3.3
+prometheus-client==0.17.0
+prompt-toolkit==3.0.39
+protobuf==4.23.4
+psutil==5.9.5
+ptyprocess==0.7.0
+pure-eval==0.2.2
+py-cpuinfo==9.0.0
+pyasn1==0.5.0
+pyasn1-modules==0.3.0
+pycparser==2.21
+pydantic==1.10.11
+pydata-sphinx-theme==0.13.3
+pygments==2.15.1
+pyparsing==3.0.9
+pyproject-api==1.5.3
+pytest==7.4.0
+pytest-benchmark==4.0.0
+pytest-cov==4.1.0
+pytest-mock==3.11.1
+pytest-xdist==3.3.1
+python-constraint==1.4.0
+python-dateutil==2.8.2
+python-json-logger==2.0.7
+pytz==2023.3
+pyyaml==6.0
+pyzmq==25.1.0
+qrules==0.9.8
+qtconsole==5.4.3
+qtpy==2.3.1
+referencing==0.29.1
+requests==2.31.0
+requests-oauthlib==1.3.1
+rfc3339-validator==0.1.4
+rfc3986-validator==0.1.1
+rpds-py==0.8.8
+rsa==4.9
+ruff==0.0.277
+scipy==1.11.1
+send2trash==1.8.2
+six==1.16.0
+sniffio==1.3.0
+snowballstemmer==2.2.0
+soupsieve==2.4.1
+sphinx==5.3.0
+sphinx-autobuild==2021.3.14
+sphinx-book-theme==1.0.1
+sphinx-codeautolink==0.15.0
+sphinx-comments==0.0.3
+sphinx-copybutton==0.5.2
+sphinx-design==0.4.1
+sphinx-thebe==0.2.1
+sphinx-togglebutton==0.3.2
+sphinxcontrib-applehelp==1.0.4
+sphinxcontrib-devhelp==1.0.2
+sphinxcontrib-htmlhelp==2.0.1
+sphinxcontrib-jsmath==1.0.1
+sphinxcontrib-qthelp==1.0.3
+sphinxcontrib-serializinghtml==1.1.5
+sphobjinv==2.3.1
+sqlalchemy==2.0.18
+stack-data==0.6.2
+sympy==1.12
+tabulate==0.9.0
+tensorboard==2.13.0
+tensorboard-data-server==0.7.1
+tensorflow==2.13.0
+tensorflow-estimator==2.13.0
+tensorflow-io-gcs-filesystem==0.32.0
+tensorflow-probability==0.18.0
+termcolor==2.3.0
+terminado==0.17.1
+tinycss2==1.2.1
+tornado==6.3.2
+tox==4.6.4
+tqdm==4.65.0
+traitlets==5.9.0
+types-docutils==0.20.0.1
+types-pkg-resources==0.1.3
+types-pyyaml==6.0.12.10
+types-requests==2.31.0.1
+types-setuptools==68.0.0.1
+types-urllib3==1.26.25.13
+typing-extensions==4.5.0
+tzdata==2023.3
+uri-template==1.3.0
+urllib3==1.26.16
+virtualenv==20.23.1
+wcwidth==0.2.6
+webcolors==1.13
+webencodings==0.5.1
+websocket-client==1.6.1
+werkzeug==2.3.6
+wheel==0.40.0
+widgetsnbextension==4.0.8
+wrapt==1.15.0
+zipp==3.15.0
+
+# The following packages are considered to be unsafe in a requirements file:
+# setuptools
diff --git a/pyproject.toml b/pyproject.toml
index 3c8c81b1..2eaee7f8 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -26,6 +26,7 @@ include = '\.pyi?$'
 preview = true
 target-version = [
     "py310",
+    "py311",
     "py37",
     "py38",
     "py39",
@@ -158,6 +159,7 @@ filterwarnings = [
     "ignore:divide by zero encountered in divide:RuntimeWarning",
     "ignore:divide by zero encountered in true_divide:RuntimeWarning",
     "ignore:invalid value encountered in .*:RuntimeWarning",
+    "ignore:module 'sre_constants' is deprecated:DeprecationWarning",
     "ignore:numpy.ufunc size changed, may indicate binary incompatibility.*:RuntimeWarning",
     "ignore:unclosed .*:ResourceWarning",
 ]
diff --git a/setup.cfg b/setup.cfg
index f1884775..ba3600d0 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -33,6 +33,7 @@ classifiers =
     Programming Language :: Python :: 3.8
     Programming Language :: Python :: 3.9
     Programming Language :: Python :: 3.10
+    Programming Language :: Python :: 3.11
     Topic :: Scientific/Engineering
     Topic :: Scientific/Engineering :: Physics
     Typing :: Typed
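
To try out the Python 3.11 support added by this last patch, the pinned constraints can be installed and a small smoke test run against the public API. The following sketch is illustrative only and not part of the patch; it assumes a local checkout of the repository and uses only `create_function`, which the test modules above already import from `tensorwaves.function.sympy`:

    # smoke_test_py311.py -- illustrative sketch, not part of this patch.
    # Assumed install workflow with the new constraint file:
    #   pip install -c .constraints/py3.11.txt -e .[dev]
    import sys

    import numpy as np
    import sympy as sp

    from tensorwaves.function.sympy import create_function

    # Guard: the new constraint file is resolved for Python 3.11 only.
    assert sys.version_info[:2] == (3, 11)

    # Lambdify a trivial SymPy expression to the NumPy backend and evaluate it.
    x, y = sp.symbols("x y")
    function = create_function(x**2 + y, backend="numpy")
    data = {"x": np.array([1.0, 2.0]), "y": np.array([0.5, 0.5])}
    print(function(data))  # expected output: [1.5 4.5]

If this snippet runs, the pins in `.constraints/py3.11.txt` are consistent with the `Programming Language :: Python :: 3.11` classifier declared in `setup.cfg` and with the new `py311` target in `pyproject.toml`.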