Merge pull request #104 from ArnoStrouwen/master
various doc and style improvements
ChrisRackauckas authored Feb 20, 2023
2 parents 0bdb0c9 + cb63711 commit 5852abb
Showing 22 changed files with 79 additions and 69 deletions.
12 changes: 11 additions & 1 deletion docs/make.jl
@@ -8,12 +8,22 @@ cp("./docs/Project.toml", "./docs/src/assets/Project.toml", force = true)
include("pages.jl")

makedocs(sitename = "PolyChaos.jl",
clean = true, doctest = false, linkcheck = true,
linkcheck_ignore = [
"https://www.sciencedirect.com/science/article/pii/S235246771830105X",
],
strict = [
:doctest,
:linkcheck,
:parse_error,
# Other available options are
# :autodocs_block, :cross_references, :docs_block, :eval_block, :example_block, :footnote, :meta_block, :missing_docs, :setup_block
],
format = Documenter.HTML(analytics = "UA-90474609-3",
assets = ["assets/favicon.ico"],
canonical = "https://docs.sciml.ai/PolyChaos/stable/"),
modules = [PolyChaos],
authors = "tillmann.muehlpfordt@kit.edu",
doctest = false,
pages = pages)

deploydocs(repo = "github.com/SciML/PolyChaos.jl.git",
10 changes: 5 additions & 5 deletions docs/src/DCsOPF.md
@@ -52,7 +52,7 @@ We formalize the numbering of the generators (superscript $g$), loads (superscri
\mathcal{N}^g = \{ 1, 3\}, \, \mathcal{N}^d = \{ 2, 4\}, \, \mathcal{N}^{br} = \{ 1, 2, 3, 4, 5 \}.
```

With each generator we associate a linear cost with cost coefficient $c_i$ for all $i \in \mathcal{N}^g$.
With each, we associate a linear cost with cost coefficient $c_i$ for all $i \in \mathcal{N}^g$.
Each generator must adhere to its engineering limits given by $(\underline{p}_i^g , \overline{p}_i^g )$ for all $i \in \mathcal{N}^g$.
Also, each line is constrained by its limits $(\underline{p}_i^{br}, \overline{p}_i^{br})$ for all $i \in \mathcal{N}^{br}$.

@@ -63,7 +63,7 @@ We concisely write
\mathsf{p}_i^d \sim \mathsf{U}(\mu_i, \sigma_i) \quad \forall i \in \mathcal{N}^d.
```

For simplicity we consider DC conditions.
For simplicity, we consider DC conditions.
Hence, energy balance reads

```math
@@ -98,7 +98,7 @@ using PolyChaos, JuMP, MosekTools, LinearAlgebra
```

Let's define system-specific quantities such as the incidence matrix and the branch flow parameters.
From these we can compute the PTDF matrix $\Psi$ (assuming the slack is at bus 1).
From these, we can compute the PTDF matrix $\Psi$ (assuming the slack is at bus 1).

```@example mysetup
A = [-1 1 0 0; -1 0 1 0; -1 0 0 1; 0 1 -1 0; 0 0 -1 1] # incidence matrix
@@ -136,7 +136,7 @@ d[2, [1, 3]] = convert2affinePCE(2.0, 0.2, mop.uni[2], kind = "μσ")
```
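As a quick sanity check on the affine load PCE above, here is a minimal sketch (an editorial aside, not part of this changeset) that builds a single uniform load with `convert2affinePCE` and reads off its first two moments. Treating the pair `(2.0, 0.2)` as mean and standard deviation follows the `kind = "μσ"` keyword used in the snippet and is an assumption here.

```julia
using PolyChaos

op = Uniform01OrthoPoly(1)                          # a degree-1 basis is enough for an affine PCE
d1 = convert2affinePCE(2.0, 0.2, op, kind = "μσ")   # mirrors the call in the diff for a single load
mean(d1, op), var(d1, op)                           # expected ≈ (2.0, 0.04) under that reading
```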

Now, let's put it all into an optimization problem, specifically a second-order cone program.
To build the second-order cone constraints we define a helper function `buildSOC`.
To build the second-order cone constraints, we define a helper function `buildSOC`.

```@example mysetup
function buildSOC(x::Vector, mop::MultiOrthoPoly)
@@ -181,7 +181,7 @@ For instance, we can look at the moments of the generated power:
p_moments = [[mean(psol[i, :], mop) var(psol[i, :], mop)] for i in 1:Ng]
```

Simiarly, we can study the moments for the branch flows:
Similarly, we can study the moments for the branch flows:

```@example mysetup
pbr_moments = [[mean(plsol[i, :], mop) var(plsol[i, :], mop)] for i in 1:Nl]
4 changes: 2 additions & 2 deletions docs/src/TypeHierarchy.ipynb
@@ -316,10 +316,10 @@
"$$\n",
"If the integrand $f$ is polynomial, then the specific Gauss quadrature rules possess the remarkable property that an $n$-point quadrature rule can integrate polynomial integrands $f$ of degree at most $2n-1$ *exactly*; no approximation error is made.\n",
"\n",
"For common measures, `PolyChaos` resorts to the package [`FastGaussQuadrature`](https://github.com/ajt60gaibb/FastGaussQuadrature.jl/)\n",
"For common measures, `PolyChaos` resorts to the package [`FastGaussQuadrature`](https://github.com/JuliaApproximation/FastGaussQuadrature.jl)\n",
"\n",
"!!! note\n",
" The compilation time of `FastGaussQuadrature` is currently extremely slow, [see here](https://github.com/ajt60gaibb/FastGaussQuadrature.jl/issues/47)."
" The compilation time of `FastGaussQuadrature` is currently extremely slow, [see here](https://github.com/JuliaApproximation/FastGaussQuadrature.jl/issues/47)."
]
},
{
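As a minimal numerical check of the 2n-1 exactness property quoted in the cell above, the following sketch uses `gausslegendre` from FastGaussQuadrature, the package the notebook points to; it is illustrative only.

```julia
using FastGaussQuadrature, LinearAlgebra

n = 3
nodes, weights = gausslegendre(n)   # 3-point Gauss-Legendre rule on [-1, 1]
f(t) = t^4                          # degree 4 <= 2n - 1 = 5, so the rule should be exact
quad = dot(weights, f.(nodes))
exact = 2 / 5                       # closed-form value of the integral of t^4 over [-1, 1]
abs(quad - exact)                   # ≈ 0 up to floating-point round-off
```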
4 changes: 2 additions & 2 deletions docs/src/chi_squared_k1.md
@@ -67,7 +67,7 @@ y_m \langle \phi_m, \phi_m \rangle = \sum_{i=0}^n \sum_{j=0}^n x_i x_j \langle \
```

Hence, knowing the scalars $\langle \phi_m, \phi_m \rangle$, and $\langle \phi_i \phi_j, \phi_m \rangle$, the PCE coefficients $y_k$ can be obtained immediately.
From the PCE coefficients we can get the moments and compare them to the closed-form expressions.
From the PCE coefficients, we can get the moments and compare them to the closed-form expressions.

__Notice:__ A maximum degree of 2 suffices to get the *exact* solution with PCE.
In other words, increasing the maximum degree to values greater than 2 introduces nothing but computational overhead (and numerical errors, possibly).
@@ -142,7 +142,7 @@ print("\t\t\terror = $(moms_analytic(k)[3]-myskew(y))\n")
Let's plot the probability density function to compare results.
We first draw samples from the measure with the help of `sampleMeasure()`, and then evaluate the basis at these samples and multiply times the PCE coefficients.
The latter step is done using `evaluatePCE()`.
Finally, we compare the result agains the analytical PDF $\rho(t) = \frac{\mathrm{e}^{-0.5t}}{\sqrt{2 t} \, \Gamma(0.5)}$ of the chi-squared distribution with one degree of freedom.
Finally, we compare the result against the analytical PDF $\rho(t) = \frac{\mathrm{e}^{-0.5t}}{\sqrt{2 t} \, \Gamma(0.5)}$ of the chi-squared distribution with one degree of freedom.
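For readers skimming the diff, here is a compressed sketch of that sampling workflow (added for illustration); the coefficient vector is hypothetical, and the argument order (coefficients, samples, basis) is assumed from the surrounding tutorial.

```julia
using PolyChaos

op = GaussOrthoPoly(2)              # degree 2, which the tutorial notes is sufficient here
y  = [1.0, 0.0, 1.0]                # illustrative PCE coefficients of ξ² in the monic basis
ξ  = sampleMeasure(1000, op)        # draw samples from the underlying Gaussian measure
y_samples = evaluatePCE(y, ξ, op)   # evaluate the basis at the samples and weight by y
# samplePCE(1000, y, op) combines the two steps, as the text mentions.
```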

```@example mysetup
using Plots
4 changes: 2 additions & 2 deletions docs/src/chi_squared_k_greater1.md
@@ -66,7 +66,7 @@ y_m \langle \phi_m, \phi_m \rangle = \sum_{i=1}^k \sum_{j_1=0}^n \sum_{j_2=0}^n
```

Hence, knowing the scalars $\langle \phi_m, \phi_m \rangle$, and $\langle \phi_{j_1} \phi_{j_2}, \phi_m \rangle$, the PCE coefficients $y_k$ can be obtained immediately.
From the PCE coefficients we can get the moments and compare them to the closed-form expressions.
From the PCE coefficients, we can get the moments and compare them to the closed-form expressions.

__Notice:__ A maximum degree of 2 suffices to get the *exact* solution with PCE.
In other words, increasing the maximum degree to values greater than 2 introduces nothing but computational overhead (and numerical errors, possibly).
@@ -138,7 +138,7 @@ Let's plot the probability density function to compare results.
We first draw samples from the measure with the help of `sampleMeasure()`, and then evaluate the basis at these samples and multiply times the PCE coefficients.
The latter step is done using `evaluatePCE()`.
Both steps are combined in the function `samplePCE()`.
Finally, we compare the result agains the analytical PDF $\rho(t) = \frac{t^{t/2-1}\mathrm{e}^{-t/2}}{2^{k/2} \, \Gamma(k/2)}$ of the chi-squared distribution with one degree of freedom.
Finally, we compare the result against the analytical PDF $\rho(t) = \frac{t^{t/2-1}\mathrm{e}^{-t/2}}{2^{k/2} \, \Gamma(k/2)}$ of the chi-squared distribution with one degree of freedom.

```@example mysetup
using Plots
6 changes: 3 additions & 3 deletions docs/src/functions.md
@@ -3,7 +3,7 @@
!!! note

The core interface of all essential functions are *not* dependent on specialized types such as `AbstractOrthoPoly`.
Having said that, for exactly those essential functions there exist overloaded functions that accept specialized types such as `AbstractOrthoPoly` as arguments.
Having said that, for exactly those essential functions, there exist overloaded functions that accept specialized types such as `AbstractOrthoPoly` as arguments.

Too abstract?
For example, the function `evaluate` that evaluates a polynomial of degree `n` at points `x` has the core interface
@@ -19,7 +19,7 @@
evaluate(n::Int64,x::Vector{<:Real},op::AbstractOrthoPoly)
```

So fret not upon the encounter of multiply-dispatched versions of the same thing. It's there to simplify your life.
So fret not upon the encounter of multiple-dispatched versions of the same thing. It's there to simplify your life.

The idea of this approach is to make it simpler for others to copy and paste code snippets and use them in their own work.
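To make the two call styles concrete, here is a small sketch (illustrative only); it assumes the recurrence coefficients are stored in the fields `op.α` and `op.β`, which is how the core interface would receive them.

```julia
using PolyChaos

op = GaussOrthoPoly(4)
x  = [-1.0, 0.0, 1.0]

v_core = evaluate(3, x, op.α, op.β)   # core interface: recurrence coefficients passed explicitly
v_type = evaluate(3, x, op)           # overloaded method dispatching on the specialized type
v_core == v_type                      # both calls should agree
```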

Expand Down Expand Up @@ -56,7 +56,7 @@ mcdiscretization

## Show Orthogonal Polynomials

To get a human-readable output of the orthognoal polynomials there is the function `showpoly`
To get a human-readable output of the orthogonal polynomials, there is the function `showpoly`

```@docs
showpoly
16 changes: 8 additions & 8 deletions docs/src/index.md
@@ -4,8 +4,8 @@ using PolyChaos

# Overview

PolyChaos is a collection of numerical routines for orthogonal polynomials written in the [Julia](https://julialang.org/) programming language.
Starting from some non-negative weight (aka an absolutely continuous nonnegative measure), PolyChaos allows
PolyChaos is a collection of numerical routines for orthogonal polynomials, written in the [Julia](https://julialang.org/) programming language.
Starting from some non-negative weight (aka an absolutely continuous non-negative measure), PolyChaos allows

- to compute the coefficients for the monic three-term recurrence relation,
- to evaluate the orthogonal polynomials at arbitrary points,
@@ -20,22 +20,22 @@ These routines allow
- to compute moments,
- to compute the tensors of scalar products.

PolyChaos contains several *canonical* orthogonal polynomials such as Jacobi or Hermite polynomials.
PolyChaos contains several *canonical* orthogonal polynomials, such as Jacobi or Hermite polynomials.
For these, closed-form expressions and state-of-the art quadrature rules are used whenever possible.
However, a cornerstone principle of PolyChaos is to provide all the functionality for user-specific, arbitrary weights.

!!! note

What PolyChaos is not (at least currently):

- a self-contained introduction to orthogonal polynomials, quadrature rules and/or polynomial chaos expansions. We assume the user brings some experience to the table. However, over time we will focus on strengthening the tutorial charater of the package.
- a self-contained introduction to orthogonal polynomials, quadrature rules and polynomial chaos expansions. We assume the user brings some experience to the table. However, over time we will focus on strengthening the tutorial character of the package.
- a symbolic toolbox
- a replacement for [FastGaussQuadrature.jl](https://github.com/ajt60gaibb/FastGaussQuadrature.jl)
- a replacement for [FastGaussQuadrature.jl](https://github.com/JuliaApproximation/FastGaussQuadrature.jl)

## Installation

The package requires `Julia 1.3` or newer.
In `Julia` switch to the package manager
To install PolyChaos.jl, use the Julia package manager:

```julia
using Pkg
@@ -90,7 +90,7 @@ In case you are unfamiliar with orthogonal polynomials, perhaps [this background

The code base of `PolyChaos` is partially based on Walter Gautschi's [Matlab suite of programs for generating orthogonal polynomials and related quadrature rules](https://www.cs.purdue.edu/archives/2002/wxg/codes/OPQ.html), with much of the theory presented in his book *Orthogonal Polynomials: Computation and Approximation* published in 2004 by the Oxford University Press.

For the theory of polynomial chaos expansion we mainly consulted T. J. Sullivan. *Introduction to Uncertainty Quantification*. Springer International Publishing Switzerland. 2015.
For the theory of polynomial chaos expansion, we mainly consulted T. J. Sullivan. *Introduction to Uncertainty Quantification*. Springer International Publishing Switzerland. 2015.

## Contributing

@@ -129,7 +129,7 @@ archivePrefix = {arXiv},
}
```

Of course you are more than welcome to partake in GitHub's gamification: starring and forking is much appreciated.
Of course, you are more than welcome to partake in GitHub's gamification: starring and forking is much appreciated.

Enjoy.

6 changes: 3 additions & 3 deletions docs/src/math.md
@@ -89,7 +89,7 @@ Theorem 1.17 states that: If $\mathrm{d} \lambda$ is symmetric, then
\pi_k(-t; \mathrm{d} \lambda) = (-1)^k \pi_k(t; \mathrm{d} \lambda), \quad k=0,1, \dots,
```

hence the parity of $k$ decides whether $\pi_k$ is even/odd.
hence, the parity of $k$ decides whether $\pi_k$ is even/odd.
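A one-line check of this parity statement for the (symmetric) Gaussian weight, sketched here with `evaluate`; this is an illustrative aside rather than part of the changeset.

```julia
using PolyChaos

op = GaussOrthoPoly(6)   # the Gaussian weight is symmetric about the origin
t  = 0.7
all(evaluate(k, [-t], op) ≈ (-1)^k .* evaluate(k, [t], op) for k in 0:6)
```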

!!! note

@@ -101,8 +101,8 @@ The fact that orthogonal polynomials can be represented in terms of a three-term
The importance of the three-term recurrence relation is difficult to overestimate. It provides

- efficient means of evaluating polynomials (and derivatives),
- zeros of orthogonal polynomials by means of a eigenvalues of a symmetric, tridiagonal matrix
- acces to quadrature rules,
- zeros of orthogonal polynomials by means of the eigenvalues of a symmetric, tridiagonal matrix
- access to quadrature rules,
- normalization constants to create orthonormal polynomials.
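The second bullet in the list above can be made concrete with a short sketch: build the symmetric, tridiagonal Jacobi matrix from the recurrence coefficients and take its eigenvalues. The field names `op.α` and `op.β` are assumptions here, and the snippet is illustrative only.

```julia
using PolyChaos, LinearAlgebra

op = GaussOrthoPoly(10)
n  = 5
α, β = op.α[1:n], op.β[1:n]
J = SymTridiagonal(α, sqrt.(β[2:end]))   # Jacobi matrix of order n
eigvals(J)                               # the n zeros of the degree-n orthogonal polynomial
```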

Theorem 1.27 states:
4 changes: 2 additions & 2 deletions docs/src/multiple_discretization.md
@@ -102,7 +102,7 @@ print("Gauss-Legendre error:\t$(abs(int_exact-int_gaussleg))\twith $N nodes")

Even worse!
Well, we can factor out $\frac{1}{\sqrt{1-t^2}}$, making the integral amenable to a Gauss-Chebyshev rule.
So, let's give it anothery try.
So, let's give it another try.

```@example mysetup
function quad_gausscheb(N, γ)
@@ -156,7 +156,7 @@ print("Discretization error:\t$(abs(int_exact-int_mc))\twith $(length(n)) nodes"
```

Et voilà, no error with fewer nodes.
(For this example, we'd need in fact just a single node.)
(For this example, we'd need just a single node.)

The function `mcdiscretization()` is able to construct the recurrence coefficients of the orthogonal polynomials relative to the weight $w$.
Let's inspect the values of the recurrence coefficients a little more.
8 changes: 4 additions & 4 deletions docs/src/numerical_integration.md
@@ -53,7 +53,7 @@ Uniform01Measure(PolyChaos.w_uniform01, (0.0, 1.0), true)
```

Next, we need to compute the quadrature rule relative to the uniform measure.
To do this we use the composite type `Quad`.
To do this, we use the composite type `Quad`.

```jldoctest mylabel
julia> quadRule1 = Quad(n - 1, measure)
@@ -72,7 +72,7 @@ julia> nw(quadRule1)

This creates a quadrature rule `quadRule_1` relative to the measure `measure`.
The function `nw()` prints the nodes and weights.
To solve the integral we call `integrate()`
To solve the integral, we call `integrate()`

```jldoctest mylabel
julia> variant1 = integrate(f, quadRule1)
@@ -108,7 +108,7 @@ julia> coeffs(op)
0.5 0.0631313
```

Now, the quadrature rule can be constructed based on `op`, and the integral be solved.
Now, the quadrature rule can be constructed based on `op`, and the integral to be solved.

```jldoctest mylabel
julia> quadRule2 = Quad(n, op)
@@ -141,4 +141,4 @@ julia> 1 - cos(1) .- [variant0 variant1 variant0_revisited]

with `variant0` and `variant0_revisited` being the same and more accurate than `variant1`.
The increased accuracy is based on the fact that for `variant0` and `variant0_revisited` the quadrature rules are based on the recursion coefficients of the underlying orthogonal polynomials.
The quadrature for `variant1` is based on an general-purpose method that can be significantly less accurate, see also [the next tutorial](@ref QuadratureRules).
The quadrature for `variant1` is based on a general-purpose method that can be significantly less accurate, see also [the next tutorial](@ref QuadratureRules).
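Condensing the `variant0`-style route into a single sketch (degree and node count chosen arbitrarily, for illustration):

```julia
using PolyChaos

f(t) = sin(t)
op = Uniform01OrthoPoly(5)   # uniform measure on [0, 1], as in this tutorial
q  = Quad(5, op)             # quadrature rule built from the recurrence coefficients
integrate(f, q)              # compare against the exact value 1 - cos(1)
```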
10 changes: 5 additions & 5 deletions docs/src/orthogonal_polynomials_canonical.md
@@ -1,7 +1,7 @@
# [Univariate Monic Orthogonal Polynomials](@id UnivariateMonicOrthogonalPolynomials)

Univariate monic orthogonal polynomials make up the core building block of the package.
These are real polynomials $\{ \pi_k \}_{k \geq 0}$, which are univariate $\pi_k: \mathbb{R} \rightarrow \mathbb{R}$ and orthogonal relative to a nonnegative weight function $w: \mathbb{R} \rightarrow \mathbb{R}_{\geq 0}$, and which have a leading coefficient equal to one:
These are real polynomials $\{ \pi_k \}_{k \geq 0}$, which are univariate $\pi_k: \mathbb{R} \rightarrow \mathbb{R}$ and orthogonal relative to a non-negative weight function $w: \mathbb{R} \rightarrow \mathbb{R}_{\geq 0}$, and which have a leading coefficient equal to one:

```math
\begin{aligned}
@@ -29,8 +29,8 @@ Hence, every system of $n$ univariate monic orthogonal polynomials $\{ \pi_k \}_
## Canonical Orthogonal Polynomials

The so-called *classical* or *canonical* orthogonal polynomials are polynomials named after famous mathematicians who each discovered a special family of orthogonal polynomials, for example [Hermite polynomials](https://en.wikipedia.org/wiki/Hermite_polynomials) or [Jacobi polynomials](https://en.wikipedia.org/wiki/Jacobi_polynomials).
For *classical* orthogonal polynomials there exist closed-form expressions of---among others---the recurrence coefficients.
Also quadrature rules for *classical* orthogonal polynomials are well-studied (with dedicated packages such as [FastGaussQuadrature.jl](https://github.com/ajt60gaibb/FastGaussQuadrature.jl).
For *classical* orthogonal polynomials, there exist closed-form expressions of---among others---the recurrence coefficients.
Also, quadrature rules for *classical* orthogonal polynomials are well-studied (with dedicated packages such as [FastGaussQuadrature.jl](https://github.com/JuliaApproximation/FastGaussQuadrature.jl)).
However, more often than not these *classical* orthogonal polynomials are neither monic nor orthogonal, hence not normalized in any sense.
For example, there is a distinction between the [*probabilists'* Hermite polynomials](https://en.wikipedia.org/wiki/Hermite_polynomials#Definition) and the [*physicists'* Hermite polynomials](https://en.wikipedia.org/wiki/Hermite_polynomials#Definition).
The difference is in the weight function $w(t)$ relative to which the polynomials are orthogonal:
@@ -151,7 +151,7 @@ julia> my_meas = Measure("my_meas", w, supp, false, Dict())
Measure("my_meas", w, (-1.0, 1.0), false, Dict{Any,Any}())
```

Notice: it is advisable to define the weight such that an error is thrown for arguments outside of the support.
Notice: it is advisable to define the weight such that an error is thrown for arguments outside the support.

Now, we want to construct the univariate monic orthogonal polynomials up to degree `deg` relative to `my_meas`.
The constructor is
@@ -175,7 +175,7 @@ symmetric: false
pars: Dict{Any,Any}()
```

By default, the recurrence coefficients are computed using the [Stieltjes procuedure](https://warwick.ac.uk/fac/sci/maths/research/grants/equip/grouplunch/1985Gautschi.pdf) with [Clenshaw-Curtis](https://en.wikipedia.org/wiki/Clenshaw%E2%80%93Curtis_quadrature) quadrature (with `Nquad` nodes and weights).
By default, the recurrence coefficients are computed using the [Stieltjes procedure](https://warwick.ac.uk/fac/sci/maths/research/grants/equip/grouplunch/1985Gautschi.pdf) with [Clenshaw-Curtis](https://en.wikipedia.org/wiki/Clenshaw%E2%80%93Curtis_quadrature) quadrature (with `Nquad` nodes and weights).
Hence, the choice of `Nquad` influences accuracy.
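Putting the pieces of this section together, here is a sketch of the arbitrary-weight workflow; the keyword `Nquad` is taken from the text above, while the exact positional form of the `OrthoPoly` constructor is an assumption.

```julia
using PolyChaos

supp = (-1.0, 1.0)
w(t) = supp[1] <= t <= supp[2] ? 1 + t^2 : error("argument outside support")
my_meas = Measure("my_meas", w, supp, false, Dict())
my_op = OrthoPoly("my_op", 4, my_meas; Nquad = 250)   # Stieltjes procedure with Clenshaw-Curtis quadrature
coeffs(my_op)                                         # recurrence coefficients of the monic basis
```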

## [Multivariate Monic Orthogonal Polynomials](@id MultivariateMonicOrthogonalPolynomials)