IN_PROGRESS: Updating optimization documentation.
pietercollins committed Oct 16, 2023
1 parent e2b37e7 commit 4b3f303
Showing 7 changed files with 81 additions and 71 deletions.
2 changes: 1 addition & 1 deletion doc/algebraic_equations.dox
@@ -31,7 +31,7 @@
\page algebraic_equations_page Algebraic Equations

This page describes methods for the rigorous numerical solution of algebraic equations.
For details on how this is implemented in Ariadne, see the \ref AlgebraicEquationSubModule documentation
For details on how this is implemented in %Ariadne, see the \ref AlgebraicEquationSubModule documentation.

Consider the system of nonlinear algebraic equations
\f[ f(x) = 0; \quad x\in D \f]
4 changes: 2 additions & 2 deletions doc/logic.dox
@@ -74,7 +74,7 @@ On computing yet more digits we might find \f$r_1=3.141592653589793\,238\cdots\f
Of course, if \f$r_1\neq r_2\f$, by computing enough digits, we will \e eventually be able to decide which of \f$r_1<r_2\f$ or \f$r_1>r_2\f$ is true, but if \f$r_1=r_2\f$, then no matter how many digits we compute, we will still never know which of \f$r_1 \lesseqgtr r_2\f$ holds.
In this case again the computation would run forever, and we would consider the value \a indeterminate.

The resulting logic is <em>%Kleenean</em> logic \f$\K=\{T,F,I\}\f$, with three values \a true \f$\top,\mathsc{T}\f$, \a false \f$\bot,\mathsc{F}\f$ and \a indeterminate \f$\uparrow,\mathsc{I}\f$.
The resulting logic is <em>%Kleenean</em> logic \f$\K=\{T,F,I\}\f$, with three values \a true \f$\top,\mathsf{T}\f$, \a false \f$\bot,\mathsf{F}\f$ and \a indeterminate \f$\uparrow,\mathsf{I}\f$.
The %Kleenean type represents the result of <em>quasidecidable</em> predicates.

A subtype of the %Kleeneans is given by the <em>%Sierpinskian</em> type \f$\S=\{\tru,\indt\}\f$ with open sets \f$\{\},\{\tru\},\{\tru,\indt\}\f$. The %Sierpinskian type represents the result of <em>verifiable</em> predicates.
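As a minimal illustration (a sketch only, independent of %Ariadne's actual Kleenean class), the strong three-valued connectives can be realised by ordering the values \f$\fls<\indt<\tru\f$ and taking conjunction and disjunction to be minimum and maximum respectively:
\code{.cpp}
// Sketch of strong three-valued (Kleene) logic; not Ariadne's Kleenean class.
#include <algorithm>

enum class K3 { False = 0, Indeterminate = 1, True = 2 };  // ordered False < Indeterminate < True

K3 neg(K3 p) { return static_cast<K3>(2 - static_cast<int>(p)); }  // swaps True and False, fixes Indeterminate
K3 conj(K3 p, K3 q) { return std::min(p, q); }                     // three-valued "and"
K3 disj(K3 p, K3 q) { return std::max(p, q); }                     // three-valued "or"
\endcode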
@@ -147,7 +147,7 @@ One may also think of \f$\S\f$ as "positive" %Kleeneans \f$\K^+\f$.
\section logical_operations Logical Operations

The standard logical operators \f$\neg,\wedge,\vee,\rightarrow,\leftrightarrow\f$ are all defined on \f$\K\f$.
They can be extracted from their %Boolean counterparts by considering a set-valued interpretation with \f$\mathsc{I}=\{\mathsc{T},\mathsc{F}\}\f$. Explicitly, the operators are given by
They can be extracted from their %Boolean counterparts by considering a set-valued interpretation with \f$\mathsf{I}=\{\mathsf{T},\mathsf{F}\}\f$. Explicitly, the operators are given by
\f[ \begin{array}[t]{|c|c|}\hline p&\!\neg{p}\!\\\hline \fls&\tru\\\indt&\indt\\\tru&\fls\\\hline \end{array} \qquad
\begin{array}[t]{|c|ccc|}\hline \!p \wedge q\!&\fls&\indt&\tru\\\hline \fls&\fls&\fls&\fls\\\indt&\fls&\indt&\indt\\\tru&\fls&\indt&\tru\\\hline\end{array} \quad
\begin{array}[t]{|c|ccc|}\hline \!p \vee q\!&\fls&\indt&\tru\\\hline \fls&\fls&\indt&\tru\\\indt&\indt&\indt&\tru\\\tru&\tru&\tru&\tru\\\hline\end{array} \quad
29 changes: 15 additions & 14 deletions doc/macros.js
@@ -17,23 +17,24 @@ MathJax.Hub.Config({
Y: "{\\mathbb{Y}}",
A: "{\\mathbb{A}}",
seq: ["{\\vec{#1}}",1],
dt: ["{\\dot{#1}}",1],
fto: "{\\longrightarrow}",
pfto: "{\\dashrightarrow}",
psfto: "{\\dashrightarrow}",
interval: ["{[#1]}",1],
ivl: ["{[#1]}",1],
dt: ["{\\dot{#1}}",1],
unl: ["{\\underline{#1}}",1],
ovl: ["{\\overline{#1}}",1],
der: ["{\\dot{#1}}",1],
dag: "{\\dagger}",

hatR: "{\\,\\widehat{\\!R}}",
hatX: "{\\,\\widehat{\\!X}}",
hatY: "{\\widehat{Y}}",

tru: "{\\mathsc{T}}",
fls: "{\\mathsc{F}}",
indt: "{\\mathsc{I}}",
unkn: "{\\mathsc{U}}",
tru: "{\\mathsf{T}}",
fls: "{\\mathsf{F}}",
indt: "{\\mathsf{I}}",
unkn: "{\\mathsf{U}}",

precless: "\\prec\\!<",
precprec: "\\prec\\!\\!\\!\\prec",
@@ -42,24 +43,24 @@ MathJax.Hub.Config({
succsucc: "\\succ\\!\\!\\!\\succ",
gtrsucc: ">\\!\\succ",

and: "\\wedge",
or: "\\vee",
// and: "\\wedge",
// or: "\\vee",

dom: "{\\mathrm{dom}}",

tand: "{\\text{ and }}",
tor: "{\\text{ or }}",
timplies: "{\\text{ implies }}",

twoheaddownarrow: "{\\!\\downarrow\\!\\!\\downarrow}",
twoheaduparrow: "{\\!\\uparrow\\!\\!\\uparrow}",

pfto: "{\\dashrightarrow}",
psfto: "{\\dashrightarrow}",
dag: "{\\dagger}",

dom: "{\\mathrm{dom}}",
twoheaddownarrow: "{\\!\\downarrow\\!\\!\\downarrow}",
twoheaduparrow: "{\\!\\uparrow\\!\\!\\uparrow}",

cline: ["",1],
textsc: ["{\\mathsc{#1}}",1],
mathsc: ["\\text{#1}",1],
textsc: ["{\\mathsc{#1}}",1],
}
}
});
35 changes: 18 additions & 17 deletions doc/macros.sty
@@ -1,12 +1,13 @@
\usepackage[top=25mm,left=25mm,bottom=25mm,right=25mm]{geometry}
%\usepackage[top=25mm,left=25mm,bottom=25mm,right=25mm]{geometry}

\usepackage{amssymb}

\newcommand{\interval}[1]{{[#1]}}
\newcommand{\ivl}[1]{{[#1]}}
\newcommand{\B}{\mathbb{B}}
\newcommand{\K}{\mathbb{K}}
\newcommand{\S}{\mathbb{S}}
\renewcommand{\S}{\mathbb{S}}
\newcommand{\N}{\mathbb{N}}
\newcommand{\Z}{\mathbb{Z}}
\newcommand{\Q}{\mathbb{Q}}
\newcommand{\R}{\mathbb{R}}
\newcommand{\F}{\mathbb{F}}
\newcommand{\I}{\mathbb{I}}
@@ -18,13 +19,21 @@
\newcommand{\dt}[1]{\dot{#1}}
\newcommand{\fto}{\longrightarrow}
\newcommand{\pfto}{\dashrightarrow}
\newcommand{\psfto}{\dashrightarrow}
\newcommand{\interval}[1]{{[#1]}}
\newcommand{\ivl}[1]{{[#1]}}
\newcommand{\unl}[1]{{\underline{#1}}}
\newcommand{\ovl}[1]{{\overline{#1}}}
\newcommand{\der}[1]{\dot{#1}}

\newcommand{\hatR}{\widehat{R}}
\newcommand{\hatX}{\widehat{X}}
\newcommand{\hatY}{\widehat{Y}}

\newcommand{\unl}[1]{{\underline{#1}}}
\newcommand{\ovl}[1]{{\overline{#1}}}
\newcommand{\tru}{\mathsf{T}}
\newcommand{\fls}{\mathsf{F}}
\newcommand{\indt}{\mathsf{I}}
\newcommand{\unkn}{\mathsf{U}}

\newcommand{\precless}{\prec\!<}
\newcommand{\precprec}{\prec\!\!\prec}
@@ -33,19 +42,11 @@
\newcommand{\succsucc}{\succ\!\!\succ}
\newcommand{\gtrsucc}{>\!\!\!\succ}

\newcommand{\and}{\wedge}
\newcommand{\or}{\vee}
%\newcommand{\and}{\wedge}
%\newcommand{\or}{\vee}

\newcommand{\dom}{\mathrm{dom}}

\newcommand{\tand}{\text{ and }}
\newcommand{\tor}{\text{ or }}
\newcommand{\timplies}{\text{ implies }}

\newcommand{\dom}{\mathrm{dom}}

\newcommand{\der}[1]{\dot{#1}}

\newcommand{\tru}{\mathsf{T}}
\newcommand{\fls}{\mathsf{F}}
\newcommand{\indt}{\mathsf{I}}
\newcommand{\unkn}{\mathsf{U}}
11 changes: 7 additions & 4 deletions doc/modules.dox
@@ -506,7 +506,7 @@ namespace Ariadne {
* \ingroup SolverModule
* \brief Classes and functions for solving nonlinear algebraic equations.
*
* \details For the mathematical theory of algebraic equations, including implicit function problems, see the \ref algebraic_equations_page Page.
* \details <em>For the mathematical theory of algebraic equations, including implicit function problems, see the \ref algebraic_equations_page Page.</em>
*
* A \a Solver class provides functionality for solving algebraic equations, as defined in the \ref SolverInterface.
* Two basic kinds of problem are considered.
@@ -529,9 +529,12 @@ namespace Ariadne {
* \ingroup SolverModule
* \brief Classes and functions for solving linear and nonlinear programming problems.
*
* The LinearProgram class supports construction and solution of linear
* programming problems. Tests for feasibility only are also possible. The
* computations are performed by a BLAS/LAPACK style lpslv() routing.
* \details <em>For the mathematical theory of linear programming, see the \ref linear_programming_page Page, and for the theory of nonlinear programming, see the \ref nonlinear_programming_page Page.</em>
*
* The \a LinearProgram class supports the construction and solution of linear
* programming problems. Feasibility alone can also be tested.
*
*
*/


63 changes: 34 additions & 29 deletions doc/nonlinear_programming.dox
@@ -30,37 +30,44 @@

\page nonlinear_programming_page NonLinear Programming

This page describes the theory of nonlinear programming and algorithms for the rigorous numerical solution of nonlinear optimisation problems.
For details on how this is implemented in %Ariadne, see the \ref OptimisationSubModule documentation.


\section nonlinear_optimisation Nonlinear Constrained Optimisation Problems

We first give the theory of nonlinear programming, including first- and second-order conditions for a local optimum.

\subsection standard_nonlinear_optimisation Standard optimisation problem

Consider the nonlinear programming problem
\f[ \boxed{ \max f(x) \text{ s.t. } g_j(x)\geq 0,\ j=1,\ldots,l; \ h_k(x) = 0,\ k=1,\ldots,m . } \f]
Applying a penalty function yields the unconstrained maximisation of
\f[ f(x) + \mu \sum_{j=1}^{l} \log g_j(x) - \frac{1}{2\nu} \sum_{k=1}^{m} (h_k(x) )^2 . \f]
\f[ \boxed{ \min f(x) \text{ s.t. } g_j(x)\geq 0,\ j=1,\ldots,l; \ h_k(x) = 0,\ k=1,\ldots,m . } \f]
Applying a barrier function for the inequality constraints and a penalty function for the equality constraints yields the unconstrained minimisation of
\f[ f(x) - \mu \sum_{j=1}^{l} \log(g_j(x)) + \frac{1}{2\nu} \sum_{k=1}^{m} (h_k(x) )^2 . \f]
Differentiating with respect to \f$x\f$ yields
\f[ \nabla_{\!i\,} f(x) - \mu \sum_{j=1}^{l} \frac{\nabla_{\!i\,}g_j(x)}{g_j(x)} + \frac{1}{\nu} \sum_{k=1}^{m} h_k(x)\nabla_{\!i\,}h_k(x) = 0. \f]
Setting \f$\lambda_j = \mu/g_j(x)\f$ and \f$\kappa_k = h_k(x)/\nu\f$ yields
\f[ \nabla_{\!i\,} f(x) - \sum_{j=1}^{l} \lambda_j \nabla_{\!i\,}g_j(x) + \sum_{k=1}^{m} \kappa_k\,\nabla_{\!i\,}h_k(x) = 0 . \f]
Combining these equations yields
\f[ \begin{gathered} \nabla f(x) - \sum_{j=1}^{l} \lambda_j \nabla g_j(x) + \sum_{k=1}^{m} \kappa_k \nabla h_k(x) = 0; \\ \lambda_j g_j(x) - \mu = 0; \\ h_k(x) - \nu \kappa_k = 0; \\ \lambda_j \geq 0; \ g_j(x) \geq 0 . \end{gathered} \f]
Setting \f$\nu=0\f$ these equations yields the <em>central path</em> for the problem, which is a relaxation of the optimality conditions
\f[ \boxed{ \begin{gathered} \nabla f(x) + \sum_{j=1}^{l} \lambda_j \nabla g_j(x) + \sum_{k=1}^{m} \kappa_k \nabla h_k(x) = 0; \\ \lambda_j g_j(x) = \mu; \\ h_k(x) = 0; \\ \lambda_j \geq 0; \ g_j(x) \geq 0 . \end{gathered} } \f]
Taking \f$\mu\to0\f$ yields the standard Karush-Kuhn-Tucker conditions for optimality
Taking \f$\nu\to0\f$ in these equations yields the <em>central path</em> for the problem, which is a relaxation of the optimality conditions
\f[ \boxed{ \begin{gathered} \nabla f(x) - \sum_{j=1}^{l} \lambda_j \nabla g_j(x) + \sum_{k=1}^{m} \kappa_k \nabla h_k(x) = 0; \\ \lambda_j g_j(x) = \mu; \\ h_k(x) = 0; \\ \lambda_j \gt 0; \ g_j(x) \gt 0 . \end{gathered} } \f]
Taking \f$\mu\to0\f$ yields the standard Karush-Kuhn-Tucker (KKT) conditions for optimality
\f[ \boxed{ \begin{gathered} \nabla f(x) - \sum_{j=1}^{l} \lambda_j \nabla g_j(x) + \sum_{k=1}^{m} \kappa_k \nabla h_k(x) = 0; \\ \lambda_j g_j(x) = 0; \\ h_k(x) = 0; \\ \lambda_j \geq 0; \ g_j(x) \geq 0 . \end{gathered} } \f]
The Karush-Kuhn-Tucker conditions are necessary conditions for a <em>regular</em> local optimum, i.e. one for which the gradients \f$\nabla h_k(x)\f$ of the equality constraints, together with the gradients \f$\nabla g_j(x)\f$ of the <em>active</em> inequality constraints, form a linearly independent set of vectors.
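For example, for the problem of minimising \f$f(x)=x_1^2+x_2^2\f$ subject to the single constraint \f$g(x)=x_1+x_2-1\geq0\f$, the conditions read
\f[ 2x_1-\lambda = 0, \quad 2x_2-\lambda = 0, \quad \lambda\,(x_1+x_2-1) = 0, \quad \lambda\geq0,\ x_1+x_2-1\geq0 . \f]
Since \f$x=0\f$ is infeasible, the constraint must be active, giving \f$x_1=x_2=\tfrac{1}{2}\f$ with multiplier \f$\lambda=1\f$.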

A Lagrangian for this problem is
\f[ \boxed{\displaystyle L(x,\lambda,\kappa) = f(x) - \sum_{j=1}^{l} \lambda_j g_j(x) + \sum_{k=1}^{m} \kappa_k h_k(x) . } \f]
The stationarity and equality conditions of the Karush-Kuhn-Tucker system are obtained by setting the partial derivatives of \f$L\f$ with respect to \f$x\f$ and \f$\kappa\f$ to zero; the conditions on \f$\lambda_j\f$ are relaxed to the complementarity condition \f$\lambda_j=0 \vee g_j(x)=0\f$ together with \f$\lambda_j,g_j(x)\geq0\f$.

The standard (Fritz) John conditions for optimality are
\f[ \boxed{ \begin{gathered} \mu \nabla f(x) + \sum_{j=1}^{l} \lambda_j \nabla g_j(x) + \sum_{k=1}^{m} \kappa_k \nabla h_k(x) = 0; \\ \lambda_j g_j(x)=0; \\ h_k(x)=0; \\ \mu + \sum_j \lambda_j + \sum_k \kappa_k^2 = 0; \\ \lambda_j \geq 0; \ g_j(x) \geq 0 . \end{gathered} } \f]
\f[ \boxed{ \begin{gathered} \mu \nabla f(x) - \sum_{j=1}^{l} \lambda_j \nabla g_j(x) + \sum_{k=1}^{m} \kappa_k \nabla h_k(x) = 0; \\ \lambda_j g_j(x)=0; \\ h_k(x)=0; \\ \mu + \sum_j \lambda_j + \sum_k \kappa_k^2 = 1; \\ \mu\geq 0; \ \lambda_j \geq 0; \ g_j(x) \geq 0 . \end{gathered} } \f]
The John conditions are necessary conditions for any local optimum (assuming differentiability of \f$f,g_j,h_k\f$).

\note Taking \f$f(x) = -cx\f$, \f$g(x)=-x\f$ and \f$h(x)=Ax-b\f$ yields the primal linear programming problem \f$\min cx \mid Ax=b,\ x\geq 0\f$.
\note Taking \f$f(x) = cx\f$, \f$g(x)=x\f$ and \f$h(x)=Ax-b\f$ yields the primal linear programming problem \f$\min cx \mid Ax=b,\ x\geq 0\f$.
Taking \f$f(y)=-yb\f$ and \f$g(y) = c-yA\f$ yields the dual linear programming problem \f$\max yb \mid yA\leq c\f$.
This means that the standard \em primal nonlinear programming problem with only affine inequality constraints more closely resembles the \em dual linear programming problem.
Despite this, we shall use \c x for the primal variables of the standard primal nonlinear programming problem.
Despite this, we shall use \f$x\f$ for the primal variables of the standard primal nonlinear programming problem.
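
The barrier relaxation above also suggests a simple numerical scheme: minimise \f$f(x)-\mu\sum_{j}\log g_j(x)\f$ by Newton's method for a decreasing sequence of values of \f$\mu\f$, reading off the multipliers as \f$\lambda_j=\mu/g_j(x)\f$. The following self-contained sketch (illustrative only; it does not use %Ariadne's optimiser classes) follows the central path for the scalar problem \f$\min\,(x-2)^2\f$ subject to \f$g(x)=1-x\geq0\f$, recovering the constrained minimiser \f$x^*=1\f$ and multiplier \f$\lambda^*=2\f$ as \f$\mu\to0\f$.
\code{.cpp}
// Illustrative barrier-method sketch; not Ariadne's optimiser interface.
//   min (x-2)^2  s.t.  g(x) = 1-x >= 0,  with phi(x) = (x-2)^2 - mu*log(1-x).
#include <cmath>
#include <cstdio>

int main() {
    double x = 0.0;                                    // strictly feasible start, g(0) = 1 > 0
    for (double mu = 1.0; mu > 1e-8; mu *= 0.1) {
        for (int newton_step = 0; newton_step != 25; ++newton_step) {
            double g = 1.0 - x;                        // constraint value, kept strictly positive
            double dphi = 2.0*(x-2.0) + mu/g;          // phi'(x)
            if (std::fabs(dphi) < 1e-12) { break; }    // stationary point of the barrier function
            double ddphi = 2.0 + mu/(g*g);             // phi''(x) > 0
            double dx = -dphi/ddphi;                   // Newton step
            while (x + dx >= 1.0) { dx *= 0.5; }       // damp the step to remain strictly feasible
            x += dx;
        }
        std::printf("mu=%.1e  x=%.6f  lambda=%.6f\n", mu, x, mu/(1.0-x));
    }
    return 0;
}
\endcode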



@@ -78,48 +85,46 @@ Taking \f$\mu\to 0\f$, we obtain the standard Karush-Kuhn-Tucker conditions.

Consider the problem
\f[ \max f(x) \text{ s.t. } h(x) = 0 . \f]
The central path is defined by
\f[ \boxed{ \begin{gathered} \nabla f(x) + \kappa \cdot \nabla h(x) = 0; \\ h(x) + \kappa \nu = 0. \end{gathered} } \f]
The Karush-Kuhn-Tucker optimality conditions are
\f[ \boxed{ \begin{gathered} \nabla f(x) + \kappa \cdot \nabla h(x) = 0; \\ h(x) = 0. \end{gathered} } \f]

The problem can be relaxed by adding the penalty function \f$-g(x)^2/2\mu\f$.
The problem can be relaxed by adding the penalty function \f$-h(x)^2/2\nu\f$.
Then the optimality conditions are
\f[ \nabla f(x) - \frac{g(x)}{\mu} \nabla g(x) = 0\f]
Taking Lagrange multiplier \f$\lambda = g(x)/\mu\f$, we obtain
\f[ \boxed{ \begin{gathered} \nabla f(x) - \lambda \nabla g(x) = 0; \\ g(x) - \lambda \mu = 0 \end{gathered} } \f]
which relaxes to \f$g(x)=0\f$ as \f$\mu\to0\f$.
\f[ \nabla f(x) - \frac{h(x)}{\nu} \nabla h(x) = 0\f]
Taking Lagrange multiplier \f$\kappa = -h(x)/\nu\f$, we obtain
\f[ \boxed{ \begin{gathered} \nabla f(x) + \kappa \cdot \nabla h(x) = 0; \\ h(x) + \kappa \nu = 0 \end{gathered} } \f]
which relaxes to \f$h(x)=0\f$ as \f$\nu\to0\f$.
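
For example, maximising \f$f(x)=-\tfrac{1}{2}(x_1^2+x_2^2)\f$ subject to \f$h(x)=x_1+x_2-1=0\f$ has the exact solution \f$x_1=x_2=\kappa=\tfrac{1}{2}\f$. The penalised problem of maximising \f$f(x)-h(x)^2/2\nu\f$ has stationary point
\f[ x_1=x_2=\frac{1}{\nu+2}, \qquad \kappa=-\frac{h(x)}{\nu}=\frac{1}{\nu+2}, \f]
which converges to the exact solution as \f$\nu\to0\f$.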

\subsection nonlinear_bounded_constraints Bounded constraints

Consider the problem
\f[ \max f(x) \text{ s.t. } |g(x)| \leq \delta \f]
We can handle the problem in two ways: either by considering the smooth constraint \f$\delta^2-g(x)^2\geq0\f$, or the two constraints \f$\delta + g(x) \geq 0\f$ and \f$\delta - g(x)\geq 0\f$.

In the former case, we obtain the conditions
\f[ \begin{gathered} \nabla f(x) + 2 \lambda g(x) \nabla g(x) = 0; \\ \lambda (\delta^2 - g(x)^2) = \mu. \end{gathered} \f]
Replacing the standard Lagrange multiplier \f$\lambda\f$ with \f$-2\lambda g(x)\f$, we obtain
\f[ \begin{gathered} \nabla f(x) - \lambda \nabla g(x) = 0; \\ -\lambda (\delta^2 - g(x)^2) / 2g(x) = \mu. \end{gathered} \f]
In the former case, considering \f$\delta^2-g(x)^2\geq0\f$, we obtain the central path equations
\f[ \begin{gathered} \nabla f(x) - 2 \lambda g(x) \nabla g(x) = 0; \\ \lambda (\delta^2 - g(x)^2) = \mu. \end{gathered} \f]
Replacing the standard Lagrange multiplier \f$\lambda\f$ with \f$ - 2\lambda g(x)\f$, we obtain
\f[ \begin{gathered} \nabla f(x) + \lambda \nabla g(x) = 0; \\ - \lambda (\delta^2 - g(x)^2) / 2g(x) = \mu. \end{gathered} \f]
Rearranging the second formula gives
\f[ \boxed{ \begin{gathered} \nabla f(x) - \lambda \nabla g(x) = 0; \\ \bigl(\delta^2-g(x)^2\bigr) \lambda + 2 g(x) \mu = 0 . \end{gathered} } \f]
\f[ \boxed{ \begin{gathered} \nabla f(x) + \lambda \cdot \nabla g(x) = 0; \\ \bigl(\delta^2-g(x)^2\bigr) \lambda + 2 g(x) \mu = 0 . \end{gathered} } \f]
Differentiating gives
\f[ (\delta^2-g(x)^2) {\Delta\lambda} + (2 \mu - 2 g(x) \lambda) \nabla g(x) {\Delta x} = 0\f]
\f[ (\delta^2-g(x)^2) {\Delta\lambda} - 2 ( g(x) \lambda - \mu ) \nabla g(x) {\Delta x} = 0\f]
In order to solve for \f${\Delta x}\f$ in terms of \f${\Delta\lambda}\f$, we require
\f$ \mu - g(x) \lambda \neq 0\f$.
Since the optimal \f$x^*,\lambda^*\f$ satisfy \f$(\delta^2-g(x^*)^2) \lambda^* + 2 g(x^*) \mu = 0\f$, we have \f$\lambda^* g(x^*) \geq 0\f$.
\f$ g(x) \lambda - \mu \neq 0\f$.
Since the optimal \f$x^*,\lambda^*\f$ satisfy \f$(\delta^2-g(x^*)^2) \lambda^* + 2 g(x^*) \mu = 0\f$, we have \f$\lambda^* g(x^*) \leq 0\f$.

In the former case, we obtain the conditions
\f[ \begin{gathered} \nabla f(x) - \lambda^+ \nabla g(x) + \lambda^- \nabla g(x) = 0; \\ \lambda^+ (\delta + g(x)) = \mu; \\ \lambda^- (\delta - g(x)) = \mu \end{gathered} \f]
Set \f$\lambda = \lambda^+ - \lambda^-\f$. Then we have
\f[ \lambda = \mu \biggl( \frac{1}{\delta+g(x)} - \frac{1}{\delta-g(x)}\biggr) = \frac{-2\mu g(x)}{\delta^2-g(x)^2} . \f]
In the latter case, considering \f$\delta+g(x)\geq0\f$ and \f$\delta - g(x) \geq0\f$, we obtain the conditions
\f[ \begin{gathered} \nabla f(x) + \lambda^+ \nabla g(x) - \lambda^- \nabla g(x) = 0; \\ \lambda^+ (\delta + g(x)) = \mu; \\ \lambda^- (\delta - g(x)) = \mu \end{gathered} \f]
Set \f$\lambda = \lambda^+ - \lambda^-\f$. Then we have \f$\nabla{f}+\lambda \cdot \nabla{g}(x) = 0\f$ and
\f[ \lambda = \mu \biggl( \frac{1}{\delta+g(x)} - \frac{1}{\delta-g(x)}\biggr) = \frac{-2 g(x) \mu}{\delta^2-g(x)^2} . \f]
Rearranging again gives
\f[ (\delta^2-g(x)^2) \lambda + 2 g(x) \mu = 0\f]
Note that \f$\lambda^+ + \lambda^- = 2\delta\mu/(\delta^2-g(x)^2) \geq \bigl|\lambda^+-\lambda^-\bigr| \f$
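
For example, maximising \f$f(x)=x\f$ subject to \f$|x|\leq 1\f$ (so \f$g(x)=x\f$ and \f$\delta=1\f$), the relaxed conditions \f$1+\lambda=0\f$ and \f$(1-x^2)\lambda+2x\mu=0\f$ give \f$\lambda=-1\f$ and \f$x=\sqrt{1+\mu^2}-\mu\f$, which tends to the constrained maximiser \f$x=1\f$ as \f$\mu\to0\f$; correspondingly \f$\lambda^+=\mu/(1+x)\to0\f$ and \f$\lambda^-=\mu/(1-x)\to1\f$.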

\subsection nonlinear_state_constraints State constraints

Setting \f$g(x)=x\f$ and taking \f$\underline{x}\leq x\leq\overline{x}\f$, we obtain the state constraints
\f[ \boxed{ \begin{gathered} \nabla f(x) - \lambda \nabla g(x) = 0; \\ (\overline{x}-x)(x-\underline{x})\lambda + (2x-\underline{x}-\overline{x}) \mu = 0 . \end{gathered} } \f]
\f[ \boxed{ \begin{gathered} \nabla f(x) + \lambda \cdot \nabla g(x) = 0; \\ (\overline{x}-x)(x-\underline{x})\lambda + (2x-\underline{x}-\overline{x}) \mu = 0 . \end{gathered} } \f]
If \f$\overline{x}=-\underline{x}=\delta\f$, this simplifies to
\f[ (\delta^2-x^2) \lambda + 2 x \mu = 0 . \f]
Differentiating yields
8 changes: 4 additions & 4 deletions doc/topology.dox
@@ -359,10 +359,10 @@ However, taking these as a sub-base for all open sets yields a topology that may

<i>Example</i> We now give an example to show that the lower and upper sets, when taken as closed sets, need not generate an order topology.
<br/>
Consider \f$\R\times\R\f$ with the standard product order \f$(x_1,y_1)\leq(x_2,y_2)\iff x_1\leq x_2\and y_1\leq y_2\f$.
Suppose we take the sets \f$\{(x,y)\mid x\leq a\and y\leq b\}\f$ and \f$\{(x,y)\mid a\leq x\and b\leq y\}\f$ to generate the closed sets of the topology, so a sub-base of open sets is given by \f$\{(x,y)\mid x < a \or y < b\}\f$ and \f$\{(x,y)\mid a < x \or b < y\}\f$.
Then any open set \f$U\f$ must contain a quadrants of the form \f$x< a \and y>b\f$ and \f$x>a \and y< b\f$, so any two open sets intersect, so the topology cannot yield a partially-ordered space.
Note that in this case, the sets \f$\{ (x,y) \mid \underline{a} < x < \overline{a} \and \underline{b} < y < \overline{b}\}\f$ are not open.
Consider \f$\R\times\R\f$ with the standard product order \f$(x_1,y_1)\leq(x_2,y_2)\iff x_1\leq x_2\wedge y_1\leq y_2\f$.
Suppose we take the sets \f$\{(x,y)\mid x\leq a\wedge y\leq b\}\f$ and \f$\{(x,y)\mid a\leq x\wedge b\leq y\}\f$ to generate the closed sets of the topology, so a sub-base of open sets is given by \f$\{(x,y)\mid x < a \vee y < b\}\f$ and \f$\{(x,y)\mid a < x \vee b < y\}\f$.
Then any open set \f$U\f$ must contain quadrants of the form \f$x< a \wedge y>b\f$ and \f$x>a \wedge y< b\f$, so any two nonempty open sets intersect, and hence the topology cannot yield a partially-ordered space.
Note that in this case, the sets \f$\{ (x,y) \mid \underline{a} < x < \overline{a} \wedge \underline{b} < y < \overline{b}\}\f$ are not open.

\subsubsection partialorderedspace Partially ordered space

