DOC: Improve documentation for SUR
Improve SUR background documentation
bashtage committed Jun 18, 2017
1 parent 9b26eed commit 7c7ad1c
Showing 5 changed files with 146 additions and 12 deletions.
112 changes: 110 additions & 2 deletions doc/source/system/mathematical-detail.lyx
@@ -253,7 +253,7 @@ When residuals are assumed to be homoskedastic, the covariance can be consistent
ly estimated by
\begin_inset Formula
\[
\left(X^{\prime}\Delta^{-1}X\right)^{-1}\left(X^{\prime}\Delta^{-\frac{1}{2}}\hat{\Omega}\Delta^{-\frac{1}{2}}X\right)\left(X^{\prime}\Delta^{-1}X\right)^{-1}
\left(X^{\prime}\Delta^{-1}X\right)^{-1}\left(X^{\prime}\Delta^{-1}\hat{\Omega}\Delta^{-1}X\right)\left(X^{\prime}\Delta^{-1}X\right)^{-1}
\]

\end_inset
@@ -273,7 +273,7 @@ where
\end_inset

while in FGLS
\begin_inset Formula $\Delta=I_{N}\otimes\hat{\Sigma}$
\begin_inset Formula $\Delta=\hat{\Omega}$
\end_inset

.
@@ -407,6 +407,101 @@ is the sum of the scores across models

\end_layout

\begin_layout Standard
\noindent

\series bold
Debiased Estimators
\end_layout

\begin_layout Standard
\noindent
When the debiased flag is set, a small sample adjustment is applied so that
element
\begin_inset Formula $ij$
\end_inset

of
\begin_inset Formula $\hat{\Sigma}$
\end_inset

is scaled by
\begin_inset Formula
\[
\frac{T}{\sqrt{\left(T-P_{i}\right)\left(T-P_{j}\right)}}.
\]

\end_inset


\end_layout

\begin_layout Subsubsection*
Other Statistics
\end_layout

\begin_layout Standard
\noindent

\series bold
Goodness of fit
\end_layout

\begin_layout Standard
\noindent
The reported
\begin_inset Formula $R^{2}$
\end_inset

is always computed from the data, or the weighted data if weights are used, for either
OLS or GLS.
This means that the reported
\begin_inset Formula $R^{2}$
\end_inset

for the GLS estimator may be negative.
\end_layout

\begin_layout Standard
\noindent

\series bold
F statistics
\end_layout

\begin_layout Standard
\noindent
When the debiased covariance estimator is used (small sample adjustment),
the reported
\begin_inset Formula $F$
\end_inset

statistics use
\begin_inset Formula $K\left(T-\bar{P}\right)$
\end_inset

where
\begin_inset Formula $\bar{P}=K^{-1}\sum_{i=1}^{K}P_{i}$
\end_inset

and
\begin_inset Formula $P_{i}$
\end_inset

is the number of variables including the constant in model
\begin_inset Formula $i$
\end_inset

.
When models include restrictions, the covariance may be singular.
When this occurs, the
\begin_inset Formula $F$
\end_inset

statistic cannot be calculated.
\end_layout

\begin_layout Subsection*
Memory efficient calculations
\end_layout
@@ -521,5 +616,18 @@ solve
.
\end_layout

\begin_layout Standard
\begin_inset Note Note
status open

\begin_layout Plain Layout
TODO: Explain restrictions
\end_layout

\end_inset


\end_layout

\end_body
\end_document
36 changes: 31 additions & 5 deletions doc/source/system/mathematical-detail.txt
@@ -79,14 +79,14 @@ Covariance Estimation
When residuals are assumed to be homoskedastic, the covariance can be
consistently estimated by

.. math:: \left(X^{\prime}\Delta^{-1}X\right)^{-1}\left(X^{\prime}\Delta^{-\frac{1}{2}}\hat{\Omega}\Delta^{-\frac{1}{2}}X\right)\left(X^{\prime}\Delta^{-1}X\right)^{-1}
.. math:: \left(X^{\prime}\Delta^{-1}X\right)^{-1}\left(X^{\prime}\Delta^{-1}\hat{\Omega}\Delta^{-1}X\right)\left(X^{\prime}\Delta^{-1}X\right)^{-1}

where :math:`\Delta` is the weighting matrix used in the parameter
estimator. For example, in OLS :math:`\Delta=I_{NK}` while in
FGLS\ :math:`\Delta=I_{N}\otimes\hat{\Sigma}`. The estimator supports
using FGLS with an assumption that :math:`\hat{\Sigma}` is diagonal or
using a user-specified value of :math:`\Sigma`. When the FGLS estimator
is used, this simplifies to
FGLS\ :math:`\Delta=\hat{\Omega}`. The estimator supports using FGLS
with an assumption that :math:`\hat{\Sigma}` is diagonal or using a
user-specified value of :math:`\Sigma`. When the FGLS estimator is used,
this simplifies to

.. math:: \left(X^{\prime}\Delta^{-1}X\right)^{-1}
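
As a rough illustration, the NumPy sketch below evaluates both expressions
directly. It is a minimal sketch under stated assumptions, not the library's
implementation: ``x``, ``delta`` (:math:`\Delta`) and ``omega_hat``
(:math:`\hat{\Omega}`) are assumed to be dense arrays for the stacked system.

.. code-block:: python

    import numpy as np

    def sandwich_cov(x, delta, omega_hat):
        # (X' D^-1 X)^-1 (X' D^-1 Omega D^-1 X) (X' D^-1 X)^-1, with D = delta
        delta_inv = np.linalg.inv(delta)
        bread = np.linalg.inv(x.T @ delta_inv @ x)
        meat = x.T @ delta_inv @ omega_hat @ delta_inv @ x
        return bread @ meat @ bread

    def fgls_cov(x, omega_hat):
        # When delta = omega_hat (FGLS) the sandwich collapses to (X' D^-1 X)^-1
        return np.linalg.inv(x.T @ np.linalg.solve(omega_hat, x))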

@@ -124,6 +124,32 @@ is the sum of the scores across models :math:`\left(j\right)` that have
the same observation index :math:`i`. This estimator allows arbitrary
correlation of residuals with the same observation index.
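
Since each model's parameters occupy a distinct block of the stacked
parameter vector, summing the full-length scores across models sharing an
observation index is equivalent to concatenating the per-model score blocks.
The sketch below is one hedged reading of that construction, not the
package's internals; ``x_list`` (per-model regressor arrays) and ``eps``
(a :math:`T\times K` residual matrix) are assumed shapes.

.. code-block:: python

    import numpy as np

    def summed_score_outer_product(x_list, eps):
        # Accumulate sum_i s_i s_i' where s_i stacks x_ij * eps_ij over the
        # K models for observation i, allowing arbitrary cross-model
        # correlation of residuals sharing the same index.
        nobs = eps.shape[0]
        total = sum(x.shape[1] for x in x_list)
        xee = np.zeros((total, total))
        for i in range(nobs):
            s_i = np.concatenate([x[i] * eps[i, j] for j, x in enumerate(x_list)])
            xee += np.outer(s_i, s_i)
        return xee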

**Debiased Estimators**

When the debiased flag is set, a small sample adjustment is applied so
that element :math:`ij` of :math:`\hat{\Sigma}` is scaled by

.. math:: \frac{T}{\sqrt{\left(T-P_{i}\right)\left(T-P_{j}\right)}}.
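
This elementwise scaling can be written as a single outer product of the
per-model degree-of-freedom corrections. A minimal sketch, assuming ``sigma``
is :math:`\hat{\Sigma}` (:math:`K\times K`), ``nobs`` is :math:`T` and ``p``
holds the per-model parameter counts; the names are illustrative only.

.. code-block:: python

    import numpy as np

    def debias_sigma(sigma, nobs, p):
        # Scale element ij of sigma by T / sqrt((T - P_i) * (T - P_j))
        p = np.asarray(p, dtype=float)
        scale = nobs / np.sqrt(np.outer(nobs - p, nobs - p))
        return sigma * scale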

Other Statistics
~~~~~~~~~~~~~~~~

**Goodness of fit**

The reported :math:`R^{2}` is always computed from the data, or the weighted
data if weights are used, for either OLS or GLS. This means that the reported
:math:`R^{2}` for the GLS estimator may be negative.

**F statistics**

When the debiased covariance estimator is used (small sample adjustment),
the reported :math:`F` statistics use :math:`K\left(T-\bar{P}\right)`
where :math:`\bar{P}=K^{-1}\sum_{i=1}^{K}P_{i}` and :math:`P_{i}` is
the number of variables including the constant in model :math:`i`. When
models include restrictions, the covariance may be singular. When this
occurs, the :math:`F` statistic cannot be calculated.
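
The quantity :math:`K\left(T-\bar{P}\right)` is straightforward to compute
from the per-model parameter counts. The sketch below only evaluates
:math:`\bar{P}` and :math:`K\left(T-\bar{P}\right)`; reading it as the
denominator degrees of freedom of the reported :math:`F` statistic is an
assumption based on the text above rather than a statement about the
library's internals.

.. code-block:: python

    def f_denominator_df(nobs, p):
        # K * (T - P_bar), with P_bar the mean parameter count (including
        # the constant) across the K models
        k = len(p)
        p_bar = sum(p) / k
        return k * (nobs - p_bar)

    # Three models with 3, 4 and 5 parameters and T = 100 observations:
    # P_bar = 4, so the value is 3 * (100 - 4) = 288.
    print(f_denominator_df(100, [3, 4, 5]))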

Memory efficient calculations
-----------------------------

1 change: 0 additions & 1 deletion linearmodels/system/_utility.py
@@ -230,7 +230,6 @@ def _compute_transform(self):
vals = np.real(vals)
vecs = np.real(vecs)
idx = np.argsort(vals)[::-1]
vals = vals[idx]
vecs = vecs[:, idx]
t, l = vecs[:, :k - c], vecs[:, k - c:]
q = self._qa[:, None]
3 changes: 2 additions & 1 deletion linearmodels/system/covariance.py
@@ -1,7 +1,8 @@
from numpy import eye, ones, sqrt, vstack, zeros
from numpy.linalg import inv

from linearmodels.system._utility import blocked_diag_product, blocked_inner_prod, inv_matrix_sqrt
from linearmodels.system._utility import (blocked_diag_product, blocked_inner_prod,
inv_matrix_sqrt)


class HomoskedasticCovariance(object):
6 changes: 3 additions & 3 deletions linearmodels/system/model.py
@@ -17,7 +17,7 @@
from numpy import (asarray, cumsum, diag, eye, hstack, inf, nanmean,
ones_like, reshape, sqrt, zeros)
from numpy.linalg import inv, solve
from pandas import DataFrame, Series
from pandas import Series
from patsy.highlevel import dmatrices
from patsy.missing import NAAction

@@ -569,7 +569,7 @@ def _common_results(self, beta, cov, method, iter_count, nobs, cov_type,

return results

def _gls_finalize(self, beta, sigma, full_sigma, sigma_m12, gls_eps, eps,
def _gls_finalize(self, beta, sigma, full_sigma, gls_eps, eps,
cov_type, iter_count, **cov_config):
"""Collect results to return after GLS estimation"""
wx = self._wx
@@ -746,7 +746,7 @@ def fit(self, *, method=None, full_cov=True, iterate=False, iter_limit=100, tol=
x = blocked_diag_product(self._x, eye(k))
eps = y - x @ beta

return self._gls_finalize(beta, sigma, full_sigma, sigma_m12, gls_eps,
return self._gls_finalize(beta, sigma, full_sigma, gls_eps,
eps, cov_type, iter_count, **cov_config)

@property
