From 09f184384ed20a75817aab7617747ae35f5f697f Mon Sep 17 00:00:00 2001
From: Adam Turner <9087854+AA-Turner@users.noreply.github.com>
Date: Thu, 13 Jul 2023 21:22:28 +0100
Subject: [PATCH] Resolve all outstanding warnings on rendering documentation
(#1301)
* Fix Markdown links
* Fix Markdown header levels
* Fix Markdown footnotes
* Switch Markdown parser to MyST
* Fix references in toctrees
* Fix title underline length
* Remove out-of-date instructions
* Ignore Numba deprecation warnings
* Fix ConsRiskyContribModel module docstring
* Fix section underline lengths in docstrings
* Escape inline markup (``*``)
* Escape LaTeX macros in docstrings
* Add link to Fashion Victim, from before it was removed
* Fix mathematical markup in notebooks
* Fix bullet points markup
* Fix spacing for sections in ConsIndShockModel.py
* Ignore the NARK directory
* Fix the duplicate object description warning
* Enable failing on warnings during documentation rendering in GitHub Actions
---
.github/workflows/documentation.yml | 1 +
Documentation/ARKitecture.md | 28 +-
Documentation/CHANGELOG.md | 2 +-
Documentation/conf.py | 20 +-
Documentation/contributing/CONTRIBUTING.md | 2 +-
.../contributing/Installation_instruction.md | 2 +-
Documentation/index.rst | 3 +-
Documentation/instructions.txt | 305 ------------------
Documentation/introduction.md | 4 +-
Documentation/quick-start.md | 16 +-
.../reference/tools/econforgeinterp.rst | 2 +-
Documentation/reference/tools/index.rst | 4 +-
.../simple-steps-getting-sphinx-working.txt | 184 -----------
HARK/ConsumptionSaving/ConsIndShockModel.py | 27 +-
.../ConsRiskyContribModel.py | 33 +-
HARK/distribution.py | 42 +--
HARK/frame.py | 6 +-
HARK/interpolation.py | 2 +-
.../KinkedRconsumerType.ipynb | 20 +-
.../PerfForesightConsumerType.ipynb | 2 +-
.../GenIncProcessModel.ipynb | 2 +-
examples/Journeys/Journey-PhD.ipynb | 2 +-
examples/LifecycleModel/LifecycleModel.py | 2 +-
requirements/dev.txt | 2 +-
24 files changed, 127 insertions(+), 586 deletions(-)
delete mode 100644 Documentation/instructions.txt
delete mode 100644 Documentation/simple-steps-getting-sphinx-working.txt
diff --git a/.github/workflows/documentation.yml b/.github/workflows/documentation.yml
index c91d80118..2a12f1408 100644
--- a/.github/workflows/documentation.yml
+++ b/.github/workflows/documentation.yml
@@ -51,6 +51,7 @@ jobs:
sphinx-build
-M html Documentation HARK-docs
-T
+ -W
- name: Set up git for deployment
run: |
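A quick way to check the new flag before pushing is to run the same command the workflow uses. This sketch simply shells out to it from the repository root (it assumes Sphinx and the docs requirements are installed locally):

    # Hypothetical local mirror of the CI step above; -W turns any warning into an error.
    import subprocess

    subprocess.run(
        ["sphinx-build", "-M", "html", "Documentation", "HARK-docs", "-T", "-W"],
        check=True,  # raise CalledProcessError if sphinx-build exits non-zero
    )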
diff --git a/Documentation/ARKitecture.md b/Documentation/ARKitecture.md
index 027f81578..e82c0983f 100644
--- a/Documentation/ARKitecture.md
+++ b/Documentation/ARKitecture.md
@@ -42,11 +42,11 @@ After you [installed](https://hark.readthedocs.io/en/latest/quick-start.html) an
### General Purpose Tools
-HARK's root directory contains six tool modules, [1](#myfootnote1) each containing a variety of functions and classes that can be used in many economic models-- or even for mathematical purposes that have nothing to do with economics. Some of the tool modules are very sparely populated at this time, while others are quite large. We expect that all of these modules will grow considerably in the near future, as new tools are ''low hanging fruit'' for contribution to the project. [2](#myfootnote2)
+HARK's root directory contains six tool modules, [^1] each containing a variety of functions and classes that can be used in many economic models-- or even for mathematical purposes that have nothing to do with economics. Some of the tool modules are very sparsely populated at this time, while others are quite large. We expect that all of these modules will grow considerably in the near future, as new tools are ''low hanging fruit'' for contribution to the project. [^2]
-1: The ''taxonomy'' of these modules is in flux; the functions described here could be combined into fewer modules or further divided by purpose. [↩](#a1)
+[^1]: The ''taxonomy'' of these modules is in flux; the functions described here could be combined into fewer modules or further divided by purpose.
-2: That is, as the foundational, building-block elements of HARK, new tools are not difficult to program and do not require extensive integration with many moving parts. [↩](#a2)
+[^2]: That is, as the foundational, building-block elements of HARK, new tools are not difficult to program and do not require extensive integration with many moving parts.
#### HARK.core
@@ -56,9 +56,9 @@ Microeconomic models in HARK use the **_AgentType_** class to represent agents w
Macroeconomic models in HARK use the **_Market_** class to represent a market (or other aggregator) that combines the actions, states, and/or shocks (generally, outcomes) of individual agents in the model into aggregate outcomes that are ''passed back'' to the agents. For example, the market in a consumption-saving model might combine the individual asset holdings of all agents in the market to generate aggregate capital in the economy, yielding the interest rate on assets (as the marginal product of capital); the individual agents then learn the aggregate capital level and interest rate, conditioning their next action on this information. Objects that microeconomic agents treat as exogenous when solving (or simulating) their model are thus endogenous at the macroeconomic level. Like **_AgentType_**, the **_Market_** class also has a **_solve_** method, which seeks out a dynamic general equilibrium: a ''rule'' governing the dynamic evolution of macroeconomic objects such that if agents believe this rule and act accordingly, then their collective actions generate a sequence of macroeconomic outcomes that justify the belief in that rule. For a more complete description, see section [Market Class](#market-class).
-Beyond the model frameworks, **_HARK.core_** also defines a ''supersuperclass'' called **_HARKobject_**. When solving a dynamic microeconomic model with an infinite horizon (or searching for a dynamic general equilibrium), it is often required to consider whether two solutions are sufficiently close to each other to warrant stopping the process (i.e. approximate convergence). It is thus necessary to calculate the ''distance'' between two solutions, so HARK specifies that classes should have a **_distance_** method that takes a single input and returns a non-negative value representing the (generally dimensionless) distance between the object in question and the input to the method. As a convenient default, **_HARKobject_** provides a ''universal distance metric'' that should be useful in many contexts. [3](#myfootnote3) When defining a new subclass of **_HARKobject_**, the user simply defines the attribute **_distance_criteria_** as a list of strings naming the attributes of the class that should be compared when calculating the distance between two instances of that class. For example, the class **_ConsumerSolution_** has **_distance_criteria = ['cFunc']_**, indicating that only the consumption function attribute of the solution matters when comparing the distance between two instances of **_ConsumerSolution_**. See [here](https://hark.readthedocs.io/en/latest/reference/tools/core.html) for further documentation.
+Beyond the model frameworks, **_HARK.core_** also defines a ''supersuperclass'' called **_HARKobject_**. When solving a dynamic microeconomic model with an infinite horizon (or searching for a dynamic general equilibrium), it is often required to consider whether two solutions are sufficiently close to each other to warrant stopping the process (i.e. approximate convergence). It is thus necessary to calculate the ''distance'' between two solutions, so HARK specifies that classes should have a **_distance_** method that takes a single input and returns a non-negative value representing the (generally dimensionless) distance between the object in question and the input to the method. As a convenient default, **_HARKobject_** provides a ''universal distance metric'' that should be useful in many contexts. [^3] When defining a new subclass of **_HARKobject_**, the user simply defines the attribute **_distance_criteria_** as a list of strings naming the attributes of the class that should be compared when calculating the distance between two instances of that class. For example, the class **_ConsumerSolution_** has **_distance_criteria = ['cFunc']_**, indicating that only the consumption function attribute of the solution matters when comparing the distance between two instances of **_ConsumerSolution_**. See [here](https://hark.readthedocs.io/en/latest/reference/tools/core.html) for further documentation.
-3: Roughly speaking, the universal distance metric is a recursive supnorm, returning the largest distance between two instances, among attributes named in **_distance_criteria_**. Those attributes might be complex objects themselves rather than real numbers, generating a recursive call to the universal distance metric. [↩](#a3)
+[^3]: Roughly speaking, the universal distance metric is a recursive supnorm, returning the largest distance between two instances, among attributes named in **_distance_criteria_**. Those attributes might be complex objects themselves rather than real numbers, generating a recursive call to the universal distance metric.
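A minimal sketch of the pattern footnote [^3] describes, assuming **_HARKobject_** is importable from **_HARK.core_** as the text states (the attribute name follows the **_ConsumerSolution_** example above):

    # Sketch only: a toy subclass whose distance depends on one named attribute.
    from HARK.core import HARKobject

    class ToySolution(HARKobject):
        distance_criteria = ["cFunc"]  # attributes compared by the universal metric

        def __init__(self, cFunc):
            self.cFunc = cFunc

    # For simple numeric attributes the metric reduces to an absolute difference:
    gap = ToySolution(cFunc=1.0).distance(ToySolution(cFunc=1.5))  # 0.5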
#### HARK.utilities
@@ -84,11 +84,11 @@ Methods for optimizing an objective function for the purposes of estimating a mo
#### HARK.parallel
-By default, processes in Python are single-threaded, using only a single CPU core. The **_HARK.parallel_** module provides basic tools for using multiple CPU cores simultaneously, with minimal effort. [4](#myfootnote4) In particular, it provides the function **_multiThreadCommands_**, which takes two arguments: a list of **_AgentType_**s and a list of commands as strings; each command should be a method of the **_AgentType_**s. The function simply distributes the **_AgentType_**s across threads on different cores and executes each command in order, returning no output (the **_AgentType_**s themselves are changed by running the commands). Equivalent results would be achieved by simply looping over each type and running each method in the list. Indeed, **_HARK.parallel_** also has a function called **_multiThreadCommandsFake_** that does just that, with identical syntax to **_multiThreadCommands_**; multithreading in HARK can thus be easily turned on and off. [5](#myfootnote5) The module also has functions for a parallel implementation of the Nelder-Mead simplex algorithm, as described in Wiswall and Lee (2011). See [here](https://hark.readthedocs.io/en/latest/reference/tools/parallel.html) for full documentation.
+By default, processes in Python are single-threaded, using only a single CPU core. The **_HARK.parallel_** module provides basic tools for using multiple CPU cores simultaneously, with minimal effort. [^4] In particular, it provides the function **_multiThreadCommands_**, which takes two arguments: a list of **_AgentType_**s and a list of commands as strings; each command should be a method of the **_AgentType_**s. The function simply distributes the **_AgentType_**s across threads on different cores and executes each command in order, returning no output (the **_AgentType_**s themselves are changed by running the commands). Equivalent results would be achieved by simply looping over each type and running each method in the list. Indeed, **_HARK.parallel_** also has a function called **_multiThreadCommandsFake_** that does just that, with identical syntax to **_multiThreadCommands_**; multithreading in HARK can thus be easily turned on and off. [^5] The module also has functions for a parallel implementation of the Nelder-Mead simplex algorithm, as described in Wiswall and Lee (2011). See [here](https://hark.readthedocs.io/en/latest/reference/tools/parallel.html) for full documentation.
-4: **_HARK.parallel_** uses two packages that aren't included in the default distribution of Anaconda: **_joblib_** and **_dill_**; see [here](https://hark.readthedocs.io/en/latest/quick-start.html#using-hark-with-anaconda) for instructions on how to install them. [↩](#a4)
+[^4]: **_HARK.parallel_** uses two packages that aren't included in the default distribution of Anaconda: **_joblib_** and **_dill_**; see [here](https://hark.readthedocs.io/en/latest/quick-start.html#using-hark-with-anaconda) for instructions on how to install them.
-5: In the future, **_HARK.parallel_** might be absorbed into **_HARK.core_** and **_HARK.estimation_**, particularly if **_joblib_** and **_dill_** become part of the standard Anaconda distribution. [↩](#a5)
+[^5]: In the future, **_HARK.parallel_** might be absorbed into **_HARK.core_** and **_HARK.estimation_**, particularly if **_joblib_** and **_dill_** become part of the standard Anaconda distribution.
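A hedged usage sketch of the interface described above; the function names follow the text (newer HARK releases expose snake_case equivalents), and default construction of the agents is an assumption:

    # Sketch: distribute string-valued method calls across agents on several cores.
    from HARK.parallel import multiThreadCommands, multiThreadCommandsFake
    from HARK.ConsumptionSaving.ConsIndShockModel import IndShockConsumerType

    agents = [IndShockConsumerType() for _ in range(4)]  # assumes defaults build
    multiThreadCommands(agents, ["solve()"])             # run in parallel
    # multiThreadCommandsFake(agents, ["solve()"])       # same syntax, single core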
### AgentType Class
@@ -102,7 +102,7 @@ A discrete time model in our framework is characterized by a sequence of ''perio
- **_time_inv_**: A list of strings containing all of the variable names that are passed to at least one function in **_solveOnePeriod_** but do _not_ vary across periods. Each of these variables resides in a correspondingly named attribute of the **_AgentType_** instance.
-- **_time_vary_**: A list of strings naming the attributes of this instance that vary across periods. Each of these attributes is a list of period-specific values, which should be of the same length. If the solution method varies across periods, then **_'solveOnePeriod'_** is an element of **_time_vary_**. [6](#myfootnote6)
+- **_time_vary_**: A list of strings naming the attributes of this instance that vary across periods. Each of these attributes is a list of period-specific values, which should be of the same length. If the solution method varies across periods, then **_'solveOnePeriod'_** is an element of **_time_vary_**. [^6]
- **_solution_terminal_**: An object representing the solution to the ''terminal'' period of the model. This might represent a known trivial solution that does not require numeric methods, the solution to some previously solved ''next phase'' of the model, a scrap value function, or an initial guess of the solution to an infinite horizon model.
@@ -116,7 +116,7 @@ A discrete time model in our framework is characterized by a sequence of ''perio
An instance of **_AgentType_** also has the attributes named in **_time_vary_** and **_time_inv_**, and may have other attributes that are not included in either (e.g. values not used in the model solution, but instead to construct objects used in the solution).
-6: **_time_vary_** may include attributes that are never used by a function in **_solveOnePeriod_**. Most saliently, the attribute **_solution_** is time-varying but is not used to solve individual periods. [↩](#b1)
+[^6]: **_time_vary_** may include attributes that are never used by a function in **_solveOnePeriod_**. Most saliently, the attribute **_solution_** is time-varying but is not used to solve individual periods.
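To make the bookkeeping concrete, here is a small inspection sketch (the exact list contents vary by model and HARK version):

    # Sketch: inspect which attributes an AgentType treats as fixed vs. per-period.
    from HARK.ConsumptionSaving.ConsIndShockModel import IndShockConsumerType

    agent = IndShockConsumerType()  # assumes default parameters build
    print(agent.time_inv)   # e.g. ['CRRA', 'Rfree', ...] -- one value for all periods
    print(agent.time_vary)  # e.g. ['LivPrb', 'PermShkStd', ...] -- lists, one entry per period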
#### A Universal Solver
@@ -144,9 +144,9 @@ The attribute **_time_flow_** is **_True_** if variables are listed in ordinary
These methods are invoked to more conveniently access time-varying objects. When a new time-varying attribute is added, its name should be appended to **_time_vary_**, particularly if its values are used in the solution of the model (or is part of the solution itself). For example, the **_solve()_** method automatically adds the string **_'solution'_** to **_time_vary_** if it is not already present. Note that attributes listed in **_time_vary_** _must_ be lists if **_solve()_** or **_timeFlip()_** are used. Some values that could be considered ''time varying'' but are never used to solve the model are more conveniently represented as a **_numpy.array_** object (e.g. the history of a state or control variable from a simulation); because the **_numpy.array_** class does not have a **_reverse()_** method, these attributes should not be listed in **_time_vary_**.
-The base **_AgentType_** is sparsely defined, as most ''real'' methods will be application-specific. Two final methods bear mentioning. First, the **\_**call**\_** method points to **_assignParameters()_**, a convenience method for adding or adjusting attributes (inherited from **_HARKobject_**). This method takes any number of keyword arguments, so that code can be parsimoniously written as, for example, **_AgentInstance(attribute1 = value1, attribute2 = value2)_**. Using Python's dictionary capabilities, many attributes can be conveniently set with minimal code. Second, the **_resetRNG_** method simply resets the **_AgentType_**'s random number generator (as the attribute **_RNG_**) using the value in the attribute **_seed_**. [7](#myfootnote7) This method is useful for (_inter alia_) ensuring that the same underlying sequence of shocks is used for every simulation run when a model is solved or estimated.
+The base **_AgentType_** is sparsely defined, as most ''real'' methods will be application-specific. Two final methods bear mentioning. First, the **\_\_call\_\_** method points to **_assignParameters()_**, a convenience method for adding or adjusting attributes (inherited from **_HARKobject_**). This method takes any number of keyword arguments, so that code can be parsimoniously written as, for example, **_AgentInstance(attribute1 = value1, attribute2 = value2)_**. Using Python's dictionary capabilities, many attributes can be conveniently set with minimal code. Second, the **_resetRNG_** method simply resets the **_AgentType_**'s random number generator (as the attribute **_RNG_**) using the value in the attribute **_seed_**. [^7] This method is useful for (_inter alia_) ensuring that the same underlying sequence of shocks is used for every simulation run when a model is solved or estimated.
-7: Every instance of **_AgentType_** is created with a random number generator as an instance of the class **_numpy.random.RandomState_**, with a default **_seed_** of zero. [↩](#b2)
+[^7]: Every instance of **_AgentType_** is created with a random number generator as an instance of the class **_numpy.random.RandomState_**, with a default **_seed_** of zero.
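A short sketch of both conveniences (the keyword names are ordinary model parameters; newer HARK releases use snake_case names such as assign_parameters):

    # Sketch: __call__ forwards keyword arguments to assignParameters().
    from HARK.ConsumptionSaving.ConsIndShockModel import IndShockConsumerType

    agent = IndShockConsumerType()  # assumes default parameters build
    agent(CRRA=2.5, DiscFac=0.95)   # set two attributes in one call
    agent.resetRNG()                # reseed agent.RNG from agent.seed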
### Market Class
@@ -174,7 +174,7 @@ This procedure is conducted by the **_makeHistory_** method of **_Market_** as a
#### Attributes of a Market
-To specify a complete instance of **_Market_**, the user should give it the following attributes: [8](#myfootnote8)
+To specify a complete instance of **_Market_**, the user should give it the following attributes: [^8]
- **_agents_**: A list of **_AgentType_**s, representing the agents in the market. Each element in **_agents_** represents an _ex-ante_ heterogeneous type; each type could have many _ex-post_ heterogeneous agents.
@@ -204,7 +204,7 @@ Further, each **_AgentType_** in **_agents_** must have two methods not necessar
When solving macroeconomic models in HARK, the user should also define classes to represent the output from the aggregate market process in **_millRule_** and for the model-specific dynamic rule. The latter should have a **_distance_** method to test for solution convergence; if the class inherits from **_HARKobject_**, the user need only list relevant attributes in **_distance_criteria_**.
-8: For some purposes, it might be useful to specify a subclass of **_Market_**, defining **_millRule_** and/or **_calcDynamics_** as methods rather than functions. [↩](#c1)
+[^8]: For some purposes, it might be useful to specify a subclass of **_Market_**, defining **_millRule_** and/or **_calcDynamics_** as methods rather than functions.
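A skeleton of footnote [^8]'s suggestion; the method names come from the text above, but the signatures and constructor arguments are assumptions rather than HARK's exact API:

    # Sketch: a Market subclass that supplies millRule/calcDynamics as methods.
    from HARK.core import Market

    class ToyEconomy(Market):
        def millRule(self, *agent_outcomes):
            """Aggregate individual outcomes into market-level objects."""
            raise NotImplementedError

        def calcDynamics(self, *tracked_history):
            """Re-estimate the dynamic rule from the simulated history."""
            raise NotImplementedError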
## DemARK
diff --git a/Documentation/CHANGELOG.md b/Documentation/CHANGELOG.md
index e4e57967c..b867ec8dc 100644
--- a/Documentation/CHANGELOG.md
+++ b/Documentation/CHANGELOG.md
@@ -46,7 +46,7 @@ Release Date: February, 16, 2023
- Add methods to non stochastically simulate an economy by computing transition matrices. Functions to compute transition matrices and ergodic distribution have been added [#1155](https://github.com/econ-ark/HARK/pull/1155).
- Fixes a bug that causes `t_age` and `t_cycle` to get out of sync when reading pre-computed mortality. [#1181](https://github.com/econ-ark/HARK/pull/1181)
- Adds Methods to calculate Heterogenous Agent Jacobian matrices. [#1185](https://github.com/econ-ark/HARK/pull/1185)
-- Enhances `combine_indep_dstns` to work with labeled distributions (`DiscreteDistributionLabeled`). [#1191](htttps://github.com/econ-ark/HARK/pull/1191)
+- Enhances `combine_indep_dstns` to work with labeled distributions (`DiscreteDistributionLabeled`). [#1191](https://github.com/econ-ark/HARK/pull/1191)
- Updates the `numpy` random generator from `RandomState` to `Generator`. [#1193](https://github.com/econ-ark/HARK/pull/1193)
- Turns the income and income+return distributions into `DiscreteDistributionLabeled` objects. [#1189](https://github.com/econ-ark/HARK/pull/1189)
- Creates `UtilityFuncCRRA` which is an object oriented utility function with a coefficient of constant relative risk aversion and includes derivatives and inverses. Also creates `UtilityFuncCobbDouglas`, `UtilityFuncCobbDouglasCRRA`, and `UtilityFuncConstElastSubs`. [#1168](https://github.com/econ-ark/HARK/pull/1168)
diff --git a/Documentation/conf.py b/Documentation/conf.py
index 27af3addd..f8102018e 100644
--- a/Documentation/conf.py
+++ b/Documentation/conf.py
@@ -1,3 +1,17 @@
+import warnings
+
+try:
+ import numba
+except ImportError:
+ pass
+else:
+ warnings.filterwarnings("ignore",
+ message="numba.generated_jit.*",
+ category=numba.NumbaDeprecationWarning)
+ warnings.filterwarnings("ignore",
+ message=".* 'nopython' .*",
+ category=numba.NumbaDeprecationWarning)
+
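One detail worth noting about the filters above: `message` is a regular expression matched against the start of the warning text. A self-contained check of the pattern, with a plain `DeprecationWarning` standing in for numba's class:

    # Sketch: confirm the message pattern suppresses a matching warning.
    import warnings

    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        warnings.filterwarnings("ignore", message="numba.generated_jit.*",
                                category=DeprecationWarning)
        warnings.warn("numba.generated_jit is deprecated", DeprecationWarning)
    assert not caught  # the filter matched, so nothing was recorded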
# Project information
project = "HARK"
copyright = "2020, Econ-ARK team"
@@ -17,13 +31,14 @@
"sphinx.ext.napoleon",
"sphinx.ext.todo",
"nbsphinx",
- "recommonmark",
+ "myst_parser",
]
exclude_patterns = [
"_build",
"Thumbs.db",
".DS_Store",
+ "NARK",
]
language = "en"
@@ -57,5 +72,8 @@
# sphinx.ext.autosummary configuration
autosummary_generate = True
+# sphinx.ext.napoleon configuration
+napoleon_use_ivar = True # solves duplicate object description warning
+
# nbsphinx configuration
nbsphinx_execute = "never" # This is currently not working
diff --git a/Documentation/contributing/CONTRIBUTING.md b/Documentation/contributing/CONTRIBUTING.md
index e247fe0c4..907108529 100644
--- a/Documentation/contributing/CONTRIBUTING.md
+++ b/Documentation/contributing/CONTRIBUTING.md
@@ -4,7 +4,7 @@
- [Contributing Guide](#contributing-guide)
- [Developer's Certificate of Origin 1.1](#developer-s-certificate-of-origin-1-1)
-## [Code of Conduct](./doc/guides/contributing/coc.md)
+## Code of Conduct
The Econ-ARK project has a
[Code of Conduct](https://github.com/econ-ark/HARK/blob/master/.github/CODE_OF_CONDUCT.md)
diff --git a/Documentation/contributing/Installation_instruction.md b/Documentation/contributing/Installation_instruction.md
index ff300108a..8e504fa71 100644
--- a/Documentation/contributing/Installation_instruction.md
+++ b/Documentation/contributing/Installation_instruction.md
@@ -173,7 +173,7 @@ If you want to make changes or contributions (yay!) to HARK, you'll need to have
1. Navigate to wherever you want to put the repository and type `git clone git@github.com:econ-ark/HARK.git` ([more details here](https://git-scm.com/documentation)). If you get a permission denied error, you may need to set up SSH for GitHub, or you can clone using HTTPS: `git clone https://github.com/econ-ark/HARK.git`.
-2. Then, create and activate a [virtual environment](<[virtualenv]((https://virtualenv.pypa.io/en/latest/))>).
+2. Then, create and activate a [virtual environment](https://virtualenv.pypa.io/en/latest/).
Install virtualenv if you need to and then type:
diff --git a/Documentation/index.rst b/Documentation/index.rst
index 520ccca9a..982be958d 100644
--- a/Documentation/index.rst
+++ b/Documentation/index.rst
@@ -21,6 +21,7 @@ you might want to look at the `DemARK
quick-start
ARKitecture
contributing/CONTRIBUTING.md
+ contributing/Installation_instruction.md
reference/index
CHANGELOG
license
@@ -36,7 +37,7 @@ you might want to look at the `DemARK
example_notebooks/GenIncProcessModel.ipynb
example_notebooks/LifecycleModel.ipynb
example_notebooks/HowWeSolveIndShockConsumerType.ipynb
- example_notebooks/Journey_1_PhD.ipynb
+ example_notebooks/Journey-PhD.ipynb
Indices and tables
==================
diff --git a/Documentation/instructions.txt b/Documentation/instructions.txt
deleted file mode 100644
index 1478d4812..000000000
--- a/Documentation/instructions.txt
+++ /dev/null
@@ -1,305 +0,0 @@
-Here are the steps I took to set up and run a Sphinx instance. Tutorials and instruction sets I personally found very useful are cataloged at the end of this instruction set.
-
-You will complete the following steps:
-
-- Install Sphinx
-- Set up Sphinx for your particular project:
- - in your project, use "quickstart" to create the required Sphinx documentation infrastructure
- - edit the project's Sphinx configuration file ("conf.py") to ensure preferred options are used
- - edit the project's main index file ("index.rst") to direct Sphinx to document (or auto-document) the correct sections of your project
-- Edit code files to ensure that Sphinx runs correctly:
- - *because Sphinx runs all files to extract docs*:
- - ensure that appropriate "script" code calls are wrapped in "if __name__ == "__main__" blocks
- - ensure that any hardcoded filepaths instead use appropriate sys/os calls to set up pathnames
- - confirm that the appropriate document string structure is in all files that
-- Run Sphinx and examine output
-
-
-We'll discuss each of these steps in turn below.
-
-
-## Install Sphinx
-
-This is wonderfully simple if you have anaconda:
-
- $ conda update conda
- $ conda install sphinx
- $ conda install numpydoc
-
-This should install the most recent versions of these tools. We will use
-numpydoc to make nice looking documentation.
-
-## Set up Sphinx for your particular project
-
-The first step is running a "quickstart" program which will set up the sphinx
-infrastructure for your project. Convention seems to be to use a "doc" directory,
-which quickstart will create for you. (If you already have a "doc" directory,
-simply create a directory with a different name; name isn't important.)
-
- $ cd ~/workspace/HARK/
- $ sphinx-quickstart doc
-
-This will create a "doc" directory there and launch the quick-start command-line
-interface. (Use "sphinx-doc" or some variation on "doc" if you already have a
-"doc" directory that you want to use for other things.)
-
-You will be ginve a lot of options, here are the ones I use to set up my Sphinx;
-empty spots after the colon on each line indicate [default choice] selected:
-
- > Separate source and build directories (y/n) [n]:
- > Name prefix for templates and static dir [_ ]:
- > Project name: HARK
- > Author name(s): Christopher D. Carroll, Alexander Kaufman, David C. Low, Nathan M. Palmer, Matthew N. White
- > Project version: 0.9
- > Project release [0.9]:
- > Project language [en]:
- > Source file suffix [.rst]:
- > Name of your master document (without suffix) [index]:
- > Do you want to use the epub builder (y/n) [n]:
- > autodoc: automatically insert docstrings from modules (y/n) [n]: y
- > doctest: automatically test code snippets in doctest blocks (y/n) [n]: y
- > intersphinx: link between Sphinx documentation of different projects (y/n) [n]: y
- > todo: write "todo" entries that can be shown or hidden on build (y/n) [n]: y
- > coverage: checks for documentation coverage (y/n) [n]: y
- > imgmath: include math, rendered as PNG or SVG images (y/n) [n]: n
- > mathjax: include math, rendered in the browser by MathJax (y/n) [n]: y
- > ifconfig: conditional inclusion of content based on config values (y/n) [n]:
- > viewcode: include links to the source code of documented Python objects (y/n) [n]: n
- > githubpages: create .nojekyll file to publish the document on GitHub pages (y/n) [n]: n
- > Create Makefile? (y/n) [y]: y
- > Create Windows command file? (y/n) [y]: y
-
-These options are used by quickstart to create the files and directories under
-
- ~/workspace/HARK/doc/
-
-which will run Sphinx. If you navigate to the above directory you should see:
-
- _ templates/
- _ build/
- _ static/
- index.rst
- conf.py
- make.bat
- Makefile
-
-The first three are directories which will contain output of running
-autodocumentation. Eventually you will look in _ build/html/index.html to find
-the "root" of the html documentation after we run the autodocs. This index.html
-will be intimately connected to the "index.rst" file as described below.
-
-The index.rst and conf.py files are where we control the setup for the output.
-
-- conf.py:
- - controls how Sphinx will run -- this is the configuration file for Sphinx and has largely been populated by the quickstart options we selected above. We'll add a couple things in a moment.
-- index.rst:
- - this controls how Sphinx will arrange the "root" of the html documentation. Essentially we are building the "table of contents" here (we will actually explicitly include a ToC command in here).
- - fun fact: if you make multiple "index.rst" files, Sphinx will dutifully create a matching "index.html" files for each ".rst" file. For example if you create three index files titled "index.rst," "index-manual-toc.rst," and "index-auto-toc.rst," after running sphinx you will get three matching index.html files under _ build/html called: "index.html," "index-manual-toc.html," and "index-auto-toc.html."
- - we can use this to try different types of index file.
-
-Now let's edit these two files.
-
-### Edit conf.py
-
-Here are useful elements to add to the conf.py file. I include the previous line in the conf.py default file so you can readily find these in the file itself:
-
-- Add a direct call to "abspath" at the beginning, *but* note that we need to add ".." instead of "." as suggested in the docs, because this file is one below the root of the code we want to document:
-
- # If extensions (or modules to document with autodoc) are in another directory,
- # add these directories to sys.path here. If the directory is relative to the
- # documentation root, use os.path.abspath to make it absolute, like shown here.
- #sys.path.insert(0, os.path.abspath('.'))
- sys.path.insert(0, os.path.abspath('..')) # <- Careful here to add ".."
-
-- Add numpydoc to the extensions -- most should be populated from the quickstart:
-
-
-
- # Add any Sphinx extension module names here, as strings. They can be
- # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
- # ones.
- extensions = [
- 'sphinx.ext.autodoc',
- 'sphinx.ext.doctest',
- 'sphinx.ext.autosummary',
- 'sphinx.ext.intersphinx',
- 'sphinx.ext.todo',
- 'sphinx.ext.coverage',
- 'sphinx.ext.mathjax',
- 'numpydoc',
- ]
- # **Be sure** to add the numpydoc file!
-
-
-- Just below extensions I add these two flags to automatically document the members of modules we want to document. (See more [here](http://www.sphinx-doc.org/en/stable/ext/autodoc.html#confval-autodoc_default_flags), and [here](http://www.sphinx-doc.org/en/stable/ext/autosummary.html).)
-
- autodoc_default_flags = ['members']
- autosummary_generate = True # Will generate some stubs. Can comment out to see difference.
-
-- Finally, choose the style we want to use. I think the "classic" style works better than the default, minimal, "alabaster" style, but there are many to choose from:
-
- # The theme to use for HTML and HTML Help pages. See the documentation for
- # a list of builtin themes.
- html_theme = 'classic'
- # See much more here: http://www.sphinx-doc.org/en/stable/theming.html
- # options:
- # - alabaster
- # - sphinx_rtd_theme # read the docs theme
- # - classic
- # - sphinxdoc
- # - scrolls
- # - agogo
- # - traditional
- # - nature
- # - haiku
- # - pyramid
- # - bizstyle
-
-That should do it for the conf.py file. Now onto the index.rst file.
-
-
-### Quick-run Sphinx
-
-Setting up conf.py is all that is needed -- we can quickly run Sphinx to see what the output looks like now before further direction.
-
-Simply navigate to the "doc" file (should already be there but just in case you are starting new) and make html:
-
- $ cd ~/workspace/HARK/doc/
- $ make html
-
-This will run Sphinx. You can find the very minimal output in "~/workspace/HARK/doc/_build/html/index.hml"
-_Note:_ There will be brely anything in this file -- now time to populate it by editing index.rst.
-
-
-### Run Sphinx
-
-Just as before we run SPhinx again. If the above two "Import
-
-Now we get to business. There are a large number of ways to build this file out.
-
-We will use a simple one. Before edit, file contents should look like:
-
- # [some filler at beginning]
-
- Welcome to HARK's documentation!
- ================================
-
- Contents:
-
- .. toctree::
- :maxdepth: 2
-
-
-
- Indices and tables
- ==================
-
- * :ref:`genindex`
- * :ref:`modindex`
- * :ref:`search`
-
-
-We will add the following -- see between the "***... START NEW ...****" and "***... END NEW ...****" lines. Note: don't include those "START NEW" and "END NEW" lines:
-
-
-
- Welcome to HARK's documentation!
- ================================
-
- Contents:
-
- .. toctree::
- :maxdepth: 2
-
-
- ********************* START NEW ********************** [Delete this line]
-
- .. autosummary::
- :toctree: generated
-
- ConsumptionSavingModel.ConsumptionSavingModel
- ConsumptionSavingModel.SolvingMicroDSOPs
- ConsumptionSavingModel.ConsPrefShockModel
-
- *********************** END NEW ********************** [Delete this line]
-
-
- Indices and tables
- ==================
-
- * :ref:`genindex`
- * :ref:`modindex`
- * :ref:`search`
-
-
-This will tell Sphinx to go up one level, find the "ConsumptionSavingModel" directory, and automatically generate documentation for the modules (code files) ConsumptionSavingModel, SolvingMicroDSOPs, and ConsPrefShockModel. The ":members:" directive tells Sphinx to search all member functions in those modules (code files) for documentation.
-
-**Important code note:** Sphinx will run code that it documents. This has two important implications:
-
-- any code that you *don't* want to run automatically in those modules must be wrapped in an "if __name__ == '__main__'" statement.
-- if there are any local pathnames referenced in those files, you will need to use the "os.path.abspath()" function from the os file to find the correct path. This is particularly important with reading in data for estimation.
-
-
-### Example of Code Docs
-
-
-Here is a very simple code example, taken from [this tutorial](https://codeandchaos.wordpress.com/2012/08/09/sphinx-and-numpydoc/), for a basic "foo" example:
-
-
- def foo(var1, var2, long_var_name='hi')
- """This function does something.
-
- Parameters
- ----------
- var1 : array_like
- This is a type.
- var2 : int
- This is another var.
- Long_variable_name : {'hi', 'ho'}, optional
- Choices in brackets, default first when optional.
-
- Returns
- -------
- describe : type
- Explanation
- """
- print var1, var2, long_var_name
-
-
-As the tutorial notes, this will produce docs that look like Numpy Docs (an [example](http://docs.scipy.org/doc/numpy-1.6.0/reference/generated/numpy.min_scalar_type.html#numpy.min_scalar_type) here).
-
-This is good, because the default Sphinx documentation style is pretty unpleasant to look at.
-
-
-
-### Run Sphinx
-
-Just as before we run SPhinx again. If the above two "Important Code Notes" (about "if name==main" and "os.path.abspath()") are not a problem, this should run fine:
-
-
-$ cd ~/workspace/HARK/doc/
-$ make html
-
-This will run Sphinx. You can find the very minimal output in "~/workspace/HARK/doc/_build/html/index.hml"
-_Note:_ There will be barely anything in this file -- now time to populate it by editing index.rst.
-
-
-
-# Useful links
-
-Some extremely useful sources:
-
-- One of the authors, useful presentation: http://www.slideshare.net/shimizukawa/sphinx-autodoc-automated-api-documentation-pyconapac2015
- - https://www.youtube.com/watch?v=mdtxHjH2wog
-
-- High-level, friendly overview (note install approach is deprecated):
- - https://codeandchaos.wordpress.com/2012/07/30/sphinx-autodoc-tutorial-for-dummies/
- - https://codeandchaos.wordpress.com/2012/08/09/sphinx-and-numpydoc/
- - http://gisellezeno.com/tutorials/sphinx-for-python-documentation.html
- - http://thomas-cokelaer.info/tutorials/sphinx/docstring_python.html
-
-- Tutorial:
- - http://sphinx-tutorial.readthedocs.io/
- - http://matplotlib.org/sampledoc/index.html
- - very nice for details of sphinx setup in quickly digestible, reproducible format.
- - see here for nice example of including "welcome text."
diff --git a/Documentation/introduction.md b/Documentation/introduction.md
index 966f324df..611fe8739 100644
--- a/Documentation/introduction.md
+++ b/Documentation/introduction.md
@@ -8,8 +8,8 @@ Learning by doing has value, but only within limits. We do not require young dri
In recent years, considerable progress has been made in addressing these kinds of problems in many areas of economic modeling. Macroeconomists using representative agent models can send Dynare model files to each other; reduced form econometricians can choose from a host of econometric packages. But modelers whose questions require explicit structural modeling which involve nontrivial differences in agents (households, firms, etc.) that cannot simply be aggregated away are mostly still stuck in the bad old days.
-The ultimate goal of the HARK project is to fix these problems. Specifically, our aim is to produce an open source repository of highly modular, easily interoperable code for solving, simulating, and estimating dynamic economic models with _heterogeneous agents_. [1](#footnote_intro1) Further, we seek to establish (with input from the community) standards for the description and specification of objects like discrete approximations to continuous distributions and interpolated function approximations, so that numeric methods can be quickly swapped without ugly ''patching.''
+The ultimate goal of the HARK project is to fix these problems. Specifically, our aim is to produce an open source repository of highly modular, easily interoperable code for solving, simulating, and estimating dynamic economic models with _heterogeneous agents_.[^1] Further, we seek to establish (with input from the community) standards for the description and specification of objects like discrete approximations to continuous distributions and interpolated function approximations, so that numeric methods can be quickly swapped without ugly ''patching.''
We hope that HARK will make it much easier and faster for researchers to develop solution and estimation methods for new models. The open source nature of HARK will make it easier for other researchers to audit and verify new models and methods, and to collaborate on correcting deficiencies when found. As HARK expands to include more canonical models and more tools and utilities, we can all spend less time managing numerical minutiae and more time fretting about identification arguments and data accuracy.
-1: By ''heterogeneous,'' we mean that agents might differ before anything in the model has ''happened'' (_ex-ante_ heterogeneity); and agents might experience different stochastic events during the model (_ex-post_ heterogeneity). [↩](#intro1)
+[^1]: By ''heterogeneous,'' we mean that agents might differ before anything in the model has ''happened'' (_ex-ante_ heterogeneity); and agents might experience different stochastic events during the model (_ex-post_ heterogeneity).
diff --git a/Documentation/quick-start.md b/Documentation/quick-start.md
index 3219df321..f1e3ea538 100644
--- a/Documentation/quick-start.md
+++ b/Documentation/quick-start.md
@@ -79,11 +79,11 @@ We have a set of 30-second [Elevator Spiels](https://github.com/econ-ark/PARK/bl
The most broadly applicable advice is to go to [Econ-ARK](https://econ-ark.org) and click on "Notebooks", and choose [A Gentle Introduction to HARK](https://hark.readthedocs.io/en/latest/example_notebooks/Gentle-Intro-To-HARK.html) which will launch as a [jupyter notebook](https://jupyter.org/).
-#### [For people with a technical/scientific/computing background but little economics background](https://github.com/econ-ark/PARK/blob/master/Elevator-Spiels.md#for-people-with-a-technicalscientificcomputing-background-but-no-economics-background)
+### [For people with a technical/scientific/computing background but little economics background](https://github.com/econ-ark/PARK/blob/master/Elevator-Spiels.md#for-people-with-a-technicalscientificcomputing-background-but-no-economics-background)
- A good starting point is [A Gentle Introduction to HARK](https://hark.readthedocs.io/en/latest/example_notebooks/Gentle-Intro-To-HARK.html) which provides a light economic intuition.
-#### [For economists who have done some structural modeling](https://github.com/econ-ark/PARK/blob/master/Elevator-Spiels.md#for-economists-who-have-done-some-structural-modeling)
+### [For economists who have done some structural modeling](https://github.com/econ-ark/PARK/blob/master/Elevator-Spiels.md#for-economists-who-have-done-some-structural-modeling)
- A full replication of the [Iskhakov, Jørgensen, Rust, and Schjerning](https://onlinelibrary.wiley.com/doi/abs/10.3982/QE643) paper for solving the discrete-continuous retirement saving problem
@@ -91,13 +91,13 @@ The most broadly applicable advice is to go to [Econ-ARK](https://econ-ark.org)
- [Structural-Estimates-From-Empirical-MPCs](https://github.com/econ-ark/DemARK/blob/master/notebooks/Structural-Estimates-From-Empirical-MPCs-Fagereng-et-al.ipynb) is an example of the use of the toolkit in a discussion of a well-known paper. (Yes, it is easy enough to use that you can estimate a structural model on somebody else's data in the limited time available for writing a discussion)
-#### [For economists who have not yet done any structural modeling but might be persuadable to start](https://github.com/econ-ark/PARK/blob/master/Elevator-Spiels.md#for-economists-who-have-not-yet-done-any-structural-modeling-but-might-be-persuadable-to-start)
+### [For economists who have not yet done any structural modeling but might be persuadable to start](https://github.com/econ-ark/PARK/blob/master/Elevator-Spiels.md#for-economists-who-have-not-yet-done-any-structural-modeling-but-might-be-persuadable-to-start)
- Start with [A Gentle Introduction to HARK](https://hark.readthedocs.io/en/latest/example_notebooks/Gentle-Intro-To-HARK.html) to get your feet wet
- A simple indirect inference/simulated method of moments structural estimation along the lines of Gourinchas and Parker's 2002 Econometrica paper or Cagetti's 2003 paper is performed by the [SolvingMicroDSOPs](https://github.com/econ-ark/SolvingMicroDSOPs/) [REMARK](https://github.com/econ-ark/REMARK); this code implements the solution methods described in the corresponding section of [these lecture notes](https://llorracc.github.io/SolvingMicroDSOPs/).
-#### [For Other Developers of Software for Computational Economics](https://github.com/econ-ark/PARK/blob/master/Elevator-Spiels.md#for-other-developers-of-software-for-computational-economics)
+### [For Other Developers of Software for Computational Economics](https://github.com/econ-ark/PARK/blob/master/Elevator-Spiels.md#for-other-developers-of-software-for-computational-economics)
- Our workhorse module is [ConsIndShockModel.py](https://github.com/econ-ark/HARK/blob/master/HARK/ConsumptionSaving/ConsIndShockModel.py) which includes the IndShockConsumerType. A short explanation about the Agent Type can be found [here](https://hark.readthedocs.io/en/latest/example_notebooks/IndShockConsumerType.html) and an introduction how it is solved [here](https://hark.readthedocs.io/en/latest/example_notebooks/HowWeSolveIndShockConsumerType.html).
@@ -105,11 +105,11 @@ The most broadly applicable advice is to go to [Econ-ARK](https://econ-ark.org)
If you want to make changes or contributions to HARK, you'll need to have access to the source files. Installing HARK via `pip install econ-ark` (at the command line, or inside Spyder) makes it hard to access those files (and it's a bad idea to mess with the original code anyway because you'll likely forget what changes you made). If you are adept at GitHub, you can [fork](https://help.github.com/en/articles/fork-a-repo) the repo. If you are less experienced, you should download a personal copy of HARK again using `git clone` (see above) or the GitHub Desktop app.
-#### Clone HARK
+### Clone HARK
Navigate to wherever you want to put the repository and type `git clone git@github.com:econ-ark/HARK.git` ([more details here](https://git-scm.com/documentation)). If you get a permission denied error, you may need to setup SSH for GitHub, or you can clone using HTTPS: `git clone https://github.com/econ-ark/HARK.git`.
-#### (Optionally) Create a virtual environment
+### (Optionally) Create a virtual environment
If you are familiar with [virtual environments](https://virtualenv.pypa.io/en/latest/), you can optionally create and activate a virtual environment which will isolate the econ-ark specific tools from the rest of your computer.
@@ -131,11 +131,11 @@ econ-ark\\Scripts\\activate.bat
Once the virtualenv is activated, you may see `(econ-ark)` in your command prompt (depending on how your machine is configured)
-#### Install requirements
+### Install requirements
Make sure to change to HARK directory, and install HARK's requirements into the virtual environment with `pip install -r requirements.txt`.
-#### Test your installation
+### Test your installation
To check that everything has been set up correctly, run HARK's tests with `python -m unittest`.
diff --git a/Documentation/reference/tools/econforgeinterp.rst b/Documentation/reference/tools/econforgeinterp.rst
index 3c0657562..9f0b7aa25 100644
--- a/Documentation/reference/tools/econforgeinterp.rst
+++ b/Documentation/reference/tools/econforgeinterp.rst
@@ -1,5 +1,5 @@
Econforgeinterp
--------------
+---------------
.. automodule:: HARK.econforgeinterp
:members:
diff --git a/Documentation/reference/tools/index.rst b/Documentation/reference/tools/index.rst
index 660c606cd..71348b7db 100644
--- a/Documentation/reference/tools/index.rst
+++ b/Documentation/reference/tools/index.rst
@@ -1,5 +1,5 @@
Tools
-------
+-----
.. toctree::
:maxdepth: 3
@@ -12,7 +12,7 @@ Tools
frame
helpers
interpolation
- numba
+ numba_tools
parallel
rewards
simulation
diff --git a/Documentation/simple-steps-getting-sphinx-working.txt b/Documentation/simple-steps-getting-sphinx-working.txt
deleted file mode 100644
index 640fd9892..000000000
--- a/Documentation/simple-steps-getting-sphinx-working.txt
+++ /dev/null
@@ -1,184 +0,0 @@
-# Getting sphinx working
-
-Basic steps to get Sphinx running:
-
-1. Download and install Sphinx, numpydoc:
- $ conda update conda
- $ conda install sphinx
- $ conda install numpydoc
-
-2. create "doc" folder in your code directory, navigate there. For purpose of illustration will assume project is in ~/workspace/HARK
- $ cd ~/workspace/HARK
- $ mkdir doc
- $ cd doc
-
-3. Run Sphinx quickstart and select "autodoc" and others you would like:
- $ sphinx-quickstart
- I choose these:
- > autodoc: automatically insert docstrings from modules (y/n) [n]: y
- > doctest: automatically test code snippets in doctest blocks (y/n) [n]: y
- > intersphinx: link between Sphinx documentation of different projects (y/n) [n]: y
- > todo: write "todo" entries that can be shown or hidden on build (y/n) [n]: y
- > coverage: checks for documentation coverage (y/n) [n]: y
- > imgmath: include math, rendered as PNG or SVG images (y/n) [n]: n
- > mathjax: include math, rendered in the browser by MathJax (y/n) [n]: y
- > ifconfig: conditional inclusion of content based on config values (y/n) [n]: n
- > viewcode: include links to the source code of documented Python objects (y/n) [n]: n
- > githubpages: create .nojekyll file to publish the document on GitHub pages (y/n) [n]: n
-
-4. Open conf.py file with favorite text editor:
-
- $ cd ~/workspace/HARK/doc # if not already here
- $ atom conf.py &
-
-5. Changes to make:
- - find this line and add ".." instead of ".":
-
- sys.path.insert(0, os.path.abspath('..'))
-
- - ensure autosummary and numpydoc are included and add two lines for autodoc and autosummary below:
-
- extensions = [ ... # Leave whatever options were auto-included
- 'sphinx.ext.autosummary',
- 'numpydoc',
- ]
- autodoc_default_flags = ['members'] # must add outside ']' bracket
- autosummary_generate = True
-
- - Change theme to your favorite; more here: /home/npalmer/workspace/HARK-DOCS/HARK-docs-versions/doc-v2.0/conf.py
- html_theme = 'classic' # 'alabaster' is default
- # Others: sphinx_rtd_theme, sphinxdoc, scrolls, agogo, traditional, nature, haiku, pyramid, bizstyle
-
-6. use sphinx-apidoc to create the .rst files for each module to document -- otherwise must write each by hand, which is what we are avoiding by using Sphinx:
- - use sphinx-apidoc:
- $ cd ~/workspace/HARK/doc # if not already here
- $ sphinx-apidoc -f -o ./ ../
- - NOTE: syntax is:
- * '-f' force overwrite of html files
- * '-o' Required: where to find the *source .rst files for sphinx*; we are using 'doc' directly
- * './' target for '-o'
- * '../' where to look for the Python files to pull code out of
- - NOTE that when we want to create these for other files, such as ConsumptionSavingModel, we will need to indicate that here as well. Run this to generate .rst files for ConsumptionSavingModel:
- $ cd ~/workspace/HARK/doc # if not already here
- $ sphinx-apidoc -f -o ./ ../
- $ sphinx-apidoc -f -o ./ ../ConsumptionSavingModel/
- $ sphinx-apidoc -f -o ./ ../cstwMPC/
- $ sphinx-apidoc -f -o ./ ../SolvingMicroDSOPs/
- $ sphinx-apidoc -f -o ./ ../FashionVictim/
-
-
-
-7. Edit the main "index.rst" file to tell it explicitly what modules to include:
- $ cd ~/workspace/HARK/doc # if not already here
- $ atom index.rst &
-
-8. Insert the following "autosummary" text between 'Contents' and 'Indices and tables'
- **Very important note:** .rst files use indentions to designate collections; the modules listed under ".. autosummary::" (such as HARKutilities and HARKsimulation) **must** line up with the first colon ":" before ":toctree: generated"
-
-
-
-**EXAMPLE:**
-
---------------------------------------------------------------------------------
-
-Welcome to HARK's documentation!
-================================
-
-Contents:
-
-.. toctree::
- :maxdepth: 2
-
-********************* START NEW ********************** [Delete this line]
-.. autosummary::
- :toctree: generated
-
- HARKutilities
- HARKsimulation
- HARKparallel
- HARKinterpolation
- HARKestimation
- HARKcore
-
-*********************** END NEW ********************** [Delete this line]
-
-Indices and tables
-==================
-
-* :ref:`genindex`
-* :ref:`modindex`
-* :ref:`search`
-
---------------------------------------------------------------------------------
-
-9. The index.rst module will now automatically generate summaries of each of the indicated modules. *NOTE* that if we want to include others that are inside other folders, we will need to indicate the path as something like 'ConsumptionSavingModel.ConsumptionSavingModel'
-
-**Before** adding ConsumptionSavingModel.ConsumptionSavingModel , we will need to make sure that any hardcoded pathnames in the code are replaced with appropriate programatically determined pathnames. **TODO** -- still in progress
-
-
-10. Add a Very Brief Welcome Text -- **TODO** -- this can be raw text directly above "Contents" or something as included in a seperate .rst file we reference before "Contents"
-
-10.5. NOTE: If you do not have joblib installed, Sphinx will fail when it attempts to run HARKParallel. If you do not want to install joblib, remove HARKParallel from the index.rst file. To install: conda install joblib should install joblib to your anaconda package.
-
-
-11. Run
-
- $ cd ~/workspace/HARK/doc # if not already here
- $ make html
-
-12. You'll get a billion warnings, I think mostly because some things are missing documentations. Regardless,
- open this and observe the nice-looking API/docs. Be sure to try the search and index features!
-
- $ ~/workspace/HARK/doc/_build/html/index.html
-
-
-13. To add individual module to the automatic API and auto-summary generator:
- - Add the location of the module to the top of conf.py: sys.path.insert(0, os.path.abspath('../ConsumptionSavingModel/'))
- - Add individual modules to the index.rst: "ConsIndShockModel" etc.
- - Be sure to include the correct relative locations for each file: "sys.path.insert(0, os.path.abspath('../'))"
- - and be sure that file names locally (for file import) referenced using
-
-
-
-14. Update: creating of these docs for the website was accomplished following this tutorial: https://daler.github.io/sphinxdoc-test/includeme.html This approach is nice because it allows one to maintain the code physically in one location (simplifying creation of the docs) and the html output in another location. When all is done in the same physical directory, there is extensive switching between branches to accomplish the docs update.
-Important steps include:
- - in Makefile, appropriately changing the relative path to BUILDDIR
- - NOTE: this may be particularly important for changing the "windows make file" as well, however I do not have a windows machine to test this on.
- - Note: I did not use any of the "pdf manual" options.
- - adding the .nojekyll file to the appropriate place
-
-
-
-15. Steps to update docs and post:
- $ sphinx-apidoc -f -o ./ ../Module-name-to-document # recall, also need to insert in index.rst
- $ make html
- $ cd ../../HARK-docs
- $ git branch # confirm on gh-pages branch
- $ git push origin master
-
-
-
-
-
-_Useful references:_
-
-
-- One of the authors, useful presentation: http://www.slideshare.net/shimizukawa/sphinx-autodoc-automated-api-documentation-pyconapac2015
- - https://www.youtube.com/watch?v=mdtxHjH2wog
-
-- High-level, friendly overview (note install approach is deprecated):
- - https://codeandchaos.wordpress.com/2012/07/30/sphinx-autodoc-tutorial-for-dummies/
- - https://codeandchaos.wordpress.com/2012/08/09/sphinx-and-numpydoc/
- - http://gisellezeno.com/tutorials/sphinx-for-python-documentation.html
- - http://thomas-cokelaer.info/tutorials/sphinx/docstring_python.html
- - https://pythonhosted.org/an_example_pypi_project/sphinx.html#full-code-example
- - see for example code of automodule, autoclass, autofunction
-
-- Tutorial:
- - http://sphinx-tutorial.readthedocs.io/
- - http://matplotlib.org/sampledoc/index.html
- - very nice for details of sphinx setup in quickly digestible, reproducible format.
- - see here for nice example of including "welcome text."
-
-- Related:
- - http://mpastell.com/pweave/
diff --git a/HARK/ConsumptionSaving/ConsIndShockModel.py b/HARK/ConsumptionSaving/ConsIndShockModel.py
index 9d0c4e0bc..10cec6c99 100644
--- a/HARK/ConsumptionSaving/ConsIndShockModel.py
+++ b/HARK/ConsumptionSaving/ConsIndShockModel.py
@@ -279,8 +279,8 @@ def def_value_funcs(self):
Defines the value and marginal value functions for this period.
Uses the fact that for a perfect foresight CRRA utility problem,
if the MPC in period t is :math:`\\kappa_{t}`, and relative risk
- aversion :math:`\rho`, then the inverse value vFuncNvrs has a
- constant slope of :math:`\\kappa_{t}^{-\rho/(1-\rho)}` and
+ aversion :math:`\\rho`, then the inverse value vFuncNvrs has a
+ constant slope of :math:`\\kappa_{t}^{-\\rho/(1-\\rho)}` and
vFuncNvrs has value of zero at the lower bound of market resources
mNrmMin. See PerfForesightConsumerType.ipynb documentation notebook
for a brief explanation and the links below for a fuller treatment.
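Doubling the backslashes, as above, is one fix; an equivalent approach (hypothetical here, not what this patch does) is a raw docstring, which keeps the LaTeX readable:

    # Sketch: a raw docstring avoids escaping every LaTeX backslash.
    def value_func_doc_example():
        r"""Slope of the inverse value function.

        With MPC :math:`\kappa_{t}` and risk aversion :math:`\rho`, the
        slope is :math:`\kappa_{t}^{-\rho/(1-\rho)}`.
        """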
@@ -530,6 +530,7 @@ def add_stable_points(self, solution):
----------
solution : ConsumerSolution
Solution to this period's problem, which must have attribute cFunc.
+
Returns
-------
solution : ConsumerSolution
@@ -1002,6 +1003,7 @@ def add_stable_points(self, solution):
----------
solution : ConsumerSolution
Solution to this period's problem, which must have attribute cFunc.
+
Returns
-------
solution : ConsumerSolution
@@ -1424,19 +1426,22 @@ def add_stable_points(self, solution):
interest rates.
Discussion:
+
- The target and steady state should exist under the same conditions
as in ConsIndShock.
- The ConsIndShock code as it stands can not be directly applied
because it assumes that R is a constant, and in this model R depends
on the level of wealth.
- After allowing for wealth-depending interest rates, the existing
- code might work without modification to add the stable points. If not,
- it should be possible to find these values by checking within three
- distinct intervals:
- - From h_min to the lower kink.
- - From the lower kink to the upper kink
- - From the upper kink to infinity.
- the stable points must be in one of these regions.
+ code might work without modification to add the stable points. If not,
+ it should be possible to find these values by checking within three
+ distinct intervals:
+
+ - From h_min to the lower kink.
+ - From the lower kink to the upper kink.
+ - From the upper kink to infinity.
+
+ The stable points must be in one of these regions.
"""
return solution
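For concreteness, a minimal sketch of the bracketing search described above; ``resid``, ``h_min``, ``kink_lo``, ``kink_hi``, and ``m_max`` are hypothetical stand-ins, not names from this module:

.. code:: python

    import numpy as np
    from scipy.optimize import brentq

    def find_stable_point(resid, intervals):
        # resid(m) is a steady-state residual, e.g. E[m_{t+1}] - m_t,
        # evaluated at a candidate level of market resources m.
        for lo, hi in intervals:
            # Attempt a bracketed root search only where the residual
            # changes sign over the interval.
            if np.sign(resid(lo)) != np.sign(resid(hi)):
                return brentq(resid, lo, hi)
        return None  # no stable point found in any interval

    # The three candidate intervals from the discussion above, with the
    # unbounded upper interval truncated at some large m_max:
    # intervals = [(h_min, kink_lo), (kink_lo, kink_hi), (kink_hi, m_max)]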
@@ -2734,7 +2739,7 @@ def calc_jacobian(self, shk_param, T):
LivPrb, PermShkStd, TranShkStd, DiscFac, UnempPrb, Rfree, IncUnemp.
Parameters:
- ----------
+ -----------
shk_param: string
name of variable to be shocked
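For orientation, a usage sketch of the method this docstring belongs to; the shocked parameter and horizon are illustrative, and the return structure is an assumption based on recent HARK releases:

.. code:: python

    # `agent` is a solved IndShockConsumerType instance (hypothetical).
    # Jacobians with respect to a shock to Rfree over an illustrative
    # horizon of T = 300 periods; assumed to return the consumption
    # and asset Jacobians.
    CJAC, AJAC = agent.calc_jacobian("Rfree", 300)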
@@ -3314,7 +3319,7 @@ def construct_lognormal_income_process_unemployment(self):
Note 2: All parameters are passed as attributes of the input parameters.
Parameters (passed as attributes of the input parameters)
- ----------
+ ---------------------------------------------------------
PermShkStd : [float]
List of standard deviations in log permanent income uncertainty during
the agent's life.
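As a hypothetical illustration of these attributes (values borrowed from the example parameter tables elsewhere in this patch; not a complete set):

.. code:: python

    # One-period lifecycle lists, as in the notebook examples below.
    parameters = {
        "PermShkStd": [0.1],   # std dev of log permanent shocks
        "TranShkStd": [0.2],   # std dev of log transitory shocks
        "UnempPrb": 0.05,      # probability of unemployment
        "IncUnemp": 0.3,       # transitory shock when unemployed
    }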
diff --git a/HARK/ConsumptionSaving/ConsRiskyContribModel.py b/HARK/ConsumptionSaving/ConsRiskyContribModel.py
index a1e4aa9e9..dcf94c064 100644
--- a/HARK/ConsumptionSaving/ConsRiskyContribModel.py
+++ b/HARK/ConsumptionSaving/ConsRiskyContribModel.py
@@ -8,17 +8,19 @@
The model is described in detail in the REMARK:
https://econ-ark.org/materials/riskycontrib
-@software{mateo_velasquez_giraldo_2021_4977915,
- author = {Mateo Velásquez-Giraldo},
- title = {{Mv77/RiskyContrib: A Two-Asset Savings Model with
- an Income-Contribution Scheme}},
- month = jun,
- year = 2021,
- publisher = {Zenodo},
- version = {v1.0.1},
- doi = {10.5281/zenodo.4977915},
- url = {https://doi.org/10.5281/zenodo.4977915}
-}
+.. code:: bibtex
+
+ @software{mateo_velasquez_giraldo_2021_4977915,
+ author = {Mateo Velásquez-Giraldo},
+ title = {{Mv77/RiskyContrib: A Two-Asset Savings Model with
+ an Income-Contribution Scheme}},
+ month = jun,
+ year = 2021,
+ publisher = {Zenodo},
+ version = {v1.0.1},
+ doi = {10.5281/zenodo.4977915},
+ url = {https://doi.org/10.5281/zenodo.4977915}
+ }
"""
from copy import deepcopy
@@ -65,9 +67,10 @@ class RiskyContribConsumerType(RiskyAssetConsumerType):
asset.
The frictions are:
- - A proportional tax on funds moved from the risky to the risk-free
- asset.
- - A stochastic inability to move funds between his accounts.
+
+ - A proportional tax on funds moved from the risky to the risk-free
+ asset.
+ - A stochastic inability to move funds between his accounts.
To partially avoid the second friction, the agent can commit to having a
fraction of his labor income, which is usually deposited in his risk-free
@@ -897,7 +900,7 @@ def rebalance_assets(d, m, n, tau):
----------
d : np.array
Array with rebalancing decisions. d > 0 represents depositing d*m into
- the risky asset account. d<0 represents withdrawing |d|*n (pre-tax)
+ the risky asset account. d<0 represents withdrawing ``|d|*n`` (pre-tax)
from the risky account into the risk-free account.
m : np.array
Initial risk-free assets.
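A minimal sketch of the deposit/withdrawal arithmetic being described (not the module's implementation; ``tau`` is the proportional tax on risky-to-risk-free flows):

.. code:: python

    import numpy as np

    def rebalance_assets_sketch(d, m, n, tau):
        # d > 0: deposit a fraction d of risk-free assets m into the
        # risky account; d < 0: withdraw a fraction |d| of risky assets n,
        # paying the proportional tax tau on the way out.
        deposit = np.where(d > 0, d * m, 0.0)
        withdrawal = np.where(d < 0, -d * n, 0.0)
        m_new = m - deposit + withdrawal * (1.0 - tau)
        n_new = n + deposit - withdrawal
        return m_new, n_new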
diff --git a/HARK/distribution.py b/HARK/distribution.py
index 37be02226..22314c9cb 100644
--- a/HARK/distribution.py
+++ b/HARK/distribution.py
@@ -915,15 +915,15 @@ def expected(
The function to be evaluated.
This function should take the full array of distribution values
and return either arrays of arbitrary shape or scalars.
- It may also take other arguments *args.
+ It may also take other arguments \\*args.
This function differs from the standalone `calc_expectation`
method in that it uses numpy's vectorization and broadcasting
rules to avoid costly iteration.
Note: If you need to use a function that acts on single outcomes
of the distribution, consider `distribution.calc_expectation`.
- *args :
+ \\*args :
Other inputs for func, representing the non-stochastic arguments.
- The the expectation is computed at f(dstn, *args).
+ The expectation is computed at ``f(dstn, *args)``.
Returns
-------
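To make the vectorization point concrete, a pure-NumPy sketch of the quantity ``expected`` computes for a discrete distribution (illustrative only, not HARK's implementation):

.. code:: python

    import numpy as np

    pmv = np.array([0.3, 0.4, 0.3])    # probabilities of the atoms
    atoms = np.array([0.9, 1.0, 1.1])  # values the distribution can take

    def expected_sketch(func, pmv, atoms, *args):
        # func is applied to the whole atom array at once (broadcasting),
        # then the results are probability-weighted and summed.
        return np.sum(pmv * func(atoms, *args))

    e = expected_sketch(lambda x: x**2, pmv, atoms)  # E[X**2]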
@@ -958,10 +958,10 @@ def dist_of_func(
func : function
The function to be evaluated.
This function should take the full array of distribution values.
- It may also take other arguments *args.
- *args :
+ It may also take other arguments \\*args.
+ \\*args :
Additional non-stochastic arguments for func,
- The function is computed as f(dstn, *args).
+ The function is computed as ``f(dstn, *args)``.
Returns
-------
@@ -1155,11 +1155,11 @@ def dist_of_func(
func : function
The function to be evaluated.
This function should take the full array of distribution values.
- It may also take other arguments *args.
- *args :
+ It may also take other arguments \\*args.
+ \\*args :
Additional non-stochastic arguments for func,
- The function is computed as f(dstn, *args).
- **kwargs :
+ The function is computed as ``f(dstn, *args)``.
+ \\*\\*kwargs :
Additional keyword arguments for func. Must be xarray compatible
in order to work with xarray broadcasting.
@@ -1201,15 +1201,15 @@ def expected(
The function to be evaluated.
This function should take the full array of distribution values
and return either arrays of arbitrary shape or scalars.
- It may also take other arguments *args.
+ It may also take other arguments \\*args.
This function differs from the standalone `calc_expectation`
method in that it uses numpy's vectorization and broadcasting
rules to avoid costly iteration.
Note: If you need to use a function that acts on single outcomes
of the distribution, consider `distribution.calc_expectation`.
- *args :
+ \\*args :
Other inputs for func, representing the non-stochastic arguments.
- The the expectation is computed at f(dstn, *args).
+ The expectation is computed at ``f(dstn, *args)``.
labels : bool
If True, the function should use labeled indexing instead of integer
indexing using the distribution's underlying rv coordinates. For example,
@@ -1855,10 +1855,10 @@ def calc_expectation(dstn, func=lambda x: x, *args):
The function to be evaluated.
This function should take an array of shape dstn.dim() and return
either arrays of arbitrary shape or scalars.
- It may also take other arguments *args.
- *args :
+ It may also take other arguments \\*args.
+ \\*args :
Other inputs for func, representing the non-stochastic arguments.
- The the expectation is computed at f(dstn, *args).
+ The expectation is computed at ``f(dstn, *args)``.
Returns
-------
@@ -1893,10 +1893,10 @@ def distr_of_function(dstn, func=lambda x: x, *args):
func : function
The function to be evaluated.
This function should take an array of shape dstn.dim().
- It may also take other arguments *args.
- *args :
+ It may also take other arguments \\*args.
+ \\*args :
Additional non-stochastic arguments for func,
- The function is computed at f(dstn, *args).
+ The function is computed at ``f(dstn, *args)``.
Returns
-------
@@ -1977,7 +1977,7 @@ def expected(func=None, dist=None, args=(), **kwargs):
The function to be evaluated.
This function should take the full array of distribution values
and return either arrays of arbitrary shape or scalars.
- It may also take other arguments *args.
+ It may also take other arguments ``*args``.
This function differs from the standalone `calc_expectation`
method in that it uses numpy's vectorization and broadcasting
rules to avoid costly iteration.
@@ -1987,7 +1987,7 @@ def expected(func=None, dist=None, args=(), **kwargs):
The distribution over which the function is to be evaluated.
args : tuple
Other inputs for func, representing the non-stochastic arguments.
- The the expectation is computed at f(dstn, *args).
+ The expectation is computed at ``f(dstn, *args)``.
labels : bool
If True, the function should use labeled indexing instead of integer
indexing using the distribution's underlying rv coordinates. For example,
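A hedged usage sketch of this standalone function (the ``Normal(...).discretize`` construction assumes the API of recent HARK releases; values are illustrative):

.. code:: python

    from HARK.distribution import Normal, expected

    # A discretized normal shock distribution with 7 atoms.
    dstn = Normal(mu=1.0, sigma=0.1).discretize(7)

    # E[shk * a] with fixed assets a = 2.0 passed as a non-stochastic arg.
    ex = expected(lambda shk, a: shk * a, dstn, args=(2.0,))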
diff --git a/HARK/frame.py b/HARK/frame.py
index 10613dba2..143e96ca2 100644
--- a/HARK/frame.py
+++ b/HARK/frame.py
@@ -401,8 +401,10 @@ def repeat(self, tv_parameters):
tv_parameters : dict
A dictionary of 'time-varying' parameters.
Keys are (original) variable names. Values are dictionaries with:
- Keys are parameter names. Values as iterable contain time-varying
- parameter values. All time-varying values assumes to be of same length, N.
+
+ - Keys are parameter names.
+ - Values as iterable contain time-varying parameter values.
+ All time-varying values are assumed to be of the same length, N.
"""
# getting length of first iterable thing passed to it.
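A hypothetical illustration of the dictionary shape this describes:

.. code:: python

    # Time-varying parameters for a hypothetical 'theta' frame over
    # N = 3 periods; the names are illustrative only.
    tv_parameters = {
        "theta": {
            "sigma": [0.10, 0.15, 0.20],
        },
    }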
diff --git a/HARK/interpolation.py b/HARK/interpolation.py
index 9896f2696..4ba7021aa 100644
--- a/HARK/interpolation.py
+++ b/HARK/interpolation.py
@@ -588,7 +588,7 @@ class IdentityFunction(MetricObject):
Parameters
----------
i_dim : int
- Index of the dimension on which the identity is defined. f(*x) = x[i]
+ Index of the dimension on which the identity is defined. ``f(*x) = x[i]``
n_dims : int
Total number of input dimensions for this function.
"""
diff --git a/examples/ConsIndShockModel/KinkedRconsumerType.ipynb b/examples/ConsIndShockModel/KinkedRconsumerType.ipynb
index dff2c568b..dbfb385bc 100644
--- a/examples/ConsIndShockModel/KinkedRconsumerType.ipynb
+++ b/examples/ConsIndShockModel/KinkedRconsumerType.ipynb
@@ -75,7 +75,7 @@
"a_t &\\geq& \\underline{a}, \\\\\n",
"m_{t+1} &=& \\Rfree_t/(\\PermGroFac_{t+1} \\psi_{t+1}) a_t + \\theta_{t+1}, \\\\\n",
"\\Rfree_t &=& \\cases{\\Rfree_{boro} \\texttt{ if } a_t < 0 \\\\\n",
- " \\Rfree_{save} \\texttt{ if } a_t \\geq 0},\\\\\n",
+ "\\,\\! \\Rfree_{save} \\texttt{ if } a_t \\geq 0},\\\\\n",
"\\Rfree_{boro} &>& \\Rfree_{save}, \\\\\n",
"(\\psi_{t+1},\\theta_{t+1}) &\\sim& F_{t+1}, \\\\\n",
"\\mathbb{E}[\\psi]=\\mathbb{E}[\\theta] &=& 1.\n",
@@ -106,27 +106,27 @@
"| Parameter | Description | Code | Example value | Time-varying? |\n",
"| :---: | --- | --- | --- | --- |\n",
"| $\\DiscFac$ |Intertemporal discount factor | $\\texttt{DiscFac}$ | $0.96$ | |\n",
- "| $\\CRRA $ |Coefficient of relative risk aversion | $\\texttt{CRRA}$ | $2.0$ | |\n",
+ "| $\\CRRA$ |Coefficient of relative risk aversion | $\\texttt{CRRA}$ | $2.0$ | |\n",
"| $\\Rfree_{boro}$ | Risk free interest factor for borrowing | $\\texttt{Rboro}$ | $1.20$ | |\n",
"| $\\Rfree_{save}$ | Risk free interest factor for saving | $\\texttt{Rsave}$ | $1.01$ | |\n",
"| $1 - \\DiePrb_{t+1}$ |Survival probability | $\\texttt{LivPrb}$ | $[0.98]$ | $\\surd$ |\n",
"|$\\PermGroFac_{t+1}$|Permanent income growth factor|$\\texttt{PermGroFac}$| $[1.01]$ | $\\surd$ |\n",
- "| $\\sigma_\\psi $ | Standard deviation of log permanent income shocks | $\\texttt{PermShkStd}$ | $[0.1]$ |$\\surd$ |\n",
- "| $N_\\psi $ | Number of discrete permanent income shocks | $\\texttt{PermShkCount}$ | $7$ | |\n",
- "| $\\sigma_\\theta $ | Standard deviation of log transitory income shocks | $\\texttt{TranShkStd}$ | $[0.2]$ | $\\surd$ |\n",
- "| $N_\\theta $ | Number of discrete transitory income shocks | $\\texttt{TranShkCount}$ | $7$ | |\n",
+ "| $\\sigma_\\psi$ | Standard deviation of log permanent income shocks | $\\texttt{PermShkStd}$ | $[0.1]$ |$\\surd$ |\n",
+ "| $N_\\psi$ | Number of discrete permanent income shocks | $\\texttt{PermShkCount}$ | $7$ | |\n",
+ "| $\\sigma_\\theta$ | Standard deviation of log transitory income shocks | $\\texttt{TranShkStd}$ | $[0.2]$ | $\\surd$ |\n",
+ "| $N_\\theta$ | Number of discrete transitory income shocks | $\\texttt{TranShkCount}$ | $7$ | |\n",
"| $\\mho$ | Probability of being unemployed and getting $\\theta=\\underline{\\theta}$ | $\\texttt{UnempPrb}$ | $0.05$ | |\n",
- "| $\\underline{\\theta} $ | Transitory shock when unemployed | $\\texttt{IncUnemp}$ | $0.3$ | |\n",
+ "| $\\underline{\\theta}$ | Transitory shock when unemployed | $\\texttt{IncUnemp}$ | $0.3$ | |\n",
"| $\\mho^{Ret}$ | Probability of being \"unemployed\" when retired | $\\texttt{UnempPrb}$ | $0.0005$ | |\n",
- "| $\\underline{\\theta}^{Ret} $ | Transitory shock when \"unemployed\" and retired | $\\texttt{IncUnemp}$ | $0.0$ | |\n",
+ "| $\\underline{\\theta}^{Ret}$ | Transitory shock when \"unemployed\" and retired | $\\texttt{IncUnemp}$ | $0.0$ | |\n",
"| $(none)$ | Period of the lifecycle model when retirement begins | $\\texttt{T_retire}$ | $0$ | |\n",
"| $(none)$ | Minimum value in assets-above-minimum grid | $\\texttt{aXtraMin}$ | $0.001$ | |\n",
"| $(none)$ | Maximum value in assets-above-minimum grid | $\\texttt{aXtraMax}$ | $20.0$ | |\n",
"| $(none)$ | Number of points in base assets-above-minimum grid | $\\texttt{aXtraCount}$ | $48$ | |\n",
"| $(none)$ | Exponential nesting factor for base assets-above-minimum grid | $\\texttt{aXtraNestFac}$ | $3$ | |\n",
"| $(none)$ | Additional values to add to assets-above-minimum grid | $\\texttt{aXtraExtra}$ | $None$ | |\n",
- "| $\\underline{a} $ | Artificial borrowing constraint (normalized) | $\\texttt{BoroCnstArt}$ | $None$ | |\n",
- "| $(none) $ |Indicator for whether $\\texttt{vFunc}$ should be computed | $\\texttt{vFuncBool}$ | $True$ | |\n",
+ "| $\\underline{a}$ | Artificial borrowing constraint (normalized) | $\\texttt{BoroCnstArt}$ | $None$ | |\n",
+ "| $(none)$ |Indicator for whether $\\texttt{vFunc}$ should be computed | $\\texttt{vFuncBool}$ | $True$ | |\n",
"| $(none)$ |Indicator for whether $\\texttt{cFunc}$ should use cubic splines | $\\texttt{CubicBool}$ | $False$ | |\n",
"|$T$| Number of periods in this type's \"cycle\" |$\\texttt{T_cycle}$| $1$ | |\n",
"|(none)| Number of times the \"cycle\" occurs |$\\texttt{cycles}$| $0$ | |\n",
diff --git a/examples/ConsIndShockModel/PerfForesightConsumerType.ipynb b/examples/ConsIndShockModel/PerfForesightConsumerType.ipynb
index 4ec3fa3cb..e44551f17 100644
--- a/examples/ConsIndShockModel/PerfForesightConsumerType.ipynb
+++ b/examples/ConsIndShockModel/PerfForesightConsumerType.ipynb
@@ -119,7 +119,7 @@
"| Parameter | Description | Code | Example value | Time-varying? |\n",
"| :---: | --- | --- | --- | --- |\n",
"| $\\DiscFac$ |Intertemporal discount factor | $\\texttt{DiscFac}$ | $0.96$ | |\n",
- "| $\\CRRA $ |Coefficient of relative risk aversion | $\\texttt{CRRA}$ | $2.0$ | |\n",
+ "| $\\CRRA$ |Coefficient of relative risk aversion | $\\texttt{CRRA}$ | $2.0$ | |\n",
"| $\\Rfree$ | Risk free interest factor | $\\texttt{Rfree}$ | $1.03$ | |\n",
"| $1 - \\DiePrb_{t+1}$ |Survival probability | $\\texttt{LivPrb}$ | $[0.98]$ | $\\surd$ |\n",
"|$\\PermGroFac_{t+1}$|Permanent income growth factor|$\\texttt{PermGroFac}$| $[1.01]$ | $\\surd$ |\n",
diff --git a/examples/GenIncProcessModel/GenIncProcessModel.ipynb b/examples/GenIncProcessModel/GenIncProcessModel.ipynb
index 4570b10d2..6cfb2ad1a 100644
--- a/examples/GenIncProcessModel/GenIncProcessModel.ipynb
+++ b/examples/GenIncProcessModel/GenIncProcessModel.ipynb
@@ -91,7 +91,7 @@
"M_{t+1} &=& R a_t + \\theta_{t+1} \\\\\n",
"p_{t+1} &=& G_{t+1}(p_t)\\psi_{t+1} \\\\\n",
"\\psi_t \\sim F_{\\psi_t} &\\qquad& \\theta_t \\sim F_{\\theta_t} \\\\\n",
- " \\mathbb{E} [F_{\\psi_t}] = 1 & & \\mathbb{E} [F_{\\theta_t}] =1 \\\\\n",
+ "\\mathbb{E} [F_{\\psi_t}] = 1 & & \\mathbb{E} [F_{\\theta_t}] =1 \\\\\n",
"U(c) &=& \\frac{c^{1-\\rho}}{1-\\rho}\n",
"\\end{eqnarray*}"
]
diff --git a/examples/Journeys/Journey-PhD.ipynb b/examples/Journeys/Journey-PhD.ipynb
index f66a1b55f..bcad3309a 100644
--- a/examples/Journeys/Journey-PhD.ipynb
+++ b/examples/Journeys/Journey-PhD.ipynb
@@ -424,7 +424,7 @@
"\n",
"The Market class was designed to be a general framework for many different macro models. It involves a procedure of aggregating the agents' choices: eg. aggregating consumption and savings (`reap_vars` in the code) and then transforming the aggregated variables (`mill_rule` in the code).\n",
"\n",
- "If you would like to get better knowledge about this structure, first take a look at the [Hark documentation](https://hark.readthedocs.io/en/latest/ARKitecture.html). Next, to understand how the HARK Market class works in less standard setting, look at the [Fashion victim model](../notebooks/Fashion-Victim-Model.ipynb).\n"
+ "If you would like to get better knowledge about this structure, first take a look at the [Hark documentation](https://hark.readthedocs.io/en/latest/ARKitecture.html). Next, to understand how the HARK Market class works in less standard setting, look at the [Fashion victim model](https://github.com/econ-ark/DemARK/blob/99948acb7b59cc9a6fb7de758972266fa4b03a06/notebooks/Fashion-Victim-Model.ipynb).\n"
]
},
{
diff --git a/examples/LifecycleModel/LifecycleModel.py b/examples/LifecycleModel/LifecycleModel.py
index fe2d4f81f..ad244ea03 100644
--- a/examples/LifecycleModel/LifecycleModel.py
+++ b/examples/LifecycleModel/LifecycleModel.py
@@ -112,7 +112,7 @@
def savingRateFunc(SomeType, m):
"""
Parameters:
- ----------
+ -----------
SomeType:
Agent type that has been solved and simulated.
diff --git a/requirements/dev.txt b/requirements/dev.txt
index 5f35beda0..7eb36db45 100644
--- a/requirements/dev.txt
+++ b/requirements/dev.txt
@@ -4,6 +4,6 @@ nbval
pre-commit
pytest
pytest-xdist
-recommonmark>=0.7
+myst-parser>=2
sphinx>=6.1
pydata-sphinx-theme
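Since this swaps recommonmark for MyST, the Sphinx configuration presumably gains the MyST extension; a minimal sketch of the relevant ``Documentation/conf.py`` lines (the actual changes are in the diff stat above, not shown here):

.. code:: python

    # conf.py (sketch): parse Markdown sources with MyST.
    extensions = [
        "myst_parser",
        # ... other extensions ...
    ]
    source_suffix = {
        ".rst": "restructuredtext",
        ".md": "markdown",
    }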