This repository has been archived by the owner on Jan 23, 2024. It is now read-only.

Benchmarking the accuracy of the prediction of ABM-based models against DSGE #21

Open · rht opened this issue Mar 16, 2017 · 1 comment

@rht (Member) commented Mar 16, 2017

For rigor's sake, it would be effective to create a benchmark based on a real dataset that can be reused by several ABM implementations over time. This is analogous to the NIST hash function competition, or to the standard ML datasets used in various fields, but for economic models. If such a benchmark existed, prediction accuracy and runtime performance could be improved iteratively over time.

I found a super relevant paper here and its implementation here. I have been diving into https://github.com/S120/benchmark, but wasn't able to find how a dataset is included. The most I could find so far (in order to proceed) is footnote 26 in the paper:

Real time series are taken from the Federal Reserve Economic Data (FRED): they are quarterly data ranging from 1955-01-01 to 2013-10-01 for unemployment (not seasonally adjusted, FRED code: LRUN64TTUSQ156N) and ranging from 1947-01-01 to 2013-10-01 for investments, consumption and GDP (FRED codes: PCECC96, GPDIC96, and GDPC1 respectively)
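
For anyone else trying to reproduce the dataset, here is roughly how I would pull those series (a sketch using pandas_datareader, not something taken from the S120/benchmark repo):

```python
# Sketch: download the FRED series named in footnote 26 with
# pandas_datareader, so simulated output can later be compared against them.
# (pip install pandas-datareader)
import datetime

import pandas_datareader.data as web

start_macro = datetime.datetime(1947, 1, 1)
start_unemp = datetime.datetime(1955, 1, 1)
end = datetime.datetime(2013, 10, 1)

# Real consumption, investment and GDP (quarterly)
macro = web.DataReader(["PCECC96", "GPDIC96", "GDPC1"], "fred", start_macro, end)

# Unemployment rate, not seasonally adjusted (quarterly)
unemployment = web.DataReader("LRUN64TTUSQ156N", "fred", start_unemp, end)

print(macro.tail())
print(unemployment.tail())
```

From there the benchmark could start as something as simple as comparing moments or forecast errors of the simulated series against these.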

Even so, I can't figure out any micro parameters from the paper, e.g. how many agents (households + firms + banks) have to be spawned (and whether that number should grow organically over several decades), or the initial endowments/prices/wages. I could only find the sequence of events for each round, in section 2.1.
Do you have any recommendation on how to write the model in ABCE, @DavoudTaghawiNejad?
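
To make the question more concrete, this is the kind of skeleton I have in mind. It is plain Python rather than the actual ABCE API, and all the agent counts, initial values, and behavioural rules are placeholders I made up, since the paper does not state them:

```python
# Hypothetical skeleton of the per-round event loop (cf. section 2.1 of the
# paper).  Agent counts, initial values and behavioural rules below are
# placeholders, NOT values from the paper, and this is not the ABCE API.
import random

NUM_HOUSEHOLDS = 1000  # placeholder
NUM_FIRMS = 100        # placeholder
NUM_BANKS = 10         # placeholder


def simulate(rounds, seed=0):
    rng = random.Random(seed)
    households = [{"wealth": 1.0, "employed": False} for _ in range(NUM_HOUSEHOLDS)]
    firms = [{"price": 1.0, "output": 0.0} for _ in range(NUM_FIRMS)]
    banks = [{"reserves": 1.0} for _ in range(NUM_BANKS)]

    gdp_series = []
    for _ in range(rounds):
        # 1. labour market: stand-in matching rule
        for h in households:
            h["employed"] = rng.random() < 0.9

        # 2. production: output proportional to employed workers
        employed = sum(h["employed"] for h in households)
        for f in firms:
            f["output"] = employed / len(firms)

        # 3. consumption / credit steps would go here (banks unused so far)

        # 4. aggregate bookkeeping, for comparison against the FRED series
        gdp_series.append(sum(f["price"] * f["output"] for f in firms))

    return gdp_series


if __name__ == "__main__":
    print(simulate(rounds=8))
```

The open question is how each of these steps should map onto ABCE.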

@rht (Member, Author) commented Mar 16, 2017

cc: @S120 / @antoinegodin
