---
title: "Bayesian Data Analysis course - Project work (GSU)"
date: "Page updated: `r format.Date(file.mtime('project_gsu.Rmd'),'%Y-%m-%d')`"
---
Bayesian Data Analysis Global South (GSU) 2023
## Project work details
Project work involves choosing a data set and performing a complete
analysis following all the parts of the Bayesian workflow studied
during the course. In this course instance there are no project
presentations, but you will get feedback from your peers. You can do
the project work in groups if you like.
## Project schedule
- See [the overall schedule of the GSU 2023 course](gsu2023.html#Schedule_2023)
- Start working on the project around the midpoint of the course.
- Project report deadline: 15th May. Submit in peergrade (a separate "class"; the class code will be posted in the course chat announcements).
- Project report peer grading: 18th-20th May.
## Groups
In this course instance the project work can be done in groups of 1-4
persons, but you are not required to work in a group.
If you don't have a group, you can ask other students in the group
chat channel **#project**. Tell what kind of data you are interested
in (e.g. medicine, health, biological, engineering, political,
business), whether you prefer R or Python, and whether you already
have a more concrete idea for the topic.
## Evaluation
In this course instance the evaluation of the project work consists only of
- the peer-graded project report (40%) (within peergrade: submission 80% and feedback 20%)
## Project report
In the project report you practice presenting the problem and the data
analysis results, which means that a minimal listing of code and figures
is not a good report. There are different levels at which a data analysis
project can be reported, and this report should be more than a summary
of results without the workflow steps. While describing the steps and
decisions made during the workflow, some
of the **diagnostic outputs and code** can be put in the **appendix** to keep the report readable. If you
are uncertain, you can ask the TAs in the TA sessions whether your level
of detail is appropriate.
The report **should not be over 20 pages** and should include
1. Introduction describing
- the motivation
- the problem
- and the main modeling idea.
- Including an illustrative figure is recommended.
1. Description of the data and the analysis problem. Provide information on where the data was obtained, whether it has been previously used in some online case study, and how your analysis differs from the existing analyses.
1. Description of at least two models, for example:
- non-hierarchical and hierarchical,
- linear and non-linear,
- variable selection with many models.
1. Informative or weakly informative priors, and justification of their
choices.
1. Stan, rstanarm or brms code.
1. How the Stan model was run, that is, what options were used. This is clearest as a combination of a textual explanation and the actual code (see the sketch after this list).
1. Convergence diagnostics ($\widehat{R}$, ESS, divergences) and what was done if the convergence was not good on the first try. <u>This should be reported for all models.</u>
1. Posterior predictive checks and what was done to improve the model. <u>This should be reported for all models.</u>
1. **Optional/Bonus**: Predictive performance assessment if applicable (e.g. classification
accuracy) and an evaluation of the practical usefulness of that accuracy. If included, this should be reported for all models as well.
1. Sensitivity analysis with respect to prior choices (i.e. checking whether the results change a lot if the prior is changed). <u>This should be reported for all models.</u>
1. Model comparison (e.g. with LOO-CV).
1. Discussion of issues and potential improvements.
1. Conclusion describing what was learned from the data analysis.
1. Self-reflection on what you or your group learned while doing the project.
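The sketch below illustrates in R, using `brms`, how several of the steps above might look in code: explicit priors, the run options, convergence diagnostics, a posterior predictive check, and LOO-CV model comparison. It is only a minimal outline under stated assumptions: the data frame `df`, the outcome `y`, the predictor `x`, and the grouping variable `g` are hypothetical placeholders, and the priors shown are not recommendations for any particular data set.

```r
library(brms)

# Pooled (non-hierarchical) model with an explicit proper prior for every
# parameter class; the run options (chains, iterations, adapt_delta, seed)
# are the kind of thing to report in the text as well.
fit1 <- brm(
  y ~ x,
  data = df,
  family = gaussian(),
  prior = c(
    prior(normal(0, 1), class = b),
    prior(normal(0, 5), class = Intercept),
    prior(exponential(1), class = sigma)
  ),
  chains = 4, iter = 2000, warmup = 1000, seed = 1,
  control = list(adapt_delta = 0.95)
)

# Hierarchical alternative with varying intercepts across the groups in g
fit2 <- brm(
  y ~ x + (1 | g),
  data = df,
  family = gaussian(),
  prior = c(
    prior(normal(0, 1), class = b),
    prior(normal(0, 5), class = Intercept),
    prior(exponential(1), class = sigma),
    prior(exponential(1), class = sd)
  ),
  chains = 4, iter = 2000, warmup = 1000, seed = 1,
  control = list(adapt_delta = 0.95)
)

# Convergence diagnostics: Rhat and bulk/tail ESS for each parameter ...
summary(fit1)
# ... and the number of divergent transitions
sum(subset(nuts_params(fit1), Parameter == "divergent__")$Value)

# Graphical posterior predictive check
pp_check(fit1, ndraws = 100)

# Model comparison with LOO-CV
loo_compare(loo(fit1), loo(fit2))

# For the prior sensitivity analysis, re-fit with alternative priors
# (e.g. wider scales) and compare the resulting posterior summaries.
```

Repeat the diagnostics and checks for every model you fit, and adapt the formulas, families, and priors to your own problem.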
## Data sets
As some data sets have been overused for this kind of project, note that the
following ones are forbidden in this work (more can be added to this list, so
make sure to check it regularly):
- extremely common data sets like titanic, mtcars, iris
- Baseball batting (used in Bob Carpenter's StanCon case study).
- Data sets used in the course demos
It's best to use a dataset for which there is no ready-made analysis on the internet, but if you choose a dataset already used in some online case study, provide links to the previous studies and report how your analysis differs from them (for example, if someone has done a non-Bayesian analysis and you do a full Bayesian analysis).
Depending on the model and the structure of the data, a good data set would have more than 100 observations but less than 1 million. If you know an interesting big data set, you can use a smaller subset of the data to keep the computation times feasible. It would be good if the data has some structure, so that it is sensible to use multilevel/hierarchical models.
If you're looking for inspiration or you're not sure where to begin, browse [this list of datasets](https://github.com/awesomedata/awesome-public-datasets) arranged by topic, the datasets mentioned in the [lecture slides](https://github.com/avehtari/BDA_course_Aalto/blob/gh-pages/project_work/slides_project_work.pdf) (see slide 6), or else look at some of these publicly accessible databases:
- [EU data](https://data.europa.eu/data/datasets?locale=en&minScoring=0)
- [ICPSR](https://www.icpsr.umich.edu/web/pages/ICPSR/index.html)
- [The World Bank](https://data.worldbank.org/)
## Model requirements
- Every parameter needs to have an explicit proper prior. Improper flat priors
are not allowed.
- A hierarchical model is a model where the prior of a certain parameter
contains other parameters that are also estimated in the model. For instance,
`b ~ normal(mu, sigma)`, `mu ~ normal(0, 1)`, `sigma ~ exponential(1)`.
- Do not impose hard constraints on a parameter unless they are natural for
it. `uniform(a, b)` should not be used unless the boundaries are true logical boundaries and values beyond them are completely impossible.
- At least some models should include covariates. Modelling the outcome without predictors
is likely too simple for the project.
- `brms` and `rstanarm` can be used, but you need to report the priors used (including the priors `brms` and `rstanarm` assign by default); see the sketch below the list.
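As a small companion to the last point, the snippet below (reusing the hypothetical `df`, `y`, `x`, `g`, and `fit2` from the sketch in the report section) shows one way to find out which priors `brms` would assign by default and to report the priors actually used in a fitted model.

```r
library(brms)

# Which priors would brms assign by default for this formula and family?
# Use this to spot parameter classes that still need an explicit proper prior.
get_prior(y ~ x + (1 | g), data = df, family = gaussian())

# After fitting, prior_summary() lists the priors that were actually used;
# this is the information to include in the report.
prior_summary(fit2)
```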
## Some examples
The following case study examples demonstrate how text, equations, figures, code, and inference results can be included in one report. These examples don't necessarily have all the workflow steps required in your report, but different steps are illustrated in different case studies and you can get good ideas for your report just by browsing through them.
- [BDA R and Python demos](demos.html) are quite minimal in description of the data and discussion of the results, but show many diagnostics and basic plots.
- Some [Stan case studies](https://mc-stan.org/users/documentation/case-studies) focus on specific methods, but many of them are excellent examples for this course. They don't include all the steps required in this course, but they are good examples of writing. Some of them are longer or use more advanced models than required in this course.
- [Bayesian workflow for disease transmission modeling in Stan](https://mc-stan.org/users/documentation/case-studies/boarding_school_case_study.html)
- [Model-based Inference for Causal Effects in Completely Randomized Experiments](https://mc-stan.org/users/documentation/case-studies/model-based_causal_inference_for_RCT.html)
- [Tagging Basketball Events with HMM in Stan](https://mc-stan.org/users/documentation/case-studies/bball-hmm.html)
- [Model building and expansion for golf putting](https://mc-stan.org/users/documentation/case-studies/golf.html)
- [A Dyadic Item Response Theory Model](https://mc-stan.org/users/documentation/case-studies/dyadic_irt_model.html)
- [Predator-Prey Population Dynamics: the Lotka-Volterra model in Stan](https://mc-stan.org/users/documentation/case-studies/lotka-volterra-predator-prey.html)
- [Hierarchical model for motivational shifts in aging monkeys](https://dansblog.netlify.app/posts/2022-09-04-everybodys-got-something-to-hide-except-me-and-my-monkey/everybodys-got-something-to-hide-except-me-and-my-monkey.html)
- Some [StanCon case studies](https://github.com/stan-dev/stancon_talks) (scroll down) can also provide good project ideas.