---
title: "Documentation"
output:
  html_document:
    css: "css/custom.css"
    toc: TRUE
    toc_float: TRUE
    anchor_sections: FALSE
    includes:
      in_header: plausible_analytics.html
---
<br>
:::presentation
<br>
You can view slides from this talk [here](https://cghlewis.github.io/mpsi-training1/).
:::
---
## Overview
---
Data documentation is not only required by funders, including IES; it is also essential for good data management. Data documentation should cover the who, what, where, when, and why of your project and your data ([OSF: NIH](https://osf.io/48ufs/)).
```{r, echo=FALSE, fig.align="center", out.width='65%'}
knitr::include_graphics("img/document2.png")
```
Documentation also allows you to:
* Track decisions/changes made throughout the life cycle of the project
* Make decisions/analyses replicable
* Clean data with fidelity
* Ensure others use and interpret data accurately
* Discover errors in data
* Allow others to find archived data through metadata
* Reduce [data rot](https://www.teaguehenry.com/strings-not-factors/2021/1/24/data-management-for-researchers-three-terrifying-tales), where good data is transformed over time and becomes unusable because no one tracks the transformations
Documentation can take many forms. [IES](https://ies.ed.gov/funding/datasharing_implementation.asp) states that they expect documentation to "be a comprehensive and stand-alone document that includes all the information necessary to replicate the analysis performed by the original research team" and should include:
* a summary of the purpose of the data collection
* methodology and procedures used to collect the data
* timing of the data collection
* details of the data codes, definition of variables, transformations, variable field locations, and frequencies
However, a) IES provides no template for how documentation should be laid out or which tools to use to create/share this information, and b) outside of the documentation IES requires for data sharing, there may be additional documentation your team wants to keep to help manage internal processes.
This training will cover the types of documentation you may want to keep. Many of these documents have overlapping information, and some terms may be used interchangeably in the field (ex: data dictionary and codebook, or README and metadata). Also, some of these documents are important to start on day one of the project, while others are more important to create at the end of the project, when you are ready to share data. The point of the next section isn't that you should implement all of these documents, but rather that you should consider which documents capture the information you need to have a successful project. It may be just a protocol and a data dictionary. Or it may be a protocol, a data dictionary, and a README for each data file. Or it may be all of these documents! It depends on things such as the scale of your project and how you plan to share the data at the end of your project.
Additional reading on the importance of data documentation can be found here:
📑 [University of Helsinki](https://www.helsinki.fi/en/research/guide-for-data-documentation)
📑 [Washington University in St. Louis](https://libguides.wustl.edu/c.php?g=47355&p=303435)
---
## Documentation
---
### Protocol
I've seen this document called many other names, including procedures manual, standard operating procedures (SOP), research protocol, or coordinator's manual. The CITI Good Clinical Practices [training](https://about.citiprogram.org/series/good-clinical-practice-gcp/) calls this a Clinical Protocol. Whatever term you use, this is a document (or set of documents) recording all of your procedures and any changes made to those procedures throughout the grant.
At some point someone will ask you how or why you did something the way you did. This is the document that will save you because it is very likely that you will not remember. This document also helps standardize your procedures. You can and should refer back to this document during a longitudinal study to ensure you continue to implement procedures in the same way.
I highly recommend always including a protocol in any data documentation plan. You can make separate protocol documents per procedure or keep them all in one standalone document with a table of contents. Most likely your protocol will not be shared outside of your team, but the information in your protocol can be used to inform other documentation such as READMEs or codebooks. Your protocol can live in any format that works for you (ex: .docx, .md, .txt, a Google Doc, or a platform such as [protocols.io](https://www.protocols.io/)). It is a living document that will be continually updated, so use a format that makes sense for you. All of your project/grant-specific protocols should be housed in an easily accessible location, such as the top of a project folder.
It is also possible to have protocols that apply across all projects (such as rules for data entry). You can keep a copy of these broader rules in an institutional protocol (or what some may call a lab manual) to be referred to for all projects, and keep it in an accessible location such as your team folder.
Your protocol should cover procedures/decisions for the following:
* Participant recruitment (inclusionary and exclusionary decisions)
* Consent and assent
* Participant selection
* Randomization and blinding/unblinding
* Data collection
* Staff training
* Data entry, data retrieval, data scoring (including decision rules)
* Payments/incentives
* Intervention implementation
Each protocol should *begin* with the following:
* Title
* Date the protocol was made
* Who made the protocol
* Any rationale behind the protocol
* Any related documents or research behind the protocol
There may be times when you need to revise your protocol, either due to decisions made by the researchers (ex: moving surveys from paper to online due to the pandemic) or to note deviations from the protocol discovered during data collection (ex: data collectors were allowing students to read assessment instructions on their own rather than reading them out loud to the group). For any *changes to* or *deviations from* the protocol after the project has begun, add the following below the original protocol section:
* Revision Date
* Who decided on the revision
* Any rationale behind the revision
* Any related documents or research behind the revision
It is also good practice to add a copy of your data collection instruments to your protocol (Surveys, Interview Questions, etc.), as well as changes/versions of those and explanations for the changes.
📑 These [slides](https://figshare.com/articles/Data_Management_and_Data_Management_Plans/7890827) from Jessica Logan, Ph.D. have nice protocol examples.
📑 This [sample](img/sample_sop.pdf) protocol (or SOP) from CITI, while not education focused, provides a nice template for how to lay out each individual protocol.
📑 This is a cool [example](https://cds-snc.github.io/design-research-handbook/home/) of an interactive research protocol housed on GitHub. You could build something similar in a tool like Google Docs. While this one isn't education specific, you can see how the outline can be replaced with education research content.
<br>
### Style Guide
A [style guide](https://en.wikipedia.org/wiki/Style_guide) is a set of standards around data management rules. It improves consistency within and across files and projects. This document includes conventions for procedures such as variable naming, value coding, file naming, versioning, and file structure. Consider having a style guide specific to every project (for variable naming, value coding) as well as a general style guide that applies to all projects (for file structure, versioning, and file naming). This guide can be kept in any form such as .txt or .pdf and should be housed where it can easily be accessed by all team members (Ex: a style guide that applies across projects may be kept on a wiki and a project-specific style guide may be kept as a protocol or README in a project folder). See [Training3](https://cghlewis.github.io/mpsi-data-training/training_3.html) for best practices in style guides.
Example [style guide](https://github.com/bartvandebiezen/file-name-conventions/blob/master/README.md).
<br>
### Wiki
I am including a wiki in this list even though I think it is an unconventional documentation tool in education research. A wiki can help your team internally document, organize, and navigate reference material. If you use a platform such as Microsoft SharePoint, your default team page type is a wiki, and your home page can be a great place to store information that your team frequently refers to. SharePoint describes a wiki as "a site that is designed for groups of people to quickly capture and share ideas by creating simple pages and linking them together." It is a page where anyone on your team can add or edit content (such as a short blurb about a document) and link to the referenced documents for easy access.
Wikis can be made for specific research projects and are great for referring to frequently requested information (ex: recruitment information or participant payment details). You can also make a general research team wiki that includes information that either applies to your entire team (ex: Team meeting notes or employee manual) or information that is relevant across all projects (ex: Style guide, lab manual).
📑 You can make wikis with many other tools, but you can read more about SharePoint's wiki [here](https://support.microsoft.com/en-us/office/create-and-edit-a-wiki-dc64f9c2-d1a2-44b5-ac59-b9d535551a32).
📑 [Notion](https://www.notion.so/guides/how-to-build-a-wiki-for-your-company) is another tool I recently learned about that can be used to build a wiki.
📑 There is a great episode of the [Education Data Chat](https://www.buzzsprout.com/1074286/5185513-episode-5-organizing-shared-network-drives-tips-and-tricks) podcast where they discuss wikis as well.
<br>
### Data Dictionary
A data dictionary is another essential data management document, and I highly suggest you start one before you ever collect a piece of data. This will help you set up successful back-end coding as you create data collection tools, and it will serve as a reference document you can use as questions come up about the data. It is also a document that can be shared with others to help them better understand your data. A data dictionary is usually in rectangular format, for example an Excel spreadsheet, Google Sheet, or something similar with rows and columns. It includes all information relevant to every variable in your data.
📑 Here is a nice [example](https://media.screensteps.com/image_assets/assets/001/878/671/original/8fbcc565-602f-4ac6-b828-2eaaeb67578b.png) of a very simple data dictionary.
Your dictionary **should** capture information such as:
* Variable Name
* Variable Label
* Exact question text
* Associated scale/measure (if the item belongs to a scale)
* Associated form (Student survey, teacher rating of student, etc.)
* Value range or value codes ([1-99] or 0=No, 1=Yes)
* Measurement unit (numeric, string, date, etc.)
* Missing data codes
* Variable universe (Who gets this question? Is there skip logic?)
* The time periods in which this variable exists
* Reverse coding
* Calculations (composite variables, scores)
* Notes (such as versions/changes to this variable)
* Question number (order number of the question on the measure)
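If your team works in R, a dictionary with fields like these can be drafted as a plain rectangular table and exported for everyone to fill in. A minimal sketch, with hypothetical variable names and codes (not a prescribed standard):
```{r, eval=FALSE}
# A minimal data dictionary skeleton; the variables and codes shown
# are hypothetical examples
dictionary <- data.frame(
  var_name     = c("stu_id", "int", "svy1_w1"),
  var_label    = c("Student study ID", "Intervention group", "Survey item 1, wave 1"),
  form         = c("assigned", "assigned", "Student survey"),
  values       = c("1000-1999", "0 = No; 1 = Yes", "1-4"),
  type         = c("numeric", "numeric", "numeric"),
  missing_code = c(NA, NA, "-99"),
  notes        = c("", "", "Reverse coded")
)

# Export as a .csv so the team can keep updating it as a living document
write.csv(dictionary, "student_data-dictionary.csv", row.names = FALSE)
```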
You may consider having separate data dictionaries per expected final dataset. So for example, if you plan to have a final teacher dataset and a final student dataset, you may want to have a student dictionary and a teacher dictionary rather than having only one data dictionary and marking teacher or student in the universe column.
As I mentioned earlier, it is important to start a data dictionary **before** ever collecting data. You should be able to begin filling in information for the data you have control over. For example, if you are making and administering the survey, you should know what variables will be in that data, how you want to name them, and how they should be coded. However, you may not know this information for other variables until after data collection. For example, if you are administering a 3rd party assessment, you may not know what the exported data will look like until after you have collected the data and it has been scored. Just remember, this is a living document that will be updated continually throughout the life of the project.
**In order to start a data dictionary, you should gather the following information**:
* What measures are we collecting? (Ex: Student Assessment, Teacher Observation, Teacher Survey, Principal Interview)
* What are the items/scales included?
* What is your variable naming protocol? (See [variable naming in Style Guide](https://cghlewis.github.io/mpsi-data-training/training_3.html#Variable_Naming))
* Do your items/scales have pre-determined value coding rules, or can you assign your own? What is your value coding protocol? (See [value coding in Style Guide](https://cghlewis.github.io/mpsi-data-training/training_3.html#Value_Coding))
It's also good to start documenting the variables you aren't collecting but are **assigning** or **deriving** (a brief R sketch of ID assignment follows the list below).
* Variables you are assigning:
    + To maintain anonymity and to track participants and merge data over time, you will be assigning identifiers.
        + Participant/Study ID (*stu_id*, *tch_id*)
        + Location/Site ID (*sch_id*, *dist_id*)
    + Depending on your study design, you may also need to account for the following:
        + Treatment/condition/intervention group (Ex: *int*)
        + Time/wave (*year*, *wave*, *cohort*)
        + Randomization blocks (*class*, *group*)
* Variables you derive:
    + These include things such as composite scores, weights, or continuous variables grouped into categories. You can start to add these variables and their calculations/logic to your data dictionary.
    + Ex: *scale_mean*, *sch_size_cat*
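As a brief illustration of ID assignment in R (the roster, school IDs, and ID range below are all hypothetical), you might draw study IDs in random order so the ID itself reveals nothing about enrollment order or identity:
```{r, eval=FALSE}
# Hypothetical roster; in practice, names stay in a restricted-access file
roster <- data.frame(
  last_name = c("Alvarez", "Burns", "Chen"),
  sch_id    = c(101, 101, 102)
)

# Sample study IDs without replacement, in random order
set.seed(1234)
roster$stu_id <- sample(1000:1999, nrow(roster))
```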
:::question
**Quick thought: Accounting for Time**
When deciding how to account for time, consider how you plan to merge your data across time.
a) Merge wide (each row is a unique participant and all associated time points are in that row), then append time to your variables (prefix or suffix) <br>
b) Merge long (a unique participant occurs in multiple rows for each time period collected), then time can be its own variable/s
:::
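A small base R sketch of those two shapes, using hypothetical score data:
```{r}
# Two waves of hypothetical scores for the same students
w1 <- data.frame(stu_id = 1:3, score = c(10, 12, 9))
w2 <- data.frame(stu_id = 1:3, score = c(14, 15, 11))

# (a) Merge wide: one row per participant, time appended as a variable suffix
wide <- merge(w1, w2, by = "stu_id", suffixes = c("_w1", "_w2"))

# (b) Append long: one row per participant per wave, time as its own variable
long <- rbind(transform(w1, wave = 1), transform(w2, wave = 2))
```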
<br>
Additional reading on data dictionaries can be found here:
📑 [OSF](https://help.osf.io/hc/en-us/articles/360019739054-How-to-Make-a-Data-Dictionary)
📑 [USDA](https://data.nal.usda.gov/data-dictionary-purpose)
📑 [University of Helsinki](https://www.helsinki.fi/en/research/guide-for-data-documentation#section-72708)
📑 [Karl Broman](http://kbroman.org/dataorg/pages/dictionary.html)
📑 [Penn State](https://www.methodology.psu.edu/ra/most/redcap/)
📑 [Education Data Done Right](https://www.eddatadoneright.com/)
<br>
### Codebook
Like a data dictionary, a codebook captures variable information, as well as project-level information. It should capture everything a user would need to know in order to understand and correctly interpret your data. It is typically part of the metadata added to a data archive, or is given to those who request your data after the project is complete. It usually takes the form of a plain text file (.txt), a PDF (.pdf), or an extensible markup language file (.xml), rather than a proprietary format such as .docx. If you embed metadata into your data, such as variable labels, value labels, and missing values, several statistical programs (including R, SPSS, SAS, and Stata) can export codebooks for your final datasets. You can make one large codebook that includes all data for your project, or make separate codebooks by dataset (ex: teacher-level dataset codebook), or even by measure (ex: student survey codebook).
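For instance, in R you might embed variable and value labels as attributes with the labelled package (the variable below is hypothetical); tools such as the R codebook package linked at the end of this section can then render a codebook from those attributes:
```{r, eval=FALSE}
library(labelled)  # assumes the labelled package is installed

svy <- data.frame(tch_rate = c(1, 3, 2))

# Embed variable-level metadata directly in the data as attributes
var_label(svy$tch_rate)  <- "Teacher rating of student behavior"
val_labels(svy$tch_rate) <- c(excellent = 1, good = 2, fair = 3, poor = 4)

# Statistical formats such as SPSS .sav retain these labels on export, ex:
# haven::write_sav(svy, "teacher_survey.sav")
```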
📑 Here is a nice example of a [codebook](https://ddialliance.org/sites/default/files/06551.pdf).
Your codebook should capture things such as:
* Study title
* Names of Investigators
* Table of contents
* Purpose and format of the codebook
* Variable level details
    + Variable name
    + Variable label
    + Question text
    + Measurement unit
    + Coded values or value range (1, 2, 3, 4)
    + Value labels (*excellent*, *good*, *fair*, *poor*)
    + Summary statistics
    + Missing data values
    + Skip patterns
    + Notes
    + If data is in text format, indicate the position of each variable
* Computations
* Imputations
Other optional content for a codebook:
* Citations for scales used and published reliability
* Data collection instruments
* Consent agreements
* Methodological details
* Flowchart of data collection instruments/screeners
* Study timeline
*Last, I have found that the term data dictionary is sometimes used synonymously with codebook. I don't think the name matters, but for the sake of this training I am specifying a data dictionary as a document in tabular form with only variable-level information, and a codebook as a document usually in text format. You can keep either or both types of documentation. I think the data dictionary as laid out in this training is the easier document to update throughout the life of the project and assists with project implementation, while I view the codebook as a document to summarize the final datasets at the end of the project (and to be included with data archives/requests).*
Further information on codebooks can be found here:
📑 [ICPSR](https://www.icpsr.umich.edu/icpsrweb/content/shared/ICPSR/faqs/what-is-a-codebook.html)
📑 [SAMHDA](https://www.datafiles.samhsa.gov/faq/what-codebook-nid3440)
📑 [Kai Horstmann](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwjb8-TwpMPsAhUM2qwKHXFdB1cQFjAKegQIBBAC&url=https%3A%2F%2Fosf.io%2F72hrh%2F%3Faction%3Ddownload%26version%3D2&usg=AOvVaw340qQlC9mDMzc444K_dkhT)
📑 [The R package Codebook](https://rubenarslan.github.io/codebook/index.html)
<br>
### README
A README is a standalone plain text (.txt), markdown (.md), PDF (.pdf), or extensible markup language (.xml) file that explains your files. README files are best known for their use in programming but have become more prevalent in research. It is recommended to make one README file per dataset. This standalone document will accompany each dataset; it should be named so it is easily associated with the dataset, and it should be housed in the same folder as the data.
A README should capture information such as the following (a scaffolding sketch follows the list):
* General information (Title of dataset, contact information, key dates, funding)
* File information (Description, format, creation date, versions)
* Access information (Licenses, recommended citation, associated publications)
* Methodology (Consent rates, data collection methods, data processing methods, software used, quality-assurance procedures)
* Optional: Your codebook can also be part of your README, rather than a separate document
* Here is an [example](https://libguides.wustl.edu/ld.php?content_id=29108244)
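If it helps, a skeleton with those headings can even be scaffolded from R and then filled in by hand (the file names here are hypothetical, borrowing the district-data example used later in this training):
```{r, eval=FALSE}
# Scaffold a README skeleton next to the dataset; the headings mirror
# the categories above and are meant to be completed by hand
writeLines(
  c("README for district-data_clean_2020_10_09.csv",
    "",
    "# General information",
    "# File information",
    "# Access information",
    "# Methodology"),
  "README_district-data.txt"
)
```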
Excellent examples, templates, and further details can be found here:
📑 [README with Codebook included: University of Munster](https://osf.io/y2gjr/)
📑 [README template: Oregon State](https://guides.library.oregonstate.edu/ld.php?content_id=45294345)
📑 [README template: Cornell](https://cornell.app.box.com/v/ReadmeTemplate)
📑 [README template: OSF](https://osf.io/sj8xv/)
📑 [README recommended content: UCI](https://guides.lib.uci.edu/datamanagement/readme)
📑 [README recommended content: Cornell](https://data.research.cornell.edu/content/readme)
📑 [README recommended content: IHEID](https://libguides.graduateinstitute.ch/rdm/readme)
<br>
### Metadata
Metadata is *data about data*. They are structured data that "provide information about the dataset to help people find, understand, and use your data" ([IES](https://ies.ed.gov/funding/datasharing_faq.asp)). Most of the time when you hear the term metadata, it will be referring to what I am going to call project-level metadata.
1. Project-level metadata is a type of descriptive and legal metadata that aids in finding your project through internet searches. It is like the "label of the dataset". At minimum the information captured should include who created the data, when the data were created, a title and description of the data, and a unique identifier for the data. This is the type of data we discussed capturing earlier within the *General*, *File*, and *Access* portion of our *README* files.
Common elements of project-level metadata include:
```{r, echo=FALSE, fig.align="center", out.width='80%'}
knitr::include_graphics("img/metadata.png")
```
Source: [University of Portland](https://libguides.up.edu/datamanagement/documentation)
Metadata can also be categorized further into data-level and variable-level metadata:
2. Data-level metadata, which might be considered administrative or technical metadata, focuses more on the details of each dataset, such as the instruments used to collect the data and the software used to process it. Luckily, creating this metadata requires no extra effort because it is the exact information captured in sections like *Methodology* in your *README* files above.
3. Variable-level metadata, which might be considered structural metadata, is data about the variables in your dataset. This can be both incorporated into your data, by adding attributes like variable and value labels, and accounted for externally; it includes all the information discussed earlier for your *codebook* and/or *data dictionary*. This metadata should also describe how datasets relate to one another (ex: the student dataset is linked to the teacher dataset through *tch_id*). If your data is qualitative in nature (such as documents, photos, videos, or sound files), metadata such as date/time/location, tags/keywords, or measurement information can sometimes be embedded directly into your files ([IHEID](https://libguides.graduateinstitute.ch/rdm/qualitative)).
The most notable thing about metadata is that many fields follow metadata standards that aid in the "retrieving and indexing" of your data ([University of Arizona](https://data.library.arizona.edu/best-practices/data-documentation-readme-metadata)). Standards include things such as agreed-upon formats, vocabularies, and structure. For example, social science often follows the Data Documentation Initiative ([DDI](https://ddialliance.org/)) standards. However, [IES](https://ies.ed.gov/funding/datasharing_faq.asp) states that education has no standards and that instead "researchers should document everything and strive to make notation as interpretable as possible". Metadata can either be embedded within data or included in a separate file such as a README.txt or .pdf file.
If you plan to archive your data, the data repository you plan to use may have standards in place that you will need to follow. Consult your repository's website for detailed information on what is required for archival. Many universities have archives that are available for use and IES also has recommended data repositories that can be found [here](https://ies.ed.gov/funding/datasharing_faq.asp).
Additional metadata information can be found here:
📑 [Oregon State](https://guides.library.oregonstate.edu/research-data-services/data-management-metadata#:~:text=Project%2Dlevel%20metadata%20describes%20the,Dataset%20title)
📑 [UCI](https://guides.lib.uci.edu/datamanagement/describe)
📑 [NCSU](https://www.lib.ncsu.edu/do/data-management/metadata)
📑 [ICPSR](https://www.icpsr.umich.edu/web/pages/datamanagement/dmp/plan.html)
📑 [DDI](https://ddialliance.org/training/getting-started/data-catalog)
📑 [ADS](https://guides.archaeologydataservice.ac.uk/g2gp/CreateData_1-2#ref-CreateData_1-2-9)
📑 [University of Helsinki](https://www.helsinki.fi/en/research/guide-for-data-documentation)
📑 [London School of Economics and Political Science](https://www.lse.ac.uk/library/research-support/research-data-management/metadata-and-documentation)
📑 [Washington University in St. Louis](https://libguides.wustl.edu/c.php?g=47355&p=305263)
📑 [Data Management for Researchers](https://pelagicpublishing.com/products/data-management-for-researchers-briney)
📑 [DMP Tool General Guidance](https://dmptool.org/general_guidance)
📑 [CESSDA](https://www.cessda.eu/Training/Training-Resources/Library/Data-Management-Expert-Guide/2.-Organise-Document/Documentation-and-metadata)
📑 [Washington University](https://libguides.wustl.edu/drmr/dataprep)
📑 [Jessica Rex](https://zenodo.org/record/3534954#.YFIxBZ1KiUk)
<br>
### Miscellaneous
Last, you should consider documenting **all** of your data processes to allow for reproducibility. This may fit into one of the above-mentioned documents, or it may be a separate text or markdown file (such as a README), a syntax file, or even an Excel file. Keep this document in the folder related to the content and name it accordingly.
Consider tracking things such as:
* Versioning (Ex: `README_district-file-changelog.txt`):
    * If you use version control, this is easy to track through commit messages or commenting. However, if you do not use software that has version control, consider keeping a document, such as a changelog, that tracks the differences between files.
        + `district-data_2020-10-02.xlsx`: File received from District A containing 2019-20 test scores
        + `district-data_2020-10-03.xlsx`: File received from District A with 5 additional students added after `district-data_2020-10-02.xlsx` was received
        + `district-data_2020-10-04.xlsx`: File received from District A with test scores updated for 30 students because errors were found in `district-data_2020-10-03.xlsx`
<br>
* Data cleaning plan (Ex: `district_data-cleaning-plan.md`)
    * This will be discussed more in later trainings, but a written plan can be helpful for communicating across your team and as a reminder to your future self, and it can also be added to your documentation or a technical appendix for a report. This plan can be a standalone text file, a markdown file, or comments in a syntax file, and it should describe, in *prose*, the steps you plan to follow to take your data from raw form to final, clean/analysis form. Other people may refer to this as a ["Chain of Processing"](https://www.teaguehenry.com/strings-not-factors/2021/1/24/data-management-for-researchers-three-terrifying-tales). This allows anyone to follow your data cleaning process, even if they don't understand your actual code (ex: R or Stata code).
<br>
* Files used in data cleaning process (Ex: `README_district_data-cleaning-process.txt`)
    * Data cleaning plan: `district_cleaning-plan.md`
    * Syntax: `district_cleaning-syntax.R`
    * Input: `district-data_raw_2020_09_08.xlsx`
    * Output: `district-data_clean_2020_10_09.csv`
<br>
* A setup file for the steps to create tables and reports (Ex: `00_setup.md`); see the runner sketch after this list:
    * Step 1: Run the file `01_clean-the-data.R` to clean the data
    * Step 2: Run the file `02_check-errors.R` to check for errors
    * Step 3: Run the file `03_run-report.R` to create the report
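Those same steps can also be captured in a small runner script, so anyone can reproduce the pipeline in order. A sketch using the hypothetical file names above:
```{r, eval=FALSE}
# 00_setup.R: run the full pipeline in order
source("01_clean-the-data.R")  # Step 1: clean the data
source("02_check-errors.R")    # Step 2: check for errors
source("03_run-report.R")      # Step 3: create the report
```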
<br>
### Conclusion
The main takeaway is that it is important to document everything. Not only is documentation required by funders and data repositories, and necessary for those submitting data requests, but it is also crucial to the integrity and reproducibility of your project. While you should try to standardize your documentation as much as possible, don't get stuck on the details of format or structure. Instead, focus on getting the content, rules, and procedures down before you forget them.
Although *publishing data* will be covered in a later module, I did want to wrap up by saying that when it comes time to publish your data (whether in a repository or through your own data request system), plan to include the following documentation with your final datasets:
* README file (for each dataset) - including the project-level metadata
* Data dictionaries and/or codebooks
📑 If you want to see an amazingly thorough (and very long) example of data documentation, explore the [Early Childhood Longitudinal Study, Kindergarten Class of 2010-11 User's Manual](https://nces.ed.gov/pubs2015/2015074.pdf) from IES.