A program to create Climate and Forecast datasets with xarray.
To get started, run:

```shell
poetry install
```
Then, for local development, set the variable

```shell
ODM2_CONNECTION_STR=postgresql:///{DATABASE}?host={HOST}&port=5432&user={USERNAME}
```

in your `.env` file, or pass the password in the database URL. You can also export the password outside the `.env` file using:

```shell
export PGPASSWORD=$(gcloud auth print-access-token)
```

See also `config.py`; note that for ferrybox you can set the password in the `TSB_CONNECTION_STR` variable in your `.env` file.
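A minimal `.env` sketch combining the variables above (all values here are hypothetical placeholders, not the project's real credentials):

```ini
# .env — example for local development; replace every value with your own.
ODM2_CONNECTION_STR=postgresql:///odm2?host=localhost&port=5432&user=postgres
# For ferrybox, the password may be embedded in the URL (see config.py):
TSB_CONNECTION_STR=postgresql://postgres:CHANGEME@localhost:5432/tsb
```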
A local static database for tests can be found here. Run the tests with:

```shell
poetry run pytest .
```

To skip tests that require Docker, use:

```shell
poetry run pytest -m "not docker" .
```
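The `-m "not docker"` expression works by deselecting tests tagged with a `docker` marker. A sketch of how such a test would be tagged (the test name and body here are hypothetical, not from the repository):

```python
import pytest


# Tests that need a running Docker daemon are tagged with the "docker"
# marker so they can be deselected with: pytest -m "not docker"
@pytest.mark.docker
def test_thredds_serves_catalog():
    """Hypothetical integration test against a containerized service."""
    ...
```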
The different entry points can be listed with:

```shell
poetry run dscreator --help
```
By default, data will be saved to the `./catalog` folder. This can be changed using the environment variable `STORAGE_PATH`. The program uses a restart file from the given storage location to create new slices in time.
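The time-slicing idea can be sketched as follows. Note that `next_time_slice` is a hypothetical helper for illustration; the project's actual restart logic lives in its storage layer:

```python
import pandas as pd
import xarray as xr


def next_time_slice(ds: xr.Dataset, last_stored: pd.Timestamp, hours: int) -> xr.Dataset:
    """Select the data strictly after the last stored timestamp.

    Hypothetical helper: illustrates creating a new slice in time
    from a restart point, up to a maximum slice length.
    """
    start = last_stored + pd.Timedelta(seconds=1)  # exclude the restart point itself
    stop = last_stored + pd.Timedelta(hours=hours)
    return ds.sel(time=slice(start, stop))


# Toy hourly dataset: 72 hours starting 2024-01-01 00:00.
times = pd.date_range("2024-01-01", periods=72, freq="h")
ds = xr.Dataset({"temperature": ("time", list(range(72)))}, coords={"time": times})

# Resume after 2024-01-01 23:00 and take at most a 24-hour slice.
chunk = next_time_slice(ds, pd.Timestamp("2024-01-01T23:00:00"), hours=24)
```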
```shell
poetry run dscreator sios --stop-after-n-files 2
# or
poetry run dscreator msource-inlet --stop-after-n-files 1 --acdd yes
# or
poetry run dscreator msource-outlet --max-time-slice 240 --stop-after-n-files 2 --acdd ncml
```
For dynamic datasets, add an app to `main.py` that contains:

- An extractor, subclassed from `BaseExtractor` in `sources/base.py`. See `TimeseriesExtractor`.
- A dataset builder, subclassed from the appropriate class in `datasets/base.py`. See `MSourceInletBuilder`.
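A structural sketch of the extractor half of that pattern. The real `BaseExtractor` API in `sources/base.py` is not reproduced here; this uses a simplified stand-in base class and synthetic data purely to illustrate the subclassing shape:

```python
from abc import ABC, abstractmethod

import pandas as pd
import xarray as xr


class BaseExtractor(ABC):
    """Simplified stand-in for sources/base.py's BaseExtractor (hypothetical API)."""

    @abstractmethod
    def fetch_slice(self, start: pd.Timestamp, stop: pd.Timestamp) -> xr.Dataset:
        """Return the source data between start and stop as a Dataset."""


class ToyTimeseriesExtractor(BaseExtractor):
    """Toy extractor returning synthetic data instead of querying a database."""

    def fetch_slice(self, start: pd.Timestamp, stop: pd.Timestamp) -> xr.Dataset:
        times = pd.date_range(start, stop, freq="h")
        return xr.Dataset(
            {"salinity": ("time", [35.0] * len(times))},
            coords={"time": times},
        )


ds = ToyTimeseriesExtractor().fetch_slice(
    pd.Timestamp("2024-01-01T00:00"), pd.Timestamp("2024-01-01T05:00")
)
```

A dataset builder would then take the extractor's output and attach CF metadata before the result is written to the storage location.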
It is also possible to use a notebook, as done for `exceedence_limits.ipynb`.
A local `thredds` server that reads the files can be started using Docker:

```shell
docker compose up
```

and accessed at http://localhost/thredds/catalog/catalog.html.
The local catalog config file can be found in `catalog.xml`. Documentation for working with this configuration file can be found here. The NcML documentation is also useful.
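For orientation, a THREDDS catalog typically exposes a directory of netCDF files through a `datasetScan` element like the one below. This is an illustrative fragment with hypothetical `path` and `location` values, not the contents of the repository's `catalog.xml`:

```xml
<!-- Illustrative datasetScan: serve every .nc file under /data/catalog -->
<datasetScan name="dscreator output" ID="local" path="catalog" location="/data/catalog">
  <filter>
    <include wildcard="*.nc"/>
  </filter>
</datasetScan>
```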
For most tasks, xarray is a good choice. The command-line tool `ncdump` is also useful for small tasks.
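For example, inspecting a file with xarray is a matter of opening it and reading its attributes; the dataset contents and file path below are made up for the demonstration:

```python
import os
import tempfile

import pandas as pd
import xarray as xr

# Build a small CF-style dataset with global and variable attributes.
times = pd.date_range("2024-01-01", periods=3, freq="h")
ds = xr.Dataset(
    {"temperature": ("time", [4.1, 4.3, 4.2])},
    coords={"time": times},
    attrs={"title": "Example dataset"},
)
ds["temperature"].attrs["units"] = "degC"

# Round-trip through netCDF; `ncdump -h example.nc` would print this header.
path = os.path.join(tempfile.mkdtemp(), "example.nc")
ds.to_netcdf(path)
reopened = xr.open_dataset(path)
```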