
feat: Beginning to flesh out the interface #5

Merged
merged 17 commits into main on Nov 15, 2024

Conversation

lewisjared
Contributor

@lewisjared lewisjared commented Oct 29, 2024

Description

Flesh out a basic interface for a metric.

The goal is to provide an interface that:

  • is flexible enough to support a range of different ways of calling the benchmarking package
  • provides the ability to include some glue code to do REF-specific "stuff"
  • includes type hints, to be explicit about what is available and what output is expected
  • is easily testable
  • is extensible in future

Checklist

Please confirm that this pull request has done the following:

  • Tests added
  • Documentation added (where applicable)
  • Changelog item added to changelog/

@lewisjared lewisjared requested a review from lee1043 October 30, 2024 01:05
@lewisjared lewisjared assigned acordonez and lewisjared and unassigned acordonez Oct 30, 2024
@lewisjared lewisjared requested review from acordonez and nocollier and removed request for lee1043 and acordonez October 30, 2024 01:05
@lewisjared
Contributor Author

@lee1043 @acordonez @nocollier @bouweandela Turns out I can only have a single reviewer per PR until this repo is made public...

This is broadly how I was thinking about approaching the registration of metrics/providers, i.e. each metrics provider has a separate package (and dependencies) inside the repo that:

  • implements a bunch of metrics, which also describe their requirements
  • exposes an instance of a "MetricsProvider" which holds the metrics that can be calculated

Each metrics provider can then run and test metrics locally independently of the rest of the framework.
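The registry side of that could be sketched as below. The `MetricsProvider` name comes from the comment above, but the method names are guesses rather than the final API, and each metric object is simply assumed to carry a `name` attribute.

```python
class MetricsProvider:
    """Collects the metrics a single provider package exposes.

    Each provider package would build one of these at import time and
    register its metrics against it; the framework then only needs the
    provider instance, not the individual metric classes.
    """

    def __init__(self, name: str, version: str) -> None:
        self.name = name
        self.version = version
        self._metrics: dict[str, object] = {}

    def register(self, metric: object) -> None:
        """Add a metric, keyed by its ``name`` attribute."""
        self._metrics[metric.name] = metric  # assumes a `name` attribute

    def metrics(self) -> list[object]:
        """All metrics this provider can calculate."""
        return list(self._metrics.values())
```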

The ref will then find the appropriate providers and roll that info up. How it does that somewhat depends on how we run things. It might not be as easy as import ref-metrics-esmvaltool due to differing dependencies, but I've got some thoughts about how we could manage that.

There will be a follow-up RFC to flesh this out in greater detail.

What exactly is part of the inputs/outputs/configuration objects will also be expanded on as we need it, but it should closely align with @acordonez's EMDS standard.

* origin/main:
  chore: Always install/choose the local .esgpull directory
  fix: Typo in the cache name
  test: Add integration tests
  chore: Test that regeneration occurs
  fix: Typo in file to hash
  docs: Changelog
  ci: Install esgpull config
  docs: Update readme
  chore: Add example of CMEC output from pcmdi
  ci: Add caching of test data
  feat: Fetch some example files from esgf
* main:
  docs: Changelog
  chore: Fix mypy
  test: Fix tests
  chore: Fix mypy
  feat: validate database urls
  chore: Update configuration
  feat: Add sqlalchemy and alembic
  docs: Changelog
  chore: Fix tests
  test: Add tests for the ref package
  feat: Add a config subcommands
  feat: Add a ref package
@lewisjared
Contributor Author

I'm going to merge this now so it doesn't block anything. We can update fields etc. easily at this stage.

@lewisjared lewisjared merged commit 0ba92d8 into main Nov 15, 2024
12 checks passed