Interested in contributing to TensorFlow Probability? We appreciate all kinds of help!
We gladly welcome pull requests.
Before making any changes, we recommend opening an issue (if one doesn't already exist) to discuss your proposed changes; this lets us give you feedback before you invest significant effort. If the changes are minor, feel free to make them without discussion.
Want to contribute but not sure where to start? Here are a few suggestions:

- Add a new example or tutorial. Located in `examples/`, these are a great way to familiarize yourself and others with TFP tools.
- Solve an existing issue. These range from low-level software bugs to higher-level design problems. Check out the `good first issue` label.
All submissions, including submissions by project members, require review. After a pull request is approved, we merge it. Note that our merging process differs from GitHub's: we pull the change into an internal version control system, which then automatically pushes a git commit to the GitHub repository (with credit to the original author) and closes the pull request.
We use Travis CI for automated style checking and unit tests (discussed in more detail below). A build is triggered when you open a pull request, or when you update it by adding a commit, rebasing, etc.
We test against TensorFlow nightly on Python 2.7 and 3.6, sharding our tests across several build jobs (identified by the `SHARD` environment variable). Linting, in particular, runs only on the first shard, so check that shard's logs for any lint errors.
All pull requests must pass the automated lint checks and unit tests before being merged. Since Travis CI builds can take a while, the following sections describe how to run the lint checks and unit tests locally while you develop your change.
See the TensorFlow Probability style guide. Running `pylint` detects many (but certainly not all) style issues. TensorFlow Probability follows a custom `pylint` configuration.
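For example, you can run the lint checks locally before pushing. The configuration-file path below is an assumption for illustration; check the repository for the actual `pylint` configuration file that CI uses.

```shell
# Install pylint if you don't already have it.
pip install pylint

# Lint a file you changed, using the repository's pylint configuration.
# NOTE: the --rcfile path and the module name below are placeholders;
# substitute the repo's actual config file and the file you edited.
pylint --rcfile=pylintrc tensorflow_probability/python/my_module.py
```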
All TFP code paths must be unit-tested; see this unit-test guide for the recommended test setup. Unit tests ensure that new features (a) work correctly and (b) are guarded against future breaking changes, which lowers maintenance costs.
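As a rough sketch of what such a test can look like, here is a minimal example using Python's standard `unittest` module with a hypothetical numerical helper (TFP's own tests typically subclass `tf.test.TestCase` and run under Bazel; see the unit-test guide for specifics):

```python
import math
import unittest


def softplus(x):
    """Hypothetical helper standing in for a feature under test."""
    # log(1 + exp(x)): a smooth approximation to max(0, x).
    return math.log1p(math.exp(x))


class SoftplusTest(unittest.TestCase):

    def test_output_is_positive(self):
        # Softplus maps every real input to a positive value.
        self.assertGreater(softplus(-10.0), 0.0)

    def test_approximates_identity_for_large_inputs(self):
        # For large x, softplus(x) is approximately x.
        self.assertAlmostEqual(softplus(10.0), 10.0, places=3)


if __name__ == "__main__":
    unittest.main(exit=False)
```

A good test exercises both correctness on typical inputs and behavior at the edges (here, large and very negative inputs), so that future changes that break either are caught automatically.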
To run existing unit tests on CPU, use the following command from the root of the `tensorflow_probability` repository:

```shell
bazel test --copt=-O3 --copt=-march=native //tensorflow_probability/...
```

To run tests on GPU, you just need to ensure the GPU-enabled version of TensorFlow is installed. However, you will also need to include the flag `--jobs=1`, since by default Bazel will run many tests in parallel, and each one will try to claim all the GPU memory:

```shell
bazel test --jobs=1 --copt=-O3 --copt=-march=native //tensorflow_probability/...
```
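During development it is usually faster to run only the tests affected by your change by naming a single Bazel target. The target below is illustrative; substitute the test target for the module you actually changed.

```shell
# Run one test target instead of the whole suite.
# NOTE: the target name is a placeholder; find the real target in the
# BUILD file next to the module you edited.
bazel test --copt=-O3 --copt=-march=native \
  //tensorflow_probability/python/distributions:normal_test
```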
Contributions to this project must be accompanied by a Contributor License Agreement. You (or your employer) retain the copyright to your contribution; this simply gives us permission to use and redistribute your contributions as part of the project. Head over to https://cla.developers.google.com/ to see your current agreements on file or to sign a new one.
You generally only need to submit a CLA once, so if you've already submitted one (even if it was for a different project), you probably don't need to do it again.