Contribute To PyTorch/XLA

We appreciate all contributions. If you are planning to contribute a bug fix for an open issue, please comment on the thread, and we're happy to provide guidance. You are very welcome to pick issues labeled good first issue or help wanted.

If you plan to contribute new features, utility functions, or extensions to the core, please open an issue first and discuss the feature with us. Sending a PR without prior discussion might result in a rejected PR, because we might be taking the core in a different direction than you are aware of.

Building Manually

To build from source:

  • Clone the PyTorch repo, following its instructions:

    git clone --recursive https://github.com/pytorch/pytorch
    cd pytorch/
  • Clone the PyTorch/XLA repo:

    git clone --recursive https://github.com/pytorch/xla.git

Building Docker Image

  • We provide a Dockerfile in docker/ that you can use to build an image with the following command:

    cd xla/
    docker build -t torch-xla -f docker/Dockerfile .
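
Once the image is built, you can start a container from it to work in an environment with torch and torch_xla preinstalled. A minimal usage sketch (assuming the torch-xla tag from the build command above, and that the image provides a bash shell):

    # Start an interactive shell inside the freshly built image.
    docker run -it torch-xla /bin/bash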

Building With Script

  • To build and install torch and torch_xla:

    xla/scripts/build_torch_wheels.sh

Build From Source

  • Apply PyTorch patches:

    xla/scripts/apply_patches.sh
  • Install the Lark parser used for automatic code generation:

    pip install lark-parser
  • Currently PyTorch does not build with GCC 6.x, 7.x, or 8.x (various kinds of internal compiler errors). Clang 7, 8, 9, and 10 are known to work, so install one of them in your VM:

    sudo apt-get install clang-8 clang++-8
    export CC=clang-8 CXX=clang++-8

    You may need to add the following line to your /etc/apt/sources.list file:

    deb http://deb.debian.org/debian/ testing main

    Then run the following command before retrying the Clang installation:

    sudo apt-get update
  • Build PyTorch from source following the regular instructions.

    python setup.py install
  • Install Bazelisk following the instructions. Bazelisk automatically picks a good version of Bazel for the PyTorch/XLA build.

  • Build the PyTorch/XLA source:

    cd xla/
    python setup.py install
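
After both builds complete, a quick import check confirms that the extension is installed and loadable. This is a minimal sketch, not part of the official build steps:

    # Verify that torch and torch_xla import cleanly in the current environment.
    python -c "import torch, torch_xla; print('torch_xla imported OK')"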

Before Submitting A Pull Request:

In the pytorch/xla repo we enforce a coding style for both C++ and Python files. Please format your code before submitting a pull request.

C++ Style Guide

pytorch/xla uses clang-format-7 with a customized style config. If your PR touches the C++ source files, please run the following command before submitting a PR.

# How to install: sudo apt install clang-format-7
# If your PR only changes foo.cpp, run the following in the xla/ folder
clang-format-7 -i -style=file /PATH/TO/foo.cpp
# To format all C++ files, run the following in the xla/ folder
find . -name '*.cpp' -o -name '*.h' | xargs clang-format-7 -i -style=file
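
If you only want to check whether a file needs reformatting, for example from a script, clang-format can report the edits it would make instead of applying them. A small sketch using the -output-replacements-xml flag (any <replacement> entries in the output mean the file is not clang-format clean):

# Report needed edits as XML without modifying the file.
clang-format-7 -style=file -output-replacements-xml /PATH/TO/foo.cpp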

Python Style Guide

pytorch/xla uses yapf (specifically version 0.30.0, since newer versions may not be backward compatible) with a customized style config. If your PR touches the Python source files, please run the following command before submitting a PR.

# How to install: pip install yapf==0.30.0
yapf --recursive -i *.py test/ scripts/ torch_xla/
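
To preview what yapf would change without rewriting any files, you can use its diff mode:

# Print a diff of the proposed formatting changes instead of applying them.
yapf --diff --recursive *.py test/ scripts/ torch_xla/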

Running the Tests

To run the tests, follow one of the options below:

  • Run on local CPU using the XRT client:

    export XRT_DEVICE_MAP="CPU:0;/job:localservice/replica:0/task:0/device:XLA_CPU:0"
    export XRT_WORKERS="localservice:0;grpc://localhost:40934"

    The port 40934 is entirely arbitrary; select any free TCP port you prefer.

  • Run on a Cloud TPU using the XRT client by setting the XRT_TPU_CONFIG environment variable:

    export XRT_TPU_CONFIG="tpu_worker;0;<IP of the TPU node>:8470"

Note that the IP of the TPU node can change if the TPU node is reset. If PyTorch seems to hang at startup, verify that the IP of your TPU node is still the same as the one you have configured.

If you plan to build from source, and hence use the latest PyTorch/TPU code base, we suggest selecting the Nightly builds when you create a Cloud TPU instance.

Then run test/run_tests.sh and test/cpp/run_tests.sh to verify the setup is working.
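
As a quicker sanity check before launching the full suites, you can run a single tensor computation on the configured device. A minimal sketch, assuming the XRT environment variables above are exported and torch_xla is installed:

    # Allocate a tensor on the XLA device and copy it back to the CPU.
    python -c "import torch; import torch_xla.core.xla_model as xm; print(torch.randn(2, 2, device=xm.xla_device()).cpu())"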