# xl2times

Note: this tool is a work in progress and not yet in a useful state.

`xl2times` is an open source tool to convert TIMES model Excel input files to DD format ready for processing by GAMS. The intention is to make it easier for people to reproduce research results on TIMES models.
TIMES is an energy systems model generator from the International Energy Agency that is used around the world to inform energy policy. It is fully explained in the TIMES Model Documentation.
The Excel input format accepted by this tool is documented in the TIMES Model Documentation PART IV. Additional table types are documented in the VEDA support forum. Example inputs are available at https://github.com/kanors-emr/Model_Demo_Adv_Veda
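For example, you can fetch a local copy of the example inputs with git (assuming the repository is publicly cloneable):

```bash
git clone https://github.com/kanors-emr/Model_Demo_Adv_Veda.git
```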
You can install the latest published version of the tool from PyPI using pip (preferably in a virtual environment):
```bash
pip install xl2times
```
You can also install the latest development version by cloning this repository and running the following command in the root directory:
```bash
pip install .
```
After installation, run the following command to see the basic usage and available options:
```bash
xl2times --help
```
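As a sketch of typical usage, you point the tool at a directory of TIMES Excel files and choose an output directory (both paths below are placeholders; the `--output_dir` and `--verbose` options also appear in the benchmark commands later in this README):

```bash
# Convert a directory of TIMES Excel input files to DD files
xl2times path/to/model_excel_files --output_dir path/to/output_dd --verbose
```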
The tool's documentation is at http://xl2times.readthedocs.io/ and the source is in the `docs/` directory.
The documentation is generated by Sphinx and hosted on ReadTheDocs. We use the following extensions:

- `myst-parser`: to be able to write documentation in markdown
- `sphinx-book-theme`: the theme
- `sphinx-copybutton`: to add copy buttons to code blocks
- `sphinxcontrib-apidoc`: to automatically generate API documentation from the Python package
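To preview the documentation locally, a minimal sketch is below; the package list mirrors the extensions above, but the exact install route (e.g. a docs requirements file) is an assumption, so check the `docs/` directory for the project's actual setup:

```bash
# Install Sphinx and the documentation extensions (unpinned versions; an assumption)
pip install sphinx myst-parser sphinx-book-theme sphinx-copybutton sphinxcontrib-apidoc

# Build the HTML docs from the docs/ source into docs/_build/html
sphinx-build -b html docs docs/_build/html
```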
TODO: currently docstrings are in multiple formats. We should pick one of the Google/NumPy styles and change all docstrings to that format: https://www.sphinx-doc.org/en/master/usage/extensions/napoleon.html#google-vs-numpy
We recommend installing the tool in editable mode (`-e`) in a Python virtual environment:
```bash
python3 -m venv .venv
source .venv/bin/activate
pip install -U pip
pip install -r requirements.txt
pip install -e .[dev]
```
We use the black code formatter. The `pip` command above will install it along with other requirements.
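You can run it over the whole repository from the root directory:

```bash
# Reformat all Python files in place
black .
```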
We also use the pyright type checker; our GitHub Actions check will fail if pyright detects any type errors in your code. You can install pyright in your virtual environment and check your code by running these commands in the root of the repository:

```bash
pip install pyright==1.1.304
pyright
```
Additionally, you can install a git pre-commit hook that will ensure that your changes are formatted and pyright detects no issues before creating new commits:

```bash
pre-commit install
```
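You can also run all the configured hooks against the entire codebase at any time, which is useful before pushing:

```bash
# Run every pre-commit hook on all files, not just staged changes
pre-commit run --all-files
```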
If you want to skip these pre-commit steps for a particular commit (for instance, if pyright detects issues but you still want to commit your changes to your branch), run:

```bash
git commit --no-verify
```
See our GitHub Actions CI workflow (`.github/workflows/ci.yml`) and the utility script `utils/run_benchmarks.py` to see how to run the tool on the DemoS models.
In short, use the commands below to clone the benchmarks data into your local `benchmarks` directory. Note that this assumes you have access to all these repositories (some are private and you'll have to request access); if not, comment out the inaccessible benchmarks in `benchmarks.yml` before running.
```bash
mkdir benchmarks

# Get VEDA example models and reference DD files
# XLSX files are in a private repo for licensing reasons; please request access or replace with your own licensed VEDA example files.
git clone git@github.com:olejandro/demos-xlsx.git benchmarks/xlsx/
git clone git@github.com:olejandro/demos-dd.git benchmarks/dd/

# Get Ireland model and reference DD files
git clone git@github.com:esma-cgep/tim.git benchmarks/xlsx/Ireland
git clone git@github.com:esma-cgep/tim-gams.git benchmarks/dd/Ireland
```
Then to run the benchmarks:
```bash
# Run only a single benchmark by name (see benchmarks.yml for the list of names)
python utils/run_benchmarks.py benchmarks.yml --verbose --run DemoS_001-all | tee out.txt

# Run all benchmarks (without a GAMS run, just comparing CSV data)
python utils/run_benchmarks.py benchmarks.yml --verbose | tee out.txt

# Run benchmarks with regression tests vs the main branch
git checkout -b feature/your_new_changes
# ... make your code changes here ...
git commit -a -m "your commit message"  # code must be committed for the comparison to the `main` branch to run
python utils/run_benchmarks.py benchmarks.yml --verbose | tee out.txt
```
At this point, if you haven't broken anything, you should see something like:

```
Change in runtime: +2.97s
Change in correct rows: +0
Change in additional rows: +0
No regressions. You're awesome!
```
If you have a large increase in runtime, a decrease in correct rows, or fewer rows being produced, then you've broken something and will need to figure out how to fix it.
If your change is causing regressions on one of the benchmarks, a useful way to debug and find the difference is to run the tool in verbose mode and compare the intermediate tables. For example, if your branch has regressions on Demo 1:
```bash
# First, on the `main` branch:
xl2times benchmarks/xlsx/DemoS_001 --output_dir benchmarks/out/DemoS_001-all --ground_truth_dir benchmarks/csv/DemoS_001-all --verbose > before 2>&1

# Then, on your branch:
git checkout my-branch-name
xl2times benchmarks/xlsx/DemoS_001 --output_dir benchmarks/out/DemoS_001-all --ground_truth_dir benchmarks/csv/DemoS_001-all --verbose > after 2>&1

# And then compare the files `before` and `after`:
code -d before after
```
VS Code will highlight the changes in the two files, which should correspond to any differences in the intermediate tables.
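If you don't use VS Code, any text diff tool works for the same comparison, for example:

```bash
# Unified diff of the two verbose logs
diff -u before after | less
```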
To publish a new version of the tool on PyPI, update the version number in `pyproject.toml`, and then run:
```bash
python -m pip install --upgrade build
python -m pip install --upgrade twine
rm -rf dist
python -m build
python -m twine upload dist/*
```
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.