Merge pull request #1241 from GbotemiB/fix-paths
Fix Paths for custom data
davide-f authored Dec 20, 2024
2 parents 7185b74 + 619c0c2 commit ec37fe1
Showing 3 changed files with 71 additions and 63 deletions.
15 changes: 9 additions & 6 deletions doc/release_notes.rst
@@ -13,10 +13,13 @@ This part of documentation collects descriptive release notes to capture the main

**New Features and Major Changes**

+* Include option in the config to allow for custom airport data `PR #1241 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/1241>`__


**Minor Changes and bug-fixing**



PyPSA-Earth 0.5.0
=================

@@ -54,15 +57,15 @@ PyPSA-Earth 0.5.0

* Enable configfile specification for mock_snakemake `PR #1135 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/1135>`__

-* Removed duplications of devendencies in environment.yaml `PR #1128 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/1128>`_
+* Removed duplications of devendencies in environment.yaml `PR #1128 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/1128>`__

* Fix pre-commit docformatter python issue. `PR #1153 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/1153>`__

* Drop duplicate entries in `AL_production.csv` data used in `build_industry_demand` rule `PR #1143 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/1143>`__

* Fix bugs in `prepare_sector_network.py` related to links with H2 buses and bug of re-addition of H2 and battery carriers in present `PR #1145 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/1145>`__

-* Drop entries that contain non-string elements in country column of `CO2_emissions_csv` data in `prepare_transport_data_input.py` script `PR #1166 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/1166>`_
+* Drop entries that contain non-string elements in country column of `CO2_emissions_csv` data in `prepare_transport_data_input.py` script `PR #1166 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/1166>`__

* Local tests are now run with `make test`. This uses a `Makefile` which runs snakemake calls with different configurations. `PR #1053 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/1053>`__

@@ -72,15 +75,15 @@ PyPSA-Earth 0.5.0

* Adds CI to update keep pinned environment files up to date. `PR #1183 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/1183>`__ and `PR #1210 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/1210>`__

-* Revise ports data for export in `add_export.py` related to sector model `PR #1175 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/1175>`_
+* Revise ports data for export in `add_export.py` related to sector model `PR #1175 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/1175>`__

-* Restore string values of tech_colors in config file `PR #1205 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/1205>`_
+* Restore string values of tech_colors in config file `PR #1205 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/1205>`__

* Drop vrestil dependency `PR #1220 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/1220>`__

-* Remove duplicate entries from hydrogen export ports `PR #1233 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/1233>`_
+* Remove duplicate entries from hydrogen export ports `PR #1233 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/1233>`__

-* Fix the environment placing a version limit to numpoly `PR #1237 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/1237>`_
+* Fix the environment placing a version limit to numpoly `PR #1237 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/1237>`__
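Most of the single-character fixes in this file replace a trailing `_` with `__`. In reStructuredText, a single trailing underscore defines a *named* hyperlink target keyed on the link text, so reusing the same text elsewhere triggers a duplicate-target warning from Sphinx; a double underscore makes the link *anonymous*, so identical link text can be repeated freely. A minimal contrast (URLs are placeholders):

```rst
Named target; a second "`docs <...>`_" with the same text would collide:

`docs <https://example.org/a>`_

Anonymous targets; identical link text is fine:

`docs <https://example.org/b>`__
`docs <https://example.org/c>`__
```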

PyPSA-Earth 0.4.1
=================
5 changes: 3 additions & 2 deletions scripts/prepare_airports.py
@@ -8,6 +8,7 @@

import numpy as np
import pandas as pd
+from _helpers import BASE_DIR

# from _helpers import configure_logging

@@ -94,8 +95,8 @@ def preprocess_airports(df):
# country_list = country_list_to_geofk(snakemake.config["countries"])'

    if snakemake.params.airport_custom_data:
-        custom_airports = Path.joinpath("data", "custom", "airports.csv")
-        shutil.move(custom_airports, snakemake.output[0])
+        custom_airports = Path(BASE_DIR).joinpath("data", "custom", "airports.csv")
+        shutil.copy(custom_airports, snakemake.output[0])
    else:
        # Prepare downloaded data
        download_data = download_airports()
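The two fixed lines address two separate bugs. `Path.joinpath(...)` was called on the `Path` class rather than an instance, so `"data"` is treated as `self` and the call fails at runtime; the fix builds the path from `BASE_DIR` (the repository root exported by `_helpers`), which also makes it independent of the working directory. Separately, `shutil.move` would delete the user's custom file on the first run, whereas `shutil.copy` leaves it in place for reruns. A minimal sketch, using a temporary directory to stand in for `BASE_DIR`:

```python
import shutil
import tempfile
from pathlib import Path

# Stand-in for the repository root that _helpers.BASE_DIR points at.
BASE_DIR = Path(tempfile.mkdtemp())

# Old pattern: joinpath called on the Path *class*, so "data" is treated
# as `self` (a plain str) and the call raises.
try:
    Path.joinpath("data", "custom", "airports.csv")
except (TypeError, AttributeError) as err:
    print(f"class-level joinpath fails: {type(err).__name__}")

# New pattern: build the path from an instance anchored at BASE_DIR.
src = Path(BASE_DIR).joinpath("data", "custom", "airports.csv")
src.parent.mkdir(parents=True, exist_ok=True)
src.write_text("id,name\n1,demo\n")

dst = BASE_DIR / "airports_output.csv"  # hypothetical output path
# copy instead of move: the custom input survives for later reruns
shutil.copy(src, dst)
print(src.exists(), dst.exists())
```

After the copy, both files exist, which is exactly the behavioral difference from `shutil.move`.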
114 changes: 59 additions & 55 deletions scripts/prepare_ports.py
@@ -10,6 +10,7 @@
import country_converter as coco
import numpy as np
import pandas as pd
+from _helpers import BASE_DIR

# from _helpers import configure_logging

@@ -76,69 +77,72 @@ def filter_ports(dataframe):
# store_path_data = Path.joinpath(Path().cwd(), "data")
# country_list = country_list_to_geofk(snakemake.config["countries"])'

-    df = download_ports().copy()
-
-    # Add ISO2 country code for each country
-    df = df.rename(
-        columns={
-            "Country Code": "country_full_name",
-            "Latitude": "y",
-            "Longitude": "x",
-            "Main Port Name": "name",
-        }
-    )
-    df["country"] = df.country_full_name.apply(
-        lambda x: coco.convert(names=x, to="ISO2", not_found=None)
-    )
-
-    # Drop small islands that have no ISO2:
-    df = df[df.country_full_name != "Wake Island"]
-    df = df[df.country_full_name != "Johnson Atoll"]
-    df = df[df.country_full_name != "Midway Islands"]
-
-    # Select the columns that we need to keep
-    df = df.reset_index()
-    df = df[
-        [
-            "World Port Index Number",
-            "Region Name",
-            "name",
-            "Alternate Port Name",
-            "country",
-            "World Water Body",
-            "Liquified Natural Gas Terminal Depth (m)",
-            "Harbor Size",
-            "Harbor Type",
-            "Harbor Use",
-            "country_full_name",
-            "y",
-            "x",
-        ]
-    ]
-
-    # Drop ports that are very small and that have unknown size (Unknown size ports are in total 19 and not suitable for H2 - checked visually)
-    ports = df.loc[df["Harbor Size"].isin(["Small", "Large", "Medium"])]
-
-    ports.insert(8, "Harbor_size_nr", 1)
-    ports.loc[ports["Harbor Size"].isin(["Small"]), "Harbor_size_nr"] = 1
-    ports.loc[ports["Harbor Size"].isin(["Medium"]), "Harbor_size_nr"] = 2
-    ports.loc[ports["Harbor Size"].isin(["Large"]), "Harbor_size_nr"] = 3
-
-    df1 = ports.copy()
-    df1 = df1.groupby(["country_full_name"]).sum("Harbor_size_nr")
-    df1 = df1[["Harbor_size_nr"]]
-    df1 = df1.rename(columns={"Harbor_size_nr": "Total_Harbor_size_nr"})
-
-    ports = ports.set_index("country_full_name").join(df1, how="left")
-
-    ports["fraction"] = ports["Harbor_size_nr"] / ports["Total_Harbor_size_nr"]
-
-    ports.to_csv(snakemake.output[0], sep=",", encoding="utf-8", header="true")
-
-    if snakemake.params.custom_export:
-        custom_export_path = Path.joinpath("data", "custom", "export_ports.csv")
-        shutil.move(custom_export_path, snakemake.output[1])
-    else:
+    if snakemake.params.custom_export:
+        custom_export_path = Path(BASE_DIR).joinpath(
+            "data", "custom", "export_ports.csv"
+        )
+        shutil.copy(custom_export_path, snakemake.output[1])
+    else:
+
+        df = download_ports().copy()
+
+        # Add ISO2 country code for each country
+        df = df.rename(
+            columns={
+                "Country Code": "country_full_name",
+                "Latitude": "y",
+                "Longitude": "x",
+                "Main Port Name": "name",
+            }
+        )
+        df["country"] = df.country_full_name.apply(
+            lambda x: coco.convert(names=x, to="ISO2", not_found=None)
+        )
+
+        # Drop small islands that have no ISO2:
+        df = df[df.country_full_name != "Wake Island"]
+        df = df[df.country_full_name != "Johnson Atoll"]
+        df = df[df.country_full_name != "Midway Islands"]
+
+        # Select the columns that we need to keep
+        df = df.reset_index()
+        df = df[
+            [
+                "World Port Index Number",
+                "Region Name",
+                "name",
+                "Alternate Port Name",
+                "country",
+                "World Water Body",
+                "Liquified Natural Gas Terminal Depth (m)",
+                "Harbor Size",
+                "Harbor Type",
+                "Harbor Use",
+                "country_full_name",
+                "y",
+                "x",
+            ]
+        ]
+
+        # Drop ports that are very small and that have unknown size (Unknown size ports are in total 19 and not suitable for H2 - checked visually)
+        ports = df.loc[df["Harbor Size"].isin(["Small", "Large", "Medium"])]
+
+        ports.insert(8, "Harbor_size_nr", 1)
+        ports.loc[ports["Harbor Size"].isin(["Small"]), "Harbor_size_nr"] = 1
+        ports.loc[ports["Harbor Size"].isin(["Medium"]), "Harbor_size_nr"] = 2
+        ports.loc[ports["Harbor Size"].isin(["Large"]), "Harbor_size_nr"] = 3
+
+        df1 = ports.copy()
+        df1 = df1.groupby(["country_full_name"]).sum("Harbor_size_nr")
+        df1 = df1[["Harbor_size_nr"]]
+        df1 = df1.rename(columns={"Harbor_size_nr": "Total_Harbor_size_nr"})
+
+        ports = ports.set_index("country_full_name").join(df1, how="left")
+
+        ports["fraction"] = ports["Harbor_size_nr"] / ports["Total_Harbor_size_nr"]
+
+        ports.to_csv(snakemake.output[0], sep=",", encoding="utf-8", header="true")
+
        filter_ports(ports).to_csv(
            snakemake.output[1], sep=",", encoding="utf-8", header="true"
        )
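For readers skimming the diff: the processing logic moved under the `else:` branch is unchanged. It weights each port by harbor size (Small=1, Medium=2, Large=3) and then computes each port's fraction of its country's total weight. A condensed sketch of that scheme with made-up data; column names follow the script, while `transform("sum")` stands in for the script's groupby/join detour:

```python
import pandas as pd

# Illustrative data only; column names follow prepare_ports.py.
ports = pd.DataFrame(
    {
        "country_full_name": ["Utopia", "Utopia", "Utopia", "Erewhon"],
        "name": ["A", "B", "C", "D"],
        "Harbor Size": ["Small", "Medium", "Large", "Large"],
    }
)

# Small/Medium/Large -> 1/2/3, as in the ports.loc[...] assignments above
ports["Harbor_size_nr"] = ports["Harbor Size"].map(
    {"Small": 1, "Medium": 2, "Large": 3}
)

# Per-country total weight, then each port's share of its country's capacity
total = ports.groupby("country_full_name")["Harbor_size_nr"].transform("sum")
ports["fraction"] = ports["Harbor_size_nr"] / total

print(ports[["name", "fraction"]].to_string(index=False))
```

Within each country the fractions sum to 1, which is what downstream export allocation relies on.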
