- ROIs
- HIRHAM
- MAR
- RACMO
- SMB to ROIs
- D to ROI
- BMB to ROI
- Reconstructed
- TMB
- Validation prep
- Uncertainty
- Figures
- Notes
- Overview map
- SMB/D/MB timeseries
- SMB/D/MB vs all others: Reverse accumulated
- This vs. Mouginot (2019): GIS xy
- This vs. Colgan (2019): GIS xy
- This vs. IMBIE/GRACE/VC GIS XY
- HIRHAM MAR RACMO timeseries
- This vs. Mouginot (2019): regions xy
- This vs. Colgan (2019): Sectors xy
- Zwally and Mouginot RCM coverage
- Reconstructed runoff -> VHD
- CRediT
- Helper functions
- Stats
- Environment
- Manuscript
- Makefile
ROI refers to both the citet:zwally_2012 sectors and the citet:mouginot_2019_glacier basins and regions.
Procedure to map the ROIs to the SMB grids:
- ROIs are in the EPSG:4326 projection and need to be re-projected to each SMB product's projection
- HIRHAM
- rotated pole
- MAR
- EPSG:3413
- RACMO
- EPSG:3413
SMB grids do not match the ROIs perfectly. Where a ROI extends beyond the model domain, that is fine: whatever the model produces in the ROI interior is the modeled SMB. It is problematic where the ROIs are smaller than the model domain, because the largest SMB losses occur at the ice sheet edge; if some RCM pixels fall outside every ROI and are not counted, this would introduce large errors.
- For each SMB, determine the mask
- HIRHAM
- GL2offline_CountryGlacierMasks_SD.nc:glacGRL with only the largest cluster used (remove peripheral)
- MAR
- ‘msk’ variable provided in each NetCDF data file. See the code in [[#MAR_ice_mask_50][#MAR_ice_mask_50]] for the algorithm to derive a mask from this floating-point fractional mask, which does not distinguish between the main ice sheet and peripheral glaciers.
- RACMO
- No mask provided. Examine the summer season and assume any cell with any value != 0 is inside the model domain, and any cell whose values sum to 0 over the season is outside
- For all model domain cells outside of a ROI, enlarge the ROI to include them, joining each new cell to the nearest ROI cell that is inside the model domain (see the sketch below)
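A minimal Python sketch of this nearest-ROI fill (hypothetical toy grids, assuming numpy/scipy; scipy's distance transform stands in for the r.grow.distance approach used in the GRASS code below):

import numpy as np
from scipy import ndimage

# Toy grids (hypothetical): ROI ids 1..2, 0 = unassigned; ice = model ice domain
roi = np.array([[0, 1, 1, 0],
                [0, 2, 2, 0],
                [0, 0, 2, 0]])
ice = np.array([[1, 1, 1, 1],
                [0, 1, 1, 1],
                [0, 0, 1, 1]], dtype=bool)

# For every cell, indices of the nearest cell that already has a ROI id
_, (iy, ix) = ndimage.distance_transform_edt(roi == 0, return_indices=True)
roi_expanded = roi[iy, ix]   # fill unassigned cells from the nearest ROI
roi_expanded[~ice] = 0       # keep only cells inside the model ice domain
print(roi_expanded)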
log_info "Loading Mouginot 2019"
g.mapset -c Mouginot_2019
v.import input=${DATADIR}/Mouginot_2019/Greenland_Basins_PS_v1.4.2.shp output=basins snap=1
# remove peripheral ice caps
db.execute sql="DELETE FROM basins WHERE name LIKE 'ICE_CAPS_%'"
v.db.addcolumn map=basins columns="SUBREGION1_num INT,cat_ INT"
db.execute sql="UPDATE basins SET cat_=cat"
# number the regions NW NO NE CW CE SW SE by clock-face position: NO=12, NE=1, CE=3, SE=5, SW=7, CW=9, NW=11
db.execute sql="UPDATE basins SET SUBREGION1_num=11 WHERE SUBREGION1='NW'"
db.execute sql="UPDATE basins SET SUBREGION1_num=12 WHERE SUBREGION1='NO'"
db.execute sql="UPDATE basins SET SUBREGION1_num=1 WHERE SUBREGION1='NE'"
db.execute sql="UPDATE basins SET SUBREGION1_num=3 WHERE SUBREGION1='CE'"
db.execute sql="UPDATE basins SET SUBREGION1_num=9 WHERE SUBREGION1='CW'"
db.execute sql="UPDATE basins SET SUBREGION1_num=5 WHERE SUBREGION1='SE'"
db.execute sql="UPDATE basins SET SUBREGION1_num=7 WHERE SUBREGION1='SW'"
log_info "Loading Zwally 2012"
g.mapset -c Zwally_2012
v.import input=${DATADIR}/Zwally_2012/sectors output=sectors snap=1
grass -c EPSG:4326 G_HIRHAM
# from gridOBC_SD.nc
ncdump -v rlon ~/data/HIRHAM/gridOBC_SD.nc
ncdump -v rlat ~/data/HIRHAM/gridOBC_SD.nc
<<init_bash>>
<<init_grass>>
g.region w=-13.65 e=1.3 s=-9.8 n=12.2 res=0:03
g.region w=w-0.025 e=e+0.025 s=s-0.025 n=n+0.025 -p
g.region save=RCM
g.region res=0:0.5 # ~1 km grid cell resolution
g.region save=sectors --o
r.in.gdal -ol input=NetCDF:${DATADIR}/HIRHAM/daily/DarcySensitivity_RP810_Daily2D_GL2LIN_Darcy_60m_liqCL_wh1_smb_HydroYr_2012_2013_DM_SD.nc:smb band=200 output=SMB
r.region map=SMB region=RCM
v.in.ogr -o input=./dat/Zrot.gpkg output=Z
v.in.ogr -o input=./dat/Mrot.gpkg output=M
d.mon start=wx0
r.colors map=SMB color=viridis
d.erase
d.rast SMB
d.vect Z fill_color=none
d.vect M fill_color=none
d.grid 1:0:0
- https://lists.osgeo.org/pipermail/grass-user/2011-October/062180.html
- This no longer works in GRASS 7.8 https://lists.osgeo.org/pipermail/grass-user/2020-November/081828.html
- Therefore, the following is done once in a VM (Ubuntu 18.04) running GRASS 7.4
rm -fR Gnorm Grot
grass -c EPSG:4326 Gnorm
DATADIR=~/data
v.import input=${DATADIR}/Zwally_2012/sectors output=Z
v.import snap=1 input=${DATADIR}/Mouginot_2019/Greenland_Basins_PS_v1.4.2.shp output=M
cat << EOF > ./Gnorm/PERMANENT/PROJ_INFO
name: General Oblique Transformation
datum: wgs84
towgs84: 0.000,0.000,0.000
proj: ob_tran
o_proj: latlon
ellps: wgs84
a: 6378137.0000000000
es: 0.0066943800
f: 298.2572235630
lat_0: 0.0000000000
lon_0: 180.0000000000
o_lat_p: 18.0
o_lon_p: -200.0
EOF
# rotated_pole:grid_north_pole_latitude = 18. ;
# rotated_pole:grid_north_pole_longitude = -200.
cat << EOF > ./Gnorm/PERMANENT/PROJ_UNITS
unit: degree
units: degrees
meters: .0174532925
EOF
grass -e -c EPSG:4326 Grot
grass ./Grot/PERMANENT
v.proj location=Gnorm input=Z
v.proj location=Gnorm input=M
# g.region vector=Z,M
# d.mon start=wx0
# d.erase
# d.vect Z
# d.vect M
# d.grid 1:0:0
v.out.ogr input=Z output=./dat/Zrot.gpkg # Zwally_2012_HIRHAM.gpkg
v.out.ogr input=M output=./dat/Mrot.gpkg # Mouginot_2019_HIRHAM.gpkg
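For reference, a minimal pyproj sketch of the same rotated-pole transform (a sketch, assuming pyproj is available; the proj string mirrors the PROJ_INFO above, and +to_meter repeats the degree factor from PROJ_UNITS so output coordinates come out in rotated degrees):

from pyproj import Transformer

rotated = ("+proj=ob_tran +o_proj=latlon +o_lat_p=18.0 +o_lon_p=-200.0 "
           "+lon_0=180 +ellps=WGS84 +to_meter=0.0174532925")
t = Transformer.from_crs("EPSG:4326", rotated, always_xy=True)
rlon, rlat = t.transform(-45.0, 72.0)  # lon/lat of a point on the ice sheet
print(rlon, rlat)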
This should be:
<<import_mouginot>>
<<import_zwally>>
But we’re loading different files that have been converted to the rotated pole, so here I **duplicate** those code blocks but change the input filename.
log_info "Loading Zwally 2012"
g.mapset -c Zwally_2012
v.in.ogr -o input=Zwally_2012_HIRHAM.gpkg output=sectors
v.db.dropcolumn map=sectors columns="cat_"
v.db.renamecolumn map=sectors column=cat__1,cat_
log_info "Loading Mouginot 2019"
g.mapset -c Mouginot_2019
v.in.ogr -o input=Mouginot_2019_HIRHAM.gpkg output=basins
# remove peripheral ice caps
db.execute sql="DELETE FROM basins WHERE name LIKE 'ICE_CAPS_%'"
v.db.addcolumn map=basins columns="SUBREGION1_num INT"
# convert NW NO NE CW CE SW SE based on clock
db.execute sql="UPDATE basins SET SUBREGION1_num=11 WHERE SUBREGION1='NW'"
db.execute sql="UPDATE basins SET SUBREGION1_num=12 WHERE SUBREGION1='NO'"
db.execute sql="UPDATE basins SET SUBREGION1_num=1 WHERE SUBREGION1='NE'"
db.execute sql="UPDATE basins SET SUBREGION1_num=3 WHERE SUBREGION1='CE'"
db.execute sql="UPDATE basins SET SUBREGION1_num=9 WHERE SUBREGION1='CW'"
db.execute sql="UPDATE basins SET SUBREGION1_num=5 WHERE SUBREGION1='SE'"
db.execute sql="UPDATE basins SET SUBREGION1_num=7 WHERE SUBREGION1='SW'"
g.mapset PERMANENT
r.in.gdal -ol input="NetCDF:${DATADIR}/HIRHAM/GL2offline_CountryGlacierMasks_SD.nc:glacGRL" output=mask
r.region map=mask region=RCM
r.mapcalc "mask_ice_all = if(mask == 1, 1, null())"
r.clump input=mask_ice_all output=mask_ice_clump
main_clump=$(r.stats -c -n mask_ice_clump sort=desc | head -n1 | cut -d" " -f1)
r.mapcalc "mask_ice = if(mask_ice_clump == ${main_clump}, 1, null())"
We’ll develop this here on the HIRHAM data once, but do so generically so that when working with MAR and RACMO we can just <<expand_sectors>>. The one requirement is a “mask_ice” raster in the PERMANENT mapset which defines the ice domain.
<<expand_zwally>>
<<expand_mouginot>>
log_info "Expanding Zwally sectors to cover RCM domain"
g.mapset Zwally_2012
g.region region=sectors
v.to.rast input=sectors output=sectors use=attr attribute_column=cat_
# r.mapcalc "outside = if(isnull(sectors) & not(isnull(mask_ice)), 1, null())"
# Works fine if limited to contiguous cells (hence main_clump).
# Deosn't work great for distal islands (e.g. NE GL).
# Probably need to jump to v.distance or r.distance if we want to assign these to sectors.
r.grow.distance input=sectors value=value
r.mapcalc "sectors_e = if(mask_ice, int(value), null())" # sectors_enlarged
r.to.vect input=sectors_e output=sectors_e type=area
log_info "Expanding Mouginot basins to cover RCM domain"
g.mapset Mouginot_2019
g.region region=sectors
v.to.rast input=basins output=basins use=attr attribute_column=cat_ labelcolumn=SUBREGION1
# r.mapcalc "outside = if(isnull(basins) & mask_ice, 1, null())"
r.grow.distance input=basins value=value_b
r.mapcalc "basins_e = if(mask_ice, int(value_b), null())"
r.category map=basins separator=":" > ./tmp/basins_cats
r.category map=basins_e separator=":" rules=./tmp/basins_cats
r.to.vect input=basins_e output=basins_e type=area
v.to.rast input=basins output=regions use=attr attribute_column=SUBREGION1_num labelcolumn=SUBREGION1
# r.mapcalc "outside = if(isnull(regions) & mask_ice, 1, null())"
r.grow.distance input=regions value=value_r
r.mapcalc "regions_e = if(mask_ice, int(value_r), null())"
r.category map=regions separator=":" > ./tmp/region_cats
r.category map=regions_e separator=":" rules=./tmp/region_cats
r.to.vect input=regions_e output=regions_e type=area
In the case of HIRHAM, we want to export the ROIs as rasters for eventual work in the XY domain.
mkdir -p tmp
r.out.gdal input=sectors_e@Zwally_2012 output=./tmp/HIRHAM_sectors_e_Zwally_2012.tif
r.out.gdal input=regions_e@Mouginot_2019 output=./tmp/HIRHAM_regions_e_Mouginot_2019.tif
grass ./G_HIRHAM/PERMANENT
g.mapset PERMANENT
d.mon start=wx0
d.erase
d.rast mask_ice
# d.vect basins@Mouginot_2019 fill_color=none
# d.vect sectors@Zwally_2012 fill_color=none
d.rast sectors_e@Zwally_2012
d.rast basins_e@Mouginot_2019
d.rast regions_e@Mouginot_2019
<<init_bash>>
<<init_grass>>
cdo sinfov ${DATADIR}/MAR/3.12/MAR-2000.nc | head -n20
g.region w=-640000 e=860000 s=-3347928 n=-647928 res=20000 -p
g.region w=w-10000 e=e+10000 s=s-10000 n=n+10000 res=20000 -p # adjust from cell center to edges
g.region save=RCM
g.region res=1000 -p
g.region save=sectors
r.external -o source=NetCDF:${DATADIR}/MAR/3.12/MAR-2000.nc:msk output=mask
r.region map=mask region=RCM
r.colors map=mask color=haxby
# r.mapcalc "mask_ice_all = if(mask == 0, null(), 1)"
r.mapcalc "mask_ice_1 = if(mask >= 50, 1, null())"
r.grow input=mask_ice_1 output=mask_ice_all radius=3 new=1
r.clump input=mask_ice_all output=mask_clump
main_clump=$(r.stats -c -n mask_clump sort=desc | head -n1 | cut -d" " -f1)
r.mapcalc "mask_ice = if((mask_clump == ${main_clump}) & (mask > 0.5), 1, null())"
<<import_zwally>>
<<expand_zwally>>
<<import_mouginot>>
<<expand_mouginot>>
g.mapset PERMANENT
grass ./G_MAR/PERMANENT
g.mapset PERMANENT
d.mon start=wx0
d.erase
d.rast mask_ice
# d.vect basins@Mouginot_2019 fill_color=none
# d.vect sectors@Zwally_2012 fill_color=none
d.rast sectors_e@Zwally_2012
d.rast basins_e@Mouginot_2019
d.rast regions_e@Mouginot_2019
<<init_bash>>
<<init_grass>>
cdo -sinfo ${DATADIR}/RACMO/daily/smb_rec.2020_JAS.BN_RACMO2.3p2_ERA5_3h_FGRN055.1km.DD.nc | head -n 9
# x : -638956 to 856044 by 1000 m
# y : -3354596 to -655596 by 1000 m
g.region w=-638956 e=856044 s=-3354596 n=-655596 res=1000 -p
g.region n=n+500 s=s-500 w=w-500 e=e+500 res=1000 -p
g.region save=RCM
g.region res=1000 -p
g.region save=sectors
r.external -o source=NetCDF:${DATADIR}/RACMO/Icemask_Topo_Iceclasses_lon_lat_average_1km.nc:Promicemask output=mask
r.region map=mask region=RCM
r.colors map=mask color=haxby
r.mapcalc "mask_ice_all = if(mask >= 2, 1, null())"
r.clump input=mask_ice_all output=mask_clump
main_clump=$(r.stats -c -n mask_clump sort=desc | head -n1 | cut -d" " -f1)
r.mapcalc "mask_ice = if((mask_clump == ${main_clump}) & (mask > 0.5), 1, null())"
- Nothing to do here because RACMO is already on the EPSG:3413 projection.
<<import_zwally>>
<<expand_zwally>>
<<import_mouginot>>
<<expand_mouginot>>
g.mapset PERMANENT
grass ./G_RACMO/PERMANENT
g.mapset PERMANENT
d.mon start=wx0
d.erase
d.rast mask_ice
# d.vect basins@Mouginot_2019 fill_color=none
# d.vect sectors@Zwally_2012 fill_color=none
d.rast sectors_e@Zwally_2012
d.rast basins_e@Mouginot_2019
d.rast regions_e@Mouginot_2019
import sys
import xarray as xr

# Print one ISO date (YYYY-MM-DD) per time step of a NetCDF file
f = sys.argv[1]
ds = xr.open_dataset(f)
tvar = 'time'  # default; some files name the variable 'TIME'
if 'TIME' in ds.variables:
    tvar = 'TIME'
for t in ds[tvar].values:
    print(str(t)[0:10])
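The SMB loops below call this as dates=$(python ./nc_dates.py ${f}), yielding one YYYY-MM-DD string per model time step.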
<<init_bash>>
<<init_grass>>
log_info "HIRHAM ROI areas"
RCM=HIRHAM
mkdir -p tmp/${RCM}
g.region s=0 w=0 n=441 e=300 res=1 -a
dir=${DATADIR}/${RCM}/daily
f=$(ls ${dir}/*.nc|head -n1) # debug
r.in.gdal -o input="NetCDF:${DATADIR}/HIRHAM/ZwallyMasks_all_SD.nc:cellarea" output=area_HIRHAM
# r.region map=area_HIRHAM region=RCM
# Regions and Sectors
r.in.gdal -o input=./tmp/HIRHAM_sectors_e_Zwally_2012.tif output=sectors_e
r.in.gdal -o input=./tmp/HIRHAM_regions_e_Mouginot_2019.tif output=regions_e
r.region -c map=sectors_e
r.region -c map=regions_e
if [ -z ${RECENT+x} ]; then
f_list=$(ls ${dir}/*.nc) ## initial
log_info "Initial run. Processing all files"
else
f_list=$(ls ${dir}/*.nc | tail -n 30) ## operational
log_warn "RECENT set to ${RECENT}. Processing subset of files"
fi
for f in ${f_list}; do
dates=$(python ./nc_dates.py ${f})
band=0
d=1985-09-01 # debug
for d in ${dates}; do
band=$(( ${band} + 1 ))
log_info "HIRHAM: ${d}"
if [[ -e ./tmp/${RCM}/sector_${d}.bsv ]]; then continue; fi
var=smb
if [[ ${f} == *"SMBmodel"* ]]; then var=gld; fi
r.in.gdal -o input="NetCDF:${f}:${var}" band=${band} output=SMB_raw --o --q
r.mapcalc "SMB = SMB_raw * area_HIRHAM" --o
r.univar -t --q map=SMB zones=sectors_e \
| cut -d"|" -f1,13 \
| datamash -t"|" transpose \
| sed s/^sum/${d}/ \
> ./tmp/${RCM}/sector_${d}.bsv
r.univar -t --q map=SMB zones=regions_e \
| cut -d"|" -f1,13 \
| datamash -t"|" transpose \
| sed s/^sum/${d}/ \
> ./tmp/${RCM}/region_${d}.bsv
# r.univar -t --q map=SMB zones=basins_e \
# | cut -d"|" -f1,13 \
# | datamash -t"|" transpose \
# | sed s/^sum/${d}/ \
# > ./tmp/${RCM}/basin_${d}.bsv
done
done
<<init_bash>>
<<init_grass>>
RCM=MAR
mkdir -p tmp/${RCM}
dir=${DATADIR}/MAR/3.12
f=$(ls ${dir}/MAR-????.nc|head -n1) # debug
if [ -z ${RECENT+x} ]; then
f_list=$(ls ${dir}/*.nc) ## initial
log_info "Initial run. Processing all files"
else
f_list=$(ls ${dir}/*.nc | tail -n 2) ## operational
log_warn "RECENT set to ${RECENT}. Processing subset of files"
fi
for f in ${f_list}; do
dates=$(python ./nc_dates.py ${f})
###
### "band = -1" and "band = band + 2"
### This code is necessary because the SMB raster in the NetCDF files
### has dimensions (time, sector, y, x) where sector, per Xavier, is:
### "k=1 permanent ice, k=2 tundra". GRASS exposes these dimensions as
### interleaved band numbers, so band 1 is Jan 1 ice, band 2 is Jan 1
### tundra, band 3 is Jan 2 ice, etc. Starting at -1 and stepping by 2
### selects only the ice bands.
###
band=-1
d=1986-01-01 # debug
for d in ${dates}; do
band=$(( ${band} + 2 ))
log_info "${RCM}: ${d}"
if [[ -e ./tmp/${RCM}/sector_${d}.bsv ]]; then continue; fi
r.in.gdal -o input="NetCDF:${f}:smb" band=${band} output=SMB_raw --o --q
r.region map=SMB_raw region=RCM
r.mapcalc "SMB = if(mask > 50, SMB_raw * (mask/100), null())" --o
r.univar -t --q map=SMB zones=sectors_e@Zwally_2012 \
| cut -d"|" -f1,13 \
| datamash -t"|" transpose \
| sed s/^sum/${d}/ \
> ./tmp/${RCM}/sector_${d}.bsv
r.univar -t --q map=SMB zones=regions_e@Mouginot_2019 \
| cut -d"|" -f1,13 \
| datamash -t"|" transpose \
| sed s/^sum/${d}/ \
> ./tmp/${RCM}/region_${d}.bsv
# r.univar -t --q map=SMB zones=basins_e@Mouginot_2019 \
# | cut -d"|" -f1,13 \
# | datamash -t"|" transpose \
# | sed s/^sum/${d}/ \
# > ./tmp/${RCM}/basin_${d}.bsv
done
done
<<init_bash>>
<<init_grass>>
RCM=RACMO
mkdir -p tmp/${RCM}
dir=${DATADIR}/${RCM}/daily
f=$(ls ${dir}/*.nc|head -n1) # debug
if [ -z ${RECENT+x} ]; then
f_list=$(ls ${dir}/*.nc) ## initial
log_info "Initial run. Processing all files"
else
f_list=$(ls ${dir}/*.nc | tail -n 2) ## operational
log_warn "RECENT set to ${RECENT}. Processing subset of files"
fi
for f in ${f_list}; do
dates=$(python ./nc_dates.py ${f})
band=0
d=1989-07-01 # debug
for d in ${dates}; do
band=$(( ${band} + 1 ))
log_info "${RCM}: ${d}"
if [[ -e ./tmp/${RCM}/sector_${d}.bsv ]]; then continue; fi
var=smb_rec
if [[ ${f} != *"ERA5"* ]]; then var=SMB_rec; fi # 1986 through 1989
r.in.gdal -o input="NetCDF:${f}:${var}" band=${band} output=SMB --o --q
r.region map=SMB region=RCM
r.univar -t --q map=SMB zones=sectors_e@Zwally_2012 \
| cut -d"|" -f1,13 \
| datamash -t"|" transpose \
| sed s/^sum/${d}/ \
> ./tmp/${RCM}/sector_${d}.bsv
r.univar -t --q map=SMB zones=regions_e@Mouginot_2019 \
| cut -d"|" -f1,13 \
| datamash -t"|" transpose \
| sed s/^sum/${d}/ \
> ./tmp/${RCM}/region_${d}.bsv
# r.univar -t --q map=SMB zones=basins_e@Mouginot_2019 \
# | cut -d"|" -f1,13 \
# | datamash -t"|" transpose \
# | sed s/^sum/${d}/ \
# > ./tmp/${RCM}/basin_${d}.bsv
done
done
<<init_bash>>
for RCM in HIRHAM MAR RACMO; do
# for ROI in sector region basin; do
for ROI in sector region; do
log_info ${RCM} ${ROI}
head -n1 $(ls ./tmp/${RCM}/${ROI}_????-??-??.bsv|tail -n1) > ./tmp/${RCM}_${ROI}.bsv
# head -n1 ./tmp/${RCM}/${ROI}_2000-01-01.bsv > ./tmp/${RCM}_${ROI}.bsv
tail -q -n1 ./tmp/${RCM}/${ROI}_*.bsv >> ./tmp/${RCM}_${ROI}.bsv
done
done
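Each daily .bsv holds two pipe-separated rows: a header of zone categories (from the r.univar "zone" column) and one data row keyed by date, so the step above is just header-plus-stacked-rows. A sketch with invented zone ids and values, read the same way bsv2nc does below:

import io
import pandas as pd

bsv = io.StringIO("zone|1|2|3\n"                 # hypothetical zone categories
                  "1986-01-01|0.12|0.34|0.56\n"  # invented daily sums
                  "1986-01-02|0.11|0.30|0.60\n")
df = pd.read_csv(bsv, delimiter="|", index_col=0, parse_dates=True)
print(df)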
For HIRHAM, resolution is in degrees, not meters (checked below).
- 1 degree of latitude at the equator is 110574 m, so 0.5 arc-minutes (0:00:30, or 0.008333 degrees) = 0.008333 * 110574 = 921.4 m
- 1 degree of latitude at 10 °N is 110608 m, so 110608 * 0.008333 = 921.7 m
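A quick arithmetic check of the cell size in meters, using the constants above:

deg = 0.5 / 60            # 0.5 arc-minutes in degrees (~0.008333)
print(deg * 110574)       # ~921.45 m per cell at the equator
print(deg * 110608)       # ~921.73 m per cell at 10 °N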
import pandas as pd
import xarray as xr
import numpy as np
import datetime
import os
import uncertainties
from uncertainties import unumpy
time = pd.date_range(start = "1986-01-01",
end = datetime.datetime.utcnow().date() + datetime.timedelta(days = 7),
freq = "D")
SMB = xr.Dataset()
SMB["time"] = (("time"), time)
SMB["sector"] = pd.read_csv("./tmp/HIRHAM_sector.bsv", delimiter="|", nrows=0, index_col=0).columns.astype(int)
rstr = {11:'NW', 12:'NO', 1:'NE', 3:'CE', 9:'CW', 5:'SE', 7:'SW'}
rnum = pd.read_csv("./tmp/HIRHAM_region.bsv", delimiter="|", nrows=0, index_col=0).columns.astype(int)
SMB["region"] = [rstr[n] for n in rnum]
# SMB["basin"] = pd.read_csv("./tmp/HIRHAM_basin.bsv", delimiter="|", nrows=0, index_col=0).columns.astype(int)
def bsv2nc(SMB, bsv, dim):
df = pd.read_csv("./tmp/" + bsv + ".bsv", delimiter="|", index_col=0, parse_dates=True).reindex(time)
df.columns = df.columns.astype(np.int64)
if dim == "region":
df.columns = [rstr[n] for n in df.columns]
missing = SMB[dim].values[~pd.Series(SMB[dim].values).isin(df.columns)]
if len(missing) > 0:
df[missing] = np.nan
df = df.reindex(sorted(df.columns), axis=1)
if 'HIRHAM' in bsv:
# grid applied with 'r.mapcalc "SMB = SMB_raw * area_HIRHAM" --o' prior to 'r.univar'
grid = 1
error = 0.15
if 'MAR' in bsv:
grid = 1000 * 1000
error = 0.15
if 'RACMO' in bsv:
grid = 1000 * 1000
error = 0.15
# mm->m grid m^3->kg kg->Gt
CONV = 1E-3 * grid * 1E3 / 1E12
df = df * CONV
SMB[bsv] = (("time", dim), df)
SMB[bsv + '_err'] = (("time", dim), df*error)
return SMB
SMB = bsv2nc(SMB, "HIRHAM_sector", "sector")
SMB = bsv2nc(SMB, "HIRHAM_region", "region")
# SMB = bsv2nc(SMB, "HIRHAM_basin", "basin")
SMB = bsv2nc(SMB, "MAR_sector", "sector")
SMB = bsv2nc(SMB, "MAR_region", "region")
# SMB = bsv2nc(SMB, "MAR_basin", "basin")
SMB = bsv2nc(SMB, "RACMO_sector", "sector")
SMB = bsv2nc(SMB, "RACMO_region", "region")
# SMB = bsv2nc(SMB, "RACMO_basin", "basin")
# uncertainties demo: summing a uarray combines independent errors in quadrature
err = unumpy.uarray([1,1,1], [0.1, 0.15, 0.7])
print('{:.4f}'.format(err.sum()))
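# Manual cross-check of the quadrature sum above:
# (0.1^2 + 0.15^2 + 0.7^2)^0.5 = 0.7228..., so err.sum() prints 3.0000+/-0.7228
print((0.1**2 + 0.15**2 + 0.7**2)**0.5)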
### create SMB from individual SMBs
# for roi in ['sector','region','basin']:
for roi in ['sector','region']:
mean = SMB[['HIRHAM_'+roi, 'MAR_'+roi, 'RACMO_'+roi]].to_array(dim='m').mean('m')
s = 'SMB_' + roi
SMB[s] = (('time',roi), mean.data)
# SMB[s].attrs["long_name"] = "Surface mass balance (RCM mean)"
    # SMB[s].attrs["standard_name"] = "land_ice_mass_transport"
# SMB[s].attrs["units"] = "Gt d-1"
# SMB[s].attrs["coordinates"] = "time region"
s = s + '_err'
SMB[s] = (('time',roi), mean.data * 0.15)
# SMB[s].attrs["long_name"] = "Surface mass balance (RCM mean) uncertainty"
    # SMB[s].attrs["standard_name"] = "land_ice_mass_transport"
# SMB[s].attrs["units"] = "Gt d-1"
# SMB[s].attrs["coordinates"] = "time region"
SMB['SMB'] = (('time'), SMB['SMB_sector'].sum(dim='sector').data)
SMB['SMB_err'] = (('time'), SMB['SMB_sector'].sum(dim='sector').data * 0.09)
fn = './tmp/SMB.nc'
if os.path.exists(fn): os.remove(fn)
SMB.to_netcdf(fn)
Test:
import xarray as xr
ds = xr.open_dataset('./tmp/SMB.nc')
ds[['SMB','HIRHAM_sector','MAR_sector','RACMO_sector']].sum(dim='sector').to_dataframe().plot()
ds[['SMB','HIRHAM_sector','MAR_sector','RACMO_sector']].sum(dim='sector').to_dataframe().head()
time | SMB | HIRHAM_sector | MAR_sector | RACMO_sector |
---|---|---|---|---|
1986-01-01 00:00:00 | 2.3474 | 2.41357 | 2.34658 | 2.28207 |
1986-01-02 00:00:00 | 3.20165 | 3.24823 | 3.35151 | 3.00523 |
1986-01-03 00:00:00 | 4.22387 | 4.19169 | 4.1993 | 4.28062 |
1986-01-04 00:00:00 | 1.5914 | 1.66981 | 1.82024 | 1.28414 |
1986-01-05 00:00:00 | 1.97892 | 1.97788 | 2.44208 | 1.51681 |
Already done in each of the D products:
- citet:mankoff_2020_solid includes the ROI for each gate
- citet:mouginot_2019_forty is provided per Zwally sector
<<init_bash>>
<<init_grass>>
g.mapset -c GF
r.external -o source=NetCDF:${DATADIR}/Karlsson_2021/basalmelt.nc:gfmelt output=GF_ann
g.region raster=GF_ann
r.mapcalc "GF = GF_ann / 365"
r.univar -t --q map=GF zones=sectors_e@Zwally_2012 \
| cut -d"|" -f1,13 \
| datamash -t"|" transpose \
| sed s/^sum/daily/ \
> ./tmp/BMB_GF_sector.bsv
r.univar -t --q map=GF zones=regions_e@Mouginot_2019 \
| cut -d"|" -f1,13 \
| datamash -t"|" transpose \
| sed s/^sum/daily/ \
> ./tmp/BMB_GF_region.bsv
# r.univar -t --q map=GF zones=basins_e@Mouginot_2019 \
# | cut -d"|" -f1,13 \
# | datamash -t"|" transpose \
# | sed s/^sum/daily/ \
# > ./tmp/BMB_GF_basin.bsv
g.mapset -c vel
r.external -o source=NetCDF:${DATADIR}/Karlsson_2021/basalmelt.nc:fricmelt output=vel_ann
g.region raster=vel_ann
r.mapcalc "vel = vel_ann / 365"
TODO: Convert vel from NBK units to… m melt per grid cell?
r.univar -t --q map=vel zones=sectors_e@Zwally_2012 \
| cut -d"|" -f1,13 \
| datamash -t"|" transpose \
| sed s/^sum/daily/ \
> ./tmp/BMB_vel_sector.bsv
r.univar -t --q map=vel zones=regions_e@Mouginot_2019 \
| cut -d"|" -f1,13 \
| datamash -t"|" transpose \
| sed s/^sum/daily/ \
> ./tmp/BMB_vel_region.bsv
# r.univar -t --q map=vel zones=basins_e@Mouginot_2019 \
# | cut -d"|" -f1,13 \
# | datamash -t"|" transpose \
# | sed s/^sum/daily/ \
# > ./tmp/BMB_vel_basin.bsv
- Determine the VHD routing map as per citet:mankoff_2017_VHD
- For each source cell, estimate the contribution to that sector's VHD per unit mass of water
- That is, from the water source, the integrated VHD between source and outlet.
- Rather than doing full routing for each and every interior cell, the integrated VHD per cell can be estimated as the difference between the pressure+elevation terms at the source and at the outlet (see the equation sketch below).
- Then, apply this source map to each day of runoff from one of the RCMs.
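A sketch of that shortcut in equation form (notation mine, chosen to match the r.mapcalc expressions below): for water generated at source cell \(s\) and routed to outlet \(o\), the energy released per unit mass of water is approximately
\begin{equation} e_{s \to o} \approx g \left[ (z_s - z_o) + 0.64\, \frac{\rho_i}{\rho_w} (H_s - H_o) \right], \end{equation}
with \(z\) the bed elevation, \(H\) the ice thickness (so \(0.917 H\) is the pressure head in m w.eq.), and 0.64 the fraction of the pressure-release energy available for melt after the PTT correction derived below.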
Do the initial work in BedMachine (EPSG:3413), then re-project into the MAR grid, because MAR is the RCM with future data.
- from Morlighem et al. (2017)
- See GEUS-Glaciology-and-Climate/ice_discharge#26
log_info "Importing BedMachine"
g.mapset -c VHD
for var in $(echo mask surface bed thickness); do
echo $var
r.external source=${DATADIR}/Morlighem_2017/BMv4_3413/${var}.tif output=${var}
done
g.region raster=surface
g.region res=1000
g.region save=BedMachine
r.colors map=mask color=haxby
r.mapcalc "mask_ice_0 = if(mask == 2, 1, null())"
The ice mask needs to be expanded so that land-terminating glaciers contain 1 grid cell outside the ice domain. This is so that the discharge location has 0 thickness (0 pressure term) and all pressure energy is released; only gravitational potential energy remains. Submarine discharge remains pressurized by the ice thickness.
r.grow input=mask_ice_0 output=mask_ice_1 radius=1.5 new=1
r.mapcalc "mask_01 = if((mask == 0) | (mask == 3), null(), mask_ice_1)"
This also fills in nunataks.
This is done because hydrologic routing terminates at domain boundaries, even if they're inland. We want to route to the ice edge, because eventually that is where all the water goes.
r.colors map=mask color=haxby
r.mapcalc "not_ice = if(isnull(mask_01) ||| (mask != 2), 1, 0)"
# No mask, NULLS are not clumped
r.clump input=not_ice output=clumps
# d.rast clumps
main_clump=$(r.stats -c -n clumps sort=desc | head -n1 | cut -d" " -f1)
r.mask -i raster=clumps maskcats=${main_clump} --o
r.mapcalc "all_ice = 1"
r.clump input=all_ice output=clumps2
# d.rast clumps2
main_clump=$(r.stats -c -n clumps2 sort=desc | head -n1 | cut -d" " -f1)
r.mask raster=clumps2 maskcats=${main_clump} --o
r.mapcalc "mask_ice = MASK"
# ice mask with no islands
# # original mask ice
# r.mask -r
# r.mapcalc "mask_ice_islands = if(mask == 2, 1, null())"
log_info "Calculating subglacial head with k = 1.0"
r.mapcalc "pressure = 1 * 0.917 * thickness"
r.mapcalc "head = mask_ice * bed + pressure"
After calculating the head, we use third-party tools to get the flow direction and streams.
THRESH=300
log_warn "Using threshold: ${THRESH}"
log_info "r.stream.extract..."
r.stream.extract elevation=head threshold=${THRESH} memory=16384 direction=dir stream_raster=streams stream_vector=streams
- The flow direction `dir` is negative where flow leaves the domain. These are the outlets.
- Encode each outlet with a unique id
log_info "Calculating outlets"
r.mapcalc "outlets_1 = if(dir < 0, 1, null())"
r.out.xyz input=outlets_1 | \
cat -n | \
tr '\t' '|' | \
cut -d"|" -f1-3 | \
v.in.ascii input=- output=outlets_uniq separator=pipe \
columns="x int, y int, cat int" x=2 y=3 cat=1
Using `r.stream.basins`, we can get basins for every outlet.
log_info "r.stream.basins..."
g.extension r.stream.basins url=https://trac.osgeo.org/grass/browser/grass-addons/grass7/raster/r.stream.basins # url=https://github.com/OSGeo/grass-addons/tree/grass7/src/raster/r.stream.basins branch=grass7
r.stream.basins -m direction=dir points=outlets_uniq basins=basins_uniq memory=16384 --verbose
https://github.com/OSGeo/grass-addons/tree/grass7/src/raster/r.stream.basins
g.extension r.stream.distance url=https://trac.osgeo.org/grass/browser/grass-addons/grass7/raster/r.stream.distance
r.stream.distance -o stream_rast=outlets_1 direction=dir elevation=head method=downstream difference=delta_head
r.stream.distance -o stream_rast=outlets_1 direction=dir elevation=bed method=downstream difference=delta_z_head
r.stream.distance -o stream_rast=outlets_1 direction=dir elevation=pressure method=downstream difference=delta_p_head
The effective head change is the change in the elevation head plus the change in the pressure head, minus the portion of the pressure-head change that goes into the effect of changing pressure on the phase transition temperature (PTT; citet:mankoff_2017_VHD).
The relationship between changing pressure and changing PTT is,
\begin{equation} PTT = C_T c_p ρ_w, \end{equation}
with \(C_T\) the Clausius-Clapeyron slope (8.6E-8 K Pa-1), \(c_p\) the specific heat of water (4184 J K-1 kg-1), and \(ρ_w\) the density of water (1000 kg m-3).
frink "(8.6E-8 K Pa^(-1)) * (4184 J K^(-1) kg^(-1)) * (1000 kg m^(-3))"
Meaning 0.36 of the pressure-reduction energy release is “lost” to warming the water to match the increased PTT, and 1 - 0.36 = 0.64 of the pressure-reduction energy release can be used to melt basal ice.
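The same dimensionless factor computed in Python (equivalent to the frink call above):

C_T = 8.6E-8    # Clausius-Clapeyron slope [K Pa-1]
c_p = 4184      # specific heat of water [J K-1 kg-1]
rho_w = 1000    # density of water [kg m-3]
print(C_T * c_p * rho_w)  # 0.359824 -> ~0.36 (dimensionless)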
From citet:mankoff_2017_VHD or citet:karlsson_2021:
# The effective change in head is...
r.mapcalc "dh = delta_z_head + 0.64 * delta_p_head"
# also "dh = delta_head - 0.36 * delta_p_head"
# Energy (J) is m*g*z
r.mapcalc 'q_z = (1000 * 9.8 * delta_z_head)'
r.mapcalc 'q_p = (1000 * 9.8 * 0.64 * delta_p_head)'
r.mapcalc 'q = (1000 * 9.8 * dh)'
# r.mapcalc "q = (1000 * 9.8 * delta_head) - 0.36 * (917 * 9.8 * delta_p_head)"
r.mapcalc "unit_melt = q / (335 * 1000)" # 335 kJ/kg ice
# Other BMB is m/day equivalent melt, so we should aim for that.
# r.mapcalc "vhd = unit_melt * (10^-3.0) * if(mask@PERMANENT > 50, ru_raw * (mask@PERMANENT/100), null())" --o
## The above is slow (42 s). Let's speed it up.
## Calculate everything except 'ru_raw' which must be done in the loop (elsewhere)
r.mapcalc "scale = unit_melt * (10^-3.0) * if(mask@PERMANENT > 50, (mask@PERMANENT/100), null())" # 1x @ 30 s
- The ice mask should be 1 pixel wider on land and 0 pixels wider for marine-terminating glaciers. This is so that for land-terminating glaciers the pressure head drops to 0 (thickness is 0) at the outlet, but for marine-terminating glaciers the pressure head remains at the ice thickness.
- Click around… record values at a marine-terminating outlet, then click upstream on or off the stream and check that delta_z_head and delta_p_head make sense. Repeat for a land-terminating outlet.
g.mapset -c tmp
r.to.vect input=mask output=mask type=area
r.to.vect input=mask_ice output=mask_ice type=area
r.to.vect input=basins_uniq output=basins type=area
d.mon wx0
d.erase
d.rast surface
d.rast bed
d.rast thickness
d.rast pressure
d.rast dir
d.rast delta_head
d.rast delta_p_head
d.rast delta_z_head
d.vect basins fill_color=none color=grey width=3
d.vect mask fill_color=none color=black width=3
d.vect mask_ice fill_color=none color=red
d.vect streams
d.vect outlets_uniq color=cyan
<<init_bash>>
<<init_grass>>
docker run -it --user "$(id -u)":"$(id -g)" --mount type=bind,src="${DATADIR}",dst=/data --mount type=bind,src="$(pwd)",dst=/home/user --env PARALLEL="--delay 0.1 -j -1" mass_balance grass ./G_MAR/PERMANENT
g.mapset -c VHD
RCM=MAR
mkdir -p tmp/BMB
dir=${DATADIR}/${RCM}/3.12
f=$(ls ${dir}/MAR-????.nc|head -n1) # debug
f=$(ls ${dir}/MAR-????.nc|tail -n1) # debug
f=MAR-2022_datum.nc
if [ -z ${RECENT+x} ]; then
f_list=$(ls ${dir}/*.nc) ## initial
log_info "Initial run. Processing all files"
else
f_list=$(ls ${dir}/*.nc | tail -n 2) ## operational
log_warn "RECENT set to ${RECENT}. Processing subset of files"
fi
for f in ${f_list}; do
dates=$(python3 ./nc_dates.py ${f})
band=-1
d=1986-01-01 # debug
d=1991-01-01 # debug
d=2022-01-01 # debug
for d in ${dates}; do
band=$(( ${band} + 2 ))
log_info "MAR BMB: ${d} (band: ${band})"
if [[ -e ./tmp/BMB/sector_${d}.bsv ]]; then continue; fi
r.in.gdal -o input="NetCDF:${f}:ru" band=${band} output=ru_raw --o --q
r.region map=ru_raw region=RCM
r.mapcalc "vhd = scale * ru_raw" --o
r.null map=vhd null=0
r.univar -t --q map=vhd zones=sectors_e@Zwally_2012 \
| cut -d"|" -f1,13 \
| datamash -t"|" transpose \
| sed s/^sum/${d}/ \
> ./tmp/BMB/sector_${d}.bsv
r.univar -t --q map=vhd zones=regions_e@Mouginot_2019 \
| cut -d"|" -f1,13 \
| datamash -t"|" transpose \
| sed s/^sum/${d}/ \
> ./tmp/BMB/region_${d}.bsv
# r.univar -t --q map=vhd zones=basins_e@Mouginot_2019 \
# | cut -d"|" -f1,13 \
# | datamash -t"|" transpose \
# | sed s/^sum/${d}/ \
# > ./tmp/BMB/basin_${d}.bsv
done
done
grass ./G_MAR/VHD
# unit_melt: Where (and how much) melting occurs from BedMachine
# mask_ice: Where MAR mask is.
r.mapcalc "mask_bedmachine = if(unit_melt)"
r.report -i -h units=k map=mask_bedmachine
r.report -i -h units=k map=mask_ice
r.report -i --q -h units=k map=mask_bedmachine,mask_ice
<<init_bash>>
# for ROI in sector region basin; do
for ROI in sector region; do
log_info MAR ${ROI}
head -n1 ./tmp/BMB/${ROI}_2000-01-01.bsv > ./tmp/BMB_VHD_${ROI}.bsv
tail -q -n1 ./tmp/BMB/${ROI}_*.bsv >> ./tmp/BMB_VHD_${ROI}.bsv
done
import pandas as pd
import xarray as xr
import numpy as np
import datetime
import os
time = pd.date_range(start = "1986-01-01",
end = datetime.datetime.utcnow().date() + datetime.timedelta(days = 7),
freq = "D")
BMB = xr.Dataset()
BMB["time"] = (("time"), time)
BMB["sector"] = pd.read_csv("./tmp/BMB_GF_sector.bsv", delimiter="|", nrows=0, index_col=0).columns.astype(int)
rstr = {11:'NW', 12:'NO', 1:'NE', 3:'CE', 9:'CW', 5:'SE', 7:'SW'}
rnum = pd.read_csv("./tmp/BMB_GF_region.bsv", delimiter="|", nrows=0, index_col=0).columns.astype(int)
BMB["region"] = [rstr[n] for n in rnum]
# BMB["basin"] = pd.read_csv("./tmp/BMB_GF_basin.bsv", delimiter="|", nrows=0, index_col=0).columns.astype(int)
BMB['GF_sector'] = (('sector'),
pd.read_csv("./tmp/BMB_GF_sector.bsv",
delimiter="|", index_col=0).values.flatten())
BMB['vel_sector'] = (('sector'),
pd.read_csv("./tmp/BMB_vel_sector.bsv",
delimiter="|", index_col=0).values.flatten())
BMB['VHD_sector'] = (('time','sector'),
pd.read_csv("./tmp/BMB_VHD_sector.bsv",
delimiter="|", index_col=0, parse_dates=True).reindex(time))
BMB['GF_region'] = (('region'),
pd.read_csv("./tmp/BMB_GF_region.bsv",
delimiter="|", index_col=0).values.flatten())
BMB['vel_region'] = (('region'),
pd.read_csv("./tmp/BMB_vel_region.bsv",
delimiter="|", index_col=0).values.flatten())
BMB['VHD_region'] = (('time','region'),
pd.read_csv("./tmp/BMB_VHD_region.bsv",
delimiter="|", index_col=0, parse_dates=True).reindex(time))
# BMB['GF_basin'] = (('basin'),
# pd.read_csv("./tmp/BMB_GF_basin.bsv",
# delimiter="|", index_col=0).values.flatten())
# BMB['vel_basin'] = (('basin'),
# pd.read_csv("./tmp/BMB_vel_basin.bsv",
# delimiter="|", index_col=0).values.flatten())
# df = pd.read_csv("./tmp/BMB_VHD_basin.bsv",
# delimiter="|", index_col=0, parse_dates=True).reindex(time)
# df.columns = df.columns.astype(np.int64)
# missing = BMB['basin'].values[~pd.Series(BMB['basin'].values).isin(df.columns)]
# if len(missing) > 0:
# df[missing] = np.nan
# df = df.reindex(sorted(df.columns), axis=1)
# BMB['VHD_basin'] = (('time','basin'), df)
# VHD is in mm w.eq / day
# scale from my computed daily VHD to the same units NBK BMB is on, which is [m w.eq]
# for roi in ['sector','region','basin']:
for roi in ['sector','region']:
v = 'VHD_'+roi
BMB[v] = (('time',roi), BMB[v].data * 1E-3)
# m w.eq. per 1 km^2 grid cell -> m^3 (*1000*1000) -> kg (*1E3) -> Gt (/1E12)
BMB = BMB * 1000 * 1000 * 1E3 / 1E12
BMB['GF_sector_err'] = BMB['GF_sector'] * 0.5
BMB['vel_sector_err'] = BMB['vel_sector'] * 0.333
BMB['VHD_sector_err'] = BMB['VHD_sector'] * 0.15
BMB['GF_region_err'] = BMB['GF_region'] * 0.5
BMB['vel_region_err'] = BMB['vel_region'] * 0.333
BMB['VHD_region_err'] = BMB['VHD_region'] * 0.15
fn = './tmp/BMB.nc'
if os.path.exists(fn): os.remove(fn)
BMB.to_netcdf(fn)
print(BMB)
# BMB = xr.open_dataset('./tmp/BMB.nc')
From citet:karlsson_2021 (Table 1)
Sector | GF | vel | VHD | TOTAL |
---|---|---|---|---|
CE | 0.5 | 1.2 | 0.7 | 2.4 |
CW | 0.7 | 3.6 | 0.5 | 4.8 |
NE | 1.3 | 1.8 | 0.2 | 3.2 |
NO | 0.4 | 0.7 | 0.2 | 1.3 |
NW | 0.6 | 2.9 | 0.5 | 4.0 |
SE | 0.7 | 1.7 | 0.9 | 3.3 |
SW | 1.2 | 1.1 | 1.0 | 3.3 |
TOTAL | 5.3 | 13.0 | 4.1 | 22.3 |
import xarray as xr
BMB = xr.open_dataset('./tmp/BMB.nc')
BMB = BMB.sel({'time':slice('2010','2019')}).mean(dim='time')
BMB = BMB[['GF_region','vel_region','VHD_region']]
# print(BMB)
df = BMB.to_dataframe()\
.sort_index()\
.rename({'GF_region':'GF', 'vel_region':'vel', 'VHD_region':'VHD'}, axis='columns')
df['TOTAL'] = df.sum(axis='columns')
df.loc['TOTAL'] = df.sum(axis='rows')
df.round(3)*365 # Gt/day -> Gt/yr, for comparison with the table above
region | GF | vel | VHD | TOTAL |
---|---|---|---|---|
CE | 0.73 | 1.46 | 1.095 | 2.92 |
CW | 0.73 | 2.555 | 0.73 | 4.38 |
NE | 1.46 | 1.095 | 0.365 | 2.92 |
NO | 0.365 | 0.73 | 0.365 | 1.46 |
NW | 0.73 | 2.19 | 0.73 | 3.65 |
SE | 0.73 | 2.555 | 1.46 | 4.38 |
SW | 1.095 | 1.46 | 2.19 | 4.745 |
TOTAL | 5.84 | 11.68 | 6.935 | 24.455 |
Adjust the reconstructed SMB and D (Kjeldsen 2015) to this study using the 1986 through 2012 overlap
import pandas as pd
<<get_datadir>>
fname = 'Greenland_mass_balance_totals_1840-2012_ver_20141130_with_uncert_via_Kjeldsen_et_al_2015.csv'
k2015 = pd.read_csv(DATADIR + '/Kjeldsen_2015/' + fname, index_col=0, parse_dates=True)\
.rename(columns={'discharge from 6 year lagged average runoff' : 'D',
'discharge 1sigma' : 'D_err'})
k2015.index.name = 'time'
k2015['SMB'] = k2015['accumulation'] - k2015['runoff']
k2015['SMB_err'] = (k2015['accumulation 1sigma']**2 + k2015['runoff 1sigma']**2)**0.5
k2015 = k2015.drop(columns=['accumulation', 'accumulation 1sigma',
'melt', 'melt 1sigma', 'retention',
'retention 1sigma',
# 'runoff', 'runoff 1sigma',
'TMB', 'TMB 1sigma'])
<<load_K2015_raw>>
<<load_SMB>>
<<load_D>>
time = pd.date_range(start = "1986-01-01", end = str(D['time'].values[-1]), freq = "D")
D = D.reindex({'time':time}).bfill(dim='time')
SMB = SMB[['SMB','SMB_err']].resample({'time':'YS'}).sum().sel({'time':slice('1986','2012')}).to_dataframe()
D = D[['D','D_err']].resample({'time':'YS'}).sum().sel({'time':slice('1986','2012')}).to_dataframe()
k2015_overlap = k2015.loc['1986':'2012']
<<k2015_adj_prep>>
import scipy as sp
import scipy.stats as sps

# Regress this study (x) against K2015 (y) over the overlap period, then
# invert the fit to map K2015 onto this study's scale.
slope, intercept, r_value, p_value, std_err = sps.linregress(SMB['SMB'].values,
                                                             k2015_overlap['SMB'].values)
k2015 = k2015.rename(columns={'SMB':'SMB_orig'})
k2015['SMB'] = (k2015['SMB_orig'] - intercept)/slope
k2015['SMB_err'] = (k2015['SMB_err'] + SMB['SMB_err'].mean())

# Same adjustment for D
slope, intercept, r_value, p_value, std_err = sps.linregress(D['D'].values,
                                                             k2015_overlap['D'].values)
k2015 = k2015.rename(columns={'D':'D_orig'})
k2015['D'] = (k2015['D_orig'] - intercept)/slope
k2015['D_err'] = (k2015['D_err'] + D['D_err'].mean())
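A toy example of the inverse mapping (numbers hypothetical, not from the fit above):

# If regressing K2015 (y) on this study (x) gave y = 0.8 x + 50, then a
# K2015 value of 450 maps onto this study's scale as (450 - 50) / 0.8:
slope, intercept = 0.8, 50.0        # hypothetical fit
print((450.0 - intercept) / slope)  # 500.0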
<<load_K2015>>
k2015 = k2015.loc['1986':'2012']
import numpy as np
import scipy as sp
import scipy.stats as sps
from adjust_spines import adjust_spines as adj
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib import rc
rc('font', size=12)
rc('text', usetex=False)
rc('text', usetex=False)
fig = plt.figure(1, figsize=(3.26,7.1)) # w,h
fig.clf()
fig.set_tight_layout(True)
axSMB = fig.add_subplot(311)
axD = fig.add_subplot(312)
axMB = fig.add_subplot(313)
years = SMB.index.year
y2str = [_[2:4] for _ in years.astype(str)]
color = years - min(years); color = color / max(color)
cmap = cm.viridis(color)
cmap = mpl.colors.ListedColormap(cmap[:-3,:-1]) # https://stackoverflow.com/questions/51034408/
kw_errbar = {'fmt':',', 'color':'k', 'alpha':0.33, 'linewidth':0.5}
kw_nums = {'fontsize':9,
'fontweight':'bold',
'horizontalalignment':'center','verticalalignment':'center'}
kw_adj = {'linewidth':1, 'marker':(4,0,45), 'markersize':7, 'markevery':[-1,1], 'mfc':'none', 'clip_on':False}
###
### SMB
###
e = axSMB.errorbar(SMB['SMB'].values, k2015['SMB_orig'].values,
xerr=SMB['SMB_err'], yerr=k2015['SMB_err'], **kw_errbar)
for i,s in enumerate(y2str):
axSMB.text(SMB['SMB'].values[i], k2015['SMB_orig'].values[i], s, c=cmap(color)[i], **kw_nums)
def print_stats_in_graph(x0,y0,x1,y1, ax):
slope, intercept, r_value, p_value, std_err = sps.linregress(x0, y0)
bias = np.mean(x0 - y0)
RMSE = np.sqrt(np.mean((x0 - y0)**2))
slope_adj, intercept_adj, r_value_adj, p_value_adj, std_err_adj = sps.linregress(x1, y1)
bias_adj = np.mean(x1 - y1)
RMSE_adj = np.sqrt(np.mean((x1 - y1)**2))
kw_text = {'horizontalalignment':'right', 'fontsize':9}
s = "r$^2$: %.2f → %.2f\nbias: %d → %d\nRMSE: %d → %d\nslope: %.1f → %.1f\nintercept: %d → %d"
s = "%.2f → %.2f r$^2$\n%d → %d bias\n%d → %d RMSE\n%.1f → %.1f slope\n%d → %d intercept"
ax.text(1.0, -0.03, s % (round(r_value**2,2), round(r_value_adj**2,2),
round(bias), round(bias_adj),
round(RMSE), round(RMSE_adj),
round(slope,1), round(slope_adj,1),
round(intercept), round(intercept_adj)),
transform=ax.transAxes, **kw_text)
print_stats_in_graph(SMB['SMB'].values, k2015['SMB_orig'].values,
SMB['SMB'].values, k2015['SMB'].values, axSMB)
for i,s in enumerate(y2str):
axSMB.plot([SMB['SMB'].values[i], SMB['SMB'].values[i]],
[k2015['SMB'].values[i], k2015['SMB'].values[i]],
c=cmap(color[i]), **kw_adj)
###
### D
###
e = axD.errorbar(D['D'].values, k2015['D_orig'].values,
xerr=D['D_err'], yerr=k2015['D_err'], **kw_errbar)
for i,s in enumerate(y2str):
axD.text(D['D'].values[i], k2015['D_orig'].values[i], s, c=cmap(color)[i], **kw_nums)
print_stats_in_graph(D['D'].values, k2015['D_orig'].values,
D['D'].values, k2015['D'].values, axD)
for i,s in enumerate(y2str):
axD.plot([D['D'].values[i], D['D'].values[i]],
[k2015['D'].values[i], k2015['D'].values[i]],
c=cmap(color[i]), **kw_adj)
###
### MB
###
MB = SMB['SMB'] - D['D']
k2015['MB_orig'] = k2015['SMB_orig'] - k2015['D_orig']
k2015['MB'] = k2015['SMB'] - k2015['D']
k2015['MB_err'] = (k2015['SMB_err']**2 + k2015['D_err']**2)**0.5
e = axMB.errorbar(MB, k2015['MB_orig'], xerr=D['D_err'], yerr=k2015['MB_err'], **kw_errbar)
for i,s in enumerate(y2str):
axMB.text(MB.values[i], k2015['MB_orig'].values[i], s, c=cmap(color)[i], **kw_nums)
print_stats_in_graph(MB.values, k2015['MB_orig'].values,
MB.values, k2015['MB'].values, axMB)
for i,s in enumerate(y2str):
axMB.plot([MB.values[i], MB.values[i]],
[k2015['MB'].values[i], k2015['MB'].values[i]],
c=cmap(color[i]), **kw_adj)
axSMB.set_xticks([0,750])
axSMB.set_ylabel('SMB', labelpad=-20)
axD.set_xticks([375,600])
axD.set_ylabel('D', labelpad=-20)
# axMB.text(0, 0.9, '+BMB', transform=axMB.transAxes)
axMB.set_xticks([-450, 0, 300])
axMB.set_ylabel('MB$^{*}$', labelpad=-25)
axMB.set_xlabel('This Study', labelpad=-10)
for ax in [axSMB,axD,axMB]:
ax.set_yticks(ax.get_xticks())
ax.set_xlim(ax.get_xticks()[[0,-1]])
ax.set_ylim(ax.get_xlim())
ax.plot(ax.get_xlim(), ax.get_ylim(), color='k', alpha=0.25, linestyle='--')
adj(ax, ['left','bottom'])
axMB.set_yticklabels([str(ax.get_yticks()[0]), '', str(ax.get_yticks()[-1])])
axMB.set_xticklabels([str(ax.get_xticks()[0]), '', str(ax.get_xticks()[-1])])
# https://stackoverflow.com/questions/17478165/
def axis_to_fig(axis):
fig = axis.figure
def transform(coord):
return fig.transFigure.inverted().transform(
axis.transAxes.transform(coord))
return transform
def add_sub_axes(axis, rect):
fig = axis.figure
left, bottom, width, height = rect
trans = axis_to_fig(axis)
figleft, figbottom = trans((left, bottom))
figwidth, figheight = trans([width,height]) - trans([0,0])
return fig.add_axes([figleft, figbottom, figwidth, figheight])
s = axSMB.scatter(SMB['SMB'], k2015['SMB'], alpha=0, c=color, cmap=cmap)
# axCB = add_sub_axes(axMB, [1.1, 0.0, 0.075, 1])
axCB = add_sub_axes(axMB, [0.0, -0.50, 1, 0.075])
cb = plt.colorbar(s, cax=axCB, orientation='horizontal', ticks=[0,1])
cb.set_alpha(1) # https://stackoverflow.com/questions/4478725/
cb.draw_all()
cb.ax.xaxis.set_ticks_position('top')
cb.ax.xaxis.set_label_position('top')
cb.set_label('Time [year]', labelpad=-10)
axCB.set_xticklabels(k2015.index.year[[0,-1]].values)
plt.savefig('fig/K2015_adjusted.png', transparent=False, bbox_inches='tight', dpi=300)
Generate a TMB netcdf file from SMB, D, and BMB
<<py_import>>
SMB = xr.open_dataset("./tmp/SMB.nc")
SMB['region'] = SMB['region'].astype(str)
import xarray as xr
<<get_datadir>>
ds = xr.open_dataset(DATADIR + "/Mankoff_2020/ice/latest/gate.nc")
# ds = xr.open_dataset("/home/kdm/projects/ice_discharge/out/gate.nc")
# rstr = {'NW':11, 'NO':12, 'NE':1, 'CE':3, 'CW':9, 'SE':5, 'SW':7}
# rnum = [rstr[r] for r in ds['region'].values]
# ds['region'] = (('gate'), rnum)
ds_sector = ds.drop_vars(["mean_x","mean_y","mean_lon","mean_lat","sector","region","coverage", "ID_Moon", "ID_Moon_dist"])\
.groupby("Zwally_2012")\
.sum()\
.rename({"Zwally_2012":"sector",
"discharge":"D_sector",
"err":"err_sector"})
ds_region = ds.drop_vars(["mean_x","mean_y","mean_lon","mean_lat","sector","Zwally_2012","coverage", "ID_Moon", "ID_Moon_dist"])\
.groupby("region")\
.sum()\
.rename({"discharge":"D_region",
"err":"err_region"})
# ds_basin = ds.drop_vars(["mean_x","mean_y","mean_lon","mean_lat","Zwally_2012","region","coverage"])\
# .groupby("sector")\
# .sum()\
# .rename({"sector":"basin",
# "discharge":"D_basin",
# "err":"err_basin"})
D = ds_sector
D = D.merge(ds_region, compat='override')
# D = D.merge(ds_basin)
D['D'] = (('time'), D['D_sector'].sum(dim='sector').data)
D['D_err'] = (('time'), D['err_sector'].sum(dim='sector').data)
# convert from Gt/year @ misc time-steps -> Gt/day @ daily timestep
msave = D.copy(deep=True)
D = (D / 365).resample({"time":"1D"})\
.mean()\
.interpolate_na(dim="time")
# Use monotonic cubic (PCHIP) interpolation for discharge, which avoids overshoot
D['D'] = (('time'), (msave['D']/365).resample({'time':'1D'}).mean().to_dataframe().interpolate(method='pchip').values.flatten())
D['region'] = D['region'].astype(str)
<<forecast_D>>
D = D_fc
- D:
- Estimate projected (up to -30 to +7 d) D from the long-term trend + seasonal trend
- D err:
- Take the window from the last observation to now+7 d (for example, 25 days)
- Take that same calendar window over the last 3 years (e.g. 75 days)
- Diff -> abs -> rank
- The obs+1 err is the largest of the ranked differences from the past window (75 days)
- The obs+2 err is the largest + 2nd-largest of the ranked differences
- The +7 d err is (in this case) the cumsum of the 25 largest (sketched below)
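A minimal sketch of that ranked-difference scheme with invented numbers (note the implementation below uses a simpler linear growth of the mean baseline error):

import numpy as np

rng = np.random.default_rng(0)
past = rng.normal(1.4, 0.05, size=75)  # hypothetical D [Gt/d]: the same 25-day
                                       # window in each of the past 3 years
ranked = np.sort(np.abs(np.diff(past)))[::-1]  # day-to-day changes, largest first
err_growth = np.cumsum(ranked[:25])            # obs+n error = sum of n largest
print(err_growth[:7])                          # added error for forecast days 1..7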
import datetime
import numpy as np
import pandas as pd
import xarray as xr
import scipy as sp
import scipy.stats
import scipy.signal
time_start = '1986-01-01'
# time_first_obs = D['time'][0].values
time_last_obs = D['time'][-1].values
time_forecast_start = pd.Timestamp(time_last_obs) + datetime.timedelta(days=1)
time_forecast_end = datetime.datetime.utcnow().date() + datetime.timedelta(days = 7)
time_index_all = pd.date_range(start=time_start, end=time_forecast_end, freq='D')
time_index_last3y = pd.date_range(start = pd.Timestamp(time_last_obs) - datetime.timedelta(days=365*3), end=time_last_obs, freq='D')
time_index_forecast = pd.date_range(start = time_forecast_start, end=time_forecast_end, freq='D')
time_index_hindcast = np.concatenate([time_index_forecast - datetime.timedelta(days=365*3),
time_index_forecast - datetime.timedelta(days=365*2),
time_index_forecast - datetime.timedelta(days=365*1)])
D_fc = D.reindex({'time':time_index_all})
# https://gist.github.com/rabernat/1ea82bb067c3273a6166d1b1f77d490f
def detrend_dim(da, dim, deg=1):
    # return the polynomial fit (the trend itself, not the detrended data)
    p = da.polyfit(dim=dim, deg=deg)
    fit = xr.polyval(da[dim], p.polyfit_coefficients)
    return fit, p
for v in ['D_sector','D_region','D']:
# trend is linear long term trend of last 3 years of observations.
last3_plus_forecast = D_fc[v].sel({'time':slice(time_index_last3y[0],time_forecast_end)})
trend, _ = detrend_dim(last3_plus_forecast, 'time')
# season is the average of (the detrended values of forecasted calendar dates for the past 3 years starting at 0)
season = (last3_plus_forecast - trend).reindex({'time':time_index_hindcast})
# Find the average trend of the last 3 seasons
last3_break = np.where(np.diff(season['time'].values).astype('timedelta64[D]').astype(int) > 1)[0]+1
last3_3 = season.isel({'time':slice(0,last3_break[0])})
last3_2 = season.isel({'time':slice(last3_break[0],last3_break[1])})
last3_1 = season.isel({'time':slice(last3_break[1],999)})
last3_3 = last3_3 - last3_3.isel({'time':0}) # start forecast at 0
last3_2 = last3_2 - last3_2.isel({'time':0})
last3_1 = last3_1 - last3_1.isel({'time':0})
season_trend = xr.zeros_like(last3_3.reindex(time=time_index_forecast))
season_trend.data = (last3_3.values + last3_2.values + last3_1.values)/3
# forecast is the long term trend adjusted to start from the last observed,
# then cropped to the last timestamp, then seasonality added, then gap-filled
forecast = trend.reindex({'time':time_index_forecast})
forecast = forecast - forecast.isel(time=0) + D[v].isel({'time':-1}) + forecast.diff(dim='time').isel({'time':-1})
forecast = forecast + season_trend
D_fc[v] = xr.concat([D[v],forecast], dim='time') # .interp(time=time_index_all)
vv = {'D':'D_err', 'D_sector':'err_sector', 'D_region':'err_region'}[v]
e = xr.ones_like(D_fc[vv].sel({'time':time_index_forecast})).cumsum(dim='time')/100
baseline_err = D_fc[vv].dropna(dim='time').mean(dim='time')
e = baseline_err * e
e = e.reindex({'time':time_index_all})
e = e.where(~e.isnull(), other=0)
D_fc[vv] = D_fc[vv].ffill(dim='time') + e
D_fc = D_fc.bfill(dim='time')
<<load_D>>
fig = plt.figure(1)
fig.clf()
ax = fig.add_subplot(111)
ROI='CW'
# ROI=71
if type(ROI) is str:
if ROI != 'GIS':
df = D_fc[['D_region','err_region']]\
.sel({'region':ROI})\
.to_dataframe()[['D_region','err_region']]\
.rename(columns={'D_region':'D','err_region':'D_err'})
else:
df = D_fc[['D','D_err']].to_dataframe()
else:
df = D_fc[['D_sector','err_sector']]\
.sel({'sector':ROI})\
.to_dataframe()[['D_sector','err_sector']]\
.rename(columns={'D_sector':'D','err_sector':'D_err'})
df['D'].plot(ax=ax)
ax.fill_between(df.index,
df['D'] - df['D_err'],
df['D'] + df['D_err'],
alpha = 0.25)
df.describe()
D | D_err | |
---|---|---|
count | 13088 | 13088 |
mean | 0.218221 | 0.02056 |
std | 0.0268189 | 0.000214674 |
min | 0.167778 | 0.0205481 |
25% | 0.192242 | 0.0205481 |
50% | 0.22236 | 0.0205481 |
75% | 0.238363 | 0.0205481 |
max | 0.278995 | 0.0273223 |
<<py_import>>
BMB = xr.open_dataset('./tmp/BMB.nc')
BMB['region'] = BMB['region'].astype(str)
BMB['BMB'] = (('time'), (BMB['GF_sector'] \
+ BMB['vel_sector'] \
+ BMB['VHD_sector']).sum(dim='sector').data)
BMB['BMB_err'] = (('time'), ((BMB['GF_sector_err']**2 \
+ BMB['vel_sector_err']**2 \
+ BMB['VHD_sector_err']**2)**0.5).sum(dim='sector').data)
BMB['BMB_sector'] = (('time','sector'), BMB['VHD_sector'].data + BMB['GF_sector'].data + BMB['vel_sector'].data)
BMB['BMB_region'] = (('time','region'), BMB['VHD_region'].data + BMB['GF_region'].data + BMB['vel_region'].data)
# BMB['BMB_basin'] = BMB['VHD_basin'] + BMB['GF_basin'] + BMB['vel_basin']
BMB['BMB_sector_err'] = (('time','sector'), \
((BMB['GF_sector_err'].expand_dims({'time':BMB['time'].size}).data)**2 \
+ (BMB['vel_sector_err'].expand_dims({'time':BMB['time'].size}).data)**2 \
+ BMB['VHD_sector_err'].data**2\
)**0.5\
)
BMB['BMB_region_err'] = (('time','region'), \
((BMB['GF_region_err'].expand_dims({'time':BMB['time'].size}).data)**2 \
+ (BMB['vel_region_err'].expand_dims({'time':BMB['time'].size}).data)**2 \
+ BMB['VHD_region_err'].data**2\
)**0.5\
)
# print(BMB)
BMB[['VHD_sector','GF_sector','vel_sector','BMB']].sum(dim='sector').to_dataframe().resample('MS').sum().head(12)
time | VHD_sector | GF_sector | vel_sector | BMB |
---|---|---|---|---|
1986-01-01 00:00:00 | 0.000112777 | 0.489858 | 0.997998 | 1.48797 |
1986-02-01 00:00:00 | 0.000106645 | 0.442452 | 0.901418 | 1.34398 |
1986-03-01 00:00:00 | 0.000177201 | 0.489858 | 0.997998 | 1.48803 |
1986-04-01 00:00:00 | 7.43387e-05 | 0.474056 | 0.965805 | 1.43994 |
1986-05-01 00:00:00 | 0.00151314 | 0.489858 | 0.997998 | 1.48937 |
1986-06-01 00:00:00 | 0.284667 | 0.474056 | 0.965805 | 1.72453 |
1986-07-01 00:00:00 | 1.55146 | 0.489858 | 0.997998 | 3.03932 |
1986-08-01 00:00:00 | 1.06005 | 0.489858 | 0.997998 | 2.5479 |
1986-09-01 00:00:00 | 0.239852 | 0.474056 | 0.965805 | 1.67971 |
1986-10-01 00:00:00 | 0.000920525 | 0.489858 | 0.997998 | 1.48878 |
1986-11-01 00:00:00 | 0.000236823 | 0.474056 | 0.965805 | 1.4401 |
1986-12-01 00:00:00 | 0.000221377 | 0.489858 | 0.997998 | 1.48808 |
- Monthly changes in VHD seem reasonable
- Changes in GF and vel (constants) are due to days-in-month.
<<load_K2015>>
## Add BMB
<<load_BMB>>
# constant or time invariant:
BMB_C = BMB[['GF_sector','vel_sector','GF_sector_err','vel_sector_err']]\
.to_dataframe()\
.sum(axis='index')
# VHD is time variable.
v = BMB[['VHD_sector','VHD_sector_err']]\
.resample({'time':'YS'})\
.sum()\
.sum(dim='sector')\
.to_dataframe()
vr = v.merge(k2015['runoff'], left_index=True, right_index=True)
slope, intercept, r_value, p_value, std_err = sps.linregress(vr['runoff'].values,
vr['VHD_sector'].values)
print('slope: ', slope)
print('intercept: ', intercept)
print('r^2: ', r_value**2)
print('p: ', p_value)
print('std_err :', std_err)
k2015['BMB_GF'] = BMB_C['GF_sector']*365
k2015['BMB_GF_err'] = BMB_C['GF_sector_err']*365
k2015['BMB_vel'] = BMB_C['vel_sector']*365
k2015['BMB_vel_err'] = BMB_C['vel_sector_err']*365
k2015['BMB_VHD'] = (k2015['runoff']) * slope
k2015['BMB_VHD_err'] = (k2015['runoff 1sigma']) * slope
k2015[[_ for _ in k2015.columns if '_' in _ ]].head()
k2015['BMB'] = k2015[['BMB_GF','BMB_vel','BMB_VHD']].sum(axis='columns')
k2015['BMB_err'] = (k2015['BMB_GF_err']**2 \
                    + k2015['BMB_vel_err']**2 \
                    + k2015['BMB_VHD_err']**2)**0.5
k2015['MB'] = k2015['SMB'] - k2015['D'] - k2015['BMB']
<<load_and_adjust_K2015>>
# k2015 = k2015.loc['1840':'1986'].resample('1D').ffill().iloc[:-1] # cut off 1986-01-01
k2015 = k2015.loc['1840':'1986'].iloc[:-1] # cut off 1986-01-01
<<load_SMB>>
time = np.append(k2015.index, SMB['time'].values)
SMB = SMB.reindex({'time':time})
SMB['SMB'] = (('time'),
np.append(k2015['SMB'].values/365, SMB['SMB'].sel({'time':slice('1986','2200')}).values))
SMB['SMB_err'] = (('time'),
np.append(k2015['SMB_err'].values/365, SMB['SMB_err'].sel({'time':slice('1986','2200')}).values))
<<load_D>>
# dt_last_obs = D['time'].values[-1]
# err = []
# for dt in pd.date_range(start=dt_last_obs, end=time[-1]):
# std = D['D'].sel(time=(D['time.month'] == dt.month) & (D['time.day'] == dt.day)).std().values
# # print(std)
# err_today = max(max(err), 2*std) if dt != dt_last_obs else 2*std
# err.append(err_today)
D = D.reindex({'time':time})
D['D'] = (('time'),
np.append(k2015['D'].values/365, D['D'].sel({'time':slice('1986','2200')}).values))
D['D_err'] = (('time'),
np.append(k2015['D_err'].values/365, D['D_err'].sel({'time':slice('1986','2200')}).values))
# D['D_err'] = (('time'),
# np.hstack((k2015['D_err'].values/365,
# D['D_err'].sel({'time':slice('1986',dt_last_obs)}).to_dataframe().values,
# err[1:])).ravel())
<<load_BMB>>
BMB = BMB.reindex({'time':time}).fillna(0)
BMB['BMB'] = (('time'),
np.append(k2015['BMB'].values/365, BMB['BMB'].sel({'time':slice('1986','2200')}).values))
BMB['BMB_err'] = (('time'),
np.append(k2015['BMB_err'].values/365, BMB['BMB_err'].sel({'time':slice('1986','2200')}).values))
# add the sectors from SMB to D (14 and 21(?) don't have any D)
D = D.reindex({'sector': SMB['sector']})# .fillna(0)
# D = D.reindex({'basin': SMB['basin']})# .fillna(0)
D = D.reindex({'region': SMB['region']})# .fillna(0)
D['region'] = D['region'].astype(str)
D = D.ffill(dim='time')
D = D.bfill(dim='time')
import subprocess
import os
for roi in ['sector', 'region']: # TODO: 'basin'
MB = xr.Dataset()
MB["time"] = (("time"), time)
MB["time"].attrs["cf_role"] = "timeseries_id"
MB["time"].attrs["standard_name"] = "time"
MB["time"].attrs["axis"] = "T"
MB[roi] = ((roi), D[roi].data)
if roi == 'sector':
MB[roi].attrs["long_name"] = "Zwally 2012 sectors"
elif roi == 'region':
MB[roi].attrs["long_name"] = "Mouginot 2019 regions"
elif roi == 'basin':
MB[roi].attrs["long_name"] = "Mouginot 2019 basins"
MB_hist = (k2015['SMB'] - k2015['D'] - k2015['BMB'])/365
MB_recent = (SMB['SMB'] - D['D'] - BMB['BMB']).to_dataframe('MB').loc['1986':]['MB']
MB_combined = pd.concat([MB_hist, MB_recent])
#MB['MB'] = (('time'), MB_hist.append(MB_recent))
MB['MB'] = (('time'), MB_combined)
MB['MB_err'] = (('time'), (SMB['SMB_err'].data**2 + D['D_err'].data**2 + BMB['BMB_err'].data**2)**0.5)
v = 'MB'
MB[v].attrs["long_name"] = "Mass balance"
MB[v].attrs["standard_name"] = "land_ice_mass_tranport"
MB[v].attrs["units"] = "Gt d-1"
MB[v].attrs["coordinates"] = 'time'
MB['MB_ROI'] = (('time',roi), SMB['SMB_'+roi].data - D['D_'+roi].data - BMB['BMB_'+roi].data)
MB['MB_ROI_err'] = (('time',roi), (SMB['SMB_'+roi+'_err'].data**2 + D['err_'+roi].data**2 + BMB['BMB_'+roi+'_err'].data**2)**0.5)
# MB['MB_ROI'].attrs = MB['MB'].attrs
# MB['MB_ROI'].attrs["coordinates"] = 'time ROI'
# if roi == 'region':
# from IPython import embed; embed()
MB['SMB'] = (('time'), SMB['SMB'].data)
MB['SMB_err'] = (('time'), SMB['SMB_err'].data)
MB['SMB_ROI'] = (('time',roi), SMB['SMB_'+roi].data)
MB['SMB_ROI_err'] = (('time',roi), SMB['SMB_'+roi+'_err'].data)
# MB['SMB_HIRHAM'] = (('time',roi), SMB['HIRHAM_'+roi]-D['D_'+roi] )
# MB['SMB_MAR'] = (('time',roi), SMB['HIRHAM_'+roi]-D['D_'+roi])
# MB['SMB_RACMO'] = (('time',roi), SMB['HIRHAM_'+roi]-D['D_'+roi])
for v in ['SMB', 'SMB_ROI', 'SMB_err', 'SMB_ROI_err']:
ln = 'Surface mass balance'
if 'err' in v: ln = ln + ' uncertainty'
MB[v].attrs['long_name'] = ln
MB[v].attrs["standard_name"] = "land_ice_mass_tranport"
MB[v].attrs["units"] = "Gt d-1"
MB[v].attrs["coordinates"] = 'time ' + roi
MB['D'] = (('time'), D['D'].data)
MB['D_err'] = (('time'), D['D_err'].data)
MB['D_ROI'] = (('time',roi), D['D_'+roi].data)
MB['D_ROI_err'] = (('time',roi), D['err_'+roi].data)
for v in ['D','D_ROI', 'D_err', 'D_ROI_err']:
ln = 'Marine mass balance'
if 'err' in v: ln = ln + ' uncertainty'
MB[v].attrs['long_name'] = ln
MB[v].attrs["standard_name"] = "land_ice_mass_tranport"
MB[v].attrs["units"] = "Gt d-1"
MB['D'].attrs["coordinates"] = 'time'
MB['D_ROI'].attrs["coordinates"] = 'time ' + roi
MB['BMB'] = (('time'), BMB['BMB'].data)
MB['BMB_err'] = (('time'), BMB['BMB_err'].data)
MB['BMB_ROI'] = (('time',roi), BMB['BMB_'+roi].data)
MB['BMB_ROI_err'] = (('time',roi), BMB['BMB_'+roi+'_err'].data)
    for v in ['BMB','BMB_ROI','BMB_err','BMB_ROI_err']:
ln = 'Basal mass balance'
if 'err' in v: ln = ln + ' uncertainty'
MB[v].attrs['long_name'] = ln
MB[v].attrs["standard_name"] = "land_ice_mass_tranport"
MB[v].attrs["units"] = "Gt d-1"
MB['BMB'].attrs["coordinates"] = 'time'
MB['BMB_ROI'].attrs["coordinates"] = 'time ' + roi
# MB['BMB'] = (('time'), BMB['BMB'])
# MB['BMB_ROI'] = (('time',roi), BMB['BMB_'+roi])
# MB['BMB_GF_ROI'] = (('time',roi), BMB['BMB_GF_'+roi])
# MB['BMB_vel_ROI'] = (('time',roi), BMB['BMB_vel_'+roi])
# MB['BMB_VHD_ROI'] = (('time',roi), BMB['BMB_VHD_'+roi])
# for v in ['BMB','BMB_ROI']:
# MB[v].attrs['long_name'] = 'Basal mass balance'
# MB[v].attrs["standard_name"] = "land_ice_mass_tranport"
# MB[v].attrs["units"] = "Gt d-1"
# MB['BMB'].attrs["coordinates"] = 'time'
# MB['BMB_ROI'].attrs["coordinates"] = 'time ' + roi
for RCM in ['HIRHAM','MAR','RACMO']:
MB['MB_'+RCM] = (('time',roi), SMB[RCM+'_'+roi].data-D['D_'+roi].data-BMB['BMB_'+roi].data )
v = 'MB_'+RCM
MB[v].attrs['long_name'] = 'Mass balance from ' + v.split('_')[1]
MB[v].attrs["standard_name"] = "land_ice_mass_tranport"
MB[v].attrs["units"] = "Gt d-1"
MB[v].attrs["coordinates"] = 'time ' + roi
MB.attrs['featureType'] = 'timeSeries'
MB.attrs['title'] = 'Greenland ice sheet mass balance from 1840 through next week'
MB.attrs['summary'] = MB.attrs['title']
MB.attrs['keywords'] = 'Greenland; Mass; Mass balance'
# MB.attrs['Conventions'] = 'CF-1.8'
MB.attrs['source'] = 'git commit: ' + subprocess.check_output(['git', 'describe', '--always']).strip().decode('UTF-8')
# MB.attrs['comment'] = 'TODO'
# MB.attrs['acknowledgment'] = 'TODO'
# MB.attrs['license'] = 'TODO'
# MB.attrs['date_created'] = datetime.datetime.now().strftime('%Y-%m-%d')
MB.attrs['creator_name'] = 'Ken Mankoff'
MB.attrs['creator_email'] = 'kdm@geus.dk'
MB.attrs['creator_url'] = 'http://kenmankoff.com'
MB.attrs['institution'] = 'GEUS'
# MB.attrs['time_coverage_start'] = 'TODO'
# MB.attrs['time_coverage_end'] = 'TODO'
# MB.attrs['time_coverage_resolution'] = 'TODO'
MB.attrs['references'] = '10.22008/promice/mass_balance'
MB.attrs['product_version'] = 1.0
comp = dict(zlib=True, complevel=2, dtype='float32')
encoding = {var: comp for var in MB.data_vars} # all
encoding['time'] = dict(dtype='int32')
fn = './TMB/MB_'+roi+'.nc'
if os.path.exists(fn): os.remove(fn)
MB.to_netcdf(fn, mode='w', encoding=encoding)
# maybe also some CSV output
MB_df = MB[['MB','MB_err','SMB','SMB_err','D','D_err','BMB','BMB_err']].to_dataframe()
# Remove the forecast after september 1st if data after september 1st is only forecast
# Determine the last date processed
# last_date = MB_df.index.max()
# Calculate September 1st of the last year processed
# sep_1_last_year = pd.to_datetime(f'{last_date.year}-09-01')
# Check if the difference is less than 7 days
# if (last_date - sep_1_last_year).days < 7:
# Filter the DataFrame to remove dates beyond September 1st
# MB_df = MB_df[MB_df.index < sep_1_last_year]
MB_df.to_csv('./TMB/MB_SMB_D_BMB.csv', float_format='%.6f')
# daily to annual
df = pd.read_csv('./TMB/MB_SMB_D_BMB.csv', index_col=0, parse_dates=True)
df = df.resample('1D')\
       .ffill()\
       .resample('AS')\
       .sum()\
       .iloc[:-1] # drop the incomplete final year
df.index = df.index.year
df.to_csv('./TMB/MB_SMB_D_BMB_ann.csv', float_format='%.6f')
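A minimal usage sketch (not part of the processing chain), assuming only the CSV written above: read the annual product back and check that the components approximately close as MB = SMB - D - BMB. Pre-1986 values derive from the Kjeldsen (2015) reconstruction, so closure there may be less exact.

import pandas as pd

ann = pd.read_csv('./TMB/MB_SMB_D_BMB_ann.csv', index_col=0)
residual = ann['MB'] - (ann['SMB'] - ann['D'] - ann['BMB'])
print(residual.abs().describe()) # near 0 where the components close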
ds = xr.Dataset()
<<get_datadir>>
df = pd.read_excel(DATADIR + '/Mouginot_2019/pnas.1904242116.sd02.xlsx', sheet_name=1)
## Discharge
c0 = 15 # Column containing 1972
c1 = 61 # Last column
r0 = 8 # sub-table start [LibreOffice is 1-based, Python is 0-based]
r1 = 15 # sub-table stop
ds['time'] = (('time'), pd.to_datetime(df.iloc[r0-1][df.columns[c0:(c1 + 1)]].astype(int).values, format="%Y"))
ds['region'] = (('region'), df.iloc[r0:r1][df.columns[1]])
ds['D'] = (('time','region'), df.iloc[r0:r1][df.columns[c0:(c1 + 1)]].values.T.astype(float))
c2 = 82; c3 = 128
ds['D_err'] = (('time','region'), df.iloc[r0:r1][df.columns[c2:(c3 + 1)]].values.T.astype(float))
r0 = 20; r1=27
ds['SMB'] = (('time','region'), df.iloc[r0:r1][df.columns[c0:(c1 + 1)]].values.T.astype(float))
ds['SMB_err'] = (('time','region'), df.iloc[r0:r1][df.columns[c2:(c3 + 1)]].values.T.astype(float))
r0 = 30; r1=37
ds['MB'] = (('time','region'), df.iloc[r0:r1][df.columns[c0:(c1 + 1)]].values.T.astype(float))
ds['MB_err'] = (('time','region'), df.iloc[r0:r1][df.columns[c2:(c3 + 1)]].values.T.astype(float))
mouginot = ds
# print(mouginot)
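A quick sanity check on the sub-table extraction (expectations follow from the c0/r0 comments above): the result should contain the seven regions with annual time steps beginning in 1972.

print(mouginot['region'].values) # expect the seven regions
print(mouginot['time'].dt.year.values[[0, -1]]) # expect the first year to be 1972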
There is not much to do here because the VC product from Simonsen (2021) is already provided as MB per ROI.
ncdump -chs ${DATADIR}/Simonsen_2021/ds1.nc
import numpy as np
import pandas as pd
import xarray as xr
<<get_datadir>>
ds = xr.open_dataset(DATADIR + "/Simonsen_2021/ds1.nc")
# print(ds)
vc = xr.Dataset()
t = pd.to_datetime([pd.to_datetime(str(int(np.floor(t)))+'-01-01') + pd.to_timedelta((t-np.floor(t))*365, unit='D') for t in ds.time.values])
vc['time'] = (("time"), t)
vc['sector'] = (("sector"), np.arange(1,9))
vc['SEC'] = (("time","sector"), ds[['VMB_basin_1','VMB_basin_2','VMB_basin_3','VMB_basin_4','VMB_basin_5','VMB_basin_6','VMB_basin_7','VMB_basin_8']].to_dataframe().values)
vc['err'] = (("time","sector"), ds[['VMBer_basin_1','VMBer_basin_2','VMBer_basin_3','VMBer_basin_4','VMBer_basin_5','VMBer_basin_6','VMBer_basin_7','VMBer_basin_8']].to_dataframe().values)
http://products.esa-icesheets-cci.org/products/downloadlist/GMB/
import pandas as pd
from datetime import timedelta, datetime
<<get_datadir>>
root = DATADIR + '/CCI/GMB/greenland_gravimetric_mass_balance_rl06_dtuspace_v2_0-170820/time_series'
df = pd.read_csv(root + '/GIS00_grace.dat',
delim_whitespace=True,
header=None,
names=['date','mass','err'],
index_col=0)
# df = None
# for s in np.arange(1,9):
# ss = str(s).zfill(2)
# df_tmp = pd.read_csv(root + '/GIS'+ss+'_grace.dat',
# delim_whitespace=True,
# header=None,
# names=['date','mass','err'],
# index_col=0)
# if df is None:
# df = df_tmp
# else:
# df = df + df_tmp
def convert_partial_year(number):
year = int(number)
d = timedelta(days=(number - year)*365)
day_one = datetime(year,1,1)
date = d + day_one
return date
df.index = pd.to_datetime([convert_partial_year(_) for _ in df.index])
grace = df
grace.head()
date | mass | err |
---|---|---|
2002-04-16 20:23:59.999999 | 1057.32 | 260.241 |
2002-05-12 09:35:59.999997 | 1122.55 | 116.317 |
2002-08-15 07:11:59.999997 | 801.489 | 65.6365 |
2002-09-17 03:36:00.000001 | 685.488 | 491.12 |
2002-10-16 08:23:59.999999 | 834.592 | 71.0989 |
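The convert_partial_year helper above assumes a fixed 365-day year, so converted dates can drift by up to a day in leap years. A leap-aware variant, sketched here as an optional alternative rather than a needed fix, scales by the actual calendar-year length:

import calendar
from datetime import datetime, timedelta

def convert_partial_year_leap(number):
    year = int(number)
    ndays = 366 if calendar.isleap(year) else 365 # actual length of this year
    return datetime(year, 1, 1) + timedelta(days=(number - year)*ndays)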
This spreadsheet contains the IMBIE-2019 datasets for Greenland: the annual rate of change and cumulative change of Greenland's ice sheet mass, its surface mass balance and ice discharge anomalies, and their estimated uncertainties. The data are expressed as rates of mass change (Gt/yr: sheet 1, columns B, C, F, G, J, and K), as mass (Gt: sheet 1, columns D, E, H, I, L, and M), and in units of equivalent mean global sea-level rise (mm/yr: sheet 2, columns B, C, F, G, J, and K; and mm: sheet 2, columns D, E, H, I, L, and M).
import numpy as np
import pandas as pd
<<get_datadir>>
imbie = pd.read_excel(DATADIR + "/IMBIE/imbie_dataset_greenland_dynamics-2020_02_28.xlsx", sheet_name=0, index_col=0, usecols=(0,1,2,3,4,5,6,9,10))\
.rename(columns={"Rate of ice sheet mass change (Gt/yr)":"MB",
"Rate of ice sheet mass change uncertainty (Gt/yr)":"MB_err",
"Cumulative ice sheet mass change (Gt)" : "MB_cum",
"Cumulative ice sheet mass change uncertainty (Gt)" : "MB_cum_err",
"Rate of mass balance anomaly (Gt/yr)":"SMB",
"Rate of mass balance anomaly uncertainty (Gt/yr)":"SMB_err",
"Rate of ice dynamics anomaly (Gt/yr)":"D",
"Rate of ice dyanamics anomaly uncertainty (Gt/yr)":"D_err"})
# The fractional-year dates appear to be start-of-month values, so hard-code a
# start-of-month index rather than converting the fractional years directly:
# imbie['index'] = pd.to_datetime('1980-01-01') + pd.to_timedelta((imbie.index-1980) * 365, unit="D")
# Note: this assumes the record spans whole calendar years (12 rows per year).
imbie['index'] = [pd.to_datetime(y + '-' + m + '-01') for y in imbie.index.astype(int).unique().astype(str) for m in np.arange(1,13).astype(str)]
imbie.index = imbie['index']
imbie = imbie.drop('index', axis='columns')
# imbie.tail(30)
def load_Colgan_2019(sheet=0):
    <<get_datadir>>
    df_all = pd.read_excel(DATADIR + "/data/Colgan_2019/MassBalance_07022019.xlsx", index_col=0, sheet_name=sheet)
    df_all = df_all.loc[df_all.index.dropna()]\
                   .drop(index='Total')\
                   .drop(columns='Unnamed: 43')
    if 'Unnamed: 44' in df_all.columns: df_all = df_all.drop(columns='Unnamed: 44')
    df_all.index = (df_all.index.astype(float) * 10).astype(int) # sector labels, e.g. 1.1 -> 11
    df_all = df_all.T
    df_all = df_all.astype(float)
    df = df_all.iloc[::2] # values and uncertainties are interleaved by row; even rows are values
    df.index = pd.to_datetime(df.index, format="%Y")
    df_err = df_all.iloc[1::2] # odd rows hold the uncertainties
    df_err.index = df.index
    return df, df_err
p,e = load_Colgan_2019(sheet=3)
promice = xr.Dataset()
promice['time'] = (("time"), p.index)
promice['sector'] = (("sector"), p.columns)
promice['D'] = (("time","sector"), p.values)
promice['D_err'] = (("time","sector"), e.values)
p,e = load_Colgan_2019(sheet=4)
promice['SMB'] = (("time","sector"), p.values)
promice['SMB_err'] = (("time","sector"), e.values)
p,e = load_Colgan_2019(sheet=5)
promice['MB'] = (("time","sector"), p.values)
promice['MB_err'] = (("time","sector"), e.values)
# print(promice)
import xarray as xr
ds = xr.open_dataset('./TMB/MB_region.nc')\
.sel({'time':slice('1840','1985')})
# print(ds)
df = ds[['SMB','SMB_err', 'D','D_err']].to_dataframe()
df['SMB_err_pct'] = df['SMB_err']/df['SMB']*100
df['D_err_pct'] = df['D_err']/df['D']*100
# df[['SMB_err_pct','D_err_pct']].plot()
df.describe()
stat | SMB | SMB_err | D | D_err | SMB_err_pct | D_err_pct |
---|---|---|---|---|---|---|
count | 146 | 146 | 146 | 146 | 146 | 146 |
mean | 1.03703 | 0.290886 | 1.13836 | 0.22485 | 27.0017 | 19.777 |
std | 0.32795 | 0.0235757 | 0.0710128 | 0.0129634 | 56.2203 | 0.871667 |
min | -0.0542495 | 0.235687 | 0.971574 | 0.192683 | -626.302 | 17.9976 |
25% | 0.817315 | 0.273792 | 1.09358 | 0.216094 | 22.4278 | 19.1176 |
50% | 1.05309 | 0.289416 | 1.12913 | 0.224121 | 27.6212 | 19.9054 |
75% | 1.23641 | 0.30463 | 1.18175 | 0.232779 | 34.966 | 20.377 |
max | 1.83787 | 0.348189 | 1.30492 | 0.259258 | 100.047 | 21.4068 |
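Over the reconstructed period (1840 through 1985), the relative uncertainty is therefore roughly 27 % of SMB and 20 % of D on average; the SMB percentage is unstable in years where SMB approaches zero (hence the -626 % minimum).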
What % of RACMO is not covered by Zwally & Mouginot?
# grass -c ./G_RACMO/domain_overlap
grass ./G_RACMO/domain_overlap
g.region raster=mask
r.in.gdal -o input="NetCDF:${DATADIR}/RACMO/Icemask_Topo_Iceclasses_lon_lat_average_1km.nc:Promicemask" output=promicemask
g.list type=raster mapset=* -m
g.list type=vector mapset=* -m
d.mon wx0
# g.gui.mapswipe first=mask_ice@PERMANENT second=mask_ice_shrink@ROI
d.rast mask_ice@PERMANENT
d.vect basins@Mouginot_2019 fillcolor=none # basins is also a raster
r.mapcalc "mouginot = if(basins@Mouginot_2019)"
r.mapcalc "mouginot_e = if(basins_e@Mouginot_2019)"
r.mapcalc "zwally = if(sectors@Zwally_2012)"
r.mapcalc "zwally_e = if(sectors_e@Zwally_2012)"
r.mapcalc "RACMO = if(mask_ice@PERMANENT)"
r.category map=mouginot separator=":" rules=- << EOF
1:M2019
EOF
r.category map=zwally separator=":" rules=- << EOF
1:Z2012
EOF
r.category map=RACMO separator=":" rules=- << EOF
1:RACMO
EOF
r.report
# r.report -hi units=k map=mouginot,RACMO
r.report -hi units=k map=RACMO,mouginot --q
echo ""
r.report -hi units=k map=RACMO,zwally --q
#+RESULTS[(2021-07-30 08:33:59) d2d0666b0232e30b709e147825c7b2c5546415b2]:
RACMO vs. Mouginot (2019) coverage:

Category | Area [km2] |
---|---|
RACMO | 1,718,959 |
RACMO and M2019 | 1,696,419 |
RACMO, no M2019 | 22,540 |
no RACMO | 2,320,241 |
M2019, no RACMO | 20,685 |
no RACMO, no M2019 | 2,299,556 |
TOTAL | 4,039,200 |

RACMO vs. Zwally (2012) coverage:

Category | Area [km2] |
---|---|
RACMO | 1,718,959 |
RACMO and Z2012 | 1,678,864 |
RACMO, no Z2012 | 40,095 |
no RACMO | 2,320,241 |
Z2012, no RACMO | 24,475 |
no RACMO, no Z2012 | 2,295,766 |
TOTAL | 4,039,200 |
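From the areas above, the uncovered fractions follow directly (arithmetic check only, not part of the processing chain):

RACMO = 1718959 # RACMO ice-domain area [km2] from the tables above
print(22540 / RACMO * 100) # ~1.3% of RACMO not covered by Mouginot (2019)
print(40095 / RACMO * 100) # ~2.3% of RACMO not covered by Zwally (2012)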
What % of SMB changes occur in the uncovered regions?
import xarray as xr
<<get_datadir>>
ds = xr.open_mfdataset(DATADIR + '/RACMO/daily/SMB_rec.201*.nc', combine='by_coords')
ds = ds.resample({'time':'YS'}).sum()
ds.to_netcdf('./tmp/RACMO_2010s.nc')
ds = xr.open_mfdataset(DATADIR + '/RACMO/daily/SMB_rec.2020*.nc', combine='by_coords')
ds = ds.sum(dim='time')
ds.to_netcdf('./tmp/RACMO_2020.nc')
2020
r.in.gdal -o input="NetCDF:./tmp/RACMO_2020.nc:SMB_rec" output=SMB_2020 --o --q
r.region map=SMB_2020 region=RCM
r.univar -g map=SMB_2020 zones=RACMO | grep sum # 318648388
r.univar -g map=SMB_2020 zones=mouginot | grep sum # 338377530
r.univar -g map=SMB_2020 zones=zwally | grep sum # 351498574
Domain | Mass gain [Gt] |
---|---|
RACMO | 319 |
RACMO M | 338 |
RACMO Z | 351 |
2010s
echo "Year R M Z"
for i in $(seq 10); do
r.in.gdal -o input="NetCDF:./tmp/RACMO_2010s.nc:SMB_rec" band=${i} output=SMB --o --q 2>/dev/null
r.region map=SMB region=RCM --q
eval $(r.univar -g map=SMB zones=RACMO | grep sum | sed 's/sum/R/')
eval $(r.univar -g map=SMB zones=mouginot | grep sum | sed 's/sum/M/')
eval $(r.univar -g map=SMB zones=zwally | grep sum | sed 's/sum/Z/')
echo $(( 2010 + $(( ${i}-1)) )) ${R} ${M} ${Z}
done
#+RESULTS[(2021-07-30 08:34:18) e0760b49600c22820b5ff97f5cfcc4a62ccbf7e4]: RMZ
Year | R | M | Z |
---|---|---|---|
2010 | 138865169.863205 | 164141022.770475 | 188693278.633955 |
2011 | 162135323.732203 | 183110164.892631 | 192990105.059272 |
2012 | 119050877.731281 | 145874329.623628 | 164998233.714134 |
2013 | 392202448.814299 | 405757511.6404 | 408685069.94199 |
2014 | 304046429.91578 | 322362180.505164 | 327710495.051068 |
2015 | 294980810.845057 | 311374539.451985 | 319797206.757863 |
2016 | 233733244.138948 | 257247406.928596 | 266283590.975364 |
2017 | 399613619.668526 | 415379433.607018 | 425252437.272332 |
2018 | 420151685.34887 | 429386998.485341 | 429046485.925065 |
2019 | 45504675.1813576 | 76604095.5701525 | 84186156.4168217 |
import pandas as pd
# 'tab' is the table above, passed in by org-babel
df = pd.DataFrame(tab[1:], columns=tab[0])
df = df.set_index('Year')
df = df / 1E6 # sums are mm w.e. over 1 km2 cells; 1 mm km2 = 1E-6 Gt
df['M%'] = df['M'] / df['R'] * 100 - 100 # extra SMB captured by the expanded Mouginot zones
df['Z%'] = df['Z'] / df['R'] * 100 - 100 # extra SMB captured by the expanded Zwally zones
print(df)
print(df.describe())
What % of MAR is not covered by BedMachine?
# grass -c ./G_MAR/BedMachine_overlap
grass ./G_MAR/BedMachine_overlap
g.region raster=mask
g.list type=raster mapset=* -m
g.list type=vector mapset=* -m
# d.mon wx0
# # g.gui.mapswipe first=mask_ice@PERMANENT second=mask_ice_shrink@ROI
# d.rast mask_ice@PERMANENT
# d.rast mask_bedmachine@VHD
r.mapcalc "MAR = if(SMB@PERMANENT, 1, null())" --o
r.mapcalc "BedMachine = if(delta_head@VHD, 1, null())" --o
r.category map=MAR separator=":" rules=- << EOF
1:MAR
EOF
r.category map=BedMachine separator=":" rules=- << EOF
1:BedMachine
EOF
r.report -hi units=k map=MAR,BedMachine --q
echo ""
echo ""
r.report -hi units=k map=BedMachine,MAR --q
#+RESULTS[(2021-09-24 10:19:54) 2c555bc62cbe34c18ac9d67f625bb17cd01633b1]:
MAR vs. BedMachine coverage:

Category | Area [km2] |
---|---|
MAR | 1,825,600 |
MAR and BedMachine | 1,708,400 |
MAR, no BedMachine | 117,200 |
no MAR | 2,308,800 |
BedMachine, no MAR | 26,400 |
no MAR, no BedMachine | 2,282,400 |
TOTAL | 4,134,400 |

BedMachine vs. MAR coverage:

Category | Area [km2] |
---|---|
BedMachine | 1,734,800 |
BedMachine and MAR | 1,708,400 |
BedMachine, no MAR | 26,400 |
no BedMachine | 2,399,600 |
MAR, no BedMachine | 117,200 |
no BedMachine, no MAR | 2,282,400 |
TOTAL | 4,134,400 |
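The same arithmetic applied to the MAR and BedMachine areas above:

MAR, BM = 1825600, 1734800 # ice-domain areas [km2] from the tables above
print(117200 / MAR * 100) # ~6.4% of MAR not covered by BedMachine
print(26400 / BM * 100) # ~1.5% of BedMachine not covered by MAR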
What % of MAR runoff occurs in the uncovered regions?
import numpy as np
import xarray as xr
<<get_datadir>>
ds = xr.open_mfdataset(DATADIR + '/MAR/3.12/MAR-201*.nc', combine='by_coords')['ru']
ds = ds.sel({'sector':1}).resample({'time':'YS'}).mean()
ds = ds.where(~np.isinf(ds), 0)
ds.to_netcdf('./tmp/MAR_2010s.nc')
ds = xr.open_mfdataset(DATADIR + '/MAR/3.12/MAR-2020.nc', combine='by_coords')['ru']
ds = ds.sel({'sector':1}).sum(dim='time')
ds.to_netcdf('./tmp/MAR_2020.nc')
2020
r.in.gdal -o input="NetCDF:./tmp/MAR_2020.nc:ru" output=RU_2020 --o --q
r.region map=RU_2020 region=RCM
r.mapcalc "RU_2020 = RU_2020 * MAR"
r.univar -g map=RU_2020 zones=MAR | grep sum # 1253668
r.univar -g map=RU_2020 zones=BedMachine | grep sum # 969295
2010s
echo "Year MAR BedMachine"
for i in $(seq 10); do
r.in.gdal -o input="NetCDF:./tmp/MAR_2010s.nc:ru" band=${i} output=RU --o --q 2>/dev/null
r.region map=RU region=RCM --q
r.mapcalc "RU = RU * MAR" --o --q
eval $(r.univar -g map=RU zones=MAR | grep sum | sed 's/sum/MAR/')
eval $(r.univar -g map=RU zones=BedMachine | grep sum | sed 's/sum/BedMachine/')
echo $(( 2010 + $(( ${i}-1)) )) ${MAR} ${BedMachine}
done
import pandas as pd
# 'tab' is the table above, passed in by org-babel
df = pd.DataFrame(tab[1:], columns=tab[0])
df = df.set_index('Year')
df = df / 1E6 # scale the raw sums for readability
df['BM%'] = df['BedMachine'] / df['MAR'] * 100 - 100 # runoff missed by BedMachine [%, negative]
df['MAR%'] = df['MAR'] / df['BedMachine'] * 100 # MAR runoff as % of the BedMachine-covered runoff
print(df)
print(df.describe())
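Using the 2020 sums noted in the comments above (1253668 over the full MAR domain vs. 969295 within BedMachine), the runoff share lost outside BedMachine can be estimated:

ru_MAR, ru_BM = 1253668, 969295 # 2020 sums from the r.univar calls above
print((1 - ru_BM/ru_MAR) * 100) # ~22.7% of 2020 MAR runoff falls outside BedMachine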
From ESSD:
% ONE-COLUMN FIGURES
\begin{figure}[t]
\includegraphics[width=8.3cm]{FILE NAME}
\caption{TEXT}
\end{figure}
% TWO-COLUMN FIGURES
\begin{figure*}[t]
\includegraphics[width=12cm]{FILE NAME}
\caption{TEXT}
\end{figure*}
- The coast is noisy and poorly defined because each RCM has a different boundary
- Therefore, generate ROIs without the coast, keeping only the interior divisions (see the GRASS code below)
grass -c ./G_RACMO/ROI
g.region region=sectors -pa
r.mask -r
g.copy vector=sectors_e@Zwally_2012,sector
g.copy vector=regions_e@Mouginot_2019,region
for roi in sector region; do
v.to.lines input=${roi} output=${roi}_lines
v.to.rast input=${roi}_lines output=${roi}_lines type=line use=val val=1
r.grow input=mask_ice radius=-3 output=mask_ice_shrink --o
r.mask raster=mask_ice_shrink --o
r.thin input=${roi}_lines output=${roi}_thin
r.to.vect input=${roi}_thin output=${roi}_interior type=line
v.clean input=${roi}_interior output=${roi}_clean tool=rmdangle threshold=10000
v.generalize input=${roi}_clean output=${roi}_interior_general method=douglas threshold=5000
v.generalize input=${roi}_interior_general output=${roi}_interior_smooth method=chaiken threshold=1
v.out.ogr input=${roi}_interior_smooth output=./tmp/${roi}_interior.gpkg
done
wget https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/10m/cultural/ne_10m_admin_0_countries.zip -O ./tmp/countries.zip
(cd tmp; unzip countries.zip)
v.import input=./tmp/ne_10m_admin_0_countries.shp output=countries
v.extract input=countries output=greenland where='name = "Greenland"'
v.import input=~/data/Mankoff_2020/ice/latest/gates.gpkg output=gates
lightblue="166:206:227"
lightgreen="178:223:138"
pink="251:154:153"
cat << EOF | ps.map -e input=- output=./tmp/overview.eps --o
border n
# scale 1:1000000
paper a3
end
vareas gates
color black
width 3
label D gates
end
text -94000 -1106944 NO
color ${lightblue}
fontsize 24
end
text -185295 -1605917 NW
color ${lightblue}
fontsize 24
end
text 202794 -1403555 NE
color ${lightblue}
fontsize 24
end
text 39242 -2121523 CW
color ${lightblue}
fontsize 24
end
text 466141 -2124295 CE
color ${lightblue}
fontsize 24
end
text -120000 -2800681 SW
color ${lightblue}
fontsize 24
end
text 110000 -2650000 SE
color ${lightblue}
fontsize 24
end
text -254597 -1154069 11
color ${pink}
fontsize 18
end
text -10655 -1034870 12
color ${pink}
fontsize 18
end
text 144580 -1015465 13
color ${pink}
fontsize 18
end
text 294272 -1021009 14
color ${pink}
fontsize 18
end
text 247147 -1281584 21
color ${pink}
fontsize 18
end
text 452281 -1436820 22
color ${pink}
fontsize 18
end
text 424560 -1736204 23
color ${pink}
fontsize 18
end
text 349714 -1941338 31
color ${pink}
fontsize 18
end
text 650000 -2157560 32
color ${pink}
fontsize 18
end
text 380000 -2257354 33
color ${pink}
fontsize 18
end
text 220000 -2451399 41
color ${pink}
fontsize 18
end
text 83595 -2700000 42
color ${pink}
fontsize 18
end
text 60000 -2936512 43
color ${pink}
fontsize 18
end
text 10000 -3105609 50
color ${pink}
fontsize 18
end
text -100000 -2906020 61
color ${pink}
fontsize 18
end
text -100000 -2595547 62
color ${pink}
fontsize 18
end
text 47558 -2262898 71
color ${pink}
fontsize 18
end
text -93817 -2027272 72
color ${pink}
fontsize 18
end
text -265685 -1544931 81
color ${pink}
fontsize 18
end
text -450000 -1312077 82
color ${pink}
fontsize 18
end
vlines sector_interior_smooth
color $pink
width 2
label Sector
end
vlines region_interior_smooth
color $lightblue
width 5
label Region
end
vareas sectors@Zwally_2012
color white
fcolor white
label Ice
end
vareas greenland
color gray
fcolor 192:192:192
label Land
end
# vlegend
# where 6 13
# fontsize 18
# end
# scalebar s
# length 100
# units kilometers
# segment 1
# where 7 12
# fontsize 18
# end
EOF
convert -trim ./tmp/overview.eps ./fig/overview.png
o ./fig/overview.png
import datetime
import xarray as xr
import numpy as np
import pandas as pd
import matplotlib.dates as dates
from adjust_spines import adjust_spines as adj
import matplotlib.pyplot as plt
from matplotlib import rc
rc('font', size=12)
rc('text', usetex=False)
fig = plt.figure(1, figsize=(8,6)) # w,h
fig.clf()
fig.set_tight_layout(True)
ax = fig.add_subplot(211)
MB = xr.open_dataset("./TMB/MB_sector.nc")
kw = {'ax':ax, 'legend':False, 'drawstyle':'steps-post'}
SMB = MB[['SMB','SMB_err']]\
.to_dataframe()\
.resample('1D')\
.interpolate(method='time')\
.resample('AS')\
.sum()
SMB['SMB'].plot(color='b', linestyle='-', alpha=0.5, **kw)
ax.fill_between(SMB.index.values,
(SMB['SMB']-SMB['SMB_err']).values.flatten(),
(SMB['SMB']+SMB['SMB_err']).values.flatten(),
color='b', alpha=0.1,
step='post')
DBMB = -MB[['D','BMB','D_err','BMB_err']]\
.to_dataframe()\
.resample('1D')\
.interpolate(method='time')\
.resample('AS')\
.sum()
DBMB['sum'] = DBMB[['D','BMB']].sum(axis='columns')
DBMB['err'] = (DBMB['D_err']**2 + DBMB['BMB_err']**2)**0.5
DBMB['sum'].plot(color='gray', linestyle='--', **kw)
ax.fill_between(DBMB.index.values,
(DBMB['sum']-DBMB['err']).values.flatten(),
(DBMB['sum']+DBMB['err']).values.flatten(),
color='k', alpha=0.1,
step='post')
MB_ann = MB[['MB','MB_err']].to_dataframe()\
.resample('1D')\
.interpolate(method='time')\
.resample('AS')\
.sum()
MB_ann['MB'].plot(color='k', linestyle='-', **kw)
ax.fill_between(MB_ann.index.values,
(MB_ann['MB'] - MB_ann['MB_err']).values.flatten(),
(MB_ann['MB'] + MB_ann['MB_err']).values.flatten(),
color='k', alpha=0.5,
step='post')
adj(ax, ['left','bottom'])
ax.grid(b=True, which='major', axis='y', alpha=0.33)
# ax.xaxis.set_major_locator(dates.YearLocator(20, month=1, day=1))
# ax.xaxis.set_minor_locator(dates.YearLocator())
ticks = [datetime.datetime(i, 1, 1) for i in range(1840, 2021, 20)]
ax.set_xticks(ticks)
ax.set_xticklabels(range(1840,2021,20))
# locator = dates.AutoDateLocator(interval_multiples=True)
# ax.xaxis.set_major_locator(locator)
# ax.set_xticks(ticks)
ax.set_ylabel("Mass gain [Gt yr$^{-1}$]")
ax.set_xlabel("")
ax2 = fig.add_subplot(212)
kw = {'ax':ax2, 'drawstyle':'steps-post'}
MB = MB.sel({'time':slice('2019-01-01','2020-12-31')})
SMB = MB[['SMB','SMB_err']].to_dataframe()
SMB['SMB'].rename(index='SMB').plot(color='b', linestyle='-', legend=True, alpha=0.5, **kw)
# ax2.fill_between(SMB.index,
# (SMB['SMB']-SMB['SMB_err']).values.flatten(),
# (SMB['SMB']+SMB['SMB_err']).values.flatten(),
# color='b', alpha=0.1,
# step='post')
DBMB = -MB[['D','BMB','D_err','BMB_err']].to_dataframe()
DBMB['sum'] = DBMB[['D','BMB']].sum(axis='columns')
DBMB['err'] = (DBMB['D_err']**2 + DBMB['BMB_err']**2)**0.5
DBMB['sum'].rename(index='D+BMB').plot(color='gray', linestyle='--', legend=True, **kw)
# ax2.fill_between(DBMB.index.values,
# (DBMB['sum']-DBMB['err']).values.flatten(),
# (DBMB['sum']+DBMB['err']).values.flatten(),
# color='k', alpha=0.1,
# step='post')
MB = MB[['MB','MB_err']].to_dataframe()
MB['MB'].rename(index='MB').plot(color='k', linestyle='-', legend=True, **kw)
# ax2.fill_between(MB.index.values,
# (MB['MB'] - MB['MB_err']).values.flatten(),
# (MB['MB'] + MB['MB_err']).values.flatten(),
# color='k', alpha=0.5,
# step='post')
ax2.set_xlabel("Time [Month; Year]")
ax2.set_ylabel("Mass gain [Gt d$^{-1}$]")
ax2.xaxis.set_minor_locator(plt.NullLocator())
adj(ax2, ['left','bottom'])
ax2.grid(b=True, which='major', axis='y', alpha=0.33)
plt.legend(loc='lower right', framealpha=0, ncol=3)
plt.savefig('fig/MB_ts.png', transparent=False, bbox_inches='tight', dpi=300)
Improve the figure generated above with a manual Inkscape overlay
inkscape -d 300 -z ./fig/overview.svg -e ./fig/overview_w_plots.png
o ./fig/overview_w_plots.png
import xarray as xr
import numpy as np
import pandas as pd
from adjust_spines import adjust_spines as adj
import matplotlib.pyplot as plt
from matplotlib import rc
rc('font', size=10)
rc('text', usetex=False)
plt.close()
fig = plt.figure(1, figsize=(2.5,1.5)) # w,h
MB = xr.open_dataset("./TMB/MB_region.nc")
MB = MB.sel({'time':slice('1986','2099')})
for r in MB['region'].values:
fig.clf()
fig.set_tight_layout(True)
ax = fig.add_subplot(111)
kw = {'ax':ax, 'legend':False, 'drawstyle':'steps-post'}
MB_r = MB.sel({'region':r})
MB_r_SMB = MB_r['SMB_ROI'].to_dataframe(name='M2021')\
.resample('AS')\
.sum()\
.rename(columns={'M2021':'SMB'})
MB_r_SMB.plot(color='b', linestyle='-', alpha=0.5, **kw)
# ax.fill_between(MB_r_SMB.index,
# MB_r_SMB.values.flatten(),
# color='b', alpha=0.25, step='post')
(-1*(MB_r['D_ROI'] + MB_r['BMB_ROI'])).to_dataframe(name='M2021')\
.resample('AS')\
.sum()\
.rename(columns={'M2021':'D + BMB'})\
.plot(color='gray', linestyle='--', **kw)
MB_r_MB = MB_r['MB_ROI'].to_dataframe(name='M2021')\
.resample('AS')\
.sum()\
.rename(columns={'M2021':'MB'})
MB_r_MB.plot(color='k', linestyle='-', **kw)
# MB_r_pos = MB_r_MB.where(MB_r_MB['MB'] > 0, 0)
# MB_r_neg = MB_r_MB.where(MB_r_MB['MB'] < 0, 0)
# ax.fill_between(MB_r_pos.index, MB_r_pos.values.flatten(), color='r', alpha=0.1, step='post')
# ax.fill_between(MB_r_neg.index, MB_r_neg.values.flatten(), color='b', alpha=0.1, step='post')
# ax.fill_between(MB_r_MB.index,
# MB_r_MB.values.flatten(),
# color='k',
# alpha=0.25,
# step='post')
# plt.legend(loc='lower left')
# ax.set_ylabel("Mass gain [Gt yr$^{-1}$]")
ax.set_ylabel("")
ax.set_xlabel("")
ax.set_xticks(ax.get_xlim())
# ax.set_xticklabels(ax.get_xlim())
yt = {'NO':50, 'NE':50, 'NW':100, 'CW':100, 'SW':100, 'SE':200, 'CE':100}
ax.set_yticks([-yt[r], 0, yt[r]])
# if r == 'SE':
# ax.set_ylabel('Mass gain [Gt yr$^{-1}$]')
# ax.set_xlabel('Time [Year]')
# ax.set_xticklabels(['1986','2021'])
adj(ax, ['left','bottom'])
ax.grid(b=True, which='major', axis='y', alpha=0.33)
# plt.setp(ax.xaxis.get_majorticklabels(), rotation=90)
# plt.ion()
# plt.show()
plt.savefig('fig/MB_ts_'+r+'.png', transparent=True, bbox_inches='tight', dpi=300)
Improve the figure generated above with a manual Inkscape overlay
inkscape -d 300 -z ./fig/MB_cumsum_compare_manual.svg -e ./fig/MB_cumsum_compare_manual.png
o ./fig/MB_cumsum_compare_manual.png
import datetime
import xarray as xr
import numpy as np
import pandas as pd
from matplotlib.lines import Line2D
from adjust_spines import adjust_spines as adj
import matplotlib.dates as dates
import matplotlib.ticker as tck
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib import rc
rc('font', size=12)
rc('text', usetex=False)
# | Color | R | G | B | hex |
# |-------------+-----+-----+-----+---------|
# | light blue | 166 | 206 | 227 | #a6cee3 |
# | dark blue | 31 | 120 | 180 | #1f78b4 |
# | light green | 178 | 223 | 138 | #b2df8a |
# | dark green | 51 | 160 | 44 | #33a02c |
# | pink | 251 | 154 | 153 | #fb9a99 |
# | red | 227 | 26 | 28 | #e31a1c |
# | pale orange | 253 | 191 | 111 | #fdbf6f |
# | orange | 255 | 127 | 0 | #ff7f00 |
C_this = 'k'
C_M2019 = 'gray'
C_GMB = '#e31a1c'
C_VC = '#045a8d' # '1f78b4'
C_IMBIE = '#74a9cf'
C_PROMICE = '#fdbf6f'
OFFSET = 0 # where to put 0 for This Study
ADJ = 2050 # the offsets below align each product's arbitrary cumulative-sum constant for plotting
OFFSET_M2019 = 2800 + ADJ
OFFSET_GRACE = ADJ
OFFSET_VC = 2900 + ADJ
OFFSET_IMBIE = 2900 + ADJ
OFFSET_C2019 = 2100 + ADJ
OFFSET_K2015 = 1900 + ADJ
# plt.close(1)
fig = plt.figure(1, figsize=(8,6)) # w,h
# get_current_fig_manager().window.move(0,0)
fig.clf()
fig.set_tight_layout(True)
# import matplotlib.gridspec as gridspec
# gs = gridspec.GridSpec(1, 1) #w,h
# ax = plt.subplot(gs[:,:])
ax = fig.add_subplot(111)
axins = ax.inset_axes([0.15, 0.08, 0.7, 0.38])
kw = {'clip_on':True, 'linewidth':1, 'legend':False}
kw_err = {'clip_on':True, 'linewidth':1, 'alpha':0.1}
# MB (this)
MB = xr.open_dataset('./TMB/MB_sector.nc')
MB = MB.sel({'time':slice('1971','2100')})
BMB = xr.open_dataset('./tmp/BMB.nc').sum(dim='region').sel({'time':slice('1971','2100')})
# remove un-used sector-level data
BMB = BMB.drop_vars([_ for _ in BMB.keys() if 'sector' in _])\
.drop_vars(['GF_region','vel_region','VHD_region','sector'])
BMB = BMB.rename_vars({'GF_region_err':'GF_err', 'vel_region_err':'vel_err', 'VHD_region_err':'VHD_err'})
this = MB['MB'].to_dataframe(name='MB').resample('1D').ffill().cumsum()
# this['err'] = MB['D_err'].to_dataframe(name='err')[::-1].cumsum()[::-1]
D_err = MB['D_err'].to_dataframe(name='err')
SMB_err = 0
# GF_err carries the recent-period BMB uncertainty; the 0*VHD_err term only
# supplies the DataFrame shape and index
BMB_err_recent = BMB['GF_err'].values + 0*BMB['VHD_err'].rename('err').to_dataframe(name='err')
BMB_err_reconstruct = MB['BMB_err'].sel(time=slice('1971','1985')).to_dataframe(name='err')
BMB_err = pd.concat([BMB_err_reconstruct*365, BMB_err_recent])
this_err = (D_err**2 + SMB_err**2 + BMB_err**2)**0.5
this['err'] = this_err[::-1].cumsum()[::-1] # reverse-accumulate: uncertainty grows backward in time
# TODO: expand error from last observed through projected for graphic
# Uncertainty is treated correctly in published product
# Accumulating it here and expanding is not needed - it is not visible at the scale on this graphic.
this.loc[:'1985'] = this.loc[:'1985'].resample('Y').mean().resample('1D').bfill()
this['MB'] = this['MB'] + OFFSET
# Blend the error-envelope centerline: the weight tt is 1 before 2000 (use the
# 365-day smoothed series) and tapers linearly to 0 at the end (use the daily series)
tt = (this.index[-1] - this.index) / (this.index[-1] - pd.Timestamp('2000-01-01'))
tt = tt.where(tt <= 1, 1) # clip weights to <= 1
this['bumpy'] = this['MB']
this['smooth'] = this['bumpy'].rolling(365).mean()
this['err_core'] = (this['bumpy'] * (1-tt)) + (this['smooth'] * tt)
for a in [ax,axins]:
p_this = this['MB'].plot(ax=a, color=C_this, drawstyle='steps-post', **kw)
a.fill_between(this.index,
(this['err_core'] - this['err']).values.flatten(),
(this['err_core'] + this['err']).values.flatten(),
color=C_this, clip_on=True, alpha=0.05)
# Mouginot
if 'mouginot' not in locals():
<<load_mouginot>>
M2019 = mouginot['MB'].sum(dim='region').to_dataframe('M2019').cumsum()
M2019 = M2019 - np.min(M2019) - OFFSET_M2019
M2019_err = mouginot['D_err'].sum(dim='region').to_dataframe('M2019')[::-1].cumsum()[::-1]
M2019 = M2019[M2019.index.year >= 1971]
M2019_err = M2019_err[M2019_err.index.year >= 1971]
for a in [ax,axins]:
p_M2019 = M2019.plot(ax=a, label='M2019', color=C_M2019, **kw)
(M2019-M2019_err).plot(ax=a, color=C_M2019, alpha=0.25, linestyle='--', **kw)
(M2019+M2019_err).plot(ax=a, color=C_M2019, alpha=0.25, linestyle='--', **kw)
# GRACE
<<load_grace>>
grace_scale = 1
grace['plot'] = grace_scale * grace['mass'] - OFFSET_GRACE
grace['plot_err'] = grace_scale * grace['err']
for a in [ax,axins]:
p_gmb = grace['plot'].plot(ax=a, label='GMB', color=C_GMB, **kw)
a.fill_between(grace['plot'].index,
(grace['plot'] + grace['plot_err']),
(grace['plot'] - grace['plot_err']),
color=C_GMB, **kw_err)
# grace_scale, grace_offset = 0.84, -2800
# grace['plot'] = grace_scale * grace['mass']+grace_offset
# grace['plot_err'] = grace_scale * grace['err']
# p_gmb = grace['plot'].plot(ax=ax, label='GRACE', color=C_GMB, **kw)
# ax.fill_between(grace['plot'].index,
# (grace['plot'] + grace['plot_err']),
# (grace['plot'] - grace['plot_err']),
# color=C_GMB, **kw_err)
# VC
<<load_vc>>
vc_sum = vc['SEC'].sum(dim='sector').to_dataframe(name='VC').cumsum()
vc_sum = vc_sum - np.min(vc_sum) - OFFSET_VC
vc_err = vc['err'].sum(dim='sector').to_dataframe(name='VC')[::-1].cumsum()[::-1]
for a in [ax,axins]:
p_VC = vc_sum.plot(ax=a, label='VC', color=C_VC, **kw)
(vc_sum - vc_err).plot(ax=a, linestyle='--', color=C_VC, **kw)
(vc_sum + vc_err).plot(ax=a, linestyle='--', color=C_VC, **kw)
# IMBIE
<<load_imbie>>
imbie = imbie[['MB_cum','MB_cum_err']].dropna()
imbie['MB_cum'] = imbie['MB_cum'] - np.min(imbie['MB_cum']) - OFFSET_IMBIE
for a in [ax,axins]:
lw = kw['linewidth']; kw['linewidth']=3
p_IMBIE = imbie['MB_cum'].plot(ax=a, label='IMBIE', color=C_IMBIE, **kw)
kw['linewidth']=lw
a.fill_between(imbie.index,
imbie['MB_cum'] - imbie['MB_cum_err'][::-1].values,
imbie['MB_cum'] + imbie['MB_cum_err'][::-1].values,
color=C_IMBIE, **kw_err)
# PROMICE MB (Colgan 2019)
<<load_PROMICE_MB>>
P = promice['MB'].sum(dim="sector").to_dataframe('C2019').cumsum()
P = P - np.min(P) - OFFSET_C2019
P_err = ((promice['MB_err'].sum(dim="sector")) * 0 + 32).to_dataframe('C2019')[::-1].cumsum()[::-1] # assume a constant 32 Gt/yr uncertainty, reverse-accumulated
for a in [ax,axins]:
p_PROMICE = P.plot(ax=a, label='C2019', color=C_PROMICE, **kw)
kw_err['alpha'] = 0.33
a.fill_between(P.index.values, (P-P_err).values.flatten(), (P+P_err).values.flatten(), color = C_PROMICE, **kw_err)
<<load_K2015>>
k2015.loc[k2015.index[-1] + datetime.timedelta(days=365)] = k2015.iloc[-1] # replicate last row for plotting
k2015['MB'] = k2015['SMB_orig'] - k2015['D_orig']
k2015['MB_err'] = (k2015['SMB_err']**2 + k2015['D_err']**2)**0.5
K = k2015['MB'].cumsum().loc['1971':]
K = K - np.min(K) - OFFSET_K2015
K_err = (k2015['MB_err']*0 + 36)[::-1].cumsum()[::-1].loc['1971':] # assume a constant 36 Gt/yr uncertainty, reverse-accumulated
for a in [ax,axins]:
p_k2015 = K.plot(ax=a, label='K2015', color='cyan', drawstyle='steps-post', **kw)
kw_err['alpha'] = 0.05
a.fill_between(K.index.values, (K-K_err).values.flatten(), (K+K_err).values.flatten(),
color='cyan', **kw_err)
hl = ax.hlines(0, datetime.datetime(1971, 1, 1), datetime.datetime(2100,1,1), color='k', alpha=0.25)
ax.xaxis.set_minor_locator(dates.YearLocator(1, month=1, day=1))
axins.set_xlim(datetime.datetime(2010, 1, 1), ax.get_xlim()[1])
axins.set_ylim(-6000,-3000)
axins.patch.set_alpha(0)
axins.set_xlabel('')
ax2 = ax.twinx()
adj(ax, ['left','bottom'])
adj(axins, ['left','bottom'])
adj(ax2, ['right','bottom'])
for label in (axins.get_xticklabels() + axins.get_yticklabels()):
label.set_fontsize(10)
# SLE (mm) = mass of ice (Gt) x (1 / 361.8)
lims = np.array(ax.get_ylim()) * (-1/361.8)
ax2.set_ylim(lims)
ax2.set_ylabel('Cumulative eustatic sea level change [mm]')
ax2.yaxis.set_minor_locator(tck.AutoMinorLocator())
ax.set_ylabel("Cumulative mass change [Gt]")
ax.set_xlabel("Time [Year]")
# ax.set_ylim([0,7000])
# ax.set_yticklabels(['0','-1000','-2000','-3000','-4000','-5000','-6000','-7000'])
# ax.grid(b=True, which='major', axis='y', alpha=0.33)
color = [C_this, C_M2019, C_GMB, C_VC, C_IMBIE, C_PROMICE, 'cyan']
label = ['This Study', 'Mouginot 2019', 'GMB', 'VC', 'IMBIE', 'Colgan 2019', 'Kjeldsen 2015']
lines = [Line2D([0], [0], color=c) for c in color]
legend = plt.legend(lines, label, framealpha=1, loc='upper right', bbox_to_anchor=(0.95,1), ncol=1, facecolor='w', fontsize=10)
# plt.legend()
plt.savefig('fig/MB_cumsum_compare.png', transparent=False, bbox_inches='tight', dpi=300)
plt.savefig('fig/MB_cumsum_compare.svg', transparent=False, bbox_inches='tight', dpi=300)
import xarray as xr
import numpy as np
import pandas as pd
from adjust_spines import adjust_spines as adj
import scipy.stats as sps
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib import rc, cm
rc('font', size=12)
rc('text', usetex=False)
fig = plt.figure(1, figsize=(3.26,7)) # w,h
fig.clf()
fig.set_tight_layout(True)
axSMB = fig.add_subplot(311)
axD = fig.add_subplot(312)
axMB = fig.add_subplot(313)
kw_adj = {'marker':(4,0,90), 'markersize':7, 'markevery':[-1,1], 'mfc':'none', 'clip_on':False}
# Mouginot SMB, D, and MB (Mouginot, 2019)
<<load_mouginot>>
mouginot = mouginot.where(mouginot['time'].dt.year >= 1987).dropna(dim='time')
mouginot = mouginot.sum(dim='region')
MB = xr.open_dataset('./TMB/MB_region.nc')\
.resample({'time':'A-JUN'})\
.sum()
MB = MB.where((MB['time'].dt.year <= 2018) & (MB['time'].dt.year >= 1987)).dropna(dim='time')
years = MB.time.dt.year.values
color = years - min(years); color = color / max(color)
y2str = [_[2:4] for _ in years.astype(str)]
cmap = cm.viridis(color) # https://stackoverflow.com/questions/51034408/how-to-make-the-color-of-one-end-of-colorbar-darker-in-matplotlib
cmap = mpl.colors.ListedColormap(cmap[:-3,:-1])
# graphics
def my_plot(ax, x, xerr, y, yerr, only_dot=False):
scatter = ax.scatter(x.values, y.values, alpha=0, c=color, cmap=cmap)
if only_dot == False:
e = ax.errorbar(x.values, y.values, xerr=xerr, yerr=yerr, fmt=',',
color='k', alpha=0.5, linewidth=0.5)
else:
for i,s in enumerate(y2str):
ax.plot([x.values[i], x.values[i]],
[y.values[i], y.values[i]],
c=cmap(color[i]), **kw_adj)
for i,s in enumerate(y2str):
if only_dot == False:
ax.text(x.values[i], y.values[i], s,
c = cmap(color)[i],
fontsize=10,
fontweight='bold',
horizontalalignment='center',
verticalalignment='center')
if only_dot == True: return
slope, intercept, r_value, p_value, std_err = sps.linregress(x.values, y.values)
bias = np.mean(x.values - y.values)
RMSE = np.sqrt(np.mean((x.values - y.values)**2))
ax.text(1.0, -0.03, "r$^2$: %.2f\nbias: %d\nRMSE: %d\nslope: %.1f"
% (round(r_value**2,2), round(bias), round(RMSE), round(slope,1)),
transform=ax.transAxes,
horizontalalignment='right',
fontsize=9)
return scatter
my_plot(axSMB, MB['SMB'], MB['SMB_err'], mouginot['SMB'], mouginot['SMB_err'])
my_plot(axD, MB['D'], MB['D_err'], mouginot['D'], mouginot['D_err'])
my_plot(axD, MB['D']+MB['BMB'], (MB['D_err']**2+MB['BMB_err']**2)**0.5, mouginot['D'], mouginot['D_err'], only_dot=True)
my_plot(axMB, MB['MB']-MB['BMB'], MB['MB_err'], mouginot['MB'], mouginot['MB_err'], only_dot=True)
s = my_plot(axMB, MB['MB'], MB['MB_err'], mouginot['MB'], mouginot['MB_err'])
axSMB.text(0, 0.9, 'SMB', transform=axSMB.transAxes)
axSMB.set_xticks([0,600])
axSMB.set_ylabel('Mouginot 2019', labelpad=-25)
axD.text(0, 0.9, 'D [$\diamond$=+BMB]', transform=axD.transAxes)
axD.set_xticks([375,600])
axMB.text(0, 0.9, 'MB$^{*}$ [$\diamond$=-BMB]', transform=axMB.transAxes)
axMB.set_xticks([-450, 0, 150])
axMB.set_xlabel('This Study', labelpad=-10)
axMB.set_xticklabels([str(axMB.get_xticks()[0]), '', str(axMB.get_xticks()[-1])])
for ax in [axSMB,axD,axMB]:
ax.set_yticks(ax.get_xticks())
ax.set_xlim(ax.get_xticks()[[0,-1]])
ax.set_ylim(ax.get_xlim())
ax.plot(ax.get_xlim(), ax.get_ylim(), color='k', alpha=0.25, linestyle='--')
adj(ax, ['left','bottom'])
# https://stackoverflow.com/questions/17478165/
def axis_to_fig(axis):
fig = axis.figure
def transform(coord):
return fig.transFigure.inverted().transform(
axis.transAxes.transform(coord))
return transform
def add_sub_axes(axis, rect):
fig = axis.figure
left, bottom, width, height = rect
trans = axis_to_fig(axis)
figleft, figbottom = trans((left, bottom))
figwidth, figheight = trans([width,height]) - trans([0,0])
return fig.add_axes([figleft, figbottom, figwidth, figheight])
# axCB = add_sub_axes(axMB, [1.1, 0.0, 0.075, 1])
axCB = add_sub_axes(axMB, [0.0, -0.50, 1, 0.075])
cb = plt.colorbar(s, cax=axCB, orientation='horizontal', ticks=[0,1])
cb.set_alpha(1) # https://stackoverflow.com/questions/4478725/
cb.draw_all()
cb.ax.xaxis.set_ticks_position('top')
cb.ax.xaxis.set_label_position('top')
cb.set_label('Time [year]', labelpad=-10)
axCB.set_xticklabels(mouginot['time'].dt.year[[0,-1]].values)
plt.savefig('fig/mouginot_2019.png', transparent=False, bbox_inches='tight', dpi=300)
import xarray as xr
import numpy as np
import pandas as pd
import scipy as sp
import scipy.stats as sps
from adjust_spines import adjust_spines as adj
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib import rc, cm
rc('font', size=12)
rc('text', usetex=False)
# plt.close(1)
fig = plt.figure(1, figsize=(3.26,7)) # w,h
# get_current_fig_manager().window.move(0,0)
fig.clf()
fig.set_tight_layout(True)
# import matplotlib.gridspec as gridspec
# gs = gridspec.GridSpec(1, 1) #w,h
# ax = plt.subplot(gs[:,:])
axSMB = fig.add_subplot(311)
axD = fig.add_subplot(312)
axMB = fig.add_subplot(313)
# PROMICE MB (Colgan 2019)
<<load_PROMICE_MB>>
promice = promice.sum(dim='sector')
MB = xr.open_dataset('./TMB/MB_region.nc')\
.resample({'time':'YS'})\
.sum()\
.reindex(time=promice['time'])
kw_adj = {'marker':(4,0,90), 'markersize':7, 'markevery':[-1,1], 'mfc':'none', 'clip_on':False}
years = MB.time.dt.year.values
color = years - min(years); color = color / max(color)
y2str = [_[2:4] for _ in years.astype(str)]
cmap = cm.viridis(color)
cmap = mpl.colors.ListedColormap(cmap[:-3,:-1])
# graphics
def my_plot(ax, x, xerr, y, yerr, only_dot=False):
scatter = ax.scatter(x.values, y.values, alpha=0, c=color, cmap=cmap)
if only_dot == False:
e = ax.errorbar(x.values, y.values, xerr=xerr, yerr=yerr, fmt=',', color='k', alpha=0.5, linewidth=0.5)
else:
for i,s in enumerate(y2str):
ax.plot([x.values[i], x.values[i]],
[y.values[i], y.values[i]],
c=cmap(color[i]), **kw_adj)
for i,s in enumerate(y2str):
if only_dot == False:
ax.text(x.values[i], y.values[i], s,
c = cmap(color)[i],
fontsize=10,
fontweight='bold',
horizontalalignment='center',
verticalalignment='center')
if only_dot == True: return
slope, intercept, r_value, p_value, std_err = sps.linregress(x.values, y.values)
bias = np.mean(x.values - y.values)
# np.sqrt(np.mean((predictions-targets)**2))
RMSE = np.sqrt(np.mean((x.values - y.values)**2))
# ax.plot(x, slope * x + intercept, 'k', linewidth=0.5)
ax.text(1.0, -0.03, "r$^2$: %.2f\nbias: %d\nRMSE: %d\nslope: %.1f"
% (round(r_value**2,2), round(bias), round(RMSE), round(slope,1)),
transform=ax.transAxes,
horizontalalignment='right',
fontsize=9)
return scatter
my_plot(axSMB, MB['SMB'], MB['SMB_err'], promice['SMB'], promice['SMB_err'])
my_plot(axD, MB['D'], MB['D_err'], promice['D'], promice['D_err'])
my_plot(axD, MB['D']+MB['BMB'], (MB['D_err']**2+MB['BMB_err']**2)**0.5, promice['D'], promice['D_err'], only_dot=True)
s = my_plot(axMB, MB['MB'], MB['MB_err'], promice['MB'], promice['MB_err'])
my_plot(axMB, MB['MB']-MB['BMB'], MB['MB_err'], promice['MB'], promice['MB_err'], only_dot=True)
axSMB.text(0, 0.9, 'SMB', transform=axSMB.transAxes)
axSMB.set_xticks([0,600])
axSMB.set_ylabel('Colgan 2019', labelpad=-25)
axD.text(0, 0.9, 'D [$\diamond$=+BMB]', transform=axD.transAxes)
axD.set_xticks([300,600])
axMB.text(0, 0.9, 'MB$^{*}$ [$\diamond$=-BMB]', transform=axMB.transAxes)
axMB.set_xticks([-450, 0, 150])
axMB.set_xlabel('This Study', labelpad=-10)
axMB.set_xticklabels([str(axMB.get_xticks()[0]), '', str(axMB.get_xticks()[-1])])
for ax in [axSMB,axD,axMB]:
ax.set_yticks(ax.get_xticks())
ax.set_xlim(ax.get_xticks()[[0,-1]])
ax.set_ylim(ax.get_xlim())
ax.plot(ax.get_xlim(), ax.get_ylim(), color='k', alpha=0.25, linestyle='--')
adj(ax, ['left','bottom'])
# https://stackoverflow.com/questions/17478165/
def axis_to_fig(axis):
fig = axis.figure
def transform(coord):
return fig.transFigure.inverted().transform(
axis.transAxes.transform(coord))
return transform
def add_sub_axes(axis, rect):
fig = axis.figure
left, bottom, width, height = rect
trans = axis_to_fig(axis)
figleft, figbottom = trans((left, bottom))
figwidth, figheight = trans([width,height]) - trans([0,0])
return fig.add_axes([figleft, figbottom, figwidth, figheight])
# axCB = add_sub_axes(axMB, [1.1, 0.0, 0.075, 1])
axCB = add_sub_axes(axMB, [0.0, -0.50, 1, 0.075])
cb = plt.colorbar(s, cax=axCB, orientation='horizontal', ticks=[0,1])
cb.set_alpha(1) # https://stackoverflow.com/questions/4478725/
cb.draw_all()
cb.ax.xaxis.set_ticks_position('top')
cb.ax.xaxis.set_label_position('top')
cb.set_label('Time [year]', labelpad=-10)
axCB.set_xticklabels(promice['time'].dt.year[[0,-1]].values)
plt.savefig('fig/colgan_2019.png', transparent=False, bbox_inches='tight', dpi=300)
import xarray as xr
import numpy as np
import pandas as pd
import scipy as sp
import scipy.stats as sps
from adjust_spines import adjust_spines as adj
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib import rc, cm
rc('font', size=12)
rc('text', usetex=False)
# ONE COLUMN: %\includegraphics[width=8.3cm]{FILE NAME}
fig = plt.figure(1, figsize=(3.26, 7)) # w,h
fig.clf()
fig.set_tight_layout(True)
axGRACE = fig.add_subplot(311)
axVC = fig.add_subplot(312)
axIMBIE = fig.add_subplot(313)
# graphics
def my_plot(ax, x, xerr, y, yerr):
if type(x) == xr.core.dataarray.DataArray:
years = x.time.dt.year.values
else:
years = x.index.year.values
print(years)
color = years - min(years); color = color / max(color)
y2str = [_[2:4] for _ in years.astype(str)]
# https://stackoverflow.com/questions/51034408/
cmap = cm.viridis(color)
cmap = mpl.colors.ListedColormap(cmap[:-3,:-1])
e = ax.errorbar(x.values, y.values, xerr=xerr, yerr=yerr, fmt=',', color='k', alpha=0.5, linewidth=0.5)
scatter = ax.scatter(x.values, y.values, alpha=0, c=color, cmap=cmap)
for i,s in enumerate(y2str):
ax.text(x.values[i], y.values[i], s,
c = cmap(color)[i],
fontsize=10,
fontweight='bold',
horizontalalignment='center',
verticalalignment='center')
slope, intercept, r_value, p_value, std_err = sps.linregress(x.values, y.values)
bias = np.mean(x.values - y.values)
RMSE = np.sqrt(np.mean((x.values - y.values)**2))
ax.text(1.0, -0.03,
"r$^2$: %.2f\nbias: %d\nRMSE: %d\nslope: %.1f"
% (round(r_value**2,2), round(bias), round(RMSE), round(slope,1)),
transform=ax.transAxes,
horizontalalignment='right',
fontsize=9)
axCB = add_sub_axes(ax, [0.05, 0.95, 0.3, 0.05])
cb = plt.colorbar(scatter, cax=axCB, orientation='horizontal', ticks=[0,1])
cb.set_alpha(1) # https://stackoverflow.com/questions/4478725/
cb.draw_all()
axCB.set_xticklabels(years[[0,-1]])
for tick in axCB.xaxis.get_major_ticks():
tick.label.set_fontsize(9)
return scatter
# https://stackoverflow.com/questions/17478165/
def axis_to_fig(axis):
fig = axis.figure
def transform(coord):
return fig.transFigure.inverted().transform(
axis.transAxes.transform(coord))
return transform
def add_sub_axes(axis, rect):
fig = axis.figure
left, bottom, width, height = rect
trans = axis_to_fig(axis)
figleft, figbottom = trans((left, bottom))
figwidth, figheight = trans([width,height]) - trans([0,0])
return fig.add_axes([figleft, figbottom, figwidth, figheight])
# this
this_MB = xr.open_dataset('./TMB/MB_sector.nc')
MB = this_MB[['MB','MB_err']].to_dataframe()
MB = MB.resample('MS').sum()
<<load_grace>>
grace = grace.resample('MS').mean().interpolate(method='time').diff()
grace[grace.index.year == 2017] = np.nan
grace = grace.dropna()
grace = grace.iloc[1:-1]
# this_MB_grace = this_MB.sel({'time':grace.index})
this_MB_grace = MB.merge(grace, left_index=True, right_index=True)
this_MB_grace = this_MB_grace.resample('YS').sum()
s = my_plot(axGRACE, this_MB_grace['MB'], this_MB_grace['MB_err'], this_MB_grace['mass'], this_MB_grace['err'])
# this
this_MB = xr.open_dataset('./TMB/MB_sector.nc')\
.resample({'time':'YS'})\
.sum()[['MB','MB_err']]
<<load_vc>>
vc = vc.sum(dim='sector')\
.resample({'time':'YS'})\
.sum()
this_MB_vc = this_MB.sel({'time':slice(vc.time[0], vc.time[-1])})
<<load_imbie>>
# imbie = imbie.resample('1D').mean().interpolate(method='time')
imbie = imbie.resample('YS').mean()
imbie = imbie[imbie.index.year > 1986]
imbie = imbie.dropna()
this_MB_imbie = this_MB.sel({'time':slice(imbie.index[0], imbie.index[-1])})
s = my_plot(axVC, this_MB_vc['MB'], this_MB_vc['MB_err'], vc['SEC'], vc['err'])
s = my_plot(axIMBIE, this_MB_imbie['MB'], this_MB_imbie['MB_err'], imbie['MB'], imbie['MB_err'])
for ax in [axGRACE, axVC, axIMBIE]:
ax.set_xticks([-550, 0, 200])
ax.set_yticks(ax.get_xticks())
ax.set_xlim(ax.get_xticks()[[0,-1]])
ax.set_ylim(ax.get_xlim())
if ax != axGRACE: ax.set_yticklabels(['','',''])
if ax != axIMBIE: ax.set_xticklabels(['','',''])
ax.plot(ax.get_xlim(), ax.get_ylim(), color='k', alpha=0.25, linestyle='--')
adj(ax, ['left','bottom'])
axGRACE.set_ylabel('GMB', labelpad=-25)
axVC.set_ylabel('VC')
axIMBIE.set_ylabel('IMBIE')
axIMBIE.set_xlabel('This Study', labelpad=-10)
axIMBIE.set_xticklabels([str(axIMBIE.get_xticks()[0]), '', str(axIMBIE.get_xticks()[-1])])
plt.savefig('fig/this_v_grace_vc_imbie.png', transparent=False, bbox_inches='tight', dpi=300)
import xarray as xr
import numpy as np
import pandas as pd
from adjust_spines import adjust_spines as adj
import matplotlib.pyplot as plt
from matplotlib import rc
rc('font', size=10)
rc('text', usetex=False)
# plt.close(1)
fig = plt.figure(1, figsize=(8,8)) # w,h
fig.clf()
fig.set_tight_layout(True)
ax = fig.add_subplot(311)
C = 'k'; L='-'
C_HIRHAM = 'gray'; L_HIRHAM='--'
C_MAR = 'blue'; L_MAR = ':'
C_RACMO = 'red'; L_RACMO = '-'
# MB (this)
MB = xr.open_dataset("./TMB/MB_sector.nc")
MB = MB.sel({'time':slice('1986-01-01','2020-12-31')})
MB = MB.sum(dim='sector')
kw = {'ax':ax, 'legend':False, 'drawstyle':'steps-post'}
MB_ann = MB.to_dataframe()\
.resample('AS')\
.sum()\
.rename(columns={'MB':'This Study',
'MB_HIRHAM':'HIRHAM',
'MB_MAR':'MAR',
'MB_RACMO':'RACMO'})
MB_ann.loc[pd.to_datetime('2021-01-01', format='%Y-%m-%d')] = MB_ann.iloc[-1] # replicate last row for steps-post plotting
MB_ann['This Study'].plot(color=C, linestyle=L, **kw)
MB_ann['HIRHAM'].plot(color=C_HIRHAM, linestyle=L_HIRHAM, **kw)
MB_ann['MAR'].plot(color=C_MAR, linestyle=L_MAR, **kw)
MB_ann['RACMO'].plot(color=C_RACMO, linestyle=L_RACMO, **kw)
# ax.fill_between(MB_ann.index, MB_ann.values.flatten(), color='k', alpha=0.1, step='post')
# plt.legend(loc='lower left', ncol=2)
plt.legend(loc='lower left', ncol=2, framealpha=0)
ax2 = fig.add_subplot(312)
MB = MB.sel({'time':slice('2019-01-01','2020-12-31')})
kw = {'ax': ax2, 'drawstyle':'steps-post'}
MB['MB'].plot(color=C, linestyle=L, **kw)
MB['MB_HIRHAM'].plot(color=C_HIRHAM, linestyle=L_HIRHAM, **kw)
MB['MB_MAR'].plot(color=C_MAR, linestyle=L_MAR, **kw)
MB['MB_RACMO'].plot(color=C_RACMO, linestyle=L_RACMO, **kw)
ax3 = fig.add_subplot(313)
kw = {'ax': ax3, 'drawstyle':'steps-post'}
(MB['MB_HIRHAM']-MB['MB']).plot(color=C_HIRHAM, linestyle=L_HIRHAM, **kw)
(MB['MB_MAR']-MB['MB']).plot(color=C_MAR, linestyle=L_MAR, **kw)
(MB['MB_RACMO']-MB['MB']).plot(color=C_RACMO, linestyle=L_RACMO, **kw)
ax.set_ylabel("Mass gain [Gt yr$^{-1}$]")
ax.set_xlabel("")
adj(ax, ['left','bottom'])
ax2.set_xlabel("Time [Year]")
ax2.set_ylabel("Mass gain [Gt d$^{-1}$]")
adj(ax2, ['left','bottom'])
ax3.set_xlabel("Time [Year]")
ax3.set_ylabel("Difference [Gt d$^{-1}$]")
adj(ax3, ['left','bottom'])
plt.savefig('fig/MB_3RCM.png', transparent=False, bbox_inches='tight', dpi=300)
import xarray as xr
import numpy as np
import pandas as pd
import matplotlib.gridspec as gridspec
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib import rc, cm
from adjust_spines import adjust_spines as adj
import scipy.stats as sps
rc('font', size=8)
rc('text', usetex=False)
fig = plt.figure(1, figsize=(((12*0.666)/2.54),((23*0.666)/2.54))) # w,h
fig.clf()
fig.set_tight_layout(True)
gs = gridspec.GridSpec(ncols=3, nrows=7, figure=fig) #w,h
gs.update(wspace=1, hspace=1)
<<load_mouginot>>
# this
ds = xr.open_dataset('./TMB/MB_region.nc')\
.resample({'time':'A-JUN'})\
.sum()
MB = ds['MB_ROI']
SMB = ds['SMB_ROI']
D = ds['D_ROI'] # + ds['BMB_ROI']
mouginot = mouginot.where(mouginot['time'].dt.year >= 1987).dropna(dim='time')
SMB = SMB.where((SMB['time'].dt.year <= 2018) & (SMB['time'].dt.year >= 1987)).dropna(dim='time')
D = D.where((D['time'].dt.year <= 2018) & (D['time'].dt.year >= 1987)).dropna(dim='time')
MB = MB.where((MB['time'].dt.year <= 2018) & (MB['time'].dt.year >= 1987)).dropna(dim='time')
region = MB['region'].values
<<round_axes>>
def my_plot(ax, x, y):
years = x.time.dt.year.values
color = years - min(years); color = color / max(color)
y2str = [_[2:4] for _ in years.astype(str)]
# https://stackoverflow.com/questions/51034408/how-to-make-the-color-of-one-end-of-colorbar-darker-in-matplotlib
cmap = cm.viridis(color)
cmap = mpl.colors.ListedColormap(cmap[:-3,:-1])
scatter = ax.scatter(x.values, y.values, alpha=0.75, c=color, cmap=cmap, clip_on=False, s=5)
if x.region.values == 'NW':
if ax == axSMB: ax.set_title('SMB')
if ax == axD: ax.set_title('D')
if ax == axMB: ax.set_title('MB$^{*}$')
if (ax == axSMB):
ax.set_ylabel(x.region.values)
if (x.region.values == 'SE'):
ax.set_ylabel('M2019\nSE', labelpad=-10)
ax.set_xlabel('This Study')
if ax == axSMB:
ax.set_ylim([-90,220])
ax.set_xlim([-90,220])
if ax == axD:
ax.set_ylim([20,180])
ax.set_xlim([20,180])
if ax == axMB:
ax.set_ylim([-120,80])
ax.set_xlim([-120,80])
ax.set_xticks(round_axes(ax.get_xlim(), ax.get_ylim()))
ax.set_yticks(ax.get_xlim())
slope, intercept, r_value, p_value, std_err = sps.linregress(x.values, y.values)
bias = np.mean(x.values - y.values)
RMSE = np.sqrt(np.mean((x.values - y.values)**2))
if ax==axD and x.region.values == 'SE':
s = "%.2f r$^{2}$\n%d bias\n%d RMSE"
else:
s = "%.2f\n%d\n%d"
ax.text(0, 1, s % (round(r_value**2,2), round(bias), round(RMSE)),
transform=ax.transAxes,
horizontalalignment='left',
verticalalignment='top',
fontsize=8)
ax.plot(ax.get_xlim(), ax.get_ylim(), color='k', alpha=0.25, linestyle='--')
adj(ax, ['left','bottom'])
return scatter
# graphics
for i,region in enumerate(['NW','NO','NE','CW','CE','SW','SE']):
axSMB = fig.add_subplot(gs[i,0])
axD = fig.add_subplot(gs[i,1])
axMB = fig.add_subplot(gs[i,2])
my_plot(axSMB, SMB.sel({'region':region}), mouginot['SMB'].sel({'region':region}))
my_plot(axD, D.sel({'region':region}), mouginot['D'].sel({'region':region}))
s = my_plot(axMB, MB.sel({'region':region}), mouginot['MB'].sel({'region':region}))
# https://stackoverflow.com/questions/17478165/
def axis_to_fig(axis):
fig = axis.figure
def transform(coord):
return fig.transFigure.inverted().transform(
axis.transAxes.transform(coord))
return transform
def add_sub_axes(axis, rect):
fig = axis.figure
left, bottom, width, height = rect
trans = axis_to_fig(axis)
figleft, figbottom = trans((left, bottom))
figwidth, figheight = trans([width,height]) - trans([0,0])
return fig.add_axes([figleft, figbottom, figwidth, figheight])
axCB = add_sub_axes(axD, [0, -0.99, 1, 0.1])
cb = plt.colorbar(s, cax=axCB, orientation='horizontal', ticks=[0,1])
cb.set_alpha(1) # https://stackoverflow.com/questions/4478725/
cb.draw_all()
axCB.set_xticklabels(MB['time'].dt.year[[0,-1]].values)
# plt.legend()
plt.savefig('fig/mouginot_2019_regions.png', transparent=False, bbox_inches='tight', dpi=150)
import xarray as xr
import numpy as np
import pandas as pd
import matplotlib.gridspec as gridspec
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib import rc, cm
from adjust_spines import adjust_spines as adj
import scipy as sp
import scipy.stats as sps
rc('font', size=8)
rc('text', usetex=False)
# plt.close(1)
fig = plt.figure(1, figsize=(((12*0.666)/2.54),((22*0.666)/2.54))) # w,h
# get_current_fig_manager().window.move(0,0)
fig.clf()
fig.set_tight_layout(True)
gs = gridspec.GridSpec(ncols=3, nrows=8, figure=fig) #w,h
gs.update(wspace=1, hspace=1)
# PROMICE MB (Colgan 2019)
<<load_PROMICE_MB>>
# this
ds = xr.open_dataset('./TMB/MB_sector.nc')\
.resample({'time':'YS'})\
.sum()\
.reindex(time=promice['time'])
MB = ds['MB_ROI']
SMB = ds['SMB_ROI']
D = ds['D_ROI'] # + ds['BMB_ROI']
# collapse all Zwally sectors to just super-sectors
promice['Z'] = (('sector'), (promice.sector.values/10).astype(int))
promice = promice.groupby('Z').sum().rename({'Z':'sector'})
MB['Z'] = (('sector'), (MB.sector.values/10).astype(int))
MB = MB.groupby('Z').sum().rename({'Z':'sector'})
SMB['Z'] = (('sector'), (SMB.sector.values/10).astype(int))
SMB = SMB.groupby('Z').sum().rename({'Z':'sector'})
D['Z'] = (('sector'), (D.sector.values/10).astype(int))
D = D.groupby('Z').sum().rename({'Z':'sector'})
sector = MB['sector']
<<round_axes>>
def my_plot(ax, x, y):
years = x.time.dt.year.values
color = years - min(years); color = color / max(color)
y2str = [_[2:4] for _ in years.astype(str)]
# https://stackoverflow.com/questions/51034408/how-to-make-the-color-of-one-end-of-colorbar-darker-in-matplotlib
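# trim the lightest colors off the top of viridis (and drop the alpha
# column) so markers for the most recent years stay legible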
cmap = cm.viridis(color)
cmap = mpl.colors.ListedColormap(cmap[:-3,:-1])
scatter = ax.scatter(x.values, y.values, alpha=0.75, c=color, cmap=cmap, clip_on=False, s=5)
if x.sector.values == 1:
if ax == axSMB: ax.set_title('SMB')
if ax == axD: ax.set_title('D')
if ax == axMB: ax.set_title('MB$^{*}$')
if (ax == axSMB):
ax.set_ylabel(x.sector.values)
if (x.sector.values == 8):
ax.set_ylabel('C2019\n8', labelpad=-10)
ax.set_xlabel('This Study')
if ax == axSMB:
ax.set_ylim([-90,190])
ax.set_xlim([-90,190])
if ax == axD:
ax.set_ylim([0,160])
ax.set_xlim([0,160])
if ax == axMB:
ax.set_ylim([-110,60])
ax.set_xlim([-110,60])
ax.set_xticks(round_axes(ax.get_xlim(), ax.get_ylim()))
ax.set_yticks(ax.get_xlim())
slope, intercept, r_value, p_value, std_err = sps.linregress(x.values, y.values)
bias = np.mean(x.values - y.values)
RMSE = np.sqrt(np.mean((x.values - y.values)**2))
if ax == axSMB and x.sector.values == 1:
s = "%.2f r$^{2}$\n%.1f bias\n%.1f RMSE"
else:
s = "%.2f\n%.1f\n%.1f"
ax.text(0, 1, s % (round(r_value**2,2), round(bias), round(RMSE)),
transform=ax.transAxes,
horizontalalignment='left',
verticalalignment='top',
fontsize=8)
ax.plot(ax.get_xlim(), ax.get_ylim(), color='k', alpha=0.25, linestyle='--')
adj(ax, ['left','bottom'])
return scatter
# graphics
for row in np.arange(8):
axSMB = fig.add_subplot(gs[row,0])
axD = fig.add_subplot(gs[row,1])
axMB = fig.add_subplot(gs[row,2])
my_plot(axSMB,
SMB.sel({'sector':sector[row]}),
promice['SMB'].sel({'sector':sector[row]}))
if (sector[row] != 14) & (sector[row] != 22):
my_plot(axD,
D.sel({'sector':sector[row]}),
promice['D'].sel({'sector':sector[row]}))
s = my_plot(axMB,
MB.sel({'sector':sector[row]}),
promice['MB'].sel({'sector':sector[row]}))
# https://stackoverflow.com/questions/17478165/
def axis_to_fig(axis):
fig = axis.figure
def transform(coord):
return fig.transFigure.inverted().transform(
axis.transAxes.transform(coord))
return transform
def add_sub_axes(axis, rect):
fig = axis.figure
left, bottom, width, height = rect
trans = axis_to_fig(axis)
figleft, figbottom = trans((left, bottom))
figwidth, figheight = trans([width,height]) - trans([0,0])
return fig.add_axes([figleft, figbottom, figwidth, figheight])
axCB = add_sub_axes(axD, [0, -1.1, 1, 0.1])
cb = plt.colorbar(s, cax=axCB, orientation='horizontal', ticks=[0,1])
cb.set_alpha(1) # https://stackoverflow.com/questions/4478725/
cb.draw_all()
axCB.set_xticklabels(promice['time'].dt.year[[0,-1]].values)
# plt.legend()
plt.savefig('fig/colgan_2019_sectors.png', transparent=False, bbox_inches='tight', dpi=150)
grass -c ./G_HIRHAM/coverage
g.list type=raster mapset=* -m
g.region region=sectors
f=DarcySensitivity_RP810_Daily2D_GL2LIN_Darcy_60m_liqCL_wh1_smb_HydroYr_2000_2001_DM_SD.nc
r.in.gdal -ol input="NetCDF:${DATADIR}/HIRHAM/daily/${f}:smb" band=1 output=SMB --o --q
r.region -c map=SMB
r.mapcalc "RCM = if(SMB) * 0 + 1"
r.mapcalc "regions = if(regions@Mouginot_2019) * 0 + 2"
r.mapcalc "regions_e = if(regions_e@Mouginot_2019) * 0 + 4"
r.null map=RCM null=0
r.null map=regions null=0
r.null map=regions_e null=0
r.mapcalc "coverage_M = RCM + regions + regions_e"
r.null map=coverage_M setnull=0
r.stats coverage_M
r.category map=coverage_M separator=":" rules=- << EOF
1:RCM (peripheral)
2:Mouginot (2019) and no RCM
3:RCM and Mouginot (2019) (peripheral; excluded)
5:RCM and Mouginot (2019) expanded (included)
7:RCM and Mouginot (2019) (included)
EOF
# | light blue | 166 | 206 | 227 | #a6cee3 |
# | dark blue | 31 | 120 | 180 | #1f78b4 |
# | light green | 178 | 223 | 138 | #b2df8a |
# | dark green | 51 | 160 | 44 | #33a02c |
# | pink | 251 | 154 | 153 | #fb9a99 |
# | red | 227 | 26 | 28 | #e31a1c |
# | pale orange | 253 | 191 | 111 | #fdbf6f |
# | orange | 255 | 127 | 0 | #ff7f00 |
cat << EOF | r.colors map=coverage_M rules=-
1 192:192:192
2 253:191:111
3 255:0:0
5 178:223:138
7 166:206:227
EOF
rm -f fig/coverage_hirham.png
d.mon start=png output=fig/coverage_hirham.png
d.rast coverage_M
d.legend -c -n raster=coverage_M at=5,15,40,30 bgcolor=white
d.grid 3
d.mon stop=png
# output: ./fig/coverage_hirham.png
r.out.gdal input=coverage_M output=./tmp/H_cover_M.tif
# display in QGIS
# export in QGIS
# edit in Inkscape
See Load and adjust Reconstructed K2015
import xarray as xr
import numpy as np
import pandas as pd
from adjust_spines import adjust_spines as adj
import scipy.stats as sps
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib import rc
rc('font', size=12)
rc('text', usetex=False)
fig = plt.figure(1, figsize=(3.26,3.2)) # w,h
fig.clf()
fig.set_tight_layout(True)
ax = fig.add_subplot(111)
kw_adj = {'marker':(4,0,90), 'markersize':7, 'markevery':[-1,1], 'mfc':'none', 'clip_on':False}
<<load_and_adjust_K2015>>
# provides vr w/ VHD_sector VHD_sector_err runoff
x = vr['runoff']
xerr = x * 0.15 # assume a 15 % uncertainty on reconstructed runoff
y = vr['VHD_sector']
yerr = vr['VHD_sector_err']
years = x.index.year.values
color = years - min(years); color = color / max(color)
y2str = [_[2:4] for _ in years.astype(str)]
cmap = cm.viridis(color) # https://stackoverflow.com/questions/51034408/how-to-make-the-color-of-one-end-of-colorbar-darker-in-matplotlib
cmap = mpl.colors.ListedColormap(cmap[:-3,:-1])
scatter = ax.scatter(x.values, y.values, alpha=0, c=color, cmap=cmap) # invisible; only feeds the colorbar
e = ax.errorbar(x.values, y.values, xerr=xerr, yerr=yerr, fmt=',',
color='k', alpha=0.5, linewidth=0.5)
for i,s in enumerate(y2str):
ax.text(x.values[i], y.values[i], s,
c = cmap(color)[i],
fontsize=10,
fontweight='bold',
horizontalalignment='center',
verticalalignment='center')
slope, intercept, r_value, p_value, std_err = sps.linregress(x.values, y.values)
bias = np.mean(x.values - y.values)
RMSE = np.sqrt(np.mean((x.values - y.values)**2))
ax.text(1.0, 0.0, "r$^2$: %.2f\nslope: %.2f"
% (round(r_value**2,2), round(slope,2)),
transform=ax.transAxes,
horizontalalignment='right',
fontsize=9)
ax.set_xlabel('Runoff', labelpad=-10)
ax.set_ylabel(r'BMB$_{\mathrm{VHD}}$', labelpad=-15)
ax.set_xticks(ax.get_xticks()[[0,-1]])
ax.set_xlim(ax.get_xticks()[[0,-1]])
ax.set_yticks(ax.get_yticks()[[0,-1]])
ax.set_ylim(ax.get_yticks()[[0,-1]])
adj(ax, ['left','bottom'])
# https://stackoverflow.com/questions/17478165/
def axis_to_fig(axis):
fig = axis.figure
def transform(coord):
return fig.transFigure.inverted().transform(
axis.transAxes.transform(coord))
return transform
def add_sub_axes(axis, rect):
fig = axis.figure
left, bottom, width, height = rect
trans = axis_to_fig(axis)
figleft, figbottom = trans((left, bottom))
figwidth, figheight = trans([width,height]) - trans([0,0])
return fig.add_axes([figleft, figbottom, figwidth, figheight])
# axCB = add_sub_axes(axMB, [1.1, 0.0, 0.075, 1])
axCB = add_sub_axes(ax, [0.0, -0.40, 1, 0.075])
cb = plt.colorbar(scatter, cax=axCB, orientation='horizontal', ticks=[0,1])
cb.set_alpha(1) # https://stackoverflow.com/questions/4478725/
cb.draw_all()
cb.ax.xaxis.set_ticks_position('top')
cb.ax.xaxis.set_label_position('top')
cb.set_label('Time [year]', labelpad=-10)
axCB.set_xticklabels(x.index.year[[0,-1]].values)
plt.savefig('fig/reconstructed_runoff.png', transparent=False, bbox_inches='tight', dpi=300)
| Task | KDM | XF | PLL | MS | KKK | NBK | BN | MRvdB | AS | WC | JEB | SBS | MDK | APA | SBA | RSF |
|------+-----+----+-----+----+-----+-----+----+-------+----+----+-----+-----+-----+-----+-----+-----|
| Conceptualization | 2 | 2 | 2 | | | | | | | | | | | | | |
| Data curation | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 2 | | | | | | |
| Implementation | 3 | 2 | 2 | 1 | 1 | | | | | | | | | | | |
| Funding | 3 | 3 | 2 | 3 | | | | | | | | | | | | |
| Methods: SMB | 3 | 3 | 3 | 1 | | | | | | | | | | | | |
| Methods: D | 3 | 1 | 1 | 1 | 1 | | | | | | | | | | | |
| Methods: D forecast | 2 | 2 | | | | | | | | | | | | | | |
| Methods: BMB | 1 | 3 | | | | | | | | | | | | | | |
| Methods: Reconstructed | 2 | 2 | | | | | | | | | | | | | | |
| Validation | 3 | | | | | | | | | | | | | | | |
| Validation: GRACE | 3 | | | | | | | | | | | | | | | |
| Validation: VC | 1 | 3 | | | | | | | | | | | | | | |
| Project administration | 3 | 1 | 1 | 1 | | | | | | | | | | | | |
| Resources | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 2 | | | | | | |
| Software | 3 | 3 | 3 | 3 | 2 | | | | | | | | | | | |
| Visualization | 3 | | | | | | | | | | | | | | | |
| Writing (First draft) | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 1 | 1 | | | |
| Writing (Editing) | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | | | |
|------+-----+----+-----+----+-----+-----+----+-------+----+----+-----+-----+-----+-----+-----+-----|
| #ERROR | 34 | 19 | 19 | 11 | 14 | 14 | 14 | 12 | 13 | 10 | 9 | 8 | 7 | 8 | 5 | 10 |
import seaborn as sns
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap
from matplotlib import rc
rc('font', size=8)
rc('text', usetex=False)
cmap = sns.color_palette("Blues", 3)
plt.close(1)
fig = plt.figure(1, figsize=(4.0,3.8)) # w,h
fig.clf()
fig.set_tight_layout(True)
ax = fig.add_subplot(111)
df = pd.read_csv('./tmp/credit.csv', index_col=0)
df = df.replace(1,3).replace(2,3) # collapse contribution levels 1 and 2 up to 3, so the heatmap shows contributed vs. not
sns.set()
im = sns.heatmap(df, vmin=1, vmax=3, cmap=cmap, fmt='d', ax=ax, xticklabels=True, cbar=False)
ax.xaxis.set_ticks_position('top')
im.set_xticklabels(im.get_xmajorticklabels(), fontsize=8)
im.set_yticklabels(im.get_ymajorticklabels(), fontsize=10)
plt.xticks(rotation=-90)
# # Manually specify colorbar labelling after it's been generated
# colorbar = ax.collections[0].colorbar
# colorbar.set_ticks([1.33, 2, 2.66])
# colorbar.set_ticklabels(['1', '2', '3'])
# colorbar.set_label('Contribution')
ax.set_ylabel('What')
ax.set_xlabel('Who')
ax.xaxis.set_label_position('top')
plt.savefig('./fig/credit.png', transparent=False, bbox_inches='tight', dpi=150)
import numpy as np
import xarray as xr
import datetime
import pandas as pd
# http://matplotlib.org/examples/pylab_examples/spine_placement_demo.html
def adjust_spines(ax, spines, offset=10):
    # move the requested spines outward by `offset` points; hide the rest
    for loc, spine in ax.spines.items():
        if loc in spines:
            spine.set_position(('outward', offset))
            # spine.set_smart_bounds(True)
        else:
            spine.set_color('none')  # don't draw this spine
    # turn off ticks where there is no spine
    if 'left' in spines:
        ax.yaxis.set_tick_params(length=5)
        ax.yaxis.set_tick_params(direction='in', which='major')
        ax.yaxis.set_tick_params(direction='in', which='minor')
        ax.yaxis.set_ticks_position('left')
        ax.yaxis.set_label_position('left')
    elif 'right' in spines:
        ax.yaxis.set_tick_params(length=5)
        ax.yaxis.set_tick_params(direction='in', which='major')
        ax.yaxis.set_tick_params(direction='in', which='minor')
        ax.yaxis.set_ticks_position('right')
        ax.yaxis.set_label_position('right')
    else:
        ax.yaxis.set_ticks([])  # no yaxis ticks
    if 'bottom' in spines:
        ax.xaxis.set_ticks_position('bottom')
        ax.xaxis.set_tick_params(length=5)
        ax.xaxis.set_tick_params(direction='in', which='major')
        ax.xaxis.set_tick_params(direction='in', which='minor')
        ax.xaxis.set_label_position('bottom')
    elif 'top' in spines:
        ax.xaxis.set_ticks_position('top')
        ax.xaxis.set_tick_params(length=5)
        ax.xaxis.set_tick_params(direction='in', which='major')
        ax.xaxis.set_tick_params(direction='in', which='minor')
        ax.xaxis.set_label_position('top')
    else:
        ax.xaxis.set_ticks([])  # no xaxis ticks
if __name__ == "__main__":
    import numpy as np
    import matplotlib.pyplot as plt
    x = np.random.random(100)
    fig = plt.figure(100)
    fig.clf()
    ax = fig.add_axes([0.1, 0.1, 0.8, 0.8])
    ax.plot(x)
    adjust_spines(ax, ["left", "bottom"])
def round_axes(x, y=None):
    """Round the combined range of x (and optionally y) outward to the nearest 10."""
    x = np.append(x, y) if y is not None else np.array(x)
    mmin = np.floor(np.min(x)/10) * 10
    mmax = np.ceil(np.max(x)/10) * 10
    return (mmin, mmax)
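A quick check of this helper with arbitrary limits: a range spanning -87 to 213 widens outward to the nearest multiples of 10.
print(round_axes([-87, 213])) # -> (-90.0, 220.0)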
set -o nounset
set -o pipefail
# set -x
# set -o errexit
### Uncomment the line above for the initial run. When rerunning and
### counting on GRASS failing with overwrite errors (a speed increase),
### it must stay commented.
red='\033[0;31m'; orange='\033[0;33m'; green='\033[0;32m'; nc='\033[0m' # No Color
log_info() { echo -e "${green}[$(date --iso-8601=seconds)] [INFO] ${@}${nc}"; }
log_warn() { echo -e "${orange}[$(date --iso-8601=seconds)] [WARN] ${@}${nc}"; }
log_err() { echo -e "${red}[$(date --iso-8601=seconds)] [ERR] ${@}${nc}" >&2; }
# trap ctrl_c INT # trap ctrl-c and call ctrl_c()
# ctrl_c() { log_err "CTRL-C. Cleaning up"; }
debug() { if [[ "${debug:-0}" == "1" ]]; then log_warn "debug:"; echo "$@"; fi; } # enable with: debug=1 ./script.sh
https://grass.osgeo.org/grass74/manuals/variables.html
| GRASS_VERBOSE | behavior |
|---------------+----------|
| -1 | complete silence (also errors and warnings are discarded) |
| 0 | only errors and warnings are printed |
| 1 | progress and important messages are printed (percent complete) |
| 2 | all module messages are printed |
| 3 | additional verbose messages are printed |
export GRASS_VERBOSE=3
# export GRASS_MESSAGE_FORMAT=silent
if [ -z ${DATADIR+x} ]; then
echo "DATADIR environment varible is unset."
echo "Fix with: \"export DATADIR=/path/to/data\""
exit 255
fi
# set -x # print commands to STDOUT before running them
Then run "make update".
Manual sync:
rsync -ahiu --progress glacio.clean:data/HIRHAM/daily/ ~/data/HIRHAM/daily/
Run daily via cron: 0 5 * * * ~/bin/HIRHAM_daily.sh
#!/bin/bash
cd ~/data/HIRHAM/daily/
# wget -a ~/data/HIRHAM/wget.log --user MB --password <PW> -np -nc ftp.dmi.dk:KDM_SMB_files/*
rm -f ../wget.log
wget -a ../wget.log --user MB --password <PW> -np -nc ftp.dmi.dk:KDM_SMB_files/*
Run daily via cron: 0 5 * * * ~/bin/MAR_daily.sh
#!/bin/bash
mkdir -p ~/data/MAR/3.12
mkdir -p ~/data/MAR/daily
cd ~/data/MAR/daily
# TODO: This script fails starting in 2030.
Y1=$(date +%Y)
Y0=$(( $Y1 - 1 ))
M=$(date +%m)
rm -f wget.log
if [[ $M == '01' ]]; then wget -a wget.log -np -nc ftp://ftp.climato.be/fettweis/tmp/ken/MAR-${Y0}*.nc; fi
wget -a wget.log -np -nc ftp://ftp.climato.be/fettweis/tmp/ken/MAR-${Y1}*.nc
mv MAR-????.nc ../3.12/
Goal: Upload and replace files for the ice discharge dataset. This requires downloading the existing metadata to get the current DOI, then uploading to that DOI (after which a new one is issued by the Dataverse).
- Email: [[mu4e:msgid:1c48e4d2-decc-4747-874a-4f22e824b8bco@googlegroups.com][Re: [Dataverse-Users] Re: Updating a dataset but keeping filenames.]]
- Install pyDataverse dev edition:
pip install git+https://github.com/aussda/pyDataverse
No - that doesn't seem to grab the latest version. Instead, install a specific Git hash from yesterday (https://adamj.eu/tech/2019/03/11/pip-install-from-a-git-repository/):
python -m pip install --upgrade git+https://github.com/aussda/pyDataverse.git@3b040ff23b665ec2650bebcf4bd5478de6881af0
from pyDataverse.api import NativeApi
import os
import json
import subprocess
hash = subprocess.check_output(["git", "describe", "--always"]).strip().decode('utf-8')
# hash = subprocess.check_output(["git", "describe", "--always", "--dirty='*'"]).strip().decode('utf-8')
assert("*" not in hash)
# establish connection
base_url = 'https://dataverse01.geus.dk/'
import secret
api_token = secret.dataverse_api_token
api = NativeApi(base_url, api_token)
# get dataverse metadata
identifier = 'doi:10.22008/FK2/OHI23Z'
resp = api.get_dataset(identifier)
files = resp.json()['data']['latestVersion']['files']
for f in files:
persistentId = f['dataFile']['persistentId']
description = f['dataFile']['description']
filename = f['dataFile']['filename']
fileId = f['dataFile']['id']
assert(os.path.isfile("./TMB/"+filename))
description = description.split(".")[0] + ". "
description = description + "Git hash: " + hash
if 'content' in locals(): del(content) # don't let a stale content type leak between files
if filename.endswith(".nc"): content = "application/x-netcdf"
if filename.endswith("csv"): content = "text/csv"
if filename.endswith("txt"): content = "text/plain"
json_dict={"description":description,
"directoryLabel":".",
"forceReplace":True,
"filename":filename,
"label":filename,
"contentType":content}
json_str = json.dumps(json_dict)
# print(f)
## replace
d = api.replace_datafile(persistentId, "./TMB/"+filename, json_str)
if d.json()["status"] == "ERROR":
print(d.content)
print("\n")
continue
# need to update filenames after uploading because of DataVerse bug
# https://github.com/IQSS/dataverse/issues/7223
file_id = d.json()['data']['files'][0]['dataFile']['id']
d2 = api.update_datafile_metadata(file_id, json_str=json_str, is_filepid=False)
# print(d2)
resp = api.publish_dataset(identifier, "major")
curl -H X-Dataverse-key:$API_TOKEN -X POST -F "file=@$FILENAME" -F 'jsonData={"description":"My description.","directoryLabel":"data/subdir1","categories":["Data"], "restrict":"false"}' "$SERVER_URL/api/datasets/:persistentId/add?persistentId=$PERSISTENT_ID"
import xarray as xr
MB = xr.open_dataset('./TMB/MB_region.nc')\
.resample({'time':'1D'})\
.interpolate()\
.resample({'time':'YS'})\
.sum()\
.sel({'time':slice('1840','2020')})
df_yr = MB[['MB','MB_err','SMB','SMB_err','D','D_err','BMB','BMB_err']].to_dataframe()
print("ALL:")
print(df_yr.describe())
print("1986:")
print(df_yr.loc['1986':].describe())
# "global" reported single-value uncertainty comes from post-1986 MB error: 86 Gt yr-1
df_yr.describe()
| | MB | MB_err | SMB | SMB_err | D | D_err | BMB | BMB_err |
|-------+----------+---------+---------+---------+---------+---------+---------+---------|
| count | 181 | 181 | 181 | 181 | 181 | 181 | 181 | 181 |
| mean | -77.8599 | 125.921 | 370.343 | 91.3573 | 424.315 | 74.6828 | 23.8878 | 12.9145 |
| std | 117.96 | 20.3674 | 99.6646 | 31.2918 | 30.9147 | 15.8567 | 1.46252 | 3.6951 |
| min | -427.64 | 73.777 | 86.7881 | 7.81093 | 358.131 | 39.3987 | 19.0767 | 5.1754 |
| 25% | -160.176 | 123.202 | 304.03 | 94.7674 | 401.443 | 75.3806 | 22.8978 | 14.0381 |
| 50% | -63.471 | 133.669 | 379.461 | 104.687 | 422.618 | 80.9589 | 23.8786 | 14.5193 |
| 75% | 5.00208 | 138.391 | 437.003 | 109.865 | 442.152 | 83.9221 | 24.7245 | 14.9267 |
| max | 142.889 | 152.914 | 583.959 | 123.155 | 495.131 | 94.7579 | 29.3559 | 16.2937 |
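The "global" 86 Gt yr-1 uncertainty noted in the comment above can be recovered directly from the same dataframe; a minimal check, reusing df_yr as built above:
print(round(df_yr.loc['1986':, 'MB_err'].mean())) # expect ~86 Gt yr-1, the post-1986 mean MB error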
Years with min & max MB
df_yr = df_yr.loc['1986':]
print(df_yr.iloc[df_yr['MB'].argmin()])
print("")
print(df_yr.iloc[df_yr['MB'].argmax()])
print("")
import xarray as xr
MB = xr.open_dataset('./TMB/MB_region.nc')\
.resample({'time':'1D'})\
.interpolate()\
.resample({'time':'AS'})\
.sum()\
.resample({'time':'10AS'})\
.mean()
MB['MB_std'] = xr.open_dataset('./TMB/MB_region.nc')\
.resample({'time':'1D'})\
.interpolate()\
.resample({'time':'AS'})\
.sum()\
.resample({'time':'10AS'})\
.std()['MB']
df_yr = MB[['MB','MB_err','MB_std','SMB','SMB_err','D','D_err','BMB','BMB_err']].to_dataframe().iloc[:-1]
print(df_yr[['MB','MB_std']])
df_yr.describe()
| | MB | MB_err | MB_std | SMB | SMB_err | D | D_err | BMB | BMB_err |
|-------+----------+---------+---------+---------+---------+---------+---------+---------+---------|
| count | 18 | 18 | 18 | 18 | 18 | 18 | 18 | 18 | 18 |
| mean | -77.3385 | 126.129 | 81.3971 | 370.496 | 91.6934 | 423.946 | 74.8369 | 23.8879 | 12.9551 |
| std | 83.213 | 19.6228 | 27.2256 | 57.0234 | 30.5371 | 29.7031 | 15.5009 | 1.11574 | 3.59007 |
| min | -247.874 | 81.4862 | 42.5923 | 263.7 | 23.733 | 372.877 | 40.192 | 21.6532 | 5.36642 |
| 25% | -131.287 | 124.799 | 67.2294 | 318.8 | 96.6988 | 406.228 | 75.6907 | 23.1526 | 13.9899 |
| 50% | -47.7066 | 134.098 | 75.9084 | 394.896 | 105.014 | 418.442 | 81.4557 | 23.7927 | 14.5241 |
| 75% | -9.20288 | 138.205 | 95.9791 | 413.98 | 109.084 | 439.636 | 83.5231 | 24.6875 | 14.9481 |
| max | 52.8625 | 146.471 | 129.469 | 447.832 | 116.051 | 487.076 | 92.0417 | 25.9794 | 15.6157 |
Decades with min & max SMB
print(df_yr.iloc[df_yr['SMB'].argmin()])
print("")
print(df_yr.iloc[df_yr['SMB'].argmax()])
print("")
Decades with min and max D
print(df_yr.iloc[df_yr['D'].argmin()])
print("")
print(df_yr.iloc[df_yr['D'].argmax()])
print("")
Decades with min and max BMB
print(df_yr.iloc[df_yr['BMB'].argmin()])
print("")
print(df_yr.iloc[df_yr['BMB'].argmax()])
print("")
FROM ubuntu:20.04
LABEL authors="Signe Hillerup Larsen"
LABEL maintainer="shl@geus.dk"
# system environment
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get -y update && apt-get install -y --no-install-recommends --no-install-suggests \
bash \
cdo \
datamash \
gawk \
gdal-bin \
grass-gui \
nco \
netcdf-bin \
parallel \
proj-bin \
sed \
python-is-python3 \
wget \
git \
&& apt-get autoremove -y \
&& apt-get clean -y \
&& rm -rf /var/lib/apt/lists/*
RUN echo LANG="en_US.UTF-8" > /etc/default/locale
ENV LANGUAGE en_US.UTF-8
ENV LANG C
ENV LC_ALL C
ENV LC_CTYPE C
ENV SHELL /bin/bash
# create a user
RUN useradd --create-home user && chmod a+rwx /home/user
ENV HOME /home/user
WORKDIR /home/user
RUN mkdir -p /data
ENV DATADIR /data
# Install grass addons
RUN grass -e -c EPSG:3413 $HOME/G_tmp \
 && grass $HOME/G_tmp/PERMANENT --exec g.extension r.stream.basins \
    url=https://trac.osgeo.org/grass/browser/grass-addons/grass7/raster/r.stream.basins \
 && grass $HOME/G_tmp/PERMANENT --exec g.extension r.stream.distance \
    url=https://trac.osgeo.org/grass/browser/grass-addons/grass7/raster/r.stream.distance
RUN rm -r $HOME/G_tmp
ENV PATH=/home/user/.local/bin:$PATH
ENV CONDA_DIR /opt/conda
RUN wget --no-check-certificate \
https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh \
&& bash Miniconda3-latest-Linux-x86_64.sh -b -p /opt/conda \
&& rm -f Miniconda3-latest-Linux-x86_64.sh
ENV PATH=$CONDA_DIR/bin:$PATH
RUN conda --version
RUN conda install -c conda-forge \
xarray \
pandas \
matplotlib \
jupyter \
tabulate \
pint \
bottleneck \
uncertainties \
scipy \
cfchecker \
geopandas
# switch the user
USER user
# docker build -f Dockerfile_grass -t ice_discharge_grass .
cd docker
docker build -t mass_balance .
docker run -it mass_balance # run it
docker run -it mass_balance python -c 'import pandas as pd; print(pd)'
docker run -it --user "$(id -u)":"$(id -g)" --mount type=bind,src="${DATADIR}",dst=/data --mount type=bind,src="$(pwd)",dst=/home/user --env PARALLEL="--delay 0.1 -j -1" mass_balance
# docker tag local-image:tagname new-repo:tagname
docker tag mass_balance:latest hillerup/mass_balance:latest
docker push hillerup/mass_balance:latest
It is more reproducible to use the Docker image hillerup/mass_balance:latest, but for record-keeping's sake, here is the Python environment.
docker run hillerup/mass_balance conda env export -n base | tee environment.yml
import matplotlib
matplotlib.use('Agg') # Must be before importing matplotlib.pyplot or pylab!
import pandas as pd
import numpy as np
import datetime
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import matplotlib.ticker as ticker
from adjust_spines import adjust_spines as adj
from matplotlib import rc
rc('font', size=8)
rc('text', usetex=False)
df = pd.read_csv('./TMB/MB_SMB_D_BMB.csv', index_col=0, parse_dates=True)
df = df.resample('1D').ffill()
df = df[df.index.dayofyear <= 365] # drop leap years for simplicity
df = df['1986-09-01':] # hydro years
# hydro year dates but dropping leap years
t = pd.date_range(start='1987-01-01', freq='D', periods=df.index.size+100) # hydro year index
t = t[t.dayofyear <= 365][0:df.index.size]
df['t_hydro'] = t
df['hydro_year'] = df.set_index('t_hydro').index.year
df['hydro_doy'] = df.set_index('t_hydro').index.dayofyear
nyear = int(np.ceil(df.index.size/365))
df['t_repeat'] = (pd.date_range(start='2000-01-01', freq='D', periods=365).to_list() * nyear) [0:df.index.size]
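# t_repeat: a dummy 365-day calendar (year 2000), repeated every hydro
# year, so all years share a common x axis when overplotted below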
plt.close(1)
fig = plt.figure(1, figsize=(3.5,2.0)) # w,h
fig.clf()
fig.set_tight_layout(True)
ax = fig.add_subplot(111)
kw_plot = {'drawstyle':'steps', 'rot':90, 'color':'k', 'linewidth':0.1}
this_year = datetime.datetime.now().year
last_day = df.index[-1].strftime("%b %-d")
for hyr in df['hydro_year'].unique():
this = df[df['hydro_year'] == hyr]
this = this.set_index('t_repeat')
this['MB'].cumsum().plot(ax=ax, label='_no', **kw_plot)
if hyr in [1992,1996,2010,2011, 2012, 2019, 2020, 2021]:
ax.text(this.index[-1], this['MB'].cumsum().values[-1], str(hyr),
fontsize=5,
verticalalignment='center')
mean = df[(df['hydro_year'] >= 1991) & (df['hydro_year'] <= 2020)]
mean = mean.groupby(mean['hydro_doy']).mean()
mean.index = df['t_repeat'][0:365]
mean['MB'].cumsum().plot(ax=ax, drawstyle='steps', color='k', linewidth=2, label='1991 through 2020 mean')
# highlight the current hydro year ('this' is left over from the last loop iteration) in red
kw_plot['color'] = 'r'
kw_plot['linewidth'] = 2
# this['MB'].cumsum().plot(ax=ax, label='2022 (through '+last_day+')', **kw_plot)
if this.index.size > 8:
this['MB'].cumsum().iloc[:-10].plot(ax=ax, label='Through '+last_day, **kw_plot)
kw_plot['alpha'] = 0.8
this['MB'].cumsum().iloc[-7:].plot(ax=ax, label='_no', **kw_plot)
else:
this['MB'].cumsum().plot(ax=ax, label='Through '+last_day, **kw_plot)
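# Spread of the 1991-2020 reference period: convert each hydro year's
# daily MB to a cumulative series, then take the per-day-of-year
# standard deviation across years for the shaded envelope below.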
std = df[(df['hydro_year'] >= 1991) & (df['hydro_year'] <= 2020)]
for hy in std['hydro_year'].unique():
idx = (std['hydro_year'] == hy)
std = std.where(~idx, other=std[idx].cumsum())
std = std.groupby(std['hydro_doy']).std()
std.index = df['t_repeat'][0:365]
ax.fill_between(std.index,
mean['MB'].cumsum()+std['MB'],
mean['MB'].cumsum()-std['MB'],
color='b', linewidth=2, label='_no', alpha=0.1)
adj(ax, ['bottom','left'])
ax.set_ylim([-500,300])
ax.set_yticks([-500,-400,-300,-200,-100,0,100,200,300])
ax.set_ylabel('Hydrologic year cumulative\nmass change [Gt]')
## https://stackoverflow.com/questions/17158382/
ax.xaxis.set_major_formatter(ticker.NullFormatter())
ax.xaxis.set_minor_locator(ticker.FixedLocator(ax.get_xticks()+16))
months = ['Sep','Oct','Nov','Dec','Jan','Feb','Mar','Apr','May','Jun','Jul','Aug']
ax.xaxis.set_minor_formatter(ticker.FixedFormatter(months))
ax.set_xlabel('')
plt.setp(ax.xaxis.get_minorticklabels(), rotation=90)
ax.tick_params(which='minor', length=0)
plt.xticks(rotation=90)
plt.legend(fontsize=7)
plt.savefig('./twitfig.png', dpi=150, bbox_inches='tight')
import matplotlib
matplotlib.use('Agg') # Must be before importing matplotlib.pyplot or pylab!
import xarray as xr
import pandas as pd
import numpy as np
import datetime
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import matplotlib.ticker as ticker
from adjust_spines import adjust_spines as adj
from matplotlib import rc
rc('font', size=8)
rc('text', usetex=False)
with xr.open_dataset('/home/shl/data/promice/TMB/MB_sector.nc') as ds:
#print(ds.sel({'sector':71})[['MB','SMB','D','BMB']].to_dataframe().drop(columns='sector').tail(8))
df = ds.sel({'sector':71})[['MB_ROI','SMB_ROI','D_ROI','BMB_ROI']].to_dataframe().drop(columns='sector').copy()
df = df.resample('1D').ffill()
df = df[df.index.dayofyear <= 365] # drop leap years for simplicity
df = df['1986-09-01':] # hydro years
# hydro year dates but dropping leap years
t = pd.date_range(start='1987-01-01', freq='D', periods=df.index.size+100) # hydro year index
t = t[t.dayofyear <= 365][0:df.index.size]
df['t_hydro'] = t
df['hydro_year'] = df.set_index('t_hydro').index.year
df['hydro_doy'] = df.set_index('t_hydro').index.dayofyear
nyear = int(np.ceil(df.index.size/365))
df['t_repeat'] = (pd.date_range(start='2000-01-01', freq='D', periods=365).to_list() * nyear) [0:df.index.size]
plt.close(1)
fig = plt.figure(1, figsize=(3.5,2.0)) # w,h
fig.clf()
fig.set_tight_layout(True)
ax = fig.add_subplot(111)
kw_plot = {'drawstyle':'steps', 'rot':90, 'color':'k', 'linewidth':0.1}
this_year = datetime.datetime.now().year
last_day = df.index[-1].strftime("%b %-d")
for hyr in df['hydro_year'].unique():
this = df[df['hydro_year'] == hyr]
this = this.set_index('t_repeat')
this['MB_ROI'].cumsum().plot(ax=ax, label='_no', **kw_plot)
if hyr in [1992,1996,2000,2003,2012,2015,2018,2021]:
ax.text(this.index[-1], this['MB_ROI'].cumsum().values[-1], str(hyr),
fontsize=5,
verticalalignment='center')
mean = df[(df['hydro_year'] >= 1991) & (df['hydro_year'] <= 2021)]
mean = mean.groupby(mean['hydro_doy']).mean()
mean.index = df['t_repeat'][0:365]
mean['MB_ROI'].cumsum().plot(ax=ax, drawstyle='steps', color='k', linewidth=2, label='1991 through 2021 mean')
kw_plot['color'] = 'r'
kw_plot['linewidth'] = 2
# this['MB'].cumsum().plot(ax=ax, label='2022 (through '+last_day+')', **kw_plot)
if this.index.size > 8:
this['MB_ROI'].cumsum().iloc[:-10].plot(ax=ax, label='Through '+last_day, **kw_plot)
kw_plot['alpha'] = 0.8
this['MB_ROI'].cumsum().iloc[-7:].plot(ax=ax, label='_no', **kw_plot)
else:
this['MB_ROI'].cumsum().plot(ax=ax, label='Through '+last_day, **kw_plot)
std = df[(df['hydro_year'] >= 1991) & (df['hydro_year'] <= 2021)]
for hy in std['hydro_year'].unique():
idx = (std['hydro_year'] == hy)
std = std.where(~idx, other=std[idx].cumsum())
std = std.groupby(std['hydro_doy']).std()
std.index = df['t_repeat'][0:365]
ax.fill_between(std.index,
mean['MB_ROI'].cumsum()+std['MB_ROI'],
mean['MB_ROI'].cumsum()-std['MB_ROI'],
color='b', linewidth=2, label='_no', alpha=0.1)
adj(ax, ['bottom','left'])
ax.set_ylim([-40,20])
ax.set_yticks([-40,-30,-20,-10,0,10,20])
ax.set_ylabel('Hydrologic year cumulative\nmass change [Gt]')
## https://stackoverflow.com/questions/17158382/
ax.xaxis.set_major_formatter(ticker.NullFormatter())
ax.xaxis.set_minor_locator(ticker.FixedLocator(ax.get_xticks()+16))
months = ['Sep','Oct','Nov','Dec','Jan','Feb','Mar','Apr','May','Jun','Jul','Aug']
ax.xaxis.set_minor_formatter(ticker.FixedFormatter(months))
ax.set_xlabel('')
plt.setp(ax.xaxis.get_minorticklabels(), rotation=90)
ax.tick_params(which='minor', length=0)
plt.xticks(rotation=90)
plt.legend(fontsize=7)
plt.savefig('./Sermeq_Kujalleq_twitfig.png', dpi=150, bbox_inches='tight')
import matplotlib
matplotlib.use('Agg') # Must be before importing matplotlib.pyplot or pylab!
import xarray as xr
import pandas as pd
import numpy as np
import datetime
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import matplotlib.ticker as ticker
from adjust_spines import adjust_spines as adj
from matplotlib import rc
rc('font', size=8)
rc('text', usetex=False)
with xr.open_dataset('/home/shl/data/promice/TMB/MB_sector.nc') as ds:
#print(ds.sel({'sector':71})[['MB','SMB','D','BMB']].to_dataframe().drop(columns='sector').tail(8))
df = ds.sel({'sector':71})[['MB_ROI','SMB_ROI','D_ROI','BMB_ROI']].to_dataframe().drop(columns='sector').copy()
df = df.resample('1D').ffill()
df = df[df.index.dayofyear <= 365] # drop leap years for simplicity
df = df['1986-09-01':] # hydro years
# hydro year dates but dropping leap years
t = pd.date_range(start='1987-01-01', freq='D', periods=df.index.size+100) # hydro year index
t = t[t.dayofyear <= 365][0:df.index.size]
df['t_hydro'] = t
df = df['1999-09-01':]
df['hydro_year'] = df.set_index('t_hydro').index.year
df['hydro_doy'] = df.set_index('t_hydro').index.dayofyear
nyear = int(np.ceil(df.index.size/365))
df['t_repeat'] = (pd.date_range(start='2000-01-01', freq='D', periods=365).to_list() * nyear) [0:df.index.size]
plt.close(1)
fig = plt.figure(1, figsize=(3.5,2.0) ) # w,h
fig.clf()
fig.set_tight_layout(True)
ax = fig.add_subplot(111)
kw_plot = {'drawstyle':'steps', 'rot':90, 'color':'k', 'linewidth':0.1}
this_year = datetime.datetime.now().year
last_day = df.index[-1].strftime("%b %-d")
for hyr in df['hydro_year'].unique():
this = df[df['hydro_year'] == hyr]
this = this.set_index('t_repeat')
this['MB_ROI'].cumsum().plot(ax=ax, label='_no', **kw_plot)
if hyr in [2000, 2003, 2012, 2015, 2018, 2021]: # [2010, 2011, 2012, 2021]
ax.text(this.index[-1], this['MB_ROI'].cumsum().values[-1], str(hyr),
fontsize=5,
verticalalignment='center')
mean = df[(df['hydro_year'] >= 2000) & (df['hydro_year'] <= 2021)]
mean = mean.groupby(mean['hydro_doy']).mean()
mean.index = df['t_repeat'][0:365]
mean['MB_ROI'].cumsum().plot(ax=ax, drawstyle='steps', color='k', linewidth=2, label='2000 through 2021 mean')
kw_plot['color'] = 'r'
kw_plot['linewidth'] = 2
# this['MB'].cumsum().plot(ax=ax, label='2022 (through '+last_day+')', **kw_plot)
if this.index.size > 8:
this['MB_ROI'].cumsum().iloc[:-10].plot(ax=ax, label='Through '+last_day, **kw_plot)
kw_plot['alpha'] = 0.8
this['MB_ROI'].cumsum().iloc[-7:].plot(ax=ax, label='_no', **kw_plot)
else:
this['MB_ROI'].cumsum().plot(ax=ax, label='Through '+last_day, **kw_plot)
std = df[(df['hydro_year'] >= 2000) & (df['hydro_year'] <= 2021)]
for hy in std['hydro_year'].unique():
idx = (std['hydro_year'] == hy)
std = std.where(~idx, other=std[idx].cumsum())
std = std.groupby(std['hydro_doy']).std()
std.index = df['t_repeat'][0:365]
ax.fill_between(std.index,
mean['MB_ROI'].cumsum()+std['MB_ROI'],
mean['MB_ROI'].cumsum()-std['MB_ROI'],
color='b', linewidth=2, label='_no', alpha=0.1)
adj(ax, ['bottom','left'])
ax.set_ylim([-40,20])
ax.set_yticks([-40,-30,-20,-10,0,10,20])
ax.set_ylabel('Hydrologic year cumulative\nmass change [Gt]')
## https://stackoverflow.com/questions/17158382/
ax.xaxis.set_major_formatter(ticker.NullFormatter())
ax.xaxis.set_minor_locator(ticker.FixedLocator(ax.get_xticks()+16))
months = ['Sep','Oct','Nov','Dec','Jan','Feb','Mar','Apr','May','Jun','Jul','Aug']
ax.xaxis.set_minor_formatter(ticker.FixedFormatter(months))
ax.set_xlabel('')
plt.setp(ax.xaxis.get_minorticklabels(), rotation=90)
ax.tick_params(which='minor', length=0)
plt.xticks(rotation=90)
plt.legend(fontsize=7)
plt.savefig('./Sermeq_Kujalleq_2000_present_twitfig.png', dpi=150, bbox_inches='tight')
- Graph up
- 📈 (mass loss, SLR up)
- Graph down
- 📉 (mass gain, SLR down)
- Ocean wave
- 🌊
- Ice cube
- 🧊
- Water drop
- 💧
- Greenland flag
- 🇬🇱
import tweepy
import pandas as pd
import numpy as np
import datetime
df = pd.read_csv('./TMB/MB_SMB_D_BMB.csv', index_col=0, parse_dates=True, usecols=(0,1,2))
now = datetime.datetime.now()
this_year = now.year
today = now.isoformat()[0:10]
tomorrow = str(df.index[-7])[0:10]
next_week = str(df.index[-1])[0:10]
assert(now.isoformat()[0:10] == str(df.index[-8])[0:10])
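# Unit conversion note: the global ocean area is ~3.618e8 km^2, so 361.8
# km^3 (= 361.8 Gt) of water spread over it raises sea level by ~1 mm;
# hence the 361.8 Gt/mm factor used below.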
def SLR_daterange(t0, t1):
d = df[(df.index >= t0) & (df.index <= t1)].sum()
d['mm'] = -1*d['MB'] / 361.8 # Gt to mm
d['mm_err'] = -1*d['MB_err'] / 361.8 # Gt to mm
d[['mm_err','MB_err']] = np.abs(d[['mm_err','MB_err']])
return d
far = SLR_daterange('1990-01-01', today)
ytd = SLR_daterange(str(this_year)+'-01-01', today)
next7 = SLR_daterange(tomorrow, next_week)
def my_fmt(n, dec=2):
if dec == 0:
return str(int(round(n)))
return str(round(n,dec))
tweet_str = ""
tweet_str += f"Sea level rise (#SLR: 📈=🌊+🧊💧) from 🇬🇱Greenland"
tweet_str += f"\n\nSince 1990 (IPCC FAR): ~{my_fmt(far['mm'])} mm"
tweet_str += f" ({my_fmt(far['MB'], dec=0)} Gt)"
tweet_str += f"\n\n{this_year} → today: "
if ytd['mm'] > 0.01:
tweet_str += f"~{my_fmt(ytd['mm'])} mm ("
tweet_str += f"{my_fmt(ytd['MB'], dec=0)} Gt"
if ytd['mm'] > 0.01: tweet_str += f")"
tweet_str += f"\n\nNext 7 days ({tomorrow} →️{next_week})"
if next7['mm'] < 0: # mass gain
tweet_str += f"📉: mass gain +{my_fmt(next7['MB'], dec=1)} ±{my_fmt(next7['MB_err'], dec=1)} Gt"
if next7['mm'] > 0: # mass loss
tweet_str += f"📈: {my_fmt(next7['mm'])} ±{my_fmt(next7['mm_err'])} mm ({my_fmt(next7['MB'], dec=1)} ±{my_fmt(next7['MB_err'], dec=1)} Gt)"
tweet_str += '\n'
tweet_str += '\nhttps://doi.org/10.5194/essd-13-5001-2021'
tweet_str += '\nhttps://doi.org/10.22008/FK2/OHI23Z'
tweet_str += '\nhttps://github.com/GEUS-Glaciology-and-Climate/mass_balance'
# print(len(tweet_str))
print(tweet_str)
import secret
# # Authenticate to Twitter
auth = tweepy.OAuthHandler(secret.API_KEY, secret.API_SECRET)
# auth.get_authorization_url() # go to URL, get code
# auth.get_access_token("<code>")
# returns:
# ACCESS_TOKEN = 'string'
# ACCESS_TOKEN_SECRET = 'string'
auth.set_access_token(secret.ACCESS_TOKEN, secret.ACCESS_TOKEN_SECRET)
# Create API object
api = tweepy.API(auth)
# https://stackoverflow.com/questions/43490332/sending-multiple-medias-with-tweepy
# Upload an image and get ID
media = api.media_upload('twitfig.png')
# Create a tweet
api.update_status(tweet_str, media_ids=[media.media_id])
This project uses software - bash, GRASS, Python, etc. Using the Docker image supports the goal of bit-matching reproducibility. Below I list the versions of the software used to create this document.
for tool in gdal-bin parallel sed gawk netcdf-bin proj-bin nco cdo bash grass-gui datamash; do dpkg -l | grep "ii ${tool} " | cut -c5-90; done| sort
(org-version nil t)
Org mode version 9.3.6 (9.3.6-4-gdfa7a3-elpa @ /home/shl/scimax/elpa/org-20200217/)
import numpy as np
import pandas as pd
import xarray as xr
import matplotlib.pyplot as plt
from matplotlib import rc
rc('font', size=12)
rc('text', usetex=False)
plt.xkcd()
- I set DATADIR as a bash environment variable in my login scripts.
- This is so that Python babel blocks can also easily get that property.
import os
DATADIR = os.environ['DATADIR']
Example:
<<get_DATADIR>>
print(DATADIR)
bibexport ms > library.bib
Then switch from the system library.bib to the local library.bib in ms.org.
(add-to-list 'org-latex-classes
`("copernicus"
"\\documentclass{copernicus}
[NO-DEFAULT-PACKAGES]
[NO-PACKAGES]
[EXTRA]"
("\\section{%s}" . "\\section*{%s}")
("\\subsection{%s}" . "\\subsection*{%s}")
("\\subsubsection{%s}" . "\\subsubsection*{%s}")
("\\paragraph{%s}" . "\\paragraph*{%s}")
("\\subparagraph{%s}" . "\\subparagraph*{%s}"))
)
(setq-local org-latex-title-command "")
This code, and all code files in this project, are derived products tangled from the sob.org source file.
PROMICE_MB: all
# dist
all: FORCE
mkdir -p tmp dat
# set up
grass -e -c EPSG:4326 G_HIRHAM
grass ./G_HIRHAM/PERMANENT/ --exec ./HIRHAM.sh
grass -e -c XY G_HIRHAM_XY
grass -e -c EPSG:3413 G_MAR
grass ./G_MAR/PERMANENT --exec ./MAR.sh
grass -e -c EPSG:3413 G_RACMO
grass ./G_RACMO/PERMANENT --exec ./RACMO.sh
# BMB setup on the MAR grid
grass ./G_MAR/PERMANENT --exec ./BMB.sh
make SMB
make BMB
make dist
SMB: FORCE
# partition RCM by Zwally sectors, Mouginot basins, and Mouginot regions
grass ./G_HIRHAM_XY/PERMANENT --exec ./SMB_HIRHAM_ROI.sh
grass ./G_MAR/PERMANENT --exec ./SMB_MAR_ROI.sh
grass ./G_RACMO/PERMANENT --exec ./SMB_RACMO_ROI.sh
./SMB_merge.sh
python ./SMB_bsv2nc.py
BMB: FORCE
grass ./G_MAR/PERMANENT --exec ./BMB_MAR.sh
./BMB_merge.sh
python ./BMB_bsv2nc.py
update: FORCE
# remove previously forecasted MAR days (seq -10 10 spans 10 days into the forecast through 10 days ago)
for n in $$(seq -10 10); do d=$$(date --date="$${n} days ago" --iso-8601); rm -f ./tmp/MAR/*_$${d}.bsv; done
RECENT=true make SMB
for n in $$(seq -10 10); do d=$$(date --date="$${n} days ago" --iso-8601); rm -f ./tmp/BMB/*_$${d}.bsv; done
RECENT=true make BMB
make dist
validate: FORCE
grass -e -c EPSG:3413 G
dist: FORCE
mkdir -p TMB
# create end-user data product
python ./build_TMB_nc.py
python upload_client.py --server https://thredds01.geus.dk/upload_mass_balance --token 'signes-hemmelige-token' TMB/*
# cp TMB/* /mnt/thredds_fileshare/mass_balance/.
# python ./upload_to_DV.py
# python ./twitfig.py
# python ./twitbot.py
FORCE: # dummy target
clean_30:
# Rebuild the last 30 days
for n in $$(seq -10 30); do d=$$(date --date="$${n} days ago" --iso-8601); rm -f ./tmp/MAR/*_$${d}.bsv; done
for n in $$(seq -10 30); do d=$$(date --date="$${n} days ago" --iso-8601); rm -f ./tmp/RACMO/*_$${d}.bsv; done
for n in $$(seq -10 30); do d=$$(date --date="$${n} days ago" --iso-8601); rm -f ./tmp/BMB/*_$${d}.bsv; done
make update
clean_all:
rm -fR G G_RACMO G_HIRHAM G_HIRHAM_XY G_MAR G_tmp tmp dat TMB
clean_SMB:
rm -fR tmp/HIRHAM tmp/MAR tmp/RACMO