Are significant digits of P24 in GRIB the same as double precision in ECHAM6/NETCDF? #679
Replies: 10 comments 13 replies
-
Moin Qinggang,
First: this is the right place :-) While the esm-tools team doesn't provide direct model or Fortran support, we do host the forum, and it is the best "collective place" we could think of for question/answer style discussions. Precision with GRIB can be a tricky topic, especially since ECHAM I/O is not so straightforward. It would help if you could write down what you would expect to happen on the analysis side: what would your Python code produce if Fortran did what you wanted? Some questions from my side so I can help better:
I can't promise anything: there is a good chance that I will redirect you to the MPI-Met forum to ask there instead, but we can try :-)
-
Hi Qinggang,
As my previous supervisor used to say, "GRIB is an illness and NetCDF is the medicine." I don't know why anyone would use GRIB in climate modelling instead of NetCDF. As for CDO, xarray, and similar tools, I am not sure whether they report the exact precision of the data on disk or the precision of the arrays allocated in the current session. You can get a definitive answer by reading the source code of the subroutine that writes the files in ECHAM. If the calculations are performed in DP but the output is written in SP, and you need high-accuracy data, then I would fork the model and implement a DP output routine there.
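To illustrate the point above, that a double-precision value in memory loses digits once it passes through a single-precision writer, here is a minimal stdlib-only Python sketch (my own illustration, not code from the thread):

```python
import struct

def roundtrip_sp(x: float) -> float:
    """Pack a Python float (IEEE 754 double) into 32-bit single
    precision and unpack it again, mimicking a DP field that is
    written to, then read back from, an SP output file."""
    return struct.unpack("f", struct.pack("f", x))[0]

x = 0.1234567890123456   # ~16 significant digits in memory
y = roundtrip_sp(x)      # only ~7 digits survive the SP round trip
```

On a typical machine `y` agrees with `x` to about 7 decimal digits only, which is why comparing the in-memory array against the file contents is a quick way to see what precision actually reached the disk.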
-
Hi Qinggang,
-
Hi Qinggang,
ESM-Tools is actually setting it correctly. If you have a look at the list of output files, you can see that some Wiso outputs are NetCDF but some are not. So why is that? Here is the reason: all Wiso variables from ECHAM are converted to NetCDF because you are setting the file type to 2 (NetCDF).
The other variables are actually coming from JSBACH:

```fortran
IF (lwiso) THEN   ! deniz: starting at line 1531
  CALL new_stream(theLand%IO_wiso_stream, 'js_wiso', filetype=theOptions%FileType, ztype=theOptions%FileZtype, &
       lpost=theOptions%OutputModelState, lrerun=.TRUE.)
  CALL default_stream_setting(theLand%IO_wiso_stream, lpost=theOptions%OutputModelState, repr=LAND, lrerun=.TRUE., &
       table=land_table)
  CALL new_stream(theLand%IO_diag_wiso_stream, 'la_wiso', filetype=theOptions%FileType, ztype=theOptions%FileZtype, &
       lpost=.TRUE., lrerun=.FALSE.)
  CALL default_stream_setting(theLand%IO_diag_wiso_stream, lpost=.TRUE., repr=LAND, lrerun=.FALSE., table=ldiag_table)
  ! Stream that contains accumulated values weighted by land fraction, necessary to compute water and energy
  ! balance in ECHAM postprocessing
  CALL new_stream(theLand%IO_wiso_accw, 'accw_wiso', filetype=theOptions%FileType, ztype=theOptions%FileZtype, &
       lpost=.NOT. theOptions%Standalone, lrerun=.NOT. theOptions%Standalone)
  CALL default_stream_setting(theLand%IO_wiso_accw, lpost=.NOT. theOptions%Standalone, lrerun=.NOT. theOptions%Standalone, &
       repr=LAND, table=ldiag2_table)
```

So what is the value of `theOptions%FileType`? This is the default initialization, and it is updated by the namelist read that comes just after it. I am sure that if you also set the JSBACH output type to 2 via ESM-Tools, as you did for ECHAM, you will get NetCDF for the other Wiso outputs. The log file tells you the output type selected for the JSBACH files, e.g. at line 877.
Could you give these a try?
-
Hi Qinggang,
Here is an excerpt from your `namelist.jsbach`:

```fortran
&jsbach_ctl
  standalone = .false.
  ntiles = 11
  use_bethy = .true.
  use_phenology = .true.
  use_albedo = .true.
  with_yasso = .true.
  with_hd = .false.
  use_roughness_lai = .true.
  use_roughness_oro = .false.
  veg_at_1200 = .false.
  file_type = 1
  file_ztype = 1
  lpost_echam = .false.
  debug = .false.
/
```

As you can see, unlike ECHAM, JSBACH does not have the NetCDF file type set (`file_type = 1`). In your runscript you could add:

```yaml
jsbach:
    add_namelist_changes:
        namelist.jsbach:
            file_type: 2
```

Could you please try that?
-
Hi Qinggang,

```yaml
add_namelist_changes:
    namelist.jsbach:
        jsbach_ctl:   # this was missing
            file_type: 2
```
-
Hi Qinggang,
I checked your experiment directory and searched for the Wiso output files:

```shell
$ find . -name "*wiso*nc" -exec du -h {} \;
3.5K    ./run_20000101-20000131/work/pi_echam6_1d_034_3.39_200001.01_js_wiso.nc
512     ./run_20000101-20000131/work/pi_echam6_1d_034_3.39_200001.01_sf_wiso.nc
3.5K    ./run_20000101-20000131/work/pi_echam6_1d_034_3.39_200001.01_accw_wiso.nc
3.5K    ./run_20000101-20000131/work/pi_echam6_1d_034_3.39_200001.01_la_wiso.nc
3.5K    ./run_20000101-20000131/work/pi_echam6_1d_034_3.39_200001.01_wiso.nc
3.5K    ./unknown/pi_echam6_1d_034_3.39_200001.01_js_wiso.nc
512     ./unknown/pi_echam6_1d_034_3.39_200001.01_sf_wiso.nc
3.5K    ./unknown/pi_echam6_1d_034_3.39_200001.01_accw_wiso.nc
3.5K    ./unknown/pi_echam6_1d_034_3.39_200001.01_la_wiso.nc
3.5K    ./unknown/pi_echam6_1d_034_3.39_200001.01_wiso.nc
```

As you said, they only contain dimensions but no actual output data.
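Given file sizes this small, one quick way to flag such header-only files in bulk is a simple size heuristic. A minimal stdlib-only sketch (the function name and the 4 KiB threshold are my own assumptions, not a NetCDF rule):

```python
import os

def looks_header_only(path: str, threshold: int = 4096) -> bool:
    """Heuristic: a NetCDF file holding only dimensions and metadata is
    a few KiB at most, while real model output is orders of magnitude
    larger.  The threshold is an assumed cut-off; tune it per setup."""
    return os.path.getsize(path) <= threshold
```

Running this over the `find` hits above would flag every file in the listing, matching the observation that none of them contain data.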
-
Hi Qinggang,
I also realized that they are moved to the `unknown` directory.
Unfortunately, that has nothing to do with the memory allocated by the computer. The Wiso arrays are declared and allocated like this:

```fortran
REAL(dp), ALLOCATABLE, PUBLIC :: wisoqte (:,:,:,:)
REAL(dp), ALLOCATABLE, PUBLIC :: wisoxlte (:,:,:,:)
REAL(dp), ALLOCATABLE, PUBLIC :: wisoxite (:,:,:,:)
...
ALLOCATE (wisoqte (nproma,nlev,nwiso,ngpblks)) ;wisoqte = 0.0_dp
ALLOCATE (wisoxlte (nproma,nlev,nwiso,ngpblks)) ;wisoxlte = 0.0_dp
ALLOCATE (wisoxite (nproma,nlev,nwiso,ngpblks)) ;wisoxite = 0.0_dp
```

These are always double-precision floating-point numbers. The strange thing is that the NetCDF files show them as float instead of double.
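The key point, that the precision an array is allocated with in memory says nothing about the precision it is written with, can be mimicked in a stdlib-only Python analogue of `REAL(dp)` storage funneled through a single-precision writer (the variable names are made up for illustration):

```python
from array import array

# In-memory buffer in double precision, like REAL(dp) in Fortran.
model_state = array("d", [0.1234567890123456] * 4)   # itemsize: 8 bytes

# A writer that packs the same values as 4-byte singles before "output",
# analogous to an SP output stream: the allocation stays DP, but the
# file contents do not.
written = array("f", model_state)                    # itemsize: 4 bytes
```

So inspecting the ALLOCATE statements tells you what the model computes with, but only the output routine determines what lands on disk.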
-
Hi Qinggang,
I also verified that the variables in the restart files are defined as `NF_DOUBLE`:

```
./mo_io.f90:386:  CALL IO_DEF_VAR (fileID, io_name, NF_DOUBLE, ndim, &
./mo_io.f90:422:  CALL IO_DEF_VAR (fileID, 'lat', NF_DOUBLE, 1, IO_dims, io_vlat_id)
./mo_io.f90:424:  CALL IO_DEF_VAR (fileID, 'lon', NF_DOUBLE, 1, IO_dims, io_vlon_id)
./mo_io.f90:428:  CALL IO_DEF_VAR (fileID, 'landpoint', NF_DOUBLE, 1, IO_dims, io_vland_id)
./mo_io.f90:433:  CALL IO_DEF_VAR (fileID, 'vct_a', NF_DOUBLE, 1, IO_dims, io_vcta_id)
./mo_io.f90:434:  CALL IO_DEF_VAR (fileID, 'vct_b', NF_DOUBLE, 1, IO_dims, io_vctb_id)
```

The strange thing is that variables are written in DP to the restart files but in SP to the output files. The restart files are written like this (call stack):

```fortran
SUBROUTINE stepon
  CALL write_streams        ! mo_io.f90
  CALL io_write_stream
  CALL IO_PUT_VAR_DOUBLE    ! mo_netcdf.f90
  CALL NF_PUT_VAR_DOUBLE    ! this is the actual NetCDF call
```

I think the problem arises from the output path:

```fortran
SUBROUTINE stepon
  CALL out_streams          ! mo_output.f90
  CALL out_stream
  CALL write_var
  CALL StreamWriteVar       ! comes from the CDI library
```

The CDI documentation says that it writes data in DP: https://code.mpimet.mpg.de/projects/cdi/embedded/cdi_fman.html#x1-510004.1.11

You can start your debugging by placing print statements before the call to `StreamWriteVar` (mo_output.f90, around line 1676):

```fortran
IF (nprocio == 0) THEN
  CALL StreamWriteVar(fileID, varID, xyz(:,:,:,1), nmiss)
ELSE
#ifdef HAVE_CDIPIO
  !PRINT *, 'pe ', p_pe, ', varid=', varid, 'size(xyz) = ', size(xyz), &
  !  'xt_idxlist_get_num_indices = ', xt_idxlist_get_num_indices(gp(SIZE(xyz,3))%idxlist), &
  !  'dim=', info%dim, SIZE(xyz,3)
  CALL streamWriteVarPart(fileID, varID, xyz(:,:,:,1), nmiss, &
       gp(SIZE(xyz,3))%idxlist)
#endif
ENDIF
```
-
Moin all,
Yes! So a few things:
All esm-tools really does, if you boil it down, is copy around input and output files and fire off a batch system job for you. It's complicated because we try to support many different HPC systems and models, but the output routines of ECHAM have nothing specifically to do with esm-tools here.
-
Hi all,
I am not sure if this is the right place to ask this question, but I ran into this problem while coding in ECHAM6. My understanding and questions are described below; if you have any ideas, please let me know.
1. In ECHAM6, the calculation is done by default in double precision (dp/wp), while the output for many variables/streams is in p16 for GRIB. I understand dp has ~15 and sp has 7-8 significant digits. How about p16 and p24 for GRIB files? I googled a lot but found no answers.
2. If I want to output variables in double precision, shall I just specify "bits=24" when adding the variables to the stream? Is there anything else I need to do?
3. In the GRIB file, I can see the DTYPE of my desired variable is P24 with "cdo -sinfo". But when I import this data into Python with xarray and the cfgrib engine, it is float32. Even when I convert it to float64 with CDO, it has only around 9 significant digits. Does that mean P24 has only ~9 significant digits?
4. Can I output float64 in NetCDF format directly?
Regards, Qinggang
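As a back-of-the-envelope check on questions 1 and 3: the decimal resolution of N-bit GRIB simple packing can be estimated as N·log10(2), since an N-bit integer mantissa gives a best relative resolution of about 2^-N across the field's range. A small stdlib-only sketch (my own estimate, not from ECHAM or CDO; the actual digit count also depends on the field's range and reference value):

```python
import math

def packed_decimal_digits(nbits: int) -> float:
    # GRIB simple packing stores an N-bit integer mantissa per value, so
    # the best relative resolution within a field's range is ~2**-N,
    # i.e. about N * log10(2) decimal digits.
    return nbits * math.log10(2)

for nbits in (16, 24, 53):
    print(f"{nbits:2d} bits -> ~{packed_decimal_digits(nbits):.1f} decimal digits")
# 16 bits (p16): ~4.8 digits; 24 bits (p24): ~7.2 digits;
# 53 bits (IEEE double mantissa): ~16.0 digits
```

So p24 resolving roughly 7 digits is consistent with the observation above: once cfgrib hands the data over as float32 (itself ~7 digits), converting to float64 afterwards cannot restore digits that were never stored in the file.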