diff --git a/dev/_downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip b/dev/_downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip
index 627b774..1b4161c 100644
Binary files a/dev/_downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip and b/dev/_downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip differ
diff --git a/dev/_downloads/0d78e075dd52a34e158d7f5f710dfe89/plot_incremental_FNO_darcy.zip b/dev/_downloads/0d78e075dd52a34e158d7f5f710dfe89/plot_incremental_FNO_darcy.zip
index 2a497dd..32e6741 100644
Binary files a/dev/_downloads/0d78e075dd52a34e158d7f5f710dfe89/plot_incremental_FNO_darcy.zip and b/dev/_downloads/0d78e075dd52a34e158d7f5f710dfe89/plot_incremental_FNO_darcy.zip differ
diff --git a/dev/_downloads/20c43dd37baf603889c4dc23e93bdb60/plot_count_flops.zip b/dev/_downloads/20c43dd37baf603889c4dc23e93bdb60/plot_count_flops.zip
index 4f7f947..b4f91bd 100644
Binary files a/dev/_downloads/20c43dd37baf603889c4dc23e93bdb60/plot_count_flops.zip and b/dev/_downloads/20c43dd37baf603889c4dc23e93bdb60/plot_count_flops.zip differ
diff --git a/dev/_downloads/3864a2d85c7ce11adeac9580559229ab/plot_darcy_flow.zip b/dev/_downloads/3864a2d85c7ce11adeac9580559229ab/plot_darcy_flow.zip
index b2dfcd2..f0c5663 100644
Binary files a/dev/_downloads/3864a2d85c7ce11adeac9580559229ab/plot_darcy_flow.zip and b/dev/_downloads/3864a2d85c7ce11adeac9580559229ab/plot_darcy_flow.zip differ
diff --git a/dev/_downloads/3faf9d2eaee5cc8e9f1c631c002ce544/plot_darcy_flow_spectrum.zip b/dev/_downloads/3faf9d2eaee5cc8e9f1c631c002ce544/plot_darcy_flow_spectrum.zip
index 68b744a..9a710c5 100644
Binary files a/dev/_downloads/3faf9d2eaee5cc8e9f1c631c002ce544/plot_darcy_flow_spectrum.zip and b/dev/_downloads/3faf9d2eaee5cc8e9f1c631c002ce544/plot_darcy_flow_spectrum.zip differ
diff --git a/dev/_downloads/4e1cba46bb51062a073424e1efbaad57/plot_DISCO_convolutions.zip b/dev/_downloads/4e1cba46bb51062a073424e1efbaad57/plot_DISCO_convolutions.zip
index 1f0992f..a092fc7 100644
Binary files a/dev/_downloads/4e1cba46bb51062a073424e1efbaad57/plot_DISCO_convolutions.zip and b/dev/_downloads/4e1cba46bb51062a073424e1efbaad57/plot_DISCO_convolutions.zip differ
diff --git a/dev/_downloads/5e60095ce99919773daa83384f767e02/plot_SFNO_swe.zip b/dev/_downloads/5e60095ce99919773daa83384f767e02/plot_SFNO_swe.zip
index 705fae8..cfcf9c7 100644
Binary files a/dev/_downloads/5e60095ce99919773daa83384f767e02/plot_SFNO_swe.zip and b/dev/_downloads/5e60095ce99919773daa83384f767e02/plot_SFNO_swe.zip differ
diff --git a/dev/_downloads/645da00b8fbbb9bb5cae877fd0f31635/plot_FNO_darcy.zip b/dev/_downloads/645da00b8fbbb9bb5cae877fd0f31635/plot_FNO_darcy.zip
index 725b90f..2f2c882 100644
Binary files a/dev/_downloads/645da00b8fbbb9bb5cae877fd0f31635/plot_FNO_darcy.zip and b/dev/_downloads/645da00b8fbbb9bb5cae877fd0f31635/plot_FNO_darcy.zip differ
diff --git a/dev/_downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip b/dev/_downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip
index c3eead1..4e9ee79 100644
Binary files a/dev/_downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip and b/dev/_downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip differ
diff --git a/dev/_downloads/7296405f6df7c2cfe184e9b258cee33e/checkpoint_FNO_darcy.zip b/dev/_downloads/7296405f6df7c2cfe184e9b258cee33e/checkpoint_FNO_darcy.zip
index bfb56dd..5460753 100644
Binary files a/dev/_downloads/7296405f6df7c2cfe184e9b258cee33e/checkpoint_FNO_darcy.zip and b/dev/_downloads/7296405f6df7c2cfe184e9b258cee33e/checkpoint_FNO_darcy.zip differ
diff --git a/dev/_downloads/cefc537c5730a6b3e916b83c1fd313d6/plot_UNO_darcy.zip b/dev/_downloads/cefc537c5730a6b3e916b83c1fd313d6/plot_UNO_darcy.zip
index 01ff65b..aa0dbcb 100644
Binary files a/dev/_downloads/cefc537c5730a6b3e916b83c1fd313d6/plot_UNO_darcy.zip and b/dev/_downloads/cefc537c5730a6b3e916b83c1fd313d6/plot_UNO_darcy.zip differ
diff --git a/dev/_images/sphx_glr_plot_SFNO_swe_001.png b/dev/_images/sphx_glr_plot_SFNO_swe_001.png
index af8519a..89fa5db 100644
Binary files a/dev/_images/sphx_glr_plot_SFNO_swe_001.png and b/dev/_images/sphx_glr_plot_SFNO_swe_001.png differ
diff --git a/dev/_images/sphx_glr_plot_SFNO_swe_thumb.png b/dev/_images/sphx_glr_plot_SFNO_swe_thumb.png
index 37e3b47..12622fa 100644
Binary files a/dev/_images/sphx_glr_plot_SFNO_swe_thumb.png and b/dev/_images/sphx_glr_plot_SFNO_swe_thumb.png differ
diff --git a/dev/_images/sphx_glr_plot_UNO_darcy_001.png b/dev/_images/sphx_glr_plot_UNO_darcy_001.png
index 9ede0e8..5cd2bcc 100644
Binary files a/dev/_images/sphx_glr_plot_UNO_darcy_001.png and b/dev/_images/sphx_glr_plot_UNO_darcy_001.png differ
diff --git a/dev/_images/sphx_glr_plot_UNO_darcy_thumb.png b/dev/_images/sphx_glr_plot_UNO_darcy_thumb.png
index d40d40d..4a30fcd 100644
Binary files a/dev/_images/sphx_glr_plot_UNO_darcy_thumb.png and b/dev/_images/sphx_glr_plot_UNO_darcy_thumb.png differ
diff --git a/dev/_sources/auto_examples/plot_DISCO_convolutions.rst.txt b/dev/_sources/auto_examples/plot_DISCO_convolutions.rst.txt
index 402176c..a66c78f 100644
--- a/dev/_sources/auto_examples/plot_DISCO_convolutions.rst.txt
+++ b/dev/_sources/auto_examples/plot_DISCO_convolutions.rst.txt
@@ -357,7 +357,7 @@ in order to compute the convolved image, we need to first bring it into the righ
 .. code-block:: none

-
+

@@ -448,7 +448,7 @@ in order to compute the convolved image, we need to first bring it into the righ
 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** (0 minutes 31.554 seconds)
+   **Total running time of the script:** (0 minutes 31.319 seconds)

.. _sphx_glr_download_auto_examples_plot_DISCO_convolutions.py:
diff --git a/dev/_sources/auto_examples/plot_FNO_darcy.rst.txt b/dev/_sources/auto_examples/plot_FNO_darcy.rst.txt
index 7e7a841..341eea7 100644
--- a/dev/_sources/auto_examples/plot_FNO_darcy.rst.txt
+++ b/dev/_sources/auto_examples/plot_FNO_darcy.rst.txt
@@ -248,13 +248,13 @@ Training the model
     )

 ### SCHEDULER ###
-
+

 ### LOSSES ###
- * Train:
+ * Train:

- * Test: {'h1': , 'l2': }
+ * Test: {'h1': , 'l2': }

@@ -313,20 +313,20 @@ Then train the model on our small Darcy-Flow dataset:
 Raw outputs of shape torch.Size([32, 1, 16, 16])
 [0] time=2.57, avg_loss=0.6956, train_err=21.7383
 Eval: 16_h1=0.4298, 16_l2=0.3487, 32_h1=0.5847, 32_l2=0.3542
- [3] time=2.56, avg_loss=0.2103, train_err=6.5705
+ [3] time=2.55, avg_loss=0.2103, train_err=6.5705
 Eval: 16_h1=0.2030, 16_l2=0.1384, 32_h1=0.5075, 32_l2=0.1774
- [6] time=2.59, avg_loss=0.1911, train_err=5.9721
+ [6] time=2.56, avg_loss=0.1911, train_err=5.9721
 Eval: 16_h1=0.2099, 16_l2=0.1374, 32_h1=0.4907, 32_l2=0.1783
- [9] time=2.57, avg_loss=0.1410, train_err=4.4073
+ [9] time=2.56, avg_loss=0.1410, train_err=4.4073
 Eval: 16_h1=0.2052, 16_l2=0.1201, 32_h1=0.5268, 32_l2=0.1615
- [12] time=2.55, avg_loss=0.1422, train_err=4.4434
+ [12] time=2.56, avg_loss=0.1422, train_err=4.4434
 Eval: 16_h1=0.2131, 16_l2=0.1285, 32_h1=0.5413, 32_l2=0.1741
- [15] time=2.57, avg_loss=0.1198, train_err=3.7424
+ [15] time=2.62, avg_loss=0.1198, train_err=3.7424
 Eval: 16_h1=0.1984, 16_l2=0.1137, 32_h1=0.5255, 32_l2=0.1569
- [18] time=2.55, avg_loss=0.1104, train_err=3.4502
+ [18] time=2.57, avg_loss=0.1104, train_err=3.4502
 Eval: 16_h1=0.2039, 16_l2=0.1195, 32_h1=0.5062, 32_l2=0.1603
- {'train_err': 2.9605126455426216, 'avg_loss': 0.0947364046573639, 'avg_lasso_loss': None, 'epoch_train_time': 2.5465377909999916}
+ {'train_err': 2.9605126455426216, 'avg_loss': 0.0947364046573639, 'avg_lasso_loss': None, 'epoch_train_time': 2.5713351109999394}

@@ -476,7 +476,7 @@ are other ways to scale the outputs of the FNO to train a true super-resolution
 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** (0 minutes 52.494 seconds)
+   **Total running time of the script:** (0 minutes 52.622 seconds)

.. _sphx_glr_download_auto_examples_plot_FNO_darcy.py:

diff --git a/dev/_sources/auto_examples/plot_SFNO_swe.rst.txt b/dev/_sources/auto_examples/plot_SFNO_swe.rst.txt
index c790322..c016df3 100644
--- a/dev/_sources/auto_examples/plot_SFNO_swe.rst.txt
+++ b/dev/_sources/auto_examples/plot_SFNO_swe.rst.txt
@@ -234,13 +234,13 @@ Creating the losses
     )

 ### SCHEDULER ###
-
+

 ### LOSSES ###
- * Train:
+ * Train:

- * Test: {'l2': }
+ * Test: {'l2': }

@@ -297,22 +297,22 @@ Train the model on the spherical SWE dataset
 Training on 200 samples
 Testing on [50, 50] samples         on resolutions [(32, 64), (64, 128)].
 Raw outputs of shape torch.Size([4, 3, 32, 64])
- [0] time=3.65, avg_loss=2.5842, train_err=10.3366
- Eval: (32, 64)_l2=1.8738, (64, 128)_l2=2.4284
- [3] time=3.62, avg_loss=0.4147, train_err=1.6589
- Eval: (32, 64)_l2=0.5781, (64, 128)_l2=2.3744
- [6] time=3.63, avg_loss=0.2662, train_err=1.0649
- Eval: (32, 64)_l2=0.5419, (64, 128)_l2=2.3195
- [9] time=3.58, avg_loss=0.2207, train_err=0.8829
- Eval: (32, 64)_l2=0.4887, (64, 128)_l2=2.3423
- [12] time=3.58, avg_loss=0.1894, train_err=0.7575
- Eval: (32, 64)_l2=0.4850, (64, 128)_l2=2.3244
- [15] time=3.57, avg_loss=0.1631, train_err=0.6523
- Eval: (32, 64)_l2=0.4811, (64, 128)_l2=2.3205
- [18] time=3.58, avg_loss=0.1442, train_err=0.5769
- Eval: (32, 64)_l2=0.4802, (64, 128)_l2=2.3157
+ [0] time=3.66, avg_loss=2.6152, train_err=10.4609
+ Eval: (32, 64)_l2=1.8807, (64, 128)_l2=2.5119
+ [3] time=3.64, avg_loss=0.4558, train_err=1.8232
+ Eval: (32, 64)_l2=0.9008, (64, 128)_l2=2.6564
+ [6] time=3.58, avg_loss=0.2771, train_err=1.1085
+ Eval: (32, 64)_l2=0.8429, (64, 128)_l2=2.5842
+ [9] time=3.60, avg_loss=0.2288, train_err=0.9151
+ Eval: (32, 64)_l2=0.8236, (64, 128)_l2=2.5503
+ [12] time=3.59, avg_loss=0.1939, train_err=0.7754
+ Eval: (32, 64)_l2=0.7405, (64, 128)_l2=2.4524
+ [15] time=3.59, avg_loss=0.1673, train_err=0.6693
+ Eval: (32, 64)_l2=0.7308, (64, 128)_l2=2.4187
+ [18] time=3.59, avg_loss=0.1475, train_err=0.5900
+ Eval: (32, 64)_l2=0.6825, (64, 128)_l2=2.3738
- {'train_err': 0.5465761256217957, 'avg_loss': 0.13664403140544892, 'avg_lasso_loss': None, 'epoch_train_time': 3.5847762680000415}
+ {'train_err': 0.5666905903816223, 'avg_loss': 0.14167264759540557, 'avg_lasso_loss': None, 'epoch_train_time': 3.6301156069999934}

@@ -383,7 +383,7 @@ In practice we would train a Neural Operator on one or multiple GPUs
 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** (1 minutes 27.934 seconds)
+   **Total running time of the script:** (1 minutes 28.624 seconds)

.. _sphx_glr_download_auto_examples_plot_SFNO_swe.py:

diff --git a/dev/_sources/auto_examples/plot_UNO_darcy.rst.txt b/dev/_sources/auto_examples/plot_UNO_darcy.rst.txt
index 8c7b0d8..a762e32 100644
--- a/dev/_sources/auto_examples/plot_UNO_darcy.rst.txt
+++ b/dev/_sources/auto_examples/plot_UNO_darcy.rst.txt
@@ -345,13 +345,13 @@ Creating the losses
     )

 ### SCHEDULER ###
-
+

 ### LOSSES ###
- * Train:
+ * Train:

- * Test: {'h1': , 'l2': }
+ * Test: {'h1': , 'l2': }

@@ -410,22 +410,22 @@ Actually train the model on our small Darcy-Flow dataset
 Training on 1000 samples
 Testing on [50, 50] samples         on resolutions [16, 32].
 Raw outputs of shape torch.Size([32, 1, 16, 16])
- [0] time=9.95, avg_loss=0.6844, train_err=21.3879
- Eval: 16_h1=0.6813, 16_l2=0.4521, 32_h1=0.9371, 32_l2=0.6767
- [3] time=9.89, avg_loss=0.2622, train_err=8.1950
- Eval: 16_h1=0.2517, 16_l2=0.1695, 32_h1=0.8607, 32_l2=0.6035
- [6] time=9.92, avg_loss=0.2178, train_err=6.8077
- Eval: 16_h1=0.2469, 16_l2=0.1539, 32_h1=0.8402, 32_l2=0.5624
- [9] time=9.88, avg_loss=0.2180, train_err=6.8126
- Eval: 16_h1=0.2543, 16_l2=0.1619, 32_h1=0.7828, 32_l2=0.5254
- [12] time=9.94, avg_loss=0.1915, train_err=5.9847
- Eval: 16_h1=0.2700, 16_l2=0.1719, 32_h1=0.7757, 32_l2=0.4989
- [15] time=9.87, avg_loss=0.1635, train_err=5.1107
- Eval: 16_h1=0.2434, 16_l2=0.1517, 32_h1=0.8047, 32_l2=0.5077
- [18] time=9.90, avg_loss=0.1674, train_err=5.2311
- Eval: 16_h1=0.2448, 16_l2=0.1587, 32_h1=0.7799, 32_l2=0.5013
+ [0] time=10.04, avg_loss=0.6503, train_err=20.3222
+ Eval: 16_h1=0.3710, 16_l2=0.2491, 32_h1=0.8856, 32_l2=0.5331
+ [3] time=9.99, avg_loss=0.2580, train_err=8.0616
+ Eval: 16_h1=0.3135, 16_l2=0.2162, 32_h1=0.8040, 32_l2=0.5550
+ [6] time=10.07, avg_loss=0.2012, train_err=6.2867
+ Eval: 16_h1=0.2600, 16_l2=0.1664, 32_h1=0.7934, 32_l2=0.5296
+ [9] time=9.99, avg_loss=0.2072, train_err=6.4741
+ Eval: 16_h1=0.2641, 16_l2=0.1678, 32_h1=0.7593, 32_l2=0.4923
+ [12] time=10.01, avg_loss=0.2042, train_err=6.3807
+ Eval: 16_h1=0.2544, 16_l2=0.1577, 32_h1=0.7915, 32_l2=0.5030
+ [15] time=9.96, avg_loss=0.1907, train_err=5.9588
+ Eval: 16_h1=0.2567, 16_l2=0.1713, 32_h1=0.7695, 32_l2=0.4561
+ [18] time=10.02, avg_loss=0.1470, train_err=4.5951
+ Eval: 16_h1=0.2394, 16_l2=0.1488, 32_h1=0.7585, 32_l2=0.4966
- {'train_err': 4.346285965293646, 'avg_loss': 0.13908115088939665, 'avg_lasso_loss': None, 'epoch_train_time': 9.86875730099996}
+ {'train_err': 4.85387809574604, 'avg_loss': 0.1553240990638733, 'avg_lasso_loss': None, 'epoch_train_time': 9.995433986999956}

@@ -499,7 +499,7 @@ In practice we would train a Neural Operator on one or multiple GPUs
 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** (3 minutes 21.485 seconds)
+   **Total running time of the script:** (3 minutes 23.451 seconds)

.. _sphx_glr_download_auto_examples_plot_UNO_darcy.py:

diff --git a/dev/_sources/auto_examples/plot_count_flops.rst.txt b/dev/_sources/auto_examples/plot_count_flops.rst.txt
index 6203eac..085df64 100644
--- a/dev/_sources/auto_examples/plot_count_flops.rst.txt
+++ b/dev/_sources/auto_examples/plot_count_flops.rst.txt
@@ -80,7 +80,7 @@ This output is organized as a defaultdict object that counts the FLOPS used in e
 .. code-block:: none

- defaultdict(. at 0x7f32ba6a37e0>, {'': defaultdict(, {'convolution.default': 2982150144, 'bmm.default': 138412032}), 'lifting': defaultdict(, {'convolution.default': 562036736}), 'lifting.fcs.0': defaultdict(, {'convolution.default': 25165824}), 'lifting.fcs.1': defaultdict(, {'convolution.default': 536870912}), 'fno_blocks': defaultdict(, {'convolution.default': 2147483648, 'bmm.default': 138412032}), 'fno_blocks.fno_skips.0': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.0.conv': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.convs.0': defaultdict(, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.0': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.0.fcs.0': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.0.fcs.1': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.1': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.1.conv': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.convs.1': defaultdict(, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.1': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.1.fcs.0': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.1.fcs.1': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.2': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.2.conv': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.convs.2': defaultdict(, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.2': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.2.fcs.0': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.2.fcs.1': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.3': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.3.conv': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.convs.3': defaultdict(, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.3': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.3.fcs.0': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.3.fcs.1': defaultdict(, {'convolution.default': 134217728}), 'projection': defaultdict(, {'convolution.default': 272629760}), 'projection.fcs.0': defaultdict(, {'convolution.default': 268435456}), 'projection.fcs.1': defaultdict(, {'convolution.default': 4194304})})
+ defaultdict(. at 0x7fe4e078f7e0>, {'': defaultdict(, {'convolution.default': 2982150144, 'bmm.default': 138412032}), 'lifting': defaultdict(, {'convolution.default': 562036736}), 'lifting.fcs.0': defaultdict(, {'convolution.default': 25165824}), 'lifting.fcs.1': defaultdict(, {'convolution.default': 536870912}), 'fno_blocks': defaultdict(, {'convolution.default': 2147483648, 'bmm.default': 138412032}), 'fno_blocks.fno_skips.0': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.0.conv': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.convs.0': defaultdict(, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.0': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.0.fcs.0': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.0.fcs.1': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.1': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.1.conv': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.convs.1': defaultdict(, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.1': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.1.fcs.0': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.1.fcs.1': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.2': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.2.conv': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.convs.2': defaultdict(, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.2': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.2.fcs.0': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.2.fcs.1': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.3': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.3.conv': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.convs.3': defaultdict(, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.3': defaultdict(, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.3.fcs.0': defaultdict(, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.3.fcs.1': defaultdict(, {'convolution.default': 134217728}), 'projection': defaultdict(, {'convolution.default': 272629760}), 'projection.fcs.0': defaultdict(, {'convolution.default': 268435456}), 'projection.fcs.1': defaultdict(, {'convolution.default': 4194304})})

@@ -125,7 +125,7 @@ To check the maximum FLOPS used during the forward pass, let's create a recursiv
 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** (0 minutes 3.214 seconds)
+   **Total running time of the script:** (0 minutes 3.202 seconds)

.. _sphx_glr_download_auto_examples_plot_count_flops.py:

diff --git a/dev/_sources/auto_examples/plot_darcy_flow_spectrum.rst.txt b/dev/_sources/auto_examples/plot_darcy_flow_spectrum.rst.txt
index 9f19c7c..e84dd85 100644
--- a/dev/_sources/auto_examples/plot_darcy_flow_spectrum.rst.txt
+++ b/dev/_sources/auto_examples/plot_darcy_flow_spectrum.rst.txt
@@ -219,7 +219,7 @@ Loading the Navier-Stokes dataset in 128x128 resolution
 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** (0 minutes 0.202 seconds)
+   **Total running time of the script:** (0 minutes 0.198 seconds)

.. _sphx_glr_download_auto_examples_plot_darcy_flow_spectrum.py:
diff --git a/dev/_sources/auto_examples/plot_incremental_FNO_darcy.rst.txt b/dev/_sources/auto_examples/plot_incremental_FNO_darcy.rst.txt
index 01b890d..f0ce7a3 100644
--- a/dev/_sources/auto_examples/plot_incremental_FNO_darcy.rst.txt
+++ b/dev/_sources/auto_examples/plot_incremental_FNO_darcy.rst.txt
@@ -240,15 +240,15 @@ Set up the losses
     )

 ### SCHEDULER ###
-
+

 ### LOSSES ###

 ### INCREMENTAL RESOLUTION + GRADIENT EXPLAINED ###
- * Train:
+ * Train:

- * Test: {'h1': , 'l2': }
+ * Test: {'h1': , 'l2': }

@@ -331,15 +331,15 @@ Train the model
 Raw outputs of shape torch.Size([16, 1, 8, 8])
 [0] time=0.24, avg_loss=0.9239, train_err=13.1989
 Eval: 16_h1=0.8702, 16_l2=0.5536, 32_h1=0.9397, 32_l2=0.5586
- [1] time=0.23, avg_loss=0.7589, train_err=10.8413
+ [1] time=0.22, avg_loss=0.7589, train_err=10.8413
 Eval: 16_h1=0.7880, 16_l2=0.5167, 32_h1=0.9011, 32_l2=0.5234
 [2] time=0.22, avg_loss=0.6684, train_err=9.5484
 Eval: 16_h1=0.7698, 16_l2=0.4235, 32_h1=0.9175, 32_l2=0.4405
- [3] time=0.23, avg_loss=0.6082, train_err=8.6886
+ [3] time=0.22, avg_loss=0.6082, train_err=8.6886
 Eval: 16_h1=0.7600, 16_l2=0.4570, 32_h1=1.0126, 32_l2=0.5005
- [4] time=0.23, avg_loss=0.5604, train_err=8.0054
+ [4] time=0.22, avg_loss=0.5604, train_err=8.0054
 Eval: 16_h1=0.8301, 16_l2=0.4577, 32_h1=1.1722, 32_l2=0.4987
- [5] time=0.23, avg_loss=0.5366, train_err=7.6663
+ [5] time=0.22, avg_loss=0.5366, train_err=7.6663
 Eval: 16_h1=0.8099, 16_l2=0.4076, 32_h1=1.1414, 32_l2=0.4436
 [6] time=0.23, avg_loss=0.5050, train_err=7.2139
 Eval: 16_h1=0.7204, 16_l2=0.3888, 32_h1=0.9945, 32_l2=0.4285
@@ -354,7 +354,7 @@ Train the model
 Incre Res Update: change res to 16
 [10] time=0.29, avg_loss=0.5158, train_err=7.3681
 Eval: 16_h1=0.5133, 16_l2=0.3167, 32_h1=0.6120, 32_l2=0.3022
- [11] time=0.28, avg_loss=0.4536, train_err=6.4795
+ [11] time=0.27, avg_loss=0.4536, train_err=6.4795
 Eval: 16_h1=0.4680, 16_l2=0.3422, 32_h1=0.6436, 32_l2=0.3659
 [12] time=0.28, avg_loss=0.4155, train_err=5.9358
 Eval: 16_h1=0.4119, 16_l2=0.2692, 32_h1=0.5285, 32_l2=0.2692
@@ -362,18 +362,18 @@ Train the model
 Eval: 16_h1=0.4045, 16_l2=0.2569, 32_h1=0.5620, 32_l2=0.2747
 [14] time=0.28, avg_loss=0.3555, train_err=5.0783
 Eval: 16_h1=0.3946, 16_l2=0.2485, 32_h1=0.5278, 32_l2=0.2523
- [15] time=0.29, avg_loss=0.3430, train_err=4.8998
+ [15] time=0.28, avg_loss=0.3430, train_err=4.8998
 Eval: 16_h1=0.3611, 16_l2=0.2323, 32_h1=0.5079, 32_l2=0.2455
 [16] time=0.28, avg_loss=0.3251, train_err=4.6438
 Eval: 16_h1=0.3433, 16_l2=0.2224, 32_h1=0.4757, 32_l2=0.2351
- [17] time=0.29, avg_loss=0.3072, train_err=4.3888
+ [17] time=0.28, avg_loss=0.3072, train_err=4.3888
 Eval: 16_h1=0.3458, 16_l2=0.2226, 32_h1=0.4776, 32_l2=0.2371
 [18] time=0.28, avg_loss=0.2982, train_err=4.2593
 Eval: 16_h1=0.3251, 16_l2=0.2116, 32_h1=0.4519, 32_l2=0.2245
- [19] time=0.29, avg_loss=0.2802, train_err=4.0024
+ [19] time=0.28, avg_loss=0.2802, train_err=4.0024
 Eval: 16_h1=0.3201, 16_l2=0.2110, 32_h1=0.4533, 32_l2=0.2245
- {'train_err': 4.002395851271493, 'avg_loss': 0.2801677095890045, 'avg_lasso_loss': None, 'epoch_train_time': 0.2853402149999056, '16_h1': tensor(0.3201), '16_l2': tensor(0.2110), '32_h1': tensor(0.4533), '32_l2': tensor(0.2245)}
+ {'train_err': 4.002395851271493, 'avg_loss': 0.2801677095890045, 'avg_lasso_loss': None, 'epoch_train_time': 0.281409022000048, '16_h1': tensor(0.3201), '16_l2': tensor(0.2110), '32_h1': tensor(0.4533), '32_l2': tensor(0.2245)}

@@ -447,7 +447,7 @@ In practice we would train a Neural Operator on one or multiple GPUs
 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** (0 minutes 7.050 seconds)
+   **Total running time of the script:** (0 minutes 6.935 seconds)

.. _sphx_glr_download_auto_examples_plot_incremental_FNO_darcy.py:

diff --git a/dev/_sources/auto_examples/sg_execution_times.rst.txt b/dev/_sources/auto_examples/sg_execution_times.rst.txt
index 8a13a5c..f59f287 100644
--- a/dev/_sources/auto_examples/sg_execution_times.rst.txt
+++ b/dev/_sources/auto_examples/sg_execution_times.rst.txt
@@ -6,7 +6,7 @@ Computation times
 =================

-**06:24.262** total execution time for 9 files **from auto_examples**:
+**06:26.680** total execution time for 9 files **from auto_examples**:

 .. container::

@@ -33,28 +33,28 @@ Computation times
      - Time
      - Mem (MB)
    * - :ref:`sphx_glr_auto_examples_plot_UNO_darcy.py` (``plot_UNO_darcy.py``)
-     - 03:21.485
+     - 03:23.451
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_SFNO_swe.py` (``plot_SFNO_swe.py``)
-     - 01:27.934
+     - 01:28.624
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_FNO_darcy.py` (``plot_FNO_darcy.py``)
-     - 00:52.494
+     - 00:52.622
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_DISCO_convolutions.py` (``plot_DISCO_convolutions.py``)
-     - 00:31.554
+     - 00:31.319
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_incremental_FNO_darcy.py` (``plot_incremental_FNO_darcy.py``)
-     - 00:07.050
+     - 00:06.935
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_count_flops.py` (``plot_count_flops.py``)
-     - 00:03.214
+     - 00:03.202
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_darcy_flow.py` (``plot_darcy_flow.py``)
      - 00:00.328
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_darcy_flow_spectrum.py` (``plot_darcy_flow_spectrum.py``)
-     - 00:00.202
+     - 00:00.198
      - 0.0
    * - :ref:`sphx_glr_auto_examples_checkpoint_FNO_darcy.py` (``checkpoint_FNO_darcy.py``)
      - 00:00.000
diff --git a/dev/_sources/sg_execution_times.rst.txt b/dev/_sources/sg_execution_times.rst.txt
index f58427c..102cedd 100644
--- a/dev/_sources/sg_execution_times.rst.txt
+++ b/dev/_sources/sg_execution_times.rst.txt
@@ -6,7 +6,7 @@ Computation times
 =================

-**06:24.262** total execution time for 9 files **from all galleries**:
+**06:26.680** total execution time for 9 files **from all galleries**:

 .. container::

@@ -33,28 +33,28 @@ Computation times
      - Time
      - Mem (MB)
    * - :ref:`sphx_glr_auto_examples_plot_UNO_darcy.py` (``../../examples/plot_UNO_darcy.py``)
-     - 03:21.485
+     - 03:23.451
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_SFNO_swe.py` (``../../examples/plot_SFNO_swe.py``)
-     - 01:27.934
+     - 01:28.624
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_FNO_darcy.py` (``../../examples/plot_FNO_darcy.py``)
-     - 00:52.494
+     - 00:52.622
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_DISCO_convolutions.py` (``../../examples/plot_DISCO_convolutions.py``)
-     - 00:31.554
+     - 00:31.319
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_incremental_FNO_darcy.py` (``../../examples/plot_incremental_FNO_darcy.py``)
-     - 00:07.050
+     - 00:06.935
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_count_flops.py` (``../../examples/plot_count_flops.py``)
-     - 00:03.214
+     - 00:03.202
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_darcy_flow.py` (``../../examples/plot_darcy_flow.py``)
      - 00:00.328
      - 0.0
    * - :ref:`sphx_glr_auto_examples_plot_darcy_flow_spectrum.py` (``../../examples/plot_darcy_flow_spectrum.py``)
-     - 00:00.202
+     - 00:00.198
      - 0.0
    * - :ref:`sphx_glr_auto_examples_checkpoint_FNO_darcy.py` (``../../examples/checkpoint_FNO_darcy.py``)
      - 00:00.000
diff --git a/dev/auto_examples/plot_DISCO_convolutions.html b/dev/auto_examples/plot_DISCO_convolutions.html
index eb399a3..981ad3f 100644
--- a/dev/auto_examples/plot_DISCO_convolutions.html
+++ b/dev/auto_examples/plot_DISCO_convolutions.html
@@ -330,7 +330,7 @@
 # plt.show()
-plot DISCO convolutions
<matplotlib.colorbar.Colorbar object at 0x7f32ac13c550>
+plot DISCO convolutions
<matplotlib.colorbar.Colorbar object at 0x7fe4cde24550>

convt = DiscreteContinuousConvTranspose2d(1, 1, grid_in=grid_out, grid_out=grid_in, quadrature_weights=q_out, kernel_shape=[2,4], radius_cutoff=3/nyo, periodic=False).float()
@@ -381,7 +381,7 @@
 plot DISCO convolutions
torch.Size([1, 1, 120, 90])

-Total running time of the script: (0 minutes 31.554 seconds)
+Total running time of the script: (0 minutes 31.319 seconds)

Create the trainer
@@ -323,22 +323,22 @@
Training on 200 samples
 Testing on [50, 50] samples         on resolutions [(32, 64), (64, 128)].
 Raw outputs of shape torch.Size([4, 3, 32, 64])
-[0] time=3.65, avg_loss=2.5842, train_err=10.3366
-Eval: (32, 64)_l2=1.8738, (64, 128)_l2=2.4284
-[3] time=3.62, avg_loss=0.4147, train_err=1.6589
-Eval: (32, 64)_l2=0.5781, (64, 128)_l2=2.3744
-[6] time=3.63, avg_loss=0.2662, train_err=1.0649
-Eval: (32, 64)_l2=0.5419, (64, 128)_l2=2.3195
-[9] time=3.58, avg_loss=0.2207, train_err=0.8829
-Eval: (32, 64)_l2=0.4887, (64, 128)_l2=2.3423
-[12] time=3.58, avg_loss=0.1894, train_err=0.7575
-Eval: (32, 64)_l2=0.4850, (64, 128)_l2=2.3244
-[15] time=3.57, avg_loss=0.1631, train_err=0.6523
-Eval: (32, 64)_l2=0.4811, (64, 128)_l2=2.3205
-[18] time=3.58, avg_loss=0.1442, train_err=0.5769
-Eval: (32, 64)_l2=0.4802, (64, 128)_l2=2.3157
-
-{'train_err': 0.5465761256217957, 'avg_loss': 0.13664403140544892, 'avg_lasso_loss': None, 'epoch_train_time': 3.5847762680000415}
+[0] time=3.66, avg_loss=2.6152, train_err=10.4609
+Eval: (32, 64)_l2=1.8807, (64, 128)_l2=2.5119
+[3] time=3.64, avg_loss=0.4558, train_err=1.8232
+Eval: (32, 64)_l2=0.9008, (64, 128)_l2=2.6564
+[6] time=3.58, avg_loss=0.2771, train_err=1.1085
+Eval: (32, 64)_l2=0.8429, (64, 128)_l2=2.5842
+[9] time=3.60, avg_loss=0.2288, train_err=0.9151
+Eval: (32, 64)_l2=0.8236, (64, 128)_l2=2.5503
+[12] time=3.59, avg_loss=0.1939, train_err=0.7754
+Eval: (32, 64)_l2=0.7405, (64, 128)_l2=2.4524
+[15] time=3.59, avg_loss=0.1673, train_err=0.6693
+Eval: (32, 64)_l2=0.7308, (64, 128)_l2=2.4187
+[18] time=3.59, avg_loss=0.1475, train_err=0.5900
+Eval: (32, 64)_l2=0.6825, (64, 128)_l2=2.3738
+
+{'train_err': 0.5666905903816223, 'avg_loss': 0.14167264759540557, 'avg_lasso_loss': None, 'epoch_train_time': 3.6301156069999934}
 


Plot the prediction, and compare with the ground-truth
@@ -385,7 +385,7 @@
 fig.show()
-Inputs, ground-truth output and prediction., Input x (32, 64), Ground-truth y, Model prediction, Input x (64, 128), Ground-truth y, Model prediction
Total running time of the script: (1 minutes 27.934 seconds)
+Inputs, ground-truth output and prediction., Input x (32, 64), Ground-truth y, Model prediction, Input x (64, 128), Ground-truth y, Model prediction
Total running time of the script: (1 minutes 28.624 seconds)

Create the trainer
@@ -454,22 +454,22 @@
Training on 1000 samples
 Testing on [50, 50] samples         on resolutions [16, 32].
 Raw outputs of shape torch.Size([32, 1, 16, 16])
-[0] time=9.95, avg_loss=0.6844, train_err=21.3879
-Eval: 16_h1=0.6813, 16_l2=0.4521, 32_h1=0.9371, 32_l2=0.6767
-[3] time=9.89, avg_loss=0.2622, train_err=8.1950
-Eval: 16_h1=0.2517, 16_l2=0.1695, 32_h1=0.8607, 32_l2=0.6035
-[6] time=9.92, avg_loss=0.2178, train_err=6.8077
-Eval: 16_h1=0.2469, 16_l2=0.1539, 32_h1=0.8402, 32_l2=0.5624
-[9] time=9.88, avg_loss=0.2180, train_err=6.8126
-Eval: 16_h1=0.2543, 16_l2=0.1619, 32_h1=0.7828, 32_l2=0.5254
-[12] time=9.94, avg_loss=0.1915, train_err=5.9847
-Eval: 16_h1=0.2700, 16_l2=0.1719, 32_h1=0.7757, 32_l2=0.4989
-[15] time=9.87, avg_loss=0.1635, train_err=5.1107
-Eval: 16_h1=0.2434, 16_l2=0.1517, 32_h1=0.8047, 32_l2=0.5077
-[18] time=9.90, avg_loss=0.1674, train_err=5.2311
-Eval: 16_h1=0.2448, 16_l2=0.1587, 32_h1=0.7799, 32_l2=0.5013
-
-{'train_err': 4.346285965293646, 'avg_loss': 0.13908115088939665, 'avg_lasso_loss': None, 'epoch_train_time': 9.86875730099996}
+[0] time=10.04, avg_loss=0.6503, train_err=20.3222
+Eval: 16_h1=0.3710, 16_l2=0.2491, 32_h1=0.8856, 32_l2=0.5331
+[3] time=9.99, avg_loss=0.2580, train_err=8.0616
+Eval: 16_h1=0.3135, 16_l2=0.2162, 32_h1=0.8040, 32_l2=0.5550
+[6] time=10.07, avg_loss=0.2012, train_err=6.2867
+Eval: 16_h1=0.2600, 16_l2=0.1664, 32_h1=0.7934, 32_l2=0.5296
+[9] time=9.99, avg_loss=0.2072, train_err=6.4741
+Eval: 16_h1=0.2641, 16_l2=0.1678, 32_h1=0.7593, 32_l2=0.4923
+[12] time=10.01, avg_loss=0.2042, train_err=6.3807
+Eval: 16_h1=0.2544, 16_l2=0.1577, 32_h1=0.7915, 32_l2=0.5030
+[15] time=9.96, avg_loss=0.1907, train_err=5.9588
+Eval: 16_h1=0.2567, 16_l2=0.1713, 32_h1=0.7695, 32_l2=0.4561
+[18] time=10.02, avg_loss=0.1470, train_err=4.5951
+Eval: 16_h1=0.2394, 16_l2=0.1488, 32_h1=0.7585, 32_l2=0.4966
+
+{'train_err': 4.85387809574604, 'avg_loss': 0.1553240990638733, 'avg_lasso_loss': None, 'epoch_train_time': 9.995433986999956}
 


Plot the prediction, and compare with the ground-truth
@@ -519,7 +519,7 @@
 fig.show()
-Inputs, ground-truth output and prediction., Input x, Ground-truth y, Model prediction
Total running time of the script: (3 minutes 21.485 seconds)
+Inputs, ground-truth output and prediction., Input x, Ground-truth y, Model prediction
Total running time of the script: (3 minutes 23.451 seconds)

-Inputs, ground-truth output and prediction., Input x, Ground-truth y, Model prediction
Total running time of the script: (0 minutes 7.050 seconds)
+Inputs, ground-truth output and prediction., Input x, Ground-truth y, Model prediction
Total running time of the script: (0 minutes 6.935 seconds)