Commit 7591ba4

Github action: auto-update.

github-actions[bot] committed Dec 20, 2024
1 parent 4617499 commit 7591ba4

Showing 34 changed files with 169 additions and 169 deletions.
11 binary files not shown.
Binary file modified dev/_images/sphx_glr_plot_SFNO_swe_001.png
Binary file modified dev/_images/sphx_glr_plot_SFNO_swe_thumb.png
Binary file modified dev/_images/sphx_glr_plot_UNO_darcy_001.png
Binary file modified dev/_images/sphx_glr_plot_UNO_darcy_thumb.png
4 changes: 2 additions & 2 deletions dev/_sources/auto_examples/plot_DISCO_convolutions.rst.txt
@@ -357,7 +357,7 @@ in order to compute the convolved image, we need to first bring it into the righ
.. code-block:: none
<matplotlib.colorbar.Colorbar object at 0x7f32ac13c550>
<matplotlib.colorbar.Colorbar object at 0x7fe4cde24550>
@@ -448,7 +448,7 @@ in order to compute the convolved image, we need to first bring it into the righ
.. rst-class:: sphx-glr-timing

**Total running time of the script:** (0 minutes 31.554 seconds)
**Total running time of the script:** (0 minutes 31.319 seconds)


.. _sphx_glr_download_auto_examples_plot_DISCO_convolutions.py:
22 changes: 11 additions & 11 deletions dev/_sources/auto_examples/plot_FNO_darcy.rst.txt
@@ -248,13 +248,13 @@ Training the model
)
### SCHEDULER ###
<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f32a9cc6ad0>
<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7fe4e07b2ad0>
### LOSSES ###
* Train: <neuralop.losses.data_losses.H1Loss object at 0x7f32a9cc6850>
* Train: <neuralop.losses.data_losses.H1Loss object at 0x7fe4e07b2850>
* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7f32a9cc6850>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f32a9cc6490>}
* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7fe4e07b2850>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7fe4e07b2210>}
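The scheduler printed above is PyTorch's `CosineAnnealingLR`, whose learning rate follows the closed form eta_t = eta_min + (eta_max - eta_min) * (1 + cos(pi * t / T_max)) / 2. A dependency-free sketch of that curve (the `T_max` and learning-rate values below are illustrative assumptions, not the values used in this example script):

```python
import math

def cosine_annealing_lr(step, t_max, eta_max, eta_min=0.0):
    """Closed-form CosineAnnealingLR value at `step`, for 0 <= step <= t_max."""
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * step / t_max))

# Illustrative schedule: decays from eta_max at step 0 to eta_min at step t_max.
schedule = [cosine_annealing_lr(s, t_max=30, eta_max=8e-3) for s in range(31)]
```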
@@ -313,20 +313,20 @@ Then train the model on our small Darcy-Flow dataset:
Raw outputs of shape torch.Size([32, 1, 16, 16])
[0] time=2.57, avg_loss=0.6956, train_err=21.7383
Eval: 16_h1=0.4298, 16_l2=0.3487, 32_h1=0.5847, 32_l2=0.3542
[3] time=2.56, avg_loss=0.2103, train_err=6.5705
[3] time=2.55, avg_loss=0.2103, train_err=6.5705
Eval: 16_h1=0.2030, 16_l2=0.1384, 32_h1=0.5075, 32_l2=0.1774
[6] time=2.59, avg_loss=0.1911, train_err=5.9721
[6] time=2.56, avg_loss=0.1911, train_err=5.9721
Eval: 16_h1=0.2099, 16_l2=0.1374, 32_h1=0.4907, 32_l2=0.1783
[9] time=2.57, avg_loss=0.1410, train_err=4.4073
[9] time=2.56, avg_loss=0.1410, train_err=4.4073
Eval: 16_h1=0.2052, 16_l2=0.1201, 32_h1=0.5268, 32_l2=0.1615
[12] time=2.55, avg_loss=0.1422, train_err=4.4434
[12] time=2.56, avg_loss=0.1422, train_err=4.4434
Eval: 16_h1=0.2131, 16_l2=0.1285, 32_h1=0.5413, 32_l2=0.1741
[15] time=2.57, avg_loss=0.1198, train_err=3.7424
[15] time=2.62, avg_loss=0.1198, train_err=3.7424
Eval: 16_h1=0.1984, 16_l2=0.1137, 32_h1=0.5255, 32_l2=0.1569
[18] time=2.55, avg_loss=0.1104, train_err=3.4502
[18] time=2.57, avg_loss=0.1104, train_err=3.4502
Eval: 16_h1=0.2039, 16_l2=0.1195, 32_h1=0.5062, 32_l2=0.1603
{'train_err': 2.9605126455426216, 'avg_loss': 0.0947364046573639, 'avg_lasso_loss': None, 'epoch_train_time': 2.5465377909999916}
{'train_err': 2.9605126455426216, 'avg_loss': 0.0947364046573639, 'avg_lasso_loss': None, 'epoch_train_time': 2.5713351109999394}
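The epoch lines in this log follow a fixed `[epoch] time=…, avg_loss=…, train_err=…` format; a minimal sketch of pulling them into records (the regex is an assumption inferred from the lines shown, not part of the library):

```python
import re

# Matches lines like: [0] time=2.57, avg_loss=0.6956, train_err=21.7383
LOG_LINE = re.compile(
    r"\[(?P<epoch>\d+)\] time=(?P<time>[\d.]+), "
    r"avg_loss=(?P<avg_loss>[\d.]+), train_err=(?P<train_err>[\d.]+)"
)

def parse_epoch_line(line):
    """Parse one training-log epoch line into a dict, or return None."""
    m = LOG_LINE.match(line.strip())
    if m is None:
        return None
    d = m.groupdict()
    return {"epoch": int(d["epoch"]), "time": float(d["time"]),
            "avg_loss": float(d["avg_loss"]), "train_err": float(d["train_err"])}

rec = parse_epoch_line("[0] time=2.57, avg_loss=0.6956, train_err=21.7383")
```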
@@ -476,7 +476,7 @@ are other ways to scale the outputs of the FNO to train a true super-resolution

.. rst-class:: sphx-glr-timing

**Total running time of the script:** (0 minutes 52.494 seconds)
**Total running time of the script:** (0 minutes 52.622 seconds)


.. _sphx_glr_download_auto_examples_plot_FNO_darcy.py:
38 changes: 19 additions & 19 deletions dev/_sources/auto_examples/plot_SFNO_swe.rst.txt
@@ -234,13 +234,13 @@ Creating the losses
)
### SCHEDULER ###
<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f32ba653230>
<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7fe4e0743230>
### LOSSES ###
* Train: <neuralop.losses.data_losses.LpLoss object at 0x7f32ba652cf0>
* Train: <neuralop.losses.data_losses.LpLoss object at 0x7fe4e0742cf0>
* Test: {'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f32ba652cf0>}
* Test: {'l2': <neuralop.losses.data_losses.LpLoss object at 0x7fe4e0742cf0>}
@@ -297,22 +297,22 @@ Train the model on the spherical SWE dataset
Training on 200 samples
Testing on [50, 50] samples on resolutions [(32, 64), (64, 128)].
Raw outputs of shape torch.Size([4, 3, 32, 64])
[0] time=3.65, avg_loss=2.5842, train_err=10.3366
Eval: (32, 64)_l2=1.8738, (64, 128)_l2=2.4284
[3] time=3.62, avg_loss=0.4147, train_err=1.6589
Eval: (32, 64)_l2=0.5781, (64, 128)_l2=2.3744
[6] time=3.63, avg_loss=0.2662, train_err=1.0649
Eval: (32, 64)_l2=0.5419, (64, 128)_l2=2.3195
[9] time=3.58, avg_loss=0.2207, train_err=0.8829
Eval: (32, 64)_l2=0.4887, (64, 128)_l2=2.3423
[12] time=3.58, avg_loss=0.1894, train_err=0.7575
Eval: (32, 64)_l2=0.4850, (64, 128)_l2=2.3244
[15] time=3.57, avg_loss=0.1631, train_err=0.6523
Eval: (32, 64)_l2=0.4811, (64, 128)_l2=2.3205
[18] time=3.58, avg_loss=0.1442, train_err=0.5769
Eval: (32, 64)_l2=0.4802, (64, 128)_l2=2.3157
[0] time=3.66, avg_loss=2.6152, train_err=10.4609
Eval: (32, 64)_l2=1.8807, (64, 128)_l2=2.5119
[3] time=3.64, avg_loss=0.4558, train_err=1.8232
Eval: (32, 64)_l2=0.9008, (64, 128)_l2=2.6564
[6] time=3.58, avg_loss=0.2771, train_err=1.1085
Eval: (32, 64)_l2=0.8429, (64, 128)_l2=2.5842
[9] time=3.60, avg_loss=0.2288, train_err=0.9151
Eval: (32, 64)_l2=0.8236, (64, 128)_l2=2.5503
[12] time=3.59, avg_loss=0.1939, train_err=0.7754
Eval: (32, 64)_l2=0.7405, (64, 128)_l2=2.4524
[15] time=3.59, avg_loss=0.1673, train_err=0.6693
Eval: (32, 64)_l2=0.7308, (64, 128)_l2=2.4187
[18] time=3.59, avg_loss=0.1475, train_err=0.5900
Eval: (32, 64)_l2=0.6825, (64, 128)_l2=2.3738
{'train_err': 0.5465761256217957, 'avg_loss': 0.13664403140544892, 'avg_lasso_loss': None, 'epoch_train_time': 3.5847762680000415}
{'train_err': 0.5666905903816223, 'avg_loss': 0.14167264759540557, 'avg_lasso_loss': None, 'epoch_train_time': 3.6301156069999934}
@@ -383,7 +383,7 @@ In practice we would train a Neural Operator on one or multiple GPUs

.. rst-class:: sphx-glr-timing

**Total running time of the script:** (1 minutes 27.934 seconds)
**Total running time of the script:** (1 minutes 28.624 seconds)


.. _sphx_glr_download_auto_examples_plot_SFNO_swe.py:
38 changes: 19 additions & 19 deletions dev/_sources/auto_examples/plot_UNO_darcy.rst.txt
@@ -345,13 +345,13 @@ Creating the losses
)
### SCHEDULER ###
<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f32b8316490>
<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7fe4d9f22490>
### LOSSES ###
* Train: <neuralop.losses.data_losses.H1Loss object at 0x7f32bb7fdfd0>
* Train: <neuralop.losses.data_losses.H1Loss object at 0x7fe4f729b620>
* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7f32bb7fdfd0>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f32b8316210>}
* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7fe4f729b620>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7fe4d9f22210>}
@@ -410,22 +410,22 @@ Actually train the model on our small Darcy-Flow dataset
Training on 1000 samples
Testing on [50, 50] samples on resolutions [16, 32].
Raw outputs of shape torch.Size([32, 1, 16, 16])
[0] time=9.95, avg_loss=0.6844, train_err=21.3879
Eval: 16_h1=0.6813, 16_l2=0.4521, 32_h1=0.9371, 32_l2=0.6767
[3] time=9.89, avg_loss=0.2622, train_err=8.1950
Eval: 16_h1=0.2517, 16_l2=0.1695, 32_h1=0.8607, 32_l2=0.6035
[6] time=9.92, avg_loss=0.2178, train_err=6.8077
Eval: 16_h1=0.2469, 16_l2=0.1539, 32_h1=0.8402, 32_l2=0.5624
[9] time=9.88, avg_loss=0.2180, train_err=6.8126
Eval: 16_h1=0.2543, 16_l2=0.1619, 32_h1=0.7828, 32_l2=0.5254
[12] time=9.94, avg_loss=0.1915, train_err=5.9847
Eval: 16_h1=0.2700, 16_l2=0.1719, 32_h1=0.7757, 32_l2=0.4989
[15] time=9.87, avg_loss=0.1635, train_err=5.1107
Eval: 16_h1=0.2434, 16_l2=0.1517, 32_h1=0.8047, 32_l2=0.5077
[18] time=9.90, avg_loss=0.1674, train_err=5.2311
Eval: 16_h1=0.2448, 16_l2=0.1587, 32_h1=0.7799, 32_l2=0.5013
[0] time=10.04, avg_loss=0.6503, train_err=20.3222
Eval: 16_h1=0.3710, 16_l2=0.2491, 32_h1=0.8856, 32_l2=0.5331
[3] time=9.99, avg_loss=0.2580, train_err=8.0616
Eval: 16_h1=0.3135, 16_l2=0.2162, 32_h1=0.8040, 32_l2=0.5550
[6] time=10.07, avg_loss=0.2012, train_err=6.2867
Eval: 16_h1=0.2600, 16_l2=0.1664, 32_h1=0.7934, 32_l2=0.5296
[9] time=9.99, avg_loss=0.2072, train_err=6.4741
Eval: 16_h1=0.2641, 16_l2=0.1678, 32_h1=0.7593, 32_l2=0.4923
[12] time=10.01, avg_loss=0.2042, train_err=6.3807
Eval: 16_h1=0.2544, 16_l2=0.1577, 32_h1=0.7915, 32_l2=0.5030
[15] time=9.96, avg_loss=0.1907, train_err=5.9588
Eval: 16_h1=0.2567, 16_l2=0.1713, 32_h1=0.7695, 32_l2=0.4561
[18] time=10.02, avg_loss=0.1470, train_err=4.5951
Eval: 16_h1=0.2394, 16_l2=0.1488, 32_h1=0.7585, 32_l2=0.4966
{'train_err': 4.346285965293646, 'avg_loss': 0.13908115088939665, 'avg_lasso_loss': None, 'epoch_train_time': 9.86875730099996}
{'train_err': 4.85387809574604, 'avg_loss': 0.1553240990638733, 'avg_lasso_loss': None, 'epoch_train_time': 9.995433986999956}
Expand Down Expand Up @@ -499,7 +499,7 @@ In practice we would train a Neural Operator on one or multiple GPUs

.. rst-class:: sphx-glr-timing

**Total running time of the script:** (3 minutes 21.485 seconds)
**Total running time of the script:** (3 minutes 23.451 seconds)


.. _sphx_glr_download_auto_examples_plot_UNO_darcy.py:
4 changes: 2 additions & 2 deletions dev/_sources/auto_examples/plot_count_flops.rst.txt
@@ -80,7 +80,7 @@ This output is organized as a defaultdict object that counts the FLOPS used in e

.. code-block:: none
defaultdict(<function FlopTensorDispatchMode.__init__.<locals>.<lambda> at 0x7f32ba6a37e0>, {'': defaultdict(<class 'int'>, {'convolution.default': 2982150144, 'bmm.default': 138412032}), 'lifting': defaultdict(<class 'int'>, {'convolution.default': 562036736}), 'lifting.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 25165824}), 'lifting.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 536870912}), 'fno_blocks': defaultdict(<class 'int'>, {'convolution.default': 2147483648, 'bmm.default': 138412032}), 'fno_blocks.fno_skips.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.0.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.0': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.0.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.0.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.1': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.1.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.1': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.1': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.1.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.1.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.2': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.2.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.2': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.2': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 
'fno_blocks.channel_mlp.2.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.2.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.3': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.3.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.3': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.3': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.3.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.3.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'projection': defaultdict(<class 'int'>, {'convolution.default': 272629760}), 'projection.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'projection.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 4194304})})
defaultdict(<function FlopTensorDispatchMode.__init__.<locals>.<lambda> at 0x7fe4e078f7e0>, {'': defaultdict(<class 'int'>, {'convolution.default': 2982150144, 'bmm.default': 138412032}), 'lifting': defaultdict(<class 'int'>, {'convolution.default': 562036736}), 'lifting.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 25165824}), 'lifting.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 536870912}), 'fno_blocks': defaultdict(<class 'int'>, {'convolution.default': 2147483648, 'bmm.default': 138412032}), 'fno_blocks.fno_skips.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.0.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.0': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.0.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.0.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.1': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.1.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.1': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.1': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.1.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.1.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.2': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.2.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.2': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.2': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 
'fno_blocks.channel_mlp.2.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.2.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.fno_skips.3': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.3.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs.3': defaultdict(<class 'int'>, {'bmm.default': 34603008}), 'fno_blocks.channel_mlp.3': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.channel_mlp.3.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'fno_blocks.channel_mlp.3.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 134217728}), 'projection': defaultdict(<class 'int'>, {'convolution.default': 272629760}), 'projection.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'projection.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 4194304})})
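The dump above is a nested mapping from module path to per-op FLOP counts, with the root `''` entry holding the model-wide totals. A minimal sketch of working with such a structure, using a small stand-in dict whose numbers are copied from the top-level entries shown above (the full per-block entries are omitted):

```python
# Stand-in for the per-module FLOP counts shown above: module path -> {op: flops}.
flops = {
    "": {"convolution.default": 2982150144, "bmm.default": 138412032},
    "lifting": {"convolution.default": 562036736},
    "fno_blocks": {"convolution.default": 2147483648, "bmm.default": 138412032},
    "projection": {"convolution.default": 272629760},
}

def total_flops(per_module, module=""):
    """Sum all op counts recorded for one module path (root '' = whole model)."""
    return sum(per_module.get(module, {}).values())

root_total = total_flops(flops)  # convolution + bmm FLOPs at the root entry
```

Note that the top-level convolution count equals the sum over `lifting`, `fno_blocks`, and `projection`, which is a quick sanity check on outputs like this.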
@@ -125,7 +125,7 @@ To check the maximum FLOPS used during the forward pass, let's create a recursiv
.. rst-class:: sphx-glr-timing

**Total running time of the script:** (0 minutes 3.214 seconds)
**Total running time of the script:** (0 minutes 3.202 seconds)


.. _sphx_glr_download_auto_examples_plot_count_flops.py:
@@ -219,7 +219,7 @@ Loading the Navier-Stokes dataset in 128x128 resolution
.. rst-class:: sphx-glr-timing

**Total running time of the script:** (0 minutes 0.202 seconds)
**Total running time of the script:** (0 minutes 0.198 seconds)


.. _sphx_glr_download_auto_examples_plot_darcy_flow_spectrum.py:
26 changes: 13 additions & 13 deletions dev/_sources/auto_examples/plot_incremental_FNO_darcy.rst.txt
@@ -240,15 +240,15 @@ Set up the losses
)
### SCHEDULER ###
<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7f32c0126520>
<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7fe4e1aa2520>
### LOSSES ###
### INCREMENTAL RESOLUTION + GRADIENT EXPLAINED ###
* Train: <neuralop.losses.data_losses.H1Loss object at 0x7f32ba62e710>
* Train: <neuralop.losses.data_losses.H1Loss object at 0x7fe4e071a710>
* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7f32ba62e710>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7f32c01269e0>}
* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7fe4e071a710>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7fe4e1aa29e0>}
@@ -331,15 +331,15 @@ Train the model
Raw outputs of shape torch.Size([16, 1, 8, 8])
[0] time=0.24, avg_loss=0.9239, train_err=13.1989
Eval: 16_h1=0.8702, 16_l2=0.5536, 32_h1=0.9397, 32_l2=0.5586
[1] time=0.23, avg_loss=0.7589, train_err=10.8413
[1] time=0.22, avg_loss=0.7589, train_err=10.8413
Eval: 16_h1=0.7880, 16_l2=0.5167, 32_h1=0.9011, 32_l2=0.5234
[2] time=0.22, avg_loss=0.6684, train_err=9.5484
Eval: 16_h1=0.7698, 16_l2=0.4235, 32_h1=0.9175, 32_l2=0.4405
[3] time=0.23, avg_loss=0.6082, train_err=8.6886
[3] time=0.22, avg_loss=0.6082, train_err=8.6886
Eval: 16_h1=0.7600, 16_l2=0.4570, 32_h1=1.0126, 32_l2=0.5005
[4] time=0.23, avg_loss=0.5604, train_err=8.0054
[4] time=0.22, avg_loss=0.5604, train_err=8.0054
Eval: 16_h1=0.8301, 16_l2=0.4577, 32_h1=1.1722, 32_l2=0.4987
[5] time=0.23, avg_loss=0.5366, train_err=7.6663
[5] time=0.22, avg_loss=0.5366, train_err=7.6663
Eval: 16_h1=0.8099, 16_l2=0.4076, 32_h1=1.1414, 32_l2=0.4436
[6] time=0.23, avg_loss=0.5050, train_err=7.2139
Eval: 16_h1=0.7204, 16_l2=0.3888, 32_h1=0.9945, 32_l2=0.4285
@@ -354,26 +354,26 @@ Train the model
Incre Res Update: change res to 16
[10] time=0.29, avg_loss=0.5158, train_err=7.3681
Eval: 16_h1=0.5133, 16_l2=0.3167, 32_h1=0.6120, 32_l2=0.3022
[11] time=0.28, avg_loss=0.4536, train_err=6.4795
[11] time=0.27, avg_loss=0.4536, train_err=6.4795
Eval: 16_h1=0.4680, 16_l2=0.3422, 32_h1=0.6436, 32_l2=0.3659
[12] time=0.28, avg_loss=0.4155, train_err=5.9358
Eval: 16_h1=0.4119, 16_l2=0.2692, 32_h1=0.5285, 32_l2=0.2692
[13] time=0.28, avg_loss=0.3724, train_err=5.3195
Eval: 16_h1=0.4045, 16_l2=0.2569, 32_h1=0.5620, 32_l2=0.2747
[14] time=0.28, avg_loss=0.3555, train_err=5.0783
Eval: 16_h1=0.3946, 16_l2=0.2485, 32_h1=0.5278, 32_l2=0.2523
[15] time=0.29, avg_loss=0.3430, train_err=4.8998
[15] time=0.28, avg_loss=0.3430, train_err=4.8998
Eval: 16_h1=0.3611, 16_l2=0.2323, 32_h1=0.5079, 32_l2=0.2455
[16] time=0.28, avg_loss=0.3251, train_err=4.6438
Eval: 16_h1=0.3433, 16_l2=0.2224, 32_h1=0.4757, 32_l2=0.2351
[17] time=0.29, avg_loss=0.3072, train_err=4.3888
[17] time=0.28, avg_loss=0.3072, train_err=4.3888
Eval: 16_h1=0.3458, 16_l2=0.2226, 32_h1=0.4776, 32_l2=0.2371
[18] time=0.28, avg_loss=0.2982, train_err=4.2593
Eval: 16_h1=0.3251, 16_l2=0.2116, 32_h1=0.4519, 32_l2=0.2245
[19] time=0.29, avg_loss=0.2802, train_err=4.0024
[19] time=0.28, avg_loss=0.2802, train_err=4.0024
Eval: 16_h1=0.3201, 16_l2=0.2110, 32_h1=0.4533, 32_l2=0.2245
{'train_err': 4.002395851271493, 'avg_loss': 0.2801677095890045, 'avg_lasso_loss': None, 'epoch_train_time': 0.2853402149999056, '16_h1': tensor(0.3201), '16_l2': tensor(0.2110), '32_h1': tensor(0.4533), '32_l2': tensor(0.2245)}
{'train_err': 4.002395851271493, 'avg_loss': 0.2801677095890045, 'avg_lasso_loss': None, 'epoch_train_time': 0.281409022000048, '16_h1': tensor(0.3201), '16_l2': tensor(0.2110), '32_h1': tensor(0.4533), '32_l2': tensor(0.2245)}
@@ -447,7 +447,7 @@ In practice we would train a Neural Operator on one or multiple GPUs

.. rst-class:: sphx-glr-timing

**Total running time of the script:** (0 minutes 7.050 seconds)
**Total running time of the script:** (0 minutes 6.935 seconds)


.. _sphx_glr_download_auto_examples_plot_incremental_FNO_darcy.py: