
gradient-based-hm

Tutorials for gradient-based history matching with high-fidelity models and with their reduced representations (using PCA and autoencoders).

Prerequisites

The dataset used in this demo repository is the digit-MNIST images, X, with the forward model being a linear operator G and the resulting simulated responses denoted as Y. The physical system is represented as Y=G(X), or equivalently D=G(M). More description of the dataset is available here (in the readme). There you will find demos for dimensionality reduction of the digit-MNIST images using autoencoders and PCA.
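As a rough sketch of this setup (the operator G below is a random stand-in, not the repository's actual forward model, and all names and shapes are illustrative assumptions):

```python
import numpy as np
from tensorflow.keras.datasets import mnist

# Load digit-MNIST and flatten each 28x28 image into a model vector m.
(X_train, _), _ = mnist.load_data()
M = X_train.reshape(len(X_train), -1) / 255.0    # models, shape (60000, 784)

# Stand-in linear forward operator G mapping a model m to responses d.
rng = np.random.default_rng(0)
G = rng.standard_normal((100, 784))              # (n_obs, n_params)
D = M @ G.T                                      # simulated responses Y = G(X)

# A synthetic "field observation" from one held-out model.
m_true = M[0]
d_obs = G @ m_true
```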

[Figure: the forward model G mapping models M to data D]

We are interested in learning the inverse mapping M=G'(D), which is not trivial when M is non-Gaussian (as is the case with the digit-MNIST images) or G is nonlinear (in this demo we assume a linear operator). Such a complex mapping (also known as history matching) may result in solutions that are non-unique, with features that may not be consistent. In gradient-based history matching, the objective function we want to minimize is the following, where d_obs is the field observation, m is our variable of interest, and we assume that the forward operator G sufficiently represents the linear or nonlinear (e.g. multi-phase fluid flow, heat diffusion) physical system.

$$J(\mathbf{m}) = \frac{1}{2} \left\| \mathbf{d}_{\mathrm{obs}} - G(\mathbf{m}) \right\|_2^2$$

Linear Least-Squares Solution

The simple closed-form solution:

$$\hat{\mathbf{m}} = \left( \mathbf{G}^{T}\mathbf{G} \right)^{-1} \mathbf{G}^{T} \mathbf{d}_{\mathrm{obs}}$$
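A minimal NumPy sketch of this solution, reusing the stand-in G and d_obs from the setup above (the pseudo-inverse handles the rank-deficient case where G^T G is not invertible):

```python
# Moore-Penrose pseudo-inverse; equals (G^T G)^{-1} G^T d_obs when G has
# full column rank, and gives the minimum-norm solution otherwise.
m_hat = np.linalg.pinv(G) @ d_obs
m_hat_img = m_hat.reshape(28, 28)   # back to image form for inspection
```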

As per this notebook, the inversion solution can reproduce d_obs, but it shows no realism with respect to the set of training models.

[Figure: linear least-squares inversion results]

Gradient-based History Matching

The gradient of the loss function:

$$\nabla_{\mathbf{m}} J = -\mathbf{G}^{T} \left( \mathbf{d}_{\mathrm{obs}} - \mathbf{G}\mathbf{m} \right)$$

The update equation:

$$\mathbf{m}^{k+1} = \mathbf{m}^{k} - \alpha \, \nabla_{\mathbf{m}} J$$

Run the optimization process as per this notebook.
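A minimal sketch of the descent loop, reusing G and d_obs from the setup sketch above (the step size and iteration count are assumptions to tune per problem):

```python
alpha = 1e-3                          # step size (assumption)
m = np.zeros(G.shape[1])              # start from a zero model
for _ in range(2000):
    grad = -G.T @ (d_obs - G @ m)     # gradient of the loss
    m -= alpha * grad                 # gradient-descent update
```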

[Figure: gradient-based minimization in the full image space]

The inversion solution is also unsatisfactory, although it can still reproduce d_obs.

[Figure: comparison of the full-space inversion solution and the reproduced d_obs]

Gradient-based History Matching (with PCA)

The poor inversion solutions we have seen above are caused by the non-Gaussian features in the digit-MNIST dataset. In this notebook, we represent the images as PCA coefficients; refer to this tutorial on how to do that. Each image m is then expressed in terms of the PCA mean and basis:

$$\mathbf{m} = \bar{\mathbf{m}} + \boldsymbol{\Phi} \, \mathbf{z}_m$$
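One way to build this representation, sketched with scikit-learn (the number of components is an assumption; M is the flattened image matrix from the setup sketch):

```python
from sklearn.decomposition import PCA

pca = PCA(n_components=64)       # n_components is an assumption
Z = pca.fit_transform(M)         # PCA coefficients z_m, shape (60000, 64)
Phi = pca.components_.T          # basis, shape (784, 64)
m_bar = pca.mean_                # mean image, shape (784,)
# reconstruction: m ≈ m_bar + Phi @ z_m
```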

Then, with the chain rule, the update equation simply becomes:

$$\mathbf{z}_m^{k+1} = \mathbf{z}_m^{k} - \alpha \, \boldsymbol{\Phi}^{T} \nabla_{\mathbf{m}} J$$
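A sketch of this latent-space descent, reusing G, d_obs, alpha, Phi, and m_bar from the sketches above:

```python
z = np.zeros(Phi.shape[1])              # start from zero coefficients
for _ in range(2000):
    m = m_bar + Phi @ z                 # decode coefficients to an image
    grad_m = -G.T @ (d_obs - G @ m)     # full-space gradient of the loss
    z -= alpha * (Phi.T @ grad_m)       # chain rule through m = m_bar + Phi z
m_inv = m_bar + Phi @ z                 # final inverted model
```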

The minimization process:

grad_pca_dim

Here we see that the realism of the inversion solution is better preserved, while it can still reproduce d_obs.

[Figure: comparison of the PCA-space inversion solution and the reproduced d_obs]

Gradient-based History Matching (with AE)

We can also use autoencoders for dimensionality reduction, as we did here. The decoder g_θ maps a latent vector z_m back to an image:

$$\mathbf{m} = g_\theta(\mathbf{z}_m)$$

Similar to PCA, the update equation then becomes:

$$\mathbf{z}_m^{k+1} = \mathbf{z}_m^{k} - \alpha \left( \frac{\partial \mathbf{m}}{\partial \mathbf{z}_m} \right)^{T} \nabla_{\mathbf{m}} J$$

where the Jacobian ∂m/∂z_m is obtained from the decoder. See pending issues.
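A sketch of the same loop with automatic differentiation, assuming a trained Keras decoder is available (the decoder, its latent size, and the step size are all assumptions); backpropagating through the decoder applies the Jacobian-transpose implicitly:

```python
import tensorflow as tf

# decoder: trained Keras model (assumed), mapping z (1, 16) -> image (1, 784).
z = tf.Variable(tf.zeros((1, 16)))                 # latent_dim = 16 is an assumption
G_tf = tf.constant(G, dtype=tf.float32)
d_tf = tf.constant(d_obs.reshape(-1, 1), dtype=tf.float32)

for _ in range(2000):
    with tf.GradientTape() as tape:
        m = tf.reshape(decoder(z), (-1, 1))        # m = decoder(z_m)
        loss = 0.5 * tf.reduce_sum((d_tf - G_tf @ m) ** 2)
    # tape.gradient backpropagates through the decoder, i.e. it applies
    # (dm/dz_m)^T to the full-space gradient of the loss.
    z.assign_sub(1e-3 * tape.gradient(loss, z))    # step size is an assumption
```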

[Figure: comparison of the autoencoder-space inversion solution and the reproduced d_obs]
