- `Lattice` supports specifying arbitrary edge content for each unit cell via the kwarg `custom_edges`. A generator for hexagonal lattices with coloured edges is implemented as `nk.graph.KitaevHoneycomb`. `nk.graph.Grid` again supports colouring edges by direction. #1074
- Fermionic Hilbert space (`nkx.hilbert.SpinOrbitalFermions`) and fermionic operators (`nkx.operator.fermion`) to treat systems with a finite number of orbitals have been added to the experimental submodule. The operators are also integrated with OpenFermion. Those functionalities are still in development and we would welcome feedback. #1090
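A minimal sketch of the new experimental fermion API, assuming the module layout of this release (the helpers `create`, `destroy` and `number` and the constructor arguments shown here may still change while the feature is experimental):

```python
import netket.experimental as nkx
from netket.experimental.operator.fermion import create, destroy, number

# hypothetical system: 4 spinless orbitals filled with 2 fermions
hi = nkx.hilbert.SpinOrbitalFermions(4, n_fermions=2)

# a simple hopping term plus a density term on orbital 2
op = create(hi, 0) * destroy(hi, 1) + number(hi, 2)
```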
- The gradient for models with real parameters is now multiplied by 2. If your model has real parameters you might need to halve the learning rate. Conceptually this is a bug fix, as the value returned before was wrong (see the Bug Fixes section below for additional details). #1069
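For example, assuming a real-parameter model previously trained with SGD (the numeric value is illustrative), halving the learning rate restores the old effective step size:

```python
import netket as nk

# before this release: nk.optimizer.Sgd(learning_rate=0.1)
optimizer = nk.optimizer.Sgd(learning_rate=0.05)  # gradients are now 2x larger
```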
- The gradient obtained with `VarState.expect_and_grad` for models with real parameters was off by a factor of $1/2$ from the correct value. This has now been corrected. As a consequence, the correct gradient for real-parameter models is equal to the old one times 2. If your model has real parameters you might need to halve the learning rate. #1069
- Support for coloured edges in `nk.graph.Grid`, removed in #724, is now restored. #1074
- Fixed bug that prevented calling `.quantum_geometric_tensor` on `netket.vqs.ExactState`. #1108
- Fixed bug where the gradient of `C->C` models (complex parameters, complex output) was computed incorrectly with `nk.vqs.ExactState`. #1110
- Fixed bug where `QGTJacobianDense.state` and `QGTJacobianPyTree.state` would not correctly transform the starting point `x0` if `holomorphic=False`. #1115
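A sketch of the code path touched by the last fix, assuming a variational state `vstate` and a gradient pytree `grad` already exist (both hypothetical here):

```python
import jax
import netket as nk

# QGT of a non-holomorphic ansatz, in Jacobian (pytree) mode
qgt = nk.optimizer.qgt.QGTJacobianPyTree(vstate, holomorphic=False)

# iteratively solve S x = grad; the starting point x0 is now transformed correctly
x, info = qgt.solve(jax.scipy.sparse.linalg.cg, grad, x0=grad)
```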
- Support for Python 3.10 #952.
- The minimum optax version is now `0.1.1`, which finally correctly supports complex numbers. The internal implementation of Adam which was introduced in 3.3 (#1069) has been removed. If an older version of `optax` is detected, an import error is thrown to avoid providing wrong numerical results. Please update your optax version! #1097
- Allow `LazyOperator@densevector` for operators such as lazy `Adjoint`, `Transpose` and `Squared`. #1068
- The logic to update the progress bar in `nk.experimental.TDVP` has been improved, and it should now display updates even if there are very sparse `save_steps` (a usage sketch of the driver follows these entries). #1084
- The `nk.logging.TensorBoardLog` is now lazily initialized to better work in an MPI environment. #1086
- Converting a `nk.operator.BoseHubbard` to a `nk.operator.LocalOperator` multiplied the nonlinearity `U` by 2. This has now been fixed. #1102
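A rough usage sketch of the experimental TDVP driver mentioned above, assuming a Hamiltonian `ham` and a variational state `vstate` are already defined (integrator name and arguments as in this release's experimental API):

```python
import netket.experimental as nkx

# real-time evolution with a fixed-step Runge-Kutta integrator
integrator = nkx.dynamics.RK4(dt=0.001)
driver = nkx.TDVP(ham, variational_state=vstate, integrator=integrator)
driver.run(T=1.0)
```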
- Initialisation of all implementations of `DenseSymm`, `DenseEquivariant` and `GCNN` now defaults to truncated normals with Lecun variance scaling. For layers without masking, there should be no noticeable change in behaviour. For masked layers, the same variance scaling now works correctly. #1045
- Fix bug that prevented gradients of non-hermitian operators from being computed. The feature is still marked as experimental, but will now run (we do not guarantee that results are correct). #1053
- Common lattice constructors such as `Honeycomb` now accept the same keyword arguments as `Lattice`. #1046
- Multiplying a `QGTOnTheFly` representing the real part of the QGT (showing up when the ansatz has real parameters) with a complex vector now throws an error. Previously the result would be wrong, as the imaginary part was cast away. #885
- The interface to define expectation and gradient functions of arbitrary custom operators is now stable. If you want to define them for a standard operator that can be written as an average of local expectation terms, you can now define a dispatch rule for {ref}`netket.vqs.get_local_kernel_arguments` and {ref}`netket.vqs.get_local_kernel` (see the sketch after this list). The old mechanism is still supported, but we encourage you to use the new mechanism as it is more terse. #954
- `nk.optimizer.Adam` now supports complex parameters, and you can use `nk.optimizer.split_complex` to make optimizers process complex parameters as if they were pairs of real parameters. #1009
- Chunking of `MCState.expect` and `MCState.expect_and_grad` computations is now supported, which allows bounding the memory cost in exchange for a minor increase in computation time. #1006 (and discussions in #918 and #830)
- A new variational state that performs exact summation over the whole Hilbert space has been added. It can be constructed with {ref}`nk.vqs.ExactState` and supports the same Jax neural networks as {ref}`nk.vqs.MCState`. #953
- `DenseSymm` allows multiple input features. #1030
- [Experimental] A new time-evolution driver {ref}`nk.experimental.TDVP` using the time-dependent variational principle (TDVP) has been added. It works with time-independent and time-dependent Hamiltonians and Liouvillians. #1012
- [Experimental] A set of JAX-compatible Runge-Kutta ODE integrators has been added for use together with the new TDVP driver. #1012
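A sketch of the new stable interface for a hypothetical operator `MyOperator` whose expectation value is an average of local terms; the two dispatch rules below follow the documented pattern, while `MyOperator` and `my_local_estimator` are placeholders you would define yourself:

```python
import netket as nk

@nk.vqs.get_local_kernel_arguments.dispatch
def get_local_kernel_arguments(vstate: nk.vqs.MCState, op: MyOperator):
    # samples plus whatever connected-element data the kernel needs
    sigma = vstate.samples
    extra_args = op.get_conn_padded(sigma)
    return sigma, extra_args

@nk.vqs.get_local_kernel.dispatch
def get_local_kernel(vstate: nk.vqs.MCState, op: MyOperator):
    # a jax-compatible function evaluating the local estimator
    return my_local_estimator
```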
- The method `sample_next` in `Sampler` and exact samplers (`ExactSampler` and `ARDirectSampler`) is removed, and it is only defined in `MetropolisSampler`. The module function `nk.sampler.sample_next` also only works with `MetropolisSampler`. For exact samplers, please use the method `sample` instead. #1016
- The default value of `n_chains_per_rank` in `Sampler` and exact samplers is changed to 1, and specifying `n_chains` or `n_chains_per_rank` when constructing them is deprecated. Please change `chain_length` when calling `sample` (see the sketch after this list). For `MetropolisSampler`, the default value is changed from `n_chains = 16` (across all ranks) to `n_chains_per_rank = 16`. #1017
- `GCNN_Parity` allowed biasing both the parity-preserving and the parity-flip equivariant layers. These enter the network output in the same way, so having both is redundant and makes QGTs unstable. The biases of the parity-flip layers are now removed. The previous behaviour can be restored using the deprecated `extra_bias` switch; we only recommend this for loading previously saved parameters. Such parameters can be transformed to work with the new default using `nk.models.update_GCNN_parity`. #1030
- Kernels of `DenseSymm` are now three-dimensional, not two-dimensional. Parameters saved from earlier implementations can be transformed to the new convention using `nk.nn.update_dense_symm`. #1030
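A sketch of the resulting workflow for exact samplers, assuming a model `model` and its variables `variables` are in scope (names hypothetical; `sample` returns the samples together with the updated sampler state):

```python
import netket as nk

hi = nk.hilbert.Spin(s=1/2, N=4)
sampler = nk.sampler.ExactSampler(hi)  # n_chains_per_rank now defaults to 1

# request more samples via chain_length instead of more chains
samples, sampler_state = sampler.sample(model, variables, chain_length=256)
```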
- The method `Sampler.samples` is added to return a generator of samples. The module functions `nk.sampler.sampler_state`, `reset`, `sample`, `samples` and `sample_next` are deprecated in favor of the corresponding class methods. #1025
- Kwarg `in_features` of `DenseEquivariant` is deprecated; the number of input features is inferred from the input. #1030
- Kwarg `out_features` of `DenseEquivariant` is deprecated in favour of `features`. #1030
- The definitions of `MCState` and `MCMixedState` have been moved to an internal module, `nk.vqs.mc`, that is hidden by default. #954
- Custom deepcopy for `LocalOperator` to avoid building the `LocalOperator` from scratch each time it is copied. #964
- The constructor of `TensorHilbert` (which is used by the product operator `*` for inhomogeneous spaces) no longer fails when one of the component spaces is non-indexable. #1004
- The {ref}`nk.hilbert.random.flip_state` method used by `MetropolisLocal` now throws an error when called on a {ref}`nk.hilbert.ContinuousHilbert` Hilbert space, instead of entering an endless loop. #1014
- Fixed bug in the conversion to qutip for `MCMixedState`, where the resulting shape (Hilbert space size) was wrong. #1020
- Setting `MCState.sampler` now recomputes `MCState.chain_length` according to `MCState.n_samples` and the new `sampler.n_chains`. #1028
- `GCNN_Parity` allowed biasing both the parity-preserving and the parity-flip equivariant layers. These enter the network output in the same way, so having both is redundant and makes QGTs unstable. The biases of the parity-flip layers are now removed. #1030
- `GraphOperator` (and `Heisenberg`) now support passing a custom mapping of graph nodes to Hilbert space sites via the new `acting_on_subspace` argument. This makes it possible to create `GraphOperator`s that act on a subset of sites, which is useful in composite Hilbert spaces. #924
- `PauliString` now supports any Hilbert space with local size 2. The Hilbert space is now the optional first argument of the constructor. #960
- `PauliString`s can now be multiplied and summed together, performing some simple algebraic simplifications on the strings they contain. They also lazily initialize their internal data structures, making them faster to construct but slightly slower the first time that their matrix elements are accessed. #955
- `PauliString`s can now be constructed starting from an `OpenFermion` operator. #956
- In addition to nearest-neighbor edges, `Lattice` can now generate edges between next-nearest and, more generally, k-nearest neighbors via the constructor argument `max_neighbor_order` (see the sketch after this list). The edges can be distinguished by their `color` property (which is used, e.g., by `GraphOperator` to apply different bond operators). #970
- Two continuous-space operators (`KineticEnergy` and `PotentialEnergy`) have been implemented. #971
- `Heisenberg` Hamiltonians support different coupling strengths on `Graph` edges with different colors. #972.
- The `little_group` and `space_group_irreps` methods of `SpaceGroupBuilder` take the wave vector as either varargs or iterables. #975
- A new `netket.experimental` submodule has been created and all experimental features have been moved there. Note that in contrast to the other `netket` submodules, `netket.experimental` is not imported by default. #976
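A sketch combining `max_neighbor_order` with colour-dependent Heisenberg couplings, i.e. a J1-J2 model on a square lattice (sizes and coupling values are illustrative):

```python
import netket as nk

# square lattice with nearest- (color 0) and next-nearest-neighbour (color 1) edges
lat = nk.graph.Lattice(
    basis_vectors=[[1.0, 0.0], [0.0, 1.0]],
    extent=[4, 4],
    max_neighbor_order=2,
)
hi = nk.hilbert.Spin(s=1/2, N=lat.n_nodes)

# one coupling strength per edge colour
ham = nk.operator.Heisenberg(hi, lat, J=[1.0, 0.5])
```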
- Moved `nk.vqs.variables_from_***` to the `nk.experimental.vqs` module. Also moved the experimental samplers `nk.sampler.MetropolisPt` and `nk.sampler.MetropolisPmap` to `nk.experimental.sampler`. #976
- `operator.size` has been deprecated. If you were using this function, please transition to `operator.hilbert.size`. #985
- A bug where `LocalOperator.get_conn_flattened` would read out-of-bounds memory has been fixed. It is unlikely that the bug was causing problems, but it triggered warnings when running Numba with boundscheck activated. #966
- The dependency `python-igraph` has been updated to `igraph`, following the rename of the upstream project, in order to work on conda. #986
- {attr}`~netket.vqs.MCState.n_samples_per_rank` was returning wrong values and has now been fixed. #987
- The `DenseSymm` layer now also accepts objects of type `HashableArray` as the `symmetries` argument. #989
- A bug where `VMC.info()` was erroring has been fixed. #984
- Added conversion methods `to_qobj()` to operators and variational states, which produce QuTiP's qobjects (see the sketch after this list).
- A function `nk.nn.activation.reim` has been added that transforms a nonlinearity to act separately on the real and imaginary parts.
- Nonlinearities `reim_selu` and `reim_relu` have been added.
- Autoregressive Neural Networks (ARNN) now have a `machine_pow` field (defaults to 2) used to change the exponent used for the normalization of the wavefunction. #940.
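A sketch of the QuTiP conversion (requires `qutip` to be installed; the operator is illustrative):

```python
import netket as nk

hi = nk.hilbert.Spin(s=1/2, N=3)
ham = nk.operator.Ising(hi, nk.graph.Chain(3), h=1.0)

q_ham = ham.to_qobj()  # a qutip.Qobj, usable with QuTiP's exact solvers
```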
- The default initializer for `netket.models.GCNN` has been changed from `jax.nn.selu` to `netket.nn.reim_selu` #892
- `netket.nn.initializers` has been deprecated in favor of `jax.nn.initializers` #935.
- Subclasses of `AbstractARNN` must define the field `machine_pow` #940
- `nk.hilbert.HilbertIndex` and `nk.operator.spin.DType` are now unexported (they were never intended to be visible). #904
- `AbstractOperator`s have been renamed `DiscreteOperator`s. `AbstractOperator`s still exist, but have almost no functionality and are intended as the base class for more arbitrary (e.g. continuous-space) operators. If you have defined a custom operator inheriting from `AbstractOperator` you should change it to derive from `DiscreteOperator`. #929
- `PermutationGroup.product_table` now consumes less memory and is more performant. This is helpful when working with large symmetry groups. #884 #891
- Added a size check to `DiscreteOperator.get_conn` that throws helpful error messages if the sizes do not match. #927
- The internal `numba4jax` module has been factored out into a standalone library, named (how original) `numba4jax`. This library was never intended to be used by external users, but if for any reason you were using it, you should switch to the external library. #934
- `netket.jax` now includes several batching utilities like `batched_vmap` and `batched_vjp`. Those can be used to build memory-efficient batched code, but are considered internal and experimental and might change without warning. #925.
- Autoregressive networks now work with `Qubit` Hilbert spaces. #937
- The default initializer for `netket.nn.Dense` layers now matches the `flax.linen` default, and is `lecun_normal` instead of `normal(0.01)` #869
- The default initializer for `netket.nn.DenseSymm` layers is now chosen in order to give variance 1 to every output channel, therefore defaulting to `lecun_normal` #870
- `DenseSymm` now accepts a `mode` argument to specify whether the symmetries should be computed with a full dense matrix or an FFT. The latter method is much faster for sufficiently large systems. Other kwargs have been added to satisfy the interface. The API changes are also reflected in `RBMSymm` and `GCNN` (see the sketch below). #792
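A sketch of the new kwarg, assuming the layer accepts a permutation group as `symmetries` (the symmetry-group construction and feature count are illustrative; `mode="fft"` additionally requires the lattice shape, so the dense variant is shown):

```python
import netket as nk

g = nk.graph.Chain(8)
layer = nk.nn.DenseSymm(
    symmetries=g.translation_group(),  # permutations defining the symmetry
    features=4,
    mode="matrix",  # full dense symmetrizer; "fft" is faster on large systems
)
```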
- The so-called legacy netket in `netket.legacy` has been removed. #773
- The methods `expect` and `expect_and_grad` of `MCState` now use dispatch to select the relevant implementation of the algorithm. They can therefore be expanded and overridden without editing NetKet's source code. #804
- `netket.utils.mpi_available` has been moved to `netket.utils.mpi.available` to have a more consistent API (all MPI-related properties in the same submodule). #827
- `netket.logging.TBLog` has been renamed to `netket.logging.TensorBoardLog` for better readability. A deprecation warning is now issued if the older name is used. #827
- When `MCState` initializes a model by calling `model.init`, the call is now jitted. This should speed it up for non-trivial models but might break non-jit-invariant models. #832
- `operator.get_conn_padded` now supports arbitrarily-dimensioned bitstrings as input and reshapes the output accordingly. #834
- NetKet's implementation of dataclasses now supports `pytree_node=True/False` on cached properties. #835
- The Plum version has been bumped to 1.5.1 to avoid broken versions (1.4, 1.5). #856.
- Numba version 0.54 is now allowed #857.
- Fix progress bar bug. #810
- Make the repr/printing of history objects nicer in the REPL. #819
- The field `MCState.model` is now read-only, to prevent user errors. #822
- The order of the operators in `PauliString` no longer influences the estimate of the number of non-zero connected elements. #836
- The {ref}`netket.utils.group` submodule provides utilities for geometrical and permutation groups. `Lattice` (and its specialisations like `Grid`) use these to automatically construct the space groups of lattices, as well as their character tables for generating wave functions with broken symmetry. #724
- Autoregressive neural networks, a sampler, and masked linear layers have been added to `models`, `sampler` and `nn` #705.
- The `netket.graph.Grid` class has been removed. {ref}`netket.graph.Grid` will now return an instance of {ref}`graph.Lattice` supporting the same API but with new functionalities related to spatial symmetries. The `color_edges` optional keyword argument has been removed without deprecation. #724
- `MCState.n_discard` has been renamed `MCState.n_discard_per_chain` and the old binding has been deprecated #739.
- The `nk.optimizer.qgt.QGTOnTheFly` option `centered=True` has been removed because we are now convinced the two options yielded equivalent results. `QGTOnTheFly` now always behaves as if `centered=False` #706.
- `networkX` has been replaced by `igraph`, yielding a considerable speedup for some graph-related operations #729.
- The `netket.hilbert.random` module now uses `plum-dispatch` (through `netket.utils.dispatch`) to select the correct implementation of `random_state` and `flip_state`. This makes it easy to define new Hilbert states and extend their functionality. #734.
- The `AbstractHilbert` interface is now much smaller in order to also support continuous Hilbert spaces. Any functionality specific to discrete Hilbert spaces (what was previously supported) has been moved to a new abstract type `netket.hilbert.DiscreteHilbert`. Any Hilbert space previously subclassing {ref}`netket.hilbert.AbstractHilbert` should be modified to subclass {ref}`netket.hilbert.DiscreteHilbert` #800.
- `nn.to_array` and `MCState.to_array`, if `normalize=False`, do not subtract the logarithm of the maximum value from the state #705.
- Autoregressive networks now work with Fock space and give correct errors if the Hilbert space is not supported #806.
- Autoregressive networks are now much (10x-100x) faster #705.
- Do not throw errors when calling `operator.get_conn_flattened(states)` with a jax array #764.
- Fix bug with the driver progress bar when `step_size != 1` #747.
- Group Equivariant Neural Networks have been added to `models` #620
- A permutation-invariant RBM and a permutation-invariant dense layer have been added to `models` and `nn.linear` #573
- Add the property `acceptance` to `MetropolisSampler`'s `SamplerState`, computing the MPI-enabled acceptance ratio. #592.
- Add `StateLog`, a new logger that stores the parameters of the model during the optimization in a folder or in a tar file. #645
- A warning is now issued if NetKet detects it is running under `mpirun` but MPI dependencies are not installed #631
- `operator.LocalOperator`s now do not return a zero matrix element on the diagonal if the whole diagonal is zero. #623.
- `logger.JSONLog` now automatically flushes at every iteration if it does not consume significant CPU cycles. #599
- The interface of Stochastic Reconfiguration has been overhauled and made more modular. You can now specify the solver you wish to use, NetKet provides some dense solvers out of the box, and there are three different ways to compute the Quantum Geometric Tensor. Read the documentation to learn more about it. #674
- Unless you specify the QGT implementation you wish to use with SR, an automatic heuristic based on your model and the solver picks one. This might affect SR performance (see the sketch after this list). #674
- For all samplers, `n_chains` now sets the total number of chains across all MPI ranks. This is a breaking change compared to the old API, where `n_chains` would set the number of chains on a single MPI rank. It is still possible to set the number of chains per MPI rank by specifying `n_chains_per_rank` instead of `n_chains`. This change, while breaking, allows us to be consistent with the interface of {ref}`variational.MCState`, where `n_samples` is the total number of samples across MPI nodes.
- `MetropolisSampler.reset_chain` has been renamed to `MetropolisSampler.reset_chains`. Likewise in the constructor of all samplers.
- Briefly during development releases `MetropolisSamplerState.acceptance_ratio` returned the percentage (not the ratio) of acceptance. `acceptance_ratio` is now deprecated in favour of the correct `acceptance`.
- `models.Jastrow` now internally symmetrizes the matrix before computing its value #644
- `MCState.evaluate` has been renamed to `MCState.log_value` #632
- `nk.optimizer.SR` no longer accepts keyword arguments relative to the sparse solver. Those should be passed inside the closure or the `functools.partial` passed as the `solver` argument.
- `nk.optimizer.sr.SRLazyCG` and `nk.optimizer.sr.SRLazyGMRES` have been deprecated and will soon be removed.
- Parts of the `Lattice` API have been overhauled, with deprecations of several methods in favor of a consistent usage of `Lattice.position` for the real-space location of sites and `Lattice.basis_coords` for the location of sites in terms of basis vectors. `Lattice.sites` has been added, which provides a sequence of `LatticeSite` objects combining all site properties. Furthermore, `Lattice` now provides lookup of sites from their position via `id_from_position`, using a hashing scheme that works across periodic boundaries. #703 #715
- `nk.variational` has been renamed to `nk.vqs` and will be removed in a future release.
- Fix `operator.BoseHubbard` usage under jax Hamiltonian sampling #662
- Fix `SROnTheFly` for `R->C` models with non-homogeneous parameters #661
- Fix MPI compilation deadlock when computing expectation values #655
- Fix bug preventing the creation of a `hilbert.Spin` Hilbert space with odd sites and even `S`. #641
- Fix bug #635 preventing the usage of `NumpyMetropolisSampler` with `MCState.expect` #635
- Fix bug #635 where the `graph.Lattice` was not correctly computing neighbours because of floating point issues. #633
- Fix bug in the Y Pauli matrix, which was stored as its conjugate. #618 #617 #615
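A sketch of the overhauled SR interface referenced above (solver choice and settings are illustrative):

```python
import jax
import netket as nk

# pick the QGT implementation and the linear solver explicitly,
# or omit `qgt` to let the automatic heuristic choose one
sr = nk.optimizer.SR(
    qgt=nk.optimizer.qgt.QGTJacobianDense,
    solver=jax.scipy.sparse.linalg.cg,
    diag_shift=0.01,
)
```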
- Hilbert space constructors do not store the lattice graph anymore. As a consequence, the constructor does not accept the graph anymore.
- Special Hamiltonians defined on a lattice, such as {class}`operator.BoseHubbard`, {class}`operator.Ising` and {class}`operator.Heisenberg`, now require the graph to be passed explicitly through a `graph` keyword argument.
- {class}`operator.LocalOperator` now defaults to real-valued matrix elements, except if you construct it with a complex-valued matrix. This is also valid for operators such as :func:`operator.spin.sigmax` and similar.
- When performing algebraic operations {code}`*, -, +` on pairs of {class}`operator.LocalOperator`, the dtype of the result is computed using standard numpy promotion logic.
- Doing an in-place operation {code}`+=, -=, *=` on a real-valued operator will now fail if the other operand is complex. While this might seem annoying, it's useful to ensure that smaller types such as `float32` or `complex64` are preserved if the user desires to do so.
- {class}`AbstractMachine` has been removed. Its functionality is now split among the model itself, which is defined by the user, and {class}`variational.MCState` for pure states or {class}`variational.MCMixedState` for mixed states.
- The model, in general, is composed of two functions, or an object with two functions: an `init(rng, sample_val)` function, accepting a {func}`jax.random.PRNGKey` object and an input and returning the parameters and the state of the model for that particular sample shape, and an {code}`apply(params, samples, **kwargs)` function, evaluating the model for the given parameters and inputs (see the sketch after this list).
- Some models (previously machines) such as the RBM (Restricted Boltzmann Machine), NDM (Neural Density Matrix) or MPS (Matrix Product State ansatz) are available in {ref}`Pre-built models`.
- Machines, now called models, should be written using Flax or another jax framework.
- Serialization and deserialization functionality has now been moved to {ref}`netket.variational.MCState`, which supports the standard Flax interface through MsgPack. See the Flax docs for more information.
- {code}`AbstractMachine.init_random_parameters` functionality has now been absorbed into {meth}`netket.vqs.VariationalState.init_parameters`, which however has a different syntax.
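For example, such a model is typically written as a Flax module, whose `init`/`apply` pair provides exactly this contract (the architecture here is a hypothetical log-amplitude network):

```python
import flax.linen as nn
import jax.numpy as jnp

class MyModel(nn.Module):
    @nn.compact
    def __call__(self, x):
        # x has shape (..., n_sites); return one log-amplitude per sample
        x = nn.Dense(features=2 * x.shape[-1])(x)
        x = nn.tanh(x)
        return jnp.sum(x, axis=-1)

# used as, e.g., nk.vqs.MCState(sampler, MyModel())
```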
- {ref}`Samplers <Sampler>` now require the Hilbert space upon which they sample to be passed to the constructor. Also note that several keyword arguments of the samplers have changed, and new ones are available.
- It's now possible to change the {ref}`Samplers <Sampler>` dtype, which controls the type of the output. By default they use double-precision samples (`np.float64`). Be wary of type promotion issues with your models.
- {ref}`Samplers <Sampler>` no longer take a machine as an argument.
- {ref}`Samplers <Sampler>` are now immutable (frozen) `dataclasses` (defined through `flax.struct.dataclass`) that only hold the sampling parameters. As a consequence it is no longer possible to change settings such as `n_chains` or `n_sweeps` without creating a new sampler. If you wish to update only one parameter, you can construct the new sampler with the updated value by using the `sampler.replace(parameter=new_value)` function (see the sketch after this list).
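For example (parameter values illustrative):

```python
import netket as nk

hi = nk.hilbert.Spin(s=1/2, N=4)
sampler = nk.sampler.MetropolisLocal(hi, n_chains=16)

# samplers are frozen dataclasses: derive an updated copy instead of mutating
sampler = sampler.replace(n_sweeps=2 * hi.size)
```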
- {ref}`Samplers <Sampler>` are no longer stateful objects. Instead, they can construct an immutable state object {ref}`netket.sampler.init_state`, which can be passed to sampling functions such as {ref}`netket.sampler.sample`, which now also return the updated state. However, unless you have particular use-cases, we advise you to use the variational state {ref}`MCState` instead.
- The {ref}`netket.optimizer` module has been overhauled, and now only re-exports the flax optim module. We advise you not to use NetKet's optimizers but to use optax instead.
- The {ref}`netket.optimizer.SR` object is now only a set of options used to compute the SR matrix. The SR matrix, now called `quantum_geometric_tensor`, can be obtained by calling {meth}`variational.MCState.quantum_geometric_tensor`. Depending on the settings, this can be a lazy object.
- `netket.Vmc` has been renamed to {ref}`netket.VMC`.
- {ref}`netket.models.RBM` replaces the old {code}`RBM` machine, but has real parameters by default.
- As we rely on Jax, using {code}`dtype=float` or {code}`dtype=complex`, which are weak types, will sometimes lead to loss of precision because they might be converted to `float32`. Use {code}`np.float64` or {code}`np.complex128` instead if you want double precision when defining your models (see the sketch below).
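A sketch of the recommended usage, assuming the model takes the parameter dtype through a `dtype` keyword as in this release:

```python
import numpy as np
import netket as nk

# weak `complex` could silently be demoted to complex64;
# an explicit numpy dtype keeps double precision
ma = nk.models.RBM(alpha=2, dtype=np.complex128)
```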