This is a package for simulating the real-time dynamics of large-scale quantum spin systems using the projected time-dependent Variational Monte Carlo (p-tVMC) method. The package is based on the results of the paper [Neural Projected Quantum Dynamics: a systematic study](https://arxiv.org/abs/2410.10720).
This repository contains:

- Analytics: a set of Mathematica notebooks that can be used to re-derive the coefficients of the discretization schemes discussed in the manuscript.
- data: the data obtained from the main simulations of the manuscript, divided into per-figure folders. In particular:
  - Figure 2: the data comparing the signal-to-noise ratio (SNR) of different fidelity estimators;
  - Figure 3: the data comparing the SNR of different gradient estimators;
  - Figure 7: the states obtained by p-tVMC calculations on the 10x10 lattice. These can be loaded by running `nqxpack.load("path/to/file.nk")` (see the snippet after this list).
- packages: the code used for our simulations, discussed below. You can install the package `ptvmc`, as well as some other internal requirements, by following the instructions below.
- examples: example scripts for fast (few minutes) calculations on a 3x3 lattice, and for longer-running (few hours) calculations on a 6x6 lattice with the same hyperparameters we used for our large 10x10 calculations.
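As a concrete example of working with one of the Figure 7 states, the following sketch loads a file and estimates an observable on it. The path is a placeholder, and we assume the `.nk` file deserializes to a NetKet variational state (`MCState`):

```python
import netket as nk
import nqxpack

# Placeholder path: point this to one of the files in the Figure 7 data folder.
vs = nqxpack.load("path/to/file.nk")

# Assuming the loaded object is a NetKet MCState, estimate e.g. the
# total transverse magnetization on it.
hi = vs.hilbert
Mx = sum(nk.operator.spin.sigmax(hi, i) for i in range(hi.size))
print(vs.expect(Mx))
```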
This project uses uv for dependency management, which ensures exact reproducibility of the environment.
- Install uv:

  ```bash
  curl -LsSf https://astral.sh/uv/install.sh | sh
  ```

- Clone and set up the environment:

  ```bash
  git clone https://github.com/NeuralQXLab/ptvmc-systematic-study
  cd ptvmc-systematic-study
  uv sync --group dev
  ```

  This creates a .venv directory with all dependencies locked to exact versions.
- Run scripts:

  ```bash
  # Using uv run (no activation needed)
  uv run python examples/tfim_dynamics_3x3.py

  # Or activate the environment
  source .venv/bin/activate
  python examples/tfim_dynamics_3x3.py
  ```

This package is not registered on PyPI, so you must install it directly from GitHub. To do so, you can run the following command:
```bash
pip install git+https://github.com/NeuralQXLab/ptvmc-systematic-study
```

You can also clone the repository and install the package locally by running

```bash
git clone https://github.com/NeuralQXLab/ptvmc-systematic-study
cd ptvmc-systematic-study
pip install -e .
```

We currently provide three main examples. The most complete one is the 'TFIM dynamics' example, which is used to produce the data in the animations above. We also provide an example showing a simple code snippet that performs a single state compression, and a third example, very similar to the first, but with a larger system size and with parameters closely matching those used in the paper.
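Before running the examples, you can quickly check that the relevant packages import correctly; a minimal sanity check (module names as used in the example scripts below):

```python
# Quick sanity check that the packages are importable.
import ptvmc
import advanced_drivers
import netket

print("ok")
```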
The dynamics of a closed quantum system is described by the Schrödinger equation

$$ i\hbar \, \partial_t \ket{\psi(t)} = \hat{H} \ket{\psi(t)} , $$

where $\hat{H}$ is the Hamiltonian of the system. The p-tVMC method integrates this equation by repeatedly projecting the exactly evolved state onto a variational manifold of parameterized states $\ket{\psi_\theta}$.

A practical implementation of the method requires a careful analysis of two aspects: (i) an efficient approximation of the evolutor $e^{-i\hat{H}\delta t}$ over a small timestep $\delta t$, and (ii) an accurate and efficient optimization (compression) of the evolved state onto the variational manifold.

Efficient, scalable, and high-order approximations of the evolutor can be obtained in the form of a product series

$$ e^{\Lambda} = \prod_{k=1}^{s} \mathcal{D}_k + \mathcal{O}(\Lambda^{o+1}), \qquad \Lambda = -i \hat{H} \delta t , $$

where the number of elements $s$ and the form of the transformations $\mathcal{D}_k$ set the order $o$ of the expansion. The evolved state is then obtained through a sequence of compressions

$$ \theta^{(k)} = \operatorname*{arg\,min}_{\theta} \; \mathcal{I}\left(\ket{\psi_{\theta}},\, \mathcal{D}_k \ket{\psi_{\theta^{(k-1)}}}\right) , $$

with $k = 1, \dots, s$, where $\mathcal{I}$ denotes the infidelity between the two states. Each optimization is preceded by the application of $\mathcal{D}_k$ to the state $\ket{\psi_{\theta^{(k-1)}}}$.
With this package, a single optimization of the above form can be performed as follows:
```python
import netket as nk
import optax

from netket.optimizer.solver import cholesky
from advanced_drivers.driver import InfidelityOptimizerNG

# `sampler`, `model`, `U`, `V` and `logger` are assumed to be defined by the user
# (e.g. logger = nk.logging.JsonLog("out")).
vs = nk.vqs.MCState(sampler, model, n_samples=1024, n_discard_per_chain=5)
ts = nk.vqs.MCState(sampler, model, n_samples=1024, n_discard_per_chain=5)

driver = InfidelityOptimizerNG(
    target_state=ts,
    optimizer=optax.sgd(1e-2),
    diag_shift=1e-6,
    variational_state=vs,
    linear_solver_fn=cholesky,
    U=U,
    V=V,
    sample_Uphi=True,
)
driver.run(n_iter=100, out=logger)
```

The `InfidelityOptimizerNG` driver is an instance of the `AdvancedVariationalDriver` class. This is an extension of the `VariationalDriver` class in NetKet, which allows for a more controlled optimization loop and more flexible callback functions.
`U` and `V` are NetKet `DiscreteOperator`s to be defined by the user. Choosing a suitable set of transformations $\{\hat{U}_k, \hat{V}_k\}$ defines the discretization scheme and its order of accuracy.
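For reference, the loss minimized by such a compression is the infidelity between the transformed states $\hat{V}\ket{\psi_\theta}$ and $\hat{U}\ket{\phi}$, where $\ket{\phi}$ is the (fixed) target state. Written in standard form (our notation here; the Monte Carlo estimators actually used by the driver are discussed in the paper),

$$ \mathcal{I}(\theta) = 1 - \frac{\left| \bra{\psi_\theta} \hat{V}^\dagger \hat{U} \ket{\phi} \right|^2}{\bra{\psi_\theta} \hat{V}^\dagger \hat{V} \ket{\psi_\theta}\, \bra{\phi} \hat{U}^\dagger \hat{U} \ket{\phi}} . $$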
The schemes discussed in the manuscript are the S-LPE-o and S-PPE-o expansions, where o denotes the order of the expansion; their substeps and coefficients for the first few orders are tabulated in the paper.
The proposed schemes are explicitly constructed to take advantage of the structure of the optimization problems, minimizing both the number of optimizations required to achieve a given order of accuracy and the computational cost of each optimization. The numerical values of the expansion coefficients can be re-derived with the Mathematica notebooks in the mathematica_scripts directory.
In this directory we also provide Python implementations of the product expansions, together with a simple example comparing the product-expansion discretization schemes to the exact evolution operator in state-vector simulations (no Monte Carlo sampling and a full description of the state).
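To illustrate the idea independently of the package, here is a minimal NumPy sketch comparing a second-order linear product expansion to the exact propagator on a tiny two-spin Hamiltonian. The coefficients $a_{1,2} = (1 \pm i)/2$ used below are the generic second-order values obtained by matching the Taylor expansion of $e^{\Lambda}$; the schemes and coefficients used in the paper are the ones derived in the Mathematica notebooks.

```python
import numpy as np
from scipy.linalg import expm

# Tiny two-spin transverse-field Ising Hamiltonian as a dense matrix.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)
H = -np.kron(sz, sz) - (np.kron(sx, id2) + np.kron(id2, sx))

dt = 0.05
lam = -1j * dt * H          # Λ = -i H dt
exact = expm(lam)           # exact evolutor over one timestep

# Second-order linear product expansion: e^Λ ≈ (1 + a_1 Λ)(1 + a_2 Λ).
a = [(1 + 1j) / 2, (1 - 1j) / 2]
approx = np.eye(4, dtype=complex)
for a_k in a:
    approx = (np.eye(4) + a_k * lam) @ approx

print(np.linalg.norm(approx - exact))   # error scales as O(dt^3)
```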
The optimization process is a crucial aspect of the p-tVMC method. As we show in the paper, the use of Natural Gradient Descent (NGD) is essential to drive the optimizations to convergence. Within this framework, the vanilla gradient descent update

$$ \theta_{t+1} = \theta_t - \eta \, \nabla_\theta \mathcal{L}(\theta_t) $$

is replaced by the natural gradient descent update

$$ \theta_{t+1} = \theta_t - \eta \, S^{-1} \nabla_\theta \mathcal{L}(\theta_t) , $$

where $S$ is the Quantum Geometric Tensor (QGT), also known as the Quantum Fisher Information Matrix (QFIM). This preconditioned update is what is performed by the `InfidelityOptimizerNG` driver.
The most basic keyword arguments that can be used to customize the preconditioning are:

- `optimizer`: an optax optimizer object. This should always be `optax.sgd` when performing natural gradient descent.
- `diag_shift`: it is often the case that the QFIM is ill-conditioned, leading to numerical instabilities in the optimization process. To mitigate this issue, we use Tikhonov regularization, adding a small positive shift to the diagonal of the QGT as $S \to S + \lambda \mathbb{1}$. The `diag_shift` parameter controls the value of $\lambda$.
- `linear_solver_fn`: a function that takes as input a matrix and a vector and returns the solution of the corresponding linear system. This is used to solve the linear system in the natural gradient descent step. The `cholesky` solver from NetKet is one of the most efficient choices for this.
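As a purely schematic illustration of what these knobs control, here is a NumPy sketch of a regularized natural-gradient solve (not the driver's internal implementation; all shapes and values are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_params = 512, 64

# Stand-ins for the centered log-derivatives of the ansatz and the loss gradient;
# in the real driver both are Monte Carlo estimates.
O = rng.normal(size=(n_samples, n_params))
grad = rng.normal(size=n_params)

S = O.T @ O / n_samples                 # QGT / QFIM estimate
diag_shift = 1e-6                       # the λ of the Tikhonov regularization

# `linear_solver_fn` plays the role of this solve: (S + λ·1) δ = ∇L
delta = np.linalg.solve(S + diag_shift * np.eye(n_params), grad)

learning_rate = 1e-2                    # the optax.sgd learning rate
# The optimizer then applies θ ← θ - learning_rate · δ.
```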
The QGT is a square matrix of size $N_p \times N_p$, where $N_p$ is the number of variational parameters, so solving the associated linear system becomes expensive for large models. In that regime one can instead work with the Neural Tangent Kernel (NTK), a matrix whose size is set by the number of samples rather than by the number of parameters.
The `InfidelityOptimizerNG` driver provides an implementation of the tangent kernel method. The main keyword arguments used to activate/deactivate it are:

- `use_ntk`: a boolean flag enabling the use of the tangent kernel method. We recommend this method when the number of parameters largely surpasses the number of samples.
- `on_the_fly`: a boolean flag enabling the on-the-fly computation of the NTK. This is useful when the number of samples and parameters is too large to store the Jacobian matrix in memory.
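Building on the driver construction shown earlier (same `ts`, `vs`, `U`, `V`, and solver), switching to the tangent-kernel formulation amounts to setting these two flags:

```python
driver = InfidelityOptimizerNG(
    target_state=ts,
    variational_state=vs,
    optimizer=optax.sgd(1e-2),
    diag_shift=1e-6,
    linear_solver_fn=cholesky,
    U=U,
    V=V,
    sample_Uphi=True,
    use_ntk=True,      # solve in sample space (NTK) instead of parameter space (QGT)
    on_the_fly=True,   # avoid materializing the full Jacobian
)
```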
Choosing an appropriate value for the diagonal shift $\lambda$ is crucial for the stability and accuracy of the optimization. Identifying an optimal value for $\lambda$ by hand is difficult, as it typically changes during the optimization; for this reason, the package provides an automatic adjustment of the diagonal shift based on a proportional-integral (PI) controller.
While the details of the autotuning logic can be found in the paper, its use in the InfidelityOptimizerNG driver is straightforward. It is implemented as a callback function that can be passed to the driver as follows:
```python
from advanced_drivers.callbacks import PI_controller_diagshift
from advanced_drivers.driver import InfidelityOptimizerNG

autotune_cb = PI_controller_diagshift(
    target=0.9,
    safety_fac=1.0,
    clip_min=0.5,
    clip_max=2,
    diag_shift_min=1e-9,
    diag_shift_max=0.1,
    order=1,
    beta_1=0.9,
    beta_2=0.1,
)

driver = InfidelityOptimizerNG(
    ...
)
driver.run(n_iter=100, out=logger, callback=autotune_cb)
```

As we have seen above, a single physical timestep consists of a sequence of state compressions, one for each transformation $\mathcal{D}_k$ of the chosen discretization scheme. The `ptvmc` package provides the machinery to assemble these compressions into a full time evolution.
The main classes are:
- `AbstractStateCompression`: an abstract class defining the interface for a single state compression. This class is used to define the compression algorithm to be used in the optimization process. For the case of infidelity optimization, a concretization of the abstract class is already provided in `InfidelityCompression`. To use it, one simply does

  ```python
  compression_alg = ptvmc.compression.InfidelityCompression(
      driver_class=advd.driver.InfidelityOptimizerNG,
      build_parameters={
          "diag_shift": 1e-6,
          "optimizer": optax.inject_hyperparams(optax.sgd)(learning_rate=0.05),
          "linear_solver_fn": cholesky,
          "proj_reg": None,
          "momentum": None,
          "chunk_size_bwd": None,
          "collect_quadratic_model": True,
          "use_ntk": False,
          "cv_coeff": -0.5,
          "resample_fraction": None,
          "estimator": "cmc",
      },
      run_parameters={
          "n_iter": 100,
          "callback": [
              PI_controller_diagshift(
                  target=0.9,
                  safety_fac=1.0,
                  clip_min=0.5,
                  clip_max=2,
                  diag_shift_min=1e-9,
                  diag_shift_max=0.1,
                  order=1,
                  beta_1=0.9,
                  beta_2=0.1,
              )
          ],
      },
  )
  ```
- `AbstractDiscretization`: an abstract class defining the interface for a single timestep. This class is used to define the sequence of compressions making up a physical timestep. We provide ready-to-use concretizations of the general `AbstractDiscretization` class implementing all the product expansion schemes detailed above. They can simply be instantiated as

  ```python
  solver = ptvmc.solver.SPPE3()
  ```

  These solvers allow looping over the sequence of compressions defining a physical timestep. While this class defines the structure of the discretization scheme, it does not execute the compressions. This is performed by calling the `step` method of the `Integrator` class, which takes as input the compression algorithm and the solver.
- `Integrator`: a class that takes care of a single step of time evolution. It takes as input the compression algorithm and the solver, and performs a cycle over the different stages of the discretization algorithm.
- `PTVMCDriver`: a class that takes care of the full time evolution. It takes as input the generator of the dynamics, the initial time, the solver, the compression algorithm, and the initial variational state. It then performs the full time evolution of the system, looping over the single timesteps (i.e. looping over the `Integrator` class).

The full time evolution can be performed as follows:

```python
integration_params = ptvmc.IntegrationParameters(dt=dt)
generator = -1j * H

te_driver = ptvmc.PTVMCDriver(
    generator,
    0.0,
    solver=solver,
    integration_params=integration_params,
    compression_algorithm=compression_alg,
    variational_state=vs,
)
te_driver.run(T)
```
where `H` is the Hamiltonian of the system, `dt` is the time step, `T` is the final time of the physical simulation, and `vs` is the initial variational state.
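At the end of the run, `vs` holds the variational approximation of the state at time `T`. If you want to store it in the same format as the states in the data folder, nqxpack can be used; here we assume a `save(object, path)` signature mirroring the `load` call shown earlier, and the output path is a placeholder:

```python
import nqxpack

# Assumed signature: nqxpack.save(object, path); adjust the path as needed.
nqxpack.save(vs, "final_state.nk")
```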
If you use this package in your research, please cite this repository as
```bibtex
@software{netket_fidelity,
  author = {Gravina, Luca and Vicentini, Filippo},
  title  = {ptvmc-systematic-study package},
  url    = {https://github.com/NeuralQXLab/ptvmc-systematic-study},
  year   = {2025}
}
```
and the paper as
```bibtex
@article{gravina_neural_2024,
  title   = {Neural Projected Quantum Dynamics: a systematic study},
  author  = {Gravina, Luca and Savona, Vincenzo and Vicentini, Filippo},
  journal = {arXiv preprint arXiv:2410.10720},
  year    = {2024},
  url     = {https://arxiv.org/abs/2410.10720}
}
```

