by Willem Gispen and Austen Lamacraft
This code repository accompanies our paper, accepted for MSML 2021.
Visualization of the link between quantum mechanics and reinforcement learning.
We introduce reinforcement learning (RL) formulations of the problem of finding the ground state of a many-body quantum mechanical model defined on a lattice. We show that stoquastic Hamiltonians -- those without a sign problem -- have a natural decomposition into stochastic dynamics and a potential representing a reward function. The mapping to RL is developed for both continuous and discrete time, based on a generalized Feynman--Kac formula in the former case and a stochastic representation of the Schrödinger equation in the latter. We discuss the application of this mapping to the neural representation of quantum states, spelling out the advantages over approaches based on direct representation of the wavefunction of the system.
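The continuous-time mapping rests on the Feynman--Kac formula, which expresses matrix elements of exp(-TH) as an average over Brownian paths weighted by the accumulated potential. As a minimal, self-contained illustration -- this is not code from the repository, and the model, discretization, and parameter values are our own choices for the sketch -- the following Monte Carlo estimate recovers the ground-state energy of a 1D harmonic oscillator, H = -1/2 d^2/dx^2 + x^2/2, whose exact ground-state energy is 0.5:

```python
import numpy as np

# Feynman-Kac: E_x[ exp(-int_0^T V(B_s) ds) ] ~ const * exp(-E0 * T)
# for large T, where B_s is a Brownian motion and E0 is the ground-state
# energy of H = -1/2 d^2/dx^2 + V(x).  Taking a ratio of the average at
# two times T1 < T2 cancels the constant prefactor.

rng = np.random.default_rng(0)

def V(x):
    return 0.5 * x**2  # harmonic potential; exact E0 = 0.5

n_paths, dt = 20_000, 0.01
T1, T2 = 2.0, 4.0
steps1, steps2 = int(T1 / dt), int(T2 / dt)

x = np.zeros(n_paths)      # all paths start at x = 0
log_w = np.zeros(n_paths)  # accumulated log-weight: -integral of V dt

for step in range(1, steps2 + 1):
    log_w -= V(x) * dt
    x += rng.normal(0.0, np.sqrt(dt), size=n_paths)
    if step == steps1:
        u1 = np.exp(log_w).mean()
u2 = np.exp(log_w).mean()

e0 = -(np.log(u2) - np.log(u1)) / (T2 - T1)
print(f"estimated ground-state energy: {e0:.3f}")
```

The estimate should land close to the exact value 0.5. In the RL formulation described in the paper, the stochastic dynamics plays the role of a policy and the potential that of a reward, rather than appearing as an exponential reweighting as in this plain Monte Carlo sketch.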
All our models are implemented using PyTorch and PyTorch Lightning. All source code used to generate the results in the paper is in the continuum folder.
The calculations and figure generation are run inside Jupyter notebooks or Python scripts contained in the experiments folder. Results generated by the code are saved in experiments/results; figures are in experiments/figs.
You can download a copy of all the files in this repository by cloning the git repository:
git clone https://github.com/WillemGispen/Lattice-QuaRL.git
You'll need a working Python environment to run the code.
The recommended way to set up your environment is through the Anaconda Python distribution, which provides the conda package manager.
Anaconda can be installed in your user directory and does not interfere with
the system Python installation. We use conda
virtual environments to manage the project dependencies in
isolation.
Thus, you can install our dependencies without causing conflicts with your
setup (even with different Python versions).
The required dependencies are specified in the file environment.yml. Run the following command in the repository folder (where environment.yml is located) to create a separate environment and install all required dependencies in it:
conda env create
Before running any code you must activate the conda environment:
conda activate lattice-quarl
or, if you're on Windows:
activate lattice-quarl
This will enable the environment for your current terminal session. Any subsequent commands will use software that is installed in the environment.
To use or test our code, or to reproduce the results and figures, run the Python scripts or Jupyter notebooks in the experiments directory using this conda environment.
All source code is made available under the MIT license. You can freely
use and modify the code, without warranty, so long as you provide attribution
to the authors. See LICENSE.md
for the full license text.
If you use or build on our work, please cite our paper.