diff --git a/docs/source/auto_examples/auto_examples_jupyter.zip b/docs/source/auto_examples/auto_examples_jupyter.zip
index d4bc9d4c..c4389316 100644
Binary files a/docs/source/auto_examples/auto_examples_jupyter.zip and b/docs/source/auto_examples/auto_examples_jupyter.zip differ
diff --git a/docs/source/auto_examples/auto_examples_python.zip b/docs/source/auto_examples/auto_examples_python.zip
index d709b325..7e7c4ce6 100644
Binary files a/docs/source/auto_examples/auto_examples_python.zip and b/docs/source/auto_examples/auto_examples_python.zip differ
diff --git a/docs/source/auto_examples/example_nn_Si.ipynb b/docs/source/auto_examples/example_nn_Si.ipynb
index c5ddac38..fbfde5dd 100644
--- a/docs/source/auto_examples/example_nn_Si.ipynb
+++ b/docs/source/auto_examples/example_nn_Si.ipynb
@@ -76,7 +76,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-    "In the above code, we build a NN model with an input layer, two hidden layer, and an\noutput layer. The ``descriptor`` carries the information of the input layer, so it is\nnot needed to be specified explicitly. For each hidden layer, we first do a linear\ntransformation using ``nn.Linear(size_in, size_out)`` (essentially carrying out $y\n= xW+b$, where $W$ is the weight matrix of size ``size_in`` by ``size_out``, and\n$b$ is a vector of size ``size_out``. Then we apply the hyperbolic tangent\nactivation function ``nn.Tanh()`` to the output of the Linear layer (i.e. $y$) so\nas to add the nonlinearity. We use a Linear layer for the output layer as well, but\nunlike the hidden layer, no activation function is applied here. The input size\n``size_in`` of the first hidden layer must be the size of the descriptor, which is\nobtained using ``descriptor.get_size()``. For all other layers (hidden or output), the\ninput size must be equal to the output size of the previous layer. The ``out_size`` of\nthe output layer must be 1 such that the output of the NN model gives the energy of the\natom.\n\nThe ``set_save_metadata`` function call informs where to save intermediate models during\nthe optimization (discussed below), and what the starting epoch and how often to save\nthe model.\n\n\n## Training set and calculator\n\nThe training set and the calculator are the same as explained in `tut_kim_sw`. The\nonly difference is that we need to use the\n:mod:`~kliff.calculators.CalculatorTorch()`, which is targeted for the NN model.\nAlso, its ``create()`` method takes an argument ``reuse`` to inform whether to reuse the\nfingerprints generated from the descriptor if it is present.\n\n"
+    "In the above code, we build a NN model with an input layer, two hidden layers, and an\noutput layer. The ``descriptor`` carries the information of the input layer, so it does\nnot need to be specified explicitly. For each hidden layer, we first do a linear\ntransformation using ``nn.Linear(size_in, size_out)`` (essentially carrying out $y\n= xW+b$, where $W$ is the weight matrix of size ``size_in`` by ``size_out``, and\n$b$ is a vector of size ``size_out``). Then we apply the hyperbolic tangent\nactivation function ``nn.Tanh()`` to the output of the Linear layer (i.e. $y$) to\nadd nonlinearity. We use a Linear layer for the output layer as well, but\nunlike the hidden layers, no activation function is applied here. The input size\n``size_in`` of the first hidden layer must be the size of the descriptor, which is\nobtained using ``descriptor.get_size()``. For all other layers (hidden or output), the\ninput size must be equal to the output size of the previous layer. The ``out_size`` of\nthe output layer must be 1 such that the output of the NN model gives the energy of the\natom.\n\nThe ``set_save_metadata`` function call specifies where to save intermediate models during\nthe optimization (discussed below), the epoch at which to start saving, and how often to\nsave the model.\n\n\n## Training set and calculator\n\nThe training set and the calculator are the same as explained in `tut_kim_sw`. The\nonly difference is that we need to use the\n:mod:`~kliff.calculators.CalculatorTorch()`, which is targeted for the NN model.\nAlso, its ``create()`` method takes an argument ``reuse`` to inform whether to reuse the\nfingerprints generated from the descriptor if they are present.\nTo train on a GPU, set ``gpu=True`` when creating ``CalculatorTorch``.\n\n\n"
 ]
 },
 {
@@ -87,7 +87,7 @@
 },
 "outputs": [],
 "source": [
-    "# training set\ndataset_path = download_dataset(dataset_name=\"Si_training_set\")\ndataset_path = dataset_path.joinpath(\"varying_alat\")\ntset = Dataset(dataset_path)\nconfigs = tset.get_configs()\n\n# calculator\ncalc = CalculatorTorch(model)\n_ = calc.create(configs, reuse=False)"
+    "# training set\ndataset_path = download_dataset(dataset_name=\"Si_training_set\")\ndataset_path = dataset_path.joinpath(\"varying_alat\")\ntset = Dataset(dataset_path)\nconfigs = tset.get_configs()\n\n# calculator\ncalc = CalculatorTorch(model, gpu=False)\n_ = calc.create(configs, reuse=False)"
 ]
 },
 {
diff --git a/docs/source/auto_examples/example_nn_Si.py b/docs/source/auto_examples/example_nn_Si.py
index e4e9defb..927f4246 100644
--- a/docs/source/auto_examples/example_nn_Si.py
+++ b/docs/source/auto_examples/example_nn_Si.py
@@ -102,6 +102,8 @@
 # :mod:`~kliff.calculators.CalculatorTorch()`, which is targeted for the NN model.
 # Also, its ``create()`` method takes an argument ``reuse`` to inform whether to reuse the
 # fingerprints generated from the descriptor if it is present.
+# To train on a GPU, set ``gpu=True`` when creating ``CalculatorTorch``.
+#
 
 # training set
 dataset_path = download_dataset(dataset_name="Si_training_set")
@@ -110,7 +112,7 @@
 configs = tset.get_configs()
 
 # calculator
-calc = CalculatorTorch(model)
+calc = CalculatorTorch(model, gpu=False)
 _ = calc.create(configs, reuse=False)
 
diff --git a/docs/source/auto_examples/example_nn_Si.py.md5 b/docs/source/auto_examples/example_nn_Si.py.md5
index 43bb1baa..df7af95c 100644
--- a/docs/source/auto_examples/example_nn_Si.py.md5
+++ b/docs/source/auto_examples/example_nn_Si.py.md5
@@ -1 +1 @@
-61f8e82a0dc400f3c2e1e616e8f44301
\ No newline at end of file
+ddfc7cb67629dfea5b40790f4f7ce5e0
\ No newline at end of file
diff --git a/docs/source/auto_examples/example_nn_Si.rst b/docs/source/auto_examples/example_nn_Si.rst
index f8432d63..f3962d23 100644
--- a/docs/source/auto_examples/example_nn_Si.rst
+++ b/docs/source/auto_examples/example_nn_Si.rst
@@ -128,7 +128,7 @@ We can then build the NN model on top of the descriptor.
 
-.. GENERATED FROM PYTHON SOURCE LINES 77-105
+.. GENERATED FROM PYTHON SOURCE LINES 77-107
 
 In the above code, we build a NN model with an input layer, two hidden layer, and an
 output layer. The ``descriptor`` carries the information of the input layer, so it is
@@ -158,8 +158,10 @@ only difference is that we need to use the
 :mod:`~kliff.calculators.CalculatorTorch()`, which is targeted for the NN model.
 Also, its ``create()`` method takes an argument ``reuse`` to inform whether to reuse the
 fingerprints generated from the descriptor if it is present.
+To train on a GPU, set ``gpu=True`` when creating ``CalculatorTorch``.
 
-.. GENERATED FROM PYTHON SOURCE LINES 105-117
+
+.. GENERATED FROM PYTHON SOURCE LINES 107-119
 
 .. code-block:: default
 
 
     # training set
     dataset_path = download_dataset(dataset_name="Si_training_set")
     dataset_path = dataset_path.joinpath("varying_alat")
     tset = Dataset(dataset_path)
     configs = tset.get_configs()
 
     # calculator
-    calc = CalculatorTorch(model)
+    calc = CalculatorTorch(model, gpu=False)
     _ = calc.create(configs, reuse=False)
 
@@ -185,21 +187,22 @@ fingerprints generated from the descriptor if it is present.
 
     .. code-block:: none
 
-    2021-08-11 22:52:40.505 | INFO | kliff.dataset.dataset:_read:370 - 400 configurations read from /Users/mjwen/Applications/kliff/examples/Si_training_set/varying_alat
-    2021-08-11 22:52:40.505 | INFO | kliff.descriptors.descriptor:generate_fingerprints:103 - Start computing mean and stdev of fingerprints.
-    2021-08-11 22:53:13.620 | INFO | kliff.descriptors.descriptor:generate_fingerprints:120 - Finish computing mean and stdev of fingerprints.
-    2021-08-11 22:53:13.622 | INFO | kliff.descriptors.descriptor:generate_fingerprints:128 - Fingerprints mean and stdev saved to `fingerprints_mean_and_stdev.pkl`.
-    2021-08-11 22:53:13.622 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:163 - Pickling fingerprints to `fingerprints.pkl`
-    2021-08-11 22:53:13.662 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:175 - Processing configuration: 0.
-    2021-08-11 22:53:13.956 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:175 - Processing configuration: 100.
-    2021-08-11 22:53:14.244 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:175 - Processing configuration: 200.
-    2021-08-11 22:53:14.624 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:175 - Processing configuration: 300.
-    2021-08-11 22:53:15.100 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:218 - Pickle 400 configurations finished.
+    2021-11-20 22:33:47.584 | INFO | kliff.dataset.dataset:_read:370 - 400 configurations read from /Users/mjwen/Applications/kliff/examples/Si_training_set/varying_alat
+    2021-11-20 22:33:47.585 | INFO | kliff.calculators.calculator_torch:_get_device:417 - Training on cpu
+    2021-11-20 22:33:47.586 | INFO | kliff.descriptors.descriptor:generate_fingerprints:103 - Start computing mean and stdev of fingerprints.
+    2021-11-20 22:34:24.241 | INFO | kliff.descriptors.descriptor:generate_fingerprints:120 - Finish computing mean and stdev of fingerprints.
+    2021-11-20 22:34:24.244 | INFO | kliff.descriptors.descriptor:generate_fingerprints:128 - Fingerprints mean and stdev saved to `fingerprints_mean_and_stdev.pkl`.
+    2021-11-20 22:34:24.244 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:163 - Pickling fingerprints to `fingerprints.pkl`
+    2021-11-20 22:34:24.908 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:175 - Processing configuration: 0.
+    2021-11-20 22:34:25.779 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:175 - Processing configuration: 100.
+    2021-11-20 22:34:26.898 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:175 - Processing configuration: 200.
+    2021-11-20 22:34:28.475 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:175 - Processing configuration: 300.
+    2021-11-20 22:34:29.533 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:218 - Pickle 400 configurations finished.
 
-.. GENERATED FROM PYTHON SOURCE LINES 118-130
+.. GENERATED FROM PYTHON SOURCE LINES 120-132
 
 Loss function
 -------------
 
@@ -214,7 +217,7 @@ through the training set for ``10`` epochs. The learning rate ``lr`` used here i
 ``0.001``, and typically, one may need to play with this to find an acceptable one that
 drives the loss down in a reasonable time.
 
-.. GENERATED FROM PYTHON SOURCE LINES 130-135
+.. GENERATED FROM PYTHON SOURCE LINES 132-137
 
 .. code-block:: default
 
@@ -233,7 +236,7 @@
 
     .. code-block:: none
 
-    2021-08-11 22:53:15.324 | INFO | kliff.loss:minimize:708 - Start minimization using optimization method: Adam.
+    2021-11-20 22:34:29.791 | INFO | kliff.loss:minimize:708 - Start minimization using optimization method: Adam.
     Epoch = 0 loss = 7.3307514191e+01
     Epoch = 1 loss = 7.2090656281e+01
     Epoch = 2 loss = 7.1389846802e+01
     Epoch = 3 loss = 7.0752244949e+01
     Epoch = 4 loss = 7.0143323898e+01
     Epoch = 5 loss = 6.9517834663e+01
     Epoch = 6 loss = 6.8915850639e+01
     Epoch = 7 loss = 6.8284161568e+01
     Epoch = 8 loss = 6.7668614388e+01
     Epoch = 9 loss = 6.7058616638e+01
     Epoch = 10 loss = 6.6683934212e+01
-    2021-08-11 22:53:27.929 | INFO | kliff.loss:minimize:763 - Finish minimization using optimization method: Adam.
+    2021-11-20 22:34:33.793 | INFO | kliff.loss:minimize:763 - Finish minimization using optimization method: Adam.
 
-.. GENERATED FROM PYTHON SOURCE LINES 136-139
+.. GENERATED FROM PYTHON SOURCE LINES 138-141
 
 We can save the trained model to disk, and later can load it back if we want. We can
 also write the trained model to a KIM model such that it can be used in other simulation
 codes such as LAMMPS via the KIM API.
 
-.. GENERATED FROM PYTHON SOURCE LINES 139-146
+.. GENERATED FROM PYTHON SOURCE LINES 141-148
 
 .. code-block:: default
 
@@ -277,12 +280,12 @@ codes such as LAMMPS via the KIM API.
 
     .. code-block:: none
 
-    2021-08-11 22:53:28.005 | INFO | kliff.models.neural_network:write_kim_model:111 - KLIFF trained model written to /Users/mjwen/Applications/kliff/examples/NeuralNetwork_KLIFF__MO_000000111111_000.
+    2021-11-20 22:34:33.901 | INFO | kliff.models.neural_network:write_kim_model:111 - KLIFF trained model written to /Users/mjwen/Applications/kliff/examples/NeuralNetwork_KLIFF__MO_000000111111_000.
 
-.. GENERATED FROM PYTHON SOURCE LINES 147-152
+.. GENERATED FROM PYTHON SOURCE LINES 149-154
 
 .. note::
    Now we have trained an NN for a single specie Si. If you have multiple species in
    your system and want to use different parameters for different species, take a look
    at the `tut_nn_multi_spec` example.
 
@@ -293,7 +296,7 @@
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 0 minutes 49.215 seconds)
+   **Total running time of the script:** ( 0 minutes 48.952 seconds)
 
 .. _sphx_glr_download_auto_examples_example_nn_Si.py:
diff --git a/docs/source/auto_examples/example_nn_SiC.ipynb b/docs/source/auto_examples/example_nn_SiC.ipynb
index 026f4a45..7924c047 100644
--- a/docs/source/auto_examples/example_nn_SiC.ipynb
+++ b/docs/source/auto_examples/example_nn_SiC.ipynb
@@ -26,7 +26,7 @@
 },
 "outputs": [],
 "source": [
-    "from kliff import nn\nfrom kliff.calculators.calculator_torch import CalculatorTorchSeparateSpecies\nfrom kliff.dataset import Dataset\nfrom kliff.descriptors import SymmetryFunction\nfrom kliff.loss import Loss\nfrom kliff.models import NeuralNetwork\nfrom kliff.utils import download_dataset\n\ndescriptor = SymmetryFunction(\n    cut_name=\"cos\",\n    cut_dists={\"Si-Si\": 5.0, \"C-C\": 5.0, \"Si-C\": 5.0},\n    hyperparams=\"set51\",\n    normalize=True,\n)\n\nN1 = 10\nN2 = 10\nmodel_si = NeuralNetwork(descriptor)\nmodel_si.add_layers(\n    # first hidden layer\n    nn.Linear(descriptor.get_size(), N1),\n    nn.Tanh(),\n    # second hidden layer\n    nn.Linear(N1, N2),\n    nn.Tanh(),\n    # output layer\n    nn.Linear(N2, 1),\n)\nmodel_si.set_save_metadata(prefix=\"./kliff_saved_model_si\", start=5, frequency=2)\n\n\nN1 = 10\nN2 = 10\nmodel_c = NeuralNetwork(descriptor)\nmodel_c.add_layers(\n    # first hidden layer\n    nn.Linear(descriptor.get_size(), N1),\n    nn.Tanh(),\n    # second hidden layer\n    nn.Linear(N1, N2),\n    nn.Tanh(),\n    # output layer\n    nn.Linear(N2, 1),\n)\nmodel_c.set_save_metadata(prefix=\"./kliff_saved_model_c\", start=5, frequency=2)\n\n\n# training set\ndataset_path = download_dataset(dataset_name=\"SiC_training_set\")\ntset = Dataset(dataset_path)\nconfigs = tset.get_configs()\n\n# calculator\ncalc = CalculatorTorchSeparateSpecies({\"Si\": model_si, \"C\": model_c})\n_ = calc.create(configs, reuse=False)\n\n# loss\nloss = Loss(calc, residual_data={\"forces_weight\": 0.3})\nresult = loss.minimize(method=\"Adam\", num_epochs=10, batch_size=4, lr=0.001)"
+    "from kliff import nn\nfrom kliff.calculators.calculator_torch import CalculatorTorchSeparateSpecies\nfrom kliff.dataset import Dataset\nfrom kliff.descriptors import SymmetryFunction\nfrom kliff.loss import Loss\nfrom kliff.models import NeuralNetwork\nfrom kliff.utils import download_dataset\n\ndescriptor = SymmetryFunction(\n    cut_name=\"cos\",\n    cut_dists={\"Si-Si\": 5.0, \"C-C\": 5.0, \"Si-C\": 5.0},\n    hyperparams=\"set51\",\n    normalize=True,\n)\n\nN1 = 10\nN2 = 10\nmodel_si = NeuralNetwork(descriptor)\nmodel_si.add_layers(\n    # first hidden layer\n    nn.Linear(descriptor.get_size(), N1),\n    nn.Tanh(),\n    # second hidden layer\n    nn.Linear(N1, N2),\n    nn.Tanh(),\n    # output layer\n    nn.Linear(N2, 1),\n)\nmodel_si.set_save_metadata(prefix=\"./kliff_saved_model_si\", start=5, frequency=2)\n\n\nN1 = 10\nN2 = 10\nmodel_c = NeuralNetwork(descriptor)\nmodel_c.add_layers(\n    # first hidden layer\n    nn.Linear(descriptor.get_size(), N1),\n    nn.Tanh(),\n    # second hidden layer\n    nn.Linear(N1, N2),\n    nn.Tanh(),\n    # output layer\n    nn.Linear(N2, 1),\n)\nmodel_c.set_save_metadata(prefix=\"./kliff_saved_model_c\", start=5, frequency=2)\n\n\n# training set\ndataset_path = download_dataset(dataset_name=\"SiC_training_set\")\ntset = Dataset(dataset_path)\nconfigs = tset.get_configs()\n\n# calculator\ncalc = CalculatorTorchSeparateSpecies({\"Si\": model_si, \"C\": model_c}, gpu=False)\n_ = calc.create(configs, reuse=False)\n\n# loss\nloss = Loss(calc, residual_data={\"forces_weight\": 0.3})\nresult = loss.minimize(method=\"Adam\", num_epochs=10, batch_size=4, lr=0.001)"
 ]
 },
 {
diff --git a/docs/source/auto_examples/example_nn_SiC.py b/docs/source/auto_examples/example_nn_SiC.py
index 26f047ce..26af24a1 100644
--- a/docs/source/auto_examples/example_nn_SiC.py
+++ b/docs/source/auto_examples/example_nn_SiC.py
@@ -63,7 +63,7 @@
 configs = tset.get_configs()
 
 # calculator
-calc = CalculatorTorchSeparateSpecies({"Si": model_si, "C": model_c})
+calc = CalculatorTorchSeparateSpecies({"Si": model_si, "C": model_c}, gpu=False)
 _ = calc.create(configs, reuse=False)
 
 # loss
diff --git a/docs/source/auto_examples/example_nn_SiC.py.md5 b/docs/source/auto_examples/example_nn_SiC.py.md5
index 45c470b8..993ea913 100644
--- a/docs/source/auto_examples/example_nn_SiC.py.md5
+++ b/docs/source/auto_examples/example_nn_SiC.py.md5
@@ -1 +1 @@
-19b2671ec2e63ff6e1b5732e4d7095d1
\ No newline at end of file
+dcdf008a81dc6a79cdb8b3dc7c95fcd1
\ No newline at end of file
diff --git a/docs/source/auto_examples/example_nn_SiC.rst b/docs/source/auto_examples/example_nn_SiC.rst
index c37d42be..e946413b 100644
--- a/docs/source/auto_examples/example_nn_SiC.rst
+++ b/docs/source/auto_examples/example_nn_SiC.rst
@@ -86,7 +86,7 @@ specie (take a look at :ref:`tut_nn` for Si if you haven't yet).
     configs = tset.get_configs()
 
     # calculator
-    calc = CalculatorTorchSeparateSpecies({"Si": model_si, "C": model_c})
+    calc = CalculatorTorchSeparateSpecies({"Si": model_si, "C": model_c}, gpu=False)
     _ = calc.create(configs, reuse=False)
 
     # loss
@@ -104,14 +104,6 @@ specie (take a look at :ref:`tut_nn` for Si if you haven't yet).
 
     .. code-block:: none
 
-    2021-08-03 11:20:34.072 | INFO | kliff.dataset.dataset:_read:370 - 10 configurations read from /Users/mjwen/Applications/kliff/examples/SiC_training_set
-    2021-08-03 11:20:34.072 | INFO | kliff.descriptors.descriptor:generate_fingerprints:103 - Start computing mean and stdev of fingerprints.
-    2021-08-03 11:20:35.093 | INFO | kliff.descriptors.descriptor:generate_fingerprints:120 - Finish computing mean and stdev of fingerprints.
-    2021-08-03 11:20:35.097 | INFO | kliff.descriptors.descriptor:generate_fingerprints:128 - Fingerprints mean and stdev saved to `fingerprints_mean_and_stdev.pkl`.
-    2021-08-03 11:20:35.097 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:163 - Pickling fingerprints to `fingerprints.pkl`
-    2021-08-03 11:20:35.125 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:174 - Processing configuration: 0.
-    2021-08-03 11:20:35.168 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:217 - Pickle 10 configurations finished.
-    2021-08-03 11:20:35.172 | INFO | kliff.loss:minimize:708 - Start minimization using optimization method: Adam.
     Epoch = 0 loss = 5.7247632980e+01
     Epoch = 1 loss = 5.7215625763e+01
     Epoch = 2 loss = 5.7186323166e+01
@@ -123,7 +115,6 @@ specie (take a look at :ref:`tut_nn` for Si if you haven't yet).
     Epoch = 8 loss = 5.7020624161e+01
     Epoch = 9 loss = 5.6992567062e+01
     Epoch = 10 loss = 5.6973577499e+01
-    2021-08-03 11:20:35.602 | INFO | kliff.loss:minimize:763 - Finish minimization using optimization method: Adam.
 
@@ -152,7 +143,7 @@ codes such as LAMMPS via the KIM API.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 0 minutes 3.271 seconds)
+   **Total running time of the script:** ( 0 minutes 3.036 seconds)
 
 .. _sphx_glr_download_auto_examples_example_nn_SiC.py:
diff --git a/docs/source/auto_examples/example_nn_SiC_codeobj.pickle b/docs/source/auto_examples/example_nn_SiC_codeobj.pickle
index daea2e07..fbb4130b 100644
Binary files a/docs/source/auto_examples/example_nn_SiC_codeobj.pickle and b/docs/source/auto_examples/example_nn_SiC_codeobj.pickle differ
diff --git a/docs/source/auto_examples/example_nn_Si_codeobj.pickle b/docs/source/auto_examples/example_nn_Si_codeobj.pickle
index 7a354061..bee18e68 100644
Binary files a/docs/source/auto_examples/example_nn_Si_codeobj.pickle and b/docs/source/auto_examples/example_nn_Si_codeobj.pickle differ
diff --git a/docs/source/auto_examples/sg_execution_times.rst b/docs/source/auto_examples/sg_execution_times.rst
index 0baa6014..4de9a2bf 100644
--- a/docs/source/auto_examples/sg_execution_times.rst
+++ b/docs/source/auto_examples/sg_execution_times.rst
@@ -5,16 +5,16 @@
 Computation times
 =================
-**00:22.986** total execution time for **auto_examples** files:
+**00:51.988** total execution time for **auto_examples** files:
 
 +-----------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_auto_examples_example_linear_regression.py` (``example_linear_regression.py``) | 00:22.986 | 0.0 MB |
+| :ref:`sphx_glr_auto_examples_example_nn_Si.py` (``example_nn_Si.py``)                         | 00:48.952 | 0.0 MB |
++-----------------------------------------------------------------------------------------------+-----------+--------+
+| :ref:`sphx_glr_auto_examples_example_nn_SiC.py` (``example_nn_SiC.py``)                       | 00:03.036 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_auto_examples_example_kim_SW_Si.py` (``example_kim_SW_Si.py``)                 | 00:00.000 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_auto_examples_example_lennard_jones.py` (``example_lennard_jones.py``)         | 00:00.000 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_auto_examples_example_nn_Si.py` (``example_nn_Si.py``)                         | 00:00.000 | 0.0 MB |
-+-----------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_auto_examples_example_nn_SiC.py` (``example_nn_SiC.py``)                       | 00:00.000 | 0.0 MB |
+| :ref:`sphx_glr_auto_examples_example_linear_regression.py` (``example_linear_regression.py``) | 00:00.000 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/docs/source/conf.py b/docs/source/conf.py
index 936e3d36..cb8d01a3 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -39,7 +39,7 @@
 # The short X.Y version
 version = "0.3"
 # The full version, including alpha/beta/rc tags
-release = "0.3.0"
+release = "0.3.1"
 
 # -- General configuration ----------------------------------------------------
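Taken together, the changes above document a single new knob: the ``gpu`` argument of the torch calculators. A minimal end-to-end sketch of the Si workflow with that knob in place is shown below. It is assembled from the snippets in this diff; the ``from kliff.calculators import CalculatorTorch`` import path, the ``hyperparams`` set, and the ``batch_size`` value are assumptions for illustration, and whether a GPU is actually used also depends on the local torch installation.

.. code-block:: python

    from kliff import nn
    from kliff.calculators import CalculatorTorch  # assumed import path
    from kliff.dataset import Dataset
    from kliff.descriptors import SymmetryFunction
    from kliff.loss import Loss
    from kliff.models import NeuralNetwork
    from kliff.utils import download_dataset

    # symmetry-function descriptor; hyperparameter set borrowed from the SiC example
    descriptor = SymmetryFunction(
        cut_name="cos", cut_dists={"Si-Si": 5.0}, hyperparams="set51", normalize=True
    )

    # NN model: two hidden layers (10 nodes each) with tanh, linear output of size 1
    N1, N2 = 10, 10
    model = NeuralNetwork(descriptor)
    model.add_layers(
        nn.Linear(descriptor.get_size(), N1),
        nn.Tanh(),
        nn.Linear(N1, N2),
        nn.Tanh(),
        nn.Linear(N2, 1),
    )
    model.set_save_metadata(prefix="./kliff_saved_model", start=5, frequency=2)

    # training set
    dataset_path = download_dataset(dataset_name="Si_training_set")
    configs = Dataset(dataset_path.joinpath("varying_alat")).get_configs()

    # calculator; set gpu=True here to train on a GPU instead of the CPU
    calc = CalculatorTorch(model, gpu=False)
    calc.create(configs, reuse=False)

    # Adam optimization over mini-batches; batch_size is illustrative
    loss = Loss(calc, residual_data={"forces_weight": 0.3})
    loss.minimize(method="Adam", num_epochs=10, batch_size=4, lr=0.001)

    # export as a KIM model for use in, e.g., LAMMPS via the KIM API
    model.write_kim_model()

With ``gpu=False`` the calculator logs ``Training on cpu``, as in the regenerated output above; with ``gpu=True`` the selected device should be reported instead.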