
Commit

Update version to 0.3.1 and generate docs
mjwen committed Nov 21, 2021
1 parent be9c679 commit 26d3ccf
Showing 14 changed files with 42 additions and 46 deletions.
Binary file modified docs/source/auto_examples/auto_examples_jupyter.zip
Binary file modified docs/source/auto_examples/auto_examples_python.zip
4 changes: 2 additions & 2 deletions docs/source/auto_examples/example_nn_Si.ipynb
@@ -76,7 +76,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"In the above code, we build a NN model with an input layer, two hidden layers, and an\noutput layer. The ``descriptor`` carries the information of the input layer, so it does\nnot need to be specified explicitly. For each hidden layer, we first do a linear\ntransformation using ``nn.Linear(size_in, size_out)`` (essentially carrying out $y\n= xW+b$, where $W$ is the weight matrix of size ``size_in`` by ``size_out``, and\n$b$ is a vector of size ``size_out``). Then we apply the hyperbolic tangent\nactivation function ``nn.Tanh()`` to the output of the Linear layer (i.e. $y$) so\nas to add the nonlinearity. We use a Linear layer for the output layer as well, but\nunlike the hidden layers, no activation function is applied here. The input size\n``size_in`` of the first hidden layer must be the size of the descriptor, which is\nobtained using ``descriptor.get_size()``. For all other layers (hidden or output), the\ninput size must be equal to the output size of the previous layer. The ``out_size`` of\nthe output layer must be 1 such that the output of the NN model gives the energy of the\natom.\n\nThe ``set_save_metadata`` function call informs where to save intermediate models during\nthe optimization (discussed below), as well as the starting epoch and how often to save\nthe model.\n\n\n## Training set and calculator\n\nThe training set and the calculator are the same as explained in `tut_kim_sw`. The\nonly difference is that we need to use the\n:mod:`~kliff.calculators.CalculatorTorch()`, which is targeted for the NN model.\nAlso, its ``create()`` method takes an argument ``reuse`` to inform whether to reuse the\nfingerprints generated from the descriptor if they are present.\n\n"
"In the above code, we build a NN model with an input layer, two hidden layers, and an\noutput layer. The ``descriptor`` carries the information of the input layer, so it does\nnot need to be specified explicitly. For each hidden layer, we first do a linear\ntransformation using ``nn.Linear(size_in, size_out)`` (essentially carrying out $y\n= xW+b$, where $W$ is the weight matrix of size ``size_in`` by ``size_out``, and\n$b$ is a vector of size ``size_out``). Then we apply the hyperbolic tangent\nactivation function ``nn.Tanh()`` to the output of the Linear layer (i.e. $y$) so\nas to add the nonlinearity. We use a Linear layer for the output layer as well, but\nunlike the hidden layers, no activation function is applied here. The input size\n``size_in`` of the first hidden layer must be the size of the descriptor, which is\nobtained using ``descriptor.get_size()``. For all other layers (hidden or output), the\ninput size must be equal to the output size of the previous layer. The ``out_size`` of\nthe output layer must be 1 such that the output of the NN model gives the energy of the\natom.\n\nThe ``set_save_metadata`` function call informs where to save intermediate models during\nthe optimization (discussed below), as well as the starting epoch and how often to save\nthe model.\n\n\n## Training set and calculator\n\nThe training set and the calculator are the same as explained in `tut_kim_sw`. The\nonly difference is that we need to use the\n:mod:`~kliff.calculators.CalculatorTorch()`, which is targeted for the NN model.\nAlso, its ``create()`` method takes an argument ``reuse`` to inform whether to reuse the\nfingerprints generated from the descriptor if they are present.\nTo train on a GPU, set ``gpu=True`` in ``Calculator``.\n\n\n"
]
},
{
@@ -87,7 +87,7 @@
},
"outputs": [],
"source": [
"# training set\ndataset_path = download_dataset(dataset_name=\"Si_training_set\")\ndataset_path = dataset_path.joinpath(\"varying_alat\")\ntset = Dataset(dataset_path)\nconfigs = tset.get_configs()\n\n# calculator\ncalc = CalculatorTorch(model)\n_ = calc.create(configs, reuse=False)"
"# training set\ndataset_path = download_dataset(dataset_name=\"Si_training_set\")\ndataset_path = dataset_path.joinpath(\"varying_alat\")\ntset = Dataset(dataset_path)\nconfigs = tset.get_configs()\n\n# calculator\ncalc = CalculatorTorch(model, gpu=False)\n_ = calc.create(configs, reuse=False)"
]
},
{
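The notebook cell above describes each hidden layer as a linear map $y = xW + b$ followed by ``tanh``, with a final linear layer of output size 1. That recipe can be sketched in plain Python; this is just the math the text describes, not KLIFF's implementation, and the descriptor size of 51 is an illustrative assumption (KLIFF would report it via ``descriptor.get_size()``):

```python
import math
import random

random.seed(0)

def linear(x, W, b):
    # y = x W + b: W has size_in rows and size_out columns
    return [sum(xi * wij for xi, wij in zip(x, col)) + bj
            for col, bj in zip(zip(*W), b)]

def make_layer(size_in, size_out):
    W = [[random.uniform(-0.1, 0.1) for _ in range(size_out)] for _ in range(size_in)]
    b = [0.0] * size_out
    return W, b

def forward(x, layers):
    *hidden, out = layers
    for W, b in hidden:
        x = [math.tanh(v) for v in linear(x, W, b)]  # Linear followed by Tanh
    return linear(x, *out)  # output layer: Linear only, no activation

desc_size = 51   # illustrative descriptor size
N1 = N2 = 10
layers = [make_layer(desc_size, N1), make_layer(N1, N2), make_layer(N2, 1)]
energy = forward([random.random() for _ in range(desc_size)], layers)  # one atomic energy
```

Note how the input size of each layer matches the output size of the previous one, and the output layer has size 1, mirroring the constraints stated in the cell.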
4 changes: 3 additions & 1 deletion docs/source/auto_examples/example_nn_Si.py
@@ -102,6 +102,8 @@
# :mod:`~kliff.calculators.CalculatorTorch()`, which is targeted for the NN model.
# Also, its ``create()`` method takes an argument ``reuse`` to inform whether to reuse the
# fingerprints generated from the descriptor if they are present.
# To train on a GPU, set ``gpu=True`` in ``Calculator``.
#

# training set
dataset_path = download_dataset(dataset_name="Si_training_set")
@@ -110,7 +112,7 @@
configs = tset.get_configs()

# calculator
calc = CalculatorTorch(model)
calc = CalculatorTorch(model, gpu=False)
_ = calc.create(configs, reuse=False)


2 changes: 1 addition & 1 deletion docs/source/auto_examples/example_nn_Si.py.md5
@@ -1 +1 @@
61f8e82a0dc400f3c2e1e616e8f44301
ddfc7cb67629dfea5b40790f4f7ce5e0
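The ``.py.md5`` file updated here is sphinx-gallery's change detector: it stores a hash of the example script, and the script is re-executed only when the current hash no longer matches. A minimal sketch of that mechanism (function and file names are illustrative, not sphinx-gallery's API):

```python
import hashlib
import tempfile
from pathlib import Path

def needs_rebuild(script: Path, stamp: Path) -> bool:
    """Re-execute an example only when its source hash differs from the stored one."""
    current = hashlib.md5(script.read_bytes()).hexdigest()
    return (not stamp.exists()) or stamp.read_text().strip() != current

tmp = Path(tempfile.mkdtemp())
script = tmp / "example.py"        # illustrative file names
stamp = tmp / "example.py.md5"

script.write_text("print('v1')\n")
first = needs_rebuild(script, stamp)                            # no stamp yet -> rebuild
stamp.write_text(hashlib.md5(script.read_bytes()).hexdigest())  # record the hash
second = needs_rebuild(script, stamp)                           # unchanged -> skip
script.write_text("print('v2')\n")
third = needs_rebuild(script, stamp)                            # source edited -> rebuild
```

This is why editing ``example_nn_Si.py`` in this commit also refreshes its ``.md5`` stamp and regenerated outputs.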
47 changes: 25 additions & 22 deletions docs/source/auto_examples/example_nn_Si.rst
@@ -128,7 +128,7 @@ We can then build the NN model on top of the descriptor.
.. GENERATED FROM PYTHON SOURCE LINES 77-105
.. GENERATED FROM PYTHON SOURCE LINES 77-107
In the above code, we build a NN model with an input layer, two hidden layers, and an
output layer. The ``descriptor`` carries the information of the input layer, so it is
@@ -158,8 +158,10 @@ only difference is that we need to use the
:mod:`~kliff.calculators.CalculatorTorch()`, which is targeted for the NN model.
Also, its ``create()`` method takes an argument ``reuse`` to inform whether to reuse the
fingerprints generated from the descriptor if they are present.
To train on a GPU, set ``gpu=True`` in ``Calculator``.

.. GENERATED FROM PYTHON SOURCE LINES 105-117

.. GENERATED FROM PYTHON SOURCE LINES 107-119
.. code-block:: default
@@ -171,7 +173,7 @@ fingerprints generated from the descriptor if they are present.
configs = tset.get_configs()
# calculator
calc = CalculatorTorch(model)
calc = CalculatorTorch(model, gpu=False)
_ = calc.create(configs, reuse=False)
@@ -185,21 +187,22 @@ fingerprints generated from the descriptor if they are present.

.. code-block:: none
2021-08-11 22:52:40.505 | INFO | kliff.dataset.dataset:_read:370 - 400 configurations read from /Users/mjwen/Applications/kliff/examples/Si_training_set/varying_alat
2021-08-11 22:52:40.505 | INFO | kliff.descriptors.descriptor:generate_fingerprints:103 - Start computing mean and stdev of fingerprints.
2021-08-11 22:53:13.620 | INFO | kliff.descriptors.descriptor:generate_fingerprints:120 - Finish computing mean and stdev of fingerprints.
2021-08-11 22:53:13.622 | INFO | kliff.descriptors.descriptor:generate_fingerprints:128 - Fingerprints mean and stdev saved to `fingerprints_mean_and_stdev.pkl`.
2021-08-11 22:53:13.622 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:163 - Pickling fingerprints to `fingerprints.pkl`
2021-08-11 22:53:13.662 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:175 - Processing configuration: 0.
2021-08-11 22:53:13.956 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:175 - Processing configuration: 100.
2021-08-11 22:53:14.244 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:175 - Processing configuration: 200.
2021-08-11 22:53:14.624 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:175 - Processing configuration: 300.
2021-08-11 22:53:15.100 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:218 - Pickle 400 configurations finished.
2021-11-20 22:33:47.584 | INFO | kliff.dataset.dataset:_read:370 - 400 configurations read from /Users/mjwen/Applications/kliff/examples/Si_training_set/varying_alat
2021-11-20 22:33:47.585 | INFO | kliff.calculators.calculator_torch:_get_device:417 - Training on cpu
2021-11-20 22:33:47.586 | INFO | kliff.descriptors.descriptor:generate_fingerprints:103 - Start computing mean and stdev of fingerprints.
2021-11-20 22:34:24.241 | INFO | kliff.descriptors.descriptor:generate_fingerprints:120 - Finish computing mean and stdev of fingerprints.
2021-11-20 22:34:24.244 | INFO | kliff.descriptors.descriptor:generate_fingerprints:128 - Fingerprints mean and stdev saved to `fingerprints_mean_and_stdev.pkl`.
2021-11-20 22:34:24.244 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:163 - Pickling fingerprints to `fingerprints.pkl`
2021-11-20 22:34:24.908 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:175 - Processing configuration: 0.
2021-11-20 22:34:25.779 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:175 - Processing configuration: 100.
2021-11-20 22:34:26.898 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:175 - Processing configuration: 200.
2021-11-20 22:34:28.475 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:175 - Processing configuration: 300.
2021-11-20 22:34:29.533 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:218 - Pickle 400 configurations finished.
.. GENERATED FROM PYTHON SOURCE LINES 118-130
.. GENERATED FROM PYTHON SOURCE LINES 120-132
Loss function
-------------
@@ -214,7 +217,7 @@ through the training set for ``10`` epochs. The learning rate ``lr`` used here is
``0.001``, and typically, one may need to play with this to find an acceptable one that
drives the loss down in a reasonable time.

.. GENERATED FROM PYTHON SOURCE LINES 130-135
.. GENERATED FROM PYTHON SOURCE LINES 132-137
.. code-block:: default
@@ -233,7 +236,7 @@

.. code-block:: none
2021-08-11 22:53:15.324 | INFO | kliff.loss:minimize:708 - Start minimization using optimization method: Adam.
2021-11-20 22:34:29.791 | INFO | kliff.loss:minimize:708 - Start minimization using optimization method: Adam.
Epoch = 0 loss = 7.3307514191e+01
Epoch = 1 loss = 7.2090656281e+01
Epoch = 2 loss = 7.1389846802e+01
@@ -245,18 +248,18 @@ drives the loss down in a reasonable time.
Epoch = 8 loss = 6.7668614388e+01
Epoch = 9 loss = 6.7058616638e+01
Epoch = 10 loss = 6.6683934212e+01
2021-08-11 22:53:27.929 | INFO | kliff.loss:minimize:763 - Finish minimization using optimization method: Adam.
2021-11-20 22:34:33.793 | INFO | kliff.loss:minimize:763 - Finish minimization using optimization method: Adam.
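The log above shows Adam steadily lowering the loss over the 10 epochs. The update rule behind that behavior can be sketched on a toy one-parameter problem; this is textbook Adam, not KLIFF's training loop, and the quadratic loss is purely illustrative:

```python
import math

def adam_minimize(grad, x0, lr=0.01, steps=2000, beta1=0.9, beta2=0.999, eps=1e-8):
    x, m, v = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g        # first-moment (momentum) estimate
        v = beta2 * v + (1 - beta2) * g * g    # second-moment estimate
        m_hat = m / (1 - beta1 ** t)           # bias correction
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

# toy "loss" (x - 3)^2 with gradient 2(x - 3); the minimum is at x = 3
x_min = adam_minimize(lambda x: 2.0 * (x - 3.0), x0=0.0)
```

Because the step size is normalized by the second-moment estimate, the parameter moves at roughly ``lr`` per step regardless of the gradient's scale, which is why a small learning rate such as ``0.001`` gives the smooth, gradual descent seen in the log.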
.. GENERATED FROM PYTHON SOURCE LINES 136-139
.. GENERATED FROM PYTHON SOURCE LINES 138-141
We can save the trained model to disk and load it back later if we want. We can
also write the trained model to a KIM model such that it can be used in other simulation
codes such as LAMMPS via the KIM API.

.. GENERATED FROM PYTHON SOURCE LINES 139-146
.. GENERATED FROM PYTHON SOURCE LINES 141-148
.. code-block:: default
@@ -277,12 +280,12 @@ codes such as LAMMPS via the KIM API.

.. code-block:: none
2021-08-11 22:53:28.005 | INFO | kliff.models.neural_network:write_kim_model:111 - KLIFF trained model written to /Users/mjwen/Applications/kliff/examples/NeuralNetwork_KLIFF__MO_000000111111_000.
2021-11-20 22:34:33.901 | INFO | kliff.models.neural_network:write_kim_model:111 - KLIFF trained model written to /Users/mjwen/Applications/kliff/examples/NeuralNetwork_KLIFF__MO_000000111111_000.
.. GENERATED FROM PYTHON SOURCE LINES 147-152
.. GENERATED FROM PYTHON SOURCE LINES 149-154
.. note::
Now we have trained an NN for a single species, Si. If you have multiple species in
@@ -293,7 +296,7 @@ codes such as LAMMPS via the KIM API.

.. rst-class:: sphx-glr-timing

**Total running time of the script:** ( 0 minutes 49.215 seconds)
**Total running time of the script:** ( 0 minutes 48.952 seconds)


.. _sphx_glr_download_auto_examples_example_nn_Si.py:
2 changes: 1 addition & 1 deletion docs/source/auto_examples/example_nn_SiC.ipynb
@@ -26,7 +26,7 @@
},
"outputs": [],
"source": [
"from kliff import nn\nfrom kliff.calculators.calculator_torch import CalculatorTorchSeparateSpecies\nfrom kliff.dataset import Dataset\nfrom kliff.descriptors import SymmetryFunction\nfrom kliff.loss import Loss\nfrom kliff.models import NeuralNetwork\nfrom kliff.utils import download_dataset\n\ndescriptor = SymmetryFunction(\n cut_name=\"cos\",\n cut_dists={\"Si-Si\": 5.0, \"C-C\": 5.0, \"Si-C\": 5.0},\n hyperparams=\"set51\",\n normalize=True,\n)\n\nN1 = 10\nN2 = 10\nmodel_si = NeuralNetwork(descriptor)\nmodel_si.add_layers(\n # first hidden layer\n nn.Linear(descriptor.get_size(), N1),\n nn.Tanh(),\n # second hidden layer\n nn.Linear(N1, N2),\n nn.Tanh(),\n # output layer\n nn.Linear(N2, 1),\n)\nmodel_si.set_save_metadata(prefix=\"./kliff_saved_model_si\", start=5, frequency=2)\n\n\nN1 = 10\nN2 = 10\nmodel_c = NeuralNetwork(descriptor)\nmodel_c.add_layers(\n # first hidden layer\n nn.Linear(descriptor.get_size(), N1),\n nn.Tanh(),\n # second hidden layer\n nn.Linear(N1, N2),\n nn.Tanh(),\n # output layer\n nn.Linear(N2, 1),\n)\nmodel_c.set_save_metadata(prefix=\"./kliff_saved_model_c\", start=5, frequency=2)\n\n\n# training set\ndataset_path = download_dataset(dataset_name=\"SiC_training_set\")\ntset = Dataset(dataset_path)\nconfigs = tset.get_configs()\n\n# calculator\ncalc = CalculatorTorchSeparateSpecies({\"Si\": model_si, \"C\": model_c})\n_ = calc.create(configs, reuse=False)\n\n# loss\nloss = Loss(calc, residual_data={\"forces_weight\": 0.3})\nresult = loss.minimize(method=\"Adam\", num_epochs=10, batch_size=4, lr=0.001)"
"from kliff import nn\nfrom kliff.calculators.calculator_torch import CalculatorTorchSeparateSpecies\nfrom kliff.dataset import Dataset\nfrom kliff.descriptors import SymmetryFunction\nfrom kliff.loss import Loss\nfrom kliff.models import NeuralNetwork\nfrom kliff.utils import download_dataset\n\ndescriptor = SymmetryFunction(\n cut_name=\"cos\",\n cut_dists={\"Si-Si\": 5.0, \"C-C\": 5.0, \"Si-C\": 5.0},\n hyperparams=\"set51\",\n normalize=True,\n)\n\nN1 = 10\nN2 = 10\nmodel_si = NeuralNetwork(descriptor)\nmodel_si.add_layers(\n # first hidden layer\n nn.Linear(descriptor.get_size(), N1),\n nn.Tanh(),\n # second hidden layer\n nn.Linear(N1, N2),\n nn.Tanh(),\n # output layer\n nn.Linear(N2, 1),\n)\nmodel_si.set_save_metadata(prefix=\"./kliff_saved_model_si\", start=5, frequency=2)\n\n\nN1 = 10\nN2 = 10\nmodel_c = NeuralNetwork(descriptor)\nmodel_c.add_layers(\n # first hidden layer\n nn.Linear(descriptor.get_size(), N1),\n nn.Tanh(),\n # second hidden layer\n nn.Linear(N1, N2),\n nn.Tanh(),\n # output layer\n nn.Linear(N2, 1),\n)\nmodel_c.set_save_metadata(prefix=\"./kliff_saved_model_c\", start=5, frequency=2)\n\n\n# training set\ndataset_path = download_dataset(dataset_name=\"SiC_training_set\")\ntset = Dataset(dataset_path)\nconfigs = tset.get_configs()\n\n# calculator\ncalc = CalculatorTorchSeparateSpecies({\"Si\": model_si, \"C\": model_c}, gpu=False)\n_ = calc.create(configs, reuse=False)\n\n# loss\nloss = Loss(calc, residual_data={\"forces_weight\": 0.3})\nresult = loss.minimize(method=\"Adam\", num_epochs=10, batch_size=4, lr=0.001)"
]
},
{
2 changes: 1 addition & 1 deletion docs/source/auto_examples/example_nn_SiC.py
@@ -63,7 +63,7 @@
configs = tset.get_configs()

# calculator
calc = CalculatorTorchSeparateSpecies({"Si": model_si, "C": model_c})
calc = CalculatorTorchSeparateSpecies({"Si": model_si, "C": model_c}, gpu=False)
_ = calc.create(configs, reuse=False)

# loss
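The ``CalculatorTorchSeparateSpecies`` call above takes a dict mapping each species to its own network. The dispatch idea is simple: route each atom's fingerprint to the model for its species and sum the atomic energies. A minimal sketch of that routing; ``model_si`` and ``model_c`` here are toy stand-in functions with made-up constants, not the trained networks:

```python
# Toy stand-ins for the two per-species models (hypothetical energies).
def model_si(fingerprint):
    return -4.6 + 0.1 * sum(fingerprint)   # hypothetical Si atomic energy

def model_c(fingerprint):
    return -7.4 + 0.2 * sum(fingerprint)   # hypothetical C atomic energy

models = {"Si": model_si, "C": model_c}

def total_energy(config):
    # dispatch each atom's fingerprint to the model for its species, then sum
    return sum(models[species](fp) for species, fp in config)

config = [("Si", [0.1, 0.2]), ("C", [0.3]), ("Si", [0.0, 0.0])]
e = total_energy(config)
```

Because both models share one descriptor here (the same ``SymmetryFunction`` instance), the fingerprints are computed once and only the per-atom network differs by species.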
2 changes: 1 addition & 1 deletion docs/source/auto_examples/example_nn_SiC.py.md5
@@ -1 +1 @@
19b2671ec2e63ff6e1b5732e4d7095d1
dcdf008a81dc6a79cdb8b3dc7c95fcd1
13 changes: 2 additions & 11 deletions docs/source/auto_examples/example_nn_SiC.rst
@@ -86,7 +86,7 @@ species (take a look at :ref:`tut_nn` for Si if you haven't yet).
configs = tset.get_configs()
# calculator
calc = CalculatorTorchSeparateSpecies({"Si": model_si, "C": model_c})
calc = CalculatorTorchSeparateSpecies({"Si": model_si, "C": model_c}, gpu=False)
_ = calc.create(configs, reuse=False)
# loss
@@ -104,14 +104,6 @@ species (take a look at :ref:`tut_nn` for Si if you haven't yet).

.. code-block:: none
2021-08-03 11:20:34.072 | INFO | kliff.dataset.dataset:_read:370 - 10 configurations read from /Users/mjwen/Applications/kliff/examples/SiC_training_set
2021-08-03 11:20:34.072 | INFO | kliff.descriptors.descriptor:generate_fingerprints:103 - Start computing mean and stdev of fingerprints.
2021-08-03 11:20:35.093 | INFO | kliff.descriptors.descriptor:generate_fingerprints:120 - Finish computing mean and stdev of fingerprints.
2021-08-03 11:20:35.097 | INFO | kliff.descriptors.descriptor:generate_fingerprints:128 - Fingerprints mean and stdev saved to `fingerprints_mean_and_stdev.pkl`.
2021-08-03 11:20:35.097 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:163 - Pickling fingerprints to `fingerprints.pkl`
2021-08-03 11:20:35.125 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:174 - Processing configuration: 0.
2021-08-03 11:20:35.168 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:217 - Pickle 10 configurations finished.
2021-08-03 11:20:35.172 | INFO | kliff.loss:minimize:708 - Start minimization using optimization method: Adam.
Epoch = 0 loss = 5.7247632980e+01
Epoch = 1 loss = 5.7215625763e+01
Epoch = 2 loss = 5.7186323166e+01
@@ -123,7 +115,6 @@ species (take a look at :ref:`tut_nn` for Si if you haven't yet).
Epoch = 8 loss = 5.7020624161e+01
Epoch = 9 loss = 5.6992567062e+01
Epoch = 10 loss = 5.6973577499e+01
2021-08-03 11:20:35.602 | INFO | kliff.loss:minimize:763 - Finish minimization using optimization method: Adam.
@@ -152,7 +143,7 @@ codes such as LAMMPS via the KIM API.
.. rst-class:: sphx-glr-timing

**Total running time of the script:** ( 0 minutes 3.271 seconds)
**Total running time of the script:** ( 0 minutes 3.036 seconds)


.. _sphx_glr_download_auto_examples_example_nn_SiC.py:
Binary file modified docs/source/auto_examples/example_nn_SiC_codeobj.pickle
Binary file modified docs/source/auto_examples/example_nn_Si_codeobj.pickle
10 changes: 5 additions & 5 deletions docs/source/auto_examples/sg_execution_times.rst
@@ -5,16 +5,16 @@

Computation times
=================
**00:22.986** total execution time for **auto_examples** files:
**00:51.988** total execution time for **auto_examples** files:

+-----------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_auto_examples_example_linear_regression.py` (``example_linear_regression.py``) | 00:22.986 | 0.0 MB |
| :ref:`sphx_glr_auto_examples_example_nn_Si.py` (``example_nn_Si.py``) | 00:48.952 | 0.0 MB |
+-----------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_auto_examples_example_nn_SiC.py` (``example_nn_SiC.py``) | 00:03.036 | 0.0 MB |
+-----------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_auto_examples_example_kim_SW_Si.py` (``example_kim_SW_Si.py``) | 00:00.000 | 0.0 MB |
+-----------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_auto_examples_example_lennard_jones.py` (``example_lennard_jones.py``) | 00:00.000 | 0.0 MB |
+-----------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_auto_examples_example_nn_Si.py` (``example_nn_Si.py``) | 00:00.000 | 0.0 MB |
+-----------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_auto_examples_example_nn_SiC.py` (``example_nn_SiC.py``) | 00:00.000 | 0.0 MB |
| :ref:`sphx_glr_auto_examples_example_linear_regression.py` (``example_linear_regression.py``) | 00:00.000 | 0.0 MB |
+-----------------------------------------------------------------------------------------------+-----------+--------+
2 changes: 1 addition & 1 deletion docs/source/conf.py
@@ -39,7 +39,7 @@
# The short X.Y version
version = "0.3"
# The full version, including alpha/beta/rc tags
release = "0.3.0"
release = "0.3.1"


# -- General configuration ---------------------------------------------------
