diff --git a/.github/workflows/publish.yml b/.github/workflows/publish.yml index 365e28e4..9d7fa95e 100644 --- a/.github/workflows/publish.yml +++ b/.github/workflows/publish.yml @@ -5,7 +5,7 @@ on: types: [created] jobs: - deploy: + deploy-pypi: runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 diff --git a/.github/workflows/update-precommit.yml b/.github/workflows/update-precommit.yml index 1591c4b0..b551d77b 100644 --- a/.github/workflows/update-precommit.yml +++ b/.github/workflows/update-precommit.yml @@ -8,10 +8,10 @@ jobs: auto-update: runs-on: ubuntu-latest steps: - - uses: actions/checkout@v2.3.4 + - uses: actions/checkout@v3 - name: Set up Python - uses: actions/setup-python@v2.2.2 + uses: actions/setup-python@v3 with: python-version: 3.9 diff --git a/devtool/README.md b/devtool/README.md index 4a3585ec..4a6c2311 100644 --- a/devtool/README.md +++ b/devtool/README.md @@ -3,13 +3,20 @@ - Change the version by modifying `major`, `minor` and `patch` in `./update_version.py` and then `$ python update_version.py` -- Format code `$ python format_sources.py` +- Format code + ```shell + $ mamba update -c conda-forge black + $ python format_sources.py + ```` - Update `kliff/docs/source/changelog.rst` - Update docs at `kliff/docs/source` as necessary - Generate docs by running `$ make html` in the `kliff/docs` directory + - remove `.git/hooks/pre-commit` so that it will not correct the generated + doc files, otherwise ReadTheDoc will try to regenerate it and then fail + - after commit it, then `$ pre-commit install` to get pre-commit back - Commit and merge it to the `docs` branch. [ReadTheDocs](https://readthedocs.org) is set up to watch this branch and will automatically generate the docs.) diff --git a/devtool/update_version.py b/devtool/update_version.py index e7160656..fb93dbff 100644 --- a/devtool/update_version.py +++ b/devtool/update_version.py @@ -30,8 +30,8 @@ def update_version(version, path, key, in_quotes=False, extra_space=False): if __name__ == "__main__": major = 0 - minor = 3 - patch = 3 + minor = 4 + patch = 0 mmp = f"{major}.{minor}.{patch}" mm = f"{major}.{minor}" diff --git a/docs/source/auto_examples/auto_examples_jupyter.zip b/docs/source/auto_examples/auto_examples_jupyter.zip index 3e07e022..771e09fa 100644 Binary files a/docs/source/auto_examples/auto_examples_jupyter.zip and b/docs/source/auto_examples/auto_examples_jupyter.zip differ diff --git a/docs/source/auto_examples/auto_examples_python.zip b/docs/source/auto_examples/auto_examples_python.zip index 8efbd5ee..5ae2c59e 100644 Binary files a/docs/source/auto_examples/auto_examples_python.zip and b/docs/source/auto_examples/auto_examples_python.zip differ diff --git a/docs/source/auto_examples/example_kim_SW_Si.ipynb b/docs/source/auto_examples/example_kim_SW_Si.ipynb index b4b9560d..bfbc7d4d 100644 --- a/docs/source/auto_examples/example_kim_SW_Si.ipynb +++ b/docs/source/auto_examples/example_kim_SW_Si.ipynb @@ -33,7 +33,7 @@ }, "outputs": [], "source": [ - "from kliff.calculators import Calculator\nfrom kliff.dataset import Dataset\nfrom kliff.loss import Loss\nfrom kliff.models import KIMModel\nfrom kliff.utils import download_dataset" + "from kliff.calculators import Calculator\nfrom kliff.dataset import Dataset\nfrom kliff.dataset.weight import Weight\nfrom kliff.loss import Loss\nfrom kliff.models import KIMModel\nfrom kliff.utils import download_dataset" ] }, { @@ -76,7 +76,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Here, we tell KLIFF to fit four parameters ``B``, ``gamma``, 
``sigma``, and ``A`` of the\nSW model. The information for each fitting parameter should be provided as a list of\nlist, where the size of the outer list should be equal to the ``size`` of the parameter\ngiven by ``model.echo_model_params()``. For each inner list, you can provide either one,\ntwo, or three items.\n\n- One item. You can use a numerical value (e.g. ``gamma``) to provide an initial guess\n of the parameter. Alternatively, the string ``'default'`` can be provided to use the\n default value in the model (e.g. ``B``).\n\n- Two items. The first item should be a numerical value and the second item should be\n the string ``'fix'`` (e.g. ``sigma``), which tells KLIFF to use the value for the\n parameter, but do not optimize it.\n\n- Three items. The first item can be a numerical value or the string ``'default'``,\n having the same meanings as the one item case. In the second and third items, you can\n list the lower and upper bounds for the parameters, respectively. A bound could be\n provided as a numerical values or ``None``. The latter indicates no bound is applied.\n\nThe call of ``model.echo_opt_params()`` prints out the fitting parameters that we\nrequire KLIFF to optimize. The number ``1`` after the name of each parameter indicates\nthe size of the parameter.\n\n
<div class=\"alert alert-info\"><h4>Note</h4><p>The parameters that are not included as a fitting parameter are fixed to the default\n values in the model during the optimization.</p></div>
\n\n\n## Training set\n\nKLIFF has a :class:`~kliff.dataset.Dataset` to deal with the training data (and possibly\ntest data). For the silicon training set, we can read and process the files by:\n\n" + "Here, we tell KLIFF to fit four parameters ``B``, ``gamma``, ``sigma``, and ``A`` of the\nSW model. The information for each fitting parameter should be provided as a list of\nlist, where the size of the outer list should be equal to the ``size`` of the parameter\ngiven by ``model.echo_model_params()``. For each inner list, you can provide either one,\ntwo, or three items.\n\n- One item. You can use a numerical value (e.g. ``gamma``) to provide an initial guess\n of the parameter. Alternatively, the string ``'default'`` can be provided to use the\n default value in the model (e.g. ``B``).\n\n- Two items. The first item should be a numerical value and the second item should be\n the string ``'fix'`` (e.g. ``sigma``), which tells KLIFF to use the value for the\n parameter, but do not optimize it.\n\n- Three items. The first item can be a numerical value or the string ``'default'``,\n having the same meanings as the one item case. In the second and third items, you can\n list the lower and upper bounds for the parameters, respectively. A bound could be\n provided as a numerical values or ``None``. The latter indicates no bound is applied.\n\nThe call of ``model.echo_opt_params()`` prints out the fitting parameters that we\nrequire KLIFF to optimize. The number ``1`` after the name of each parameter indicates\nthe size of the parameter.\n\n
<div class=\"alert alert-info\"><h4>Note</h4><p>The parameters that are not included as a fitting parameter are fixed to the default\n values in the model during the optimization.</p></div>
\n\n\n## Training set\n\nKLIFF has a :class:`~kliff.dataset.Dataset` to deal with the training data (and possibly\ntest data). Additionally, we define the ``energy_weight`` and ``forces_weight``\ncorresponding to each configuration using :class:`~kliff.dataset.weight.Weight`. In\nthis example, we set ``energy_weight`` to ``1.0`` and ``forces_weight`` to ``0.1``.\nFor the silicon training set, we can read and process the files by:\n\n" ] }, { @@ -87,7 +87,7 @@ }, "outputs": [], "source": [ - "dataset_path = download_dataset(dataset_name=\"Si_training_set\")\ntset = Dataset(dataset_path)\nconfigs = tset.get_configs()" + "dataset_path = download_dataset(dataset_name=\"Si_training_set\")\nweight = Weight(energy_weight=1.0, forces_weight=0.1)\ntset = Dataset(dataset_path, weight)\nconfigs = tset.get_configs()" ] }, { @@ -112,7 +112,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "where ``calc.create(configs)`` does some initializations for each\nconfiguration in the training set, such as creating the neighbor list.\n\n\n## Loss function\n\nKLIFF uses a loss function to quantify the difference between the training set data and\npotential predictions and uses minimization algorithms to reduce the loss as much as\npossible. KLIFF provides a large number of minimization algorithms by interacting with\nSciPy_. For physics-motivated potentials, any algorithm listed on\n`scipy.optimize.minimize`_ and `scipy.optimize.least_squares`_ can be used. In the\nfollowing code snippet, we create a loss of energy and forces, where the residual\nfunction uses an ``energy_weight`` of ``1.0`` and a ``forces_weight`` of ``0.1``, and\n``2`` processors will be used to calculate the loss. The ``L-BFGS-B`` minimization\nalgorithm is applied to minimize the loss, and the minimization is allowed to run for\na max number of 100 iterations.\n\n" + "where ``calc.create(configs)`` does some initializations for each\nconfiguration in the training set, such as creating the neighbor list.\n\n\n## Loss function\n\nKLIFF uses a loss function to quantify the difference between the training set data and\npotential predictions and uses minimization algorithms to reduce the loss as much as\npossible. KLIFF provides a large number of minimization algorithms by interacting with\nSciPy_. For physics-motivated potentials, any algorithm listed on\n`scipy.optimize.minimize`_ and `scipy.optimize.least_squares`_ can be used. In the\nfollowing code snippet, we create a loss of energy and forces and use ``2`` processors\nto calculate the loss. 
The ``L-BFGS-B`` minimization algorithm is applied to minimize\nthe loss, and the minimization is allowed to run for a max number of 100 iterations.\n\n" ] }, { @@ -123,7 +123,7 @@ }, "outputs": [], "source": [ - "steps = 100\nresidual_data = {\"energy_weight\": 1.0, \"forces_weight\": 0.1}\nloss = Loss(calc, residual_data=residual_data, nprocs=2)\nloss.minimize(method=\"L-BFGS-B\", options={\"disp\": True, \"maxiter\": steps})" + "steps = 100\nloss = Loss(calc, nprocs=2)\nloss.minimize(method=\"L-BFGS-B\", options={\"disp\": True, \"maxiter\": steps})" ] }, { diff --git a/docs/source/auto_examples/example_kim_SW_Si.py b/docs/source/auto_examples/example_kim_SW_Si.py index 2b3b6c69..12eef3ea 100644 --- a/docs/source/auto_examples/example_kim_SW_Si.py +++ b/docs/source/auto_examples/example_kim_SW_Si.py @@ -37,6 +37,7 @@ from kliff.calculators import Calculator from kliff.dataset import Dataset +from kliff.dataset.weight import Weight from kliff.loss import Loss from kliff.models import KIMModel from kliff.utils import download_dataset @@ -107,10 +108,14 @@ # ------------ # # KLIFF has a :class:`~kliff.dataset.Dataset` to deal with the training data (and possibly -# test data). For the silicon training set, we can read and process the files by: +# test data). Additionally, we define the ``energy_weight`` and ``forces_weight`` +# corresponding to each configuration using :class:`~kliff.dataset.weight.Weight`. In +# this example, we set ``energy_weight`` to ``1.0`` and ``forces_weight`` to ``0.1``. +# For the silicon training set, we can read and process the files by: dataset_path = download_dataset(dataset_name="Si_training_set") -tset = Dataset(dataset_path) +weight = Weight(energy_weight=1.0, forces_weight=0.1) +tset = Dataset(dataset_path, weight) configs = tset.get_configs() @@ -149,15 +154,12 @@ # possible. KLIFF provides a large number of minimization algorithms by interacting with # SciPy_. For physics-motivated potentials, any algorithm listed on # `scipy.optimize.minimize`_ and `scipy.optimize.least_squares`_ can be used. In the -# following code snippet, we create a loss of energy and forces, where the residual -# function uses an ``energy_weight`` of ``1.0`` and a ``forces_weight`` of ``0.1``, and -# ``2`` processors will be used to calculate the loss. The ``L-BFGS-B`` minimization -# algorithm is applied to minimize the loss, and the minimization is allowed to run for -# a max number of 100 iterations. +# following code snippet, we create a loss of energy and forces and use ``2`` processors +# to calculate the loss. The ``L-BFGS-B`` minimization algorithm is applied to minimize +# the loss, and the minimization is allowed to run for a max number of 100 iterations. 
steps = 100 -residual_data = {"energy_weight": 1.0, "forces_weight": 0.1} -loss = Loss(calc, residual_data=residual_data, nprocs=2) +loss = Loss(calc, nprocs=2) loss.minimize(method="L-BFGS-B", options={"disp": True, "maxiter": steps}) diff --git a/docs/source/auto_examples/example_kim_SW_Si.py.md5 b/docs/source/auto_examples/example_kim_SW_Si.py.md5 index 4bf1b055..12f82398 100644 --- a/docs/source/auto_examples/example_kim_SW_Si.py.md5 +++ b/docs/source/auto_examples/example_kim_SW_Si.py.md5 @@ -1 +1 @@ -7b261ef6ac84d9734bd967d57109e034 \ No newline at end of file +aa103f8edc37ca96254be2e86d1426f3 \ No newline at end of file diff --git a/docs/source/auto_examples/example_kim_SW_Si.rst b/docs/source/auto_examples/example_kim_SW_Si.rst index 5321e559..257cef4b 100644 --- a/docs/source/auto_examples/example_kim_SW_Si.rst +++ b/docs/source/auto_examples/example_kim_SW_Si.rst @@ -53,13 +53,14 @@ information of this format. Let's first import the modules that will be used in this example. -.. GENERATED FROM PYTHON SOURCE LINES 37-44 +.. GENERATED FROM PYTHON SOURCE LINES 37-45 .. code-block:: default from kliff.calculators import Calculator from kliff.dataset import Dataset + from kliff.dataset.weight import Weight from kliff.loss import Loss from kliff.models import KIMModel from kliff.utils import download_dataset @@ -71,7 +72,7 @@ Let's first import the modules that will be used in this example. -.. GENERATED FROM PYTHON SOURCE LINES 45-50 +.. GENERATED FROM PYTHON SOURCE LINES 46-51 Model ----- @@ -79,7 +80,7 @@ Model We first create a KIM model for the SW potential, and print out all the available parameters that can be optimized (we call this ``model parameters``). -.. GENERATED FROM PYTHON SOURCE LINES 50-55 +.. GENERATED FROM PYTHON SOURCE LINES 51-56 .. code-block:: default @@ -98,55 +99,12 @@ parameters that can be optimized (we call this ``model parameters``). .. code-block:: none - #================================================================================ - # Available parameters to optimize. - # Parameters in `original` space. - # Model: SW_StillingerWeber_1985_Si__MO_405512056662_006 - #================================================================================ - - name: A - value: [15.28484792] - size: 1 - - name: B - value: [0.60222456] - size: 1 - - name: p - value: [4.] - size: 1 - - name: q - value: [0.] - size: 1 - - name: sigma - value: [2.0951] - size: 1 - - name: gamma - value: [2.51412] - size: 1 - - name: cutoff - value: [3.77118] - size: 1 - - name: lambda - value: [45.5322] - size: 1 - - name: costheta0 - value: [-0.33333333] - size: 1 - - '#================================================================================\n# Available parameters to optimize.\n# Parameters in `original` space.\n# Model: SW_StillingerWeber_1985_Si__MO_405512056662_006\n#================================================================================\n\nname: A\nvalue: [15.28484792]\nsize: 1\n\nname: B\nvalue: [0.60222456]\nsize: 1\n\nname: p\nvalue: [4.]\nsize: 1\n\nname: q\nvalue: [0.]\nsize: 1\n\nname: sigma\nvalue: [2.0951]\nsize: 1\n\nname: gamma\nvalue: [2.51412]\nsize: 1\n\nname: cutoff\nvalue: [3.77118]\nsize: 1\n\nname: lambda\nvalue: [45.5322]\nsize: 1\n\nname: costheta0\nvalue: [-0.33333333]\nsize: 1\n\n' -.. GENERATED FROM PYTHON SOURCE LINES 56-70 +.. GENERATED FROM PYTHON SOURCE LINES 57-71 The output is generated by the last line, and it tells us the ``name``, ``value``, ``size``, ``data type`` and a ``description`` of each parameter. 
@@ -163,7 +121,7 @@ The output is generated by the last line, and it tells us the ``name``, ``value` Now that we know what parameters are available for fitting, we can optimize all or a subset of them to reproduce the training set. -.. GENERATED FROM PYTHON SOURCE LINES 70-77 +.. GENERATED FROM PYTHON SOURCE LINES 71-78 .. code-block:: default @@ -184,31 +142,12 @@ subset of them to reproduce the training set. .. code-block:: none - #================================================================================ - # Model parameters that are optimized. - # Note that the parameters are in the transformed space if - # `params_transform` is provided when instantiating the model. - #================================================================================ - - A 1 - 5.0000000000000000e+00 1.0000000000000000e+00 2.0000000000000000e+01 - - B 1 - 6.0222455840000000e-01 - - sigma 1 - 2.0951000000000000e+00 fix - - gamma 1 - 1.5000000000000000e+00 - - '#================================================================================\n# Model parameters that are optimized.\n# Note that the parameters are in the transformed space if \n# `params_transform` is provided when instantiating the model.\n#================================================================================\n\nA 1\n 5.0000000000000000e+00 1.0000000000000000e+00 2.0000000000000000e+01 \n\nB 1\n 6.0222455840000000e-01 \n\nsigma 1\n 2.0951000000000000e+00 fix \n\ngamma 1\n 1.5000000000000000e+00 \n\n' -.. GENERATED FROM PYTHON SOURCE LINES 78-111 +.. GENERATED FROM PYTHON SOURCE LINES 79-115 Here, we tell KLIFF to fit four parameters ``B``, ``gamma``, ``sigma``, and ``A`` of the SW model. The information for each fitting parameter should be provided as a list of @@ -242,15 +181,19 @@ Training set ------------ KLIFF has a :class:`~kliff.dataset.Dataset` to deal with the training data (and possibly -test data). For the silicon training set, we can read and process the files by: +test data). Additionally, we define the ``energy_weight`` and ``forces_weight`` +corresponding to each configuration using :class:`~kliff.dataset.weight.Weight`. In +this example, we set ``energy_weight`` to ``1.0`` and ``forces_weight`` to ``0.1``. +For the silicon training set, we can read and process the files by: -.. GENERATED FROM PYTHON SOURCE LINES 111-117 +.. GENERATED FROM PYTHON SOURCE LINES 115-122 .. code-block:: default dataset_path = download_dataset(dataset_name="Si_training_set") - tset = Dataset(dataset_path) + weight = Weight(energy_weight=1.0, forces_weight=0.1) + tset = Dataset(dataset_path, weight) configs = tset.get_configs() @@ -258,18 +201,10 @@ test data). For the silicon training set, we can read and process the files by: -.. rst-class:: sphx-glr-script-out - - Out: - - .. code-block:: none - - 2022-03-31 23:10:32.478 | INFO | kliff.dataset.dataset:_read:371 - 1000 configurations read from /Users/mjwen/Applications/kliff/examples/Si_training_set - -.. GENERATED FROM PYTHON SOURCE LINES 118-134 +.. GENERATED FROM PYTHON SOURCE LINES 123-139 The ``configs`` in the last line is a list of :class:`~kliff.dataset.Configuration`. Each configuration is an internal representation of a processed **extended xyz** file, @@ -288,7 +223,7 @@ parameters stored in the model so that the up-to-date parameters are used the ne the model is evaluated to compute the energy and forces. The calculator can be created by: -.. GENERATED FROM PYTHON SOURCE LINES 134-139 +.. GENERATED FROM PYTHON SOURCE LINES 139-144 .. 
code-block:: default @@ -301,18 +236,10 @@ by: -.. rst-class:: sphx-glr-script-out - - Out: - .. code-block:: none - 2022-03-31 23:10:36.673 | INFO | kliff.calculators.calculator:create:107 - Create calculator for 1000 configurations. - - - -.. GENERATED FROM PYTHON SOURCE LINES 140-157 +.. GENERATED FROM PYTHON SOURCE LINES 145-160 where ``calc.create(configs)`` does some initializations for each configuration in the training set, such as creating the neighbor list. @@ -326,20 +253,17 @@ potential predictions and uses minimization algorithms to reduce the loss as muc possible. KLIFF provides a large number of minimization algorithms by interacting with SciPy_. For physics-motivated potentials, any algorithm listed on `scipy.optimize.minimize`_ and `scipy.optimize.least_squares`_ can be used. In the -following code snippet, we create a loss of energy and forces, where the residual -function uses an ``energy_weight`` of ``1.0`` and a ``forces_weight`` of ``0.1``, and -``2`` processors will be used to calculate the loss. The ``L-BFGS-B`` minimization -algorithm is applied to minimize the loss, and the minimization is allowed to run for -a max number of 100 iterations. +following code snippet, we create a loss of energy and forces and use ``2`` processors +to calculate the loss. The ``L-BFGS-B`` minimization algorithm is applied to minimize +the loss, and the minimization is allowed to run for a max number of 100 iterations. -.. GENERATED FROM PYTHON SOURCE LINES 157-164 +.. GENERATED FROM PYTHON SOURCE LINES 160-166 .. code-block:: default steps = 100 - residual_data = {"energy_weight": 1.0, "forces_weight": 0.1} - loss = Loss(calc, residual_data=residual_data, nprocs=2) + loss = Loss(calc, nprocs=2) loss.minimize(method="L-BFGS-B", options={"disp": True, "maxiter": steps}) @@ -353,31 +277,28 @@ a max number of 100 iterations. .. code-block:: none - 2022-03-31 23:10:36.675 | INFO | kliff.loss:minimize:275 - Start minimization using method: L-BFGS-B. - 2022-03-31 23:10:36.675 | INFO | kliff.loss:_scipy_optimize:391 - Running in multiprocessing mode with 2 processes. - 2022-03-31 23:12:11.739 | INFO | kliff.loss:minimize:277 - Finish minimization using method: {method}. - fun: 0.6940780133179182 + fun: 0.6940780132834561 hess_inv: <3x3 LbfgsInvHessProduct with dtype=float64> - jac: array([ 4.68514078e-06, -1.84019465e-04, 3.29847263e-05]) + jac: array([-3.65263345e-06, 1.88971060e-04, 3.90798507e-06]) message: 'CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH' - nfev: 172 - nit: 35 - njev: 43 + nfev: 180 + nit: 36 + njev: 45 status: 0 success: True - x: array([14.93863445, 0.58740275, 2.20146349]) + x: array([14.93863664, 0.58740288, 2.20146242]) -.. GENERATED FROM PYTHON SOURCE LINES 165-169 +.. GENERATED FROM PYTHON SOURCE LINES 167-171 The minimization stops after running for 27 steps. After the minimization, we'd better save the model, which can be loaded later for the purpose to do a retraining or evaluations. If satisfied with the fitted model, you can also write it as a KIM model that can be used with LAMMPS_, GULP_, ASE_, etc. via the kim-api_. -.. GENERATED FROM PYTHON SOURCE LINES 169-176 +.. GENERATED FROM PYTHON SOURCE LINES 171-178 .. code-block:: default @@ -392,37 +313,10 @@ that can be used with LAMMPS_, GULP_, ASE_, etc. via the kim-api_. -.. rst-class:: sphx-glr-script-out - - Out: - - .. code-block:: none - - #================================================================================ - # Model parameters that are optimized. 
- # Note that the parameters are in the transformed space if - # `params_transform` is provided when instantiating the model. - #================================================================================ - - A 1 - 1.4938634447205009e+01 1.0000000000000000e+00 2.0000000000000000e+01 - - B 1 - 5.8740275142426945e-01 - - sigma 1 - 2.0951000000000000e+00 fix - - gamma 1 - 2.2014634864154190e+00 - - - 2022-03-31 23:12:11.756 | INFO | kliff.models.kim:write_kim_model:695 - KLIFF trained model write to `/Users/mjwen/Applications/kliff/examples/SW_StillingerWeber_1985_Si__MO_405512056662_006_kliff_trained` - -.. GENERATED FROM PYTHON SOURCE LINES 177-195 +.. GENERATED FROM PYTHON SOURCE LINES 179-197 The first line of the above code generates the output. A comparison with the original parameters before carrying out the minimization shows that we recover the original @@ -446,7 +340,7 @@ parameters quite reasonably. The second line saves the fitted model to a file na .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 1 minutes 43.094 seconds) + **Total running time of the script:** ( 2 minutes 5.502 seconds) .. _sphx_glr_download_auto_examples_example_kim_SW_Si.py: diff --git a/docs/source/auto_examples/example_kim_SW_Si_codeobj.pickle b/docs/source/auto_examples/example_kim_SW_Si_codeobj.pickle index 7ea30e4a..32909ed2 100644 Binary files a/docs/source/auto_examples/example_kim_SW_Si_codeobj.pickle and b/docs/source/auto_examples/example_kim_SW_Si_codeobj.pickle differ diff --git a/docs/source/auto_examples/example_lennard_jones.rst b/docs/source/auto_examples/example_lennard_jones.rst index 865cf8d7..2ac3f725 100644 --- a/docs/source/auto_examples/example_lennard_jones.rst +++ b/docs/source/auto_examples/example_lennard_jones.rst @@ -36,69 +36,8 @@ Compare this with :ref:`tut_kim_sw`. -.. rst-class:: sphx-glr-script-out - Out: - .. code-block:: none - - 2022-03-31 23:07:55.100 | INFO | kliff.dataset.dataset:_read:371 - 4 configurations read from /Users/mjwen/Applications/kliff/examples/Si_training_set_4_configs - #================================================================================ - # Available parameters to optimize. - # Parameters in `original` space. - # Model: LJ6-12 - #================================================================================ - - name: epsilon - value: [1.] - size: 1 - - name: sigma - value: [2.] - size: 1 - - name: cutoff - value: [5.] - size: 1 - - - #================================================================================ - # Model parameters that are optimized. - # Note that the parameters are in the transformed space if - # `params_transform` is provided when instantiating the model. - #================================================================================ - - sigma 1 - 2.0000000000000000e+00 - - epsilon 1 - 1.0000000000000000e+00 - - - 2022-03-31 23:07:55.106 | INFO | kliff.calculators.calculator:create:107 - Create calculator for 4 configurations. - 2022-03-31 23:07:55.107 | INFO | kliff.loss:minimize:275 - Start minimization using method: L-BFGS-B. - 2022-03-31 23:07:55.107 | INFO | kliff.loss:_scipy_optimize:389 - Running in serial mode. - 2022-03-31 23:07:58.288 | INFO | kliff.loss:minimize:277 - Finish minimization using method: {method}. - #================================================================================ - # Model parameters that are optimized. - # Note that the parameters are in the transformed space if - # `params_transform` is provided when instantiating the model. 
- #================================================================================ - - sigma 1 - 2.0629043239028659e+00 - - epsilon 1 - 1.5614870430532530e+00 - - - - - - - - -| .. code-block:: default @@ -136,7 +75,7 @@ Compare this with :ref:`tut_kim_sw`. .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 6.086 seconds) + **Total running time of the script:** ( 0 minutes 6.187 seconds) .. _sphx_glr_download_auto_examples_example_lennard_jones.py: diff --git a/docs/source/auto_examples/example_lennard_jones_codeobj.pickle b/docs/source/auto_examples/example_lennard_jones_codeobj.pickle index 97044ff4..2b2f8493 100644 Binary files a/docs/source/auto_examples/example_lennard_jones_codeobj.pickle and b/docs/source/auto_examples/example_lennard_jones_codeobj.pickle differ diff --git a/docs/source/auto_examples/example_linear_regression.rst b/docs/source/auto_examples/example_linear_regression.rst index 21356184..ac629eb9 100644 --- a/docs/source/auto_examples/example_linear_regression.rst +++ b/docs/source/auto_examples/example_linear_regression.rst @@ -65,17 +65,17 @@ symmetry functions. .. code-block:: none - 2022-03-31 23:02:47.029 | INFO | kliff.dataset.dataset:_read:371 - 400 configurations read from /Users/mjwen/Applications/kliff/examples/Si_training_set/varying_alat - 2022-03-31 23:02:47.030 | INFO | kliff.calculators.calculator_torch:_get_device:417 - Training on cpu - 2022-03-31 23:02:47.032 | INFO | kliff.descriptors.descriptor:generate_fingerprints:104 - Start computing mean and stdev of fingerprints. - 2022-03-31 23:03:07.080 | INFO | kliff.descriptors.descriptor:generate_fingerprints:121 - Finish computing mean and stdev of fingerprints. - 2022-03-31 23:03:07.088 | INFO | kliff.descriptors.descriptor:generate_fingerprints:129 - Fingerprints mean and stdev saved to `fingerprints_mean_and_stdev.pkl`. - 2022-03-31 23:03:07.089 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:164 - Pickling fingerprints to `fingerprints.pkl` - 2022-03-31 23:03:07.094 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:176 - Processing configuration: 0. - 2022-03-31 23:03:07.585 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:176 - Processing configuration: 100. - 2022-03-31 23:03:08.004 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:176 - Processing configuration: 200. - 2022-03-31 23:03:08.384 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:176 - Processing configuration: 300. - 2022-03-31 23:03:08.665 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:219 - Pickle 400 configurations finished. + 2022-04-28 10:33:14.541 | INFO | kliff.dataset.dataset:_read:397 - 400 configurations read from /Users/mjwen/Applications/kliff/examples/Si_training_set/varying_alat + 2022-04-28 10:33:14.541 | INFO | kliff.calculators.calculator_torch:_get_device:417 - Training on cpu + 2022-04-28 10:33:14.542 | INFO | kliff.descriptors.descriptor:generate_fingerprints:104 - Start computing mean and stdev of fingerprints. + 2022-04-28 10:33:37.085 | INFO | kliff.descriptors.descriptor:generate_fingerprints:121 - Finish computing mean and stdev of fingerprints. + 2022-04-28 10:33:37.092 | INFO | kliff.descriptors.descriptor:generate_fingerprints:129 - Fingerprints mean and stdev saved to `fingerprints_mean_and_stdev.pkl`. 
+ 2022-04-28 10:33:37.092 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:164 - Pickling fingerprints to `fingerprints.pkl` + 2022-04-28 10:33:37.095 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:176 - Processing configuration: 0. + 2022-04-28 10:33:37.424 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:176 - Processing configuration: 100. + 2022-04-28 10:33:38.046 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:176 - Processing configuration: 200. + 2022-04-28 10:33:38.510 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:176 - Processing configuration: 300. + 2022-04-28 10:33:38.884 | INFO | kliff.descriptors.descriptor:_dump_fingerprints:219 - Pickle 400 configurations finished. @@ -109,7 +109,7 @@ function of its calculator. .. code-block:: none - 2022-03-31 23:03:09.269 | INFO | kliff.models.linear_regression:fit:39 - fit model "LinearRegression" finished. + 2022-04-28 10:33:39.460 | INFO | kliff.models.linear_regression:fit:39 - fit model "LinearRegression" finished. fit model "LinearRegression" finished. @@ -118,7 +118,7 @@ function of its calculator. .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 25.707 seconds) + **Total running time of the script:** ( 0 minutes 26.918 seconds) .. _sphx_glr_download_auto_examples_example_linear_regression.py: diff --git a/docs/source/auto_examples/example_linear_regression_codeobj.pickle b/docs/source/auto_examples/example_linear_regression_codeobj.pickle index f241d709..e0eff4e4 100644 Binary files a/docs/source/auto_examples/example_linear_regression_codeobj.pickle and b/docs/source/auto_examples/example_linear_regression_codeobj.pickle differ diff --git a/docs/source/auto_examples/example_nn_Si.ipynb b/docs/source/auto_examples/example_nn_Si.ipynb index 121cc26b..f038737b 100644 --- a/docs/source/auto_examples/example_nn_Si.ipynb +++ b/docs/source/auto_examples/example_nn_Si.ipynb @@ -33,7 +33,7 @@ }, "outputs": [], "source": [ - "from kliff import nn\nfrom kliff.calculators import CalculatorTorch\nfrom kliff.dataset import Dataset\nfrom kliff.descriptors import SymmetryFunction\nfrom kliff.loss import Loss\nfrom kliff.models import NeuralNetwork\nfrom kliff.utils import download_dataset" + "from kliff import nn\nfrom kliff.calculators import CalculatorTorch\nfrom kliff.dataset import Dataset\nfrom kliff.dataset.weight import Weight\nfrom kliff.descriptors import SymmetryFunction\nfrom kliff.loss import Loss\nfrom kliff.models import NeuralNetwork\nfrom kliff.utils import download_dataset" ] }, { @@ -87,7 +87,7 @@ }, "outputs": [], "source": [ - "# training set\ndataset_path = download_dataset(dataset_name=\"Si_training_set\")\ndataset_path = dataset_path.joinpath(\"varying_alat\")\ntset = Dataset(dataset_path)\nconfigs = tset.get_configs()\n\n# calculator\ncalc = CalculatorTorch(model, gpu=False)\n_ = calc.create(configs, reuse=False)" + "# training set\ndataset_path = download_dataset(dataset_name=\"Si_training_set\")\ndataset_path = dataset_path.joinpath(\"varying_alat\")\nweight = Weight(forces_weight=0.3)\ntset = Dataset(dataset_path, weight)\nconfigs = tset.get_configs()\n\n# calculator\ncalc = CalculatorTorch(model, gpu=False)\n_ = calc.create(configs, reuse=False)" ] }, { @@ -105,7 +105,7 @@ }, "outputs": [], "source": [ - "loss = Loss(calc, residual_data={\"forces_weight\": 0.3})\nresult = loss.minimize(method=\"Adam\", num_epochs=10, batch_size=100, lr=0.001)" + "loss = Loss(calc)\nresult = loss.minimize(method=\"Adam\", num_epochs=10, batch_size=100, 
lr=0.001)" ] }, { diff --git a/docs/source/auto_examples/example_nn_Si.py b/docs/source/auto_examples/example_nn_Si.py index 927f4246..57367c8f 100644 --- a/docs/source/auto_examples/example_nn_Si.py +++ b/docs/source/auto_examples/example_nn_Si.py @@ -28,6 +28,7 @@ from kliff import nn from kliff.calculators import CalculatorTorch from kliff.dataset import Dataset +from kliff.dataset.weight import Weight from kliff.descriptors import SymmetryFunction from kliff.loss import Loss from kliff.models import NeuralNetwork @@ -108,7 +109,8 @@ # training set dataset_path = download_dataset(dataset_name="Si_training_set") dataset_path = dataset_path.joinpath("varying_alat") -tset = Dataset(dataset_path) +weight = Weight(forces_weight=0.3) +tset = Dataset(dataset_path, weight) configs = tset.get_configs() # calculator @@ -130,7 +132,7 @@ # ``0.001``, and typically, one may need to play with this to find an acceptable one that # drives the loss down in a reasonable time. -loss = Loss(calc, residual_data={"forces_weight": 0.3}) +loss = Loss(calc) result = loss.minimize(method="Adam", num_epochs=10, batch_size=100, lr=0.001) diff --git a/docs/source/auto_examples/example_nn_Si.py.md5 b/docs/source/auto_examples/example_nn_Si.py.md5 index df7af95c..35044061 100644 --- a/docs/source/auto_examples/example_nn_Si.py.md5 +++ b/docs/source/auto_examples/example_nn_Si.py.md5 @@ -1 +1 @@ -ddfc7cb67629dfea5b40790f4f7ce5e0 \ No newline at end of file +283e433c6d1ef194c0d21531bebe795e \ No newline at end of file diff --git a/docs/source/auto_examples/example_nn_Si.rst b/docs/source/auto_examples/example_nn_Si.rst index eb285da3..e47de91c 100644 --- a/docs/source/auto_examples/example_nn_Si.rst +++ b/docs/source/auto_examples/example_nn_Si.rst @@ -42,7 +42,7 @@ information of this format. Let's first import the modules that will be used in this example. -.. GENERATED FROM PYTHON SOURCE LINES 27-36 +.. GENERATED FROM PYTHON SOURCE LINES 27-37 .. code-block:: default @@ -50,6 +50,7 @@ Let's first import the modules that will be used in this example. from kliff import nn from kliff.calculators import CalculatorTorch from kliff.dataset import Dataset + from kliff.dataset.weight import Weight from kliff.descriptors import SymmetryFunction from kliff.loss import Loss from kliff.models import NeuralNetwork @@ -62,7 +63,7 @@ Let's first import the modules that will be used in this example. -.. GENERATED FROM PYTHON SOURCE LINES 37-43 +.. GENERATED FROM PYTHON SOURCE LINES 38-44 Model ----- @@ -71,7 +72,7 @@ For a NN model, we need to specify the descriptor that transforms atomic environ information to the fingerprints, which the NN model uses as the input. Here, we use the symmetry functions proposed by Behler and coworkers. -.. GENERATED FROM PYTHON SOURCE LINES 43-49 +.. GENERATED FROM PYTHON SOURCE LINES 44-50 .. code-block:: default @@ -88,7 +89,7 @@ symmetry functions proposed by Behler and coworkers. -.. GENERATED FROM PYTHON SOURCE LINES 50-59 +.. GENERATED FROM PYTHON SOURCE LINES 51-60 The ``cut_name`` and ``cut_dists`` tell the descriptor what type of cutoff function to use and what the cutoff distances are. ``hyperparams`` specifies the set of @@ -100,7 +101,7 @@ optimize NN model. We can then build the NN model on top of the descriptor. -.. GENERATED FROM PYTHON SOURCE LINES 59-76 +.. GENERATED FROM PYTHON SOURCE LINES 60-77 .. code-block:: default @@ -128,7 +129,7 @@ We can then build the NN model on top of the descriptor. -.. GENERATED FROM PYTHON SOURCE LINES 77-107 +.. 
GENERATED FROM PYTHON SOURCE LINES 78-108 In the above code, we build a NN model with an input layer, two hidden layer, and an output layer. The ``descriptor`` carries the information of the input layer, so it is @@ -161,7 +162,7 @@ fingerprints generated from the descriptor if it is present. To train on gpu, set ``gpu=True`` in ``Calculator``. -.. GENERATED FROM PYTHON SOURCE LINES 107-119 +.. GENERATED FROM PYTHON SOURCE LINES 108-121 .. code-block:: default @@ -169,7 +170,8 @@ To train on gpu, set ``gpu=True`` in ``Calculator``. # training set dataset_path = download_dataset(dataset_name="Si_training_set") dataset_path = dataset_path.joinpath("varying_alat") - tset = Dataset(dataset_path) + weight = Weight(forces_weight=0.3) + tset = Dataset(dataset_path, weight) configs = tset.get_configs() # calculator @@ -184,7 +186,7 @@ To train on gpu, set ``gpu=True`` in ``Calculator``. -.. GENERATED FROM PYTHON SOURCE LINES 120-132 +.. GENERATED FROM PYTHON SOURCE LINES 122-134 Loss function ------------- @@ -199,12 +201,12 @@ through the training set for ``10`` epochs. The learning rate ``lr`` used here i ``0.001``, and typically, one may need to play with this to find an acceptable one that drives the loss down in a reasonable time. -.. GENERATED FROM PYTHON SOURCE LINES 132-137 +.. GENERATED FROM PYTHON SOURCE LINES 134-139 .. code-block:: default - loss = Loss(calc, residual_data={"forces_weight": 0.3}) + loss = Loss(calc) result = loss.minimize(method="Adam", num_epochs=10, batch_size=100, lr=0.001) @@ -219,27 +221,27 @@ drives the loss down in a reasonable time. .. code-block:: none Epoch = 0 loss = 7.3307514191e+01 - Epoch = 1 loss = 7.2090658188e+01 + Epoch = 1 loss = 7.2090656281e+01 Epoch = 2 loss = 7.1389846802e+01 Epoch = 3 loss = 7.0744289398e+01 Epoch = 4 loss = 7.0117309570e+01 Epoch = 5 loss = 6.9499519348e+01 - Epoch = 6 loss = 6.8886822701e+01 + Epoch = 6 loss = 6.8886824608e+01 Epoch = 7 loss = 6.8277158737e+01 - Epoch = 8 loss = 6.7668616295e+01 + Epoch = 8 loss = 6.7668614388e+01 Epoch = 9 loss = 6.7058616638e+01 - Epoch = 10 loss = 6.6683933258e+01 + Epoch = 10 loss = 6.6683934212e+01 -.. GENERATED FROM PYTHON SOURCE LINES 138-141 +.. GENERATED FROM PYTHON SOURCE LINES 140-143 We can save the trained model to disk, and later can load it back if we want. We can also write the trained model to a KIM model such that it can be used in other simulation codes such as LAMMPS via the KIM API. -.. GENERATED FROM PYTHON SOURCE LINES 141-148 +.. GENERATED FROM PYTHON SOURCE LINES 143-150 .. code-block:: default @@ -257,7 +259,7 @@ codes such as LAMMPS via the KIM API. -.. GENERATED FROM PYTHON SOURCE LINES 149-154 +.. GENERATED FROM PYTHON SOURCE LINES 151-156 .. note:: Now we have trained an NN for a single specie Si. If you have multiple species in @@ -268,7 +270,7 @@ codes such as LAMMPS via the KIM API. .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 47.281 seconds) + **Total running time of the script:** ( 0 minutes 47.669 seconds) .. 
_sphx_glr_download_auto_examples_example_nn_Si.py: diff --git a/docs/source/auto_examples/example_nn_SiC.ipynb b/docs/source/auto_examples/example_nn_SiC.ipynb index e70990de..43533cb3 100644 --- a/docs/source/auto_examples/example_nn_SiC.ipynb +++ b/docs/source/auto_examples/example_nn_SiC.ipynb @@ -26,7 +26,7 @@ }, "outputs": [], "source": [ - "from kliff import nn\nfrom kliff.calculators.calculator_torch import CalculatorTorchSeparateSpecies\nfrom kliff.dataset import Dataset\nfrom kliff.descriptors import SymmetryFunction\nfrom kliff.loss import Loss\nfrom kliff.models import NeuralNetwork\nfrom kliff.utils import download_dataset\n\ndescriptor = SymmetryFunction(\n cut_name=\"cos\",\n cut_dists={\"Si-Si\": 5.0, \"C-C\": 5.0, \"Si-C\": 5.0},\n hyperparams=\"set51\",\n normalize=True,\n)" + "from kliff import nn\nfrom kliff.calculators.calculator_torch import CalculatorTorchSeparateSpecies\nfrom kliff.dataset import Dataset\nfrom kliff.dataset.weight import Weight\nfrom kliff.descriptors import SymmetryFunction\nfrom kliff.loss import Loss\nfrom kliff.models import NeuralNetwork\nfrom kliff.utils import download_dataset\n\ndescriptor = SymmetryFunction(\n cut_name=\"cos\",\n cut_dists={\"Si-Si\": 5.0, \"C-C\": 5.0, \"Si-C\": 5.0},\n hyperparams=\"set51\",\n normalize=True,\n)" ] }, { @@ -44,7 +44,7 @@ }, "outputs": [], "source": [ - "N1 = 10\nN2 = 10\nmodel_si = NeuralNetwork(descriptor)\nmodel_si.add_layers(\n # first hidden layer\n nn.Linear(descriptor.get_size(), N1),\n nn.Tanh(),\n # second hidden layer\n nn.Linear(N1, N2),\n nn.Tanh(),\n # output layer\n nn.Linear(N2, 1),\n)\nmodel_si.set_save_metadata(prefix=\"./kliff_saved_model_si\", start=5, frequency=2)\n\n\nN1 = 10\nN2 = 10\nmodel_c = NeuralNetwork(descriptor)\nmodel_c.add_layers(\n # first hidden layer\n nn.Linear(descriptor.get_size(), N1),\n nn.Tanh(),\n # second hidden layer\n nn.Linear(N1, N2),\n nn.Tanh(),\n # output layer\n nn.Linear(N2, 1),\n)\nmodel_c.set_save_metadata(prefix=\"./kliff_saved_model_c\", start=5, frequency=2)\n\n\n# training set\ndataset_path = download_dataset(dataset_name=\"SiC_training_set\")\ntset = Dataset(dataset_path)\nconfigs = tset.get_configs()\n\n# calculator\ncalc = CalculatorTorchSeparateSpecies({\"Si\": model_si, \"C\": model_c}, gpu=False)\n_ = calc.create(configs, reuse=False)\n\n# loss\nloss = Loss(calc, residual_data={\"forces_weight\": 0.3})\nresult = loss.minimize(method=\"Adam\", num_epochs=10, batch_size=4, lr=0.001)" + "N1 = 10\nN2 = 10\nmodel_si = NeuralNetwork(descriptor)\nmodel_si.add_layers(\n # first hidden layer\n nn.Linear(descriptor.get_size(), N1),\n nn.Tanh(),\n # second hidden layer\n nn.Linear(N1, N2),\n nn.Tanh(),\n # output layer\n nn.Linear(N2, 1),\n)\nmodel_si.set_save_metadata(prefix=\"./kliff_saved_model_si\", start=5, frequency=2)\n\n\nN1 = 10\nN2 = 10\nmodel_c = NeuralNetwork(descriptor)\nmodel_c.add_layers(\n # first hidden layer\n nn.Linear(descriptor.get_size(), N1),\n nn.Tanh(),\n # second hidden layer\n nn.Linear(N1, N2),\n nn.Tanh(),\n # output layer\n nn.Linear(N2, 1),\n)\nmodel_c.set_save_metadata(prefix=\"./kliff_saved_model_c\", start=5, frequency=2)\n\n\n# training set\ndataset_path = download_dataset(dataset_name=\"SiC_training_set\")\nweight = Weight(forces_weight=0.3)\ntset = Dataset(dataset_path, weight)\nconfigs = tset.get_configs()\n\n# calculator\ncalc = CalculatorTorchSeparateSpecies({\"Si\": model_si, \"C\": model_c}, gpu=False)\n_ = calc.create(configs, reuse=False)\n\n# loss\nloss = Loss(calc)\nresult = loss.minimize(method=\"Adam\", 
num_epochs=10, batch_size=4, lr=0.001)" ] }, { diff --git a/docs/source/auto_examples/example_nn_SiC.py b/docs/source/auto_examples/example_nn_SiC.py index 7c89c2cf..72192932 100644 --- a/docs/source/auto_examples/example_nn_SiC.py +++ b/docs/source/auto_examples/example_nn_SiC.py @@ -13,6 +13,7 @@ from kliff import nn from kliff.calculators.calculator_torch import CalculatorTorchSeparateSpecies from kliff.dataset import Dataset +from kliff.dataset.weight import Weight from kliff.descriptors import SymmetryFunction from kliff.loss import Loss from kliff.models import NeuralNetwork @@ -63,7 +64,8 @@ # training set dataset_path = download_dataset(dataset_name="SiC_training_set") -tset = Dataset(dataset_path) +weight = Weight(forces_weight=0.3) +tset = Dataset(dataset_path, weight) configs = tset.get_configs() # calculator @@ -71,7 +73,7 @@ _ = calc.create(configs, reuse=False) # loss -loss = Loss(calc, residual_data={"forces_weight": 0.3}) +loss = Loss(calc) result = loss.minimize(method="Adam", num_epochs=10, batch_size=4, lr=0.001) diff --git a/docs/source/auto_examples/example_nn_SiC.py.md5 b/docs/source/auto_examples/example_nn_SiC.py.md5 index 9e825388..0bcdca84 100644 --- a/docs/source/auto_examples/example_nn_SiC.py.md5 +++ b/docs/source/auto_examples/example_nn_SiC.py.md5 @@ -1 +1 @@ -d5bde2a5fadc3f5b131de33dfc7bbad0 \ No newline at end of file +a6e06d3de1cefb00c05f34f3d4b3d42d \ No newline at end of file diff --git a/docs/source/auto_examples/example_nn_SiC.rst b/docs/source/auto_examples/example_nn_SiC.rst index 7dddbf66..a6daa27a 100644 --- a/docs/source/auto_examples/example_nn_SiC.rst +++ b/docs/source/auto_examples/example_nn_SiC.rst @@ -27,7 +27,7 @@ In this tutorial, we train a neural network (NN) potential for a system containi species: Si and C. This is very similar to the training for systems containing a single specie (take a look at :ref:`tut_nn` for Si if you haven't yet). -.. GENERATED FROM PYTHON SOURCE LINES 11-28 +.. GENERATED FROM PYTHON SOURCE LINES 11-29 .. code-block:: default @@ -36,6 +36,7 @@ specie (take a look at :ref:`tut_nn` for Si if you haven't yet). from kliff import nn from kliff.calculators.calculator_torch import CalculatorTorchSeparateSpecies from kliff.dataset import Dataset + from kliff.dataset.weight import Weight from kliff.descriptors import SymmetryFunction from kliff.loss import Loss from kliff.models import NeuralNetwork @@ -55,12 +56,12 @@ specie (take a look at :ref:`tut_nn` for Si if you haven't yet). -.. GENERATED FROM PYTHON SOURCE LINES 29-31 +.. GENERATED FROM PYTHON SOURCE LINES 30-32 We will create two models, one for Si and the other for C. The purpose is to have a separate set of parameters for Si and C so that they can be differentiated. -.. GENERATED FROM PYTHON SOURCE LINES 31-78 +.. GENERATED FROM PYTHON SOURCE LINES 32-80 .. code-block:: default @@ -99,7 +100,8 @@ a separate set of parameters for Si and C so that they can be differentiated. # training set dataset_path = download_dataset(dataset_name="SiC_training_set") - tset = Dataset(dataset_path) + weight = Weight(forces_weight=0.3) + tset = Dataset(dataset_path, weight) configs = tset.get_configs() # calculator @@ -107,7 +109,7 @@ a separate set of parameters for Si and C so that they can be differentiated. 
_ = calc.create(configs, reuse=False) # loss - loss = Loss(calc, residual_data={"forces_weight": 0.3}) + loss = Loss(calc) result = loss.minimize(method="Adam", num_epochs=10, batch_size=4, lr=0.001) @@ -136,11 +138,11 @@ a separate set of parameters for Si and C so that they can be differentiated. -.. GENERATED FROM PYTHON SOURCE LINES 79-80 +.. GENERATED FROM PYTHON SOURCE LINES 81-82 We can save the trained model to disk, and later can load it back if we want. -.. GENERATED FROM PYTHON SOURCE LINES 80-84 +.. GENERATED FROM PYTHON SOURCE LINES 82-86 .. code-block:: default @@ -158,7 +160,7 @@ We can save the trained model to disk, and later can load it back if we want. .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 1.711 seconds) + **Total running time of the script:** ( 0 minutes 2.116 seconds) .. _sphx_glr_download_auto_examples_example_nn_SiC.py: diff --git a/docs/source/auto_examples/example_nn_SiC_codeobj.pickle b/docs/source/auto_examples/example_nn_SiC_codeobj.pickle index 201b6609..f381fd52 100644 Binary files a/docs/source/auto_examples/example_nn_SiC_codeobj.pickle and b/docs/source/auto_examples/example_nn_SiC_codeobj.pickle differ diff --git a/docs/source/auto_examples/example_nn_Si_codeobj.pickle b/docs/source/auto_examples/example_nn_Si_codeobj.pickle index a32a9e96..4b01cb10 100644 Binary files a/docs/source/auto_examples/example_nn_Si_codeobj.pickle and b/docs/source/auto_examples/example_nn_Si_codeobj.pickle differ diff --git a/docs/source/auto_examples/example_parameter_transform.ipynb b/docs/source/auto_examples/example_parameter_transform.ipynb index 7995e46d..bb9373a7 100644 --- a/docs/source/auto_examples/example_parameter_transform.ipynb +++ b/docs/source/auto_examples/example_parameter_transform.ipynb @@ -15,7 +15,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "\n\n# Parameter transformation for the Stillinger-Weber potential\n\nParameters in the empirical interatomic potential are often restricted by some physical\nconstraints. As an example, in the Stillinger-Weber (SW) potential, the energy scaling\nparameters (e.g., ``A`` and ``B``) and the length scaling parameters (e.g., ``sigma``\nand ``gamma``) are constrained to be positive.\n\nDue to these constraints, we might want to work with the log of the parameters,\ni.e., ``log(A)``, ``log(B)``, ``log(sigma)``, and ``log(gamma)`` when doing the\noptimization. After the optimization, we can transform them back to the original\nparameter space using an exponential function, which will guarantee the positiveness\nof the parameters.\n\nIn this tutorial, we show how to apply such parameter transformation to the SW\npotential for silicon that is archived on OpenKIM_. Compare this to `tut_kim_sw`.\n" + "\n\n# Parameter transformation for the Stillinger-Weber potential\n\nParameters in the empirical interatomic potential are often restricted by some physical\nconstraints. 
As an example, in the Stillinger-Weber (SW) potential, the energy scaling\nparameters (e.g., ``A`` and ``B``) and the length scaling parameters (e.g., ``sigma`` and\n``gamma``) are constrained to be positive.\n\nDue to these constraints, we might want to work with the log of the parameters, i.e.,\n``log(A)``, ``log(B)``, ``log(sigma)``, and ``log(gamma)`` when doing the optimization.\nAfter the optimization, we can transform them back to the original parameter space using\nan exponential function, which will guarantee the positiveness of the parameters.\n\nIn this tutorial, we show how to apply parameter transformation to the SW potential for\nsilicon that is archived on OpenKIM_. Compare this with `tut_kim_sw`.\n" ] }, { @@ -33,14 +33,14 @@ }, "outputs": [], "source": [ - "import numpy as np\n\nfrom kliff.calculators import Calculator\nfrom kliff.dataset import Dataset\nfrom kliff.loss import Loss\nfrom kliff.models import KIMModel\nfrom kliff.models.parameter_transform import LogParameterTransform\nfrom kliff.utils import download_dataset" + "import numpy as np\n\nfrom kliff.calculators import Calculator\nfrom kliff.dataset import Dataset\nfrom kliff.dataset.weight import Weight\nfrom kliff.loss import Loss\nfrom kliff.models import KIMModel\nfrom kliff.models.parameter_transform import LogParameterTransform\nfrom kliff.utils import download_dataset" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "Before creating a KIM model for the SW potential, we first instantiate the parameter\ntransformation class that we want to use. ``kliff`` has a built-in log-transformation;\nhowever, extending it to other parameter transformation can be done by creating a\nsubclass of :class:`~kliff.models.parameter_transform.ParameterTransform`.\n\nTo make a direct comparison to `tut_kim_sw`, in this tutorial we will apply\nlog-transformation to parameters ``A``, ``B``, ``sigma``, and ``gamma``, which\ncorrespond to energy and length scales.\n\n" + "Before creating a KIM model for the SW potential, we first instantiate the parameter\ntransformation class that we want to use. ``kliff`` has a built-in log-transformation;\nhowever, extending it to other parameter transformation can be done by creating a\nsubclass of :class:`~kliff.models.parameter_transform.ParameterTransform`.\n\nTo make a direct comparison to `tut_kim_sw`, in this tutorial we will apply\nlog-transformation to parameters ``A``, ``B``, ``sigma``, and ``gamma``, which\ncorrespond to energy and length scales.\n\n\n" ] }, { @@ -58,7 +58,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "``model.echo_model_params(params_space=\"original\")`` above will print out parameter\nvalues in the original, untransformed space, i.e., the original parameterization of\nthe model. If we supply the argument ``params_space=\"transformed\"``, then the printed\nparameter values are given in the transformed space, e.g., log space (below). The\nvalues of the other parameters are not changed.\n\n" + "``model.echo_model_params(params_space=\"original\")`` above will print out parameter\nvalues in the original, untransformed space, i.e., the original parameterization of\nthe model. If we supply the argument ``params_space=\"transformed\"``, then the printed\nparameter values are given in the transformed space, e.g., log space (below). 
The\nvalues of the other parameters are not changed.\n\n\n" ] }, { @@ -69,21 +69,21 @@ }, "outputs": [], "source": [ - "model.echo_model_params(params_space=\"transformed\")" + "model.echo_model_params(params_space=\"original\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "Compare the output of ``params_space=\"transformed\"`` and # ``params_space=\"original\",\nyou can see that the values of ``A``, ``B``, ``sigma``, and ``gamma`` are in the\nlog space after the transformation.\n\n" + "Compare the output of ``params_space=\"transformed\"`` and ``params_space=\"original\"``,\nyou can see that the values of ``A``, ``B``, ``sigma``, and ``gamma`` are in the log\nspace after the transformation.\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "Next, we will set up the initial guess of the parameters to optimize. A value of\n``\"default\"`` means the initial guess will be directly taken from the value already\nin the model.\n\n
<div class=\"alert alert-info\"><h4>Note</h4><p>The parameter values we initialize, as well as the lower and upper bounds,\n are in transformed space (i.e. log space here).</p></div>
\n\n" + "Next, we will set up the initial guess of the parameters to optimize. A value of\n``\"default\"`` means the initial guess will be directly taken from the value already in\nthe model.\n\n
<div class=\"alert alert-info\"><h4>Note</h4><p>The parameter values we initialize, as well as the lower and upper bounds, are in\n transformed space (i.e. log space here).</p></div>
\n\n" ] }, { @@ -101,7 +101,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We can show the parameters we've just set by ``model.echo_opt_params()``.\n\n
<div class=\"alert alert-info\"><h4>Note</h4><p>``model.echo_opt_params()`` always displays the parameter values in the transformed\n space. And it only shows all the parameters specified to optimize. To show all\n the parameters, do ``model.echo_model_params(params_space=\"transformed\")``.</p></div>
\n\n" + "We can show the parameters we\u2019ve just set by ``model.echo_opt_params()``.\n\n
<div class=\"alert alert-info\"><h4>Note</h4><p>``model.echo_opt_params()`` always displays the parameter values in the transformed\n space. And it only shows all the parameters specified to optimize. To show all\n the parameters, do ``model.echo_model_params(params_space=\"transformed\")``.</p></div>
\n\n" ] }, { @@ -119,14 +119,14 @@ }, "outputs": [], "source": [ - "# Training set\ndataset_path = download_dataset(dataset_name=\"Si_training_set\")\ntset = Dataset(dataset_path)\nconfigs = tset.get_configs()\n\n# Calculator\ncalc = Calculator(model)\n_ = calc.create(configs)\n\n# Loss function and model training\nsteps = 100\nresidual_data = {\"energy_weight\": 1.0, \"forces_weight\": 0.1}\nloss = Loss(calc, residual_data=residual_data, nprocs=2)\nloss.minimize(method=\"L-BFGS-B\", options={\"disp\": True, \"maxiter\": steps})\n\nmodel.echo_model_params(params_space=\"original\")" + "# Training set\ndataset_path = download_dataset(dataset_name=\"Si_training_set\")\nweight = Weight(energy_weight=1.0, forces_weight=0.1)\ntset = Dataset(dataset_path, weight)\nconfigs = tset.get_configs()\n\n# Calculator\ncalc = Calculator(model)\n_ = calc.create(configs)\n\n# Loss function and model training\nsteps = 100\nloss = Loss(calc, nprocs=2)\nloss.minimize(method=\"L-BFGS-B\", options={\"disp\": True, \"maxiter\": steps})\n\nmodel.echo_model_params(params_space=\"original\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "The optimized parameter values from this model training are very close, if not the\nsame, as in `tut_kim_sw`. This is expected for the simple tutorial example\nconsidered. But for more complex models, training in a transformed space can make\nit much easier for the optimizer to navigate the parameter space.\n\n\n\n" + "The optimized parameter values from this model training are very close, if not the\nsame, as in `tut_kim_sw`. This is expected for the simple tutorial example\nconsidered. But for more complex models, training in a transformed space can make it\nmuch easier for the optimizer to navigate the parameter space.\n\n\n\n" ] } ], diff --git a/docs/source/auto_examples/example_parameter_transform.py b/docs/source/auto_examples/example_parameter_transform.py index 55f82c28..3789f6bc 100644 --- a/docs/source/auto_examples/example_parameter_transform.py +++ b/docs/source/auto_examples/example_parameter_transform.py @@ -6,22 +6,19 @@ Parameters in the empirical interatomic potential are often restricted by some physical constraints. As an example, in the Stillinger-Weber (SW) potential, the energy scaling -parameters (e.g., ``A`` and ``B``) and the length scaling parameters (e.g., ``sigma`` -and ``gamma``) are constrained to be positive. +parameters (e.g., ``A`` and ``B``) and the length scaling parameters (e.g., ``sigma`` and +``gamma``) are constrained to be positive. -Due to these constraints, we might want to work with the log of the parameters, -i.e., ``log(A)``, ``log(B)``, ``log(sigma)``, and ``log(gamma)`` when doing the -optimization. After the optimization, we can transform them back to the original -parameter space using an exponential function, which will guarantee the positiveness -of the parameters. +Due to these constraints, we might want to work with the log of the parameters, i.e., +``log(A)``, ``log(B)``, ``log(sigma)``, and ``log(gamma)`` when doing the optimization. +After the optimization, we can transform them back to the original parameter space using +an exponential function, which will guarantee the positiveness of the parameters. -In this tutorial, we show how to apply such parameter transformation to the SW -potential for silicon that is archived on OpenKIM_. Compare this to :ref:`tut_kim_sw`. +In this tutorial, we show how to apply parameter transformation to the SW potential for +silicon that is archived on OpenKIM_. 
Compare this with :ref:`tut_kim_sw`. """ - - ########################################################################################## # To start, let's first install the SW model:: # @@ -37,6 +34,7 @@ from kliff.calculators import Calculator from kliff.dataset import Dataset +from kliff.dataset.weight import Weight from kliff.loss import Loss from kliff.models import KIMModel from kliff.models.parameter_transform import LogParameterTransform @@ -51,6 +49,7 @@ # To make a direct comparison to :ref:`tut_kim_sw`, in this tutorial we will apply # log-transformation to parameters ``A``, ``B``, ``sigma``, and ``gamma``, which # correspond to energy and length scales. +# transform = LogParameterTransform(param_names=["A", "B", "sigma", "gamma"]) model = KIMModel( @@ -59,28 +58,32 @@ ) model.echo_model_params(params_space="original") + ########################################################################################## # ``model.echo_model_params(params_space="original")`` above will print out parameter # values in the original, untransformed space, i.e., the original parameterization of # the model. If we supply the argument ``params_space="transformed"``, then the printed # parameter values are given in the transformed space, e.g., log space (below). The # values of the other parameters are not changed. +# + +model.echo_model_params(params_space="original") -model.echo_model_params(params_space="transformed") ########################################################################################## -# Compare the output of ``params_space="transformed"`` and # ``params_space="original", -# you can see that the values of ``A``, ``B``, ``sigma``, and ``gamma`` are in the -# log space after the transformation. +# Compare the output of ``params_space="transformed"`` and ``params_space="original"``, +# you can see that the values of ``A``, ``B``, ``sigma``, and ``gamma`` are in the log +# space after the transformation. ########################################################################################## # Next, we will set up the initial guess of the parameters to optimize. A value of -# ``"default"`` means the initial guess will be directly taken from the value already -# in the model. +# ``"default"`` means the initial guess will be directly taken from the value already in +# the model. # # .. note:: -# The parameter values we initialize, as well as the lower and upper bounds, -# are in transformed space (i.e. log space here). +# The parameter values we initialize, as well as the lower and upper bounds, are in +# transformed space (i.e. log space here). + model.set_opt_params( A=[[np.log(5.0), np.log(1.0), np.log(20)]], @@ -91,7 +94,7 @@ model.echo_opt_params() ########################################################################################## -# We can show the parameters we've just set by ``model.echo_opt_params()``. +# We can show the parameters we’ve just set by ``model.echo_opt_params()``. # # .. 
note:: # ``model.echo_opt_params()`` always displays the parameter values in the transformed @@ -105,7 +108,8 @@ # Training set dataset_path = download_dataset(dataset_name="Si_training_set") -tset = Dataset(dataset_path) +weight = Weight(energy_weight=1.0, forces_weight=0.1) +tset = Dataset(dataset_path, weight) configs = tset.get_configs() # Calculator @@ -114,17 +118,17 @@ # Loss function and model training steps = 100 -residual_data = {"energy_weight": 1.0, "forces_weight": 0.1} -loss = Loss(calc, residual_data=residual_data, nprocs=2) +loss = Loss(calc, nprocs=2) loss.minimize(method="L-BFGS-B", options={"disp": True, "maxiter": steps}) model.echo_model_params(params_space="original") + ########################################################################################## # The optimized parameter values from this model training are very close, if not the # same, as in :ref:`tut_kim_sw`. This is expected for the simple tutorial example -# considered. But for more complex models, training in a transformed space can make -# it much easier for the optimizer to navigate the parameter space. -# +# considered. But for more complex models, training in a transformed space can make it +# much easier for the optimizer to navigate the parameter space. # # .. _OpenKIM: https://openkim.org +# diff --git a/docs/source/auto_examples/example_parameter_transform.py.md5 b/docs/source/auto_examples/example_parameter_transform.py.md5 index 5b6009d1..972438ce 100644 --- a/docs/source/auto_examples/example_parameter_transform.py.md5 +++ b/docs/source/auto_examples/example_parameter_transform.py.md5 @@ -1 +1 @@ -8da03b06581006a6676d952c91b72a56 \ No newline at end of file +7e409b61e05a833f9c3dabe7c8237ac0 \ No newline at end of file diff --git a/docs/source/auto_examples/example_parameter_transform.rst b/docs/source/auto_examples/example_parameter_transform.rst index 0f6fdd8e..70882a7c 100644 --- a/docs/source/auto_examples/example_parameter_transform.rst +++ b/docs/source/auto_examples/example_parameter_transform.rst @@ -25,19 +25,18 @@ Parameter transformation for the Stillinger-Weber potential Parameters in the empirical interatomic potential are often restricted by some physical constraints. As an example, in the Stillinger-Weber (SW) potential, the energy scaling -parameters (e.g., ``A`` and ``B``) and the length scaling parameters (e.g., ``sigma`` -and ``gamma``) are constrained to be positive. +parameters (e.g., ``A`` and ``B``) and the length scaling parameters (e.g., ``sigma`` and +``gamma``) are constrained to be positive. -Due to these constraints, we might want to work with the log of the parameters, -i.e., ``log(A)``, ``log(B)``, ``log(sigma)``, and ``log(gamma)`` when doing the -optimization. After the optimization, we can transform them back to the original -parameter space using an exponential function, which will guarantee the positiveness -of the parameters. +Due to these constraints, we might want to work with the log of the parameters, i.e., +``log(A)``, ``log(B)``, ``log(sigma)``, and ``log(gamma)`` when doing the optimization. +After the optimization, we can transform them back to the original parameter space using +an exponential function, which will guarantee the positiveness of the parameters. -In this tutorial, we show how to apply such parameter transformation to the SW -potential for silicon that is archived on OpenKIM_. Compare this to :ref:`tut_kim_sw`. 
+In this tutorial, we show how to apply parameter transformation to the SW potential for +silicon that is archived on OpenKIM_. Compare this with :ref:`tut_kim_sw`. -.. GENERATED FROM PYTHON SOURCE LINES 26-35 +.. GENERATED FROM PYTHON SOURCE LINES 23-32 To start, let's first install the SW model:: @@ -49,7 +48,7 @@ To start, let's first install the SW model:: This is -.. GENERATED FROM PYTHON SOURCE LINES 35-45 +.. GENERATED FROM PYTHON SOURCE LINES 32-43 .. code-block:: default @@ -58,6 +57,7 @@ This is from kliff.calculators import Calculator from kliff.dataset import Dataset + from kliff.dataset.weight import Weight from kliff.loss import Loss from kliff.models import KIMModel from kliff.models.parameter_transform import LogParameterTransform @@ -70,7 +70,7 @@ This is -.. GENERATED FROM PYTHON SOURCE LINES 46-54 +.. GENERATED FROM PYTHON SOURCE LINES 44-53 Before creating a KIM model for the SW potential, we first instantiate the parameter transformation class that we want to use. ``kliff`` has a built-in log-transformation; @@ -81,7 +81,8 @@ To make a direct comparison to :ref:`tut_kim_sw`, in this tutorial we will apply log-transformation to parameters ``A``, ``B``, ``sigma``, and ``gamma``, which correspond to energy and length scales. -.. GENERATED FROM PYTHON SOURCE LINES 54-62 + +.. GENERATED FROM PYTHON SOURCE LINES 53-62 .. code-block:: default @@ -97,61 +98,19 @@ correspond to energy and length scales. + .. rst-class:: sphx-glr-script-out Out: .. code-block:: none - #================================================================================ - # Available parameters to optimize. - # Parameters in `original` space. - # Model: SW_StillingerWeber_1985_Si__MO_405512056662_006 - #================================================================================ - - name: A - value: [15.28484792] - size: 1 - - name: B - value: [0.60222456] - size: 1 - - name: p - value: [4.] - size: 1 - - name: q - value: [0.] - size: 1 - - name: sigma - value: [2.0951] - size: 1 - - name: gamma - value: [2.51412] - size: 1 - - name: cutoff - value: [3.77118] - size: 1 - - name: lambda - value: [45.5322] - size: 1 - - name: costheta0 - value: [-0.33333333] - size: 1 - - '#================================================================================\n# Available parameters to optimize.\n# Parameters in `original` space.\n# Model: SW_StillingerWeber_1985_Si__MO_405512056662_006\n#================================================================================\n\nname: A\nvalue: [15.28484792]\nsize: 1\n\nname: B\nvalue: [0.60222456]\nsize: 1\n\nname: p\nvalue: [4.]\nsize: 1\n\nname: q\nvalue: [0.]\nsize: 1\n\nname: sigma\nvalue: [2.0951]\nsize: 1\n\nname: gamma\nvalue: [2.51412]\nsize: 1\n\nname: cutoff\nvalue: [3.77118]\nsize: 1\n\nname: lambda\nvalue: [45.5322]\nsize: 1\n\nname: costheta0\nvalue: [-0.33333333]\nsize: 1\n\n' -.. GENERATED FROM PYTHON SOURCE LINES 63-68 +.. GENERATED FROM PYTHON SOURCE LINES 63-69 ``model.echo_model_params(params_space="original")`` above will print out parameter values in the original, untransformed space, i.e., the original parameterization of @@ -159,12 +118,14 @@ the model. If we supply the argument ``params_space="transformed"``, then the pr parameter values are given in the transformed space, e.g., log space (below). The values of the other parameters are not changed. -.. GENERATED FROM PYTHON SOURCE LINES 68-71 + +.. GENERATED FROM PYTHON SOURCE LINES 69-73 .. 
code-block:: default - model.echo_model_params(params_space="transformed") + model.echo_model_params(params_space="original") + @@ -176,75 +137,33 @@ values of the other parameters are not changed. .. code-block:: none - #================================================================================ - # Available parameters to optimize. - # Parameters in `transformed` space. - # Model: SW_StillingerWeber_1985_Si__MO_405512056662_006 - #================================================================================ - - name: A - value: [2.72686201] - size: 1 - - name: B - value: [-0.50712488] - size: 1 - - name: p - value: [4.] - size: 1 - - name: q - value: [0.] - size: 1 - - name: sigma - value: [0.73960128] - size: 1 - - name: gamma - value: [0.92192284] - size: 1 - - name: cutoff - value: [3.77118] - size: 1 - - name: lambda - value: [45.5322] - size: 1 - - name: costheta0 - value: [-0.33333333] - size: 1 - - - '#================================================================================\n# Available parameters to optimize.\n# Parameters in `transformed` space.\n# Model: SW_StillingerWeber_1985_Si__MO_405512056662_006\n#================================================================================\n\nname: A\nvalue: [2.72686201]\nsize: 1\n\nname: B\nvalue: [-0.50712488]\nsize: 1\n\nname: p\nvalue: [4.]\nsize: 1\n\nname: q\nvalue: [0.]\nsize: 1\n\nname: sigma\nvalue: [0.73960128]\nsize: 1\n\nname: gamma\nvalue: [0.92192284]\nsize: 1\n\nname: cutoff\nvalue: [3.77118]\nsize: 1\n\nname: lambda\nvalue: [45.5322]\nsize: 1\n\nname: costheta0\nvalue: [-0.33333333]\nsize: 1\n\n' + '#================================================================================\n# Available parameters to optimize.\n# Parameters in `original` space.\n# Model: SW_StillingerWeber_1985_Si__MO_405512056662_006\n#================================================================================\n\nname: A\nvalue: [15.28484792]\nsize: 1\n\nname: B\nvalue: [0.60222456]\nsize: 1\n\nname: p\nvalue: [4.]\nsize: 1\n\nname: q\nvalue: [0.]\nsize: 1\n\nname: sigma\nvalue: [2.0951]\nsize: 1\n\nname: gamma\nvalue: [2.51412]\nsize: 1\n\nname: cutoff\nvalue: [3.77118]\nsize: 1\n\nname: lambda\nvalue: [45.5322]\nsize: 1\n\nname: costheta0\nvalue: [-0.33333333]\nsize: 1\n\n' -.. GENERATED FROM PYTHON SOURCE LINES 72-75 +.. GENERATED FROM PYTHON SOURCE LINES 74-77 -Compare the output of ``params_space="transformed"`` and # ``params_space="original", -you can see that the values of ``A``, ``B``, ``sigma``, and ``gamma`` are in the -log space after the transformation. +Compare the output of ``params_space="transformed"`` and ``params_space="original"``, +you can see that the values of ``A``, ``B``, ``sigma``, and ``gamma`` are in the log +space after the transformation. -.. GENERATED FROM PYTHON SOURCE LINES 77-84 +.. GENERATED FROM PYTHON SOURCE LINES 79-86 Next, we will set up the initial guess of the parameters to optimize. A value of -``"default"`` means the initial guess will be directly taken from the value already -in the model. +``"default"`` means the initial guess will be directly taken from the value already in +the model. .. note:: - The parameter values we initialize, as well as the lower and upper bounds, - are in transformed space (i.e. log space here). + The parameter values we initialize, as well as the lower and upper bounds, are in + transformed space (i.e. log space here). -.. GENERATED FROM PYTHON SOURCE LINES 84-93 +.. GENERATED FROM PYTHON SOURCE LINES 86-96 .. 
code-block:: default + model.set_opt_params( A=[[np.log(5.0), np.log(1.0), np.log(20)]], B=[["default"]], @@ -263,53 +182,35 @@ in the model. .. code-block:: none - #================================================================================ - # Model parameters that are optimized. - # Note that the parameters are in the transformed space if - # `params_transform` is provided when instantiating the model. - #================================================================================ - - A 1 - 1.6094379124341003e+00 0.0000000000000000e+00 2.9957322735539909e+00 - - B 1 - -5.0712488263019628e-01 - - sigma 1 - 7.3960128493182953e-01 fix - - gamma 1 - 4.0546510810816438e-01 - - '#================================================================================\n# Model parameters that are optimized.\n# Note that the parameters are in the transformed space if \n# `params_transform` is provided when instantiating the model.\n#================================================================================\n\nA 1\n 1.6094379124341003e+00 0.0000000000000000e+00 2.9957322735539909e+00 \n\nB 1\n -5.0712488263019628e-01 \n\nsigma 1\n 7.3960128493182953e-01 fix \n\ngamma 1\n 4.0546510810816438e-01 \n\n' -.. GENERATED FROM PYTHON SOURCE LINES 94-100 +.. GENERATED FROM PYTHON SOURCE LINES 97-103 -We can show the parameters we've just set by ``model.echo_opt_params()``. +We can show the parameters we’ve just set by ``model.echo_opt_params()``. .. note:: ``model.echo_opt_params()`` always displays the parameter values in the transformed space. And it only shows all the parameters specified to optimize. To show all the parameters, do ``model.echo_model_params(params_space="transformed")``. -.. GENERATED FROM PYTHON SOURCE LINES 102-105 +.. GENERATED FROM PYTHON SOURCE LINES 105-108 Once we set the model and the parameter transformation scheme, then further calculations, e.g., training the model, will be performed using the transformed space and can be done in the same way as in :ref:`tut_kim_sw`. -.. GENERATED FROM PYTHON SOURCE LINES 105-123 +.. GENERATED FROM PYTHON SOURCE LINES 108-127 .. code-block:: default # Training set dataset_path = download_dataset(dataset_name="Si_training_set") - tset = Dataset(dataset_path) + weight = Weight(energy_weight=1.0, forces_weight=0.1) + tset = Dataset(dataset_path, weight) configs = tset.get_configs() # Calculator @@ -318,8 +219,7 @@ and can be done in the same way as in :ref:`tut_kim_sw`. # Loss function and model training steps = 100 - residual_data = {"energy_weight": 1.0, "forces_weight": 0.1} - loss = Loss(calc, residual_data=residual_data, nprocs=2) + loss = Loss(calc, nprocs=2) loss.minimize(method="L-BFGS-B", options={"disp": True, "maxiter": steps}) model.echo_model_params(params_space="original") @@ -328,79 +228,32 @@ and can be done in the same way as in :ref:`tut_kim_sw`. + .. rst-class:: sphx-glr-script-out Out: .. code-block:: none - 2022-03-31 23:08:37.385 | INFO | kliff.dataset.dataset:_read:371 - 1000 configurations read from /Users/mjwen/Applications/kliff/examples/Si_training_set - 2022-03-31 23:08:41.032 | INFO | kliff.calculators.calculator:create:107 - Create calculator for 1000 configurations. - 2022-03-31 23:08:41.033 | INFO | kliff.loss:minimize:275 - Start minimization using method: L-BFGS-B. - 2022-03-31 23:08:41.033 | INFO | kliff.loss:_scipy_optimize:391 - Running in multiprocessing mode with 2 processes. - 2022-03-31 23:09:45.934 | INFO | kliff.loss:minimize:277 - Finish minimization using method: {method}. 
- #================================================================================ - # Available parameters to optimize. - # Parameters in `original` space. - # Model: SW_StillingerWeber_1985_Si__MO_405512056662_006 - #================================================================================ - - name: A - value: [14.93863372] - size: 1 - name: B - value: [0.58740268] - size: 1 + '#================================================================================\n# Available parameters to optimize.\n# Parameters in `original` space.\n# Model: SW_StillingerWeber_1985_Si__MO_405512056662_006\n#================================================================================\n\nname: A\nvalue: [14.93863379]\nsize: 1\n\nname: B\nvalue: [0.58740269]\nsize: 1\n\nname: p\nvalue: [4.]\nsize: 1\n\nname: q\nvalue: [0.]\nsize: 1\n\nname: sigma\nvalue: [2.0951]\nsize: 1\n\nname: gamma\nvalue: [2.2014612]\nsize: 1\n\nname: cutoff\nvalue: [3.77118]\nsize: 1\n\nname: lambda\nvalue: [45.5322]\nsize: 1\n\nname: costheta0\nvalue: [-0.33333333]\nsize: 1\n\n' - name: p - value: [4.] - size: 1 - name: q - value: [0.] - size: 1 - name: sigma - value: [2.0951] - size: 1 - - name: gamma - value: [2.20146115] - size: 1 - - name: cutoff - value: [3.77118] - size: 1 - - name: lambda - value: [45.5322] - size: 1 - - name: costheta0 - value: [-0.33333333] - size: 1 - - - - '#================================================================================\n# Available parameters to optimize.\n# Parameters in `original` space.\n# Model: SW_StillingerWeber_1985_Si__MO_405512056662_006\n#================================================================================\n\nname: A\nvalue: [14.93863372]\nsize: 1\n\nname: B\nvalue: [0.58740268]\nsize: 1\n\nname: p\nvalue: [4.]\nsize: 1\n\nname: q\nvalue: [0.]\nsize: 1\n\nname: sigma\nvalue: [2.0951]\nsize: 1\n\nname: gamma\nvalue: [2.20146115]\nsize: 1\n\nname: cutoff\nvalue: [3.77118]\nsize: 1\n\nname: lambda\nvalue: [45.5322]\nsize: 1\n\nname: costheta0\nvalue: [-0.33333333]\nsize: 1\n\n' - - - -.. GENERATED FROM PYTHON SOURCE LINES 124-131 +.. GENERATED FROM PYTHON SOURCE LINES 128-135 The optimized parameter values from this model training are very close, if not the same, as in :ref:`tut_kim_sw`. This is expected for the simple tutorial example -considered. But for more complex models, training in a transformed space can make -it much easier for the optimizer to navigate the parameter space. - +considered. But for more complex models, training in a transformed space can make it +much easier for the optimizer to navigate the parameter space. .. _OpenKIM: https://openkim.org + .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 1 minutes 11.710 seconds) + **Total running time of the script:** ( 1 minutes 14.239 seconds) .. _sphx_glr_download_auto_examples_example_parameter_transform.py: diff --git a/docs/source/auto_examples/example_parameter_transform_codeobj.pickle b/docs/source/auto_examples/example_parameter_transform_codeobj.pickle index 7f854936..4bd06f33 100644 Binary files a/docs/source/auto_examples/example_parameter_transform_codeobj.pickle and b/docs/source/auto_examples/example_parameter_transform_codeobj.pickle differ diff --git a/docs/source/auto_examples/index.rst b/docs/source/auto_examples/index.rst index 80b87c98..32f08116 100644 --- a/docs/source/auto_examples/index.rst +++ b/docs/source/auto_examples/index.rst @@ -94,14 +94,14 @@ More examples can be found at ` +
.. only:: html - .. figure:: /auto_examples/images/thumb/sphx_glr_example_nn_Si_thumb.png - :alt: Train a neural network potential + .. figure:: /auto_examples/images/thumb/sphx_glr_example_parameter_transform_thumb.png + :alt: Parameter transformation for the Stillinger-Weber potential - :ref:`sphx_glr_auto_examples_example_nn_Si.py` + :ref:`sphx_glr_auto_examples_example_parameter_transform.py` .. raw:: html @@ -111,18 +111,18 @@ More examples can be found at ` +
.. only:: html - .. figure:: /auto_examples/images/thumb/sphx_glr_example_parameter_transform_thumb.png - :alt: Parameter transformation for the Stillinger-Weber potential + .. figure:: /auto_examples/images/thumb/sphx_glr_example_nn_Si_thumb.png + :alt: Train a neural network potential - :ref:`sphx_glr_auto_examples_example_parameter_transform.py` + :ref:`sphx_glr_auto_examples_example_nn_Si.py` .. raw:: html @@ -132,7 +132,7 @@ More examples can be found at `