- Fixtures: Make `pgtest` truly an optional dependency [9fe8fd2e0]
This minor release comes with a number of features focused on the user friendliness and ease of use of the CLI and the API. The caching mechanism has received several improvements, guaranteeing even greater savings of computational time. For existing calculations to be valid cache sources in the new version, their hash has to be regenerated (see Improvements and changes to caching for details).
- Making RabbitMQ optional
- Simplifying profile setup
- Improved test fixtures without services
- Improvements and changes to caching
- Programmatic syntax for query builder filters and projections
- Automated profile storage backups
- Full list of changes
The RabbitMQ message broker service is now optional for running AiiDA. The requirement was added in AiiDA v1.0 when the engine was completely overhauled. Although it significantly improved the scaling and responsiveness, it also made it more difficult to start using AiiDA. As of v2.6, profiles can now be configured without RabbitMQ, at the cost that the daemon cannot be used and all processes have to be run locally.
With the removal of RabbitMQ as a hard requirement, combined with the storage plugins introduced in v2.5 that replace PostgreSQL with serverless SQLite, it is now possible to set up a profile that requires no services.
A new command, `verdi presto`, is introduced that automatically creates a profile with sensible defaults.
In principle, this makes it possible to run just the following two commands on any operating system:

```shell
pip install aiida-core
verdi presto
```

and get a working AiiDA installation that is ready to go.
As a bonus, it also configures the localhost as a `Computer`.
See the documentation for more details.
Until now, running tests always required a fully functional profile, which meant that PostgreSQL and RabbitMQ had to be available.
As described in the section above, it is now possible to set up a profile without these services.
This new feature is leveraged to provide a set of `pytest` fixtures that create a test profile usable on essentially any system that just has AiiDA installed.
To start writing tests, simply create a `conftest.py` and import the fixtures with:

```python
pytest_plugins = 'aiida.tools.pytest_fixtures'
```
The new fixtures include the `aiida_profile` fixture, which is session-scoped and automatically loaded.
The fixture creates a temporary test profile at the start of the test session and automatically deletes it when the session ends.
For more information and an overview of all available fixtures, please refer to the documentation on `pytest` fixtures.
A number of fixes and changes to the caching mechanism were introduced (see the changes subsection of the full list of changes for a more detailed overview).
For existing calculations to be valid cache sources in the new version, their hash has to be regenerated by running `verdi node rehash`.
Note that this can take a while for large databases.
Since its introduction, the cache would essentially be reset each time AiiDA or any of the plugin packages was updated, since the versions of these packages were included in the calculation of the node hashes. This was originally done out of precaution, to err on the safe side and limit the possibility of false positives in cache hits. However, this strategy has turned out to be unnecessarily cautious and severely limited the effectiveness of caching.
The package version information is no longer included in the hash and therefore no longer impacts the caching.
This change does make false positives possible if the implementation of a `CalcJob` or `Parser` plugin changes significantly.
Therefore, a mechanism is introduced that gives these plugins control to effectively reset the cache of existing nodes.
Please refer to the documentation on controlling caching for more details.
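The changelog below mentions the new `CACHE_VERSION` class attribute on `CalcJob` and `Parser`. A minimal sketch of how a plugin might use it (the class name is hypothetical, and in a real plugin it would subclass `aiida.parsers.Parser`):

```python
class MyParser:  # in a real plugin: class MyParser(aiida.parsers.Parser)
    # Bumping CACHE_VERSION changes the hash of nodes produced with this
    # parser, so nodes created with older versions no longer act as cache
    # sources after a significant change to the implementation.
    CACHE_VERSION = 2

    def parse(self, **kwargs):
        ...
```

Bump the attribute whenever a change to the plugin would make results from earlier versions invalid as cache hits.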
In the `QueryBuilder`, fields to filter on or project always had to be provided as strings:

```python
QueryBuilder().append(Node, filters={'label': 'some-label'}, project=['id', 'ctime'])
```
and it is not always trivial to know which fields exist that can be filtered on or projected.
In addition, there was a discrepancy for some fields, most notably the `pk` property, which had to be converted to `id` in the query builder syntax.
These limitations have been solved: each class in AiiDA's ORM now defines the `fields` property, which allows these fields to be discovered programmatically.
The example above would convert to:

```python
QueryBuilder().append(Node, filters={Node.fields.label: 'some-label'}, project=[Node.fields.pk, Node.fields.ctime])
```
The `fields` property provides tab-completion, allowing easy discovery of the available fields for an ORM class in IDEs and interactive shells.
The fields also make it possible to express logical conditions programmatically, and more.
For more details, please refer to the documentation on programmatic field syntax.
Data plugins can also define custom fields, adding on top of the fields inherited from their base class(es). The documentation on data plugin fields provides more information, but the API is currently in beta and is likely to change in an upcoming version. Plugin developers are therefore recommended to hold off on making use of this new API.
A generic mechanism has been implemented to allow easily backing up the data of a profile.
The command `verdi storage backup` automatically maintains a directory structure of previous backups, allowing efficient incremental backups.
Note that the exact details of the backup mechanism depend on the storage plugin used by the profile, and not all storage plugins necessarily implement it.
For now, the storage plugins `core.psql_dos` and `core.sqlite_dos` implement the functionality.
For more information, please refer to the documentation.
Please refer to this section of the documentation for instructions to restore from a backup.
- `CalcJob`: Allow to define order of copying of input files [6898ff4d8]
- `SqliteDosStorage`: Implement the backup functionality [18e447c77]
- `SshTransport`: Return `FileNotFoundError` if destination parent does not exist [d86bb38bf]
- Add improved, more configurable versions of `pytest` fixtures [e3a60460e]
- Add the `orm.Entity.fields` interface for `QueryBuilder` [4b9abe2bd]
- CLI: `verdi computer test` make unexpected output check optional [589a3b2c0]
- CLI: `verdi node graph generate` root nodes as arguments [06f8f4cfb]
- CLI: Add `--most-recent-node` option to `verdi process watch` [72692fa5c]
- CLI: Add `--sort/--no-sort` to `verdi code export` [80c606890]
- CLI: Add `verdi process dump` and the `ProcessDumper` [6291accf0]
- CLI: Add RabbitMQ options to `verdi profile setup` [f553f805e]
- CLI: Add the `-M/--most-recent-node` option [5aae874aa]
- CLI: Add the `verdi computer export` command [9e3ebf6ea]
- CLI: Add the `verdi node list` command [cf091e80f]
- CLI: Add the `verdi presto` command [6b6e1520f]
- CLI: Add the `verdi profile configure-rabbitmq` command [202a3ece9]
- CLI: Allow `verdi computer delete` to delete associated nodes [348777571]
- CLI: Allow multiple root nodes in `verdi node graph generate` [f16c432af]
- Engine: Allow `CalcJob` monitors to return outputs [b7e59a0db]
- Make `postgres_cluster` and `config_psql_dos` fixtures configurable [35d7ca63b]
- Process: Add the `metadata.disable_cache` input [4626b11f8]
- Storage: Add backup mechanism to the interface [bf79f23ee]
- Transports: fix overwrite behaviour for `puttree`/`gettree` [a55451703]
- CLI: Speed up tab-completion by lazily importing `Config` [9524cda0b]
- Improve import time of `aiida.orm` and `aiida.storage` [fb9b6cc3b]
- ORM: Cache the logger adapter for `ProcessNode` [1d104d06b]
- Caching: `NodeCaching._get_objects_to_hash` return type to `dict` [c9c7c4bd8]
- Caching: Add `CACHE_VERSION` attribute to `CalcJob` and `Parser` [39d0f312d]
- Caching: Include the node's class in objects to hash [68ce11161]
- Caching: Make `NodeCaching._get_object_to_hash` public [e33000402]
- Caching: Remove core and plugin information from hash calculation [4c60bbef8]
- Caching: Rename `get_hash` to `compute_hash` [b544f7cf9]
- CLI: Always do hard reset in `verdi daemon restart` [8ac642410]
- CLI: Change `--profile` to `-p/--profile-name` for `verdi profile setup` [8ea203cd9]
- CLI: Let `-v/--verbosity` only affect `aiida` and `verdi` loggers [487c6bf04]
- Engine: Set the `to_aiida_type` as default input port serializer [2fa7a5305]
- `QueryBuilder`: Remove implementation for `has_key` in SQLite storage [24cfbe27e]
- `BandsData`: Use f-strings in `_prepare_gnuplot` [dba117437]
- `BaseRestartWorkChain`: Fix handler overrides used only first iteration [65786a6bd]
- `SlurmScheduler`: Make detailed job info fields dynamic [4f9774a68]
- `SqliteDosStorage`: Fix exception when importing archive [af0c260bb]
- `StructureData`: Fix the pbc constraints of `get_pymatgen_structure` [adcce4bcd]
- Archive: Automatically create nested output directories [212f6163b]
- Archive: Respect `filter_size` in query for existing nodes [ef60b66aa]
- CLI: Ensure deprecation warnings are printed before any prompts [deb293d0e]
- CLI: Fix `verdi archive create --dry-run` for empty file repository [cc96c9d04]
- CLI: Fix `verdi plugin list` incorrectly not displaying description [e952d7717]
- CLI: Fix `verdi process [show|report|status|watch|call-root]` no output [a56a1389d]
- CLI: Fix `verdi process list` if no available workers [b44afcb3c]
- CLI: Fix `verdi quicksetup` when profiles exist where storage is not `core.psql_dos` [6cb91c181]
- CLI: Fix dry-run resulting in critical error in `verdi archive import` [36991c6c8]
- CLI: Fix logging not showing in `verdi daemon worker` [9bd8585bd]
- CLI: Fix the `ctx.obj.profile` attribute not being initialized [8a286f26e]
- CLI: Hide misleading message for `verdi archive create --test-run` [7e42d7aa7]
- CLI: Improve error message of `PathOrUrl` and `FileOrUrl` [ffc6e4f70]
- CLI: Only configure logging in `set_log_level` callback once [66a2dcedd]
- CLI: Unify help of `verdi process` commands [d91e0a58d]
- Config: Set existing user as default for read-only storages [e66592509]
- Config: Use UUID in `Manager.load_profile` to identify profile [b01038bf1]
- Daemon: Log the worker's path and Python interpreter [ae2094169]
- Docker: Start and stop daemon only when a profile exists [0a5b20023]
- Engine: Add positional inputs for `Process.submit` [d1131fe94]
- Engine: Catch `NotImplementedError` in `get_process_state_change_timestamp` [04926fe20]
- Engine: Fix paused work chains not showing it in process status [40b22d593]
- Fix passwords containing `@` not being accepted for Postgres databases [d14c14db2]
- ORM: Correct field type of `InstalledCode` and `PortableCode` models [0079cc1e4]
- ORM: Fix `ProcessNode.get_builder_restart` [0dee9d8ef]
- ORM: Fix deprecation warning being shown for new code types [a9155713b]
- Runner: Close event loop in `Runner.close()` [53cc45837]
- CLI: Deprecate `verdi profile setdefault` and rename to `verdi profile set-default` [ab48a4f62]
- CLI: Deprecate accepting `0` for `default_mpiprocs_per_machine` [acec0c190]
- CLI: Deprecate the `deprecated_command` decorator [4c11c0616]
- CLI: Remove the deprecated `verdi database` command [3dbde9e31]
- ORM: Undo deprecation of `Code.get_description` [1b13014b1]
- Update `tabulate>=0.8.0,<0.10.0` [6db2f4060]
- Abstract message broker functionality [69389e038]
- Config: Refactor `get_configuration_directory_from_envvar` [65739f524]
- Config: Refactor the `create_profile` function and method [905e93444]
- Engine: Refactor handling of `remote_folder` and `retrieved` outputs [28adacaf8]
- ORM: Switch to `pydantic` for code schema definition [06189d528]
- Replace deprecated `IOError` with `OSError` [7f9129fd1]
- Storage: Move profile locking to the abstract base class [ea5f51bcb]
- Add more instructions on how to use docker image [aaf44afcc]
- Add the updated cheat sheet [09f9058a7]
- Add tips for common problems with conda PostgreSQL setup [cd5313825]
- Customize the color scheme through custom style sheet [a6cf7fc7e]
- Docs: Clarify `Transport.copy` requires `recursive=True` if source is a directory [310ff1db7]
- Fix example of the `entry_points` fixture [081fc5547]
- Fixing several small issues [6a3a59b29]
- Minor cheatsheet update for v2.6 release [c3cc169c4]
- Reorganize the tutorial content [5bd960efa]
- Rework the installation section [0ee0a0c6a]
- Standardize usage of `versionadded` directive [bf5dac848]
- Update twitter logo [5e4f60d83]
- Use uv installer in readthedocs build [be0db3cc4]
- Add `check-jsonschema` pre-commit hook for GHA workflows [14c5bb0f7]
- Add Dependabot config for maintaining GH actions [0812f4b9e]
- Add docker image `aiida-core-dev` for development [6d0984109]
- Add Python 3.12 tox environment [6b0d43960]
- Add the `slurm` service to nightly workflow [5460a0414]
- Add typing to `aiida.common.hashing` [ba21ba1d4]
- Add workflow to build Docker images on PRs from forks [23d2aa5ee]
- Address internal deprecation warnings [ceed7d55d]
- Allow unit test suite to be run against SQLite [0dc8bbcb2]
- Bump the gha-dependencies group with 4 updates [ccb56286c]
- Dependencies: Update the requirements files [61ae1a55b]
- Disable code coverage in `test-install.yml` [4cecda517]
- Do not pin the mamba version [82bba1307]
- Fix Docker build not defining `REGISTRY` [e7953fd4d]
- Fix publishing to DockerHub using incorrect secret name [9c9ff7986]
- Fix Slack notification for nightly tests [082589f45]
- Fix the `test-install.yml` workflow [22ea06362]
- Fix the Docker builds [3404c0192]
- Increase timeout for the `test-install` workflow [e36a3f11f]
- Move RabbitMQ CI to nightly and update versions [b47a56698]
- Refactor the GHA Docker build [e47932ee9]
- Remove `verdi tui` from CLI reference documentation [1b4a19a44]
- Run Docker workflow only for pushes to origin [b1a714155]
- Tests: Convert hierarchy functions into fixtures [a02abc470]
- Tests: extend `node_and_calc_info` fixture to `core.ssh` [9cf28f208]
- Tests: Remove test classes for transport plugins [b77e51f8c]
- Tests: Unskip test in `tests/cmdline/commands/test_archive_import.py` [7b7958c7a]
- Update codecov action [fc2a84d9b]
- Update deprecated `whitelist_externals` option in tox config [8feef5189]
- Update pre-commit hooks [3dda84ff3]
- Update pre-commit requirement `ruff==0.3.5` [acd54543d]
- Update requirements `mypy` and `pre-commit` [04b3260a0]
- Update requirements to address deprecation warnings [566f681f7]
- Use `uv` to install package in CI and CD [73a734ae3]
- Use recursive dependencies for `pre-commit` extra [6564e78dd]
This is a patch release with a few bug fixes, but mostly devops changes related to the package structure.
- CLI: Fix `verdi process repair` not actually repairing [784ad6488]
- Docker: Allow default profile parameters to be configured through env variables [06ea130df]
- Dependencies: Fix incompatibility with `spglib>=2.3` [fa8b9275e]
- Devops: Move the source directory into `src/` [53748d4de]
- Devops: Remove post release action for uploading pot to transifex [9feda35eb]
- Pre-commit: Add `ruff` as the new linter and formatter [64c5e6a82]
- Pre-commit: Update a number of pre-commit hooks [a4ced7a67]
- Pre-commit: Add YAML and TOML formatters [c27aa33f3]
- Update pre-commit CI configuration [cb95f0c4c]
- Update pre-commit dependencies [8dfab0e09]
- Dependencies: Pin `mypy` to minor version `mypy~=1.7.1` [d65fa3d2d]
- Streamline and fix typos in `docs/topics/processes/usage.rst` [45ba27732]
- Update process function section on file deduplication [f35d7ae98]
- Correct a typo in `docs/source/topics/data_types.rst` [6ee278ceb]
- Fix the ADES paper citation [80117f8f7]
This minor release comes with a number of features focused on the user friendliness of the CLI and the API. It also reduces the import time of modules, which makes the CLI faster to load and tab-completion snappier. The release adds support for Python 3.12, and a great number of bugs have been fixed.
- Create profiles without a database server
- Changes in process launch functions
- Improvements for built-in data types
- Repository interface improvements
- Full list of changes
A new storage backend plugin has been added that uses SQLite instead of PostgreSQL.
This makes it a lot easier to set up across all platforms.
A new profile using this storage backend can be created in a single command:
```shell
verdi profile setup core.sqlite_dos -n --profile <PROFILE_NAME> --email <EMAIL>
```
Although easier to set up than the default storage backend that uses PostgreSQL, it is less performant.
This makes this storage ideally suited for use cases that aim to test or demonstrate AiiDA, or to just play around a bit.
The storage is compatible with most of AiiDA's functionality, except for automated database migrations and some very specific `QueryBuilder` functionality.
Therefore, for production databases, the default `core.psql_dos` storage entry point remains the recommended storage.
It is now also possible to create a profile using an export archive:
```shell
verdi profile setup core.sqlite_dos -n --profile <PROFILE_NAME> --filepath <ARCHIVE>
```

where `<ARCHIVE>` should point to an export archive on disk.
You can now use this profile like any other profile to inspect the data of the export archive.
Note that this profile is read-only, so you will not be able to use it to mutate existing data or add new data to the profile.
See the documentation for more details and a more in-depth example.
Finally, the original storage plugin `core.psql_dos`, which uses PostgreSQL for the database, is also accessible through `verdi profile setup core.psql_dos`.
Essentially this is the same as the `verdi setup` command, which is kept for now for backwards compatibility.
See the documentation on storage plugins for more details on the differences between these storage plugins and when to use which.
The `verdi profile delete` command can now also be used to delete a profile for any of these storage plugins.
You will be prompted whether you also want to delete all the data, or you can specify this with the `--delete-data` or `--keep-data` flags.
The `aiida.engine.submit` method now accepts the argument `wait`.
When set to `True`, instead of returning the process node straight away, the function will wait for the process to terminate before returning.
By default it is set to `False`, so the current behavior remains unchanged.
```python
from aiida.engine import submit

node = submit(Process, wait=True)  # This call will block until the process has terminated
assert node.is_terminated
```
This new feature is mostly useful for interactive demos and tutorials in notebooks.
In these situations, it might be beneficial to use `aiida.engine.run`, because the cell will block until the process is finished, indicating to the user that something is processing.
When using `submit`, the cell returns immediately, but the results are not ready yet and typically the next cell cannot yet be executed.
Instead, the demo should redirect the user to something like `verdi process list` to query the status of the process.
However, using `run` has downsides as well, most notably that the process will be lost if the notebook gets disconnected.
For processes that are expected to run longer, this can be really problematic, and so `submit` has to be used regardless.
With the new `wait` argument, `submit` provides the best of both worlds.
Although very useful, the introduction of this feature does break any process that defines `wait` or `wait_interval` as an input.
Since the inputs to a process are defined as keyword arguments, these inputs would overlap with the arguments of the `submit` method.
To solve this problem, inputs can now also be passed as a dictionary. For example, where one would previously do:

```python
submit(SomeProcess, x=Int(1), y=Int(2), code=load_code('some-code'))

# or alternatively
inputs = {
    'x': Int(1),
    'y': Int(2),
    'code': load_code('some-code'),
}
submit(SomeProcess, **inputs)
```
The new syntax allows the following:

```python
inputs = {
    'x': Int(1),
    'y': Int(2),
    'code': load_code('some-code'),
}
submit(SomeProcess, inputs)
```
Passing inputs as keyword arguments is still supported, because sometimes that notation is more legible than defining an intermediate dictionary. However, if both an input dictionary and keyword arguments are defined, an exception is raised.
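The name clash that motivated this change can be illustrated with plain Python. The `launch` function below is a hypothetical stand-in for `submit`, not AiiDA's actual implementation:

```python
def launch(process, inputs=None, *, wait=False, **kwargs):
    """Hypothetical stand-in for ``submit`` illustrating the name clash."""
    if inputs is not None and kwargs:
        raise ValueError('cannot mix an inputs dictionary with keyword arguments')
    # A process input named 'wait' can only be passed via the dictionary:
    # as a keyword argument it would be captured by the launcher itself.
    return {'process': process, 'inputs': inputs or kwargs, 'wait': wait}


call = launch('SomeProcess', {'x': 1, 'wait': True})
assert call['inputs']['wait'] is True  # reaches the process as an input
assert call['wait'] is False           # the launcher's own argument is untouched
```

This is why the dictionary form is required for processes whose input names collide with the launcher's own parameters.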
The `XyData` and `ArrayData` data plugins now allow the content to be passed directly in the constructor.
This makes it possible to define the complete node in a single line:
```python
import numpy as np

from aiida.orm import ArrayData, XyData

xy = XyData(np.array([1, 2]), np.array([3, 4]), x_name='x', x_units='E', y_names='y', y_units='F')
assert all(xy.get_x()[1] == np.array([1, 2]))

array = ArrayData({'a': np.array([1, 2]), 'b': np.array([3, 4])})
assert all(array.get_array('a') == np.array([1, 2]))
```
It is now also no longer required to specify the name in `ArrayData.get_array`, as long as the node contains just a single array:

```python
import numpy as np

from aiida.orm import ArrayData

array = ArrayData(np.array([1, 2]))
assert all(array.get_array() == np.array([1, 2]))
```
As of v2.0.0, the repository interface of the `Node` class was moved to the `Node.base.repository` namespace.
This was done to clean up the top-level namespace of the `Node` class, which was getting very crowded; in most use cases, a user never needs to directly access these methods.
It is up to the data plugin to provide specific methods to retrieve data that might be stored in the repository.
For example, with `ArrayData`, a user now has to go to `ArrayData.base.repository.get_object_content` to retrieve an array from the repository, but the class provides `ArrayData.get_array` as a shortcut.
A few data plugins that ship with `aiida-core` didn't respect this guideline, most notably the `FolderData` and `SinglefileData` plugins.
This has been corrected in this release: for `FolderData`, all the repository methods are now once again directly available on the top-level namespace.
`SinglefileData` now makes it easier to get the content as bytes.
Before, one had to do:
```python
from aiida.orm import SinglefileData

node = SinglefileData.from_string('some content')

with node.open(mode='rb') as handle:
    byte_content = handle.read()
```
This can now be achieved with:

```python
from aiida.orm import SinglefileData

node = SinglefileData.from_string('some content')
byte_content = node.get_content(mode='rb')
```
As of v2.0, due to the repository redesign, it was no longer possible to access a file directly by a filepath on disk.
The repository interface only interacts with file-like objects to stream the content.
However, a lot of Python libraries expect filepaths on disk and do not support file-like objects.
This would force an AiiDA user to write the file from the repository to a temporary file on disk, and pass that temporary filepath.
For example, consider the `numpy.loadtxt` function, which requires a filepath; the code would look something like the following:
```python
import pathlib
import shutil
import tempfile

import numpy

with tempfile.TemporaryDirectory() as tmp_path:

    # Copy the entire content to the temporary folder
    dirpath = pathlib.Path(tmp_path)
    node.base.repository.copy_tree(dirpath)

    # Or copy the content of a single file. Streaming should be used
    # to avoid reading everything into memory
    filepath = dirpath / 'some_file.txt'
    with filepath.open('wb') as target:
        with node.base.repository.open('some_file.txt', mode='rb') as source:
            shutil.copyfileobj(source, target)

    # Now pass `filepath` to the library call, e.g.
    numpy.loadtxt(filepath)
```
This burdensome boilerplate has now been made obsolete by the `as_path` method:

```python
with node.base.repository.as_path() as filepath:
    numpy.loadtxt(filepath)
```
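Conceptually, `as_path` does something similar to the boilerplate it replaces. A stdlib-only sketch of the idea (an assumption for illustration, not AiiDA's actual implementation; `materialize` is a hypothetical callback standing in for the repository's `copy_tree`):

```python
import pathlib
import tempfile
from contextlib import contextmanager


@contextmanager
def as_path_sketch(materialize):
    """Yield a real filesystem path holding a temporary copy of some content.

    ``materialize`` is a callable that writes the content into the given
    directory, standing in for the repository's ``copy_tree``.
    """
    with tempfile.TemporaryDirectory() as tmp:
        dirpath = pathlib.Path(tmp)
        materialize(dirpath)
        yield dirpath  # the temporary copy is cleaned up on exit


# Usage: the file only exists for the duration of the context.
with as_path_sketch(lambda d: (d / 'data.txt').write_text('1 2 3')) as path:
    assert (path / 'data.txt').read_text() == '1 2 3'
```

The key design point is the context manager: the temporary copy is guaranteed to be cleaned up once the `with` block exits, so no stale files accumulate on disk.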
For the `FolderData` and `SinglefileData` plugins, the method can of course be accessed on the top-level namespace.
- Add the `SqliteDosStorage` storage backend [702f88788]
- `XyData`: Allow defining array(s) on construction [f11598dc6]
- `ArrayData`: Make `name` optional in `get_array` [7fbe67cb6]
- `ArrayData`: Allow defining array(s) on construction [35e669fe8]
- `FolderData`: Expose repository API on top-level namespace [3e1f87373]
- Repository: Add the `as_path` context manager [b0546e8ed]
- Caching: Add the `strict` argument configuration validation [f272e197e]
- Caching: Try to import an identifier if it is a class path [2c56fc234]
- CLI: Add the command `verdi profile setup` [351021164]
- CLI: Add `cached` and `cached_from` projections to `verdi process list` [3b445c4f1]
- CLI: Add `--all` flag to `verdi process kill` [db1375949]
- CLI: Lazily validate entry points in parameter types [d3807d422]
- CLI: Add repair hint to `verdi process play/pause/kill` [8bc31bfd1]
- CLI: Add the `verdi process repair` command [3e3d9b9f7]
- CLI: Validate strict in `verdi config set caching.disabled_for` [9cff59232]
- `DynamicEntryPointCommandGroup`: Allow entry points to be excluded [9e30ec8ba]
- Add the `aiida.common.log.capture_logging` utility [9006eef3a]
- `Config`: Add the `create_profile` method [ae7abe8a6]
- Engine: Add the `await_processes` utility function [45767f050]
- Engine: Add the `wait` argument to `submit` [8f5e929d1]
- ORM: Add the `User.is_default` property [a43c4cd0f]
- ORM: Add `NodeCaching.CACHED_FROM_KEY` for `_aiida_cached_from` constant [35fc3ae57]
- ORM: Add the `Entity.get_collection` classmethod [305f1dbf4]
- ORM: Add the `Dict.get` method [184fcd16e]
- ORM: Register `numpy.ndarray` with the `to_aiida_type` to `ArrayData` [d8dd776a6]
- Manager: Add the `set_default_user_email` [8f8f55807]
- `CalcJob`: Add support for nested targets in `remote_symlink_list` [0ec650c1a]
- `RemoteData`: Add the `is_cleaned` property [2a2353d3d]
- `SqliteTempBackend`: Add support for reading from and writing to archives [83fc5cf69]
- `StorageBackend`: Add the `read_only` class attribute [8a4303ff5]
- `SinglefileData`: Add `mode` keyword to `get_content` [d082df7f1]
- `BaseRestartWorkChain`: Factor out attachment of outputs [d6093d101]
- Add support for `NodeLinksManager` to YAML serializer [6905c134e]
- CLI: Make loading of config lazy for improved responsiveness [d533b7a54]
- Cache the lookup of entry points [12cc930db]
- Refactor: Delay import of heavy packages to speed up import time [5dda6fd97]
- Refactor: Delay import of heavy packages to speed up import time [8e6e08dc7]
- Do not import `aiida.cmdline` in `aiida.orm` [0879a4e27]
- Lazily define `__type_string` in `orm.Group` [ebf3101d9]
- Lazily define `_plugin_type_string` and `_query_type_string` of `Node` [3a61a7003]
- CLI: `verdi profile delete` is now storage plugin agnostic [5015f5fe1]
- CLI: Usability improvements for interactive `verdi setup` [c53ea20a4]
- CLI: Do not load config in defaults and callbacks during tab-completion [062058862]
- Engine: Make process inputs in launchers positional [6d18ccb86]
- Remove `aiida.manage.configuration.load_documentation_profile` [9941266ce]
- ORM: `Sealable.seal()` return `self` instead of `None` [16e3bd3b5]
- ORM: Move deprecation warnings from module level [c4afdb9be]
- Config: Switch from `jsonschema` to `pydantic` [4203f162d]
- `DynamicEntryPointCommandGroup`: Use `pydantic` to define config model [1d8ea2a27]
- Config: Remove use of `NO_DEFAULT` for `Option.default` [275718cc8]
- Add the `report` method to `logging.LoggerAdapter` [7d6684ce1]
- `CalcJob`: Fix MPI behavior if `withmpi` option default is True [84737506e]
- `CalcJobNode`: Fix validation for `depth=None` in `retrieve_list` [03c86d5c9]
- CLI: Fix bug in `verdi data core.trajectory show` for various formats [fd4c1269b]
- CLI: Add missing entry point groups for `verdi plugin list` [ae637d8c4]
- CLI: Remove loading backend for `verdi plugin list` [34e564ad0]
- CLI: Fix `repository` being required for `verdi quicksetup` [d4666009e]
- CLI: Fix `verdi config set` when setting list option [314917801]
- CLI: Keep list unique in `verdi config set --append` [3844f86c6]
- CLI: Improve the formatting of `verdi user list` [806d7e236]
- CLI: Set defaults for user details in profile setup [8b8887e55]
- CLI: Reuse options in `verdi user configure` from setup [1c0b702ba]
- `InteractiveOption`: Fix validation being skipped if `!` provided [c4b183bc6]
- ORM: Fix problem with detached `DbAuthInfo` instances [ec2c6a8fe]
- ORM: Check nodes are from same backend in `validate_link` [7bd546ebe]
- ORM: `ProcessNode.is_valid_cache` is `False` for unsealed nodes [a1f456d43]
- ORM: Explicitly pass backend when constructing new entity [96667c8c6]
- ORM: Replace `.collection(backend)` with `.get_collection(backend)` [bac2152c4]
- Make `warn_deprecation` respect the `warnings.showdeprecations` option [6c28c63e9]
- `PsqlDosBackend`: Fix changes not persisted after `iterall` and `iterdict` [2ea5087c0]
- `PsqlDosBackend`: Fix `Node.store` excepting when inside a transaction [624dcd9fc]
- `Parser.parse_from_node`: Validate outputs against process spec [d16792f3d]
- Fix `QueryBuilder.count` for storage backends using sqlite [5dc1555bc]
- Process functions: Fix bug with variable arguments [ca8bbc67f]
- `SqliteZipBackend`: Return `self` in `store` [6a43b3f15]
- `SqliteZipBackend`: Ensure the `filepath` is absolute and exists [5eac8b49d]
- Remove `with_dbenv` use in `aiida.orm` [35c57b9eb]
- Deprecated `aiida.orm.nodes.data.upf` and `verdi data core.upf` [6625fd245]
- Add topic section on storage [83dbe1ad9]
- Add important note on using `iterall` and `iterdict` [0aea7e41b]
- Add links about "entry point" and "plugin" to tutorial [517ffcb1c]
- Disable the `warnings.showdeprecations` option [4adb06c0c]
- Fix instructions for inspecting archive files [0a9c2788e]
- Changes are reverted if exception during `iterall` [17c5d8724]
- Various minor fixes to `run_docker.rst` [d3788adea]
- Update `pydata-sphinx-theme` and add Discourse links [13df42c14]
- Correct example of `verdi config unset` in troubleshooting [d6143dbc8]
- Improvements to sections containing recently added functionality [836419f66]
- Fix typo in `run_codes.rst` [9bde86ec7]
- Fixtures: Fix `suppress_warnings` of `run_cli_command` [9807cede4]
- Update citation suggestions [1dafdf2dd]
- Add support for Python 3.12 [c39b4fda4]
- Update to `sqlalchemy~=2.0` [a216f5052]
- Update to `disk-objectstore~=1.0` [56f9f6ca0]
- Add new extra `tui` that provides `verdi` as a TUI [a42e09c02]
- Add upper limit `jedi<0.19` [fae2a9cfd]
- Update requirement `mypy~=1.7` [c2fcad4ab]
- Add compatibility for `pymatgen>=v2023.9.2` [4e0e7d8e9]
- Bump `yapf` to `0.40.0` [a8ae50853]
- Update pre-commit requirement `flynt==1.0.1` [e01ea4b97]
[e01ea4b97] - Docker: Pinning mamba version to 1.5.2 [a6c2dbe1c]
- Docker: Bump Python version to 3.10.13 [b168f2e12]
- CI: Use Python 3.10 for `pre-commit` in CI and CD workflows [f41c8ac90]
- CI: Using concurrency for CI actions [4db54b7f8]
- CI: Update tox to use Python 3.9 [227390a52]
- Docker: Bump `upload-artifact` action to v4 for Docker workflow [bfdb2828a]
- Refactor: Replace `all` with `iterall` where beneficial [8a2fece02]
- Pre-commit: Disable `no-member` and `no-name-in-module` for `aiida.orm` [15379bbee]
- Tests: Move memory leak tests to main unit test suite [561f93cef]
- Tests: Move ipython magic tests to main unit test suite [ce9acc312]
- Tests: Remove deprecated `aiida/manage/tests/main` module [5b9da7d1e]
- Tests: Refactor transport tests from `unittest` to `pytest` [ec64780c2]
- Tests: Fix failing `tests/cmdline/commands/test_setup.py` [b6f7ec188]
- Tests: Print stack trace if CLI command excepts with `run_cli_command` [08cba0f78]
- Tests: Make `PsqlDosStorage` profile unload test more robust [1c72eac1f]
- Tests: Fix flaky work chain tests using `recwarn` fixture [207151784]
- Tests: Fix `StructureData` test breaking for recent `pymatgen` versions [d1d64e800]
- Typing: Improve annotations of process functions [a85af4f0c]
- Typing: Add type hinting for `aiida.orm.nodes.data.array.xy` [2eaa5449b]
- Typing: Add type hinting for `aiida.orm.nodes.data.array.array` [c19b1423a]
- Typing: Add overload signatures for `open` [0986f6b59]
- Typing: Add overload signatures for `get_object_content` [d18eedc8b]
- Typing: Correct type annotation of `WorkChain.on_wait` [923cc314c]
- Typing: Improve type hinting for `aiida.orm.nodes.data.singlefile` [b9d087dd4]
- Disable the consumer timeout for RabbitMQ [5ce1e7ec3]
- Add `rsync` and `graphviz` to system requirements [c4799add4]
- Add upper limit `jedi<0.19` [90e586fe3]
This patch release comes with an improved set of Docker images and a few fixes to provide compatibility with recent versions of `pymatgen`.
- Improved Docker images [fec4e3bc4]
- Add folders that automatically run scripts before/after daemon start in Docker image [fe4bc1d3d]
- Pass environment variable to `aiida-prepare` script in Docker image [ea47668ea]
- Update the `.devcontainer` to use the new docker stack [413a0db65]
- Add compatibility for `pymatgen>=v2023.9.2` [1f6027f06]
- Tests: Make `PsqlDosStorage` profile unload test more robust [f392459bd]
- Tests: Fix `StructureData` test breaking for recent `pymatgen` versions [093037d48]
- Trigger Docker image build when pushing to `support/*` branch [5cf3d1d75]
- Use `aiida-core-base` image from `ghcr.io` [0e5b1c747]
- Loosen trigger conditions for Docker build CI workflow [22e8a8069]
- Follow-up docker build runner macOS-ARM64 [1bd9bf03d]
- Upload artifact by PR from forks for docker workflow [afc2dad8a]
- Update the image name for docker image [17507b410]
This minor release comes with a number of new features and improvements as well as a significant amount of bug fixes. Support for Python 3.8 has been officially dropped in accordance with AEP 003.
As a result of one of the bug fixes, related to the caching of `CalcJob` nodes, a database migration had to be added, the first since the release of v2.0.
After upgrading to v2.4.0, you will be prompted to migrate your database.
The automated migration drops the hashes of existing `CalcJobNode`s and provides you with an optional command to recompute them.
Execute the command if existing `CalcJobNode`s need to be usable as valid cache sources.
- Config: Add option to change recursion limit in daemon workers [226159fd9]
- CLI: Added `compress` option to `verdi storage maintain` [add474cbb]
- Expose `get_daemon_client` so it can be imported from `aiida.engine` [1a0c1ee93]
- `verdi computer test`: Improve messaging of login shell check [062a58260]
- `verdi node rehash`: Add `aiida.node` as group for `--entry-point` [2fd07514d]
- `verdi process status`: Add `call_link_label` to stack entries [bd9372a5f]
- `SinglefileData`: Add the `from_string` classmethod [c25de615e]
- `DynamicEntryPointCommandGroup`: Add support for shared options [220a65c76]
- `DynamicEntryPointCommandGroup`: Pass ctx to `command` callable [7de711be4]
- `ProcessNode`: Add the `exit_code` property [ad8a539ee]
- Engine: Dynamically update maximum stack size close to overflow to address `RecursionError` under heavy load [f797b4766]
- `CalcJobNode`: Fix the computation of the hash [685e0f87d]
- `CalcJob`: Ignore file in `remote_copy_list` not existing [101a8d61b]
- `CalcJob`: Assign outputs from node in case of cache hit [777b97601]
- Fix log messages being logged twice to the daemon log file [bfd63c790]
- Process control: Change language when not waiting for response [68cb4579d]
- Do not assume `pgtest` cluster started in `postgres_cluster` fixture [1de2ca576]
- Process control: Warn instead of except when daemon is not running [ad4fbcccb]
- `DirectScheduler`: Add `?` as `JobState.UNDETERMINED` [ffc869d8f]
- CLI: Correct `verdi devel rabbitmq tasks revive` docstring [13cadd05f]
- `SinglefileData`: Fix bug when `filename` is `pathlib.Path` [f36bf583c]
- Improve clarity of various deprecation warnings [c72a252ed]
- `CalcJob`: Remove default of `withmpi` input and make it optional [6a88cb315]
- `Process`: Have `inputs` property always return `AttributesFrozenDict` [60756fe30]
- `PsqlDos`: Add migration to remove hashes for all `CalcJobNodes` [7ad916836]
- `PsqlDosMigrator`: Commit changes when migrating existing schema [f84fe5b60]
- `PsqlDos`: Add `entry_point_string` argument to `drop_hashes` [c7a36fa3d]
- `PsqlDos`: Make hash reset migrations more explicit [c447a1af3]
- `verdi process list`: Fix double percent sign in daemon usage [68be866e6]
- Fix the `daemon_client` fixture [9e5f5eefd]
- Transports: Raise `FileNotFoundError` in `copy` if source doesn't exist [d82069441]
- Add `graphviz` to system requirements of RTD build runner [3df02550e]
- Add types for `DefaultFieldsAttributeDict` subclasses [afed5dc46]
- Bump Python version for RTD build [5df446cd3]
- Pre-commit: Fix `mypy` warning in `aiida.orm.utils.serialize` [c25922484]
- Update Docker base image `aiida-prerequisites==0.7.0` [ac755afae]
- Use f-strings in `aiida/engine/daemon/execmanager.py` [49cffff21]
- Drop support for Python 3.8 [3defb8bb7]
- Update requirement `pylint~=2.17.4` [397634444]
- Update requirement `flask~=2.2` [a2a05a69f]
- `QueryBuilder`: Deprecate `debug` argument and use logger [603ff37a0]
- Add missing `core.` prefix to all `verdi data` subcommands [99319b3c1]
- Clarify negation operator in `QueryBuilder` filters [2c828811f]
- Correct "variable" to "variadic" arguments [978217693]
- Fix reference target warnings related to `flask_restful` [4f76e0bd7]
- `DaemonClient`: Clean stale PID file in `stop_daemon` [#6007]
This release comes with a number of improvements, some of the more useful and important of which are quickly highlighted. A full list of changes can be found below.
- Process function improvements
- Scheduler plugins: including `environment_variables`
- `WorkChain`: conditional predicates should return boolean-like
- Controlling usage of MPI
- Add support for Docker containers
- Exporting code configurations
- Full list of changes
- New contributors
A number of improvements in the usage of process functions, i.e., `calcfunction` and `workfunction`, have been added.
Each subsection title is a link to the documentation for more details.
Variadic arguments can be used in case the function should accept a list of inputs of unknown length.
Consider the example of a calculation function that computes the average of a number of `Int` nodes:
```python
@calcfunction
def average(*args):
    return sum(args) / len(args)

result = average(*(1, 2, 3))
```
Type hint annotations can now be used to add automatic type validation to process functions.
```python
@calcfunction
def add(x: Int, y: Int):
    return x + y

add(1, 1.0)    # Passes
add(1, '1.0')  # Raises an exception
```
Since the Python base types (`int`, `str`, `bool`, etc.) are automatically serialized, these can also be used in type hints.
The following example is therefore identical to the previous:
```python
@calcfunction
def add(x: int, y: int):
    return x + y
```
The `calcfunction` and `workfunction` decorators generate a `Process` for the decorated function on-the-fly.
In doing so, they automatically define the `ProcessSpec`, something that is normally done manually, as for a `CalcJob` or a `WorkChain`.
Before, this would just define the ports that the function process accepts, but the `help` attribute of each port would be left empty.
It is now parsed from the docstring, if the docstring can be correctly parsed:
```python
@calcfunction
def add(x: int, y: int):
    """Add two integers.

    :param x: Left hand operand.
    :param y: Right hand operand.
    """
    return x + y

assert add.spec().inputs['x'].help == 'Left hand operand.'
assert add.spec().inputs['y'].help == 'Right hand operand.'
```
This functionality is particularly useful when exposing process functions in work chains.
Since the process specification of the exposed function will be automatically inherited, the user can inspect the `help` string through the builder.
The automatic documentation produced by the Sphinx plugin will now also display the help string parsed from the docstring.
The keys in the output dictionary can now contain nested namespaces:
```python
@calcfunction
def add(alpha, beta):
    return {'nested.sum': alpha + beta}

result = add(Int(1), Int(2))
assert result['nested']['sum'] == 3
```
Process functions can now be defined as class member methods of work chains:
```python
class CalcFunctionWorkChain(WorkChain):

    @classmethod
    def define(cls, spec):
        super().define(spec)
        spec.input('x')
        spec.input('y')
        spec.output('sum')
        spec.outline(
            cls.run_compute_sum,
        )

    @staticmethod
    @calcfunction
    def compute_sum(x, y):
        return x + y

    def run_compute_sum(self):
        self.out('sum', self.compute_sum(self.inputs.x, self.inputs.y))
```
The function should be declared as a `staticmethod` and it should not include the `self` argument in its function signature.
It can then be called from within the work chain as `self.function_name(*args, **kwargs)`.
The `Scheduler` base class implements the concrete method `_get_submit_script_environment_variables`, which formats the lines for the submission script that set the environment variables defined in the `metadata.options.environment_variables` input.
Before, it was left up to the plugins to actually call this method in `_get_submit_script_header`, but this is now done by the base class in `get_submit_script`.
You can now remove the call to `_get_submit_script_environment_variables` from your scheduler plugins, as the base class will take care of it.
A deprecation warning is emitted if the base class detects that the plugin is still calling it manually.
See the pull request for more details.
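To illustrate the kind of formatting the base class now handles, here is a simplified, self-contained sketch of turning the `environment_variables` dictionary into submission-script lines; the exact output of the real method may differ (e.g., in shell escaping and quoting):

```python
def format_environment_variables(environment_variables: dict) -> str:
    """Format export statements for a submission script.

    Simplified sketch of what the ``Scheduler`` base class injects for the
    ``metadata.options.environment_variables`` input; the real implementation
    additionally handles shell escaping and quoting options.
    """
    lines = ['# ENVIRONMENT VARIABLES BEGIN ###']
    for key, value in environment_variables.items():
        # Strip stray whitespace from the variable name and single-quote the value.
        lines.append(f"export {key.strip()}='{value}'")
    lines.append('# ENVIRONMENT VARIABLES END ###')
    return '\n'.join(lines)


snippet = format_environment_variables({'OMP_NUM_THREADS': '4'})
```

With the change in this release, scheduler plugins no longer need to produce such lines themselves.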
Up till now, work chain methods that are used as the predicate in a conditional, e.g., `if_` or `while_`, could return any type.
For example:

```python
class SomeWorkChain(WorkChain):

    @classmethod
    def define(cls, spec):
        super().define(spec)
        spec.outline(if_(cls.some_conditional)())

    def some_conditional(self):
        if self.ctx.something == 'something':
            return True
```
The `some_conditional` method is used as the "predicate" of the `if_` conditional.
It returns `True` or `None`.
Since the `None` value in Python is "falsey", it would be considered as returning `False`.
However, this duck-typing could accidentally lead to unexpected situations, so we decided to be more strict on the return type.
As of now, a deprecation warning is emitted if the method returns anything that is not "boolean-like", i.e., does not implement the `__bool__` method.
If you see this warning, please make sure to return a boolean, like the built-ins `True` or `False`, or a `numpy.bool` or `aiida.orm.Bool`.
See the pull request for more details.
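The minimal fix for a predicate like the one above is to always return an explicit boolean. A plain-Python sketch of the difference (work chain machinery omitted):

```python
def some_conditional_old(something):
    # Old style: falls through to an implicit `None` when the test fails,
    # which is "falsey" but now triggers the deprecation warning.
    if something == 'something':
        return True


def some_conditional_fixed(something):
    # Fixed: always returns an explicit boolean, satisfying the new check.
    return something == 'something'


assert some_conditional_old('other') is None      # falsey, but not a bool
assert some_conditional_fixed('other') is False   # boolean-like
```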
It is now possible to define on a code object whether it should be run with or without MPI through the `with_mpi` attribute.
It can be set from the Python API as `AbstractCode(with_mpi=with_mpi)` or through the `--with-mpi / --no-with-mpi` option of the `verdi code create` CLI command.
This option adds a way to control the use of MPI in calculation jobs, in addition to the existing controls defined by the `CalcJob` plugin and the `metadata.options.withmpi` input.
For more details on how these are controlled and how conflicts are handled, please refer to the documentation.
Support is added for running calculations within Docker containers.
For example, to run Quantum ESPRESSO `pw.x` in a Docker container, write the following file to `config.yml`:
```yaml
label: qe-pw-on-docker
computer: localhost
engine_command: docker run -i -v $PWD:/workdir:rw -w /workdir {image_name} sh -c
image_name: haya4kun/quantum_espresso
filepath_executable: pw.x
default_calc_job_plugin: quantumespresso.pw
use_double_quotes: false
wrap_cmdline_params: true
```
and run the CLI command:
```console
verdi code create core.code.containerized --config config.yml --non-interactive
```
This should create a `ContainerizedCode` that you can now use to launch a `PwCalculation`.
For more details, please refer to the documentation.
It is now possible to export the configuration of an existing code through the `verdi code export` command.
The produced YAML file can be used to recreate the code through the `verdi code create` command.
Note that you should use the correct subcommand based on the type of the original code.
For example, if it was an `InstalledCode`, you should use `verdi code create core.code.installed`.
For legacy `Code` instances, you should use `verdi code setup`.
See the pull request for more details.
- `AbstractCode`: Add the `with_mpi` attribute [#5922]
- `ContainerizedCode`: Add support for Docker images to use as `Code` for `CalcJob`s [#5841]
- `InstalledCode`: Allow relative path for `filepath_executable` [#5879]
- CLI: Allow specifying output filename in `verdi node graph generate` [#5897]
- CLI: Add `--timeout` option to all `verdi daemon` commands [#5966]
- CLI: Add the `verdi calcjob remotecat` command [#4861]
- CLI: Add the `verdi code export` command [#5860]
- CLI: Improved customizability and scriptability of `verdi storage maintain` [#5936]
- CLI: `verdi quicksetup`: Further reduce required user interaction [#5768]
- CLI: `verdi computer test`: Add test for login shell being slow [#5845]
- CLI: `verdi process list`: Add `exit_message` as projectable attribute [#5853]
- CLI: `verdi node delete`: Add verbose list of pks to be deleted [#5878]
- CLI: Fail command if `--config` file contains unknown key [#5939]
- CLI: `verdi daemon status`: Do not except when no profiles are defined [#5874]
- ORM: Add unary operations `+`, `-` and `abs` to `NumericType` [#5946]
- Process functions: Support class member functions as process functions [#4963]
- Process functions: Infer argument `valid_type` from type hints [#5900]
- Process functions: Parse docstring to set input port help attribute [#5919]
- Process functions: Add support for variadic arguments [#5691]
- Process functions: Allow nested output namespaces [#5954]
- `Process`: Store JSON-serializable metadata inputs on the node [#5801]
- `Port`: Add the `is_metadata` keyword [#5801]
- `ProcessBuilder`: Include metadata inputs in `get_builder_restart` [#5801]
- `StructureData`: Add `mode` argument to `get_composition` [#5926]
- `Scheduler`: Allow terminating job if submission script is invalid [#5849]
- `SlurmScheduler`: Detect broken submission scripts for invalid account [#5850]
- `SlurmScheduler`: Parse the `NODE_FAIL` state [#5866]
- `WorkChain`: Add dataclass serialisation to context [#5833]
- `IcsdDbImporter`: Add `is_theoretical` tag to queried entries [#5868]
- CLI: Prefix the `verdi data` subcommands with `core.` [#5846]
- CLI: Respect config log levels if `--verbosity` not explicitly passed [#5925]
- CLI: `verdi config list`: Do not except if no profiles are defined [#5921]
- CLI: `verdi code show`: Add missing code attributes [#5916]
- CLI: `verdi quicksetup`: Fix incorrect role when creating database [#5828]
- CLI: Fix error in `aiida.cmdline.utils.log.CliFormatter` [#5957]
- Daemon: Fix false-positive of stopped daemon in `verdi daemon status` [#5862]
- `DaemonClient`: Fix and homogenize use of `timeout` in client calls [#5960]
- `ProcessBuilder`: Fix bug in `_recursive_merge` [#5801]
- `QueryBuilder`: Catch new exception raised by `sqlalchemy>=1.4.45` [#5875]
- Fix the `%verdi` IPython magics utility [#5961]
- Fix bug in `aiida.engine.utils.instantiate_process` [#5952]
- Fix incorrect import of exception from `kiwipy.communications` [#5947]
- `Scheduler`: Move setting of environment variables into base class [#5948]
- `WorkChains`: Emit deprecation warning if predicate `if_/while_` does not return boolean-like [#5924]
- `DaemonClient`: Refactor to include parsing of client response [#5850]
- ORM: Remove `Entity.from_backend_entity` from the public API [#5447]
- `PbsproScheduler`: Replace deprecated `ppn` tag with `ncpus` [#5910]
- `ProcessBuilder`: Move `_prune` method to standalone utility [#5801]
- `verdi process list`: Simplify the daemon load implementation [#5850]
- Add FAQ on MFA-enabled computers [#5887]
- Add link to all `metadata.options` inputs in `CalcJob` submission example [#5912]
- Add warning that `Data` constructor is not called on loading [#5898]
- Add note on how to create a code that uses Conda environment [#5905]
- Add `--without-daemon` flag to benchmark script [#5839]
- Add alternative for conda env activation in submission script [#5950]
- Clarify that process functions can be exposed in work chains [#5919]
- Fix the `intro/tutorial.md` notebook [#5961]
- Fix the overindentation of lists [#5915]
- Hide the "Edit this page" button on the API reference pages [#5956]
- Note that an entry point is required for using a data plugin [#5907]
- Set `use_login_shell=False` for `localhost` in performance benchmark [#5847]
- Small improvements to the benchmark script [#5854]
- Use mamba instead of conda [#5891]
- Use mamba instead of conda [#5891]
- Add devcontainer for easy integration with VSCode [#5913]
- CI: Update `sphinx-intl` and install transifex CLI [#5908]
- Fix the `test-install` workflow [#5873]
- Pre-commit: Improve typing of `aiida.schedulers.scheduler` [#5849]
- Pre-commit: Set `yapf` option `allow_split_before_dict_value = false` [#5931]
- Process functions: Replace `getfullargspec` with `signature` [#5900]
- Fixtures: Add argument `use_subprocess` to `run_cli_command` [#5846]
- Fixtures: Change default `use_subprocess=False` for `run_cli_command` [#5846]
- Tests: Use `use_subprocess=False` and `suppress_warnings=True` [#5846]
- Tests: Fix bugs revealed by running with `use_subprocess=True` [#5846]
- Typing: Annotate `aiida/orm/utils/serialize.py` [#5832]
- Typing: Annotate `aiida/tools/visualization/graph.py` [#5821]
- Typing: Use modern syntax for `aiida.engine.processes.functions` [#5900]
- Add compatibility for `ipython~=8.0` [#5888]
- Bump cryptography from 36.0.0 to 39.0.1 [#5885]
- Remove upper limit on `werkzeug` [#5904]
- Update pre-commit requirement `isort==5.12.0` [#5877]
- Update requirement `importlib-metadata~=4.13` [#5963]
- Bump `graphviz` version to `0.19` [#5965]
Thanks a lot to the following new contributors:
- Critical bug fix: Fix bug causing `CalcJob`s to except after restarting daemon [#5886]
- Critical bug fix: Revert the changes of PR [#5804] released with v2.2.0, which addressed a bug when mutating nodes during `QueryBuilder.iterall`. Unfortunately, the change caused changes performed by `verdi` commands (as well as changes made in `verdi shell`) to not be persisted to the database. [#5851]
This feature release comes with a significant feature and a number of improvements and fixes.
In certain use cases, it is useful to stop a calculation job prematurely, before it finishes or the requested wallclock time runs out.
An example is a calculation that appears to be going nowhere, such that continuing it would only waste computational resources.
Up till now, a calculation job could only be "manually" stopped, through `verdi process kill`.
In this release, functionality is added that allows calculation jobs to be monitored automatically by the daemon and have them stopped when certain conditions are met.
Monitors can be attached to a calculation job through the `monitors` input namespace:
```python
builder = load_code().get_builder()
builder.monitors = {
    'monitor_a': Dict({'entry_point': 'some.monitor'}),
    'monitor_b': Dict({'entry_point': 'some.other.monitor'}),
}
```
Monitors are referenced by the entry points with which they are registered in the `aiida.calculations.monitors` entry point group.
A monitor is essentially a function that implements the following interface:
```python
from aiida.orm import CalcJobNode
from aiida.transports import Transport


def monitor(node: CalcJobNode, transport: Transport) -> str | CalcJobMonitorResult | None:
    """Retrieve and inspect files in working directory of job to determine whether the job should be killed.

    :param node: The node representing the calculation job.
    :param transport: The transport that can be used to retrieve files from remote working directory.
    :returns: A string if the job should be killed, `None` otherwise.
    """
```
The `transport` argument allows the monitor to fetch files from the working directory of the calculation.
If the job should be killed, the monitor simply returns a string with the reason, and the daemon will send the message to kill the job.
For more information and a complete description of the interface, please refer to the documentation. This functionality was accepted based on AEP 008 which provides more detail on the design choices behind this implementation.
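For illustration, a minimal hypothetical monitor that kills a job once a marker string appears in its output file could look as follows. `Transport.getfile` and `CalcJobNode.get_remote_workdir` are the real AiiDA APIs used here, but the output filename `aiida.out` and the marker logic are assumptions for this sketch:

```python
import os
import tempfile


def monitor(node, transport):
    """Return a kill message if 'NaN' appears in the (assumed) output file, else None.

    Hypothetical example monitor: fetches `aiida.out` from the remote working
    directory via the transport and inspects its content.
    """
    with tempfile.TemporaryDirectory() as tmpdir:
        local_path = os.path.join(tmpdir, 'aiida.out')
        remote_path = os.path.join(node.get_remote_workdir(), 'aiida.out')
        transport.getfile(remote_path, local_path)
        with open(local_path) as handle:
            if 'NaN' in handle.read():
                return 'Detected NaN in the output: killing the job.'
    return None
```

Such a function would be registered under the `aiida.calculations.monitors` entry point group in the plugin package.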
- `CalcJob`: Add functionality that allows live monitoring [#5659]
- CLI: Add `--raw` option to `verdi code list` [#5763]
- CLI: Add the `-h` short-hand flag for `--help` to `verdi` [#5792]
- CLI: Add short option names for `verdi code create` [#5799]
- `StorageBackend`: Add the `initialise` method [#5760]
- Fixtures: Add support for `Process` inputs to `submit_and_await` [#5780]
- Fixtures: Add `aiida_computer_local` and `aiida_computer_ssh` [#5786]
- Fixtures: Modularize fixtures creating AiiDA test instance and profile [#5758]
- `Computer`: Add the `is_configured` property [#5786]
- Plugins: Add `aiida.storage` to `ENTRY_POINT_GROUP_FACTORY_MAPPING` [#5798]
- `verdi run`: Do not add `pathlib.Path` instance to `sys.path` [#5810]
- Process functions: Restore support for dynamic nested input namespaces [#5808]
- `Process`: Properly clean up when an exception occurs in a state transition [#5697]
- `Process`: Update outputs before updating node process state [#5813]
- `PsqlDosMigrator`: Refactor the connection handling [#5783]
- `PsqlDosBackend`: Use transaction whenever mutating session state, fixing exception when storing a node or group during `QueryBuilder.iterall` [#5804]
- `InstalledCode`: Fix bug in `validate_filepath_executable` for SSH [#5787]
- `WorkChain`: Protect public methods from being subclassed. Now if you accidentally override, for example, the `run` method of the `WorkChain`, an exception is raised instead of silently breaking the work chain [#5779]
- Rename `PsqlDostoreMigrator` to `PsqlDosMigrator` [#5761]
- ORM: Remove `pymatgen` version check in `StructureData.set_pymatgen_structure` [#5777]
- `StorageBackend`: Remove `recreate_user` from `_clear` [#5772]
- `PsqlDosMigrator`: Remove hardcoding of table name in database reset [#5781]
- Dependencies: Add support for Python 3.11 [#5778]
- Docs: Correct command to enable `verdi` tab-completion for `fish` shell [#5784]
- Docs: Fix transport & scheduler type in localhost setup [#5785]
- Docs: Fix minor formatting issues in "How to run a code" [#5794]
- CI: Increase load limit for `verdi` to 0.5 seconds [#5773]
- CI: Add `workflow_dispatch` trigger to `nightly.yml` [#5760]
- ORM: Fix typing of `aiida.orm.nodes.data.code` module [#5830]
- Pin version of `setuptools` as it breaks dependencies [#5782]
- Tests: Use explicit `aiida_profile_clean` in process control tests [#5778]
- Tests: Replace all use of `aiida_profile_clean` with `aiida_profile` where a clean profile is not necessary [#5814]
- Tests: Deal with `run_via_daemon` returning `None` in RPN tests [#5813]
- Make type-checking opt-out [#5811]
- `BaseRestartWorkChain`: Fix bug in `_wrap_bare_dict_inputs` introduced in `v2.1.0` [#5757]
- Engine: Remove `*args` from the `Process.submit` method [#5753]
  Positional arguments were silently ignored, leading to a misleading error message. For example, if a user called

  ```python
  inputs = {}
  self.submit(cls, inputs)
  ```

  instead of the intended

  ```python
  inputs = {}
  self.submit(cls, **inputs)
  ```

  the returned error message was that one of the required inputs was not defined. Now a `TypeError` is correctly raised, saying that positional arguments are not supported.
- Process functions: Add serialization for Python base type defaults [#5744]
  Defining Python base types as defaults, such as:

  ```python
  @calcfunction
  def function(a, b=5):
      return a + b
  ```

  would raise an exception. The default is now automatically serialized, just as an input argument would be upon function call.
- Process control: Reinstate process status for paused/killed processes [#5754]
  A regression introduced in `aiida-core==2.1.0` caused the message `Killed through 'verdi process list'` to no longer be set on the `process_status` of the node.
- `QueryBuilder`: Use a nested session in `iterall` and `iterdict` [#5736]
  Modifying entities yielded by `QueryBuilder.iterall` and `QueryBuilder.iterdict` would raise an exception, for example:

  ```python
  for [node] in QueryBuilder().append(Node).iterall():
      node.base.extras.set('some', 'extra')
  ```
This feature release comes with a number of new features as well as quite a few fixes of bugs and stability issues. Further down you will find a complete list of changes, after a short description of some of the most important changes:
- Automatic input serialization in calculation and work functions
- Improved interface for creating codes
- Support for running code in containers
- Control daemon and processes from the API
- REST API can serve multiple profiles
- Pluginable data storage backends
- Full list of changes
The inputs to `calcfunction`s and `workfunction`s are now automatically converted to AiiDA data types if they are one of the basic Python types (`bool`, `dict`, `Enum`, `float`, `int`, `list` or `str`).
This means that code that looked like:
```python
from aiida.engine import calcfunction
from aiida.orm import Bool, Float, Int, Str

@calcfunction
def function(switch, threshold, count, label):
    ...

function(Bool(True), Float(0.25), Int(10), Str('some-label'))
```
can now be simplified to:
```python
from aiida.engine import calcfunction

@calcfunction
def function(switch, threshold, count, label):
    ...

function(True, 0.25, 10, 'some-label')
```
The `Code` data plugin was a single class that served two different types of codes: "remote" codes and "local" codes.
These names "remote" and "local" have historically caused a lot of confusion.
Likewise, using a single class `Code` for both implementations has also led to confusing interfaces.
To address this issue, the functionality has been split into two new classes, `InstalledCode` and `PortableCode`, that replace the "remote" and "local" code, respectively.
The installed code represents an executable binary that is already pre-installed on some compute resource.
The portable code represents a code (an executable plus any additional required files) that is stored in AiiDA's storage and can be automatically transferred to any computer before being executed.
Creating a new instance of these new code types is easy:
```python
from pathlib import Path
from aiida.orm import InstalledCode, PortableCode, load_computer

installed_code = InstalledCode(
    label='installed-code',
    computer=load_computer('localhost'),
    filepath_executable='/usr/bin/bash'
)
portable_code = PortableCode(
    label='portable-code',
    filepath_files=Path('/some/path/code'),
    filepath_executable='executable.exe'
)
```
Codes can also be created through the new `verdi` command `verdi code create`.
To specify the type of code to create, pass the corresponding entry point name as an argument.
For example, to create a new installed code, invoke:
```console
verdi code create core.code.installed
```
The options for each subcommand are automatically generated based on the code type, and so only options that are relevant to that code type will be prompted for.
The new code classes both subclass the `aiida.orm.nodes.data.code.abstract.AbstractCode` base class.
This means that both `InstalledCode`s and `PortableCode`s can be used as the `code` input for `CalcJob`s without problems.
The old `Code` class remains supported for the time being; however, it is deprecated and will be removed at some point.
The same goes for the `verdi code setup` command; please use `verdi code create` instead.
Existing codes will be automatically migrated to either an `InstalledCode` or a `PortableCode`.
It is strongly advised that you update any code that creates new codes to use these new plugin types.
Support is added to run calculation jobs inside a container. A containerized code can be set up through the CLI:
```console
verdi code create core.code.containerized \
    --label containerized \
    --image-name docker://alpine:3 \
    --filepath-executable /bin/sh \
    --engine-command "singularity exec --bind $PWD:$PWD {image_name}"
```
as well as through the API:
```python
from aiida.orm import ContainerizedCode, load_computer

code = ContainerizedCode(
    computer=load_computer('some-computer'),
    filepath_executable='/bin/sh',
    image_name='docker://alpine:3',
    engine_command='singularity exec --bind $PWD:$PWD {image_name}'
).store()
```
In the example above we use the Singularity containerization technology. For more information on what containerization programs are supported and how to configure them, please refer to the documentation.
Up till now, the daemon and live processes could only easily be controlled through `verdi daemon` and `verdi process`, respectively.
In this release, modules are added to provide the same functionality through the Python API.
The daemon can now be started and stopped through the `DaemonClient`, which can be obtained through the `get_daemon_client` utility function:
```python
from aiida.engine.daemon.client import get_daemon_client

client = get_daemon_client()
```
By default, this will give the daemon client for the current default profile. It is also possible to explicitly specify a profile:
```python
client = get_daemon_client(profile='some-profile')
```
The daemon can be started and stopped through the client:
```python
client.start_daemon()
assert client.is_daemon_running
client.stop_daemon(wait=True)
```
The functionality of `verdi process` to `play`, `pause` and `kill` processes is now made available through the `aiida.engine.processes.control` module.
Processes can be played, paused or killed through the `play_processes`, `pause_processes`, and `kill_processes` functions, respectively.
The processes to act upon are defined through their `ProcessNode`, which can be loaded using `load_node`.
```python
from aiida.engine.processes.control import kill_processes, pause_processes, play_processes

processes = [load_node(<PK1>), load_node(<PK2>)]
pause_processes(processes)  # Pause the processes
play_processes(processes)   # Play them again
kill_processes(processes)   # Kill the processes
```
Instead of specifying an explicit list of processes, the functions also accept the `all_entries` keyword argument:

```python
pause_processes(all_entries=True)  # Pause all running processes
```
Before, a single REST API could only serve data of a single profile at a time. This limitation has been removed and a single REST API instance can now serve data from all profiles of an AiiDA instance. To maintain backwards compatibility, the new functionality needs to be explicitly enabled through the configuration:
```console
verdi config set rest_api.profile_switching True
```
After the REST API is restarted, it will accept the `profile` query parameter, for example:

```
http://127.0.0.1:5000/api/v4/computers?profile=some-profile-name
```
If the specified profile is already loaded, the REST API functions exactly as without profile switching enabled. If another profile is specified, the REST API will first switch profiles before executing the request.
If the profile parameter is specified in a request and the REST API does not have profile switching enabled, a 400 response is returned.
Warning: this is beta functionality.
It is now possible to implement custom storage backends to control where all data of an AiiDA profile is stored.
To provide a data storage plugin, one should implement the `aiida.orm.implementation.storage_backend.StorageBackend` interface.
The default implementation provided by `aiida-core` is the `aiida.storage.psql_dos.backend.PsqlDosBackend`, which uses a PostgreSQL database for the provenance graph and a `disk-objectstore` container for repository files.
Storage backend plugins should be registered in the new entry point group `aiida.storage`.
The default storage backend `PsqlDosBackend` has the `core.psql_dos` entry point name.
The storage backend to be used for a profile can be specified using the `--db-backend` option in `verdi setup` and `verdi quicksetup`.
The entry point of the selected backend is stored in the `storage.backend` key of a profile configuration:
```json
{
    "profiles": {
        "profile-name": {
            "PROFILE_UUID": "",
            "storage": {
                "backend": "core.psql_dos",
                "config": {}
            },
            "process_control": {},
            "default_user_email": "aiida@localhost",
            "test_profile": false
        }
    }
}
```
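For a plugin package, registering a custom backend in the `aiida.storage` entry point group could look as follows in `pyproject.toml`; the package, module, and class names here are hypothetical:

```toml
[project.entry-points."aiida.storage"]
"acme.storage" = "aiida_acme.storage:AcmeStorageBackend"
```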
At the moment, it is not quite clear whether the abstract interface `StorageBackend` properly abstracts everything that is needed to implement any storage backend.
For the time being, it is therefore advised to subclass `PsqlDosBackend` and replace only the parts required for the use case, such as the file repository implementation.
- `Process`: Add hook to customize the `process_label` attribute [#5713]
- Add the `ContainerizedCode` data plugin [#5667]
- API: Add the `aiida.engine.processes.control` module [#5630]
- `PluginVersionProvider`: Add support for entry point strings [#5662]
- `verdi setup`: Add the `--profile-uuid` option [#5673]
- Process control: Add the `revive_processes` method [#5677]
- Process functions: Add the `get_source_code_function` method [#4554]
- CLI: Improve the quality of `verdi code list` output [#5750]
- CLI: Add the `verdi devel revive` command [#5677]
- CLI: `verdi process status --max-depth` [#5727]
- CLI: `verdi setup/quicksetup` store autofill user info early [#5729]
- CLI: Add the `devel launch-add` command [#5733]
- CLI: Make filename in `verdi node repo cat` optional for `SinglefileData` [#5747]
- CLI: Add the `verdi devel rabbitmq` command group [#5718]
- API: Add function to start the daemon [#5625]
- `BaseRestartWorkChain`: Add the `get_outputs` hook [#5618]
- `CalcJob`: Extend `retrieve_list` syntax with `depth=None` [#5651]
- `CalcJob`: Allow wildcards in `stash.source_list` paths [#5601]
- Add global config option `rest_api.profile_switching` [#5054]
- REST API: Make the profile configurable as request parameter [#5054]
- `ProcessFunction`: Automatically serialize Python base type inputs [#5688]
- `BaseRestartWorkChain`: Allow to override priority in `handler_overrides` [#5546]
- ORM: Add `entry_point` classproperty to `Node` and `Group` [#5437]
- Add the `aiida.storage` entry point group [#5501]
- Add the config option `storage.sandbox` [#5501]
- Add the `InstalledCode` and `PortableCode` data plugins [#5510]
- CLI: Add the `verdi code create` command group [#5510]
- CLI: Add the `DynamicEntryPointCommandGroup` command group [#5510]
- Add a client to connect to RabbitMQ Management HTTP API [#5718]
- `LsfScheduler`: Add support for `num_machines` [#5153]
- `JobResource`: Add the `accepts_default_memory_per_machine` [#5642]
- `AbstractCode`: Add abstraction methods for command line parameters [#5664]
- `ArithmeticAddCalculation`: Add the `metadata.options.sleep` input [#5663]
- `DaemonClient`: Add the `get_env` method [#5631]
- Tests: Make daemon fixtures available to plugin packages [#5701]
- `verdi plugin list`: Show which exit codes invalidate cache [#5710]
- `verdi plugin list`: Show full help for input and output ports [#5711]
- `ArrayData`: replace `nan` and `inf` with `None` when dumping to JSON [#5613]
- Archive: add missing migration of transport entry points [#5604]
- `BaseRestartWorkChain`: fix `handler_overrides` ignoring `enabled=False` [#5598]
- CLI: allow setting options for config without profiles [#5544]
- CLI: normalize use of colors [#5547]
- `Config`: fix bug in downgrade past version 6 [#5528]
- `DaemonClient`: close `CircusClient` after call [#5631]
- Engine: Do not call serializer for `None` values [#5694]
- Engine: Do not let `DuplicateSubscriberError` except a `Process` [#5715]
- ORM: raise when trying to pickle instance of `Entity` [#5549]
- ORM: Return `None` in `get_function_source_code` instead of excepting [#5730]
- Fix `get_entry_point` not raising even for duplicate entry points [#5531]
- Fix: reference to command in message for `verdi storage maintain` [#5558]
- Fix: `is_valid_cache` setter for `ProcessNode`s [#5583]
- Fix exception when importing an archive into a profile with many nodes [#5740]
- `Profile`: make definition of daemon filepaths dynamic [#5631]
- Fixtures: Fix bug in reset of `empty_config` fixture [#5717]
- `PsqlDosBackend`: ensure sqla sessions are garbage-collected on `close` [#5728]
- `TrajectoryData`: Fix bug in `get_step_data` [#5734]
- `ProfileManager`: restart daemon in `clear_profile` [#5751]
- Mark relevant `Process` exit codes as `invalidates_cache=True` [#5709]
- `TemplatereplacerCalculation`: Change exit codes to be in 300 range [#5709]
- Add the prefix `core.` to all storage entry points [#5501]
- `CalcJob`: Fully abstract interaction with `AbstractCode` in presubmit [#5666]
- CLI: make label the default group list order in `verdi group list` [#5523]
- Config: add migration to properly prefix storage backend [#5501]
- Move query utils from `aiida.cmdline` to `aiida.tools` [#5630]
- `SandboxFolder`: decouple the location from the profile [#5496]
- `TemplatereplacerDoublerParser`: rename and generalize implementation [#5669]
- `Process`: Allow `None` for input ports that are not required [#5722]
- RabbitMQ: Remove support for v3.5 and older [#5718]
- Relax `wrapt` requirement [#5607]
- Set upper limit `werkzeug<2.2` [#5606]
- Update requirement `click~=8.1` [#5504]
- Deprecate `Profile.repository_path` [#5516]
- Deprecate: `verdi code setup` and `CodeBuilder` [#5510]
- Deprecate the method `aiida.get_strict_version` [#5512]
- Remove use of legacy `Code` [#5510]
- Add section on basic performance benchmark with automated benchmark script [#5724]
- Add `-U` flag to PostgreSQL database backup command [#5550]
- Clarify excepted and killed calculations are not cached [#5525]
- Correct snippet for workchain context nested keys [#5551]
- Plugin package setup: add PEP 621 example [#5626]
- Remove note on disk space for caching [#5534]
- Remove explicit release tag in Docker image name [#5671]
- Remove example REST API extension with POST requests [#5737]
- Resubmit a `Process` from a `ProcessNode` [#5579]
- Add a notification for nightly workflow on fail [#5605]
- CI: Remove `--use-feature` flag in `pip install` of CI [#5703]
- Fixtures: Add `started_daemon_client` and `stopped_daemon_client` [#5631]
- Fixtures: Add the `entry_points` fixture to dynamically add and remove entry points [#5745]
- Refactor: extract `CalcJob` specific input handling from `Process` [#5539]
- Refactor: remove unnecessary use of `tempfile.mkdtemp` [#5639]
- Refactor: Remove internal use of various deprecated resources [#5716]
- Refactor: Turn `aiida.manage.external.rmq` into a package [#5718]
- Tests: remove legacy `tests/utils/configuration.py` [#5500]
- Tests: fix the RPN work chains for the nightly build [#5529]
- Tests: Manually stop daemon after `verdi devel revive` test [#5689]
- Tests: Add verbose info if `submit_and_wait` times out [#5689]
- Tests: Do not set default memory for `localhost` fixture [#5689]
- Tests: Suppress RabbitMQ and developer version warnings [#5689]
- Tests: Add the `EntryPointManager` exposed as `entry_points` fixture [#5656]
- Tests: Only reset database connection at end of suite [#5641]
- Tests: Suppress logging and warnings from temporary profile fixture [#5702]
- Engine: Fix bug that allowed non-storable inputs to be passed to process [#5532]
- Engine: Fix bug when caching from process with nested outputs [#5538]
- Archive: Fix bug in archive creation after packing of file repository [#5570]
- `QueryBuilder`: apply escape `\` in `like` and `ilike` for a `sqlite` backend, such as export archives [#5553]
- `QueryBuilder`: Fix bug in distinct queries always projecting the first entity, even if not projected explicitly [#5654]
- `CalcJob`: fix bug in `local_copy_list` provenance exclusion [#5648]
- `Repository.copy_tree`: omit subdirectories from `path` when copying [#5648]
- Docs: Add intersphinx aliases for `__all__` imports, so the shortcut imports can also be used in third-party packages (e.g. `aiida.orm.nodes.node.Node` as well as `aiida.orm.Node`) [#5657]
Update of the Dockerfile base image (`aiidateam/aiida-prerequisites`) to version `0.6.0`.
- REST API: treat `false` as `False` in URL parsing [#5573]
- REST API: add support for byte streams through a custom JSON encoder [#5576]
- Fix incompatibility with `click>=8.1` and require `click==8.1` as a minimum by @sphuber in [#5504]
This release finalises the v2.0.0b1 changes.
:::{note}
The restructuring is fully backward-compatible, and existing methods/attributes will continue to work until `aiida-core` v3.0.
Deprecation warnings are also currently turned off by default.
To identify these deprecations in your code base (for example when running unit tests), activate the `AIIDA_WARN_v3` environment variable:
export AIIDA_WARN_v3=1
:::
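For pytest-based test suites, these warnings can additionally be escalated to test failures. A minimal sketch, assuming the deprecations are emitted as `AiidaDeprecationWarning` from `aiida.common.warnings` (check your installed version) and that `AIIDA_WARN_v3=1` is set in the test environment:

```ini
; pytest.ini -- sketch: fail tests on AiiDA v3 deprecation warnings
; (AIIDA_WARN_v3 must still be set, otherwise the warnings are never emitted)
[pytest]
filterwarnings =
    error::aiida.common.warnings.AiidaDeprecationWarning
```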
The `Node` class (and thus its subclasses) has many methods and attributes in its public namespace.
This has been noted as a problem for users relying on auto-completion, since it makes it difficult to select suitable methods and attributes.
These methods/attributes have now been partitioned into "sub-namespaces" for specific purposes:
- `Node.base.attributes`: Interface to the attributes of a node instance.
- `Node.base.caching`: Interface to control caching of a node instance.
- `Node.base.comments`: Interface for comments of a node instance.
- `Node.base.extras`: Interface to the extras of a node instance.
- `Node.base.links`: Interface for links of a node instance.
- `Node.base.repository`: Interface to the file repository of a node instance.
:::{dropdown} Full list of re-naming
Current name | New name |
---|---|
`Collection` | Deprecated, use `NodeCollection` directly |
`add_comment` | `Node.base.comments.add` |
`add_incoming` | `Node.base.links.add_incoming` |
`attributes` | `Node.base.attributes.all` |
`attributes_items` | `Node.base.attributes.items` |
`attributes_keys` | `Node.base.attributes.keys` |
`check_mutability` | `Node._check_mutability_attributes` |
`clear_attributes` | `Node.base.attributes.clear` |
`clear_extras` | `Node.base.extras.clear` |
`clear_hash` | `Node.base.caching.clear_hash` |
`copy_tree` | `Node.base.repository.copy_tree` |
`delete_attribute` | `Node.base.attributes.delete` |
`delete_attribute_many` | `Node.base.attributes.delete_many` |
`delete_extra` | `Node.base.extras.delete` |
`delete_extra_many` | `Node.base.extras.delete_many` |
`delete_object` | `Node.base.repository.delete_object` |
`erase` | `Node.base.repository.erase` |
`extras` | `Node.base.extras.all` |
`extras_items` | `Node.base.extras.items` |
`extras_keys` | `Node.base.extras.keys` |
`get` | Deprecated, use `Node.objects.get` |
`get_all_same_nodes` | `Node.base.caching.get_all_same_nodes` |
`get_attribute` | `Node.base.attributes.get` |
`get_attribute_many` | `Node.base.attributes.get_many` |
`get_cache_source` | `Node.base.caching.get_cache_source` |
`get_comment` | `Node.base.comments.get` |
`get_comments` | `Node.base.comments.all` |
`get_extra` | `Node.base.extras.get` |
`get_extra_many` | `Node.base.extras.get_many` |
`get_hash` | `Node.base.caching.get_hash` |
`get_incoming` | `Node.base.links.get_incoming` |
`get_object` | `Node.base.repository.get_object` |
`get_object_content` | `Node.base.repository.get_object_content` |
`get_outgoing` | `Node.base.links.get_outgoing` |
`get_stored_link_triples` | `Node.base.links.get_stored_link_triples` |
`glob` | `Node.base.repository.glob` |
`has_cached_links` | `Node.base.caching.has_cached_links` |
`id` | Deprecated, use `pk` |
`is_created_from_cache` | `Node.base.caching.is_created_from_cache` |
`is_valid_cache` | `Node.base.caching.is_valid_cache` |
`list_object_names` | `Node.base.repository.list_object_names` |
`list_objects` | `Node.base.repository.list_objects` |
`objects` | `collection` |
`open` | `Node.base.repository.open` |
`put_object_from_file` | `Node.base.repository.put_object_from_file` |
`put_object_from_filelike` | `Node.base.repository.put_object_from_filelike` |
`put_object_from_tree` | `Node.base.repository.put_object_from_tree` |
`rehash` | `Node.base.caching.rehash` |
`remove_comment` | `Node.base.comments.remove` |
`repository_metadata` | `Node.base.repository.metadata` |
`repository_serialize` | `Node.base.repository.serialize` |
`reset_attributes` | `Node.base.attributes.reset` |
`reset_extras` | `Node.base.extras.reset` |
`set_attribute` | `Node.base.attributes.set` |
`set_attribute_many` | `Node.base.attributes.set_many` |
`set_extra` | `Node.base.extras.set` |
`set_extra_many` | `Node.base.extras.set_many` |
`update_comment` | `Node.base.comments.update` |
`validate_incoming` | `Node.base.links.validate_incoming` |
`validate_outgoing` | `Node.base.links.validate_outgoing` |
`validate_storability` | `Node._validate_storability` |
`verify_are_parents_stored` | `Node._verify_are_parents_stored` |
`walk` | `Node.base.repository.walk` |
:::
The aiida
IPython magic commands are now available to load via:
%load_ext aiida
In addition to the previous `%aiida` magic command, which loads a profile,
one can also use the `%verdi` magic command.
This command runs the verdi
CLI using the currently loaded profile of the IPython/Jupyter session.
%verdi status
See the Basic Tutorial for example usage.
The `SqliteTempBackend` utilises an in-memory SQLite database to store data, allowing it to be transiently created/destroyed within a single Python session, without the need for PostgreSQL.
As such, it is useful for demonstration and testing purposes, where no persistent storage is required.
To load a temporary profile, you can use the following code:
from aiida import load_profile
from aiida.storage.sqlite_temp import SqliteTempBackend
profile = load_profile(
SqliteTempBackend.create_profile(
'myprofile',
options={
'runner.poll.interval': 1
},
debug=False
),
)
See the Basic Tutorial for example usage.
Below is a list of some key pull requests that have been merged into version 2.0.0:
- Node namespace re-structuring:
  - 🔧 MAINTAIN: Add `warn_deprecation` function, `Node.base`, and move `NodeRepositoryMixin` -> `NodeRepository` by @chrisjsewell in #5472
  - ♻️ REFACTOR: `EntityAttributesMixin` -> `NodeAttributes` by @chrisjsewell in #5442
  - ♻️ REFACTOR: Move methods to `Node.comments` by @chrisjsewell in #5446
  - ♻️ REFACTOR: `EntityExtrasMixin` -> `EntityExtras` by @chrisjsewell in #5445
  - ♻️ REFACTOR: Move link related methods to `Node.base.links` by @sphuber in #5480
  - ♻️ REFACTOR: Move caching related methods to `Node.base.caching` by @sphuber in #5483
- Storage:
- ORM:
  - 👌 IMPROVE: `StructureData`: allow to be initialised without a specified cell by @ltalirz in #5341
- Processing:
  - 👌 IMPROVE: Allow `engine.run` to work without RabbitMQ by @chrisjsewell in #5448
  - 👌 IMPROVE: `JobTemplate`: change `CodeInfo` to `JobTemplateCodeInfo` in `codes_info` by @unkcpz in #5350
    - This is required for a containerized code implementation
  - 👌 IMPROVE: Add option to use double quotes for `Code` and `Computer` CLI arguments by @unkcpz in #5478
- Transport and Scheduler:
- IPython:
  - ✨ NEW: Add `%verdi` IPython magic by @chrisjsewell in #5448
- Dependencies:
  - ♻️ REFACTOR: drop the `python-dateutil` library by @sphuber
(release/2.0.0b1)=
The version 2 release of aiida-core
largely focusses on major improvements to the design of data storage within AiiDA, as well as updates to core dependencies and removal of deprecated APIs.
Assuming users have already addressed deprecation warnings from aiida-core
v1.6.x, there should be limited impact on existing code.
For plugin developers, the AiiDA 2.0 plugin migration guide provides a step-by-step guide on how to update their plugins.
For existing profiles and archives, a migration is required before they are compatible with the new version.
:::{tip}
Before updating your aiida-core
installation, it is advisable to make sure you create a full backup of your profiles,
using the current version of aiida-core
you have installed.
For backup instructions, using aiida-core v1.6.7, see this documentation.
:::
Following the NEP 029 timeline, support for Python 3.7 is dropped as of December 26 2021, and support for Python 3.10 is added.
AiiDA's use of entry points, to allow plugins to extend the functionality of AiiDA, is described in the plugins topic section.
The use of reentry scan
, for loading plugin entry points, is no longer necessary.
Use of the reentry dependency has been replaced by the built-in importlib.metadata library. This library requires no additional loading step.
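As an illustration of querying entry points directly with the standard library: the `console_scripts` group is used here only because it exists in most Python environments, whereas with `aiida-core` installed the same call with `group='aiida.data'` would list the data plugins.

```python
from importlib.metadata import entry_points

# Query a single entry point group; no ``reentry scan`` step is needed.
try:
    eps = entry_points(group='console_scripts')  # Python >= 3.10
except TypeError:
    eps = entry_points().get('console_scripts', [])  # Python 3.8 / 3.9
names = sorted({ep.name for ep in eps})
```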
All entry points provided by aiida-core
now start with a core.
prefix, to make their origin more explicit and respect the naming guidelines of entry points in the AiiDA ecosystem.
The old names are still supported so as to not suddenly break existing code based on them, but they have now been deprecated.
For example:
from aiida.plugins import DataFactory
Int = DataFactory('int') # Old name
Int = DataFactory('core.int') # New name
Note that entry point names are also used on the command line. For example:
$ verdi computer setup -L localhost -T local -S direct
# now changed to
$ verdi computer setup -L localhost -T local -S core.direct
Full details on the AiiDA storage architecture are available in the storage architecture section.
The storage refactor incorporates four major changes:
- The `django` and `sqlalchemy` storage backends have been merged into a single `psql_dos` backend (PostgreSQL + Disk-Objectstore).
  - See the `psql_dos` storage format for details.
  - This has allowed for the `django` dependency to be dropped.
- The file system node repository has been replaced with an object store implementation.
  - The object store automatically deduplicates files, and allows for the compression of many objects into a single file, thus significantly reducing the number of files on the file system and memory utilisation (by orders of magnitude).
  - Note, to make full use of object compression, one should periodically run `verdi storage maintain`.
  - See the repository design section for details.
- Command-line interaction with a profile's storage has been moved from `verdi database` to `verdi storage`.
- The AiiDA archive format has been redesigned as the `sqlite_zip` storage backend.
  - See the `sqlite_zip` storage format for details.
  - The new format allows for streaming of data during exports and imports, significantly reducing both the time and memory utilisation of these actions.
  - The archive can now be loaded directly as a (read-only) profile, without the need to import it first; see this Jupyter Notebook tutorial.
The storage redesign also allows for profile switching, within the same Python process, and profile access within a context manager. For example:
from aiida import load_profile, profile_context, orm
with profile_context('my_profile_1'):
# The profile will be loaded within the context
node_from_profile_1 = orm.load_node(1)
# then the profile will be unloaded automatically
# load a global profile
load_profile('my_profile_2')
node_from_profile_2 = orm.load_node(1)
# switch to a different global profile
load_profile('my_profile_3', allow_switch=True)
node_from_profile_3 = orm.load_node(1)
See How to interact with AiiDA for more details.
On first using aiida-core
v2.0, your AiiDA configuration will be automatically migrated to the new version (this can be reverted by verdi config downgrade
).
To update existing profiles and archives to the new storage formats, simply use verdi storage migrate
and verdi archive migrate
, respectively.
:::{important} The migration of large storage repositories is a potentially time-consuming process. It may take several hours to complete, depending on the size of the repository. It is also advisable to make a full manual backup of any AiiDA setup with important data: see the installation management section for more information.
See also this testing of profile migrations, for some indicative timings. :::
In line with the storage improvements, {class}`~aiida.orm.Node` methods associated with the repository have some backwards incompatible changes:
:::{dropdown} `Node` repository method changes
Altered:
- `FileType`: moved from `aiida.orm.utils.repository` to `aiida.repository.common`
- `File`: moved from `aiida.orm.utils.repository` to `aiida.repository.common`
- `File`: changed from namedtuple to class
- `File`: can no longer be iterated over
- `File`: `type` attribute was renamed to `file_type`
- `Node.put_object_from_tree`: `path` argument was renamed to `filepath`
- `Node.put_object_from_file`: `path` argument was renamed to `filepath`
- `Node.put_object_from_tree`: `key` argument was renamed to `path`
- `Node.put_object_from_file`: `key` argument was renamed to `path`
- `Node.put_object_from_filelike`: `key` argument was renamed to `path`
- `Node.get_object`: `key` argument was renamed to `path`
- `Node.get_object_content`: `key` argument was renamed to `path`
- `Node.open`: `key` argument was renamed to `path`
- `Node.list_objects`: `key` argument was renamed to `path`
- `Node.list_object_names`: `key` argument was renamed to `path`
- `SinglefileData.open`: `key` argument was renamed to `path`
- `Node.open`: can no longer be called without context manager
- `Node.open`: only modes `r` and `rb` are supported, use `put_object_from_` methods instead
- `Node.get_object_content`: only modes `r` and `rb` are supported
- `Node.put_object_from_tree`: argument `contents_only` was removed
- `Node.put_object_from_tree`: argument `force` was removed
- `Node.put_object_from_file`: argument `force` was removed
- `Node.put_object_from_filelike`: argument `force` was removed
- `Node.delete_object`: argument `force` was removed

Added:
- `Node.walk`
- `Node.copy_tree`
- `Node.is_valid_cache` setter
- `Node.objects.iter_repo_keys`
Additionally, Node.open
should always be used as a context manager, for example:
with node.open('filename.txt') as handle:
content = handle.read()
:::
When using the {class}~aiida.orm.QueryBuilder
to query the database, the following changes have been made:
- The `Computer`'s `name` field is now replaced with `label` (as previously deprecated in v1.6)
- The `QueryBuilder.queryhelp` attribute is deprecated, in favour of the `as_dict` (and `from_dict`) methods
- The `QueryBuilder.first` method now allows the `flat` argument, which will return a single item, instead of a list of one item, if only a single projection is defined.
For example:
from aiida.orm import QueryBuilder, Computer
query = QueryBuilder().append(Computer, filters={'label': 'localhost'}, project=['label']).as_dict()
QueryBuilder.from_dict(query).first(flat=True) # -> 'localhost'
For further information, see How to find and query for data.
The {class}~aiida.orm.Dict
class has been updated to support more native dict
behaviour:
- Initialisation can now use `Dict({'a': 1})`, instead of `Dict(dict={'a': 1})`. This is also the case for `List([1, 2])`.
- Equality (`==`/`!=`) comparisons now compare the dictionaries, rather than the UUIDs
- The contains (`in`) operator now returns `True` if the dictionary contains the key
- The `items` method iterates over the `(key, value)` pairs
For example:
from aiida.orm import Dict
d1 = Dict({'a': 1})
d2 = Dict({'a': 1})
assert d1.uuid != d2.uuid
assert d1 == d2
assert not d1 != d2
assert 'a' in d1
assert list(d1.items()) == [('a', 1)]
Two new built-in data types have been added:
{class}~aiida.orm.EnumData
: A data plugin that wraps a Python enum.Enum
instance.
{class}~aiida.orm.JsonableData
: A data plugin that allows one to easily wrap existing objects that are JSON-able (via an as_dict
method).
See the data types section for more information.
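As an illustration of the `as_dict` contract that `JsonableData` relies on, here is a minimal sketch. The `Point` class and its methods are illustrative, not part of AiiDA, and the actual wrapping call is commented out since it requires a loaded profile.

```python
import json

class Point:
    """A plain Python object that is JSON-able via ``as_dict``/``from_dict``."""

    def __init__(self, x, y):
        self.x = x
        self.y = y

    def as_dict(self):
        return {'x': self.x, 'y': self.y}

    @classmethod
    def from_dict(cls, dictionary):
        return cls(dictionary['x'], dictionary['y'])

point = Point(1, 2)
# The dictionary round-trips through JSON, which is what makes it storable:
roundtrip = Point.from_dict(json.loads(json.dumps(point.as_dict())))
# from aiida.orm import JsonableData
# node = JsonableData(point)  # requires a loaded AiiDA profile
```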
A number of minor improvements have been made to the CalcJob
API:
- Both numpy arrays and `Enum` instances can now be serialized on process checkpoints.
- The `CalcJob.spec.metadata.options.rerunnable` option allows specifying whether the calculation can be rerun or requeued (dependent on the scheduler). Note, this should only be applied for idempotent codes.
- The `CalcJob.spec.metadata.options.environment_variables_double_quotes` option allows for double-quoting of environment variable declarations. In particular, this allows for use of the `$` character in the environment variable name, e.g. `export MY_FILE="$HOME/path/my_file"`.
- `CalcJob.local_copy_list` now allows for specifying entire directories to be copied to the local computer, in addition to individual files. Note that the directory itself won't be copied, just its contents.
- `WorkChain.to_context` now allows `.` delimited namespacing, which generates nested dictionaries. See Nested context keys for more information.
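The dot-delimited namespacing can be illustrated with a plain-Python sketch; this is not AiiDA's implementation, only the resulting structure:

```python
def nest(flat):
    """Expand dot-delimited keys into nested dictionaries."""
    nested = {}
    for key, value in flat.items():
        *parents, leaf = key.split('.')
        target = nested
        for part in parents:
            target = target.setdefault(part, {})
        target[leaf] = value
    return nested

# Keys like 'workchains.wc_1' end up grouped under a 'workchains' sub-dictionary:
result = nest({'workchains.wc_1': 'node_a', 'workchains.wc_2': 'node_b'})
# result == {'workchains': {'wc_1': 'node_a', 'wc_2': 'node_b'}}
```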
The new CalcJobImporter
class has been added, to define importers for computations completed outside of AiiDA.
These can help onboard new users to your AiiDA plugin.
For more information, see Writing importers for existing computations.
Plugins' implementations of `Scheduler._get_submit_script_header` should now utilise `Scheduler._get_submit_script_environment_variables` to format environment variable declarations, rather than handling it themselves. See the exemplar changes in #5283.
The Scheduler.get_valid_transports()
method has also been removed, use get_entry_point_names('aiida.schedulers')
instead (see {func}~aiida.plugins.entry_point.get_entry_point_names
).
See Scheduler plugins for more information.
The SshTransport
now supports the SSH ProxyJump
option, for tunnelling through other SSH hosts.
See How to setup SSH connections for more information.
Transport plugins now also support transferring bytes (rather than only Unicode strings) in the stdout/stderr of "remote" commands (see #3787). The required changes for transport plugins:
- rename the `exec_command_wait` function in your plugin implementation to `exec_command_wait_bytes`
- ensure the method signature follows {meth}`~aiida.transports.transport.Transport.exec_command_wait_bytes`, and that `stdin` accepts a `bytes` object
- return bytes for stdout and stderr (most probably you are already getting bytes internally; just do not decode them to strings)
For an exemplar implementation, see {meth}~aiida.transports.plugins.local.LocalTransport.exec_command_wait_bytes
,
or see Transport plugins for more information.
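The bytes-based contract can be sketched with a local command; this is illustrative only, an actual plugin would implement `exec_command_wait_bytes` on its transport class:

```python
import subprocess

# Run a command and keep stdout/stderr as bytes, as the transport API now requires.
completed = subprocess.run(['echo', 'hello'], capture_output=True)
stdout_bytes = completed.stdout             # bytes, returned as-is by the transport
stdout_text = stdout_bytes.decode('utf-8')  # decoding is now the caller's choice
```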
The Transport.get_valid_transports()
method has also been removed, use get_entry_point_names('aiida.transports')
instead (see {func}~aiida.plugins.entry_point.get_entry_point_names
).
The AiiDA command-line interface (CLI) can now be accessed as both verdi
and /path/to/bin/python -m aiida
.
The underlying dependency for this CLI, click
, has been updated to version 8, which contains built-in tab-completion support, to replace the old click-completion
.
The completion works the same, except that the string that should be put in the activation script to enable it is now shell-dependent.
See Activating tab-completion for more information.
Logging for the CLI has been updated, to standardise its use across all CLI commands. This means that all commands include the option:
-v, --verbosity [notset|debug|info|report|warning|error|critical]
Set the verbosity of the output.
By default the verbosity is set to REPORT
(see verdi config list
), which relates to using Logger.report
, as defined in {func}~aiida.common.log.report
.
The following specific changes and improvements have been made to the CLI commands:
verdi storage
(replaces verdi database
)
: This command group replaces the verdi database
command group, which is now deprecated, in order to represent its interaction with the full profile storage (not just database).
: verdi storage info
provides information about the entities contained for a profile.
: verdi storage maintain
has also been added, to allow for maintenance of the storage, for example, to optimise the storage size.
verdi archive version
and verdi archive info
(replace verdi archive inspect
)
: This change synchronises the commands with the new verdi storage version
and verdi storage info
commands.
verdi group move-nodes
: This command moves nodes from a source group to a target group (removing them from one and adding them to the other).
verdi code setup
: There is a small change to the order of prompts, in interactive mode.
: The uniqueness of labels is now validated, for both remote and local codes.
verdi code test
: Run tests for a given code to check whether it is usable, including whether remote executable files are available.
See AiiDA Command Line for more information.
The build tool for aiida-core
has been changed from setuptools
to flit
.
This allows for the project metadata to be fully specified in the pyproject.toml
file, using the PEP 621 format.
Note, editable installs (using the -e
flag for pip install
) of aiida-core
now require pip>=21
.
Type annotations have been added to most of the code base. Plugin developers can use mypy to check their code against the new type annotations.
All module level imports are now defined explicitly in __all__
.
See Overview of public API for more information.
The aiida.common.json
module is now deprecated.
Use the json
standard library instead.
The deprecated AiidaTestCase
class has been removed, in favour of the AiiDA pytest fixtures, which can be loaded in your conftest.py
using:
pytest_plugins = ['aiida.manage.tests.pytest_fixtures']
The fixtures `clear_database`, `clear_database_after_test` and `clear_database_before_test` are now deprecated, in favour of the `aiida_profile_clean` fixture, which ensures (before the test) that the default profile is reset with clean storage and that all previous resources are closed.
If you only require the profile to be reset before a class of tests, then you can use aiida_profile_clean_class
.
Below is a list of some key pull requests that have been merged into version 2.0.0b1
:
- Storage and migrations:
  - ♻️ REFACTOR: Implement the new file repository by @sphuber in #4345
  - ♻️ REFACTOR: New archive format by @chrisjsewell in #5145
  - ♻️ REFACTOR: Remove `QueryManager` by @chrisjsewell in #5101
  - ♻️ REFACTOR: Fully abstract QueryBuilder by @chrisjsewell in #5093
  - ✨ NEW: Add `Backend` bulk methods by @chrisjsewell in #5171
  - ⬆️ UPDATE: SQLAlchemy v1.4 (v2 API) by @chrisjsewell in #5103 and #5122
  - 👌 IMPROVE: Configuration migrations by @chrisjsewell in #5319
  - ♻️ REFACTOR: Remove Django storage backend by @chrisjsewell in #5330
  - ♻️ REFACTOR: Move archive backend to `aiida/storage` by @chrisjsewell in #5375
  - 👌 IMPROVE: Use `sqlalchemy.func` for JSONB QB filters by @ltalirz in #5393
  - ✨ NEW: Add Mechanism to lock profile access by @ramirezfranciscof in #5270
  - ✨ NEW: Add `verdi storage` CLI by @ramirezfranciscof in #4965 and #5156
- ORM API:
  - ♻️ REFACTOR: Add the `core.` prefix to all entry points by @sphuber in #5073
  - 👌 IMPROVE: Replace `InputValidationError` with `ValueError` and `TypeError` by @sphuber in #4888
  - 👌 IMPROVE: Add `Node.walk` method to iterate over repository content by @sphuber in #4935
  - 👌 IMPROVE: Add `Node.copy_tree` method by @sphuber in #5114
  - 👌 IMPROVE: Add `Node.is_valid_cache` setter property by @sphuber in #5114
  - 👌 IMPROVE: Add `Node.objects.iter_repo_keys` by @chrisjsewell in #5114
  - 👌 IMPROVE: Allow storing `Decimal` in `Node.attributes` by @dev-zero in #4964
  - 🐛 FIX: Initialising a `Node` with a `User` by @chrisjsewell in #5114
  - 🐛 FIX: Deprecate double underscores in `LinkManager` contains by @sphuber in #5067
  - ♻️ REFACTOR: Rename `name` field of `Computer` to `label` by @sphuber in #4882
  - ♻️ REFACTOR: `QueryBuilder.queryhelp` -> `QueryBuilder.as_dict` by @chrisjsewell in #5081
  - 👌 IMPROVE: Add `AuthInfo` joins to `QueryBuilder` by @chrisjsewell in #5195
  - 👌 IMPROVE: `QueryBuilder.first` add `flat` keyword by @sphuber in #5410
  - 👌 IMPROVE: Add `Computer.default_memory_per_machine` attribute by @yakutovicha in #5260
  - 👌 IMPROVE: Add `Code.validate_remote_exec_path` method to check executable by @sphuber in #5184
  - 👌 IMPROVE: Allow `source` to be passed as a keyword to `Data.__init__` by @sphuber in #5163
  - 👌 IMPROVE: `Dict.__init__` and `List.__init__` by @mbercx in #5165
  - ‼️ BREAKING: Compare `Dict` nodes by content by @mbercx in #5251
  - 👌 IMPROVE: Implement the `Dict.__contains__` method by @sphuber in #5251
  - 👌 IMPROVE: Implement `Dict.items()` method by @mbercx in #5251
  - 🐛 FIX: `BandsData.show_mpl` allow NaN values by @PhilippRue in #5024
  - 🐛 FIX: Replace `KeyError` with `AttributeError` in `TrajectoryData` methods by @Crivella in #5015
  - ✨ NEW: `EnumData` data plugin by @sphuber in #5225
  - ✨ NEW: `JsonableData` data plugin by @sphuber in #5017
  - 👌 IMPROVE: Register `List` class with `to_aiida_type` dispatch by @sphuber in #5142
  - 👌 IMPROVE: Register `EnumData` class with `to_aiida_type` dispatch by @sphuber in #5314
- Processing:
  - ✨ NEW: `CalcJob.get_importer()` to import existing calculations, run outside of AiiDA by @sphuber in #5086
  - ✨ NEW: `ProcessBuilder._repr_pretty_` ipython representation by @mbercx in #4970
  - 👌 IMPROVE: Allow `Enum` types to be serialized on `ProcessNode.checkpoint` by @sphuber in #5218
  - 👌 IMPROVE: Allow numpy arrays to be serialized on `ProcessNode.checkpoint` by @greschd in #4730
  - 👌 IMPROVE: Add `Calcjob.spec.metadata.options.rerunnable` to requeue/rerun calculations by @greschd in #4707
  - 👌 IMPROVE: Add `Calcjob.spec.metadata.options.environment_variables_double_quotes` to escape environment variables by @unkcpz in #5349
  - 👌 IMPROVE: Allow directories in `CalcJob.local_copy_list` by @sphuber in #5115
  - 👌 IMPROVE: Add support for `.` namespacing in the keys for `WorkChain.to_context` by @dev-zero in #4871
  - 👌 IMPROVE: Handle namespaced outputs in `BaseRestartWorkChain` by @unkcpz in #4961
  - 🐛 FIX: Nested namespaces in `ProcessBuilderNamespace` by @sphuber in #4983
  - 🐛 FIX: Ensure `ProcessBuilder` instances do not interfere by @sphuber in #4984
  - 🐛 FIX: Raise when `Process.exposed_outputs` gets non-existing `namespace` by @sphuber in #5265
  - 🐛 FIX: Catch `AttributeError` for unloadable identifier in `ProcessNode.is_valid_cache` by @sphuber in #5222
  - 🐛 FIX: Handle `CalcInfo.codes_run_mode` when `CalcInfo.codes_info` contains multiple codes by @unkcpz in #4990
  - 🐛 FIX: Check for recycled circus PID by @dev-zero in #5086
- Scheduler/Transport:
  - 👌 IMPROVE: Specify abstract methods on `Transport` by @chrisjsewell in #5242
  - ✨ NEW: Add support for SSH proxy_jump by @dev-zero in #4951
  - 🐛 FIX: Daemon hang when passing `None` as `job_id` by @ramirezfranciscof in #4967
  - 🐛 FIX: Avoid deadlocks when retrieving stdout/stderr via SSH by @giovannipizzi in #3787
  - 🐛 FIX: Use sanitised variable name in SGE scheduler job title by @mjclarke94 in #4994
  - 🐛 FIX: `listdir` method with pattern for SSH by @giovannipizzi in #5252
  - 👌 IMPROVE: `DirectScheduler`: use `num_cores_per_mpiproc` if defined in resources by @sphuber in #5126
  - 👌 IMPROVE: Add abstract generation of submit script env variables to `Scheduler` by @sphuber in #5283
- CLI:
  - ✨ NEW: Allow for CLI usage via `python -m aiida` by @chrisjsewell in #5356
  - ⬆️ UPDATE: `click==8.0` and remove `click-completion` by @sphuber in #5111
  - ♻️ REFACTOR: Replace `verdi database` commands with `verdi storage` by @ramirezfranciscof in #5228
  - ✨ NEW: Add verbosity control by @sphuber in #5085
  - ♻️ REFACTOR: Logging verbosity implementation by @sphuber in #5119
  - ✨ NEW: Add `verdi group move-nodes` command by @mbercx in #4428
  - 👌 IMPROVE: `verdi code setup`: validate the uniqueness of label for local codes by @sphuber in #5215
  - 👌 IMPROVE: `GroupParamType`: store group if created by @sphuber in #5411
  - 👌 IMPROVE: Show #procs/machine in `verdi computer show` by @dev-zero in #4945
  - 👌 IMPROVE: Notify users of runner usage in `verdi process list` by @ltalirz in #4663
  - 👌 IMPROVE: Set `localhost` as default for database hostname in `verdi setup` by @sphuber in #4908
  - 👌 IMPROVE: Make `verdi group` messages consistent by @CasperWA in #4999
  - 🐛 FIX: `verdi calcjob cleanworkdir` command by @zhubonan in #5209
  - 🔧 MAINTAIN: Add `verdi devel run-sql` by @chrisjsewell in #5094
- REST API:
- Developers:
  - 🔧 MAINTAIN: Move to flit for PEP 621 compliant package build by @chrisjsewell in #5312
  - 🔧 MAINTAIN: Make `__all__` imports explicit by @chrisjsewell in #5061
  - 🔧 MAINTAIN: Add `pre-commit.ci` by @chrisjsewell in #5062
  - 🔧 MAINTAIN: Add isort pre-commit hook by @chrisjsewell in #5151
  - ⬆️ UPDATE: Drop support for Python 3.7 by @sphuber in #5307
  - ⬆️ UPDATE: Support Python 3.10 by @csadorf in #5188
  - ♻️ REFACTOR: Remove `reentry` requirement by @chrisjsewell in #5058
  - ♻️ REFACTOR: Remove `simplejson` by @sphuber in #5391
  - ♻️ REFACTOR: Remove `ete3` dependency by @ltalirz in #4956
  - 👌 IMPROVE: Replace deprecated imp with importlib by @DirectriX01 in #4848
  - ⬆️ UPDATE: `sphinx~=4.1` (+ sphinx extensions) by @chrisjsewell in #5420
  - 🧪 CI: Move time consuming tests to separate nightly workflow by @sphuber in #5354
  - 🧪 TESTS: Entirely remove `AiidaTestCase` by @chrisjsewell in #5372
Thanks to all contributors: Contributor Graphs
Including first-time contributors:
- @DirectriX01 made their first contribution in [#4848]
- @mjclarke94 made their first contribution in [#4994]
- @janssenhenning made their first contribution in [#5064]
- The `markupsafe` dependency specification was moved to `install_requires`
- `DirectScheduler`: remove the `-e` option for bash invocation [#5264]
- Replace deprecated matplotlib config option 'text.latex.preview' [#5233]
- Add upper limit `markupsafe<2.1` to fix the documentation build [#5371]
- Add upper limit `pytest-asyncio<0.17` [#5309]
- CI: move Jenkins workflow to nightly GHA workflow [#5277]
- Docs: replace CircleCI build with ReadTheDocs [#5279]
- CI: run certain workflows only on main repo, not on forks [#5091]
- Revise Docker image build [#4997]
This patch release contains a number of helpful bug fixes and improvements.
- Add support for the `ProxyJump` SSH config option for setting up an arbitrary number of proxy jumps without additional processes by creating TCP channels over existing SSH connections. This provides improved control over the lifetime of the different connections. See SSH configuration for further details. [#4951]
- Allow numpy arrays to be serialized to a process checkpoint. [#4730]
- Add the `_merge` method to `ProcessBuilder`, to update the builder with a nested dictionary. [#4983]
- `verdi setup`: Set the default database hostname as `localhost`. [#4908]
- Allow `Node.__init__` to be constructed with a specific `User` node. [#4977]
- Minimize database logs of failed schema version retrievals. [#5056]
- Remove duplicate call of normal `callback` for `InteractiveOption`. [#5064]
- Update requirement `pyyaml~=5.4`, which contains critical security fixes. [#5060]
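The `ProxyJump` support mentioned above maps onto a standard OpenSSH client option; a minimal sketch of a multi-hop `~/.ssh/config` entry might look as follows (all hostnames here are hypothetical, not taken from the release notes):

```
Host cluster
  HostName cluster.example.com
  # Route the connection through two hops; OpenSSH opens TCP channels
  # over the existing connections instead of spawning extra processes.
  ProxyJump gateway1.example.com,gateway2.example.com
```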
- Fix regression issue with `__contains__` operator in `LinkManager`, when using double underscores, e.g. for `'some__nested__namespace' in calc.inputs`. [#5067]
- Stop deprecation warning being shown when tab-completing incoming and outgoing node links. [#5011]
- Stop possible command hints being shown when attempting to tab complete `verdi` commands that do not exist. [#5012]
- Do not use `get_detailed_job_info` when retrieving a calculation job, if no job id is set. [#4967]
- Fix race condition when two processes try to create the same `Folder`/`SandboxFolder`. [#4912]
- Return the whole nested namespace when using `BaseRestartWorkChain.result`. [#4961]
- Use `numpy.nanmin` and `numpy.nanmax` for computing y-limits of `BandsData` matplotlib methods. [#5024]
- Use sanitized job title with `SgeScheduler` scheduler. [#4994]
This is a patch release to pin `psycopg2-binary` to version 2.8.x, to avoid an issue with database creation in version 2.9 (#4989).
full changelog | GitHub contributors page for this release
This is a patch release to fix a regression introduced in `v1.6.2` that caused a number of `verdi` commands to fail, due to a bug in the `with_dbenv` decorator utility.
- Fix `aiida.cmdline.utils.decorators.load_backend_if_not_loaded` [#4878]
full changelog | GitHub contributors page for this release
- CLI: Use the proper proxy command for `verdi calcjob gotocomputer` if configured as such [#4761]
- Respect nested output namespaces in `Process.exposed_outputs` [#4863]
- `NodeLinkManager` now properly regenerates original nested namespaces from the flat link labels stored in the database. This means one can now do `node.outputs.some.nested.output` instead of having to do `node.outputs.some__nested__output`. The same goes for `node.inputs` [#4625]
- Fix `aiida.cmdline.utils.decorators.with_dbenv` always loading the database. Now it will only load the database if not already loaded, as intended [#4865]
- Add the `account` option to the `LsfScheduler` scheduler plugin [#4832]
- Update ssh proxycommand section with instructions on how to handle cases where the SSH key needs to be specified for the proxy server [#4839]
- Add the "How to extend workflows" section, explaining the use of the `expose_inputs` and `expose_outputs` features, as well as nested namespaces [#4562]
- Add help in intro for when quicksetup fails due to problems autodetecting the PostgreSQL settings [#4838]
full changelog | GitHub contributors page for this release
This patch release is primarily intended to fix a regression in the `aiida_profile` test fixture, used by plugin developers, causing config validation errors (#4831).
Other additions:
- ✨ NEW: Added `structure.data.import` entry-point, allowing for plugins to define file-format specific sub-commands of `verdi data structure import` (#4427).
- ✨ NEW: Added `--label` and `--group` options to `verdi data structure import`, which apply a label/group to all structures being imported (#4429).
- ⬆️ UPDATE: `pgsu` dependency increased to `v0.2.x`. This fixes a bug in `verdi quicksetup`, when used on the Windows Subsystem for Linux (WSL) platform (#4834).
- 🐛 FIX: `metadata.options.max_memory_kb` is now ignored when using the direct scheduler (#4825). This was previously imposing a virtual memory limit with `ulimit -v`, which is very different from the physical memory limit that other scheduler plugins impose. No straightforward way exists to directly limit the physical memory usage for this scheduler.
- 🐛 FIX: Added `__str__` method to the `Orbital` class, fixing a recursion error (#4829).
full changelog | GitHub contributors page for this release
As well as introducing a number of improvements and new features listed below, this release marks the "under-the-hood" migration from the `tornado` package to the Python built-in module `asyncio`, for handling asynchronous processing within the AiiDA engine.
This removes a number of blocking dependency version clashes with other tools, in particular with the newest Jupyter shell and notebook environments.
The migration does not present any backward incompatible changes to AiiDA's public API.
A substantial effort has been made to test and debug the new implementation, and ensure it performs at least equivalent to the previous code (or improves it!), but please let us know if you uncover any additional issues.
This release also drops support for Python 3.6 (testing is carried out against `3.7`, `3.8` and `3.9`).
NOTE: `v1.6` is tentatively intended to be the final minor `v1.x` release before `v2.x`, which will include a new file repository implementation and remove all deprecated code.
The `additional_retrieve_list` metadata option has been added to `CalcJob` (#4437).
This new option allows one to specify additional files to be retrieved on a per-instance basis, in addition to the files that are already defined by the plugin to be retrieved.
A new namespace `stash` has been added to the `metadata.options` input namespace of the `CalcJob` process (#4424).
This option namespace allows a user to specify certain files that are created by the calculation job to be stashed somewhere on the remote.
This can be useful if those files need to be stored for a longer time than the scratch space (where the job was run) is available for, but need to be kept on the remote machine and not retrieved.
Examples are files that are necessary to restart a calculation but are too big to be retrieved and stored permanently in the local file repository.
See Stashing files on the remote for more details.
The new `TransferCalcjob` plugin (#4194) allows the user to copy files between a remote machine and the local machine running AiiDA.
More specifically, it can do any of the following:
- Take any number of files from any number of `RemoteData` folders in a remote machine and copy them in the local repository of a single newly created `FolderData` node.
- Take any number of files from any number of `FolderData` nodes in the local machine and copy them in a single newly created `RemoteData` folder in a given remote machine.
See the Transferring data how-to for more details.
The way the global/profile configuration is accessed has undergone a number of distinct changes (#4712):
- When loaded, the `config.json` (found in the `.aiida` folder) is now validated against a JSON Schema that can be found in `aiida/manage/configuration/schema`.
- The schema includes a number of new global/profile options, including: `transport.task_retry_initial_interval`, `transport.task_maximum_attempts`, `rmq.task_timeout` and `logging.aiopika_loglevel` (#4583).
- The `cache_config.yml` has now also been deprecated and merged into the `config.json`, as part of the profile options. This merge will be handled automatically, upon first load of the `config.json` using the new AiiDA version.
In line with these changes, the `verdi config` command has been refactored into separate commands, including `verdi config list`, `verdi config set`, `verdi config unset` and `verdi config caching`.
See the Configuring profile options and Configuring caching how-tos for more details.
In addition to `verdi config`, numerous other new commands and options have been added to `verdi`:
- Deprecated `verdi export` and `verdi import` commands (replaced by new `verdi archive`) (#4710)
- Added `verdi group delete --delete-nodes`, to also delete the nodes in a group during its removal (#4578).
- Improved `verdi group remove-nodes` command to warn when requested nodes are not in the specified group (#4728).
- Added `exception` to the projection mapping of `verdi process list`, for example to use in debugging as: `verdi process list -S excepted -P ctime pk exception` (#4786).
- Added `verdi database summary` (#4737): This prints a summary of the count of each entity and (optionally) the list of unique identifiers for some entities.
- Improved `verdi process play` performance, by only querying for active processes with the `--all` flag (#4671)
- Added the `verdi database version` command (#4613): This shows the schema generation and version of the database of the given profile, useful mostly for developers when debugging.
- Improved `verdi node delete` performance (#4575): The logic has been re-written to greatly reduce the time to delete large amounts of nodes.
- Fixed `verdi quicksetup --non-interactive`, to ensure it does not include any user prompts (#4573)
- Fixed `verdi --version` when used in editable mode (#4576)
The base `Node` class now evaluates equality based on the node's UUID (#4753).
For example, loading the same node twice will always resolve as equivalent: `load_node(1) == load_node(1)`.
Note that existing, class-specific equality relationships will still override the base class behaviour, for example: `Int(99) == Int(99)`, even if the nodes have different UUIDs.
This behaviour for subclasses is still under discussion at aiidateam#1917.
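The UUID-based equality described above can be sketched in a few lines of plain Python (this is an illustrative reimplementation, not AiiDA's actual `Node` class):

```python
import uuid

# Equality based on a persistent UUID rather than Python object identity.
class Node:
    def __init__(self, uuid_str=None):
        self.uuid = uuid_str or str(uuid.uuid4())

    def __eq__(self, other):
        return isinstance(other, Node) and self.uuid == other.uuid

    def __hash__(self):
        return hash(self.uuid)

# Two objects representing the same stored node compare equal,
# mirroring load_node(1) == load_node(1).
assert Node("aaaa") == Node("aaaa")
assert Node("aaaa") != Node("bbbb")
```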
When hashing nodes for use with the caching features, `-0.` is now converted to `0.`, to reduce issues with differing hashes before/after node storage (#4648).
Known failure modes for hashing are now also raised with the `HashingError` exception (#4778).
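A minimal sketch (not AiiDA's actual hashing code) of why the `-0.` normalization matters: `-0.0` and `0.0` compare equal as numbers, but their serializations differ, which could yield different hashes for the same value before and after storage.

```python
# Map -0.0 to 0.0 so equal values serialize (and thus hash) identically.
def normalize_zero(value: float) -> float:
    return 0.0 if value == 0.0 else value

assert -0.0 == 0.0                     # equal as numbers
assert str(-0.0) != str(0.0)           # but serialized differently
assert str(normalize_zero(-0.0)) == str(normalize_zero(0.0))
```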
Both `aiida.tools.delete_nodes` (#4578) and `aiida.orm.to_aiida_type` (#4672) have been exposed for use in the public API.
A `pathlib.Path` instance can now be used for the `file` argument of `SinglefileData` (#3614).
Type annotations have been added to all inputs/outputs of functions and methods in `aiida.engine` (#4669) and `aiida/orm/nodes/processes` (#4772).
As outlined in PEP 484, this improves static code analysis and, for example, allows for better auto-completion and type checking in many code editors.
The `/querybuilder` endpoint is the first POST method available for AiiDA's RESTful API (#4337).
The POST endpoint returns what the QueryBuilder would return, when providing it with a proper `queryhelp` dictionary (see the documentation here).
Furthermore, it returns the entities/results in the "standard" REST API format, with the exception of the `link_type` and `link_label` keys for links (these particular keys are still present as `type` and `label`, respectively).
For security, POST methods can be toggled on/off with the `verdi restapi --posting/--no-posting` options (on by default).
Note, however, that this option is not yet strictly public, since its naming may be changed in the future.
See AiiDA REST API documentation for more details.
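As an illustration, a minimal `queryhelp`-style payload for such a POST request could look as follows; the entity type string and projections here are hypothetical placeholders, not taken from the release notes:

```python
import json

# Hypothetical queryhelp payload: a query path plus projections.
queryhelp = {
    "path": [{"entity_type": "node", "tag": "node"}],
    "project": {"node": ["id", "uuid"]},
}

# The payload is plain JSON, so it round-trips cleanly as a POST body.
assert json.loads(json.dumps(queryhelp)) == queryhelp
```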
- Fixed the direct scheduler which, in combination with `SshTransport`, was hanging on the submit command (#4735). In the SSH transport, to emulate `chdir`, the current directory is now kept in memory, and every command is prepended with `cd FOLDER_NAME && ACTUALCOMMAND`.
- In `aiida.tools.ipython.ipython_magics`, `load_ipython_extension` has been deprecated in favour of `register_ipython_extension` (#4548).
- Refactored the `.ci/` folder to make tests more portable and easier to understand (#4565). The `.ci/` folder had become cluttered, containing configuration and scripts for both the GitHub Actions and Jenkins CI. This change moved the GH actions specific scripts to `.github/system_tests`, and refactored the Jenkins setup/tests to use molecule in the `.molecule/` folder.
- For aiida-core development, the pytest `requires_rmq` marker and `config_with_profile` fixture have been added (#4739 and #4764)
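The `chdir` emulation described in the first fix above can be sketched in a few lines (an illustrative reimplementation, not AiiDA's actual transport code):

```python
import shlex

class ShellSession:
    """Track the working directory client-side for a stateless shell."""

    def __init__(self):
        self._cwd = "."

    def chdir(self, path: str):
        # No remote state is changed; we only remember the directory.
        self._cwd = path

    def build_command(self, command: str) -> str:
        # Prepend `cd FOLDER_NAME &&` so each command runs in the
        # remembered directory.
        return f"cd {shlex.quote(self._cwd)} && {command}"

session = ShellSession()
session.chdir("/scratch/job_1")
assert session.build_command("ls -la") == "cd /scratch/job_1 && ls -la"
```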
Note: release `v1.5.1` was skipped due to a problem with the uploaded files to PyPI.
- `Dict`: accessing an inexistent key now raises a `KeyError` (instead of `AttributeError`) [#4577]
- Config: make writing to disk as atomic as possible [#4607]
- Config: do not overwrite when loaded and not migrated [#4605]
- SqlAlchemy: fix bug in `Group` extras migration with revision `0edcdd5a30f0` [#4602]
- SqlAlchemy: improve the alembic migration code [#4602] [#4607]
- CI: manually install `numpy` to prevent incompatible releases [#4615]
In this minor version release, support for Python 3.9 is added [#4301], while support for Python 3.5 is dropped [#4386]. This version is compatible with all current Python versions that are not end-of-life:
- 3.6
- 3.7
- 3.8
- 3.9
- Process functions (`calcfunction` and `workfunction`) can now be submitted to the daemon just like `CalcJob`s and `WorkChain`s [#4539]
- REST API: list endpoints at base URL [#4412]
- REST API: new `full_types_count` endpoint that counts the number of nodes for each type of node [#4277]
- `ProcessBuilder`: allow unsetting of inputs through attribute deletion [#4419]
- `verdi migrate`: make `--in-place` work across different file systems [#4393]
- Added remaining original documentation that didn't make it into the first step of the recent major overhaul of v1.3.0
- `verdi process show`: order by ctime and print process label [#4407]
- `LinkManager`: fix inaccuracy in exception message for non-existent link [#4388]
- Add `reset` method to `ProgressReporterAbstract` [#4522]
- Improve the deprecation warning for `Node.open` outside context manager [#4434]
- `SlurmScheduler`: fix bug in validation of job resources [#4555]
- Fix `ZeroDivisionError` in worker slots check [#4513]
- `CalcJob`: only attempt to clean up the retrieve temporary folder after parsing if it is present [#4379]
- Add missing entry point groups to the mapping [#4395]
- REST API: the `process_type` can now identify pathological empty-stringed or null entries in the database [#4277]
- `verdi group delete`: deprecate and ignore the `--clear` option [#4357]
- Replace old format string interpolation with f-strings [#4400]
- CI: move `pylint` configuration to `pyproject.toml` [#4411]
- CI: use `-e` install for tox + add docker-compose for isolated RabbitMQ [#4375]
- CI: add coverage patch threshold to prevent false positives [#4413]
- CI: Allow for mypy type checking of third-party imports [#4553]
- Update requirement `pytest~=6.0` and use `pyproject.toml` [#4410]
- The refactoring goal was to pave the way for the implementation of a new archive format in v2.0.0 (aiidateam AEP005)
- Three abstract + concrete interface classes are defined: writer, reader, migrator. These are independent of the internal structure of the archive and are used within the export/import code.
- The code in `aiida/tools/importexport` has been largely re-written, in particular adding `aiida/tools/importexport/archive`, which contains the code for interfacing with an archive and does not require connection to an AiiDA profile.
- The export logic has been re-written: to minimise required queries (faster), and to allow for "streaming" data into the writer (minimising the RAM requirement with the new format). It is intended that a similar PR will be made for the import code.
- A general progress bar implementation is now available in `aiida/common/progress_reporter.py`. All corresponding CLI commands now also have a `--verbosity` option.
- Merged PRs:
  - Updated archive version from `0.9` -> `0.10` (#4561)
  - Deprecations: `export_zip`, `export_tar`, `export_tree`, `extract_zip`, `extract_tar` and `extract_tree` functions; the `silent` keyword in the `export` function
  - Removed: `ZipFolder` class
This patch is a backport for 2 of the fixes in `v1.5.2`.
- `Dict`: accessing an inexistent key now raises a `KeyError` (instead of an `AttributeError`) [#4616]
- CI: manually install `numpy` to prevent incompatible releases [#4615]
- RabbitMQ: update the `topika` requirement to fix SSL connections and remove validation of `broker_parameters` from profile [#4542]
- Fix `UnboundLocalError` in `aiida.cmdline.utils.edit_multiline_template`, which affected `verdi code/computer setup` [#4436]
- `CalcJob`: make sure `local_copy_list` files do not end up in the node's repository folder [#4415]
- `verdi setup`: forward broker defaults to interactive mode [#4405]
- `verdi setup`: improve validation and help string of broker virtual host [#4408]
- Implement `next` and `iter` for the `Node.open` deprecation wrapper [#4399]
- Dependencies: increase minimum version requirement `plumpy~=0.15.1` to suppress noisy warning at end of interpreter that ran processes [#4398]
- Add defaults for configure options of the `SshTransport` plugin [#4223]
- `verdi status`: distinguish database schema version incompatible [#4319]
- `SlurmScheduler`: implement `parse_output` to detect OOM and OOW [#3931]
- Make the RabbitMQ connection parameters configurable [#4341]
- Add infrastructure to parse scheduler output for `CalcJobs` [#3906]
- Add support for "peer" authentication with PostgreSQL [#4255]
- Add the `--paused` flag to `verdi process list` [#4213]
- Make the loglevel of the daemonizer configurable [#4276]
- `Transport`: add option to not use a login shell for all commands [#4271]
- Implement `skip_orm` option for SqlAlchemy `Group.remove_nodes` [#4214]
- `Dict`: allow setting attributes through setitem and `AttributeManager` [#4351]
- `CalcJob`: allow nested target paths for `local_copy_list` [#4373]
- `verdi export migrate`: add `--in-place` flag to migrate archive in place [#4220]
- `verdi`: make `--prepend-text` and `--append-text` options properly interactive [#4318]
- `verdi computer test`: fix failing result in harmless `stderr` responses [#4316]
- `QueryBuilder`: accept empty string for `entity_type` in `append` method [#4299]
- `verdi status`: do not except when no profile is configured [#4253]
- `ArithmeticAddParser`: attach output before checking for negative value [#4267]
- `CalcJob`: fix bug in `retrieve_list` affecting entries without wildcards [#4275]
- `TemplateReplacerCalculation`: make `files` namespace dynamic [#4348]
- Rename folder `test.fixtures` to `test.static` [#4219]
- Remove all files from the pre-commit exclude list [#4196]
- ORM: move attributes/extras methods of frontend and backend nodes to mixins [#4376]
- Dependencies: update minimum requirement `paramiko~=2.7` [#4222]
- Dependencies: remove upper limit and allow `numpy~=1.17` [#4378]
- Deprecate getter and setter methods of `Computer` properties [#4252]
- Deprecate methods that refer to a computer's label as name [#4309]
- `BaseRestartWorkChain`: do not run `process_handler` when `exit_codes=[]` [#4380]
- `SlurmScheduler`: always raise for non-zero exit code [#4332]
- Remove superfluous `ERROR_NO_RETRIEVED_FOLDER` from `CalcJob` subclasses [#3906]
- Fix a file handle leak due to the `Runner` not closing the event loop if it created it itself [#4307]
- `ArithmeticAddParser`: attach output before checking for negative value [#4267]
- Comprehensive restructuring and revamp of the online documentation [#4141]
- Improve defaults for `verdi computer configure ssh` [#4055]
- Provenance graphs: enable highlighting specific node classes (and highlight root node by default) [#4081]
- Enable event-based monitoring of work chain child processes (they were being polled every second) [#4154]
- Increase the default for `runner.poll.interval` config option from 1 to 60 seconds [#4150]
- Increase the efficiency of the `SqlaGroup.nodes` iterator [#4094]
- Add a progress bar for export and import related functionality [#3599]
- Enable loading config.yml files from URL in `verdi` commands with the `--config` option [#3977]
- `QueryBuilder`: add the `flat` argument to the `.all()` method [#3945]
- `verdi status`: add `--no-rmq` flag to skip the RabbitMQ check [#4181]
- Add support for process functions in `verdi plugin list` [#4117]
- Allow profile selection in ipython magic `%aiida` [#4071]
- Support more complex formula formats in `aiida.orm.data.cif.parse_formula` [#3954]
- `BaseRestartWorkChain`: do not assume `metadata` exists in inputs in `run_process` [#4210]
- `BaseRestartWorkChain`: fix bug in `inspect_process` [#4166]
- `BaseRestartWorkChain`: fix the "unhandled failure mechanism" for dealing with failures of subprocesses [#4155]
- Fix exception handling in commands calling `list_repository_contents` [#3968]
- Fix bug in `Code.get_full_text_info` [#4083]
- Fix bug in `verdi daemon restart --reset` [#3969]
- Fix tab-completion for `LinkManager` and `AttributeManager` [#3985]
- `CalcJobResultManager`: fix bug that broke tab completion [#4187]
- `SshTransport.gettree`: allow non-existing nested target directories [#4175]
- `CalcJob`: move job resource validation to the `Scheduler` class, fixing a problem for the SGE and LSF scheduler plugins [#4192]
- `WorkChain`: guarantee to maintain order of appended awaitables [#4156]
- Add support for binary files to the various `verdi` cat commands [#4077]
- Ensure `verdi group show --limit` respects limit even in raw mode [#4092]
- `QueryBuilder`: fix type string filter generation for `Group` subclasses [#4144]
- Raise when calling `Node.objects.delete` for node with incoming links [#4168]
- Properly handle multiple requests to threaded REST API [#3974]
- `NodeTranslator`: do not assume `get_export_formats` exists [#4188]
- Only color directories in `verdi node repo ls --color` [#4195]
- Add arithmetic workflows and restructure calculation plugins [#4124]
- Add minimal `mypy` run to the pre-commit hooks [#4176]
- Fix timeout in `tests.cmdline.commands.test_process:test_pause_play_kill` [#4052]
- Revise update-dependency flow to resolve issue #3930 [#3957]
- Add GitHub action for transifex upload [#3958]
- The `get_valid_schedulers` class method of the `Scheduler` class has been deprecated in favor of `aiida.plugins.entry_point.get_entry_point_names` [#4192]
In the fixing of three bugs, three minor features have been added along the way.
- Add config option `daemon.worker_process_slots` to configure the maximum number of concurrent tasks each daemon worker can handle [#3949]
- Add config option `daemon.default_workers` to set the default number of workers to be started by `verdi daemon start` [#3949]
- `CalcJob`: make submit script filename configurable through the `metadata.options` [#3948]
- `CalcJob`: fix bug in idempotency check of upload transport task [#3948]
- REST API: reintroduce CORS headers, the lack of which was breaking the Materials Cloud provenance explorer [#3951]
- Remove the equality operator of `ExitCode` which caused the serialization of workchains to fail if put in the workchain context [#3940]
- The `hookup` argument of `aiida.restapi.run_api` and the `--hookup` option of `verdi restapi` are deprecated [#3951]
- `ExitCode`: make the exit message parameterizable through templates [#3824]
- `GroupPath`: a utility to work with virtual `Group` hierarchies [#3613]
- Make `Group` sub-classable through entry points [#3882][#3903][#3926]
- Add auto-complete support for `CodeParamType` and `GroupParamType` [#3926]
- Add export archive migration for `Group` type strings [#3912]
- Add the `-v/--version` option to `verdi export migrate` [#3910]
- Add the `-l/--limit` option to `verdi group show` [#3857]
- Add the `--order-by/--order-direction` options to `verdi group list` [#3858]
- Add `prepend_text` and `append_text` to the `aiida_local_code_factory` pytest fixture [#3831]
- REST API: make it easier to call `run_api` in wsgi scripts [#3875]
- Plot bands with only one kpoint [#3798]
- Improved validation for CLI parameters [#3894]
- Ensure unicity when creating instances of `Autogroup` [#3650]
- Prevent nodes without registered entry points from being stored [#3886]
- Fix the `RotatingFileHandler` configuration of the daemon logger [#3891]
- Ensure log messages are not duplicated in daemon log file [#3890]
- Convert argument to `str` in `aiida.common.escaping.escape_for_bash` [#3873]
- Remove the return statement of `RemoteData.getfile()` [#3742]
- Support for `BandsData` nodes without `StructureData` ancestors [#3817]
- Deprecate `--group-type` option in favor of `--type-string` for `verdi group list` [#3926]
- Docs: link to documentation of other libraries via `intersphinx` mapping [#3876]
- Docs: remove extra `advanced_plotting` from install instructions [#3860]
- Docs: consistent use of "plugin" vs "plugin package" terminology [#3799]
- Deduplicate code for tests of archive migration code [#3924]
- CI: use GitHub Actions services for PostgreSQL and RabbitMQ [#3901]
- Move `aiida.manage.external.pgsu` to external package `pgsu` [#3892]
- Cleanup the top-level directory of the repository [#3738]
- Remove unused `orm.implementation.utils` module [#3877]
module [#3877] - Revise dependency management workflow [#3771]
- Re-add support for Coverage reports through codecov.io [#3618]
- Emit a warning when input port specifies a node instance as default [#3466]
- `BaseRestartWorkChain`: require process handlers to be instance methods [#3782]
- `BaseRestartWorkChain`: add method to enable/disable process handlers [#3786]
: add method to enable/disable process handlers [#3786]- Docker container: remove conda activation from configure-aiida.sh script [#3791]
- Add fixtures to clear the database before or after tests [#3783]
- `verdi status`: add the configuration directory path to the output [#3587]
- `QueryBuilder`: add support for `datetime.date` objects in filters [#3796]
- Fix bugs in `Node._store_from_cache` and `Node.repository.erase` that could result in calculations not being reused [#3777]
- Caching: fix configuration spec and validation [#3785]
- Write migrated config to disk in `Config.from_file` [#3797]
- Validate label string at code setup stage [#3793]
- Reuse `prepend_text` and `append_text` in `verdi computer/code duplicate` [#3788]
- Fix broken imports of `urllib` in various locations including `verdi import` [#3767]
- Match headers with actual output for `verdi data structure list` [#3756]
- Disable caching for the `Data` node subclass (this should not affect usual caching behavior) [#3807]
Nota Bene: although this is a minor version release, the support for python 2 is dropped (#3566) following the reasoning outlined in the corresponding AEP001.
Critical bug fixes for python 2 will be supported until July 1 2020 on the `v1.0.*` release series.
With the addition of python 3.8 (#3719), this version is now compatible with all current python versions that are not end-of-life:
- 3.5
- 3.6
- 3.7
- 3.8
- Add the AiiDA Graph Explorer (AGE), a generic tool for traversing the provenance graph [#3686]
- Add the `BaseRestartWorkChain`, which makes it easier to write a simple work chain wrapper around another process with automated error handling [#3748]
- Add `provenance_exclude_list` attribute to `CalcInfo` data structure, allowing to prevent calculation input files from being permanently stored in the repository [#3720]
- Add the `verdi node repo dump` command [#3623]
command [#3623] - Add more methods to control cache invalidation of completed process node [#3637]
- Allow documentation to be build without installing and configuring AiiDA [#3669]
- Add option to expand namespaces in sphinx directive [#3631]
- Add `node_type` to list of immutable model fields, preventing repeated database hits [#3619]
to list of immutable model fields, preventing repeated database hits [#3619] - Add cache for entry points in an entry point group [#3622]
- Improve the performance when exporting many groups [#3681]
- `CalcJob`: move `presubmit` call from `CalcJob.run` to `Waiting.execute` [#3666]
- `CalcJob`: do not pause when an exception is thrown in the `presubmit` [#3699]
- Move `CalcJob` spec validator to corresponding namespaces [#3702]
- Move getting completed job accounting to the `retrieve` transport task [#3639]
- Move `last_job_info` from JSON-serialized string to dictionary [#3651]
- Improve SqlAlchemy session handling for `QueryBuilder` [#3708]
- Use built-in `open` instead of `io.open`, which is possible now that python 2 is no longer supported [#3615]
- Add non-zero exit code for `verdi daemon status` [#3729]
- Deal with unreachable daemon worker in `get_daemon_status` [#3683]
- Django backend: limit batch size for `bulk_create` operations [#3713]
- Make sure that datetime conversions ignore `None` [#3628]
- Allow empty `key_filename` in `verdi computer configure ssh` and reuse cooldown time when reconfiguring [#3636]
- Update `pyyaml` to v5.1.2 to prevent arbitrary code execution [#3675]
- `QueryBuilder`: fix validation bug and improve message for `in` operator [#3682]
- Consider `AIIDA_TEST_PROFILE` in `get_test_backend_name` [#3685]
- Ensure correct types for `QueryBuilder().dict()` with multiple projections [#3695]
- Make local modules importable when running `verdi run` [#3700]
- Fix bug in `upload_calculation` for `CalcJobs` with local codes [#3707]
- Add imports from `urllib` to dbimporters [#3704]
- Moved continuous integration from Travis to Github actions [#3571]
- Replace custom unit test framework with `pytest` and move all tests to the `tests` top-level directory [#3653][#3674][#3715]
top level directory [#3653][#3674][#3715] - Cleaned up direct dependencies and relaxed requirements where possible [#3597]
- Set job poll interval to zero in localhost pytest fixture [#3605]
- Make command line deprecation warnings visible with test profile [#3665]
- Add docker image with minimal running AiiDA instance [#3722]
- Improve the backup mechanism of the configuration file: unique backup written at each update [#3581]
- Forward `verdi code delete` to `verdi node delete` [#3546]
- Homogenize and improve output of `verdi computer test` [#3544]
- Scheduler SLURM: support `UNLIMITED` and `NOT_SET` as values for requested walltimes [#3586]
- Set default for the `safe_interval` option of `verdi computer configure` [#3590]
- Create backup of configuration file before migrating [#3568]
- Add `python_requires` to `setup.json`, necessary for future dropping of python 2 [#3574]
- Remove unused QB methods/functions [#3526]
- Move `pgtest` argument of `TemporaryProfileManager` to constructor [#3486]
- Add `filename` argument to `SinglefileData` constructor [#3517]
- Mention machine in SSH connection exception message [#3536]
- Docs: Expand on QB `order_by` information [#3548]
- Replace deprecated pymatgen `site.species_and_occu` with `site.species` [#3480]
- `QueryBuilder`: add deepcopy implementation and `queryhelp` property [#3524]
- Fix `verdi calcjob gotocomputer` when `key_filename` is missing [#3593]
- Fix bug in database migrations where schema generation determination excepts for old databases [#3582]
- Fix false positive for `verdi database integrity detect-invalid-links` [#3591]
- Config migration: handle edge case where `daemon` key is missing from `daemon_profiles` [#3585]
- Raise when unable to detect name of local timezone [#3576]
- Fix bug for `CalcJob` dry runs with `store_provenance=False` [#3513]
- Migrations for legacy and now illegal default link label `_return`, export version upped to `0.8` [#3561]
- Fix REST API `attributes_filter` and `extras_filter` [#3556]
- Fix bug in plugin `Factory` classes for python 3.7 [#3552]
- Make `PolishWorkChains` checkpointable [#3532]
- REST API: fix generator of full node namespace [#3516]
The following is a summary of the major changes and improvements from `v0.12.*` to `v1.0.0`.
- Faster workflow engine: the new message-based engine powered by RabbitMQ supports tens of thousands of processes per hour and greatly speeds up workflow testing. You can now run one daemon per AiiDA profile.
- Faster database queries: the switch to JSONB for node attributes and extras greatly improves query speed and reduces storage size by orders of magnitude.
- Robust calculations: AiiDA now deals with network connection issues (automatic retries with backoff mechanism, connection pooling, ...) out of the box. Workflows and calculations are all Processes and can be "paused" and "played" anytime.
- Better verdi commands: the move to the `click` framework brings homogeneous command line options across all commands (loading nodes, ...). You can easily add new commands through plugins.
- Easier workflow development: Input and output namespaces, reusing specs of sub-processes and less boilerplate code simplify writing WorkChains and CalcJobs, while also enabling powerful auto-documentation features.
- Mature provenance model: Clear separation between data provenance (Calculations, Data) and logical provenance (Workflows). Old databases can be migrated to the new model automatically.
- python3 compatible: AiiDA 1.0 is compatible with both python 2.7 and python 3.6 (and later). Python 2 support will be dropped in the coming months.
Below is a (non-exhaustive) list of changes by category. Changes between the 1.0 alpha/beta releases are not included; for those, see the changelogs of the corresponding releases.
- Implement the concept of an "exit status" for all calculations, allowing a programmatic definition of success or failure for all processes [#1189]
- All calculations now go through the `Process` layer, homogenizing the state of work and job calculations [#1125]
- Allow `None` as default for arguments of process functions [#2582]
- Implement the new `calcfunction` decorator [#2203]
- Each profile now has its own daemon that can be run completely independently in parallel [#1217]
- Polling based daemon has been replaced with a much faster event-based daemon [#1067]
- Replaced `Celery` with `Circus` as the daemonizer of the daemon [#1213]
- The daemon can now be stopped without loading the database, making it possible to stop it even if the database version does not match the code [#1231]
- Implement exponential backoff retry mechanism for transport tasks [#1837]
- Pause `CalcJob` when transport task falls through exponential backoff [#1903]
- Separate `CalcJob` submit task in folder upload and scheduler submit [#1946]
- Each daemon worker now respects an optional minimum scheduler polling interval [#1929]
- Make the `execmanager.retrieve_calculation` idempotent'ish [#3142]
- Make the `execmanager.upload_calculation` idempotent'ish [#3146]
- Make the `execmanager.submit_calculation` idempotent'ish [#3188]
- Implement a `PluginVersionProvider` for processes to automatically add versions of `aiida-core` and the plugin to process nodes [#3131]
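The exponential backoff mentioned above can be sketched in plain Python. This is a hypothetical illustration of the retry pattern, not AiiDA's actual implementation; the parameter names `max_attempts` and `initial_interval` are assumptions:

```python
import time


def exponential_backoff_retry(task, max_attempts=5, initial_interval=1.0):
    """Run ``task``; on failure wait 1x, 2x, 4x, ... the interval and retry."""
    interval = initial_interval
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # retries exhausted: the caller would pause the process
            time.sleep(interval)
            interval *= 2  # double the wait after every failed attempt
```

A transient network failure is then absorbed transparently: a task that fails twice and succeeds on the third attempt still returns its result to the caller.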
- Implement the `ProcessBuilder` which simplifies the definition of `Process` inputs and the launching of a `Process` [#1116]
- Namespaces added to the port containers of the `ProcessSpec` class [#1099]
- Convention of leading underscores for non-storable inputs is replaced with a proper `non_db` attribute of the `Port` class [#1105]
- Implement a Sphinx extension for the `WorkChain` class to automatically generate documentation from the workchain definition [#1155]
- `WorkChain`s can now expose the inputs and outputs of another `WorkChain`, which is great for writing modular workflows [#1170]
- Add built-in support and API for exit codes in `WorkChain`s [#1640], [#1704], [#1681]
- Implement method for `CalcJobNode` to create a restart builder [#1962]
- Add `CalculationTools` base and entry point `aiida.tools.calculations` [#2331]
- Generalize Sphinx workchain extension to processes [#3314]
- Collapsible namespace in sphinxext [#3441]
- The `retrieve_singlefile_list` has been deprecated and is replaced by `retrieve_temporary_list` [#3041]
- Automatically set `CalcInfo.uuid` in `CalcJob.run` [#2874]
- Allow the usage of lambda functions for `InputPort` default values [#3465]
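The exit-code idea that runs through the engine items above — a process reports success or failure programmatically instead of raising — can be illustrated with a generic sketch. The names `ExitCode`, `run_process` and the status number are hypothetical; AiiDA's real API attaches exit codes to the process spec:

```python
from collections import namedtuple

# An exit code pairs an integer status with a human-readable message.
ExitCode = namedtuple('ExitCode', ['status', 'message'])

# By convention a zero status means success; non-zero encodes a failure mode.
SUCCESS = ExitCode(0, '')
ERROR_MISSING_OUTPUT = ExitCode(301, 'the expected output was not produced')


def run_process(outputs):
    """Return an exit code instead of raising, so failure stays queryable."""
    if 'result' not in outputs:
        return ERROR_MISSING_OUTPUT
    return SUCCESS
```

Because the status is stored rather than thrown away in a traceback, failed processes can later be filtered and inspected by their exit status.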
- Implement `AuthInfo` class which allows custom configuration per configured computer [#1184]
- Add efficient `count` method for `aiida.orm.groups.Group` [#2567]
- Speed up creation of Nodes in the AiiDA ORM [#2214]
- Enable use of tuple in `QueryBuilder.append` for all ORM classes [#1608], [#1607]
- Refactor the ORM to have explicit front-end and back-end layers [#2190][#2210][#2225][#2227][#2481]
- Add support for indexing and slicing in `orm.Group.nodes` iterator [#2371]
- Add support for process classes to `QueryBuilder.append` [#2421]
- Change type of uuids returned by the QueryBuilder to unicode [#2259]
- The `AttributeDict` is now constructed recursively for nested dictionaries [#3005]
- Ensure immutability of `CalcJobNode` hash before and after storing [#3130]
- Fix bug in the `RemoteData._clean` method [#1847]
- Fix bug in `QueryBuilder.first()` for multiple projections [#2824]
- Fix bug in `delete_nodes` when passing pks of non-existing nodes [#2440]
- Remove unserializable data from metadata in `Log` records [#2469]
- Fix bug in `parse_formula` for formulas with leading or trailing whitespace [#2186]
- Refactor `Orbital` code and fix some bugs [#2737]
- Fix bug in the `store` method of `CifData` which would raise an exception when called more than once [#1136]
- Allow passing directory path in `FolderData` constructor [#3359]
- Add element `X` to the elements list in order to support unknown species [#1613]
- Various bug and consistency fixes for `CifData` and `StructureData` [#2374]
- Changes to `Data` class attributes and `TrajectoryData` data storage [#2310][#2422]
- Rename `ParameterData` to `Dict` [#2530]
- Remove the `FrozenDict` data sub class [#2532]
- Remove the `Error` data sub class [#2529]
- Make `Code` a real sub class of `Data` [#2193]
- Implement the `has_atomic_sites` and `has_unknown_species` properties for the `CifData` class [#1257]
- Change default library used in `_get_aiida_structure` (converting `CifData` to `StructureData`) from `ase` to `pymatgen` [#1257]
- Add converter for `UpfData` from UPF to JSON format [#3308]
- Fix potential inefficiency in `aiida.tools.data.cif` converters [#3098]
- Fix bug in `KpointsData.reciprocal_cell()` [#2779]
- Improve robustness of parsing versions and element names from UPF files [#2296]
- Migrate `verdi` to the click infrastructure [#1795]
- Add a default user to AiiDA configuration, eliminating the need to retype user information for every new profile [#2734]
- Implement tab-completion for profile in the `-p` option of `verdi` [#2345]
- Homogenize the interface of `verdi quicksetup` and `verdi setup` [#1797]
- Add the option `--version` to `verdi` to display current version [#1811]
- `verdi computer configure` can now read inputs from a yaml file through the `--config` option [#2951]
- Add importer class for the Materials Platform of Data Science API, which hosts the Pauling file data [#1238]
- Add an importer class for the Materials Project API [#2097]
- Add an index to columns of `DbLink` for SqlAlchemy [#2561]
- Creating unique constraint and indexes at the `db_dbgroup_dbnodes` table for SqlAlchemy [#1680]
- Performance improvement for adding nodes to group [#1677]
- Make UUID columns unique in SqlAlchemy [#2323]
- Allow PostgreSQL connections via unix sockets [#1721]
- Drop the unused `nodeversion` and `public` columns from the node table [#2937]
- Drop various unused columns from the user table [#2944]
- Drop the unused `transport_params` column from the computer table [#2946]
- Drop the `DbCalcState` table [#2198]
- [Django]: migrate the node attribute and extra schema to use JSONB, greatly improving storage and querying efficiency [#3090]
- [SqlAlchemy]: Improve speed of node attribute and extra deserialization [#3090]
- Implement the exporting and importing of node extras [#2416]
- Implement the exporting and importing of comments [#2413]
- Implement the exporting and importing of logs [#2393]
- Add `export_parameters` to the `metadata.json` in archive files [#3386]
- Simplify the data format of export archives, greatly reducing file size [#3090]
- `verdi import` automatically migrates archive files of old formats [#2820]
- Refactor unit test managers and add basic fixtures for `pytest` [#3319]
- REST API v4: updates to conform with `aiida-core==1.0.0` [#3429]
- Improve decorators using the `wrapt` library such that function signatures are properly maintained [#2991]
- Allow empty `enabled` and `disabled` keys in caching configuration [#3330]
- AiiDA now enforces UTF-8 encoding for text output in its files and databases [#2107]
- Remove `aiida.tests` and obsolete `aiida.storage.tests.test_parsers` entry point group [#2778]
entry point group [#2778] - Implement new link types [#2220]
- Rename the type strings of `Groups` and change the attributes `name` and `type` to `label` and `type_string` [#2329]
- Make various protected `Node` methods public [#2544]
- Rename `DbNode.type` to `DbNode.node_type` [#2552]
- Rename the ORM classes for `Node` sub classes `JobCalculation`, `WorkCalculation`, `InlineCalculation` and `FunctionCalculation` [#2184][#2189][#2192][#2195][#2201]
- Do not allow the `copy` or `deepcopy` of `Node`, except for `Data` nodes [#1705]
- Remove `aiida.control` and `aiida.utils` top-level modules; reorganize `aiida.common`, `aiida.manage` and `aiida.tools` [#2357]
- Make the node repository API backend agnostic [#2506]
- Redesign the Parser class [#2397]
- [Django]: Remove support for datetime objects from node attributes and extras [#3090]
- Enforce specific precision in `clean_value` for floats when computing a node's hash [#3108]
- Move physical constants from `aiida.common.constants` to external `qe-tools` package [#3278]
- Add type checks to all plugin factories [#3456]
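The motivation for fixing float precision before hashing can be seen in a small generic sketch (not the actual `clean_value` code; the helper names and the 12-digit default are assumptions): two floats that differ only beyond the chosen precision should hash identically, so floating-point noise does not defeat caching.

```python
import hashlib


def clean_float(value, precision=12):
    # Round-trip through a fixed-precision string so that noise beyond the
    # chosen number of significant digits does not change the result.
    return float(f'{value:.{precision}e}')


def hash_value(value, precision=12):
    """Hash a float after clamping it to a fixed precision."""
    cleaned = clean_float(value, precision)
    return hashlib.sha256(repr(cleaned).encode('utf-8')).hexdigest()
```

With this scheme `0.1 + 0.2` (which is `0.30000000000000004` in binary floating point) and the literal `0.3` produce the same hash, while genuinely different values still differ.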
- Disallow pickle when storing numpy array in `ArrayData` [#3434]
- Remove implementation of legacy workflows [#2379]
- Implement `CalcJob` process class that replaces the deprecated `JobCalculation` [#2389]
- Change the structure of the `CalcInfo.local_copy_list` [#2581]
- QueryBuilder: Change 'ancestor_of'/'descendant_of' to 'with_descendants'/'with_ancestors' [#2278]
- Added new endpoint in rest api to get list of distinct node types [#2745]
- Travis: port the deploy stage from the development branch [#2816]
- Corrected the graph export set expansion rules [#2632]
- Backport streamlined quick install instructions from `provenance_redesign` [#2555]
- Remove useless chainmap dependency [#2799]
- Add aiida-core version to docs home page [#3058]
- Docs: add note on increasing work_mem [#2952]
- Fast addition of nodes to groups with `skip_orm=True` [#2471]
- Add `environment.yml` for installing dependencies using conda; release of `aiida-core` on conda-forge channel [#2081]
- REST API: io tree response now includes link type and node label [#2033] [#2511]
- Backport postgres improvements for quicksetup [#2433]
- Backport `aiida.get_strict_version` (for plugin development) [#2099]
- Fix security vulnerability by upgrading `paramiko` to `2.4.2` [#2043]
- Disable caching for inline calculations (broken since move to `workfunction`-based implementation) [#1872]
- Let `verdi help` return exit status 0 [#2434]
- Decode dict keys only if strings (backport) [#2436]
- Remove broken verdi-plug entry point [#2356]
- `verdi node delete` (without arguments) no longer tries to delete all nodes [#2545]
- Fix plotting of `BandsData` objects [#2492]
- REST API: add tests for random sorting list entries of same type [#2106]
- Add various badges to README [#1969]
- Minor documentation improvements [#1955]
- Add license file to MANIFEST [#2339]
- Add instructions when `verdi import` fails [#2420]
- Support the hashing of `uuid.UUID` types by registering a hashing function [#1861]
- Add documentation on plugin cutter [#1904]
- Make exported graphs consistent with the current node and link hierarchy definition [#1764]
- Fix link import problem under SQLA [#1769]
- Fix cache folder copying [#1746] [1752]
- Fix bug in mixins.py when copying node [#1743]
- Fix pgtest failures (release-branch) on travis [#1736]
- Fix plugin: return testrunner result to fail on travis, when tests don't pass [#1676]
- Remove pycrypto dependency, as it was found to have security flaws [#1754]
- Set xsf as default format for structures visualization [#1756]
- Delete unused `utils/create_requirements.py` file [#1702]
- Always use a bash login shell to execute all remote SSH commands, overriding any system default shell [#1502]
- Reduced the size of the distributed package by almost half by removing test fixtures and generating the data on the fly [#1645]
- Removed the explicit dependency upper limit for `scipy` [#1492]
- Resolved various dependency requirement conflicts [#1488]
- Fixed a bug in `verdi node delete` that would throw an exception for certain cases [#1564]
- Fixed a bug in the `cif` endpoint of the REST API [#1490]
- Hashing, caching and fast-forwarding [#652]
- Calculation no longer stores full source file [#1082]
- Delete nodes via `verdi node delete` [#1083]
- Import structures using ASE [#1085]
- `StructureData` - `pymatgen` - `StructureData` roundtrip works for arbitrary kind names [#1285] [#1306] [#1357]
- Output format of archive file can now be defined for `verdi export migrate` [#1383]
- Automatic reporting of code coverage by unit tests has been added [#1422]
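The "hashing, caching and fast-forwarding" feature listed above boils down to: hash a calculation's function name and inputs, and when the same hash is seen again, return the stored result instead of re-running. A minimal generic sketch of that idea (hypothetical helper names, not AiiDA's implementation):

```python
import hashlib
import json

_cache = {}


def cached_run(func, **inputs):
    """Re-run ``func`` only if the (name, inputs) hash has not been seen."""
    key = hashlib.sha256(
        json.dumps([func.__name__, inputs], sort_keys=True).encode('utf-8')
    ).hexdigest()
    if key not in _cache:        # cache miss: actually execute the function
        _cache[key] = func(**inputs)
    return _cache[key]           # cache hit: fast-forward to the stored result
```

Calling `cached_run` twice with identical inputs executes the function only once; changing any input produces a new hash and triggers a fresh run.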
- Add `parser_name` `JobProcess` options [#1118]
- Node attribute reads were not always up to date across interpreters for SqlAlchemy [#1379]
- Cell vectors not printed correctly [#1087]
- Fix read-the-docs issues [#1120] [#1143]
- Fix structure/band visualization in REST API [#1167] [#1182]
- Fix `verdi work list` test [#1286]
- Fix `_inline_to_standalone_script` in `TCODExporter` [#1351]
- Updated `reentry` to fix various small bugs related to plugin registering [#1440]
- Bump `qe-tools` version [#1090]
- Document link types [#1174]
- Switch to trusty + postgres 9.5 on Travis [#1180]
- Use raw SQL in sqlalchemy migration of `Code` [#1291]
- Document querying of list attributes [#1326]
- Document running `aiida` as a daemon service [#1445]
- Document that Torque and LoadLever schedulers are now fully supported [#1447]
- Cookbook: how to check the number of queued/running jobs in the scheduler [#1349]
- PyCifRW upgraded to 4.2.1 [#1073]
- Persist and load parsed workchain inputs and do not recreate to avoid creating duplicates for default inputs [#1362]
- Serialize `WorkChain` context before persisting [#1354]
- Documentation: AiiDA now has an automatically generated and complete API documentation (using `sphinx-apidoc`) [#1330]
- Add JSON schema for connection of REST API to Materials Cloud Explore interface [#1336]
- `FINISHED_KEY` and `FAILED_KEY` variables were not known to `AbstractCalculation` [#1314]
- Make 'REST' extra lowercase, such that one can do `pip install aiida-core[rest]` [#1328]
- `CifData` `/visualization` endpoint was not returning data [#1328]
- `QueryTool` (was deprecated in favor of `QueryBuilder` since v0.8.0) [#1330]
- Add `gource` config for generating a video of development history [#1337]
- Link types were not respected in `Node.get_inputs` for SqlAlchemy [#1271]
- Support visualization of structures and cif files with VESTA [#1093]
- Better fallback when node class is not available [#1185]
- `CifData` now supports faster parsing and lazy loading [#1190]
- REST endpoint for `CifData`, API reports full list of available endpoints [#1228]
- Various smaller improvements [#1100] [#1182]
- Restore attribute immutability in nodes [#1111]
- Fix daemonization issue that could cause aiida daemon to be killed [#1246]
- `Computer`: the shebang line is now customizable [#940]
- `KpointsData`: deprecate buggy legacy implementation of k-point generation in favor of Seekpath [#1015]
- `Dict`: dictionaries passed to `to_aiida_type` are now automatically converted to `Dict` [#947]
- `JobCalculation`: parsers can now specify files that are retrieved locally for parsing, but only temporarily, as they are deleted after parsing is completed [#886] [#894]
- Plugin data hooks: plugins can now add custom commands to `verdi data` [#993]
- Plugin fixtures: simple-to-use decorators for writing tests of plugins [#716] [#865]
- Plugin development: no longer swallow `ImportError` exception during import of plugins [#1029]
- `verdi shell`: improve tab completion of imports [#1008]
- `verdi work list`: projections for verdi work list [#847]
- Supervisor removal: dependency on unix-only supervisor package removed [#790]
- REST API: add server info endpoint, structure endpoint can return different file formats [#878]
- REST API: update endpoints for structure visualization, calculation (includes retrieved input & output list), add endpoints for `UpfData` and more [#977] [#991]
- Tests using daemon run faster [#870]
- Documentation: updated outdated workflow examples [#948]
- Documentation: updated import/export [#994]
- Documentation: plugin quickstart [#996]
- Documentation: parser example [#1003]
- Fix bug with repository on external hard drive [#982]
- Fix bug in configuration of pre-commit hooks [#863]
- Fix and improve plugin loader tests [#1025]
- Fix broken celery logging [#1033]
- async from aiida.work.run has been deprecated because it can lead to race conditions and thereby unexpected behavior [#1040]
- Improved exception handling for loading db tests [#968]
- `verdi work kill` on workchains: skip calculation if it cannot be killed, rather than stopping [#980]
- Remove unnecessary INFO messages of Alembic for SQLAlchemy backend [#1012]
- Add filter to suppress unnecessary log messages during testing [#1014]
- Fix bug in `verdi quicksetup` on Ubuntu 16.04 and add regression tests to catch similar problems in the future [#976]
- Fix bug in `verdi data` list commands for SQLAlchemy backend [#1007]
- The `DbPath` table has been removed and replaced with a dynamic transitive closure because, among other issues, nested workchains could lead to the `DbPath` table exploding in size
- Code plugins have been removed from `aiida-core` and have been migrated to their own respective plugin repositories. Each can be installed from `pip` using e.g. `pip install aiida-quantumespresso`. Existing installations will require a migration (see the update instructions in the documentation). For a complete overview of available plugins you can visit the registry.
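Replacing a stored closure table with a dynamic transitive closure means ancestor queries are answered by walking the graph at query time, instead of maintaining a precomputed table whose size can explode for deeply nested workchains. A minimal sketch of the traversal idea (generic Python, not AiiDA's SQL implementation):

```python
from collections import deque


def ancestors(links, node):
    """Collect all ancestors of ``node`` by breadth-first search.

    ``links`` maps each child to the list of its direct parents.
    """
    seen = set()
    queue = deque(links.get(node, ()))
    while queue:
        parent = queue.popleft()
        if parent not in seen:
            seen.add(parent)
            queue.extend(links.get(parent, ()))
    return seen
```

Each query costs a traversal proportional to the reachable subgraph, but nothing has to be stored or kept up to date when links are added.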
- A new entry `retrieve_temporary_list` in `CalcInfo` allows to retrieve files temporarily for parsing, while not having to store them permanently in the repository [#903]
- New verdi command: `verdi work kill` to kill running workchains [#821]
- New verdi command: `verdi data remote [ls,cat,show]` to inspect the contents of `RemoteData` objects [#743]
- New verdi command: `verdi export migrate` allows the migration of existing export archives to new formats [#781]
- New verdi command: `verdi profile delete` [#606]
- Implemented a new option `-m` for the `verdi work report` command to limit the number of nested levels to be printed [#888]
- Added a `running` field to the output of `verdi work list` to give the current state of the workchains [#888]
- Implemented faster query to obtain database statistics [#738]
- Added testing for automatic SqlAlchemy database migrations through alembic [#834]
- Exceptions that are triggered in steps of a `WorkChain` are now properly logged to the `Node`, making them visible through `verdi work report` [#908]
- Export will now write the link types to the archive and import will properly recreate the link [#760]
- Fix bug in workchain persistence that would lead to crashed workchains under certain conditions being resubmitted [#728]
- Fix bug in the pickling of `WorkChain` instances containing an `_if` logical block in the outline [#904]
- The logger for subclasses of `AbstractNode` is now properly namespaced to `aiida.` such that it works in plugins outside of the `aiida-core` source tree [#897]
- Fixed a problem with the states of the direct scheduler that was causing the daemon process to hang during submission [#879]
- Various bug fixes related to the old workflows in combination with the SqlAlchemy backend [#889] [#898]
- Fixed bug in `TCODexporter` [#761]
- `verdi profile delete` now respects the configured `dbport` setting [#713]
- Restore correct help text for `verdi --help` [#704]
- Fixed query in the ICSD importer element that caused certain structures to be erroneously skipped [#690]
- Added a "quickstart" to plugin development in the documentation, structured around the new plugin template [#818]
- Improved and restructured the developer documentation [#818]
- Workchain steps will no longer be executed multiple times due to process pickles not being locked
- Fix arithmetic operations for basic numeric types
- Fixed `verdi calculation cleanworkdir` after changes in `QueryBuilder` syntax
- Fixed `verdi calculation logshow` exception when called for `WorkCalculation` nodes
- Fixed `verdi import` for SQLAlchemy profiles
- Fixed bug in `reentry` and updated dependency requirement to `v1.0.2`
- Made octal literal string compatible with python 3
- Fixed broken import in the ASE plugin
- `verdi calculation show` now properly distinguishes between `WorkCalculation` and `JobCalculation` nodes
- Improved error handling in `verdi setup --non-interactive`
- Disable unnecessary console logging for tests
- A number of new functionalities have been added to export band structures to a number of formats, including: gnuplot, matplotlib (both to export a python file, and directly PNG or PDF; both with support of LaTeX typesetting and not); JSON; improved agr (xmgrace) output. Also support for two-color bands for collinear magnetic systems. Added also possibility to specify export-format-specific parameters.
- Added method get_export_formats() to know available export formats for a given data subclass
- Added label prettifiers to properly typeset high-symmetry k-point labels for different formats (simple/old format, seekpath, ...) into a number of plotting codes (xmgrace, gnuplot, latex, ...)
- Improvement of command-line export functionality (more options, possibility to write directly to file, possibility to pass custom options to exporter, by removing its DbPath dependency)
- Crucial bug fix: workchains can now be run through the daemon, i.e. by using `aiida.work.submit`
- Enhancement: added `abort` and `abort_nowait` methods to `WorkChain`, which allow aborting the workchain at the earliest possible moment
- Enhancement: added the `report` method to `WorkChain`, which allows a workchain developer to log messages to the database
- Enhancement: added command `verdi work report` which for a given `pk` returns the messages logged for a `WorkChain` through the `report` method
- Enhancement: workchain input ports with a valid default specified no longer require explicitly setting `required=False`; it is overridden automatically
- New plugin system implemented, allowing to load aiida entrypoints, and working in parallel with old system (still experimental, though - command line entry points are not fully implemented yet)
- Support for the plugin registry
- Refactoring of `Node` to move as much as possible of the caching code into the abstract class
- Refactoring of `Data` nodes to have the export code in the topmost class, and to make it more general also for formats exporting more than one file
- Refactoring of a number of `Data` subclasses to support the new export API
- Refactoring of `BandsData` to have export code not specific to xmgrace or a given format, and to make it more general
- General improvements to documentation
- Added documentation to upgrade AiiDA from v0.8.0 to v0.9.0
- Added documentation of new plugin system and tutorial
- Added more in-depth documentation on how to export data nodes to various formats
- Added explanation on how to export band structures and available formats
- Added documentation on how to run tests in developer's guide
- Documented Latex requirements
- Updated WorkChain documentation for the `WaitingEquationOfState` example
- Updated AiiDA installation documentation for installing virtual environment
- Updated documentation to use Jupyter
- Sped up the travis build process by caching pip files between runs
- Node can be loaded by passing the start of its UUID
- Handled invalid verdi command line arguments; added help texts for same
- Upgraded `Paramiko` to 2.1.2 and avoided creating an empty file when the remote connection fails
- `verdi calculation kill` command is now available for the SGE plugin
- Updated `Plum` from 0.7.8 to 0.7.9 to fix workchain inputs that have a default value evaluating to false
- Now QueryBuilder will be imported by default for all verdi commands
- Bug fixes in QE input parser
- Code.get() method accepts the pk in integer or string format whereas Code.get_from_string() method accepts pk only in string format
- `verdi code show` command now shows the description of the code
- Bug fix to check if computer is properly configured before submitting the calculation
- Replacing dependency from the old unmaintained `pyspglib` with the new `spglib`
- Accept BaseTypes as attributes/extras, and convert them automatically to their value. In this way, for instance, it is now possible to pass an `Int`, `Float`, `Str`, ... as the value of a dictionary, and store all into a `Dict`.
- Switch from `pkg_resources` to reentry to allow for much faster loading of modules when possible, and therefore allowing for good speed for bash completion
- Removed obsolete code for Sqlite
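The automatic conversion of base types to their raw value can be sketched generically. The `BaseType` class below is a hypothetical stand-in for AiiDA's `Int`, `Float` and `Str` wrappers, and `clean` is an assumed helper name:

```python
class BaseType:
    """Minimal stand-in for a wrapper type that carries a plain value."""

    def __init__(self, value):
        self.value = value


def clean(obj):
    """Recursively replace BaseType wrappers by their plain value."""
    if isinstance(obj, BaseType):
        return obj.value
    if isinstance(obj, dict):
        return {key: clean(val) for key, val in obj.items()}
    if isinstance(obj, list):
        return [clean(item) for item in obj]
    return obj
```

A dictionary mixing wrapped and plain values can then be stored as one flat structure of ordinary python types.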
- Removed `mayavi2` package from dependencies
- Upgraded the TCODExporter to produce CIF files, conforming to the newest (as of 2017-04-26) version of cif_tcod.dic.
- Added dependency on six to properly re-raise exceptions
- Simplified installation procedure by adopting standard python package installation method through setuptools and pip
- Verdi install replaced by verdi setup
- New verdi command `quicksetup` to simplify the setup procedure
- Significantly updated and improved the installation documentation
- Significantly increased test coverage and implemented for both backends
- Activated continuous integration through Travis CI
- Application-wide logging is now abstracted and implemented for all backends
- Added a REST API layer with hook through verdi cli: `verdi restapi`
- Improved `QueryBuilder`
  - Composition model instead of inheritance, removing the requirement of determining the implementation on import
  - Added keyword `with_dbpath` that makes `QueryBuilder` switch between using the `DbPath` table and not using it
  - Updated and improved documentation
- The QueryTool as well as the `Node.query()` method are now deprecated in favor of the `QueryBuilder`
- Migration of verdi cli to use the `QueryBuilder` in order to support both database backends
- Added option `--project` to `verdi calculation list` to specify which attributes to print
- Documentation is restructured to improve navigability
- Added pseudopotential tutorial
- Dropped support for MySQL and SQLite to fully support efficient features in Postgres like JSONB fields
- Database efficiency improvements with orders of magnitude speedup for large databases [added indices for daemon queries and node UUID queries]
- Replace deprecated `commit_on_success` with atomic for Django transactions
- Change of how SQLAlchemy internally uses the session and the engine to work also with forks (e.g. in celery)
- Finalized the naming for the new workflow system from `workflows2` to `work`
  - `FragmentedWorkFunction` is replaced by `WorkChain`
  - `InlineCalculation` is replaced by `Workfunction`
  - `ProcessCalculation` is replaced by `WorkCalculation`
- Old style Workflows can still be called and run from a new style `WorkChain`
- Major improvements to the `WorkChain` and `Workfunction` implementation
- Improvements to `WorkChain`
  - Implemented a `return` statement for `WorkChain` specification
  - Logging to the database implemented through `WorkChain.report()` for debugging
- Improved kill command for old-style workflows to avoid steps to remaing in running state
- Added finer granularity for parsing PW timers in output
- New Quantum ESPRESSO and scheduler plugins contributed from EPFL
- ASE/GPAW plugins: Andrea Cepellotti (EPFL and Berkeley)
- Quantum ESPRESSO DOS, Projwfc: Daniel Marchand (EPFL and McGill)
- Quantum ESPRESSO phonon, matdyn, q2r, force constants plugins: Giovanni Pizzi, Nicolas Mounet (EPFL); Andrea Cepellotti (EPFL and Berkeley)
- Quantum ESPRESSO cp.x plugin: Giovanni Pizzi (EPFL)
- Quantum ESPRESSO neb.x plugin: Marco Gibertini (EPFL)
- LSF scheduler: Nicolas Mounet (EPFL)
- Implemented functionality to export and visualize molecular dynamics trajectories (using e.g. matplotlib, mayavi)
- Improved the TCODExporter (some fixes to adapt to changes of external libraries, added some additional TCOD CIF tags, various bugfixes)
- Fix for the direct scheduler on Mac OS X
- Fix for the import of computers with name collisions
- Generated backup scripts are now made profile specific and saved as `start_backup_<profile>.py`
- Fix for the vary_rounds warning
- Implemented support for Kerberos authentication in the ssh transport plugin.
- Added `_get_submit_script_footer` to scheduler base class.
- Improvements of the SLURM scheduler plugin.
- Fully functional parsers for Quantumespresso CP and PW.
- Better parsing of atomic species from PW output.
- Array classes for projection & xy, and changes in kpoints class.
- Added code-specific tools for Quantumespresso.
- `verdi code list` now shows local codes too.
- `verdi export` can now export non user-defined groups (from their pk).
- Fixed bugs in (old) workflow manager and daemon.
- Improvements of the efficiency of the (old) workflow manager.
- Fixed JobCalculation text prepend with multiple codes.
This release introduces many significant changes and enhancements.
We worked on our new backend and now AiiDA can be installed using SQLAlchemy too. Many of the verdi commands and functionalities have been tested and are working with this backend. The full JSON support provided by SQLAlchemy and the latest versions of PostgreSQL enable significant speed increase in attribute related queries. SQLAlchemy backend choice is a beta option since some last functionalities and commands need to be implemented or improved for this backend. Scripts are provided for the transition of databases from Django backend to SQLAlchemy backend.
In this release we have included a new querying tool called `QueryBuilder`. It is a powerful tool that allows users to write complex graph queries to explore the AiiDA graph database. It provides features such as selection of entity properties, filtering of results, and combination of entities on specific properties, as well as various ways to obtain the final result. It also gives users an abstract way to query their data without forcing them to write backend-dependent queries.
Last but not least, we have included a new workflow engine (in beta version), which is available through the `verdi workflows2` command. The new workflows are easier to write (it is as close to writing python as possible), short running tasks mix seamlessly with long running (remote) tasks, and they encourage users to write reusable workflows. Moreover, debugging of workflows has been made easier and is possible both in-IDE and through logging.
- Installation procedure works with SQLAlchemy backend too (SQLAlchemy option is still in beta).
- Most of the verdi commands work with SQLAlchemy backend.
- Transition script from Django schema of version 0.7.0 to SQLAlchemy schema of version 0.7.0.
- AiiDA daemon redesigned and working with both backends (Django & SQLAlchemy).
- Introduced a new workflow engine that allows better debugging and easier-to-write workflows. It is available under the `verdi workflows2` command. Examples are also added.
- Old workflows are still supported and available under the `verdi workflow` command.
- Introduced a new querying tool (called `QueryBuilder`). It allows users to easily write complex graph queries that will be executed on the AiiDA graph database. Extensive documentation has also been added.
- Unified the behaviour of verdi commands in both backends.
- Upgraded to version 0.4.2 of plum (needed for workflows2).
- Implemented the validator and input helper for Quantum ESPRESSO pw.x.
- Improved the documentation for the pw (and cp) input plugins (for all the flags in the Settings node).
- Fixed a wrong behaviour in the QE pw/cp plugins when checking the parser options and checking for further unknown flags in the Settings node. However, this does not yet completely solve the problem (see issue #219).
- Added elements with Z=104-112, 114 and 116 in `aiida.common.constants`.
- Added method `set_kpoints_mesh_from_density` in the `KpointsData` class.
- Improved incremental backup documentation.
- Added backup-related tests.
- Added an option to `test_pw.py` to run also in serial.
- SSH transport, to connect to remote computers via SSH/SFTP.
- Support for the SGE and SLURM schedulers.
- Support for Quantum ESPRESSO Car-Parrinello calculations.
- Support for data nodes to store electronic bands, phonon dispersion and generally arrays defined over the Brillouin zone.
We made many changes in order to introduce, in one of the following releases, a second object-relational mapper (we will refer to it as a back-end) for the management of the supported DBMSs, and more specifically of PostgreSQL. SQLAlchemy and the latest version of PostgreSQL allow AiiDA to store JSON documents directly in the database and also to query them. Moreover, JSON query optimization is left to the database, including the use of JSON-specific indexes. There was major code restructuring to accommodate the new back-end, resulting in the abstraction of many classes of AiiDA's orm package.
Even if most of the needed restructuring and code additions have been finished, a bit more work is needed. Therefore, in this version too, Django is the only back-end available to the end user.
However, users have to update their AiiDA configuration files by executing the migration file that can be found at `YOUR_AIIDA_DIR/aiida/common/additions/migration.py`, as the Linux user that installed AiiDA on the system (e.g. `python YOUR_AIIDA_DIR/aiida/common/additions/migration.py`).
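The attribute-query speed-up described above comes from letting the database itself store, index and search JSON documents. PostgreSQL's JSON operators are what the SQLAlchemy back-end targets; purely as an illustration of the same idea, here is a self-contained sketch using the stdlib `sqlite3` module and SQLite's `json_extract` function (this is not AiiDA code, and the table layout is invented for the example):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE node (id INTEGER PRIMARY KEY, attributes TEXT)")
rows = [
    (1, json.dumps({"energy": -3.2, "element": "Si"})),
    (2, json.dumps({"energy": 1.5, "element": "C"})),
]
conn.executemany("INSERT INTO node VALUES (?, ?)", rows)

# The attribute query runs inside the database engine, not in Python.
cur = conn.execute(
    "SELECT id FROM node WHERE json_extract(attributes, '$.energy') < 0"
)
print(cur.fetchall())  # [(1,)]
```

With PostgreSQL the filtering expression can additionally be backed by a JSON-specific index, which is the optimization the paragraph above refers to.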
- Added back-end selection; SQLAlchemy selection is disabled for the moment.
- Migration scripts for the configuration files of AiiDA (SQLAlchemy support).
- Enriched link description in the database (to enrich the provenance model).
- Corrections for numpy arrays and cells; lists will now be used for cells.
- Fixed backend import; verdi commands now load the needed backend as late as possible.
- Abstracted the basic AiiDA orm classes (like node, computer, data, etc.); this is needed to support different backends (e.g. Django and SQLAlchemy).
- Fixes on the structure import from QE-input files.
- SQLAlchemy and Django benchmarks.
- UltraJSON support.
- `requirements.txt` now also includes SQLAlchemy and its dependencies.
- Recursive way of loading JSON for SQLAlchemy.
- Improved way of accessing calculations and workflows attached to a workflow step.
- Added methods to programmatically create new codes and computers.
- Final paper published, ref: G. Pizzi, A. Cepellotti, R. Sabatini, N. Marzari, and B. Kozinsky, AiiDA: automated interactive infrastructure and database for computational science, Comp. Mat. Sci 111, 218-230 (2016)
- Core, concrete requirements kept in `requirements.txt`; optional ones moved to `optional_requirements.txt`.
- Schema change to v1.0.2: got rid of `calc_states.UNDETERMINED`.
- [non-backwards-compatible] Now supporting execution of multiple codes in the same submission script. The plugin interface changed, requiring adaptation of the code plugins.
- Added import support for XYZ files
- Added support for van der Waals table in QE input
- Restart QE calculations while avoiding the use of scratch, by using a copy of the parent calculation.
- Added a database importer for the NNIN/C Pseudopotential Virtual Vault.
- Implemented conversion of pymatgen Molecule lists to AiiDA's TrajectoryData.
- Added a converter from pymatgen Molecule to AiiDA StructureData.
- Queries are now much faster when exporting.
- Added an option to export a zip file
- Added backup scripts for efficient incremental backup of large AiiDA repositories
- Added the possibility to add any kind of Django query in Group.query
- Added TCOD (Theoretical Crystallography Open Database) importer and exporter
- Added option to sort by a field in the query tool
- Implemented selection of data nodes and calculations by group
- Added NWChem plugin
- Changed the default behaviour of symbolic-link copying in the transport plugins: in the "put"/"get" methods, symbolic links are followed before copying; in the "copy" methods, symbolic links are not followed (they are copied "as is").
- Explicit Torque support (some slightly different flags)
- Improved PBSPro scheduler
- Added new `num_cores_per_machine` and `num_cores_per_mpiproc` fields for the PBS and Torque schedulers (giving full support for MPI+OpenMP hybrid codes).
- Direct scheduler added, allowing calculations to be run without a batch system (i.e. by directly calling the executable).
- Support for profiles added: it allows users to switch between database configurations using the `verdi profile` command.
- Added `verdi data structure import --file file.xyz` for importing XYZ files.
- Added a `verdi data upf exportfamily` command (to export a UPF pseudopotential family into a folder).
- Added new functionalities to the `verdi group` command (show list of nodes, add and remove nodes from the command line).
- Allowed the verdi export command to take group PKs.
- Added ASE as a possible format for visualizing structures from command line
- Added possibility to export trajectory data in xsf format
- Added possibility to show trajectory data with xcrysden
- Added filters on group name in `verdi group list`.
- Added the possibility to load custom modules in the verdi shell (additional property `verdishell.modules` created; can be set with `verdi devel setproperty verdishell.modules`).
- Added `verdi data array show` command, using `json_date` serialization to display the contents of `ArrayData`.
- Added `verdi data trajectory deposit` command line command.
- Added command options `--computer` and `--code` to `verdi data * deposit`.
- Added a command line option `--all-users` for `verdi data * list` to list objects owned by all users.
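The two scheduler fields listed earlier, `num_cores_per_machine` and `num_cores_per_mpiproc`, pin down the MPI+OpenMP layout of a hybrid job together with the number of MPI processes per machine. A sketch of the arithmetic involved (the function name is ours, not AiiDA's; real scheduler plugins perform an analogous consistency check):

```python
def hybrid_layout(num_cores_per_machine, num_mpiprocs_per_machine):
    """Derive the cores (OpenMP threads) per MPI rank for a hybrid job.

    The cores per machine must be an exact multiple of the MPI
    processes per machine, otherwise the layout is inconsistent.
    """
    if num_cores_per_machine % num_mpiprocs_per_machine != 0:
        raise ValueError("cores per machine must be a multiple of MPI procs")
    return num_cores_per_machine // num_mpiprocs_per_machine

# 16-core node with 4 MPI ranks per node -> 4 OpenMP threads per rank
print(hybrid_layout(16, 4))  # 4
```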