Release v2.7.2 #7132
base: support/2.7.x
Conversation
Cherry-pick: a8230b4
* mypy: use --pretty for nicer error messages!
* Fix bug in existing archive import validation!
* Use cast
* Fix typing of shared_options in src/aiida/cmdline/groups/dynamic.py
* Mark ctx positional-only in profile_configure_rabbitmq

Cherry-pick: 2106844
Cherry-pick: 1229ddd
* Add slack-notification action, use it in test-install.yml
* Don't run on PRs

Cherry-pick: b235b3a
* Only show 5 slowest test durations
* Remove deprecation warning in cmd_node.py
* Remove deprecation warnings in tests/cmdline/commands/test_node.py
* Add stacklevel to warn_deprecation
* Ignore plumpy get_event_loop deprecation warning
* Fix fixture in tests/cmdline/commands/test_process.py
* Ignore 'dict interface' spglib deprecation warning
* Fix pymatgen deprecation warning in tests/test_dataclasses.py::TestStructureDataFromPymatgen::test_1:
  tests/test_dataclasses.py:2098: FutureWarning: get_structures is deprecated; use parse_structures in pymatgen.io.cif instead. The only difference is that primitive defaults to False in the new parse_structures method. So parse_structures(primitive=True) is equivalent to the old behavior of get_structures(). pymatgen_struct = pymatgen_parser.get_structures()[0]

Cherry-pick: cb5348e
Following commit b2a6e2, this commit updates the documentation on stashing calcjobs. Cherry-pick: 81dd4df
…glob (#6950) This commit suggests two changes to the core.ssh_async interface:
- Catch the exception and raise a better error message if the connection fails to open
- Globbing a non-existing path or an unmatched pattern no longer raises; instead it just returns an empty list, to follow the same convention as core.ssh

Cherry-pick: da3e425
…ry detection (#6935) Previously, the configuration directory lookup split the AIIDA_PATH environment variable using a hardcoded ':' separator, which caused issues on Windows systems where ';' is used as the path separator. This update introduces platform detection using sys.platform to select the correct separator for splitting paths, ensuring compatibility across operating systems.

Co-authored-by: lainme <lainme993@gmail.com>
Co-authored-by: Ali Khosravi <khsrali@gmail.com>
Cherry-pick: 32d515a
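A minimal sketch of the platform-dependent split described above; the helper name and its placement are illustrative rather than AiiDA's actual code (only the AIIDA_PATH variable name and the ':' vs ';' separators come from the description):

```python
import os
import sys


def split_aiida_path(value: str) -> list[str]:
    """Split an AIIDA_PATH-style value using the platform's path separator."""
    # Windows uses ';' as the path separator, POSIX systems use ':'
    separator = ';' if sys.platform.startswith('win') else ':'
    return [entry for entry in value.split(separator) if entry]


paths = split_aiida_path(os.environ.get('AIIDA_PATH', ''))
```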
* Add typing to src/aiida/cmdline/params/options/multivalue.py
* Add typing to src/aiida/cmdline/params/types/group.py
* Type check src/aiida/cmdline/params/options/interactive.py
* Typecheck src/aiida/cmdline/params/options/main.py
* Typecheck src/aiida/cmdline/params/options/commands/setup.py
* Type AiiDAConfigDir.get and get_config
* Fix type error in src/aiida/cmdline/utils/multi_line_input.py
* Add typing to src/aiida/cmdline/utils/shell.py
* Don't allow sub_classes=None in cmdline/params/types/group.py

Cherry-pick: cfb7fd1
* Various cleanups in verdi.sh tests
  - In 8.2.0, just running 'verdi' or 'python -m aiida' without any arguments returns a non-zero code. In verdi.sh we add '-h' to work around this.
  - Move verdi devel check-* commands to verdi.sh
  - Reduce verdi load limit to 300ms
* Improve output of verdi devel commands

Cherry-pick: f995713
* Add resolution=lowest-direct CI job
* Fix extras parameter when from-lock: false
* Bump alembic
* Bump flask version
* Bump psycopg and sqlalchemy
* Bump pymatgen and ASE
* Bump tabulate
* Bump pydantic
* Bump pyparsing and ipython
* Upgrade lockfile

Cherry-pick: 577d7c9
* Use typing_extensions.assert_never for python<3.11

Co-authored-by: Daniel Hollas <daniel.hollas@bristol.ac.uk>
Cherry-pick: 494179d
Fixing it properly, e.g., by adding a `target_dir` parameter, would require adding it throughout the call stack (`_export_yaml`, `Data.export`, `Data._exportcontent`, `data_export`). Maybe do later on. Cherry-pick: 01a4dcd
* Provide overloaded method definitions for the `flat=True` and `flat=False` arguments.
* Bump mypy to 1.17
* Cast away the casts

Cherry-pick: ee87cec
* Add filter argument to shutil.unpack_archive calls in tests
* Ignore psycopg ResourceWarnings
* Add note about the test_kill_job test
* More Cif ignores

Cherry-pick: 0eb794e
In favor of `.github/workflows/pre-commit.yml` which can actually run mypy and the other previously skipped checks. Removing the `ci` section from `.pre-commit-config.yaml` also removes the auto-generated PR commits by pre-commit.ci. Locally, pre-commit still works the same. The pre-commit.ci integration that was installed via the pre-commit ci website and through GitHub was also removed as part of this PR. Cherry-pick: b8df58d
Add typing to aiida.cmdline.{common,echo,ascii_vis}.py
Cherry-pick: 76f776a
The transport classes have a method for changing the permissions of a file, with the same name as the UNIX command `chmod`. Users familiar with this command might not realise the input is an octal number, and that in Python you can express this as 0o700 to obtain the same outcome as `chmod 700` on the command line. Here we update the docstring of the `chmod` methods to help clarify this. Cherry-pick: 0eb8fc3
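To illustrate the octal point (using the standard library's os.chmod here rather than the transport method itself, purely as an example):

```python
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as handle:
    path = handle.name

# 0o700 is the Python octal literal equivalent to `chmod 700` on the command line:
# read/write/execute for the owner, nothing for group and others.
os.chmod(path, 0o700)

# Passing the decimal integer 700 would set a completely different mode,
# which is exactly the confusion the updated docstring addresses.
assert 0o700 == 448 and 700 != 0o700
```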
Typecheck cmd_code.py, cmd_computer.py, cmd_storage.py, cmd_devel.py, cmd_group.py and aiida/tools/groups/paths.py Cherry-pick: 594cf56
Move the values to insert into the `execute` call rather than the `insert` statement. Cherry-pick: 9bccdc8
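Presumably this refers to the standard SQLAlchemy pattern sketched below; a minimal, self-contained example with an illustrative in-memory table (not AiiDA's schema):

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine, insert

engine = create_engine('sqlite://')            # throwaway in-memory database
metadata = MetaData()
table = Table('example', metadata,
              Column('id', Integer, primary_key=True),
              Column('label', String))
metadata.create_all(engine)

rows = [{'id': 1, 'label': 'a'}, {'id': 2, 'label': 'b'}]

with engine.begin() as connection:
    # instead of connection.execute(insert(table).values(rows)),
    # pass the rows to execute() so the driver binds them as parameters
    connection.execute(insert(table), rows)
```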
…6963) refactor: extract archive version validation into reusable methods
- Add `get_current_archive_version()` and `validate_archive_versions()` static methods
- Skip migration in `initialise()` when already at target version
- Move version checking logic from migrator to backend for reusability

Cherry-pick: 7255f01
* More typing for cmdline.params.{arguments,types} modules
* More precise Callable type
* Better types for cmdline.params.types.strings module
* More accurate return types of convert methods
* Strict typing for cmdline.params.options
* Don't ignore errors from plumpy
* Enable strict settings for aiida.cmdline.params
* Upgrade mypy to 1.18.1
* Strict typing for cmdline.groups module
Cherry-pick: 0d5c4d4
Cherry-pick: 2d0a0ed
We move the pre-existing pytest fixture tests that test the new pytest fixtures to their respective place in the test suite, from `tests/manage/tests/test_pytest_fixture.py` to `tests/tools/pytest_fixtures/test_orm.py`, and we copy them over to the file with tests for the old pytest fixtures, since they are valid for both the old and new fixtures. Cherry-pick: 6afd3f5
Clarify the distinction between calculation-like and workflow-like processes in AiiDA. Expand on the roles and capabilities of each process type, including whether workflows can create data.

Co-authored-by: Kristjan Eimre <eimrek@users.noreply.github.com>
Cherry-pick: d50cf17
For table construction, the pydantic `FieldInfo`s of each model field are used to exclude fields with `exclude_from_cli=True`. In addition, `is_attribute` is being (mis)used (see comment in src) to avoid bare `AttributeError` exception handling, which otherwise captures `filepath_files` of `PortableCode`, which is never set on the instance or stored in the database after creation of the entity.

The meaningless `test_code_show` test function has been improved, and regression tests using the `file_regression` fixture were added that compare the command output to the expected output stored in `txt` files. This ensures no other formatting changes appear in the future (no duplications, wrong capitalization, etc., as resolved by this PR). Cherry-pick: 32742c0
This is required if the executable is in the top folder (of the repository) and will thus be put in the top folder of the SCRATCH directory (AiiDA's RemoteData); otherwise the bash script (given to the scheduler) will not be able to execute it, since `./` is not typically on bash's PATH environment variable for security reasons. Also fixes a small bug where PortableCode would not accept a binary defined in a subfolder of the repository. Cherry-pick: 9256f2f
Cherry-pick: 289cdd5
… fix, those codes were not shown (e.g., PortableCodes). Now, they are properly added to the list. Cherry-pick: 27b52da
bpython is an alternative interactive shell, basically a lighter-weight ipython. Both can be used for the verdi shell command: ipython is installed by default, while bpython is installed via the optional 'bpython' extra. The minimum version was bumped to 0.20, which is the first version to support Python 3.9. I've removed the upper pin, so that new versions can naturally track new Python versions. Cherry-pick: f01c70e
Cherry-pick: ce11608
Co-authored-by: npaulish <nataliya.paulish@psi.ch> Cherry-pick: 49af7f0
Cherry-pick: 9b69202
* CI: Add Slack notification to install-with-conda-job Cherry-pick: 0dcd10a
This should improve the performance of the async_ssh plugin for lots of small files, since we no longer sleep 500 ms while waiting for a lock. It also makes the code safer, since one doesn't need to manually lock and unlock, and fixes a bug where the number of file IO connections was not decreased when an OSError exception was thrown. Cherry-pick: ad5cafd
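A generic sketch of the pattern described above (an async context manager guarding a connection budget); this is not the plugin's actual code and all names are illustrative:

```python
import asyncio

connection_slots = asyncio.Semaphore(8)        # illustrative cap on concurrent file-IO connections


async def do_transfer(path: str) -> None:      # stand-in for the real file transfer
    await asyncio.sleep(0)


async def transfer(path: str) -> None:
    # the context manager releases the slot even if do_transfer raises OSError,
    # and waiters are woken as soon as a slot frees up instead of sleeping 500 ms
    async with connection_slots:
        await do_transfer(path)


asyncio.run(transfer('example.txt'))
```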
Use database-specific optimizations for large IN clauses in the QueryBuilder, reducing parameter usage from N parameters (one per value) to 1 parameter per list. This avoids hitting database parameter limits (SQLite: ~999, PostgreSQL: ~65k) when filtering on large sets of IDs.

Implementation details:
- PostgreSQL uses `unnest()` with native ARRAY types. The list is passed as a single array parameter using `type_coerce()` (a Python-side SQLAlchemy hint that doesn't generate SQL CAST). PostgreSQL natively understands arrays, so values from `unnest()` are already properly typed.
- SQLite uses `json_each()` since it lacks native array support. The list is serialized to JSON and passed as a single parameter. Since `json_each()` returns all values as TEXT, an explicit SQL CAST is required to convert to the target column type.
- For very large lists (>500k items), automatic batching splits them into multiple OR'd conditions. The recursion depth is always exactly 2: the first call splits into ≤500k batches, and recursive calls immediately hit the base case since each batch is guaranteed to be ≤500k items.

This optimization eliminates the need for `filter_size` parameter batching throughout the codebase, particularly in archive import/export operations where large node sets are common. Cherry-pick: 8d562b4
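For illustration, a minimal SQLAlchemy sketch of the two dialect-specific constructs described above (assuming an integer ID column; this is not the actual QueryBuilder implementation):

```python
import json

from sqlalchemy import Integer, cast, column, func, select, type_coerce
from sqlalchemy.dialects.postgresql import ARRAY

ids = list(range(100_000))  # a large list of IDs to filter on

# PostgreSQL: bind the whole list as one ARRAY parameter and expand it with unnest()
pg_values = select(func.unnest(type_coerce(ids, ARRAY(Integer))))

# SQLite: bind the list as a single JSON string and expand it with json_each();
# json_each() yields TEXT, so an explicit CAST back to the column type is needed
json_values = func.json_each(json.dumps(ids)).table_valued('value')
sqlite_values = select(cast(json_values.c.value, Integer))

# either subquery can then back an IN clause, e.g. column('node_id').in_(pg_values)
```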
### Problem
The JobsList class had a race condition where:
1. Job A requests a status update, triggering a scheduler query
2. While the scheduler is being queried (with only Job A), Job B also requests an update
3. After the scheduler returns (with only Job A's status), both futures were resolved
4. Job B's future was resolved as DONE, because AiiDA assumes any job ID that disappeared from the scheduler query has completed

This premature "DONE" status causes several critical issues:
- Premature retrieval: AiiDA attempts to retrieve output files while the job is still running
- Corrupted files: Files may be incomplete or still being written when retrieved
- False failure reports: Jobs still running may be incorrectly marked as failed

This issue only surfaces when using async transport plugins like core.ssh_async or aiida-firecrest, where the timing conditions make the race condition more likely to occur.

### Solution
Only resolve futures for jobs that were actually inspected by the scheduler.

### Testing
Added test_prevent_racing_condition, which explicitly tests the race condition scenario.

Cherry-pick: e79f0a4
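A deliberately generic sketch of the fix (not AiiDA's JobsList code; all names are illustrative): snapshot the set of job IDs actually sent to the scheduler and resolve only those futures when the query returns:

```python
import asyncio

pending: dict[str, asyncio.Future] = {}        # futures of jobs waiting for a status update


async def query_scheduler(job_ids: set[str]) -> dict[str, str]:
    await asyncio.sleep(0)                     # stand-in for the real scheduler round-trip
    return {job_id: 'RUNNING' for job_id in job_ids}


async def update_jobs() -> None:
    inspected = set(pending)                   # only these jobs are part of this query
    statuses = await query_scheduler(inspected)
    for job_id in inspected:                   # jobs registered during the await keep waiting
        future = pending.pop(job_id)
        if not future.done():
            # only a job that was actually inspected may be treated as finished when missing
            future.set_result(statuses.get(job_id, 'DONE'))
```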
The removal of the filter_size batching in #6998 caused the progress bar to only update once at the end instead of incrementally. Here, we re-introduce query batching (50k IDs per batch, for UX) and add QueryBuilder streaming-based progress updates (every ~10k entities) by replacing the blocking list comprehension with an explicit for-loop. Progress now updates during both query batching (for UX) and result streaming. Cherry-pick: 4e54cd4
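A schematic sketch (not AiiDA's code) of the batching plus streaming-progress pattern described above; only the batch and reporting sizes come from the description:

```python
def iter_batches(ids: list[int], batch_size: int = 50_000):
    """Yield the IDs in fixed-size batches so each query stays small."""
    for start in range(0, len(ids), batch_size):
        yield ids[start:start + batch_size]


processed = 0
for batch in iter_batches(list(range(120_000))):
    for entity in batch:                        # stand-in for streaming QueryBuilder results
        processed += 1
        if processed % 10_000 == 0:
            print(f'processed {processed} entities')   # stand-in for a progress-bar update
```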
If a file/key is missing when importing an archive, the `open` method in `ZipfileBackendRepository` fails with an `UnboundLocalError` because the `handle` variable is referenced before assignment. This was initially discovered when working on the following issue on discourse: https://aiida.discourse.group/t/error-archive-import/696 Cherry-pick: 166d06c
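A generic illustration of the bug pattern (not the actual ZipfileBackendRepository code): when the lookup fails, the later reference raises UnboundLocalError instead of a meaningful error about the missing key:

```python
def open_member(archive: dict, key: str):
    try:
        handle = archive[key]
    except KeyError:
        pass                  # the error path leaves `handle` unassigned
    return handle             # UnboundLocalError when `key` is missing


open_member({}, 'data/missing-object')   # raises UnboundLocalError rather than KeyError
```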
Codecov Report
❌ Patch coverage is …
Additional details and impacted files:

@@             Coverage Diff             @@
##          support/2.7.x    #7132     +/- ##
=============================================
+ Coverage         79.13%   79.48%   +0.36%
=============================================
  Files               565      564       -1
  Lines             43391    43289     -102
=============================================
+ Hits              34331    34405      +74
+ Misses             9060     8884     -176

☔ View full report in Codecov by Sentry.
I am a bit confused by the number of commits (for a patch release!). Shouldn't we just pick the most critical bug fixes and leave the rest for 2.8.0? That seems much safer and in line with other patch releases, which have always been very targeted in the past. (EDIT: sorry if I missed a discussion about this)
Hm, that's a valid concern... though, I actually went manually through all the commits and only selected the ones that are suitable for a patch release following SemVer (that is, no new features). E.g., the numpy upgrade, the click upgrade, unstashing, any new methods, etc., are not included. In addition, the recent PRs have been rather small; e.g., I already have two one-line-change PRs to drop the duplicated CLI arguments. We also plan v2.8.0, with a bunch of new features already in the pipeline, rather soon. So, I do think it's fine to go through with this patch release (given that it's basically ready now). What do you think?

EDIT: Didn't know about the history of patch releases and how targeted they were in the past, tbh. Though, my original motivation for the patch release was fixing the PSQL OpErrs that would lead to archive export failures, and as that kept dragging on, and people reported (and fixed) more and more issues, the list of things in the 2.7.2 project grew ^^ Thoughts, @khsrali, @agoscinski?
Well, the fact that the 2.8 release is imminent is more of an argument for a small and targeted patch release only containing critical bugfixes.

To be super clear, I fully trust you did a great job on this. But in the end it is just statistics --- every change (even a "bugfix") brings an extra chance of breaking something. That's why I think patch releases should be targeted only at critical bugfixes. To me, in this case those are the async-ssh fixes and your PSQL import/export fixes (perhaps others I am not aware of).

Yeah, I think SemVer doesn't say that every bugfix needs to be released in a patch release --- it can very well be in a minor release. It says the opposite implication --- patch releases can only contain bugfixes and nothing else.
Anyway, this is obviously up to you, just wanted to flag this. :-) btw: It looks like 2.7.1 is missing from CHANGELOG?
Cheers, thanks for bringing this up, @danielhollas. I thought more about it on the way home, and I think you're right: better to have a smaller, more focused patch release 🙏🏻 Will take care of it on Monday!
Ping @khsrali @agoscinski