v1.0.0
Development v1.0.0
Athroniaeth authored Feb 3, 2024
2 parents a477551 + f6919fa commit 9ed2d7e
Showing 46 changed files with 2,241 additions and 269 deletions.
90 changes: 81 additions & 9 deletions CHANGELOG.md
@@ -1,10 +1,82 @@
# Changelog
## v0.1.0
- Add `lock` fixture
- Add `lock.lock` method
- Add CLI argument `--lock` to pytest
- Add functionality: if a test uses the `lock` fixture, compare the result of the test with the result in the cache file
- Add functionality: if a test uses `lock` and the `--lock` CLI argument, lock the result of the test to a cache file
- Add functionality: if a test uses `lock` and the `--simulate` CLI argument, simulate the result of the test without writing to the cache file
- Add functionality: if a test uses `lock` and the `--only-skip` CLI argument, don't update the lock if the result was not locked
- Add functionality: if a test uses `lock` and the `--lock-date` CLI argument, lock the result of the test to a cache file with the date of the lock; if the date has expired, the test fails

## Version v1.0.0
* **Date:** _2024-02-03_
* **Version:** _>=3.8 and <=3.12_
* **Note:** this release bumps the major version because old acceptance tests fail (a breaking change in the library's expected behavior)

### branch: *"feature/lock-fixture"*

* **Status:** _Finished_
* **Note:** Branch containing the base of the pytest fixture `lock`; it must be able to easily integrate new functions, CLI arguments, etc.

- [X] Modified the `--lock` argument; it now targets only tests with the `lock` fixture
- [X] Tests with the `lock` fixture had `skipped` status; they now have `passed` status
- [X] Tests without the `lock` fixture had `passed` status; they now have `skipped` status

### branch: *"feature/fixture-lock-pickle"*

* **Status:** _Finished_
* **Note:** This branch requires that the branch "feature/fixture-lock-method" be finalized.

- [X] Add a `pickle` extension for `lock.lock` to support more types of data
- [X] `pickle` is now the default extension for `lock.lock` if no extension is specified
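The reason `pickle` supports more types of data than JSON is that it round-trips arbitrary Python objects, while JSON is limited to a few primitive types. A quick illustration in plain Python, independent of pytest-lock:

```python
import json
import pickle


def can_roundtrip_json(value) -> bool:
    """Return True if `value` survives a JSON encode/decode unchanged."""
    try:
        return json.loads(json.dumps(value)) == value
    except TypeError:
        return False


# A set is an ordinary test result type, but JSON cannot encode it...
assert can_roundtrip_json({1, 2, 3}) is False
# ...while pickle restores it exactly.
assert pickle.loads(pickle.dumps({1, 2, 3})) == {1, 2, 3}
```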

### branch: *"feature/fixture-lock-clean"*

* **Status:** _Started_
* **Note:** This branch requires that the branch "feature/fixture-lock-method" be finalized.

- [X] If a test run uses the `--lock` and `--clean` arguments, clean all unused cache files
- [X] If a test run uses the `--lock`, `--clean`, and `--only-skip` arguments, do nothing; it's certainly a mistake (why clean only tests with an existing lock?)
- [X] If a test run uses the `--lock`, `--clean`, and `--simulate` arguments, list all unused cache files that would be removed, without removing them
- [X] If a test run uses the `--lock`, `--clean`, and `--lock-date` arguments, throw an exception (you can't date-lock a cache file that is being removed)


## Version v0.1.2
* **Date:** _2024-01-27_
* **Version:** _>=3.8 and <=3.12_

### branch: *"feature/lock-fixture"*

* **Status:** _Finished_
* **Note:** Branch containing the base of the pytest fixture `lock`; it must be able to easily integrate new functions, CLI arguments, etc.

- [X] Add fixture `lock`


### branch: *"feature/fixtures-lock-method"*

* **Status:** _Finished_
* **Note:** This branch requires that the branch "feature/lock-fixture" be finalized.

- [X] Add fixture method `lock.lock` to lock the result of a test to a cache file

- [X] If a test uses `lock.lock` and the result was not locked, an exception is thrown
- [X] If a test uses `lock.lock` and the result is in the cache file and matches the result of the test, the test is valid
- [X] If a test uses `lock.lock` and the result is in the cache file but does not match the result of the test, the test is invalid (failed)

- [X] If a test uses `lock.lock` and `--lock` as a CLI argument, run the test and lock the result in the cache file.
- [X] If a test uses `lock.lock` and `--simulate` as a CLI argument, simulate the result of the test without writing to the cache file.
- [X] If a test uses `lock.lock` and `--only-skip` as a CLI argument, don't update the lock if the result was not locked.
- [X] If a test uses `lock.lock` and `--lock-date` as a CLI argument, lock the result of the test to a cache file with the date of the lock; if the date has expired, the test fails

- [X] If a test run uses the `--simulate` argument without the `--lock` argument, it's invalid; throw an exception
- [X] If a test run uses the `--lock-date` argument without the `--lock` argument, it's invalid; throw an exception
- [X] If a test run uses the `--only-skip` argument without the `--lock` argument, it's invalid; throw an exception
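The three flag-dependency rules above amount to one validation: `--simulate`, `--lock-date`, and `--only-skip` are modifiers of `--lock` and are meaningless without it. A hedged sketch of such a check (hypothetical helper, not the plugin's actual code):

```python
from typing import Optional


def validate_flags(lock: bool, simulate: bool = False,
                   lock_date: Optional[str] = None,
                   only_skip: bool = False) -> None:
    """Raise if a modifier of --lock is passed without --lock itself."""
    modifiers = {
        "--simulate": simulate,
        "--lock-date": lock_date is not None,
        "--only-skip": only_skip,
    }
    for flag, given in modifiers.items():
        if given and not lock:
            raise ValueError(f"{flag} requires --lock")
```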

### branch: *"feature/fixture-lock-date-support"*

* **Status:** _Finished_
* **Note:** This branch requires that the branch "feature/fixture-lock-method" be finalized.

- [X] Add support for `pytest --lock --lock-date 13/12/2023`: if a test has the `lock` fixture, lock the result of the
test to a cache file with the date of the lock; if the date has expired, the test is skipped
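A sketch of the date check implied by `--lock-date 13/12/2023` (the DD/MM/YYYY format is assumed from the example; this is plain Python, not the plugin's code):

```python
from datetime import date, datetime


def lock_is_expired(lock_date: str, today: date) -> bool:
    """Return True if the DD/MM/YYYY lock date lies in the past."""
    expiry = datetime.strptime(lock_date, "%d/%m/%Y").date()
    return today > expiry
```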

---

## Version v0.1.0
* **Date:** _2024-01-xx_
* **Version:** _xxxxxx_

The package upload tests on PyPI were awkwardly carried out with versions `v0.1.0` and `v0.1.1`. The package was defective, so these releases were removed, but PyPI refuses to let a removed version number be reused, even if no download was made. The first version is therefore `v0.1.2`.
93 changes: 48 additions & 45 deletions ROADMAP.md
@@ -13,10 +13,12 @@ that the result of the test is always the same. If the result of the test is dif

### Reverse tests

_This idea has been cancelled because it does not correspond to the purpose of pytest-lock and will be destined, perhaps, for another library._

~~This idea came from the fact that while watching the comments of a YouTube video (which I no longer remember) that it
was a shame not to be able to do "reverse" tests. The idea is that once we have done our unit tests, if they work, we
can test with "random" or "unexpected" variables, if the test fails it means everything is fine. The idea here is to
take this up by offering “reverse” tests based on the lock tests.~~

## Architecture

@@ -25,69 +27,66 @@ additions (such as configurations, unforeseen functionality) through the use of

## Tasks

### branch: *"feature/fixture-lock-clean"*

* **Status:** _Finished_
* **Note:** This branch requires that the branch "feature/fixture-lock-method" be finalized.

- [ ] Add a `clean-all` method to clean all cache files, even those that don't have tests associated with the `lock` fixture
- [ ] Add a `clean-unused` method to clean all cache files that don't have tests associated with the `lock` fixture

---

### branch: *"feature/fixture-lock-data"*

* **Status:** _Finished_
* **Note:** This branch requires that the branch "feature/fixture-lock-method" be finalized.

- [ ] Add support for `lock.lock` on data-library types in different formats (json, pickle, etc.)
- [ ] Add PyPI extras to install this feature:
  - [ ] `pip install pytest-lock[pandas]`
  - [ ] `pip install pytest-lock[polars]`
  - [ ] `pip install pytest-lock[numpy]`

- [ ] Add support for `pandas.DataFrame`
- [ ] Add support for `pandas.Series`
- [ ] Add support for `polars.DataFrame`
- [ ] Add support for `polars.Series`
- [ ] Add support for `numpy.ndarray`

### branch: *"feature/fixture-lock-improve-skip"*

* **Status:** _Not Started_
* **Note:** This branch requires that the branch "feature/fixture-lock-method" be finalized.

- [ ] `lock.lock` with `--lock` must pass tests that don't use the lock fixture and turn tests that were **skipped** into **passed**. This avoids the known issue of not being able to lock several tests with `lock.lock` in a single test function at the same time, as **skip** acts like **failed** and prevents the test from being executed.

### branch: *"feature/fixture-reversed-method"*

* **Status:** _Not started_
* **Note:** This branch requires that the branch "feature/lock-fixture" be finalized.

- [ ] Add fixture method `lock.reversal` to reverse a test with random or unexpected variables
- [ ] If a test uses `lock.reversed` and the `--lock` argument, the argument is useless; skip it.
- [ ] If a test has `lock.reversed` and the `--reversed` argument:
  - [ ] If the result was not locked, the plugin will skip the test
  - [ ] If the result was locked, the plugin will check the types of the lock's arguments; if the test has `list[int]` as an argument, the plugin will check whether the test fails with `list[str]`, `str`, `int`, `float`, etc., and if so, the test is valid
- [ ] Add arguments for `lock.reversed` to specify the types to check: additionally, replace, or replace_joined

## Examples to test

### Lock tests examples

```python
from pytest_lock import FixtureLock


def test_something(lock: FixtureLock):
    args = [1, 2, 3]
    lock.lock(sum, (args,))  # use the lock function to lock the result of the test
    ...
```

```python
from pytest_lock import FixtureLock


def test_something(lock: FixtureLock):
    args = [1, 2, 3]
    lock.change_parser('.json')
    lock.lock(sum, (args,))  # use the lock function to lock the result of the test
    ...
```

```python
from pytest_lock import FixtureLock


def test_something(lock: FixtureLock):
    args = [1, 2, 3]
    lock.lock(sum, (args,), extension='.json')  # use the lock function to lock the result of the test
    ...
```

Expand All @@ -112,3 +111,7 @@ pytest --lock --only-skip
```bash
pytest --lock --lock-date 13/12/2023
```

```bash
pytest --lock --clean
```
5 changes: 3 additions & 2 deletions SECURITY.md
@@ -7,11 +7,12 @@ currently being supported with security updates.

| Version | Until release | Supported |
|---------|----------------|--------------------|
| 1.x.x | release v3.0.0 | :white_check_mark: |
| 0.1.x | release v2.0.0 | :white_check_mark: |

## Reporting a Vulnerability

Please report (suspected) security vulnerabilities to my email address listed on [PyPI](https://pypi.org/project/pytest-lock/). We will get back to you as soon as
possible. If the issue is confirmed, we will release a patch as soon as possible depending on the complexity of the
issue.

31 changes: 30 additions & 1 deletion docs/source/markdown/CONTRIBUTING.md
@@ -7,7 +7,7 @@ Create a fork of the repository, then clone it locally. Make your changes, then
### Poetry
This project uses poetry to manage dependencies. To install poetry, follow the instructions [here](https://python-poetry.org/docs/#installation). Once you have poetry installed, you can install the dependencies by running `poetry install` in the root directory of the project. Make your modifications; in cases where you add functionality, please add tests and documentation. Once you are done, you can make a pull request.

## Pipeline CI/CD
This is not mandatory, but we would appreciate it if you would follow this sequence of orders before making a PR. This will save the project maintainers a lot of work and make it easier for them to maintain your contribution. A PR that does not follow this sequence of commands could fail the CI/CD pipeline tests (which execute the same sequence of instructions with different degrees of severity as we approach a development/production/version branch). It doesn't matter if it fails, it can always be corrected by you or a maintainer.
* __tests__: This project uses `pytest` to run tests. To run the tests, run `poetry run pytest` in the root directory of the project.
* __documentations__: This project uses `sphinx` to generate documentation. To generate the documentation, run `poetry run sphinx-build -b html docs docs/build` in the root directory of the project. The documentation will be in the `docs/build` directory.
@@ -17,6 +17,35 @@ This is not mandatory, but we would appreciate it if you would follow this seque

You can check most versions of Python with `poetry run tox`. This will run all the tests on every version of Python (if you have them installed).

## Git branch strategy
### Overview
This document describes our Git branch management strategy, designed to optimize collaboration and continuous delivery, while maintaining code stability and quality. We use semantic versioning (SemVer) for our version numbers.

### Main branches
- `main` : Stable branch, reflecting the production-ready version. Updates are made from the development branch.
- `development` : Intermediate branch for ongoing developments. All new features and fixes are integrated here first.

### Features and fixes workflow
- `feature/xxx` : For each new feature, create a branch from development. Once the feature has been completed, tested and revised, merge it back into development.
- `bugfix/xxx` : Bug fixes are also developed in separate branches from development and merged back after validation.

### Versioning (SemVer)
- `Patch`: Increment patch (x.y.**Z**) for bug fixes. No tests should be modified, except in exceptional cases.
- `Minor`: Increment minor version (x.**Y**.z) for new user features, resetting patches.
- `Major`: Increment the major version (**X**.y.z) if an acceptance test fails (indicating a major change in the library's expected behavior), or if one or more previously supported Python versions become deprecated. Reset the minor version and patch.
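The bump rules above can be sketched as a small helper (hypothetical, for illustration only):

```python
def bump(version: str, change: str) -> str:
    """Apply the SemVer rules described above: patch for fixes, minor for
    new features (reset patch), major for breaking changes (reset both)."""
    major, minor, patch = map(int, version.split("."))
    if change == "patch":
        return f"{major}.{minor}.{patch + 1}"
    if change == "minor":
        return f"{major}.{minor + 1}.0"
    if change == "major":
        return f"{major + 1}.0.0"
    raise ValueError(f"unknown change type: {change}")
```

For example, the failing acceptance tests noted in the changelog take `0.1.2` to `1.0.0`, exactly the `major` case.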

### Release process
- _**Release preparation:**_ When development has reached a stable state and is ready for production, prepare a release.
- _**Release branch creation:**_ Create a release branch from main (e.g. release/vX.Y.Z), where X.Y.Z is the new version number.
- _**Merge into main:**_ After final testing and approval, merge the release branch into main.
- _**Version labeling:**_ Apply a version label to main according to SemVer rules.

### Rules and best practices
- Keep branches up to date with `main` to avoid major discrepancies.
- Perform code reviews for all branch merges.
- Adhere strictly to automated testing to guarantee code quality and stability.
- Feature and fix branches should be specific and focused on a single objective to facilitate review and merging.

## Reporting bugs
Before reporting bugs, check the **known issues** section of the documentation to see if your problem is already known. If it is not, you can report it in the issues tab. Please include the following information:
* What is the version of python you are using
Expand Down
78 changes: 78 additions & 0 deletions docs/source/markdown/README.md
@@ -0,0 +1,78 @@
# pytest-lock

## Overview
[![License MIT](https://img.shields.io/badge/license-MIT-blue)](https://codecov.io/gh/athroniaeth/pytest-lock)
[![Python versions](https://img.shields.io/pypi/pyversions/bandit.svg)](https://pypi.python.org/pypi/bandit)
[![PyPI version](https://badge.fury.io/py/pytest-lock.svg)](https://pypi.org/project/pytest-lock/)
[![codecov](https://codecov.io/gh/Athroniaeth/pytest-lock/graph/badge.svg?token=28E1OZ144W)](https://codecov.io/gh/Athroniaeth/pytest-lock)
[![Workflow](https://img.shields.io/github/actions/workflow/status/Athroniaeth/pytest-lock/release.yml)](https://github.com/Athroniaeth/pytest-lock/actions/workflows/release.yml)
[![Documentation Status](https://readthedocs.org/projects/pytest-lock/badge/?version=latest)](https://pytest-lock.readthedocs.io/en/latest/)
[![Security: Bandit](https://img.shields.io/badge/security-bandit-yellow.svg)](https://github.com/PyCQA/bandit)

**pytest-lock** is a pytest plugin that allows you to "lock" the results of unit tests, storing them in a local cache.
This is particularly useful for tests that are resource-intensive or don't need to be run every time. When the tests are
run subsequently, **pytest-lock** will compare the current results with the locked results and issue a warning if there
are any discrepancies.

* Free software: MIT license
* Documentation: https://pytest-lock.readthedocs.io/en/latest/
* Source: https://github.com/Athroniaeth/pytest-lock
* Bugs: https://github.com/Athroniaeth/pytest-lock/issues
* Contributing: https://github.com/Athroniaeth/pytest-lock/blob/main/CONTRIBUTING.md


## Installation

To install pytest-lock, you can use pip:

```bash
pip install pytest-lock
```

## Usage

### Locking Tests

To lock a test, use the lock fixture. Here's an example:

```python
from pytest_lock import FixtureLock


def test_lock_sum(lock: FixtureLock):
args = [1, 2, 3]
lock.lock(sum, (args,))
...
```

Run pytest with the `--lock` option to generate the lock files:

```bash
pytest --lock
```

This will generate JSON files in a `.pytest-lock` directory, storing the results of the locked tests.

### Running Tests

Simply run pytest as you normally would:

```bash
pytest
```

If pytest detects the presence of lock fixtures in your tests, it will compare the results of the tests with the locked
results. If a test result differs from its locked value, a warning will be issued.

### Configuration

The locked test results are stored in a `.pytest-lock` directory at the root of your project. You can delete this
directory to reset all locks.

## Contributing

Contributions are welcome! Please read the contributing guidelines to get started.

## License

This project is licensed under the MIT License - see the LICENSE.md file for details.