diff --git a/README.md b/README.md
index 63b81c0..947e375 100644
--- a/README.md
+++ b/README.md
@@ -1,8 +1,13 @@
 # python-client
+[![GitHub Workflow Status](https://img.shields.io/github/actions/workflow/status/distributed-lock-com/python-client/ci.yaml)](https://github.com/distributed-lock-com/python-client/actions/workflows/ci.yaml)
+[![Codecov](https://img.shields.io/codecov/c/github/distributed-lock-com/python-client)](https://app.codecov.io/github/distributed-lock-com/python-client)
+[![pypi badge](https://img.shields.io/pypi/v/python-client?color=brightgreen)](https://pypi.org/project/python-client/)
+
+
 
 ## What is it?
 
-This is a Python client library for https://distributed-lock.com SAAS.
+This is a Python (3.7+) client library for the https://distributed-lock.com SaaS.
 
 > **Note**
 > The asyncio support is planned but not here for the moment 🤷‍♂️
@@ -14,9 +19,67 @@ It's not a CLI (just a library). If you are looking for a good CLI binary, have
 
 ## How to use it?
 
-FIXME
+### Install
+
+```
+pip install distributed-lock-client
+```
+
+### Quickstart
+
+```python
+from distributed_lock_client import DistributedLockClient
+
+# assumes the DLOCK_TOKEN env var is defined with your API token
+# assumes the DLOCK_TENANT_ID env var is defined with your "tenant id"
+# (you can also pass them to the DistributedLockClient constructor;
+# the DLOCK_CLUSTER env var is also supported)
+
+with DistributedLockClient(cluster="europe-free").exclusive_lock(
+    "bar", lifetime=20, wait=10
+):
+    # WE HAVE THE LOCK!
+    # [...] DOING SOME IMPORTANT THINGS [...]
+    pass
+
+# THE LOCK IS RELEASED
+```
+
+### Usage
+
+There are two APIs:
+
+- the (recommended) context manager API (shown above)
+- the low-level API
+
+For the low-level API, the idea is the same:
+
+- first, you create a `DistributedLockClient` object with some options (most of them can also be set as environment variables)
+- second, you call the `acquire_exclusive_lock()` and/or `release_exclusive_lock()` methods on that object
+
+Example (low-level API):
+
+```python
+from distributed_lock_client import DistributedLockClient
+
+# assumes the DLOCK_TOKEN env var is defined with your API token
+# assumes the DLOCK_TENANT_ID env var is defined with your "tenant id"
+# (you can also pass them to the DistributedLockClient constructor;
+# the DLOCK_CLUSTER env var is also supported)
+client = DistributedLockClient(cluster="europe-free")
+
+# Acquire the resource "bar"
+acquired_resource = client.acquire_exclusive_lock("bar", lifetime=20, wait=10)
+
+lock_id = acquired_resource.lock_id
+# WE HAVE THE LOCK!
+# [...] DOING SOME IMPORTANT THINGS [...]
+
+# Release the resource "bar"
+client.release_exclusive_lock("bar", lock_id)
+```
 
-## API reference
+### API reference
 
 https://distributed-lock-com.github.io/python-client/
 
diff --git a/distributed_lock/sync.py b/distributed_lock/sync.py
index d162604..d33ef52 100644
--- a/distributed_lock/sync.py
+++ b/distributed_lock/sync.py
@@ -303,6 +303,37 @@ def exclusive_lock(
         automatic_retry: bool = True,
         sleep_after_failure: float = 1.0,
     ):
+        """Acquire an exclusive lock and release it as a context manager.
+
+        Notes:
+            - the wait parameter is implemented as a mix of:
+                - a server side wait (without polling), thanks to the server_side_wait property
+                - a client side wait (with multiple calls if automatic_retry=True, the default)
+            - the most performant way to configure this is:
+                - to set server_side_wait (when creating the DistributedLockClient object)
+                  to the highest value allowed by your plan
+                - to use the wait parameter here at the value of your choice
+
+        Args:
+            resource: the resource name to acquire.
+            lifetime: the lock max lifetime (in seconds).
+            wait: the maximum wait (in seconds) for acquiring the lock.
+            user_data: user data to store with the lock (warning: not allowed with
+                all plans).
+            automatic_retry: if the operation fails (because of some error or because the
+                lock is already held by someone else) and this is set to True, retry
+                multiple times until the maximum wait delay is reached.
+            sleep_after_failure: when doing multiple client side retries, sleep for
+                this number of seconds between attempts.
+
+        Raises:
+            NotAcquiredException: the lock could not be acquired (even after the wait
+                time and the automatic retries) because it is still held by
+                someone else.
+            NotAcquiredError: the lock could not be acquired (even after the wait time
+                and the automatic retries) because some other error was raised.
+
+        """
         ar: AcquiredRessource | None = None
         try:
             ar = self.acquire_exclusive_lock(
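
The wait / server_side_wait interplay described in the docstring above is easier to see with a concrete call. The following is an illustrative sketch only, not part of the patch: it assumes that server_side_wait is accepted as a DistributedLockClient constructor keyword (the docstring describes it as a property set when creating the client) and that a 30 second server side wait is allowed by the plan.

```python
from distributed_lock_client import DistributedLockClient

# Assumption: server_side_wait is a constructor option; 30 is a hypothetical
# plan limit, not a documented default.
client = DistributedLockClient(cluster="europe-free", server_side_wait=30)

# Total wait budget: 60 seconds. Each acquire call blocks server side for up to
# ~30 seconds (no polling); with automatic_retry=True (the default), the client
# then retries, sleeping sleep_after_failure seconds between attempts, until
# the 60 second wait is exhausted or the lock is acquired.
with client.exclusive_lock("bar", lifetime=20, wait=60):
    pass  # the lock is held inside this block and released on exit
```

Raising server_side_wait should reduce the number of HTTP round trips needed to honour a long wait, while the client side retry loop still covers plans with a low server side limit.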