diff --git a/README.md b/README.md
index 2273e28..6f2e258 100644
--- a/README.md
+++ b/README.md
@@ -224,6 +224,28 @@ knockknock desktop \
     sleep 2
 ```
 
+### WhatsApp Notification
+
+Uses Twilio's WhatsApp channel through the same sender as Text Message (SMS), so it requires the `twilio` package: `pip install twilio`.
+
+#### Python
+
+```python
+from knockknock import sms_sender
+
+# Your Twilio account credentials.
+ACCOUNT_SID: str = ""
+AUTH_TOKEN: str = ""
+
+# Twilio routes the message over WhatsApp when both numbers use its
+# WhatsApp format, e.g. recipient_number="whatsapp:+15551234567".
+@sms_sender(account_sid=ACCOUNT_SID, auth_token=AUTH_TOKEN, recipient_number="", sender_number="")
+def train_your_nicest_model(your_nicest_parameters):
+    import time
+    time.sleep(10)
+    return {'loss': 0.9}  # Optional return value
+```
+
 ## Note on distributed training
 
 When using distributed training, a GPU is bound to its process using the local rank variable. Since knockknock works at the process level, if you are using 8 GPUs, you would get 8 notifications at the beginning and 8 notifications at the end... To circumvent that, except for errors, only the master process is allowed to send notifications so that you receive only one notification at the beginning and one notification at the end.
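
For reference, the decorator added above follows the same wrap-and-notify pattern as knockknock's other senders: calling the decorated function sends a message when training starts, when it finishes (including the optional return value, `{'loss': 0.9}` here), and when it crashes. The sketch below illustrates that pattern only; it is not the library's actual implementation, and `notifying_sender` and its `send` parameter are invented names standing in for a channel-specific delivery function.

```python
import functools
import traceback
from typing import Any, Callable

def notifying_sender(send: Callable[[str], None]) -> Callable:
    """Illustrative sketch of knockknock's wrap-and-notify pattern
    (not the library's real code)."""
    def decorator(func: Callable) -> Callable:
        @functools.wraps(func)
        def wrapper(*args: Any, **kwargs: Any) -> Any:
            send(f"{func.__name__} has started.")
            try:
                value = func(*args, **kwargs)
            except Exception:
                send(f"{func.__name__} crashed:\n{traceback.format_exc()}")
                raise
            # The wrapped function's return value is appended to the
            # completion message.
            send(f"{func.__name__} finished, returned: {value!r}")
            return value
        return wrapper
    return decorator

# Wiring it to print() shows the full life cycle without any credentials.
@notifying_sender(print)
def demo():
    return {"loss": 0.9}

demo()
```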
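
On the distributed-training note: the master-process gate it describes can be pictured with a small sketch. This assumes the launcher exports each worker's rank in the `LOCAL_RANK` environment variable (as `torchrun` does); knockknock's actual check may differ.

```python
import os

def is_master_process() -> bool:
    # torchrun exports LOCAL_RANK per worker; in a single-process run the
    # variable is absent, so we default to rank 0, i.e. the master.
    return int(os.environ.get("LOCAL_RANK", 0)) == 0

# In a sender's wrapper, start/finish messages would be gated like this,
# while crash messages are sent unconditionally so an error on any of
# the 8 GPUs is still reported:
if is_master_process():
    print("Your training is complete.")  # stand-in for the real send()
```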