
## Introduction

### The What
> MRSAL is a _message broker_ based on [**RabbitMQ**](https://www.rabbitmq.com/) with [**Pika**](https://pika.readthedocs.io/en/stable/#).

### The Why
> A message broker is software that enables applications, systems, and services to communicate with each other and exchange information. This allows interdependent services to "talk" with one another directly, even if they were written in different languages or implemented on different platforms.

### The How
> The message broker does this by translating messages between these different services.
---

## Installation

MRSAL is available for download via PyPI and may be installed using pip:
```bash
pip install mrsal
```
---

## Start RabbitMQ Container

We use **docker** to start a `RabbitMQ container` listening on port `5672` for localhost and `5671` for SSL, with the `"Delayed Message Plugin"` installed and enabled. If you want to use SSL for external listening, you have to create certificates (e.g. with OpenSSL) and either sign them yourself or have them signed by an official certificate authority. Lastly, you need to add a `rabbitmq.conf` that declares the SSL connection with your specifications; see the official [walkthrough](https://www.rabbitmq.com/ssl.html) for guidance. Get the plugin for `x-delayed-message` by downloading it with `wget` (not curl) and binding it into the docker image. You can find the plugin binary [here](https://github.com/rabbitmq/rabbitmq-delayed-message-exchange/releases).

- env file
```env
RABBITMQ_DEFAULT_USER=******
RABBITMQ_DEFAULT_PASS=******
RABBITMQ_DEFAULT_VHOST=******
RABBITMQ_DOMAIN=******
RABBITMQ_DOMAIN_TLS=******

RABBITMQ_GUI_PORT=******
RABBITMQ_PORT=******
RABBITMQ_PORT_TLS=******

# FOR TLS
RABBITMQ_CAFILE=/path/to/file
RABBITMQ_CERT=/path/to/file
RABBITMQ_KEY=/path/to/file
```

- docker-compose.yml
```yaml
version: '3.9'

services:
  rabbitmq:
    image: rabbitmq:3.11.6-management-alpine
    container_name: mrsal
    volumes:
      # Bind the volume
      - 'rabbitmq_vol:/var/lib/rabbitmq/'
      - 'rabbitmq_vol:/var/log/rabbitmq/'
      # For TLS connection
      - '~/rabbitmq/rabbit-server.crt:/etc/rabbitmq/rabbit-server.crt'
      - '~/rabbitmq/rabbit-server.key:/etc/rabbitmq/rabbit-server.key'
      - '~/rabbitmq/rabbit-ca.crt:/etc/rabbitmq/rabbit-ca.crt'
      # You need to specify the TLS connection for rabbitmq with the config file
      - '~/rabbitmq/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf'
      # This is to enable x-delayed-messages
      - '~/rabbitmq/rabbitmq_delayed_message_exchange-3.11.1.ez:/opt/rabbitmq/plugins/rabbitmq_delayed_message_exchange-3.11.1.ez'
    environment:
      - RABBITMQ_DEFAULT_USER=${RABBITMQ_DEFAULT_USER}
      - RABBITMQ_DEFAULT_PASS=${RABBITMQ_DEFAULT_PASS}
      - RABBITMQ_DEFAULT_VHOST=${RABBITMQ_DEFAULT_VHOST}
    ports:
      # RabbitMQ container listening on the default port of 5672.
      - "${RABBITMQ_PORT}:5672"
      - "${RABBITMQ_PORT_TLS}:5671"
      # OPTIONAL: Expose the GUI port
      - "${RABBITMQ_GUI_PORT}:15672"
    networks:
      - gateway
    restart: always

volumes:
  rabbitmq_vol:
```

- Install the image and start the RabbitMQ container
```bash
docker compose -f docker-compose.yml up -d
```

- Lastly, enable the plugin inside the running container (named `mrsal` in docker-compose.yml)
```bash
docker exec -it mrsal sh
```
Inside the container, run the enable command
```bash
rabbitmq-plugins enable rabbitmq_delayed_message_exchange
```

---

## RabbitMQ Message Concepts

- **Producer** is a user application that sends messages. Messages are not published directly to a queue; instead, the producer sends messages to an exchange.
- **Exchange** is responsible for routing the messages to different queues using header attributes, bindings, and routing keys.
- **Binding** is a connection that you build between a queue and an exchange.
- **Routing Key** is a message attribute taken into account by the exchange when deciding how to route a message.
- **Queue** is a buffer that receives and stores messages until the consumer receives them.
- **Consumer** is a user application that receives and handles messages.
---

## RabbitMQ Message Cycle

1. The **producer** publishes a message to an exchange.

2. The **exchange** routes the message into the queues bound to it, depending on the exchange type and routing key.

3. The messages stay in the **queue** until they are handled by a consumer.

4. The **consumer** handles the message.
---

## Connect To RabbitMQ Server

This tutorial assumes RabbitMQ is installed and running on localhost on port 5672. If you use a different host, vhost, port, or credentials, the connection settings require adjusting.

- vhost:
    - Think of vhosts as individual, uniquely named containers.
    - Inside each vhost container is a logical group of exchanges, connections, queues, bindings, user permissions, and other system resources.
    - Different users can have different permissions to different vhosts, and queues and exchanges can be created so they only exist in one vhost.
    - When a client establishes a connection to the RabbitMQ server, it specifies the vhost within which it will operate.
```py
from mrsal.mrsal import Mrsal

# If you want to use SSL for external listening then set it to True
SSL = False

# Note that the RabbitMQ container is listening on:
# 1. When SSL is False: the default port 5672, exposed as RABBITMQ_PORT in docker-compose
# 2. When SSL is True: the default port 5671, exposed as RABBITMQ_PORT_TLS in docker-compose
port = RABBITMQ_PORT_TLS if SSL else RABBITMQ_PORT
host = RABBITMQ_DOMAIN_TLS if SSL else RABBITMQ_DOMAIN

# It should match the env specifications (RABBITMQ_DEFAULT_USER, RABBITMQ_DEFAULT_PASS)
credentials = (RABBITMQ_DEFAULT_USER, RABBITMQ_DEFAULT_PASS)

# It should match the env specifications (RABBITMQ_DEFAULT_VHOST)
v_host = RABBITMQ_DEFAULT_VHOST

mrsal = Mrsal(
    host=host,
    port=port,
    credentials=credentials,
    virtual_host=v_host,
    ssl=SSL
)

mrsal.connect_to_server()
```
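
The connection values above (`RABBITMQ_DOMAIN`, `RABBITMQ_PORT`, and friends) typically come from the same environment variables declared in the `.env` file. A minimal sketch of loading them with `os.environ`; the helper name `rabbitmq_settings` is illustrative and not part of Mrsal:

```py
import os

# Hypothetical helper: read the connection settings from the environment
# variables defined in the .env file used by docker-compose.
def rabbitmq_settings(ssl: bool = False) -> dict:
    return {
        'host': os.environ['RABBITMQ_DOMAIN_TLS' if ssl else 'RABBITMQ_DOMAIN'],
        'port': int(os.environ['RABBITMQ_PORT_TLS' if ssl else 'RABBITMQ_PORT']),
        'credentials': (os.environ['RABBITMQ_DEFAULT_USER'],
                        os.environ['RABBITMQ_DEFAULT_PASS']),
        'virtual_host': os.environ['RABBITMQ_DEFAULT_VHOST'],
        'ssl': ssl,
    }
```

The resulting dict can be unpacked straight into the constructor, e.g. `Mrsal(**rabbitmq_settings(ssl=False))`.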
---

## Declare Exchange:

**Exchange** is responsible for routing the messages to different queues using header attributes, bindings, and routing keys.
- `exchange`: The exchange name
- `exchange_type`: The exchange type to use
    - `direct`
    - `topic`
    - `fanout`
    - `headers`
    - `x-delayed-message`
- `passive`: Perform a declare or just check to see if it exists
- `durable`: Survive a reboot of RabbitMQ
- `auto_delete`: Remove when no more queues are bound to it
- `internal`: Can only be published to by other exchanges
- `arguments`: Custom key/value pair arguments for the exchange. E.g.:
    - When the type of exchange is `x-delayed-message`, we specify how the messages will be routed after the delay period ([see example](#delayExchange)).
    ```py
    {'x-delayed-type': 'direct'}
    ```
```py
# Argument with the key x-delayed-type to specify how the messages will be routed after the delay period
EXCHANGE_ARGS: dict = {'x-delayed-type': 'direct'}

mrsal.setup_exchange(exchange='agreements',
                     exchange_type='x-delayed-message',
                     arguments=EXCHANGE_ARGS,
                     durable=True, passive=False, internal=False, auto_delete=False)
```
---

## Declare Queue:

**Queue** is a buffer that receives and stores messages until the consumer receives them.
- `queue`: The queue name; if empty string, the broker will create a unique queue name
- `passive`: Only check to see if the queue exists and raise _ChannelClosed_ if it doesn't
- `durable`: Survive reboots of the broker
- `exclusive`: Only allow access by the current connection
- `auto_delete`: Delete after consumer cancels or disconnects
- `arguments`: Custom key/value arguments for the queue. E.g.:
    - Specify the dead-letter exchange and dead-letter routing key for the queue.
    - Specify an amount of time in ms expressing the time to live for a message in the queue before it is considered **dead**.
    - ([see example](#queueWithDeadLetters))
    ```py
    {'x-dead-letter-exchange': DL_EXCHANGE,
     'x-dead-letter-routing-key': DL_ROUTING_KEY,
     'x-message-ttl': 2000}
    ```

```py
# Specify dl exchange and dl routing key for queue
QUEUE_ARGS = {'x-dead-letter-exchange': DL_EXCHANGE,
              'x-dead-letter-routing-key': DL_ROUTING_KEY,
              'x-message-ttl': 2000}
mrsal.setup_queue(queue='agreements_queue',
                  arguments=QUEUE_ARGS,
                  durable=True,
                  exclusive=False, auto_delete=False, passive=False)
```
---

## Bind Queue To Exchange:

Bind the queue to the exchange.

- `queue`: The queue to bind to the exchange
- `exchange`: The source exchange to bind to
- `routing_key`: The routing key to bind on
- `arguments`: Custom key/value pair arguments for the binding. E.g.:
    - When the exchange's type is `headers`, we bind the queue to the exchange specifying the headers that have to match the published messages' headers ([see example](#headersExchange)).

```py
ARGS = {'x-match': 'all', 'format': 'zip', 'type': 'report'}
mrsal.setup_queue_binding(exchange='agreements',
                          routing_key='agreements_key',
                          queue='agreements_queue',
                          arguments=ARGS)
```
---

## Publish Message

Publish a message to the exchange, specifying a routing key and properties.

- `exchange`: The exchange to publish to
- `routing_key`: The routing key to bind on
- `body`: The message body; empty string if no body
- `prop`: BasicProperties is used to set the message properties
- `headers`: Useful when we want to send a message with headers. E.g.:
    - When the exchange's type is `x-delayed-message`, we send messages to the exchange with an `x-delay` header to specify the delay time for the message in the exchange before routing it to the bound queue ([see example](#delayExchange)).
    - When the exchange's type is `headers`, we send messages with headers which match the binding key of the queues bound to the exchange ([see example](#headersExchange)).
```py
import json

import pika

message: str = 'agreement123'

# Publish messages with the header x-delay expressing a delay time for the message in milliseconds.
headers = {'x-delay': 2000}

# BasicProperties is used to set the message properties
prop = pika.BasicProperties(
    app_id='agreements_app',
    message_id='agreements_msg',
    content_type='text/plain',
    content_encoding='utf-8',
    delivery_mode=pika.DeliveryMode.Persistent,
    headers=headers)

mrsal.publish_message(
    exchange='agreements',
    routing_key='agreements_key',
    message=json.dumps(message),
    prop=prop,
    fast_setup=False
    )
```
---

## Start Consumer

- Setup consumer:
    - The consumer starts consuming messages from the queue.
    - If `inactivity_timeout` is given, the consumer will be canceled when `inactivity_timeout` is exceeded.
    - If you start a consumer with `callback_with_delivery_info=True`, then your callback function should have at least these params: `(method_frame: pika.spec.Basic.Deliver, properties: pika.spec.BasicProperties, message_param: str)`. If not, it should have at least `(message_param: str)`.
    - The consumed message is sent to the callback method to be processed; the message can then be either:
        - Processed, then **positively-acknowledged** and deleted from the queue, or
        - Failed to process, **negatively-acknowledged**, and then either:
            - `Requeued` if requeue is True
            - `Dead lettered` and deleted from the queue if
                - requeue is False, or
                - requeue is True and the requeue attempt fails.
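
The acknowledgement rules above can be summarised as a small decision table. This is an illustrative sketch of the documented behaviour only, not Mrsal's actual implementation; the function name `message_outcome` is hypothetical:

```py
# Illustrative decision table mirroring the consumer behaviour described above.
def message_outcome(processed_ok: bool, requeue: bool,
                    requeue_succeeded: bool = True) -> str:
    if processed_ok:
        return 'ack'          # positively-acknowledged, deleted from the queue
    if requeue and requeue_succeeded:
        return 'requeue'      # negatively-acknowledged, put back on the queue
    return 'dead-letter'      # negatively-acknowledged, removed from the queue
```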

```py
def consumer_callback_with_delivery_info(host_param: str, queue_param: str, method_frame: pika.spec.Basic.Deliver, properties: pika.spec.BasicProperties, message_param: str):
    str_message = json.loads(message_param).replace('"', '')
    if 'agreement123' in str_message:
        app_id = properties.app_id
        msg_id = properties.message_id
        print(f'app_id={app_id}, msg_id={msg_id}')
        print('Message processed')
        return True  # Consumed message processed correctly
    return False

def consumer_callback(host: str, queue: str, message: str):
    str_message = json.loads(message).replace('"', '')
    if 'agreement123' in str_message:
        print('Message processed')
        return True  # Consumed message processed correctly
    return False

QUEUE: str = 'agreements_queue'

mrsal.start_consumer(
    queue='agreements_queue',
    callback=consumer_callback,
    callback_args=(test_config.HOST, 'agreements_queue'),
    inactivity_timeout=6,
    requeue=False
    )

# NOTE: If you want to use a callback with delivery info then use this code

# mrsal.start_consumer(
#     queue='agreements_queue',
#     callback=consumer_callback_with_delivery_info,
#     callback_args=(test_config.HOST, 'agreements_queue'),
#     inactivity_timeout=6,
#     requeue=False,
#     callback_with_delivery_info=True
# )
```
---

## Exchange Types

1. **Direct Exchange**

    - Uses a message `routing key` to transport messages to queues.
    - The `routing key` is a message attribute that the _producer_ adds to the message header.
    - You can consider the routing key to be an _address_ that the exchange uses to determine how the message should be routed.
    - A message is delivered to the queue with the `binding key` that **exactly** matches the message's `routing key`.

```py
def consumer_callback(host_param: str, queue_param: str, message_param: str):
    return True

EXCHANGE: str = 'agreements'
EXCHANGE_TYPE: str = 'direct'
QUEUE_1: str = 'agreements_berlin_queue'
QUEUE_2: str = 'agreements_madrid_queue'

# Messages will be published with these routing keys
ROUTING_KEY_1: str = 'berlin agreements'
ROUTING_KEY_2: str = 'madrid agreements'
# ------------------------------------------

# Setup exchange
mrsal.setup_exchange(exchange=EXCHANGE,
                     exchange_type=EXCHANGE_TYPE)
# ------------------------------------------

# Setup queue for berlin agreements
mrsal.setup_queue(queue=QUEUE_1)

# Bind queue to exchange with binding key
mrsal.setup_queue_binding(exchange=EXCHANGE,
                          routing_key=ROUTING_KEY_1,
                          queue=QUEUE_1)
# ------------------------------------------

# Setup queue for madrid agreements
mrsal.setup_queue(queue=QUEUE_2)

# Bind queue to exchange with binding key
mrsal.setup_queue_binding(exchange=EXCHANGE,
                          routing_key=ROUTING_KEY_2,
                          queue=QUEUE_2)
# ------------------------------------------

# Publisher:

# Message ("uuid2") is published to the exchange and routed to queue2
prop1 = pika.BasicProperties(
    app_id='test_exchange_direct',
    message_id='madrid_uuid',
    content_type='text/plain',
    content_encoding='utf-8',
    delivery_mode=pika.DeliveryMode.Persistent,
    headers=None)
message2 = 'uuid2'
mrsal.publish_message(
    exchange=EXCHANGE,
    routing_key=ROUTING_KEY_2,
    message=json.dumps(message2),
    prop=prop1)

prop2 = pika.BasicProperties(
    app_id='test_exchange_direct',
    message_id='berlin_uuid',
    content_type='text/plain',
    content_encoding='utf-8',
    delivery_mode=pika.DeliveryMode.Persistent,
    headers=None)
# Message ("uuid1") is published to the exchange and routed to queue1
message1 = 'uuid1'
mrsal.publish_message(
    exchange=EXCHANGE,
    routing_key=ROUTING_KEY_1,
    message=json.dumps(message1),
    prop=prop2)
# ------------------------------------------

# Start a consumer for every queue
mrsal.start_consumer(
    queue=QUEUE_1,
    callback=consumer_callback,
    callback_args=('localhost', QUEUE_1),
    inactivity_timeout=1,
    requeue=False
)

mrsal.start_consumer(
    queue=QUEUE_2,
    callback=consumer_callback,
    callback_args=('localhost', QUEUE_2),
    inactivity_timeout=1,
    requeue=False
)
# ------------------------------------------
```

2. **Topic Exchange**

    - The topic exchange type routes messages to queues based on `wildcard matches` between the `routing key` and the queue binding's `routing pattern`.
    - `'*'` (star) can substitute for exactly one word.
    - `'#'` (hash) can substitute for zero or more words.
    - The routing patterns may include an asterisk `'*'` to match a word in a specified position of the routing key (for example, a routing pattern of `'agreements.*.*.berlin.*'` only matches routing keys with `'agreements'` as the first word and `'berlin'` as the fourth word).
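
To build intuition for the wildcard rules, here is a small matcher for topic patterns, where each pattern and routing key is split into dot-separated words. RabbitMQ performs this matching server-side; this function is purely illustrative and not part of Mrsal or Pika:

```py
# Illustrative matcher for AMQP topic patterns:
#   '*' matches exactly one word, '#' matches zero or more words.
def topic_matches(pattern: str, routing_key: str) -> bool:
    def match(p: list, k: list) -> bool:
        if not p:
            return not k                      # pattern exhausted: key must be too
        if p[0] == '#':
            # '#' may consume zero or more words: try every split point
            return any(match(p[1:], k[i:]) for i in range(len(k) + 1))
        if not k:
            return False                      # key exhausted but pattern is not
        if p[0] == '*' or p[0] == k[0]:
            return match(p[1:], k[1:])        # consume one word
        return False
    return match(pattern.split('.'), routing_key.split('.'))
```

For instance, `topic_matches('agreements.eu.berlin.#', 'agreements.eu.berlin.august.2022')` is true, while the same pattern does not match `'agreements.eu.madrid.september.2022'`.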

```py
def consumer_callback(host_param: str, queue_param: str, message_param: str):
    return True

EXCHANGE: str = 'agreements'
EXCHANGE_TYPE: str = 'topic'

QUEUE_1: str = 'berlin_agreements'
QUEUE_2: str = 'september_agreements'

ROUTING_KEY_1: str = 'agreements.eu.berlin.august.2022'     # Messages will be published with this routing key
ROUTING_KEY_2: str = 'agreements.eu.madrid.september.2022'  # Messages will be published with this routing key

BINDING_KEY_1: str = 'agreements.eu.berlin.#'      # Berlin agreements
BINDING_KEY_2: str = 'agreements.*.*.september.#'  # Agreements of september
BINDING_KEY_3: str = 'agreements.#'                # All agreements
# ------------------------------------------

# Setup exchange
mrsal.setup_exchange(exchange=EXCHANGE,
                     exchange_type=EXCHANGE_TYPE)
# ------------------------------------------

# Setup queue for berlin agreements
mrsal.setup_queue(queue=QUEUE_1)

# Bind queue to exchange with binding key
mrsal.setup_queue_binding(exchange=EXCHANGE,
                          routing_key=BINDING_KEY_1,
                          queue=QUEUE_1)
# ----------------------------------

# Setup queue for september agreements
mrsal.setup_queue(queue=QUEUE_2)

# Bind queue to exchange with binding key
mrsal.setup_queue_binding(exchange=EXCHANGE,
                          routing_key=BINDING_KEY_2,
                          queue=QUEUE_2)
# ----------------------------------

# Publisher:

# Message ("uuid1") published to the exchange will be routed to queue1
message1 = 'uuid1'
prop1 = pika.BasicProperties(
    app_id='test_exchange_topic',
    message_id='berlin',
    content_type='text/plain',
    content_encoding='utf-8',
    delivery_mode=pika.DeliveryMode.Persistent,
    headers=None)
mrsal.publish_message(
    exchange=EXCHANGE,
    routing_key=ROUTING_KEY_1,
    message=json.dumps(message1),
    prop=prop1)

# Message ("uuid2") published to the exchange will be routed to queue2
message2 = 'uuid2'
prop2 = pika.BasicProperties(
    app_id='test_exchange_topic',
    message_id='september',
    content_type='text/plain',
    content_encoding='utf-8',
    delivery_mode=pika.DeliveryMode.Persistent,
    headers=None)
mrsal.publish_message(
    exchange=EXCHANGE,
    routing_key=ROUTING_KEY_2,
    message=json.dumps(message2),
    prop=prop2)
# ------------------------------------------

# Start a consumer for every queue
mrsal.start_consumer(
    queue=QUEUE_1,
    callback=consumer_callback,
    callback_args=('localhost', QUEUE_1),
    inactivity_timeout=1,
    requeue=False
)

mrsal.start_consumer(
    queue=QUEUE_2,
    callback=consumer_callback,
    callback_args=('localhost', QUEUE_2),
    inactivity_timeout=1,
    requeue=False
)
```

3. **Fanout Exchange**

    - A _fanout_ exchange duplicates and routes a received message to all associated queues, **_regardless_ of routing keys or pattern matching**.
    - Fanout exchanges are useful when the same message needs to be passed to one or more queues with consumers who may process the message differently.
    - Here, the keys you provide are entirely **ignored**.

```py
EXCHANGE: str = 'agreements'
EXCHANGE_TYPE: str = 'fanout'

# In this case you don't need a binding key to bind a queue to the exchange.
# Messages are published with a routing key equal to an empty string because it will be ignored.
ROUTING_KEY: str = ''

# Setup exchange
mrsal.setup_exchange(exchange=EXCHANGE,
                     exchange_type=EXCHANGE_TYPE)
```

4. **Headers Exchange**

    - A headers exchange is a message routing system that uses `arguments` with `headers` and optional values to route messages.
    - Headers exchanges are similar to topic exchanges, except that instead of using routing keys, messages are routed based on header values.
    - A message matches if the value of its header equals the value supplied during binding.
    - In the binding between exchange and queue, a specific argument termed `'x-match'` indicates whether all headers must match or only one.
    - The `'x-match'` property has two possible values: `'any'` and `'all'`, with `'all'` being the default.
    - A value of `'all'` indicates that all header pairs (key, value) must match, whereas `'any'` indicates that at least one pair must match.
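
The `'x-match'` semantics can be sketched as a small pure function. This mirrors the matching described above for illustration only; RabbitMQ evaluates bindings server-side, and the function name `headers_match` is hypothetical:

```py
# Illustrative: how a headers exchange evaluates a binding's arguments
# against a message's headers.
def headers_match(binding_args: dict, message_headers: dict) -> bool:
    mode = binding_args.get('x-match', 'all')  # 'all' is the default
    # 'x-match' itself is not compared against the message headers
    pairs = [(k, v) for k, v in binding_args.items() if k != 'x-match']
    hits = [message_headers.get(k) == v for k, v in pairs]
    return any(hits) if mode == 'any' else all(hits)
```

With the bindings used below, a message carrying `{'format': 'pdf', 'date': '2022'}` satisfies an `'any'` binding on `{'format': 'pdf', 'type': 'log'}` because one pair matches.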

```py
def consumer_callback(host_param: str, queue_param: str, message_param: str):
    return True

EXCHANGE: str = 'agreements'
EXCHANGE_TYPE: str = 'headers'

QUEUE_1: str = 'zip_report'
Q1_ARGS = {'x-match': 'all', 'format': 'zip', 'type': 'report'}

QUEUE_2: str = 'pdf_report'
Q2_ARGS = {'x-match': 'any', 'format': 'pdf', 'type': 'log'}

HEADERS1 = {'format': 'zip', 'type': 'report'}
HEADERS2 = {'format': 'pdf', 'date': '2022'}
# ------------------------------------------

# Setup exchange
mrsal.setup_exchange(exchange=EXCHANGE,
                     exchange_type=EXCHANGE_TYPE)
# ------------------------------------------

# Setup queue
mrsal.setup_queue(queue=QUEUE_1)

# Bind queue to exchange with arguments
mrsal.setup_queue_binding(exchange=EXCHANGE,
                          queue=QUEUE_1,
                          arguments=Q1_ARGS)
# ------------------------------------------

# Setup queue
mrsal.setup_queue(queue=QUEUE_2)

# Bind queue to exchange with arguments
mrsal.setup_queue_binding(exchange=EXCHANGE,
                          queue=QUEUE_2,
                          arguments=Q2_ARGS)
# ------------------------------------------

# Publisher:
# Message ("uuid1") is published to the exchange with a set of headers
prop1 = pika.BasicProperties(
    app_id='test_exchange_headers',
    message_id='zip_report',
    content_type='text/plain',
    content_encoding='utf-8',
    delivery_mode=pika.DeliveryMode.Persistent,
    headers=HEADERS1)
message1 = 'uuid1'
mrsal.publish_message(
    exchange=EXCHANGE,
    routing_key='',
    message=json.dumps(message1),
    prop=prop1)

# Message ("uuid2") is published to the exchange with a set of headers
prop2 = pika.BasicProperties(
    app_id='test_exchange_headers',
    message_id='pdf_date',
    content_type='text/plain',
    content_encoding='utf-8',
    delivery_mode=pika.DeliveryMode.Persistent,
    headers=HEADERS2)
message2 = 'uuid2'
mrsal.publish_message(
    exchange=EXCHANGE,
    routing_key='',
    message=json.dumps(message2),
    prop=prop2)
# ------------------------------------------

# Start a consumer for every queue
mrsal.start_consumer(
    queue=QUEUE_1,
    callback=consumer_callback,
    callback_args=('localhost', QUEUE_1),
    inactivity_timeout=2,
    requeue=False
)

mrsal.start_consumer(
    queue=QUEUE_2,
    callback=consumer_callback,
    callback_args=('localhost', QUEUE_2),
    inactivity_timeout=2,
    requeue=False
)
```

5. **Delay Exchange**
    - A message that reaches the exchange from a publisher is normally delivered instantaneously to the bound queue.
    - If you want to add a delay to the delivery time of the message from the exchange to the queue, you can use a delay exchange.
    - A user can declare an **exchange** with:
        - The type `x-delayed-message`, and
        - Arguments with the key `x-delayed-type` to specify how the messages will be routed after the delay period specified.
    - Then **publish** messages with the header `x-delay` expressing, in milliseconds, a delay time for the message.
    - The message will be delivered to the respective queues after `x-delay` milliseconds.
    - **NB**: This plugin has known [limitations](https://github.com/rabbitmq/rabbitmq-delayed-message-exchange#limitations).

```py
def consumer_callback(host: str, queue: str, message: str):
    return True

# Setup exchange with the delayed message type
mrsal.setup_exchange(exchange='agreements',
                     exchange_type='x-delayed-message',
                     arguments={'x-delayed-type': 'direct'})

# Setup queue
mrsal.setup_queue(queue='agreements_queue')

# Bind queue to exchange with routing_key
qb_result: pika.frame.Method = mrsal.setup_queue_binding(exchange='agreements',
                                                         routing_key='agreements_key',
                                                         queue='agreements_queue')

"""
Publisher:
    Message ("uuid1") is published with x-delay=3000
    Message ("uuid2") is published with x-delay=1000
"""
x_delay1: int = 3000
message1 = 'uuid1'
prop1 = pika.BasicProperties(
    app_id='test_exchange_delay_letters',
    message_id='uuid1_3000ms',
    content_type='text/plain',
    content_encoding='utf-8',
    delivery_mode=pika.DeliveryMode.Persistent,
    headers={'x-delay': x_delay1})
mrsal.publish_message(exchange='agreements',
                      routing_key='agreements_key',
                      message=json.dumps(message1),
                      prop=prop1)

x_delay2: int = 1000
message2 = 'uuid2'
prop2 = pika.BasicProperties(
    app_id='test_exchange_delay_letters',
    message_id='uuid2_1000ms',
    content_type='text/plain',
    content_encoding='utf-8',
    delivery_mode=pika.DeliveryMode.Persistent,
    headers={'x-delay': x_delay2})
mrsal.publish_message(exchange='agreements',
                      routing_key='agreements_key',
                      message=json.dumps(message2),
                      prop=prop2)

"""
Consumer from main queue:
    Message ("uuid2"): Consumed first because it's delivered from the exchange to the queue
        after x-delay=1000ms, which is the shortest delay.
    Message ("uuid1"): Consumed second because its x-delay = 3000 ms.
"""
mrsal.start_consumer(
    queue='agreements_queue',
    callback=consumer_callback,
    callback_args=('localhost', 'agreements_queue'),
    inactivity_timeout=3,
    requeue=False
)
```
---

## Setup Queue With Dead Letters Exchange

Some messages become undeliverable or unhandled even when received by the broker. Such a message is called a `dead message`. This can happen when:
- The amount of time the message has spent in a queue exceeds its time to live, `TTL` (`x-message-ttl`).
- The message is `negatively-acknowledged` by the consumer.
- The queue reaches its capacity.

```py
import time

def consumer_callback(host: str, queue: str, message: str):
    if message == b'"\\"uuid3\\""':
        time.sleep(3)
    return message != b'"\\"uuid2\\""'

def consumer_dead_letters_callback(host_param: str, queue_param: str, message_param: str):
    return True
# ------------------------------------------
# Setup dead letters exchange
mrsal.setup_exchange(exchange='dl_agreements',
                     exchange_type='direct')

# Setup main exchange
mrsal.setup_exchange(exchange='agreements',
                     exchange_type='direct')
# ------------------------------------------
# Setup main queue with arguments where we specify DL_EXCHANGE, DL_ROUTING_KEY and TTL
mrsal.setup_queue(queue='agreements_queue',
                  arguments={'x-dead-letter-exchange': 'dl_agreements',
                             'x-dead-letter-routing-key': 'dl_agreements_key',
                             'x-message-ttl': 2000})

# Bind main queue to the main exchange with routing_key
mrsal.setup_queue_binding(exchange='agreements',
                          routing_key='agreements_key',
                          queue='agreements_queue')
# ------------------------------------------

# Bind DL_QUEUE to DL_EXCHANGE with DL_ROUTING_KEY
mrsal.setup_queue(queue='dl_agreements_queue')

mrsal.setup_queue_binding(exchange='dl_agreements',
                          routing_key='dl_agreements_key',
                          queue='dl_agreements_queue')
# ------------------------------------------

"""
Publisher:
    Message ("uuid1") is published
    Message ("uuid2") is published
    Message ("uuid3") is published
    Message ("uuid4") is published
"""
message1 = 'uuid1'
prop1 = pika.BasicProperties(
    app_id='test_exchange_dead_letters',
    message_id='msg_uuid1',
    content_type='text/plain',
    content_encoding='utf-8',
    delivery_mode=pika.DeliveryMode.Persistent,
    headers=None)
mrsal.publish_message(exchange='agreements',
                      routing_key='agreements_key',
                      message=json.dumps(message1),
                      prop=prop1)

message2 = 'uuid2'
prop2 = pika.BasicProperties(
    app_id='test_exchange_dead_letters',
    message_id='msg_uuid2',
    content_type='text/plain',
    content_encoding='utf-8',
    delivery_mode=pika.DeliveryMode.Persistent,
    headers=None)
mrsal.publish_message(exchange='agreements',
                      routing_key='agreements_key',
                      message=json.dumps(message2),
                      prop=prop2)

message3 = 'uuid3'
prop3 = pika.BasicProperties(
    app_id='test_exchange_dead_letters',
    message_id='msg_uuid3',
    content_type='text/plain',
    content_encoding='utf-8',
    delivery_mode=pika.DeliveryMode.Persistent,
    headers=None)
mrsal.publish_message(exchange='agreements',
                      routing_key='agreements_key',
                      message=json.dumps(message3),
                      prop=prop3)

message4 = 'uuid4'
prop4 = pika.BasicProperties(
    app_id='test_exchange_dead_letters',
    message_id='msg_uuid4',
    content_type='text/plain',
    content_encoding='utf-8',
    delivery_mode=pika.DeliveryMode.Persistent,
    headers=None)
mrsal.publish_message(exchange='agreements',
                      routing_key='agreements_key',
                      message=json.dumps(message4),
                      prop=prop4)

"""
Consumer from main queue:
    Message ("uuid1"):
        - This message is positively-acknowledged by the consumer.
        - Then it is deleted from the queue.
    Message ("uuid2"):
        - This message is rejected by the consumer's callback.
        - Therefore it is negatively-acknowledged by the consumer.
        - Then it is forwarded to the dead-letters-exchange (x-first-death-reason: rejected).
    Message ("uuid3"):
        - This message has a processing time in the consumer's callback of 3s,
          which is greater than TTL=2s.
        - After processing it is positively-acknowledged by the consumer.
        - Then it is deleted from the queue.
    Message ("uuid4"):
        - This message is forwarded to the dead-letters-exchange
          because it spent more than TTL=2s in the queue waiting for "uuid3" to be processed
          (x-first-death-reason: expired).
"""
mrsal.start_consumer(
    queue='agreements_queue',
    callback=consumer_callback,
    callback_args=('localhost', 'agreements_queue'),
    inactivity_timeout=6,
    requeue=False
)
# ------------------------------------------
"""
Consumer from dead letters queue:
    Message ("uuid2"):
        - This message is positively-acknowledged by the consumer.
        - Then it is deleted from the dl-queue.
    Message ("uuid4"):
        - This message is positively-acknowledged by the consumer.
        - Then it is deleted from the dl-queue.
"""
mrsal.start_consumer(
    queue='dl_agreements_queue',
    callback=consumer_dead_letters_callback,
    callback_args=('localhost', 'dl_agreements_queue'),
    inactivity_timeout=3,
    requeue=False
)
```
---

-## Dead and Delay Letters Workflow
-
-
-
-
-
-```py
-def consumer_callback(host: str, queue: str, message: str):
- if message == b'"\\"uuid3\\""':
- time.sleep(3)
- return message != b'"\\"uuid2\\""'
-
-def consumer_dead_letters_callback(host_param: str, queue_param: str, message_param: str):
- return True
-
-# ------------------------------------------
-
-# Setup dead letters exchange
-mrsal.setup_exchange(exchange='dl_agreements',
- exchange_type='direct')
-
-# Setup main exchange with 'x-delayed-message' type
-# and arguments where we specify how the messages will be routed after the delay period specified
-mrsal.setup_exchange(exchange='agreements',
- exchange_type='x-delayed-message',
- arguments={'x-delayed-type': 'direct'})
-# ------------------------------------------
-
-# Setup main queue with arguments where we specify DL_EXCHANGE, DL_ROUTING_KEY and TTL
-mrsal.setup_queue(queue='agreements_queue',
- arguments={'x-dead-letter-exchange': 'dl_agreements',
- 'x-dead-letter-routing-key': 'dl_agreements_key',
- 'x-message-ttl': 2000})
-
-# Bind main queue to the main exchange with routing_key
-mrsal.setup_queue_binding(exchange='agreements',
- routing_key='agreements_key',
- queue='agreements_queue')
-# ------------------------------------------
-
-# Bind DL_QUEUE to DL_EXCHANGE with DL_ROUTING_KEY
-mrsal.setup_queue(queue='dl_agreements_queue')
-
-mrsal.setup_queue_binding(exchange='dl_agreements',
- routing_key='dl_agreements_key',
- queue='dl_agreements_queue')
-# ------------------------------------------
-
-"""
-Publisher:
- Message ("uuid1") is published with x-delay=2000
- Message ("uuid2") is published with x-delay=1000
- Message ("uuid3") is published with x-delay=3000
- Message ("uuid4") is published with x-delay=4000
-"""
-x_delay1: int = 2000 # ms
-message1 = 'uuid1'
-prop1 = pika.BasicProperties(
- app_id='test_exchange_dead_and_delay_letters',
- message_id='uuid1_2000ms',
- content_type='text/plain',
- content_encoding='utf-8',
- delivery_mode=pika.DeliveryMode.Persistent,
- headers={'x-delay': x_delay1})
-mrsal.publish_message(exchange='agreements',
- routing_key='agreements_key',
- message=json.dumps(message1),
- prop=prop1)
-
-x_delay2: int = 1000
-message2 = 'uuid2'
-prop2 = pika.BasicProperties(
- app_id='test_exchange_dead_and_delay_letters',
- message_id='uuid2_1000ms',
- content_type='text/plain',
- content_encoding='utf-8',
- delivery_mode=pika.DeliveryMode.Persistent,
- headers={'x-delay': x_delay2})
-mrsal.publish_message(exchange='agreements',
- routing_key='agreements_key',
- message=json.dumps(message2),
- prop=prop2)
-
-x_delay3: int = 3000
-message3 = 'uuid3'
-prop3 = pika.BasicProperties(
- app_id='test_exchange_dead_and_delay_letters',
- message_id='uuid3_3000ms',
- content_type='text/plain',
- content_encoding='utf-8',
- delivery_mode=pika.DeliveryMode.Persistent,
- headers={'x-delay': x_delay3})
-mrsal.publish_message(exchange='agreements',
- routing_key='agreements_key',
- message=json.dumps(message3),
- prop=prop3)
-
-x_delay4: int = 4000
-message4 = 'uuid4'
-prop4 = pika.BasicProperties(
- app_id='test_exchange_dead_and_delay_letters',
- message_id='uuid4_4000ms',
- content_type='text/plain',
- content_encoding='utf-8',
- delivery_mode=pika.DeliveryMode.Persistent,
- headers={'x-delay': x_delay4})
-mrsal.publish_message(exchange='agreements',
- routing_key='agreements_key',
- message=json.dumps(message4),
-                      prop=prop4)
-# ------------------------------------------
-
-"""
-Consumer from main queue
-    Message ("uuid2"): Consumed first because it is delivered from the exchange to the queue
-        after x-delay=1000ms, which is the shortest delay.
-        - This message is rejected by the consumer's callback.
-        - Therefore it is negatively acknowledged by the consumer.
-        - Then it is forwarded to the dead-letters exchange (x-first-death-reason: rejected).
-    Message ("uuid1"): Consumed second because its x-delay = 2000 ms.
-        - This message is positively acknowledged by the consumer.
-        - Then it is deleted from the queue.
-    Message ("uuid3"): Consumed third because its x-delay = 3000 ms.
-        - Its processing time in the consumer's callback is 3s,
-          which is greater than TTL=2s.
-        - After processing, it is positively acknowledged by the consumer.
-        - Then it is deleted from the queue.
-    Message ("uuid4"): Consumed fourth because its x-delay = 4000 ms.
-        - This message is forwarded to the dead-letters exchange
-          because it spent more than TTL=2s in the queue waiting for "uuid3" to be processed
-          (x-first-death-reason: expired).
-"""
-mrsal.start_consumer(
- queue='agreements_queue',
- callback=consumer_callback,
- callback_args=('localhost', 'agreements_queue'),
- inactivity_timeout=6,
- requeue=False
-)
-# ------------------------------------------
-
-"""
-Consumer from dead letters queue
- Message ("uuid2"):
- - This message is positively-acknowledged by consumer.
- - Then it will be deleted from dl-queue.
- Message ("uuid4"):
- - This message is positively-acknowledged by consumer.
- - Then it will be deleted from dl-queue.
-"""
-
-mrsal.start_consumer(
- queue='dl_agreements_queue',
- callback=consumer_dead_letters_callback,
- callback_args=('localhost', 'dl_agreements_queue'),
- inactivity_timeout=3,
- requeue=False
-)
-```
----
-
-
-## Redeliver Rejected Letters With Delay Workflow
-
-It's possible to redeliver rejected messages after a configurable delay.
-
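Under the hood this relies on header bookkeeping: a rejected message is republished with an incremented `x-retry` header until it reaches `x-retry-limit`. A minimal sketch of that bookkeeping (the `next_retry_headers` helper and its default limit are illustrative assumptions, not Mrsal's actual API):

```python
def next_retry_headers(headers):
    """Compute headers for the next redelivery attempt.

    Returns the updated headers, or None when the retry limit is
    exhausted and the message should be dropped or dead-lettered.
    """
    headers = dict(headers or {})
    retry = headers.get('x-retry', 0) + 1    # attempts made so far, plus this one
    limit = headers.get('x-retry-limit', 1)  # assumed default limit for the sketch
    if retry > limit:
        return None
    headers['x-retry'] = retry
    return headers

# A message published with a 2s delay and a retry limit of 3:
headers = {'x-delay': 2000, 'x-retry-limit': 3, 'x-retry': 0}
headers = next_retry_headers(headers)  # first redelivery: x-retry becomes 1
```

Because the republish goes back through the `x-delayed-message` exchange, each retry also waits out the `x-delay` period before reaching the queue again.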
-```py
-import json
-import time
-
-import mrsal.config.config as config
-import pika
-import tests.config as test_config
-from loguru import logger as log
-
-from mrsal.mrsal import Mrsal
-
-
-
-mrsal = Mrsal(host=test_config.HOST,
- port=config.RABBITMQ_PORT,
- credentials=config.RABBITMQ_CREDENTIALS,
- virtual_host=config.V_HOST,
- verbose=True)
-mrsal.connect_to_server()
-
-def test_redelivery_with_delay():
-
- # Delete existing queues and exchanges to use
- mrsal.exchange_delete(exchange='agreements')
- mrsal.queue_delete(queue='agreements_queue')
- # ------------------------------------------
- queue_arguments = None
- # ------------------------------------------
-
- # Setup main exchange with delay type
- exch_result1: pika.frame.Method = mrsal.setup_exchange(exchange='agreements',
- exchange_type='x-delayed-message',
- arguments={'x-delayed-type': 'direct'})
-    assert exch_result1 is not None
- # ------------------------------------------
-
- # Setup main queue
- q_result1: pika.frame.Method = mrsal.setup_queue(queue='agreements_queue')
-    assert q_result1 is not None
-
- # Bind main queue to the main exchange with routing_key
- qb_result1: pika.frame.Method = mrsal.setup_queue_binding(exchange='agreements',
- routing_key='agreements_key',
- queue='agreements_queue')
-    assert qb_result1 is not None
- # ------------------------------------------
-
- """
- Publisher:
- Message ("uuid1") is published with delay 1 sec
- Message ("uuid2") is published with delay 2 sec
- """
- message1 = 'uuid1'
- prop1 = pika.BasicProperties(
- app_id='test_delivery-limit',
- message_id='msg_uuid1',
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers={'x-delay': 1000, 'x-retry-limit': 2})
- mrsal.publish_message(exchange='agreements',
- routing_key='agreements_key',
- message=json.dumps(message1), prop=prop1)
-
- message2 = 'uuid2'
- prop2 = pika.BasicProperties(
- app_id='test_delivery-limit',
- message_id='msg_uuid2',
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers={'x-delay': 2000, 'x-retry-limit': 3, 'x-retry': 0})
- mrsal.publish_message(exchange='agreements',
- routing_key='agreements_key',
- message=json.dumps(message2), prop=prop2)
-
- # ------------------------------------------
-    # Wait for the messages' delay period in the exchange; then they will be delivered to the queue.
- time.sleep(3)
-
- # Confirm messages are published
- result: pika.frame.Method = mrsal.setup_queue(queue='agreements_queue', passive=True)
- message_count = result.method.message_count
- log.info(f'Message count in queue "agreements_queue" before consuming= {message_count}')
- assert message_count == 2
-
- log.info(f'===== Start consuming from "agreements_queue" ========')
- """
- Consumer from main queue
- Message ("uuid1"):
- - This message is positively-acknowledged by consumer.
- - Then it will be deleted from queue.
- Message ("uuid2"):
-        - This message is rejected by the consumer's callback.
-        - Therefore it is negatively acknowledged by the consumer.
-        - Then it is redelivered with an incremented x-retry header until it is either acknowledged or x-retry reaches x-retry-limit.
- """
- mrsal.start_consumer(
- queue='agreements_queue',
- callback=consumer_callback,
- callback_args=(test_config.HOST, 'agreements_queue'),
- inactivity_timeout=8,
- requeue=False,
- callback_with_delivery_info=True
- )
- # ------------------------------------------
-
- # Confirm messages are consumed
- result: pika.frame.Method = mrsal.setup_queue(queue='agreements_queue', passive=True)
- message_count = result.method.message_count
- log.info(f'Message count in queue "agreements_queue" after consuming= {message_count}')
- assert message_count == 0
-
-def consumer_callback(host: str, queue: str, method_frame: pika.spec.Basic.Deliver, properties: pika.spec.BasicProperties, message: str):
- return message != b'"\\"uuid2\\""'
-
-
-if __name__ == '__main__':
- test_redelivery_with_delay()
-
-```
----
-
-
-## Quorum Queue With Delivery Limit Workflow
-
-- The quorum queue is a modern queue type for RabbitMQ implementing a durable, replicated FIFO queue based on the Raft consensus algorithm.
-- It is available as of RabbitMQ 3.8.0.
-- It is possible to set a delivery limit for a queue using a policy argument, delivery-limit.
-
-For more info: [quorum-queues](https://www.rabbitmq.com/quorum-queues.html)
-
-```py
-import json
-import time
-
-import mrsal.config.config as config
-import pika
-import tests.config as test_config
-from loguru import logger as log
-
-from mrsal.mrsal import Mrsal
-
-
-
-mrsal = Mrsal(host=test_config.HOST,
- port=config.RABBITMQ_PORT,
- credentials=config.RABBITMQ_CREDENTIALS,
- virtual_host=config.V_HOST,
- verbose=True)
-mrsal.connect_to_server()
-
-def test_quorum_delivery_limit():
-
- # Delete existing queues and exchanges to use
- mrsal.exchange_delete(exchange='agreements')
- mrsal.queue_delete(queue='agreements_queue')
- # ------------------------------------------
- queue_arguments = {
- # Queue of quorum type
- 'x-queue-type': 'quorum',
- # Set a delivery limit for a queue using a policy argument, delivery-limit.
-        # When a message has been returned more times than the limit, it will be dropped
-        # or dead-lettered (if a DLX is configured).
- 'x-delivery-limit': 3}
- # ------------------------------------------
-
- # Setup main exchange
- exch_result1: pika.frame.Method = mrsal.setup_exchange(exchange='agreements',
- exchange_type='direct')
-    assert exch_result1 is not None
- # ------------------------------------------
-
- # Setup main queue with arguments
- q_result1: pika.frame.Method = mrsal.setup_queue(queue='agreements_queue',
- arguments=queue_arguments)
-    assert q_result1 is not None
-
- # Bind main queue to the main exchange with routing_key
- qb_result1: pika.frame.Method = mrsal.setup_queue_binding(exchange='agreements',
- routing_key='agreements_key',
- queue='agreements_queue')
-    assert qb_result1 is not None
- # ------------------------------------------
-
- """
- Publisher:
- Message ("uuid1") is published
- Message ("uuid2") is published
- """
- message1 = 'uuid1'
- prop1 = pika.BasicProperties(
- app_id='test_delivery-limit',
- message_id='msg_uuid1',
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers=None)
- mrsal.publish_message(exchange='agreements',
- routing_key='agreements_key',
- message=json.dumps(message1), prop=prop1)
-
- message2 = 'uuid2'
- prop2 = pika.BasicProperties(
- app_id='test_delivery-limit',
- message_id='msg_uuid2',
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers=None)
- mrsal.publish_message(exchange='agreements',
- routing_key='agreements_key',
- message=json.dumps(message2), prop=prop2)
-
- # ------------------------------------------
- time.sleep(1)
-
- # Confirm messages are published
- result: pika.frame.Method = mrsal.setup_queue(queue='agreements_queue', passive=True,
- arguments=queue_arguments)
- message_count = result.method.message_count
- log.info(f'Message count in queue "agreements_queue" before consuming= {message_count}')
- assert message_count == 2
-
- log.info(f'===== Start consuming from "agreements_queue" ========')
- """
- Consumer from main queue
- Message ("uuid1"):
- - This message is positively-acknowledged by consumer.
- - Then it will be deleted from queue.
- Message ("uuid2"):
-        - This message is rejected by the consumer's callback.
-        - Therefore it is negatively acknowledged by the consumer.
-        - Then it is redelivered until it is either acknowledged or x-delivery-limit is reached.
- """
- mrsal.start_consumer(
- queue='agreements_queue',
- callback=consumer_callback,
- callback_args=(test_config.HOST, 'agreements_queue'),
- inactivity_timeout=1,
- requeue=True,
- callback_with_delivery_info=True
- )
- # ------------------------------------------
-
- # Confirm messages are consumed
- result: pika.frame.Method = mrsal.setup_queue(queue='agreements_queue', passive=True,
- arguments=queue_arguments)
- message_count = result.method.message_count
- log.info(f'Message count in queue "agreements_queue" after consuming= {message_count}')
- assert message_count == 0
-
-def consumer_callback(host: str, queue: str, method_frame: pika.spec.Basic.Deliver, properties: pika.spec.BasicProperties, message: str):
- return message != b'"\\"uuid2\\""'
-
-def consumer_dead_letters_callback(host_param: str, queue_param: str, method_frame: pika.spec.Basic.Deliver, properties: pika.spec.BasicProperties, message_param: str):
- return True
-
-
-if __name__ == '__main__':
- test_quorum_delivery_limit()
-
-```
----
-
-
-## Concurrent Consumers
-
-Sometimes we need to start multiple consumers listening to the **same queue** and processing received messages **concurrently**.
-You can do that by calling `start_concurrence_consumer`, which takes a `total_threads` param in addition to the same parameters used in `start_consumer`.
-This method creates a **thread pool**, then _spawns a new_ `Mrsal` object and starts a **new consumer** in every thread.
-
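The thread-pool mechanics can be sketched roughly as follows (a simplified illustration using `concurrent.futures`; the `worker` function and its arguments are assumptions for the example, whereas in real use each worker would create its own `Mrsal` object and run its own consumer loop):

```python
from concurrent.futures import ThreadPoolExecutor

def start_concurrent_consumers(total_threads, worker, *args):
    # One task per thread: each would open its own connection and
    # consume from the shared queue independently.
    with ThreadPoolExecutor(max_workers=total_threads) as executor:
        futures = [executor.submit(worker, i, *args) for i in range(total_threads)]
        return [f.result() for f in futures]

def worker(thread_index, queue):
    # Placeholder for: connect, declare, and consume from `queue`.
    return f"consumer-{thread_index} finished {queue}"

results = start_concurrent_consumers(3, worker, "EMERGENCY")
```

Since every thread holds its own connection and channel, the consumers share no Pika state, which is what makes consuming the same queue concurrently safe.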
-```python
-import json
-import time
-
-import pika
-from pika.exchange_type import ExchangeType
-
-import mrsal.config.config as config
-import tests.config as test_config
-from mrsal.config.logging import get_logger
-from mrsal.mrsal import Mrsal
-
-log = get_logger(__name__)
-
-mrsal = Mrsal(host=test_config.HOST,
- port=config.RABBITMQ_PORT,
- credentials=config.RABBITMQ_CREDENTIALS,
- virtual_host=config.V_HOST)
-mrsal.connect_to_server()
-
-APP_ID = "TEST_CONCURRENT_CONSUMERS"
-EXCHANGE = "CLINIC"
-EXCHANGE_TYPE = ExchangeType.direct
-QUEUE_EMERGENCY = "EMERGENCY"
-NUM_THREADS = 3
-NUM_MESSAGES = 3
-INACTIVITY_TIMEOUT = 3
-ROUTING_KEY = "PROCESS FOR EMERGENCY"
-MESSAGE_ID = "HOSPITAL_EMERGENCY"
-
-def test_concurrent_consumer():
- # Delete existing queues and exchanges to use
- mrsal.exchange_delete(exchange=EXCHANGE)
- mrsal.queue_delete(queue=QUEUE_EMERGENCY)
- # ------------------------------------------
- # Setup exchange
- exch_result: pika.frame.Method = mrsal.setup_exchange(exchange=EXCHANGE,
- exchange_type=EXCHANGE_TYPE)
-    assert exch_result is not None
- # ------------------------------------------
- # Setup queue for madrid agreements
- q_result: pika.frame.Method = mrsal.setup_queue(queue=QUEUE_EMERGENCY)
-    assert q_result is not None
-
- # Bind queue to exchange with binding key
- qb_result: pika.frame.Method = mrsal.setup_queue_binding(exchange=EXCHANGE,
- routing_key=ROUTING_KEY,
- queue=QUEUE_EMERGENCY)
-    assert qb_result is not None
- # ------------------------------------------
- # Publisher:
- # Publish NUM_MESSAGES to the queue
- for msg_index in range(NUM_MESSAGES):
- prop = pika.BasicProperties(
- app_id=APP_ID,
- message_id=MESSAGE_ID + str(msg_index),
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers=None)
- message = "uuid_" + str(msg_index)
- mrsal.publish_message(exchange=EXCHANGE,
- routing_key=ROUTING_KEY,
- message=json.dumps(message), prop=prop)
- # ------------------------------------------
- time.sleep(1)
- # Confirm messages are routed to the queue
- result1 = mrsal.setup_queue(queue=QUEUE_EMERGENCY, passive=True)
- message_count1 = result1.method.message_count
- assert message_count1 == NUM_MESSAGES
- # ------------------------------------------
- # Start concurrent consumers
- start_time = time.time()
- mrsal.start_concurrence_consumer(total_threads=NUM_THREADS, queue=QUEUE_EMERGENCY,
- callback=consumer_callback_with_delivery_info,
- callback_args=(test_config.HOST, QUEUE_EMERGENCY),
- exchange=EXCHANGE, exchange_type=EXCHANGE_TYPE,
- routing_key=ROUTING_KEY,
- inactivity_timeout=INACTIVITY_TIMEOUT,
- callback_with_delivery_info=True)
- duration = time.time() - start_time
- log.info(f"Concurrent consumers are done in {duration} seconds")
- # ------------------------------------------
- # Confirm messages are consumed
- result2 = mrsal.setup_queue(queue=QUEUE_EMERGENCY, passive=True)
- message_count2 = result2.method.message_count
- assert message_count2 == 0
-
- mrsal.close_connection()
-
-def consumer_callback_with_delivery_info(host_param: str, queue_param: str, method_frame: pika.spec.Basic.Deliver, properties: pika.spec.BasicProperties, message_param: str):
- time.sleep(5)
- return True
-```
----
-
-
-## References
-
-- [RabbitMQ Tutorials](https://www.rabbitmq.com/getstarted.html)
-- [RabbitMQ Exchange Types: 6 Categories Explained Easy](https://hevodata.com/learn/rabbitmq-exchange-type/)
-- [What is a Delayed Message Exchange?](https://www.cloudamqp.com/blog/what-is-a-delayed-message-exchange-in-rabbitmq.html#:~:text=The%20RabbitMQ%20delayed%20exchange%20plugin,in%20milliseconds%20can%20be%20specified.)
-- [RabbitMQ Delayed Message Plugin](https://github.com/rabbitmq/rabbitmq-delayed-message-exchange)
-- [When and how to use the RabbitMQ Dead Letter Exchange](https://www.cloudamqp.com/blog/when-and-how-to-use-the-rabbitmq-dead-letter-exchange.html)
-- [What is a RabbitMQ vhost?](https://www.cloudamqp.com/blog/what-is-a-rabbitmq-vhost.html)
-- [Message Brokers](https://www.ibm.com/cloud/learn/message-brokers)
-- [How to Use map() with the ThreadPoolExecutor in Python](https://superfastpython.com/threadpoolexecutor-map/)
-- [concurrent.futures](https://docs.python.org/3/library/concurrent.futures.html)
-- [mrsal_icon](https://www.pngegg.com/en/png-mftic)
----
diff --git a/README.md b/README.md
index aa11511..b379e4c 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,7 @@
-# MRSAL
-[![Release](https://img.shields.io/badge/release-v0.7.6alpha-blue.svg)](https://pypi.org/project/mrsal/) [![Python 3.10](https://img.shields.io/badge/python-3.10-blue.svg)](https://www.python.org/downloads/release/python-3103/) [![Documentation](https://img.shields.io/badge/doc-latest-blue.svg)](https://github.com/NeoMedSys/mrsal/blob/main/FullGuide.md)
+# MRSAL
+[![Release](https://img.shields.io/badge/release-1.0.0balpha-blue.svg)](https://pypi.org/project/mrsal/) [![Python 3.10](https://img.shields.io/badge/python-3.10-blue.svg)](https://www.python.org/downloads/release/python-3103/) [![Documentation](https://img.shields.io/badge/doc-latest-blue.svg)](https://github.com/NeoMedSys/mrsal/blob/main/FullGuide.md)
-[![Tests Status](https://github.com/NeoMedSys/mrsal/blob/MRSAL-22/reports/badges/tests-badge.svg)](./reports/junit/junit.xml) [![Coverage Status](https://github.com/NeoMedSys/mrsal/blob/MRSAL-22/reports/badges/coverage-badge.svg)](./reports/coverage/htmlcov/index.html) [![Flake8 Status](https://github.com/NeoMedSys/mrsal/blob/MRSAL-22/reports/badges/ruff-badge.svg)](./reports/flake8/flake8.txt)
+[![Tests Status](https://github.com/NeoMedSys/mrsal/blob/MRSAL-22/reports/badges/tests-badge.svg)](./reports/junit/junit.xml) [![Coverage Status](https://github.com/NeoMedSys/mrsal/blob/MRSAL-22/reports/badges/coverage-badge.svg)](./reports/coverage/htmlcov/index.html) [![Flake8 Status](https://github.com/NeoMedSys/mrsal/blob/MRSAL-22/reports/badges/ruff-badge.svg)](./reports/flake8/flake8.txt)
## Intro
@@ -209,11 +209,11 @@ That simple! You have now setup a full advanced message queueing protocol that y
- [RabbitMQ Tutorials](https://www.rabbitmq.com/getstarted.html)
- [RabbitMQ Exchange Types: 6 Categories Explained Easy](https://hevodata.com/learn/rabbitmq-exchange-type/)
-- [What is a Delayed Message Exchange?](https://www.cloudamqp.com/blog/what-is-a-delayed-message-exchange-in-rabbitmq.html#:~:text=The%20RabbitMQ%20delayed%20exchange%20plugin,in%20milliseconds%20can%20be%20specified.)
+- [What is a Delayed Message Exchange?](https://www.cloudamqp.com/blog/what-is-a-delayed-message-exchange-in-rabbitmq.html#:~:text=The%20RabbitMQ%20delayed%20exchange%20plugin,in%20milliseconds%20can%20be%20specified.)
- [RabbitMQ Delayed Message Plugin](https://github.com/rabbitmq/rabbitmq-delayed-message-exchange)
-- [When and how to use the RabbitMQ Dead Letter Exchange](https://www.cloudamqp.com/blog/when-and-how-to-use-the-rabbitmq-dead-letter-exchange.html)
-- [What is a RabbitMQ vhost?](https://www.cloudamqp.com/blog/what-is-a-rabbitmq-vhost.html)
-- [Message Brokers](https://www.ibm.com/cloud/learn/message-brokers)
+- [When and how to use the RabbitMQ Dead Letter Exchange](https://www.cloudamqp.com/blog/when-and-how-to-use-the-rabbitmq-dead-letter-exchange.html)
+- [What is a RabbitMQ vhost?](https://www.cloudamqp.com/blog/what-is-a-rabbitmq-vhost.html)
+- [Message Brokers](https://www.ibm.com/cloud/learn/message-brokers)
- [How to Use map() with the ThreadPoolExecutor in Python](https://superfastpython.com/threadpoolexecutor-map/)
- [concurrent.futures](https://docs.python.org/3/library/concurrent.futures.html)
- [mrsal_icon](https://www.pngegg.com/en/png-mftic)
diff --git a/README_DEV.md b/README_DEV.md
deleted file mode 100644
index e70df1a..0000000
--- a/README_DEV.md
+++ /dev/null
@@ -1,25 +0,0 @@
-### How to run tests with nox
-- Add environment variables to `.zshrc` or `.bashrc` file
-```vim
-export RABBITMQ_DEFAULT_USER=
-export RABBITMQ_DEFAULT_PASS=
-export RABBITMQ_DOMAIN=
-export RABBITMQ_DOMAIN_TLS=
-export RABBITMQ_PORT='5672'
-export RABBITMQ_PORT_TLS='5671'
-export RABBITMQ_DEFAULT_VHOST=
-
-export RABBITMQ_CAFILE=
-export RABBITMQ_CERT=
-export RABBITMQ_KEY=
-```
-
-```bash
-source ~/.zshrc
-```
-
-```bash
-rm -rf .venv poetry.lock .nox doc_images/*.svg reports/junit/*.xml reports/flake8/*.txt reports/coverage/.coverage reports/coverage/coverage.xml reports/coverage/htmlcov/
-
-nox
-```
\ No newline at end of file
diff --git a/docker-compose.yaml b/docker-compose.yaml
deleted file mode 100644
index 08cae05..0000000
--- a/docker-compose.yaml
+++ /dev/null
@@ -1,35 +0,0 @@
-version: "3.9"
-
-services:
- rabbitmq_server:
- image: mrsal-v0.7.2
- build:
- context: .
- container_name: mrsal
- environment:
- - RABBITMQ_DEFAULT_USER=${RABBITMQ_DEFAULT_USER}
- - RABBITMQ_DEFAULT_PASS=${RABBITMQ_DEFAULT_PASS}
- - RABBITMQ_DEFAULT_VHOST=${RABBITMQ_DEFAULT_VHOST}
- ports:
- # RabbitMQ container listening on the default port of 5672.
- - "5672:5672"
- - "5671:5671"
- # OPTIONAL: Expose the GUI port
- - "${RABBITMQ_GUI_PORT}:15672"
- volumes:
- - ${BASE_PATH}/storage/rabbitmq:/etc/rabbitmq/certs
- - ${BASE_PATH}/storage/rabbitmq/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
- networks:
- - gateway
-
-# Make the externally created network "gateway" available as network "default"
-# If "gateway" not exists then create it with
-# docker network create --internal=false --attachable --driver=bridge gateway
-networks:
- gateway:
- external: true
-
-# If you want to let the docker compose create the network, then use:
-# networks:
-# gateway:
-# name: gateway
\ No newline at end of file
diff --git a/mrsal/__init__.py b/mrsal/__init__.py
deleted file mode 100644
index e69de29..0000000
diff --git a/mrsal/amqp/subclass.py b/mrsal/amqp/subclass.py
new file mode 100644
index 0000000..7a993cc
--- /dev/null
+++ b/mrsal/amqp/subclass.py
@@ -0,0 +1,235 @@
+import pika
+import json
+from ssl import SSLContext
+from mrsal.exceptions import MrsalAbortedSetup
+from logging import WARNING
+from pika.exceptions import AMQPConnectionError, ChannelClosedByBroker, StreamLostError, ConnectionClosedByBroker
+from pika.adapters.asyncio_connection import AsyncioConnection
+from typing import Callable, Type
+from tenacity import retry, stop_after_attempt, wait_fixed, retry_if_exception_type, before_sleep_log
+from pydantic import ValidationError
+from pydantic.dataclasses import dataclass
+from neolibrary.monitoring.logger import NeoLogger
+
+from mrsal.superclass import Mrsal
+from mrsal import config
+
+log = NeoLogger(__name__, rotate_days=config.LOG_DAYS)
+
+@dataclass
+class MrsalAMQP(Mrsal):
+ """
+ :param int blocked_connection_timeout: blocked_connection_timeout
+ is the timeout, in seconds,
+ for the connection to remain blocked; if the timeout expires,
+ the connection will be torn down during connection tuning.
+ """
+ blocked_connection_timeout: int = 60 # sec
+ use_blocking: bool = False
+
+ def get_ssl_context(self) -> SSLContext | None:
+ if self.ssl:
+ self.log.info("Setting up TLS connection")
+ context = self._ssl_setup()
+ ssl_options = pika.SSLOptions(context, self.host) if context else None
+ return ssl_options
+
+ def setup_blocking_connection(self) -> None:
+ """We can use setup_blocking_connection for establishing a connection to RabbitMQ server specifying connection parameters.
+        The connection is blocking, which is only advisable for apps with low throughput.
+
+        DISCLAIMER: If you expect a lot of traffic to the app, or if it is realtime, then you should use the async connection.
+
+ Parameters
+ ----------
+ context : Dict[str, str]
+ context is the structured map with information regarding the SSL options for connecting with rabbit server via TLS.
+ """
+ connection_info = f"""
+ Mrsal connection parameters:
+ host={self.host},
+ virtual_host={self.virtual_host},
+ port={self.port},
+ heartbeat={self.heartbeat},
+ ssl={self.ssl}
+ """
+ if self.verbose:
+ self.log.info(f"Establishing connection to RabbitMQ on {connection_info}")
+ credentials = pika.PlainCredentials(*self.credentials)
+ try:
+ self._connection = pika.BlockingConnection(
+ pika.ConnectionParameters(
+ host=self.host,
+ port=self.port,
+ ssl_options=self.get_ssl_context(),
+ virtual_host=self.virtual_host,
+ credentials=credentials,
+ heartbeat=self.heartbeat,
+ blocked_connection_timeout=self.blocked_connection_timeout,
+ )
+ )
+
+ self._channel = self._connection.channel()
+ # Note: prefetch is set to 1 here as an example only.
+            # In production you will want to test different prefetch values to find the one that provides the best performance and usability for your solution.
+            # Use a higher prefetch if the pods with Mrsal installed can handle it. A prefetch of 4 allows up to 4 unacknowledged deliveries before an ack is required.
+ self._channel.basic_qos(prefetch_count=self.prefetch_count)
+ self.log.info(f"Boom! Connection established with RabbitMQ on {connection_info}")
+ except (AMQPConnectionError, ChannelClosedByBroker, ConnectionClosedByBroker, StreamLostError) as e:
+ self.log.error(f"I tried to connect with the RabbitMQ server but failed with: {e}")
+ raise
+ except Exception as e:
+ self.log.error(f"Unexpected error caught: {e}")
+
+ def setup_async_connection(self) -> None:
+        """We can use setup_async_connection for establishing a connection to RabbitMQ server specifying connection parameters.
+ The connection is async and is recommended to use if your app is realtime or will handle a lot of traffic.
+
+ Parameters
+ ----------
+ context : Dict[str, str]
+ context is the structured map with information regarding the SSL options for connecting with rabbit server via TLS.
+ """
+ connection_info = f"""
+ Mrsal connection parameters:
+ host={self.host},
+ virtual_host={self.virtual_host},
+ port={self.port},
+ heartbeat={self.heartbeat},
+ ssl={self.ssl}
+ """
+ if self.verbose:
+ self.log.info(f"Establishing connection to RabbitMQ on {connection_info}")
+ credentials = pika.PlainCredentials(*self.credentials)
+
+ try:
+ self._connection = AsyncioConnection.create_connection(
+ pika.ConnectionParameters(
+ host=self.host,
+ port=self.port,
+ ssl_options=self.get_ssl_context(),
+ virtual_host=self.virtual_host,
+ credentials=credentials,
+ heartbeat=self.heartbeat,
+ ),
+ on_done=self.on_connection_open,
+ on_open_error_callback=self.on_connection_error
+ )
+ except (AMQPConnectionError, ChannelClosedByBroker, ConnectionClosedByBroker, StreamLostError) as e:
+ self.log.error(f"Oh lordy lord I failed connecting to the Rabbit with: {e}")
+ raise
+        except Exception as e:
+            self.log.error(f"Unexpected error caught: {e}")
+            return
+
+        self.log.success(f"Boom! Connection established with RabbitMQ on {connection_info}")
+
+ @retry(
+ retry=retry_if_exception_type((
+ AMQPConnectionError,
+ ChannelClosedByBroker,
+ ConnectionClosedByBroker,
+ StreamLostError,
+ )),
+ stop=stop_after_attempt(3),
+ wait=wait_fixed(2),
+ before_sleep=before_sleep_log(log, WARNING)
+ )
+ def start_consumer(self,
+ queue_name: str,
+ callback: Callable | None = None,
+ callback_args: dict[str, str | int | float | bool] | None = None,
+ auto_ack: bool = True,
+ inactivity_timeout: int = 5,
+ auto_declare: bool = True,
+ exchange_name: str | None = None,
+ exchange_type: str | None = None,
+ routing_key: str | None = None,
+ payload_model: Type | None = None
+ ) -> None:
+ """
+ Start the consumer using blocking setup.
+ :param queue: The queue to consume from.
+ :param auto_ack: If True, messages are automatically acknowledged.
+ :param inactivity_timeout: Timeout for inactivity in the consumer loop.
+ :param callback: The callback function to process messages.
+ :param callback_args: Optional arguments to pass to the callback.
+ """
+ # Connect and start the I/O loop
+ if self.use_blocking:
+ self.setup_blocking_connection()
+ else:
+ self.setup_async_connection()
+ if self._connection:
+ self._connection.ioloop.run_forever()
+ else:
+            self.log.error('Straight out of the swamp with no connection! Oh lordy! Something went wrong in the async connection')
+
+ if auto_declare:
+ if None in (exchange_name, queue_name, exchange_type, routing_key):
+ raise TypeError('Make sure that you are passing in all the necessary args for auto_declare')
+ self._setup_exchange_and_queue(
+ exchange_name=exchange_name,
+ queue_name=queue_name,
+ exchange_type=exchange_type,
+ routing_key=routing_key
+ )
+ if not self.auto_declare_ok:
+ if self._connection:
+ self._connection.ioloop.stop()
+ raise MrsalAbortedSetup('Auto declaration for the connection setup failed and is aborted')
+
+ try:
+ for method_frame, properties, body in self._channel.consume(
+ queue=queue_name, auto_ack=auto_ack, inactivity_timeout=inactivity_timeout):
+ if method_frame:
+ app_id = properties.app_id if properties else None
+                    msg_id = properties.message_id if properties else None
+
+ if self.verbose:
+                        self.log.info(
+                                f"""
+                                Message received with:
+                                - Method Frame: {method_frame}
+                                - Redelivery: {method_frame.redelivered}
+                                - Exchange: {method_frame.exchange}
+                                - Routing Key: {method_frame.routing_key}
+                                - Delivery Tag: {method_frame.delivery_tag}
+                                - Properties: {properties}
+                                """
+                                )
+ if auto_ack:
+ self.log.info(f'I successfully received a message from: {app_id} with messageID: {msg_id}')
+
+ if payload_model:
+ try:
+ self.validate_payload(body, payload_model)
+ except (ValidationError, json.JSONDecodeError, UnicodeDecodeError, TypeError) as e:
+ self.log.error(f"Oh lordy lord, payload validation failed for your specific model requirements: {e}")
+ if not auto_ack:
+ self._channel.basic_nack(delivery_tag=method_frame.delivery_tag, requeue=True)
+ continue
+
+ if callback:
+ try:
+ if callback_args:
+ callback(*callback_args, method_frame, properties, body)
+ else:
+                                callback(method_frame, properties, body)
+ except Exception as e:
+ if not auto_ack:
+ self._channel.basic_nack(delivery_tag=method_frame.delivery_tag, requeue=True)
+                            self.log.error(f"Callback method failure: {e}")
+ continue
+ if not auto_ack:
+ self.log.success(f'Message ({msg_id}) from {app_id} received and properly processed -- now dance the funky chicken')
+ self._channel.basic_ack(delivery_tag=method_frame.delivery_tag)
+ else:
+ self.log.info("No message received, continuing to listen...")
+ continue
+
+ except (AMQPConnectionError, ConnectionClosedByBroker, StreamLostError) as e:
+ log.error(f"Ooooooopsie! I caught a connection error while consuming: {e}")
+ raise
+ except Exception as e:
+ self.log.error(f'Oh lordy lord! I failed consuming ze messaj with: {e}')
diff --git a/mrsal/config.py b/mrsal/config.py
new file mode 100644
index 0000000..1bb739c
--- /dev/null
+++ b/mrsal/config.py
@@ -0,0 +1,10 @@
+import os
+from pydantic import BaseModel
+
+
+class ValidateTLS(BaseModel):
+ crt: str
+ key: str
+ ca: str
+
+LOG_DAYS: int = int(os.environ.get('LOG_DAYS', 10))
diff --git a/mrsal/config/__init__.py b/mrsal/config/__init__.py
deleted file mode 100644
index e69de29..0000000
diff --git a/mrsal/config/config.py b/mrsal/config/config.py
deleted file mode 100644
index 20d5a9b..0000000
--- a/mrsal/config/config.py
+++ /dev/null
@@ -1,32 +0,0 @@
-import os
-from typing import Dict, Tuple
-
-# Service name in docker-compose.yaml
-RABBITMQ_SERVICE_NAME_DOCKER_COMPOSE: str = os.environ.get("RABBITMQ_SERVICE_NAME")
-RABBITMQ_SERVER: str = "localhost"
-V_HOST: str = os.environ.get("RABBITMQ_DEFAULT_VHOST", "myMrsalHost")
-RABBITMQ_PORT: int = os.environ.get("RABBITMQ_PORT", 5672)
-RABBITMQ_PORT_TLS: int = os.environ.get("RABBITMQ_PORT_TLS", 5671)
-RABBIT_DOMAIN: str = os.environ.get("RABBITMQ_DOMAIN", "localhost")
-
-RABBITMQ_USER = os.environ.get("RABBITMQ_DEFAULT_USER", "root")
-RABBITMQ_PASSWORD = os.environ.get("RABBITMQ_DEFAULT_PASS", "password")
-RABBITMQ_CREDENTIALS: Tuple[str, str] = (RABBITMQ_USER, RABBITMQ_PASSWORD)
-
-RABBITMQ_EXCHANGE: str = "emergency_exchange"
-RABBITMQ_EXCHANGE_TYPE: str = "direct"
-RABBITMQ_BIND_ROUTING_KEY: str = "emergency"
-RABBITMQ_QUEUE: str = "emergency_queue"
-RABBITMQ_DEAD_LETTER_ROUTING_KEY: str = "dead_letter"
-RABBITMQ_DEAD_LETTER_QUEUE: str = "dead_letter-queue"
-
-DELAY_EXCHANGE_TYPE: str = "x-delayed-message"
-DELAY_EXCHANGE_ARGS: Dict[str, str] = {"x-delayed-type": "direct"}
-DEAD_LETTER_QUEUE_ARGS: Dict[str, str] = {"x-dead-letter-exchange": "", "x-dead-letter-routing-key": ""}
-
-CONTENT_TYPE: str = "text/plain"
-CONTENT_ENCODING: str = "utf-8"
-
-RETRY_LIMIT_KEY: str = "x-retry-limit"
-RETRY_KEY: str = "x-retry"
-MESSAGE_HEADERS_KEY: str = "headers"
diff --git a/mrsal/config/exceptions.py b/mrsal/config/exceptions.py
deleted file mode 100644
index 86d46db..0000000
--- a/mrsal/config/exceptions.py
+++ /dev/null
@@ -1,15 +0,0 @@
-"""
-This is script for custom exceptions
-"""
-
-
-class RabbitMQConnectionError(Exception):
- """Fail to connect to RabbitMQ"""
-
-
-class RabbitMQDeclareExchangeError(Exception):
- """Fail to declare exchange"""
-
-
-class RabbitMQDeclareQueueError(Exception):
- """Fail to declare queue"""
diff --git a/mrsal/exceptions.py b/mrsal/exceptions.py
new file mode 100644
index 0000000..b9a9d71
--- /dev/null
+++ b/mrsal/exceptions.py
@@ -0,0 +1,5 @@
+class MrsalSetupError(Exception):
+ """Handling setup exceptions"""
+
+class MrsalAbortedSetup(Exception):
+ """Handling abortion of the setup"""
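The new `mrsal/exceptions.py` narrows the old trio of RabbitMQ exceptions down to two: hard setup failures and deliberately aborted setups. A small illustrative sketch of how callers might distinguish them (the `declare_exchange` helper is hypothetical, not part of the diff):

```python
class MrsalSetupError(Exception):
    """Handling setup exceptions"""


class MrsalAbortedSetup(Exception):
    """Handling abortion of the setup"""


def declare_exchange(name: str) -> None:
    # Illustrative only: a setup step that raises MrsalSetupError on bad input.
    if not name:
        raise MrsalSetupError("exchange name must not be empty")


try:
    declare_exchange("")
except MrsalSetupError as e:
    # A MrsalSetupError signals the broker topology could not be created;
    # MrsalAbortedSetup would signal the caller chose to bail out instead.
    print(f"setup failed: {e}")
```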
diff --git a/mrsal/mrsal.py b/mrsal/mrsal.py
deleted file mode 100644
index 5082159..0000000
--- a/mrsal/mrsal.py
+++ /dev/null
@@ -1,744 +0,0 @@
-import concurrent.futures
-import json
-import os
-import ssl
-from dataclasses import dataclass
-from socket import gaierror
-from typing import Any, Callable, Dict, Tuple, Union, List
-
-import time
-import pika
-from pika import SSLOptions
-from pika.exceptions import ChannelClosedByBroker, ConnectionClosedByBroker
-from pika.exchange_type import ExchangeType
-from retry import retry
-
-from mrsal.config import config
-from loguru import logger
-from mrsal.utils import utils
-
-
-@dataclass
-# NOTE! change the doc style to google or numpy
-class Mrsal:
- """
- Mrsal creates a layer on top of Pika's core, providing methods to setup a RabbitMQ broker with multiple functionalities.
-
- Properties:
- :prop str host: Hostname or IP Address to connect to
- :prop int port: TCP port to connect to
- :prop pika.credentials.Credentials credentials: auth credentials
- :prop str virtual_host: RabbitMQ virtual host to use
- :prop bool verbose: If True then more INFO logs will be printed
- :prop int heartbeat: Controls RabbitMQ's server heartbeat timeout negotiation
- during connection tuning.
- :prop int blocked_connection_timeout: blocked_connection_timeout
- is the timeout, in seconds,
- for the connection to remain blocked; if the timeout expires,
- the connection will be torn down
- :prop int prefetch_count: Specifies a prefetch window in terms of whole messages.
- :prop bool ssl: Set this flag to true if you want to connect externally to the rabbit server.
- """
-
- host: str
- port: str
- credentials: Tuple[str, str]
- virtual_host: str
- ssl: bool = False
- verbose: bool = False
- prefetch_count: int = 1
- heartbeat: int = 600 # sec
- blocked_connection_timeout: int = 300 # sec
- _connection: pika.BlockingConnection = None
- _channel = None
-
- def connect_to_server(self, context: Dict[str, str] = None):
- """We can use connect_to_server for establishing a connection to RabbitMQ server specifying connection parameters.
-
- Parameters
- ----------
- context : Dict[str, str]
- context is the structured map with information regarding the SSL options for connecting with rabbit server via TLS.
- """
- connection_info = f"""
- Mrsal connection parameters:
- host={self.host},
- virtual_host={self.virtual_host},
- port={self.port},
- heartbeat={self.heartbeat},
- ssl={self.ssl}
- """
- if self.verbose:
- logger.info(f"Establishing connection to RabbitMQ on {connection_info}")
- if self.ssl:
- logger.info("Setting up TLS connection")
- context = self.__ssl_setup()
- ssl_options = SSLOptions(context, self.host) if context else None
- credentials = pika.PlainCredentials(*self.credentials)
- try:
- self._connection = pika.BlockingConnection(
- pika.ConnectionParameters(
- host=self.host,
- port=self.port,
- ssl_options=ssl_options,
- virtual_host=self.virtual_host,
- credentials=credentials,
- heartbeat=self.heartbeat,
- blocked_connection_timeout=self.blocked_connection_timeout,
- )
- )
- self._channel: pika.adapters.blocking_connection.BlockingChannel = self._connection.channel()
- # Note: prefetch is set to 1 here as an example only.
- # In production you will want to test with different prefetch values to find which one provides the best performance and usability for your solution.
- self._channel.basic_qos(prefetch_count=self.prefetch_count)
- logger.info(f"Connection established with RabbitMQ on {connection_info}")
- return self._connection
- except pika.exceptions.AMQPConnectionError as err:
- msg: str = f"I tried to connect with the RabbitMQ server but failed with: {err}"
- logger.error(msg)
- raise pika.exceptions.AMQPConnectionError(msg)
-
- def setup_exchange(self, exchange: str, exchange_type: str, arguments: Dict[str, str] = None, durable=True, passive=False, internal=False, auto_delete=False):
- """This method creates an exchange if it does not already exist, and if the exchange exists, verifies that it is of the correct and expected class.
-
- If passive set, the server will reply with Declare-Ok if the exchange already exists with the same name,
- and raise an error if not and if the exchange does not already exist, the server MUST raise a channel exception with reply code 404 (not found).
-
- :param str exchange: The exchange name
- :param str exchange_type: The exchange type to use
- :param bool passive: Perform a declare or just check to see if it exists
- :param bool durable: Survive a reboot of RabbitMQ
- :param bool auto_delete: Remove when no more queues are bound to it
- :param bool internal: Can only be published to by other exchanges
- :param dict arguments: Custom key/value pair arguments for the exchange
- :returns: Method frame from the Exchange.Declare-ok response
- :rtype: `pika.frame.Method` having `method` attribute of type `spec.Exchange.DeclareOk`
- """
- exchange_declare_info = f"""
- exchange={exchange},
- exchange_type={exchange_type},
- durable={durable},
- passive={passive},
- internal={internal},
- auto_delete={auto_delete},
- arguments={arguments}
- """
- if self.verbose:
- logger.info(f"Declaring exchange with: {exchange_declare_info}")
- try:
- exchange_declare_result = self._channel.exchange_declare(
- exchange=exchange, exchange_type=exchange_type, arguments=arguments, durable=durable, passive=passive, internal=internal, auto_delete=auto_delete
- )
- if self.verbose:
- logger.info(f"Exchange is declared successfully: {exchange_declare_info}, result={exchange_declare_result}")
- return exchange_declare_result
- except (TypeError, AttributeError, ChannelClosedByBroker, ConnectionClosedByBroker) as err:
- msg: str = f"I tried to declare an exchange but failed with: {err}"
- logger.error(msg)
- raise pika.exceptions.ConnectionClosedByBroker(503, msg)
-
- def setup_queue(self, queue: str, arguments: Dict[str, str] = None, durable: bool = True, exclusive: bool = False, auto_delete: bool = False, passive: bool = False):
- """Declare queue, create if needed. This method creates or checks a queue.
- When creating a new queue the client can specify various properties that control the durability of the queue and its contents,
- and the level of sharing for the queue.
-
- Use an empty string as the queue name for the broker to auto-generate one.
- Retrieve this auto-generated queue name from the returned `spec.Queue.DeclareOk` method frame.
-
- :param str queue: The queue name; if empty string, the broker will create a unique queue name
- :param bool passive: Only check to see if the queue exists and raise `ChannelClosed` if it doesn't
- :param bool durable: Survive reboots of the broker
- :param bool exclusive: Only allow access by the current connection
- :param bool auto_delete: Delete after consumer cancels or disconnects
- :param dict arguments: Custom key/value arguments for the queue
- :returns: Method frame from the Queue.Declare-ok response
- :rtype: `pika.frame.Method` having `method` attribute of type `spec.Queue.DeclareOk`
- """
- queue_declare_info = f"""
- queue={queue},
- durable={durable},
- exclusive={exclusive},
- auto_delete={auto_delete},
- arguments={arguments}
- """
- if self.verbose:
- logger.info(f"Declaring queue with: {queue_declare_info}")
-
- queue_declare_result = self._channel.queue_declare(queue=queue, arguments=arguments, durable=durable, exclusive=exclusive, auto_delete=auto_delete, passive=passive)
- if self.verbose:
- logger.info(f"Queue is declared successfully: {queue_declare_info},result={queue_declare_result.method}")
- return queue_declare_result
-
-
- def setup_queue_binding(self, exchange: str, queue: str, routing_key: str = None, arguments=None):
- """Bind queue to exchange.
-
- :param str queue: The queue to bind to the exchange
- :param str exchange: The source exchange to bind to
- :param str routing_key: The routing key to bind on
- :param dict arguments: Custom key/value pair arguments for the binding
-
- :returns: Method frame from the Queue.Bind-ok response
- :rtype: `pika.frame.Method` having `method` attribute of type `spec.Queue.BindOk`
- """
- if self.verbose:
- logger.info(f"Binding queue to exchange: queue={queue}, exchange={exchange}, routing_key={routing_key}")
-
- bind_result = self._channel.queue_bind(exchange=exchange, queue=queue, routing_key=routing_key, arguments=arguments)
- if self.verbose:
- logger.info(f"The queue is bound to exchange successfully: queue={queue}, exchange={exchange}, routing_key={routing_key}, result={bind_result}")
- return bind_result
-
- def __ssl_setup(self) -> Dict[str, str]:
- """__ssl_setup is private method we are using to connect with rabbit server via signed certificates and some TLS settings.
-
- Parameters
- ----------
-
- Returns
- -------
- Dict[str, str]
-
- """
- context = ssl.create_default_context(cafile=os.environ.get("RABBITMQ_CAFILE"))
- context.load_cert_chain(certfile=os.environ.get("RABBITMQ_CERT"), keyfile=os.environ.get("RABBITMQ_KEY"))
- return context
-
- def stop_consuming(self, consumer_tag: str) -> None:
- self._channel.stop_consuming(consumer_tag=consumer_tag)
- logger.info(f"Consumer is stopped, carry on. consumer_tag={consumer_tag}")
-
- def close_channel(self) -> None:
- self._channel.close()
- logger.info("Channel is closed, carry on")
-
- def close_connection(self) -> None:
- self.close_channel()
- self._connection.close()
- logger.info("Connection is closed, carry on")
-
- def queue_delete(self, queue: str):
- self._channel.queue_delete(queue=queue)
-
- def exchange_delete(self, exchange: str):
- self._channel.exchange_delete(exchange=exchange)
-
- def confirm_delivery(self):
- self._channel.confirm_delivery()
-
- def exchange_exist(self, exchange: str, exchange_type: ExchangeType):
- exch_result: pika.frame.Method = self.setup_exchange(exchange=exchange, exchange_type=exchange_type, passive=True)
- return exch_result
-
- # NOTE! This is not a check but a setup function
- def queue_exist(self, queue: str):
- queue_result = self.setup_queue(queue=queue, passive=True)
- # message_count1 = result1.method.message_count
- return queue_result
-
- # --------------------------------------------------------------
- # --------------------------------------------------------------
- # TODO NOT IN USE: Need to reformat it to publish messages to dead letters exchange after exceeding retries limit
- def consume_messages_with_retries(
- self,
- queue: str,
- callback: Callable,
- callback_args=None,
- escape_after=-1,
- dead_letters_exchange: str = None,
- dead_letters_routing_key: str = None,
- prop: pika.BasicProperties = None,
- inactivity_timeout=None,
- ):
- logger.info(f"Consuming messages: queue= {queue}")
-
- try:
- for method_frame, properties, body in self._channel.consume(queue=queue, inactivity_timeout=inactivity_timeout):
- consumer_tags = self._channel.consumer_tags
- # Let the message be in whatever data type it needs to
- message = json.loads(body)
- exchange = method_frame.exchange
- routing_key = method_frame.routing_key
- delivery_tag = method_frame.delivery_tag
- if self.verbose:
- logger.info(
- f"consumer_callback info: exchange: {exchange}, routing_key: {routing_key}, delivery_tag: {delivery_tag}, properties: {properties}, consumer_tags: {consumer_tags}"
- )
- is_processed = callback(*callback_args, message)
- logger.info(f"is_processed= {is_processed}")
- if is_processed:
- self._channel.basic_ack(delivery_tag=delivery_tag)
- logger.info("Message acknowledged")
-
- if method_frame.delivery_tag == escape_after:
- logger.info(f"Break! Max messages to be processed is {escape_after}")
- break
- else:
- logger.warning(f"Could not process the message= {message}. Process it as dead letter.")
- is_dead_letter_published = self.publish_dead_letter(
- message=message, delivery_tag=delivery_tag, dead_letters_exchange=dead_letters_exchange, dead_letters_routing_key=dead_letters_routing_key, prop=prop
- )
- if is_dead_letter_published:
- self._channel.basic_ack(delivery_tag)
- except FileNotFoundError as e:
- logger.error(f"Connection closed with error: {e}")
- self._channel.stop_consuming()
-
- def start_consumer(
- self,
- queue: str,
- callback: Callable,
- callback_args: Tuple[str, Any] = None,
- auto_ack: bool = False,
- reject_unprocessed: bool = True,
- exchange: str = None,
- exchange_type: str = None,
- routing_key: str = None,
- inactivity_timeout: int = None,
- requeue: bool = False,
- fast_setup: bool = False,
- callback_with_delivery_info: bool = False,
- thread_num: int = None,
- ):
- """
- Setup consumer:
- 1- Consumer start consuming the messages from the queue.
- 2- If `inactivity_timeout` is given (in seconds) the consumer will be canceled when the time of inactivity exceeds inactivity_timeout.
- 3- Send the consumed message to callback method to be processed, and then the message can be either:
- - Processed, then correctly-acknowledge and deleted from QUEUE or
- - Failed to process, negatively-acknowledged and then the message will be rejected and either
- - Redelivered if 'x-retry-limit' and 'x-retry' are configured in 'BasicProperties.headers'.
- - Requeued if requeue is True
- - Sent to dead-letters-exchange if it configured and
- - requeue is False
- - requeue is True and requeue attempt fails.
- - Unless deleted.
-
-
- :param str queue: The queue name to consume
- :param Callable callback: Method where received messages are sent to be processed
- :param Tuple callback_args: Tuple of arguments for callback method
- :param bool auto_ack: If True, then when a message is delivered to a consumer, it is automatically marked as acknowledged and removed from the queue without any action needed from the consumer.
- :param bool reject_unprocessed: If True(Default), then when a message is not processed correctly by the callback method, then the message will be rejected.
- :param float inactivity_timeout:
- - if a number is given (in seconds), will cause the method to yield (None, None, None) after the given period of inactivity.
- - If None is given (default), then the method blocks until the next event arrives.
- :param bool requeue: If requeue is true, the server will attempt to
- requeue the message. If requeue is false or the
- requeue attempt fails the messages are discarded or
- dead-lettered.
- :param bool callback_with_delivery_info: Specify whether the callback method needs delivery info.
- - spec.Basic.Deliver: Captures the fields for delivered message. E.g:(consumer_tag, delivery_tag, redelivered, exchange, routing_key).
- - spec.BasicProperties: Captures the client message sent to the server. E.g:(CONTENT_TYPE, DELIVERY_MODE, MESSAGE_ID, APP_ID).
- :param bool fast_setup:
- - when True, the method will create the specified exchange, queue
- and bind them together using the routing kye.
- - If False, this method will check if the specified exchange and queue
- already exist before start consuming.
- """
- print_thread_index = f"Thread={str(thread_num)} -> " if thread_num else ""
- logger.info(f"{print_thread_index}Consuming messages: queue={queue}, requeue={requeue}, inactivity_timeout={inactivity_timeout}")
- if fast_setup:
- # Setting up the necessary connections
- self.setup_exchange(exchange=exchange, exchange_type=exchange_type)
- self.setup_queue(queue=queue)
- self.setup_queue_binding(exchange=exchange, queue=queue, routing_key=routing_key)
- else:
- # Check if the necessary resources (exch & queue) are active
- try:
- if exchange and exchange_type:
- self.exchange_exist(exchange=exchange, exchange_type=exchange_type)
- self.queue_exist(queue=queue)
- except pika.exceptions.ChannelClosedByBroker as err:
- err_msg: str = f"I tried checking if the exchange and queue exist but failed with: {err}"
- logger.error(err_msg)
- logger.info("Closing the channel")
- self._channel.cancel()
- raise pika.exceptions.ChannelClosedByBroker(404, str(err))
- try:
- self.consumer_tag = None
- method_frame: pika.spec.Basic.Deliver
- properties: pika.spec.BasicProperties
- body: Any
- for method_frame, properties, body in self._channel.consume(queue=queue, auto_ack=auto_ack, inactivity_timeout=inactivity_timeout):
- try:
- if (method_frame, properties, body) != (None, None, None):
- consumer_tags = self._channel.consumer_tags
- self.consumer_tag = method_frame.consumer_tag
- app_id = properties.app_id
- msg_id = properties.message_id
- if self.verbose:
- logger.info(
- f"""
- Consumed message:
- method_frame={method_frame},
- redelivered={method_frame.redelivered},
- exchange={method_frame.exchange},
- routing_key={method_frame.routing_key},
- delivery_tag={method_frame.delivery_tag},
- properties={properties},
- consumer_tags={consumer_tags},
- consumer_tag={self.consumer_tag}
- """
- )
-
- if auto_ack:
- logger.info(f"{print_thread_index}Message coming from the app={app_id} with messageId={msg_id} is AUTO acknowledged.")
- if callback_with_delivery_info:
- is_processed = callback(*callback_args, method_frame, properties, body) if callback_args else callback(method_frame, properties, body)
- else:
- is_processed = callback(*callback_args, body) if callback_args else callback(body)
-
- if is_processed:
- logger.info(f"{print_thread_index}Message coming from the app={app_id} with messageId={msg_id} is processed correctly.")
- if not auto_ack:
- self._channel.basic_ack(delivery_tag=method_frame.delivery_tag)
- logger.info(f"{print_thread_index}Message coming from the app={app_id} with messageId={msg_id} is acknowledged.")
-
- else:
- logger.warning(f"{print_thread_index}Could not process the message coming from the app={app_id} with messageId={msg_id}.")
- if not auto_ack and reject_unprocessed:
- self._channel.basic_nack(delivery_tag=method_frame.delivery_tag, requeue=requeue)
- logger.info(f"{print_thread_index}Message coming from the app={app_id} with messageId={msg_id} is rejected.")
- if utils.is_redelivery_configured(properties):
- msg_headers = properties.headers
- x_retry = msg_headers[config.RETRY_KEY]
- x_retry_limit = msg_headers[config.RETRY_LIMIT_KEY]
- logger.warning(f"{print_thread_index}Redelivery options are configured in message headers: x-retry={x_retry}, x-retry-limit={x_retry_limit}")
- if x_retry < x_retry_limit:
- logger.warning(f"{print_thread_index}Redelivering the message with messageId={msg_id}.")
- msg_headers[config.RETRY_KEY] = x_retry + 1
- prop_redeliver = pika.BasicProperties(
- app_id=app_id,
- message_id=msg_id,
- content_type=config.CONTENT_TYPE,
- content_encoding=config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers=msg_headers,
- )
- self._channel.basic_publish(exchange=method_frame.exchange, routing_key=method_frame.routing_key, body=body, properties=prop_redeliver)
- logger.warning(f"{print_thread_index}Message with messageId={msg_id} is successfully redelivered.")
- else:
- logger.warning(f"{print_thread_index}Max number of redeliveries ({x_retry_limit}) are reached for messageId={msg_id}.")
- logger.info(f"[*] {print_thread_index} keep listening on {queue}...")
- else:
- logger.warning(f"{print_thread_index}Given period of inactivity {inactivity_timeout} is exceeded. Cancel consumer.")
- self.stop_consuming(self.consumer_tag)
- self._channel.cancel()
- except pika.exceptions.ConnectionClosedByBroker as err:
- logger.error(f"{print_thread_index}I lost the connection with the Mrsal. {err}", exc_info=True)
- self._channel.cancel()
- raise pika.exceptions.ConnectionClosedByBroker(503, str(err))
- except KeyboardInterrupt:
- logger(f"{print_thread_index}Stopping Mrsal consumption.")
- self.stop_consuming(self.consumer_tag)
- self.close_connection()
- break
- except pika.exceptions.ChannelClosedByBroker as err2:
- logger.error(f"{print_thread_index}ChannelClosed is caught while consuming. Channel is closed by broker. Cancel consumer. {str(err2)}")
- self._channel.cancel()
- raise pika.exceptions.ChannelClosedByBroker(404, str(err2))
-
- def _spawn_mrsal_and_start_new_consumer(
- self,
- thread_num: int,
- queue: str,
- callback: Callable,
- callback_args: Tuple[str, Any] = None,
- exchange: str = None,
- exchange_type: str = None,
- routing_key: str = None,
- inactivity_timeout: int = None,
- requeue: bool = False,
- fast_setup: bool = False,
- callback_with_delivery_info: bool = False,
- ):
- try:
- logger.info(f"thread_num={thread_num} -> Start consumer")
- mrsal_obj = Mrsal(
- host=self.host,
- port=self.port,
- credentials=self.credentials,
- virtual_host=self.virtual_host,
- ssl=self.ssl,
- verbose=self.verbose,
- prefetch_count=self.prefetch_count,
- heartbeat=self.heartbeat,
- blocked_connection_timeout=self.blocked_connection_timeout,
- )
- mrsal_obj.connect_to_server()
-
- mrsal_obj.start_consumer(
- callback=callback,
- callback_args=callback_args,
- queue=queue,
- requeue=requeue,
- exchange=exchange,
- exchange_type=exchange_type,
- routing_key=routing_key,
- fast_setup=fast_setup,
- inactivity_timeout=inactivity_timeout,
- callback_with_delivery_info=callback_with_delivery_info,
- thread_num=thread_num,
- )
-
- mrsal_obj.stop_consuming(mrsal_obj.consumer_tag)
- mrsal_obj.close_connection()
- logger.info(f"thread_num={thread_num} -> End consumer")
- except Exception as e:
- logger.error(f"thread_num={thread_num} -> Failed to consumer: {e}")
-
- def start_concurrence_consumer(
- self,
- total_threads: int,
- queue: str,
- callback: Callable,
- callback_args: Tuple[str, Any] = None,
- exchange: str = None,
- exchange_type: str = None,
- routing_key: str = None,
- inactivity_timeout: int = None,
- requeue: bool = False,
- fast_setup: bool = False,
- callback_with_delivery_info: bool = False,
- ):
- with concurrent.futures.ThreadPoolExecutor(max_workers=total_threads) as executor:
- executor.map(
- self._spawn_mrsal_and_start_new_consumer,
- range(total_threads),
- [queue] * total_threads,
- [callback] * total_threads,
- [callback_args] * total_threads,
- [exchange] * total_threads,
- [exchange_type] * total_threads,
- [routing_key] * total_threads,
- [inactivity_timeout] * total_threads,
- [requeue] * total_threads,
- [fast_setup] * total_threads,
- [callback_with_delivery_info] * total_threads,
- )
-
- def publish_message(
- self,
- exchange: str,
- routing_key: str,
- message: Any,
- exchange_type: ExchangeType = ExchangeType.direct,
- queue: str = None,
- fast_setup: bool = False,
- prop: pika.BasicProperties = None,
- ):
- """Publish message to the exchange specifying routing key and properties.
-
- :param str exchange: The exchange to publish to
- :param str routing_key: The routing key to bind on
- :param bytes body: The message body; empty string if no body
- :param pika.spec.BasicProperties properties: message properties
- :param bool fast_setup:
- - when True, will the method create the specified exchange, queue and bind them together using the routing kye.
- - If False, this method will check if the specified exchange and queue already exist before publishing.
-
- :raises UnroutableError: raised when a message published in publisher-acknowledgments mode (see `BlockingChannel.confirm_delivery`) is returned via `Basic.Return` followed by `Basic.Ack`.
- :raises NackError: raised when a message published in publisher-acknowledgements mode is Nack'ed by the broker. See `BlockingChannel.confirm_delivery`.
- """
- if fast_setup:
- # setting up the necessary connections
- self.setup_exchange(exchange=exchange, exchange_type=exchange_type)
- self.setup_queue(queue=queue)
- self.setup_queue_binding(exchange=exchange, queue=queue, routing_key=routing_key)
- else:
- # Check if the necessary resources (exch & queue) are active
- try:
- self.exchange_exist(exchange=exchange, exchange_type=exchange_type)
- if queue is not None:
- self.queue_exist(queue=queue)
- except pika.exceptions.ChannelClosedByBroker as err:
- logger.error(f"Failed to check active resources. Cancel consumer. {str(err)}")
- self._channel.cancel()
- raise pika.exceptions.ChannelClosedByBroker(404, str(err))
-
- try:
- # Publish the message by serializing it in json dump
- self._channel.basic_publish(exchange=exchange, routing_key=routing_key, body=json.dumps(message), properties=prop)
- logger.info(f"Message ({message}) is published to the exchange {exchange} with a routing key {routing_key}")
-
- # The message will be returned if no one is listening
- return True
- except pika.exceptions.UnroutableError as err1:
- logger.error(f"Producer could not publish message:{message} to the exchange {exchange} with a routing key {routing_key}: {err1}", exc_info=True)
- raise pika.exceptions.UnroutableError(404, str(err1))
-
- # TODO NOT IN USE: maybe we will use it in the method consume_messages_with_retries
- # to publish messages to dead letters exchange after retries limit. (remove or use)
- def publish_dead_letter(self, message: str, delivery_tag: int, dead_letters_exchange: str = None, dead_letters_routing_key: str = None, prop: pika.BasicProperties = None):
- if dead_letters_exchange is not None and dead_letters_routing_key is not None:
- logger.warning(f"Re-route the message={message} to the exchange={dead_letters_exchange} with routing_key={dead_letters_routing_key}")
- try:
- self.publish_message(exchange=dead_letters_exchange, routing_key=dead_letters_routing_key, message=json.dumps(message), properties=prop)
- logger.info(f"Dead letter was published: message={message}, exchange={dead_letters_exchange}, routing_key={dead_letters_routing_key}")
- return True
- except pika.exceptions.UnroutableError as e:
- logger.error(f"Dead letter was returned with error: {e}")
- return False
-
- @retry((gaierror, pika.exceptions.AMQPConnectionError, pika.exceptions.StreamLostError, pika.exceptions.ConnectionClosedByBroker, pika.exceptions.ChannelClosedByBroker), tries=15, delay=1, jitter=(2, 10), logger=logger)
- def full_setup(
- self,
- exchange: str = None,
- exchange_type: str = None,
- arguments: Dict[str, str] = None,
- routing_key: str = None,
- queue: str = None,
- callback: Callable = None,
- requeue: bool = False,
- callback_with_delivery_info: bool = False,
- auto_ack: bool = False,
- ) -> None:
- """
- Sets up the connection, exchange, queue, and consumer for interacting with a RabbitMQ server.
-
- This method configures the connection to the RabbitMQ server and sets up the required messaging
- components such as exchange, queue, and consumer. It also handles retries in case of connection failures.
-
- Parameters
- ----------
- exchange : str, optional
- The name of the exchange to declare. If `None`, no exchange will be declared. Default is `None`.
- exchange_type : str, optional
- The type of exchange to declare (e.g., 'direct', 'topic', 'fanout', 'headers'). Required if `exchange` is specified.
- Default is `None`.
- arguments : Dict[str, str], optional
- A dictionary of additional arguments to pass when declaring the exchange or queue. Default is `None`.
- routing_key : str, optional
- The routing key to bind the queue to the exchange. This is used to determine which messages go to which queue.
- Default is `None`.
- queue : str, optional
- The name of the queue to declare. If `None`, a randomly named queue will be created. Default is `None`.
- callback : Callable, optional
- A callback function to be executed when a message is received. The function should accept the message as a parameter.
- Default is `None`.
- requeue : bool, optional
- If `True`, failed messages will be requeued. This is used in cases where you want to retry processing a message later.
- Default is `False`.
- callback_with_delivery_info : bool, optional
- If `True`, the callback function will receive additional delivery information (e.g., delivery tag, redelivered flag).
- Default is `False`.
- auto_ack : bool, optional
- If `True`, messages will be automatically acknowledged as soon as they are delivered to the consumer.
- If `False`, messages need to be manually acknowledged. Default is `False`.
-
- Returns
- -------
- None
- This function does not return any value. It performs the setup and starts consuming messages.
-
- Raises
- ------
- pika.exceptions.AMQPConnectionError
- Raised if the connection to the RabbitMQ server fails after multiple retry attempts.
- pika.exceptions.ChannelClosedByBroker
- Raised if the channel is closed by the broker for some reason.
- pika.exceptions.ConnectionClosedByBroker
- Raised if the connection is closed by the broker.
-
- Example
- -------
- >>> major_setup(
- exchange='my_exchange',
- exchange_type='direct',
- routing_key='my_routing_key',
- queue='my_queue',
- callback=my_callback_function
- )
- """
- self.connect_to_server()
- self.setup_exchange(exchange=exchange, exchange_type=exchange_type, arguments=arguments)
- self.setup_queue(queue=queue)
- self.setup_queue_binding(exchange=exchange, queue=queue, routing_key=routing_key)
- self.start_consumer(
- queue=queue,
- callback=callback,
- requeue=requeue,
- callback_with_delivery_info=callback_with_delivery_info,
- auto_ack=auto_ack,
- )
-
-
-
-if __name__ == "__main__":
-
- # Main script testing
-
- def test_callback(
- method: pika.spec.Basic.Deliver,
- properties: pika.spec.BasicProperties,
- body: bytes
- ) -> None:
- consumer_tag = method.consumer_tag
- exchange = method.exchange
- routing_key = method.routing_key
- app_id = properties.app_id
- message_id = properties.message_id
-
- # Decode and parse the message body
- enc_payload: Dict[str, Union[str, int, List]] | str = json.loads(body)
- payload = enc_payload if isinstance(enc_payload, dict) else json.loads(enc_payload)
-
- print(f" [x] Received {payload}")
-
- print("Simulating a long running process")
- time.sleep(5)
- print("Process completed")
- return True
-
- # Example test
- print('\n\033[1;35;40m Start NeoCowboy Service \033[0m')
- # mrsal: Mrsal = Mrsal(
- # host=config.RABBIT_DOMAIN,
- # port=config.RABBITMQ_PORT_TLS,
- # credentials=(config.RABBITMQ_CREDENTIALS),
- # virtual_host=config.V_HOST,
- # ssl=True,
- # verbose=True,
- # )
-
- # mrsal.connect_to_server()
-
- # exch_result: pika.frame.Method = mrsal.setup_exchange(
- # exchange="exchangeRT",
- # exchange_type='x-delayed-message',
- # arguments={'x-delayed-type': 'x-delayed-message'},
- # )
- # q_result: pika.frame.Method = mrsal.setup_queue(queue="mrsal_testQueue")
- # qb_result: pika.frame.Method = mrsal.setup_queue_binding(
- # exchange="exchangeRT",
- # routing_key="exchangeRT.mrsal_testQueue",
- # queue="mrsal_testQueue",
- # )
-
- # mrsal.start_consumer(
- # queue="mrsal_testQueue",
- # callback=test_callback,
- # requeue=False,
- # callback_with_delivery_info=True,
- # auto_ack=True,
- # )
-
-
- Mrsal(
- host=config.RABBIT_DOMAIN,
- port=config.RABBITMQ_PORT_TLS,
- credentials=(config.RABBITMQ_CREDENTIALS),
- virtual_host=config.V_HOST,
- ssl=True,
- verbose=True,
- ).major_setup(
- exchange='exchangeRT',
- exchange_type='x-delayed-message',
- routing_key='exchangeRT.mrsal_testQueue',
- queue='mrsal_testQueue',
- callback=test_callback,
- requeue=False,
- callback_with_delivery_info=True,
- auto_ack=True,
- )
\ No newline at end of file
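The removed `test_callback` above decodes a possibly double-encoded JSON body (a dict on the first `json.loads`, or a JSON string that needs a second pass). That decode step can be sketched stand-alone; `decode_body` is an illustrative name, not part of the mrsal API:

```python
import json

def decode_body(body: bytes) -> dict:
    # Some producers double-encode: json.dumps(json.dumps(payload)).
    # Decode once; if the result is still a string, decode again.
    decoded = json.loads(body)
    return decoded if isinstance(decoded, dict) else json.loads(decoded)

print(decode_body(b'{"id": 1}'))                      # plain JSON object
print(decode_body(json.dumps('{"id": 1}').encode()))  # double-encoded
```

Both calls yield the same dict, which is why the callback checks `isinstance(enc_payload, dict)` before the second parse.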
diff --git a/mrsal/superclass.py b/mrsal/superclass.py
new file mode 100644
index 0000000..321fe40
--- /dev/null
+++ b/mrsal/superclass.py
@@ -0,0 +1,325 @@
+# external
+import os
+import ssl
+import pika
+from logging import WARNING
+from ssl import SSLContext
+from typing import Any, Type
+from mrsal.exceptions import MrsalSetupError
+from tenacity import retry, stop_after_attempt, wait_fixed, retry_if_exception_type, before_sleep_log
+from pydantic.dataclasses import dataclass
+from pika.exceptions import NackError, UnroutableError
+from neolibrary.monitoring.logger import NeoLogger
+from pydantic.deprecated.tools import json
+
+# internal
+from mrsal import config
+
+log = NeoLogger(__name__, rotate_days=config.LOG_DAYS)
+
+
+@dataclass
+# NOTE! change the doc style to google or numpy
+class Mrsal:
+ """
+ Mrsal creates a layer on top of Pika's core, providing methods to setup a RabbitMQ broker with multiple functionalities.
+
+ Properties:
+ :param str host: Hostname or IP Address to connect to
+ :param int port: TCP port to connect to
+ :param pika.credentials.Credentials credentials: auth credentials
+ :param str virtual_host: RabbitMQ virtual host to use
+ :param bool verbose: If True then more INFO logs will be printed
+ :param int heartbeat: Controls RabbitMQ's server heartbeat timeout negotiation
+ :param int prefetch_count: Specifies a prefetch window in terms of whole messages.
+ :param bool ssl: Set this flag to true if you want to connect externally to the rabbit server.
+ """
+
+ host: str
+ port: int
+ credentials: tuple[str, str]
+ virtual_host: str
+ ssl: bool = False
+ verbose: bool = False
+ prefetch_count: int = 5
+ heartbeat: int = 60 # sec
+ _connection = None
+ _channel = None
+ log = NeoLogger(__name__, rotate_days=config.LOG_DAYS)
+
+ def __post_init__(self) -> None:
+ if self.ssl:
+ tls_dict = {
+ 'crt': os.environ.get('RABBITMQ_CERT'),
+ 'key': os.environ.get('RABBITMQ_KEY'),
+ 'ca': os.environ.get('RABBITMQ_CAFILE')
+ }
+ # empty string handling
+ self.tls_dict = {cert: (env_var if env_var != '' else None) for cert, env_var in tls_dict.items()}
+ config.ValidateTLS(**self.tls_dict)
+
+ def _setup_exchange_and_queue(self,
+ exchange_name: str, queue_name: str, exchange_type: str,
+ routing_key: str, exch_args: dict[str, str] | None = None,
+ queue_args: dict[str, str] | None = None,
+ bind_args: dict[str, str] | None = None,
+ exch_durable: bool = True, queue_durable: bool = True,
+ passive: bool = False, internal: bool = False,
+ auto_delete: bool = False, exclusive: bool = False
+ ) -> None:
+
+ declare_exchange_dict = {
+ 'exchange': exchange_name,
+ 'exchange_type': exchange_type,
+ 'arguments': exch_args,
+ 'durable': exch_durable,
+ 'passive': passive,
+ 'internal': internal,
+ 'auto_delete': auto_delete
+ }
+
+ declare_queue_dict = {
+ 'queue': queue_name,
+ 'arguments': queue_args,
+ 'durable': queue_durable,
+ 'passive': passive,
+ 'exclusive': exclusive,
+ 'auto_delete': auto_delete
+ }
+
+ declare_queue_binding_dict = {
+ 'exchange': exchange_name,
+ 'queue': queue_name,
+ 'routing_key': routing_key,
+ 'arguments': bind_args
+ }
+ try:
+ self._declare_exchange(**declare_exchange_dict)
+ self._declare_queue(**declare_queue_dict)
+ self._declare_queue_binding(**declare_queue_binding_dict)
+ self.auto_declare_ok = True
+ except MrsalSetupError:
+ self.auto_declare_ok = False
+
+ def on_connection_error(self, _unused_connection, exception):
+ """
+ Handle connection errors.
+ """
+ self.log.error(f"I failed to establish async connection: {exception}")
+
+ def open_channel(self) -> None:
+ """
+ Open a channel once the connection is established.
+ """
+ self._channel = self.conn.channel()
+ self._channel.basic_qos(prefetch_count=self.prefetch_count)
+
+ def on_connection_open(self, connection) -> None:
+ """
+ Callback when the async connection is successfully opened.
+ """
+ self.conn = connection
+ self.open_channel()
+
+ def _declare_exchange(self,
+ exchange: str, exchange_type: str,
+ arguments: dict[str, str] | None,
+ durable: bool, passive: bool,
+ internal: bool, auto_delete: bool
+ ) -> None:
+ """This method creates an exchange if it does not already exist, and if the exchange exists, verifies that it is of the correct and expected class.
+
+ If passive is set, the server replies with Declare-Ok if the exchange already exists with the same name;
+ if the exchange does not exist, the server raises a channel exception with reply code 404 (not found).
+
+ :param str exchange: The exchange name
+ :param str exchange_type: The exchange type to use
+ :param bool passive: Perform a declare or just check to see if it exists
+ :param bool durable: Survive a reboot of RabbitMQ
+ :param bool auto_delete: Remove when no more queues are bound to it
+ :param bool internal: Can only be published to by other exchanges
+ :param dict arguments: Custom key/value pair arguments for the exchange
+ """
+ exchange_declare_info = f"""
+ exchange={exchange},
+ exchange_type={exchange_type},
+ durable={durable},
+ passive={passive},
+ internal={internal},
+ auto_delete={auto_delete},
+ arguments={arguments}
+ """
+ if self.verbose:
+ self.log.info(f"Declaring exchange with: {exchange_declare_info}")
+ try:
+ self._channel.exchange_declare(
+ exchange=exchange, exchange_type=exchange_type,
+ arguments=arguments, durable=durable,
+ passive=passive, internal=internal,
+ auto_delete=auto_delete
+ )
+ except Exception as e:
+ raise MrsalSetupError(f'Oopsie! Failed to declare the exchange: {e}')
+ if self.verbose:
+ self.log.success("Exchange declared!")
+
+ def _declare_queue(self,
+ queue: str, arguments: dict[str, str] | None,
+ durable: bool, exclusive: bool,
+ auto_delete: bool, passive: bool
+ ) -> None:
+ """Declare queue, create if needed. This method creates or checks a queue.
+ When creating a new queue the client can specify various properties that control the durability of the queue and its contents,
+ and the level of sharing for the queue.
+
+ Use an empty string as the queue name for the broker to auto-generate one.
+ Retrieve this auto-generated queue name from the returned `spec.Queue.DeclareOk` method frame.
+
+ :param str queue: The queue name; if empty string, the broker will create a unique queue name
+ :param bool passive: Only check to see if the queue exists and raise `ChannelClosed` if it doesn't
+ :param bool durable: Survive reboots of the broker
+ :param bool exclusive: Only allow access by the current connection
+ :param bool auto_delete: Delete after consumer cancels or disconnects
+ :param dict arguments: Custom key/value arguments for the queue
+ """
+ queue_declare_info = f"""
+ queue={queue},
+ durable={durable},
+ exclusive={exclusive},
+ auto_delete={auto_delete},
+ arguments={arguments}
+ """
+ if self.verbose:
+ self.log.info(f"Declaring queue with: {queue_declare_info}")
+
+ try:
+ self._channel.queue_declare(queue=queue, arguments=arguments, durable=durable, exclusive=exclusive, auto_delete=auto_delete, passive=passive)
+ except Exception as e:
+ raise MrsalSetupError(f'Oopsie! Failed to declare the queue: {e}')
+ if self.verbose:
+ self.log.info("Queue declared!")
+
+ def _declare_queue_binding(self,
+ exchange: str, queue: str,
+ routing_key: str | None,
+ arguments: dict[str, str] | None
+ ) -> None:
+ """Bind queue to exchange.
+
+ :param str queue: The queue to bind to the exchange
+ :param str exchange: The source exchange to bind to
+ :param str routing_key: The routing key to bind on
+ :param dict arguments: Custom key/value pair arguments for the binding
+
+ """
+ if self.verbose:
+ self.log.info(f"Binding queue to exchange: queue={queue}, exchange={exchange}, routing_key={routing_key}")
+
+ try:
+ self._channel.queue_bind(exchange=exchange, queue=queue, routing_key=routing_key, arguments=arguments)
+ if self.verbose:
+ self.log.info(f"The queue is bound to exchange successfully: queue={queue}, exchange={exchange}, routing_key={routing_key}")
+ except Exception as e:
+ raise MrsalSetupError(f'Failed to bind the queue: {e}')
+
+ def _ssl_setup(self) -> SSLContext:
+ """Build an SSLContext for connecting to the RabbitMQ server using signed certificates and TLS settings.
+
+ Returns
+ -------
+ SSLContext
+ A context loaded with the CA file and the client certificate/key from the TLS config.
+ """
+ context = ssl.create_default_context(cafile=self.tls_dict['ca'])
+ context.load_cert_chain(certfile=self.tls_dict['crt'], keyfile=self.tls_dict['key'])
+ return context
+
+ def validate_payload(self, payload: Any, model: Type) -> None:
+ """
+ Parse and validate the incoming message payload against the provided pydantic dataclass model.
+ :param payload: The message payload (bytes, str, or dict).
+ :param model: The pydantic dataclass model class to validate against.
+ :raises TypeError: If the payload type is not supported.
+ :raises ValidationError: If the payload does not match the model.
+ """
+ # If payload is bytes, decode it to a string
+ if isinstance(payload, bytes):
+ payload = payload.decode('utf-8')
+
+ # If payload is a string, attempt to load it as JSON
+ if isinstance(payload, str):
+ payload = json.loads(payload) # Converts JSON string to a dictionary
+
+ # Validate the payload against the provided model
+ if isinstance(payload, dict):
+ model(**payload)
+ else:
+ raise TypeError(f"Unsupported payload type {type(payload)}; expected bytes, str, or dict.")
+
+ @retry(
+ retry=retry_if_exception_type((
+ NackError,
+ UnroutableError
+ )),
+ stop=stop_after_attempt(3),
+ wait=wait_fixed(2),
+ before_sleep=before_sleep_log(log, WARNING)
+ )
+ def publish_message(
+ self,
+ exchange_name: str,
+ routing_key: str,
+ message: Any,
+ exchange_type: str,
+ queue_name: str,
+ auto_declare: bool = True,
+ prop: pika.BasicProperties | None = None,
+ ) -> None:
+ """Publish message to the exchange specifying routing key and properties.
+
+ :param str exchange_name: The exchange to publish to
+ :param str routing_key: The routing key to bind on
+ :param Any message: The message body to publish
+ :param str exchange_type: The exchange type to declare when auto_declare is True
+ :param str queue_name: The queue to declare and bind when auto_declare is True
+ :param bool auto_declare: When True, create the specified exchange and queue and bind them with the routing key before publishing; when False, assume they already exist
+ :param pika.BasicProperties prop: Message properties
+
+ :raises UnroutableError: raised when a message published in publisher-acknowledgements mode (see `BlockingChannel.confirm_delivery`) is returned via `Basic.Return` followed by `Basic.Ack`.
+ :raises NackError: raised when a message published in publisher-acknowledgements mode is Nack'ed by the broker. See `BlockingChannel.confirm_delivery`.
+ """
+ if auto_declare:
+ if None in (exchange_name, queue_name, exchange_type, routing_key):
+ raise TypeError('Make sure that you are passing in all the necessary args for auto_declare')
+
+ self._setup_exchange_and_queue(
+ exchange_name=exchange_name,
+ queue_name=queue_name,
+ exchange_type=exchange_type,
+ routing_key=routing_key
+ )
+ try:
+ # Publish the message by serializing it in json dump
+ # NOTE! we are not dumping a json anymore here! This allows for more flexibility
+ self._channel.basic_publish(exchange=exchange_name, routing_key=routing_key, body=message, properties=prop)
+ self.log.success(f"The message ({message}) is published to the exchange {exchange_name} with the routing key {routing_key}")
+
+ except UnroutableError as e:
+ self.log.error(f"Producer could not publish message:{message} to the exchange {exchange_name} with a routing key {routing_key}: {e}", exc_info=True)
+ raise
+ except NackError as e:
+ self.log.error(f"Message NACKed by broker: {e}")
+ raise
+ except Exception as e:
+ self.log.error(f"Unexpected error while publishing message: {e}")
+
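The `validate_payload` flow above (bytes → str → dict → model) can be sketched with the stdlib only. The real method constructs a pydantic dataclass, which validates types on construction; `StrictPayload` and the manual per-field check here are stand-ins for that behaviour:

```python
import json
from dataclasses import dataclass, fields

@dataclass
class StrictPayload:
    id: int
    name: str
    active: bool

def validate_payload(payload, model):
    # Mirror of Mrsal.validate_payload: decode bytes, parse JSON strings,
    # then construct the model from the resulting dict.
    if isinstance(payload, bytes):
        payload = payload.decode("utf-8")
    if isinstance(payload, str):
        payload = json.loads(payload)
    if not isinstance(payload, dict):
        raise TypeError(f"Unsupported payload type {type(payload)}")
    instance = model(**payload)
    # Plain dataclasses don't validate types, so check each field by hand
    # (pydantic dataclasses do this automatically).
    for f in fields(model):
        if not isinstance(getattr(instance, f.name), f.type):
            raise TypeError(f"Field {f.name!r} is not {f.type.__name__}")
    return instance

print(validate_payload(b'{"id": 1, "name": "Test", "active": true}', StrictPayload))
```

A consumer calls this before invoking the user callback, so malformed messages fail fast instead of propagating into business logic.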
diff --git a/mrsal/utils/__init__.py b/mrsal/utils/__init__.py
deleted file mode 100644
index e69de29..0000000
diff --git a/mrsal/utils/utils.py b/mrsal/utils/utils.py
deleted file mode 100644
index c89cfb3..0000000
--- a/mrsal/utils/utils.py
+++ /dev/null
@@ -1,10 +0,0 @@
-import pika
-
-from mrsal.config import config
-
-
-def is_redelivery_configured(msg_prop: pika.spec.BasicProperties):
- if hasattr(msg_prop, config.MESSAGE_HEADERS_KEY):
- headers = msg_prop.headers
- return headers is not None and config.RETRY_LIMIT_KEY in headers and config.RETRY_KEY in headers
- return False
diff --git a/noxfile.py b/noxfile.py
index a05c4b0..0bf5d3d 100644
--- a/noxfile.py
+++ b/noxfile.py
@@ -1,76 +1,121 @@
import os
import shutil
-
import nox
from nox.sessions import Session
+# Define paths for easy management
+ROOT_PATH = os.getcwd()
+NOX_PATH = os.path.join(ROOT_PATH, ".nox")
+REPORTS_PATH = os.path.join(ROOT_PATH, "reports")
+COVERAGE_PATH = os.path.join(REPORTS_PATH, "coverage")
+JUNIT_PATH = os.path.join(REPORTS_PATH, "junit")
+RUFF_PATH = os.path.join(REPORTS_PATH, "ruff")
+BADGES_PATH = os.path.join(REPORTS_PATH, "badges")
+
-@nox.session()
+@nox.session(name="setup", reuse_venv=True)
def setup(session: Session):
+ """
+ Setup the environment by creating necessary directories and cleaning up old artifacts.
+ """
try:
- root_path: str = os.getcwd()
- # session.run("poetry", "install", "--with", "dev", external=True)
- nox_path = os.path.join(root_path, ".nox")
- reports_path = os.path.join(root_path, "reports")
- coverage_path = os.path.join(reports_path, "coverage")
- junit_path = os.path.join(reports_path, "junit")
- ruff_path = os.path.join(reports_path, "ruff")
- badges_path = os.path.join(reports_path, "badges")
-
- if os.path.exists(nox_path):
- shutil.rmtree(nox_path)
- if os.path.exists(reports_path):
- shutil.rmtree(reports_path)
-
- os.makedirs(reports_path)
- os.makedirs(junit_path)
- os.makedirs(coverage_path)
- os.makedirs(ruff_path)
- os.makedirs(badges_path)
+ # Clean up existing directories if they exist
+ for path in [NOX_PATH, REPORTS_PATH]:
+ if os.path.exists(path):
+ shutil.rmtree(path)
+
+ # Create required directories
+ os.makedirs(COVERAGE_PATH, exist_ok=True)
+ os.makedirs(JUNIT_PATH, exist_ok=True)
+ os.makedirs(RUFF_PATH, exist_ok=True)
+ os.makedirs(BADGES_PATH, exist_ok=True)
+
+ session.log("Setup complete: directories are clean and ready.")
except Exception as e:
- # Handle other exceptions
- print(f"Error: {e}")
+ session.error(f"Setup failed: {e}")
-@nox.session()
+@nox.session(name="tests", reuse_venv=True)
def tests(session: Session):
- session.run("poetry", "install", "--with", "dev", external=True)
- # coverage
- session.run(
- "poetry",
- "run",
- "coverage",
- "run",
- "--source=.",
- "--data-file",
- "./.coverage",
- "-m",
- "pytest",
- "./tests",
- "--junitxml=./reports/junit/junit.xml",
- "--ignore=./tests/test_ssl",
- external=True,
- )
- session.run("poetry", "run", "coverage", "report", external=True)
- session.run("poetry", "run", "coverage", "xml", external=True)
- session.run("poetry", "run", "coverage", "html", external=True)
- session.run("mv", ".coverage", "./reports/coverage", external=True)
- session.run("mv", "coverage.xml", "./reports/coverage", external=True)
- session.run("cp", "-R", "htmlcov/", "./reports/coverage", external=True)
- session.run("rm", "-R", "htmlcov/", external=True)
-
-
-@nox.session()
+ """
+ Run tests using coverage and output results in multiple formats.
+ """
+ try:
+ # Install dependencies only necessary for testing
+ session.run("poetry", "install", "--with", "dev", external=True)
+
+ # Run pytest with coverage
+ session.run(
+ "poetry", "run", "coverage", "run",
+ "--source=.",
+ "--data-file", "./.coverage",
+ "-m", "pytest",
+ "./tests",
+ "--junitxml=./reports/junit/junit.xml",
+ "--ignore=./tests/test_ssl",
+ external=True,
+ )
+
+ # Generate and move coverage reports
+ session.run("poetry", "run", "coverage", "report", external=True)
+ session.run("poetry", "run", "coverage", "xml", external=True)
+ session.run("poetry", "run", "coverage", "html", external=True)
+ shutil.move(".coverage", COVERAGE_PATH)
+ shutil.move("coverage.xml", COVERAGE_PATH)
+ shutil.copytree("htmlcov", os.path.join(COVERAGE_PATH, "htmlcov"))
+ shutil.rmtree("htmlcov")
+
+ session.log("Tests and coverage reporting complete.")
+ except Exception as e:
+ session.error(f"Tests failed: {e}")
+
+
+@nox.session(name="lint", reuse_venv=True)
def lint(session: Session):
- session.install("ruff")
- # Tell Nox to treat non-zero exit codes from Ruff as success using success_codes.
- # We do that because Ruff returns `1` when errors found in the code syntax, e.g(missing whitespace, ..)
- session.run("ruff", "check", ".", "--config=./ruff.toml", "--preview", "--statistics", "--output-file=./reports/ruff/ruff.txt", success_codes=[0, 1])
+ """
+ Run code linting using Ruff.
+ """
+ try:
+ session.install("ruff")
+ session.run(
+ "ruff", "check", ".",
+ "--config=./ruff.toml",
+ "--preview",
+ "--statistics",
+ "--output-file=./reports/ruff/ruff.txt",
+ success_codes=[0, 1]
+ )
+ session.log("Linting complete: check reports for details.")
+ except Exception as e:
+ session.error(f"Linting failed: {e}")
-@nox.session()
+@nox.session(name="generate_badges", reuse_venv=True)
def gen_badge(session: Session):
- session.install("genbadge[tests,coverage,flake8]")
- session.run("genbadge", "tests", "-i", "./reports/junit/junit.xml", "-o", "./reports/badges/tests-badge.svg")
- session.run("genbadge", "coverage", "-i", "./reports/coverage/coverage.xml", "-o", "./reports/badges/coverage-badge.svg")
- session.run("genbadge", "flake8", "-i", "./reports/ruff/ruff.txt", "-o", "./reports/badges/ruff-badge.svg")
+ """
+ Generate badges for test, coverage, and lint results.
+ """
+ try:
+ session.install("genbadge[tests,coverage,flake8]")
+ session.run("genbadge", "tests", "-i", "./reports/junit/junit.xml", "-o", "./reports/badges/tests-badge.svg")
+ session.run("genbadge", "coverage", "-i", "./reports/coverage/coverage.xml", "-o", "./reports/badges/coverage-badge.svg")
+ session.run("genbadge", "flake8", "-i", "./reports/ruff/ruff.txt", "-o", "./reports/badges/ruff-badge.svg")
+
+ session.log("Badges generated successfully.")
+ except Exception as e:
+ session.error(f"Badge generation failed: {e}")
+
+
+@nox.session(name="clean", reuse_venv=True)
+def clean(session: Session):
+ """
+ Clean all generated directories and files to reset the environment.
+ """
+ try:
+ for path in [NOX_PATH, REPORTS_PATH]:
+ if os.path.exists(path):
+ shutil.rmtree(path)
+
+ session.log("Clean complete: all temporary files removed.")
+ except Exception as e:
+ session.error(f"Clean failed: {e}")
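The `setup` and `clean` sessions share one wipe-then-recreate pattern for the report directories. A self-contained sketch of that pattern, run against a temp dir rather than the real repo root:

```python
import os
import shutil
import tempfile

def reset_reports(root: str) -> list[str]:
    # Wipe any stale reports tree, then recreate the four report subdirs,
    # mirroring the nox `setup` session above.
    reports = os.path.join(root, "reports")
    if os.path.exists(reports):
        shutil.rmtree(reports)
    for name in ("badges", "coverage", "junit", "ruff"):
        os.makedirs(os.path.join(reports, name), exist_ok=True)
    return sorted(os.listdir(reports))

with tempfile.TemporaryDirectory() as tmp:
    print(reset_reports(tmp))  # ['badges', 'coverage', 'junit', 'ruff']
```

Because the tree is removed first, the function is idempotent: re-running it on a populated root yields the same clean layout, which is exactly what CI needs between runs.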
diff --git a/pyproject.toml b/pyproject.toml
index 75938d0..11bdac8 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -6,7 +6,7 @@ license = ""
maintainers = ["Raafat ", "Jon E Nesvold "]
name = "mrsal"
readme = "README.md"
-version = "0.7.7-alpha"
+version = "1.0.0b"
[tool.poetry.dependencies]
colorlog = "^6.7.0"
@@ -16,6 +16,8 @@ retry = "^0.9.2"
ruff = "^0.1.8"
nox = "^2024.4.15"
loguru = "^0.7.2"
+tenacity = "^9.0.0"
+neolibrary = {version = "^0.8.0b1", source = "neomedsys"}
[tool.poetry.group.dev.dependencies]
coverage = "^7.2.7"
@@ -24,3 +26,8 @@ pytest = "^7.4.0"
[build-system]
build-backend = "poetry.core.masonry.api"
requires = ["poetry-core>=1.0.0"]
+
+[[tool.poetry.source]]
+name = "neomedsys"
+url = "https://pypi.neomodels.app/simple"
+priority = "supplemental"
diff --git a/reports/badges/coverage-badge.svg b/reports/badges/coverage-badge.svg
deleted file mode 100644
index e94df6c..0000000
--- a/reports/badges/coverage-badge.svg
+++ /dev/null
@@ -1 +0,0 @@
-
\ No newline at end of file
diff --git a/reports/badges/ruff-badge.svg b/reports/badges/ruff-badge.svg
deleted file mode 100644
index a08fd96..0000000
--- a/reports/badges/ruff-badge.svg
+++ /dev/null
@@ -1 +0,0 @@
-
\ No newline at end of file
diff --git a/reports/badges/tests-badge.svg b/reports/badges/tests-badge.svg
deleted file mode 100644
index 1b0e75f..0000000
--- a/reports/badges/tests-badge.svg
+++ /dev/null
@@ -1 +0,0 @@
-
\ No newline at end of file
diff --git a/reports/coverage/.coverage b/reports/coverage/.coverage
deleted file mode 100644
index a2b8b7f..0000000
Binary files a/reports/coverage/.coverage and /dev/null differ
diff --git a/reports/coverage/coverage.xml b/reports/coverage/coverage.xml
deleted file mode 100644
index f656ee6..0000000
--- a/reports/coverage/coverage.xml
+++ /dev/null
@@ -1,1228 +0,0 @@
diff --git a/reports/junit/junit.xml b/reports/junit/junit.xml
deleted file mode 100644
index f2416a7..0000000
--- a/reports/junit/junit.xml
+++ /dev/null
@@ -1 +0,0 @@
-
\ No newline at end of file
diff --git a/tests/config.py b/tests/config.py
deleted file mode 100644
index 1954ed6..0000000
--- a/tests/config.py
+++ /dev/null
@@ -1,22 +0,0 @@
-TEST_MESSAGE: str = "4f02964a-876a-419c-b309-d784da4b040a"
-TEST_MESSAGE_INDEX: int = 3
-
-HOST: str = "localhost"
-PORT: int = 5672
-QUEUE: str = "emergency_queue"
-DEAD_LETTER_QUEUE: str = "dl_queue"
-
-EXCHANGE: str = "emergency_exchange"
-EXCHANGE_TYPE: str = "direct"
-ROUTING_KEY: str = "emergency_routing_key"
-
-DELAY_EXCHANGE: str = "delay_exchange"
-DELAY_ROUTING_KEY: str = "delay_routing_key"
-
-DEAD_LETTER_EXCHANGE: str = "dead_letter_exchange"
-DEAD_LETTER_ROUTING_KEY: str = "dead_letter_routing_key"
-
-MESSAGE_TTL: int = 2000 # ms
-
-CONTENT_TYPE: str = "text/plain"
-CONTENT_ENCODING: str = "utf-8"
diff --git a/tests/conftest.py b/tests/conftest.py
new file mode 100644
index 0000000..6f91a37
--- /dev/null
+++ b/tests/conftest.py
@@ -0,0 +1,19 @@
+from pydantic.dataclasses import dataclass
+import warnings
+
+# Suppress RuntimeWarnings for unawaited coroutines globally during tests
+warnings.filterwarnings("ignore", message="coroutine '.*' was never awaited", category=RuntimeWarning)
+
+
+SETUP_ARGS = {
+ 'host': 'localhost',
+ 'port': 5672,
+ 'credentials': ('user', 'password'),
+ 'virtual_host': 'testboi'
+}
+
+@dataclass
+class ExpectedPayload:
+ id: int
+ name: str
+ active: bool
diff --git a/tests/test_async_mrsal.py b/tests/test_async_mrsal.py
new file mode 100644
index 0000000..d91e10c
--- /dev/null
+++ b/tests/test_async_mrsal.py
@@ -0,0 +1,172 @@
+import unittest
+from unittest.mock import Mock, MagicMock, patch
+from mrsal.amqp.subclass import MrsalAMQP
+from mrsal.exceptions import MrsalAbortedSetup, MrsalSetupError
+from pika.exceptions import AMQPConnectionError
+from tenacity import RetryError
+from tests.conftest import SETUP_ARGS, ExpectedPayload
+
+class TestMrsalAsyncAMQP(unittest.TestCase):
+ def setUp(self):
+ self.mock_channel = MagicMock()
+ self.consumer = MrsalAMQP(**SETUP_ARGS)
+ self.consumer._channel = self.mock_channel
+
+ @patch.object(MrsalAMQP, 'setup_async_connection')
+ def test_retry_on_connection_failure_blocking(self, mock_async_connection):
+ """Test reconnection retries in blocking consumer mode."""
+
+ # Set up a mock callback function
+ mock_callback = Mock()
+
+ self.mock_channel.consume.side_effect = AMQPConnectionError("Connection lost")
+
+ with self.assertRaises(RetryError):
+ self.consumer.start_consumer(
+ queue_name='test_q',
+ exchange_name='test_x',
+ exchange_type='direct',
+ routing_key='test_route',
+ callback=mock_callback
+ )
+
+ self.assertEqual(mock_async_connection.call_count, 3)
+
+ @patch('mrsal.amqp.subclass.MrsalAMQP._setup_exchange_and_queue')
+ def test_raises_mrsal_aborted_setup_on_failed_auto_declaration(self, mock_setup_exchange_and_queue):
+ """Test that MrsalAbortedSetup is raised if the auto declaration fails."""
+ self.consumer.auto_declare_ok = False # Simulate auto declaration failure
+ mock_setup_exchange_and_queue.return_value = None # Simulate the method execution without error
+ with self.assertRaises(MrsalAbortedSetup):
+ self.consumer.start_consumer(
+ exchange_name="test_exchange",
+ exchange_type="direct",
+ queue_name="test_queue",
+ routing_key="test_route"
+ )
+
+ def test_setup_raises_setup_error_on_exchange_failure(self):
+ """Test that MrsalSetupError is raised if exchange declaration fails."""
+ self.mock_channel.exchange_declare.side_effect = MrsalSetupError("Exchange error")
+ with self.assertRaises(MrsalSetupError):
+ self.consumer._declare_exchange(
+ exchange="test_x",
+ exchange_type="direct",
+ arguments=None,
+ durable=True,
+ passive=False,
+ internal=False,
+ auto_delete=False
+ )
+
+ def test_setup_raises_setup_error_on_queue_failure(self):
+ """Test that MrsalSetupError is raised if queue declaration fails."""
+ self.mock_channel.queue_declare.side_effect = MrsalSetupError("Queue error")
+ with self.assertRaises(MrsalSetupError):
+ self.consumer._declare_queue(
+ queue="test_q",
+ arguments=None,
+ durable=True,
+ exclusive=False,
+ auto_delete=False,
+ passive=False
+ )
+
+ def test_setup_raises_setup_error_on_binding_failure(self):
+ """Test that MrsalSetupError is raised if queue binding fails."""
+ self.mock_channel.queue_bind.side_effect = MrsalSetupError("Bind error")
+ with self.assertRaises(MrsalSetupError):
+ self.consumer._declare_queue_binding(
+ exchange='test_x',
+ queue='test_q',
+ routing_key='test_route',
+ arguments=None
+ )
+
+ def test_valid_message_processing(self):
+ """Test message processing with a valid payload and a user-defined callback."""
+ valid_body = b'{"id": 1, "name": "Test", "active": true}'
+ mock_method_frame = MagicMock()
+ mock_properties = MagicMock()
+
+ self.mock_channel.consume.return_value = [(mock_method_frame, mock_properties, valid_body)]
+ mock_callback = Mock()
+
+ self.consumer.start_consumer(
+ exchange_name="test_exchange",
+ exchange_type="direct",
+ queue_name="test_queue",
+ routing_key="test_route",
+ callback=mock_callback,
+ payload_model=ExpectedPayload
+ )
+
+ mock_callback.assert_called_once_with(mock_method_frame, mock_properties, valid_body)
+
+ def test_invalid_message_skips_processing(self):
+ """Test that invalid payloads are skipped and do not invoke the callback."""
+ invalid_body = b'{"id": "wrong_type", "name": "Test", "active": true}'
+ mock_method_frame = MagicMock()
+ mock_properties = MagicMock()
+
+ self.mock_channel.consume.return_value = [(mock_method_frame, mock_properties, invalid_body)]
+ mock_callback = Mock()
+
+ self.consumer.start_consumer(
+ exchange_name="test_exchange",
+ exchange_type="direct",
+ queue_name="test_queue",
+ routing_key="test_route",
+ callback=mock_callback,
+ payload_model=ExpectedPayload
+ )
+
+ mock_callback.assert_not_called()
+
+ def test_message_acknowledgment_on_success(self):
+ """Test that a message is acknowledged on successful processing."""
+ valid_body = b'{"id": 1, "name": "Test", "active": true}'
+ mock_method_frame = MagicMock()
+ mock_method_frame.delivery_tag = 123
+ mock_properties = MagicMock()
+
+ self.mock_channel.consume.return_value = [(mock_method_frame, mock_properties, valid_body)]
+ mock_callback = Mock()
+
+ self.consumer.start_consumer(
+ exchange_name="test_exchange",
+ exchange_type="direct",
+ queue_name="test_queue",
+ routing_key="test_route",
+ callback=mock_callback,
+ payload_model=ExpectedPayload,
+ auto_ack=False
+ )
+
+ self.mock_channel.basic_ack.assert_called_once_with(delivery_tag=123)
+
+ def test_message_nack_on_callback_failure(self):
+ """Test that a message is nacked and requeued on callback failure."""
+ valid_body = b'{"id": 1, "name": "Test", "active": true}'
+ mock_method_frame = MagicMock()
+ mock_method_frame.delivery_tag = 123
+ mock_properties = MagicMock()
+
+ self.mock_channel.consume.return_value = [(mock_method_frame, mock_properties, valid_body)]
+ mock_callback = Mock(side_effect=Exception("Callback error"))
+
+ self.consumer.start_consumer(
+ exchange_name="test_exchange",
+ exchange_type="direct",
+ queue_name="test_queue",
+ routing_key="test_route",
+ callback=mock_callback,
+ payload_model=ExpectedPayload,
+ auto_ack=False
+ )
+
+ self.mock_channel.basic_nack.assert_called_once_with(delivery_tag=123, requeue=True)
+
+
+if __name__ == '__main__':
+ unittest.main()
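The tests above all follow one pattern: stub the channel with `MagicMock`, feed `consume` a canned `(method, properties, body)` triple, and assert on the ack/nack calls. Reduced to its core (the consume loop here is hand-rolled for illustration, not MrsalAMQP's real loop):

```python
from unittest.mock import MagicMock, Mock

# Stub the channel and one delivered message frame.
channel = MagicMock()
method = MagicMock()
method.delivery_tag = 123
channel.consume.return_value = [(method, MagicMock(), b'{"id": 1}')]

# Drive one message through a minimal consume loop and ack it.
callback = Mock()
for frame, props, body in channel.consume("test_queue"):
    callback(frame, props, body)
    channel.basic_ack(delivery_tag=frame.delivery_tag)

callback.assert_called_once()
channel.basic_ack.assert_called_once_with(delivery_tag=123)
print("mocked ack verified")
```

Because `MagicMock` records every call, the assertions verify broker interactions without a running RabbitMQ, which is what keeps these unit tests fast and hermetic.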
diff --git a/tests/test_auto_ack/test_message_auto_ack.py b/tests/test_auto_ack/test_message_auto_ack.py
deleted file mode 100644
index 1db0fdf..0000000
--- a/tests/test_auto_ack/test_message_auto_ack.py
+++ /dev/null
@@ -1,154 +0,0 @@
-import json
-import time
-import pika
-from loguru import logger as log
-
-import mrsal.config.config as config
-import tests.config as test_config
-from mrsal.mrsal import Mrsal
-
-mrsal = Mrsal(host=test_config.HOST, port=config.RABBITMQ_PORT, credentials=config.RABBITMQ_CREDENTIALS, virtual_host=config.V_HOST)
-mrsal.connect_to_server()
-
-
-def get_all_messages(mrsal_obj: Mrsal, queue_name):
- messages = []
- while True:
- # Get a message
- method_frame, _header_frame, body = mrsal_obj._channel.basic_get(queue=queue_name)
-
- # If no more messages, break from the loop
- if method_frame is None:
- break
-
- # Add the message to the list
- enc_payload = json.loads(body)
- mrsal_msg = enc_payload if isinstance(enc_payload, dict) else json.loads(enc_payload)
- log.info(f"Received message {mrsal_msg}")
- messages.append(mrsal_msg)
-
- # Acknowledge the message (optional, depending on your use case)
- # channel.basic_ack(delivery_tag=method_frame.delivery_tag)
- return messages
-
-
-def test_message_auto_ack():
- # Delete existing queues and exchanges to use
- mrsal.exchange_delete(exchange="agreements")
- mrsal.queue_delete(queue="agreements_berlin_queue")
- mrsal.queue_delete(queue="agreements_madrid_queue")
- # ------------------------------------------
-
- # Setup exchange
- exch_result: pika.frame.Method = mrsal.setup_exchange(exchange="agreements", exchange_type="direct")
- assert exch_result is not None
- # ------------------------------------------
-
- # Setup queue for berlin agreements
- q_result1: pika.frame.Method = mrsal.setup_queue(queue="agreements_berlin_queue")
- assert q_result1 is not None
-
- # Bind queue to exchange with binding key
- qb_result1: pika.frame.Method = mrsal.setup_queue_binding(exchange="agreements", routing_key="berlin agreements", queue="agreements_berlin_queue")
- assert qb_result1 is not None
- # ------------------------------------------
-
- # Setup queue for madrid agreements
- q_result2: pika.frame.Method = mrsal.setup_queue(queue="agreements_madrid_queue")
- assert q_result2 is not None
-
- # Bind queue to exchange with binding key
- qb_result2: pika.frame.Method = mrsal.setup_queue_binding(exchange="agreements", routing_key="madrid agreements", queue="agreements_madrid_queue")
- assert qb_result2 is not None
- # ------------------------------------------
-
- # Publisher:
- prop1 = pika.BasicProperties(
- app_id="test_exchange_direct",
- message_id="madrid_uuid",
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers=None,
- )
-
- # Message ("uuid2") is published to the exchange and it's routed to queue2
- message_madrid = "uuid_madrid"
- mrsal.publish_message(exchange="agreements", routing_key="madrid agreements", message=json.dumps(message_madrid), prop=prop1)
-
- prop2 = pika.BasicProperties(
- app_id="test_exchange_direct",
- message_id="berlin_uuid",
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers=None,
- )
- # Message ("uuid1") is published to the exchange and it's routed to queue1
- message_berlin = "uuid_berlin"
- mrsal.publish_message(exchange="agreements", routing_key="berlin agreements", message=json.dumps(message_berlin), prop=prop2)
- # ------------------------------------------
-
- time.sleep(1)
- # Confirm messages are routed to respected queues
- result1 = mrsal.setup_queue(queue="agreements_berlin_queue", passive=True)
- message_count1 = result1.method.message_count
- assert message_count1 == 1
-
- result2 = mrsal.setup_queue(queue="agreements_madrid_queue", passive=True)
- message_count2 = result2.method.message_count
- assert message_count2 == 1
- # ------------------------------------------
-
- # The message is not being processed correctly by the callback method, yet it remains in the queue.
- # This is due to the fact that the message is not auto-acknowledged during consumption (auto_ack=False),
- # and it is also not rejected when it is not processed correctly by the callback method (reject_unprocessed=False).
- mrsal.start_consumer(
- queue="agreements_berlin_queue",
- callback=consumer_callback_berlin,
- callback_args=(test_config.HOST, "agreements_berlin_queue"),
- inactivity_timeout=1,
- requeue=False,
- auto_ack=False,
- reject_unprocessed=False,
- callback_with_delivery_info=True,
- )
-
- # The message is not being processed correctly by the callback method, but it deleted from the queue.
- # This is due to the fact that the message is auto-acknowledged during consumption (auto_ack=True),
- mrsal.start_consumer(
- queue="agreements_madrid_queue",
- callback=consumer_callback_madrid,
- callback_args=("agreements_madrid_queue",),
- inactivity_timeout=1,
- requeue=False,
- auto_ack=True,
- reject_unprocessed=False,
- callback_with_delivery_info=True,
- )
- # ------------------------------------------
- mrsal.close_connection()
-
- mrsal_obj = Mrsal(host=test_config.HOST, port=config.RABBITMQ_PORT, credentials=config.RABBITMQ_CREDENTIALS, virtual_host=config.V_HOST)
- mrsal_obj.connect_to_server()
- berlin_messages = get_all_messages(mrsal_obj=mrsal_obj, queue_name="agreements_berlin_queue")
- madrid_messages = get_all_messages(mrsal_obj=mrsal_obj, queue_name="agreements_madrid_queue")
-
- print(f"--> berlin_messages={berlin_messages}")
- print(f"--> madrid_messages={madrid_messages}")
- assert len(berlin_messages) == 1
- assert berlin_messages[0] == message_berlin
- assert len(madrid_messages) == 0
-
- mrsal_obj.close_connection()
-
-
-def consumer_callback_berlin(host_param: str, queue_param: str, method_frame: pika.spec.Basic.Deliver, properties: pika.spec.BasicProperties, message_param: str):
- time.sleep(1)
- return False
-
-
-def consumer_callback_madrid(queue_param: str, method_frame: pika.spec.Basic.Deliver, properties: pika.spec.BasicProperties, message_param: str):
- time.sleep(1)
- return False
-
diff --git a/tests/test_blocking_mrsal.py b/tests/test_blocking_mrsal.py
new file mode 100644
index 0000000..596a278
--- /dev/null
+++ b/tests/test_blocking_mrsal.py
@@ -0,0 +1,253 @@
+import os
+import unittest
+from unittest.mock import Mock, patch, MagicMock, call
+from pika.exceptions import AMQPConnectionError, UnroutableError
+from pydantic import ValidationError
+
+from mrsal.amqp.subclass import MrsalAMQP
+from tenacity import RetryError
+from tests.conftest import SETUP_ARGS, ExpectedPayload
+
+
+
+class TestMrsalBlockingAMQP(unittest.TestCase):
+ @patch('mrsal.amqp.subclass.MrsalAMQP.setup_blocking_connection')
+ @patch('mrsal.amqp.subclass.pika.channel')
+ def setUp(self, mock_pika_channel, mock_setup_connection):
+ # Set up mock behaviors for the connection and channel.
+ # Note: patch decorators apply bottom-up, so the innermost patch
+ # ('pika.channel') is the first argument here.
+ self.mock_channel = MagicMock()
+ self.mock_connection = MagicMock()
+ self.mock_connection.channel.return_value = self.mock_channel
+ mock_pika_channel.return_value = self.mock_connection
+
+ # Mock the setup_connection to simulate a successful connection setup
+ mock_setup_connection.return_value = None # Simulate setup_connection doing nothing (successful setup)
+
+ # Create an instance of MrsalAMQP in blocking mode
+ self.consumer = MrsalAMQP(**SETUP_ARGS, use_blocking=True)
+ self.consumer._channel = self.mock_channel # Set the channel to the mocked one
+
+ @patch.object(MrsalAMQP, 'setup_blocking_connection')
+ def test_retry_on_connection_failure_blocking(self, mock_blocking_connection):
+ """Test reconnection retries in blocking consumer mode."""
+
+ # Set up a mock callback function
+ mock_callback = Mock()
+
+ self.mock_channel.consume.side_effect = AMQPConnectionError("Connection lost")
+
+ with self.assertRaises(RetryError):
+ self.consumer.start_consumer(
+ queue_name='test_q',
+ exchange_name='test_x',
+ exchange_type='direct',
+ routing_key='test_route',
+ callback=mock_callback,
+ )
+
+ self.assertEqual(mock_blocking_connection.call_count, 3)
+
+ def test_valid_message_processing(self):
+ # Simulate a valid message
+ valid_body = b'{"id": 1, "name": "Test", "active": true}'
+ mock_method_frame = MagicMock()
+ mock_properties = MagicMock()
+
+ # Mock the consume method to yield a valid message
+ self.mock_channel.consume.return_value = [(mock_method_frame, mock_properties, valid_body)]
+
+ # Set up a mock callback function
+ mock_callback = Mock()
+
+
+ # Start the consumer with the payload model and callback
+ self.consumer.start_consumer(
+ queue_name='test_q',
+ exchange_name='test_x',
+ exchange_type='direct',
+ routing_key='test_route',
+ callback=mock_callback,
+ payload_model=ExpectedPayload
+ )
+
+ # Assert the callback was called once with the correct data
+ mock_callback.assert_called_once_with(mock_method_frame, mock_properties, valid_body)
+
+ def test_valid_message_processing_no_autoack(self):
+ """Test that a message is acknowledged on successful processing."""
+ valid_body = b'{"id": 1, "name": "Test", "active": true}'
+ mock_method_frame = MagicMock()
+ mock_method_frame.delivery_tag = 123
+ mock_properties = MagicMock()
+
+ self.mock_channel.consume.return_value = [(mock_method_frame, mock_properties, valid_body)]
+ mock_callback = Mock()
+
+ self.consumer.start_consumer(
+ exchange_name="test_x",
+ exchange_type="direct",
+ queue_name="test_q",
+ routing_key="test_route",
+ callback=mock_callback,
+ payload_model=ExpectedPayload,
+ auto_ack=False
+ )
+
+ self.mock_channel.basic_ack.assert_called_once_with(delivery_tag=123)
+
+ def test_invalid_message_skipped(self):
+ # Simulate an invalid message that fails validation
+ invalid_body = b'{"id": "wrong_type", "name": "Test", "active": true}'
+ mock_method_frame = MagicMock()
+ mock_properties = MagicMock()
+
+ # Mock the consume method to yield an invalid message
+ self.mock_channel.consume.return_value = [(mock_method_frame, mock_properties, invalid_body)]
+
+ # Set up a mock callback function
+ mock_callback = Mock()
+
+ # Start the consumer with the payload model and callback
+ self.consumer.start_consumer(
+ queue_name='test_queue',
+ auto_ack=True,
+ exchange_name='test_x',
+ exchange_type='direct',
+ routing_key='test_route',
+ callback=mock_callback,
+ payload_model=ExpectedPayload
+ )
+
+ # Assert the callback was not called since the message should be skipped
+ mock_callback.assert_not_called()
+
+ def test_requeue_on_validation_failure(self):
+ # Simulate an invalid message that fails validation
+ invalid_body = b'{"id": "wrong_type", "name": "Test", "active": true}'
+ mock_method_frame = MagicMock()
+ mock_method_frame.delivery_tag = 123 # Set a delivery tag for nack
+ mock_properties = MagicMock()
+
+ # Mock the consume method to yield an invalid message
+ self.mock_channel.consume.return_value = [(mock_method_frame, mock_properties, invalid_body)]
+
+ # Start the consumer with the payload model
+ with patch.object(self.consumer._channel, 'basic_nack') as mock_nack:
+ self.consumer.start_consumer(
+ queue_name='test_q',
+ auto_ack=False, # Disable auto_ack to test nack behavior
+ exchange_name='test_x',
+ exchange_type='direct',
+ routing_key='test_route',
+ payload_model=ExpectedPayload
+ )
+
+ # Assert that basic_nack was called with requeue=True
+ mock_nack.assert_called_once_with(delivery_tag=123, requeue=True)
+
+ def test_publish_message(self):
+ """Test that the message is correctly published to the exchange."""
+ # Mock the setup methods for auto declare
+ self.consumer._setup_exchange_and_queue = Mock()
+
+ # Mock the message to be published
+ message = b'{"data": "test_message"}'
+ exchange_name = 'test_x'
+ routing_key = 'test_route'
+
+ # Publish the message
+ self.consumer.publish_message(
+ exchange_name=exchange_name,
+ routing_key=routing_key,
+ message=message,
+ exchange_type='direct',
+ queue_name='test_q',
+ auto_declare=True
+ )
+
+ # Assert the setup was called
+ self.consumer._setup_exchange_and_queue.assert_called_once_with(
+ exchange_name=exchange_name,
+ queue_name='test_q',
+ exchange_type='direct',
+ routing_key=routing_key
+ )
+
+ # Assert the message was published correctly
+ self.mock_channel.basic_publish.assert_called_once_with(
+ exchange=exchange_name,
+ routing_key=routing_key,
+ body=message,
+ properties=None
+ )
+
+ def test_retry_on_unroutable_error(self):
+ """Test that the publish_message retries 3 times when UnroutableError is raised."""
+ # Mock the setup methods for auto declare
+ self.consumer._setup_exchange_and_queue = Mock()
+
+ # Set up the message and parameters
+ message = "test_message"
+ exchange_name = 'test_x'
+ routing_key = 'test_route'
+ queue_name = 'test_q'
+
+ # Mock the basic_publish to raise UnroutableError
+ self.mock_channel.basic_publish.side_effect = UnroutableError("Message could not be routed")
+
+ # Attempt to publish the message
+ with self.assertRaises(RetryError):
+ self.consumer.publish_message(
+ exchange_name=exchange_name,
+ routing_key=routing_key,
+ message=message,
+ exchange_type='direct',
+ queue_name=queue_name,
+ auto_declare=True
+ )
+
+ # Assert that basic_publish was called 3 times due to retries
+ self.assertEqual(self.mock_channel.basic_publish.call_count, 3)
+
+ # Assert that each retry was made with the same expected arguments
+ expected_call = call(
+ expected_call = call(
+ exchange=exchange_name,
+ routing_key=routing_key,
+ body=message,
+ properties=None
+ )
+ self.mock_channel.basic_publish.assert_has_calls([expected_call] * 3)
+
+class TestBlockRabbitSSLSetup(unittest.TestCase):
+
+ def test_ssl_setup_with_valid_paths(self):
+ with patch.dict('os.environ', {
+ 'RABBITMQ_CERT': 'test_cert.crt',
+ 'RABBITMQ_KEY': 'test_key.key',
+ 'RABBITMQ_CAFILE': 'test_ca.ca'
+ }, clear=True):
+ consumer = MrsalAMQP(**SETUP_ARGS, ssl=True, use_blocking=True)
+
+ # Check if SSL paths are correctly loaded and blocking is used
+ self.assertEqual(consumer.tls_dict['crt'], 'test_cert.crt')
+ self.assertEqual(consumer.tls_dict['key'], 'test_key.key')
+ self.assertEqual(consumer.tls_dict['ca'], 'test_ca.ca')
+
+ @patch.dict('os.environ', {
+ 'RABBITMQ_CERT': '',
+ 'RABBITMQ_KEY': '',
+ 'RABBITMQ_CAFILE': ''
+ })
+ def test_ssl_setup_with_missing_paths(self):
+ with self.assertRaises(ValidationError):
+ MrsalAMQP(**SETUP_ARGS, ssl=True, use_blocking=True)
+
+ @patch.dict(os.environ, {}, clear=True)
+ def test_ssl_setup_without_env_vars(self):
+ with self.assertRaises(ValidationError):
+ MrsalAMQP(**SETUP_ARGS, ssl=True, use_blocking=True)
+
+
+if __name__ == '__main__':
+ unittest.main()
diff --git a/tests/test_concurrent/test_concurrent_consumers.py b/tests/test_concurrent/test_concurrent_consumers.py
deleted file mode 100644
index 93359f5..0000000
--- a/tests/test_concurrent/test_concurrent_consumers.py
+++ /dev/null
@@ -1,97 +0,0 @@
-import json
-import time
-
-import pika
-from pika.exchange_type import ExchangeType
-from loguru import logger as log
-
-import mrsal.config.config as config
-import tests.config as test_config
-
-from mrsal.mrsal import Mrsal
-
-
-
-mrsal = Mrsal(host=test_config.HOST, port=config.RABBITMQ_PORT, credentials=config.RABBITMQ_CREDENTIALS, virtual_host=config.V_HOST)
-mrsal.connect_to_server()
-
-APP_ID = "TEST_CONCURRENT_CONSUMERS"
-EXCHANGE = "CLINIC"
-EXCHANGE_TYPE = ExchangeType.direct
-QUEUE_EMERGENCY = "EMERGENCY"
-NUM_THREADS = 3
-NUM_MESSAGES = 3
-INACTIVITY_TIMEOUT = 3
-ROUTING_KEY = "PROCESS FOR EMERGENCY"
-MESSAGE_ID = "HOSPITAL_EMERGENCY_MRI_"
-
-
-def test_concurrent_consumer():
- # Delete existing queues and exchanges to use
- mrsal.exchange_delete(exchange=EXCHANGE)
- mrsal.queue_delete(queue=QUEUE_EMERGENCY)
- # ------------------------------------------
- # Setup exchange
- exch_result: pika.frame.Method = mrsal.setup_exchange(exchange=EXCHANGE, exchange_type=EXCHANGE_TYPE)
- assert exch_result is not None
- # ------------------------------------------
- # Setup queue for madrid agreements
- q_result: pika.frame.Method = mrsal.setup_queue(queue=QUEUE_EMERGENCY)
- assert q_result is not None
-
- # Bind queue to exchange with binding key
- qb_result: pika.frame.Method = mrsal.setup_queue_binding(exchange=EXCHANGE, routing_key=ROUTING_KEY, queue=QUEUE_EMERGENCY)
- assert qb_result is not None
- # ------------------------------------------
- # Publisher:
- # Publish NUM_MESSAGES to the queue
- for msg_index in range(NUM_MESSAGES):
- prop = pika.BasicProperties(
- app_id=APP_ID,
- message_id=MESSAGE_ID + str(msg_index),
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers=None,
- )
- message = "MRI_" + str(msg_index)
- mrsal.publish_message(exchange=EXCHANGE, routing_key=ROUTING_KEY, message=json.dumps(message), prop=prop)
- # ------------------------------------------
- time.sleep(1)
- # Confirm messages are routed to the queue
- result1 = mrsal.setup_queue(queue=QUEUE_EMERGENCY, passive=True)
- message_count1 = result1.method.message_count
- assert message_count1 == NUM_MESSAGES
- # ------------------------------------------
- # Start concurrent consumers
- start_time = time.time()
- mrsal.start_concurrence_consumer(
- total_threads=NUM_THREADS,
- queue=QUEUE_EMERGENCY,
- callback=consumer_callback_with_delivery_info,
- callback_args=(test_config.HOST, QUEUE_EMERGENCY),
- exchange=EXCHANGE,
- exchange_type=EXCHANGE_TYPE,
- routing_key=ROUTING_KEY,
- inactivity_timeout=INACTIVITY_TIMEOUT,
- callback_with_delivery_info=True,
- )
- duration = time.time() - start_time
- log.info(f"Concurrent consumers are done in {duration} seconds")
- # ------------------------------------------
- # Confirm messages are consumed
- result2 = mrsal.setup_queue(queue=QUEUE_EMERGENCY, passive=True)
- message_count2 = result2.method.message_count
- assert message_count2 == 0
-
- mrsal.close_connection()
-
-
-def consumer_callback_with_delivery_info(host_param: str, queue_param: str, method_frame: pika.spec.Basic.Deliver, properties: pika.spec.BasicProperties, message_param: str):
- time.sleep(5)
- return True
-
-
-def consumer_callback(host_param: str, queue_param: str, message_param: str):
- time.sleep(5)
- return True
diff --git a/tests/test_concurrent/test_publisher.py b/tests/test_concurrent/test_publisher.py
deleted file mode 100644
index 75b7723..0000000
--- a/tests/test_concurrent/test_publisher.py
+++ /dev/null
@@ -1,48 +0,0 @@
-"""
-This test is just a message publisher that can be used after running\
-the concurrent consumers in "test_concurrent_consumers.py" to test "inactivity_timeout".
-"""
-
-import json
-
-import pika
-from pika.exchange_type import ExchangeType
-
-import mrsal.config.config as config
-import tests.config as test_config
-
-from mrsal.mrsal import Mrsal
-
-
-mrsal = Mrsal(host=test_config.HOST, port=config.RABBITMQ_PORT, credentials=config.RABBITMQ_CREDENTIALS, virtual_host=config.V_HOST)
-mrsal.connect_to_server()
-
-APP_ID = "TEST_CONCURRENT_CONSUMERS"
-EXCHANGE = "CLINIC"
-EXCHANGE_TYPE = ExchangeType.direct
-QUEUE_EMERGENCY = "EMERGENCY"
-NUM_THREADS = 3
-NUM_MESSAGES = 2
-INACTIVITY_TIMEOUT = 10
-ROUTING_KEY = "PROCESS FOR EMERGENCY"
-MESSAGE_ID = "HOSPITAL_EMERGENCY_CT_"
-
-
-def test_concurrent_consumer():
- # ------------------------------------------
- # Publisher:
- # Publish NUM_MESSAGES to the queue
- for msg_index in range(NUM_MESSAGES):
- prop = pika.BasicProperties(
- app_id=APP_ID,
- message_id=MESSAGE_ID + str(msg_index),
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers=None,
- )
- message = "CT_" + str(msg_index)
- mrsal.publish_message(exchange=EXCHANGE, routing_key=ROUTING_KEY, message=json.dumps(message), prop=prop)
- # ------------------------------------------
-
- mrsal.close_connection()
diff --git a/tests/test_delay_and_dl_messages/test_dead_and_delay_letters.py b/tests/test_delay_and_dl_messages/test_dead_and_delay_letters.py
deleted file mode 100644
index 4a1ab42..0000000
--- a/tests/test_delay_and_dl_messages/test_dead_and_delay_letters.py
+++ /dev/null
@@ -1,185 +0,0 @@
-import json
-import time
-
-import pika
-from loguru import logger as log
-
-import mrsal.config.config as config
-import tests.config as test_config
-
-from mrsal.mrsal import Mrsal
-
-
-mrsal = Mrsal(host=test_config.HOST, port=config.RABBITMQ_PORT, credentials=config.RABBITMQ_CREDENTIALS, virtual_host=config.V_HOST)
-mrsal.connect_to_server()
-
-
-def test_delay_and_dead_letters():
- # Delete existing queues and exchanges to use
- mrsal.exchange_delete(exchange="agreements")
- mrsal.exchange_delete(exchange="dl_agreements")
- mrsal.queue_delete(queue="agreements_queue")
- mrsal.queue_delete(queue="dl_agreements_queue")
- # ------------------------------------------
-
- # Setup dead letters exchange
- exch_result1: pika.frame.Method = mrsal.setup_exchange(exchange="dl_agreements", exchange_type="direct")
- assert exch_result1 is not None
-
- # Setup main exchange with 'x-delayed-message' type
- # and arguments where we specify how the messages will be routed after the
- # delay period specified
- exch_result2: pika.frame.Method = mrsal.setup_exchange(exchange="agreements", exchange_type="x-delayed-message", arguments={"x-delayed-type": "direct"})
- assert exch_result2 is not None
- # ------------------------------------------
-
- # Setup main queue with arguments where we specify DL_EXCHANGE,
- # DL_ROUTING_KEY and TTL
- q_result1: pika.frame.Method = mrsal.setup_queue(
- queue="agreements_queue", arguments={"x-dead-letter-exchange": "dl_agreements", "x-dead-letter-routing-key": "dl_agreements_key", "x-message-ttl": test_config.MESSAGE_TTL}
- )
- assert q_result1 is not None
-
- # Bind main queue to the main exchange with routing_key
- qb_result1: pika.frame.Method = mrsal.setup_queue_binding(exchange="agreements", routing_key="agreements_key", queue="agreements_queue")
- assert qb_result1 is not None
- # ------------------------------------------
-
- # Bind DL_QUEUE to DL_EXCHANGE with DL_ROUTING_KEY
- q_result2: pika.frame.Method = mrsal.setup_queue(queue="dl_agreements_queue")
- assert q_result2 is not None
-
- qb_result2: pika.frame.Method = mrsal.setup_queue_binding(exchange="dl_agreements", routing_key="dl_agreements_key", queue="dl_agreements_queue")
- assert qb_result2 is not None
- # ------------------------------------------
-
- """
- Publisher:
- Message ("uuid1") is published with x-delay=2000
- Message ("uuid2") is published with x-delay=1000
- Message ("uuid3") is published with x-delay=3000
- Message ("uuid4") is published with x-delay=4000
- """
- x_delay1: int = 2000 # ms
- message1 = "uuid1"
- prop1 = pika.BasicProperties(
- app_id="test_exchange_dead_and_delay_letters",
- message_id="uuid1_2000ms",
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers={"x-delay": x_delay1},
- )
- mrsal.publish_message(exchange="agreements", routing_key="agreements_key", message=json.dumps(message1), prop=prop1)
-
- x_delay2: int = 1000
- message2 = "uuid2"
- prop2 = pika.BasicProperties(
- app_id="test_exchange_dead_and_delay_letters",
- message_id="uuid2_1000ms",
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers={"x-delay": x_delay2},
- )
- mrsal.publish_message(exchange="agreements", routing_key="agreements_key", message=json.dumps(message2), prop=prop2)
-
- x_delay3: int = 3000
- message3 = "uuid3"
- prop3 = pika.BasicProperties(
- app_id="test_exchange_dead_and_delay_letters",
- message_id="uuid3_3000ms",
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers={"x-delay": x_delay3},
- )
- mrsal.publish_message(exchange="agreements", routing_key="agreements_key", message=json.dumps(message3), prop=prop3)
-
- x_delay4: int = 4000
- message4 = "uuid4"
- prop4 = pika.BasicProperties(
- app_id="test_exchange_dead_and_delay_letters",
- message_id="uuid4_4000ms",
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers={"x-delay": x_delay4},
- )
- mrsal.publish_message(exchange="agreements", routing_key="agreements_key", message=json.dumps(message4), prop=prop4)
- # ------------------------------------------
-
- log.info('===== Start consuming from "agreements_queue" ========')
- """
- Consumer from main queue
- Message ("uuid2"): Consumed first because its delivered from exchange to the queue
- after x-delay=1000ms which is the shortest time.
- - This message is rejected by consumer's callback.
- - Therefor it will be negatively-acknowledged by consumer.
- - Then it will be forwarded to dead-letters-exchange \
- (x-first-death-reason: rejected).
- Message ("uuid1"): Consumed at second place because its x-delay = 2000 ms.
- - This message is positively-acknowledged by consumer.
- - Then it will be deleted from queue.
- Message ("uuid3"): Consumed at third place because its x-delay = 3000 ms.
- - This message has processing time in the consumer's callback equal to 3s
- which is greater that TTL=2s.
- - After processing will be positively-acknowledged by consumer.
- - Then it will be deleted from queue.
- Message ("uuid4"): Consumed at fourth place because its x-delay = 4000 ms.
- - This message will be forwarded to dead-letters-exchange
- because it spent in the queue more than TTL=2s waiting "uuid3" to be \
- processed (x-first-death-reason: expired).
- """
- mrsal.start_consumer(
- queue="agreements_queue",
- callback=consumer_callback,
- callback_args=(test_config.HOST, "agreements_queue"),
- inactivity_timeout=6,
- requeue=False,
- callback_with_delivery_info=True,
- )
- # ------------------------------------------
-
- # Confirm messages are routed to respected queue
- result = mrsal.setup_queue(queue="dl_agreements_queue")
- message_count = result.method.message_count
- assert message_count == 2
- # ------------------------------------------
-
- log.info('===== Start consuming from "dl_agreements_queue" ========')
- """
- Consumer from dead letters queue
- Message ("uuid2"):
- - This message is positively-acknowledged by consumer.
- - Then it will be deleted from dl-queue.
- Message ("uuid4"):
- - This message is positively-acknowledged by consumer.
- - Then it will be deleted from dl-queue.
- """
-
- mrsal.start_consumer(
- queue="dl_agreements_queue",
- callback=consumer_dead_letters_callback,
- callback_args=(test_config.HOST, "dl_agreements_queue"),
- inactivity_timeout=3,
- requeue=False,
- callback_with_delivery_info=True,
- )
- # ------------------------------------------
-
- # Confirm messages are consumed
- result = mrsal.setup_queue(queue="dl_agreements_queue")
- message_count = result.method.message_count
- log.info(f'Message count in queue "dl_agreements_queue" after consuming= {message_count}')
- assert message_count == 0
-
-
-def consumer_callback(host: str, queue: str, method_frame: pika.spec.Basic.Deliver, properties: pika.spec.BasicProperties, message: str):
- if message == b'"\\"uuid3\\""':
- time.sleep(3)
- return message != b'"\\"uuid2\\""'
-
-
-def consumer_dead_letters_callback(host_param: str, queue_param: str, method_frame: pika.spec.Basic.Deliver, properties: pika.spec.BasicProperties, message_param: str):
- return True
diff --git a/tests/test_delay_and_dl_messages/test_dead_letters.py b/tests/test_delay_and_dl_messages/test_dead_letters.py
deleted file mode 100644
index 5034016..0000000
--- a/tests/test_delay_and_dl_messages/test_dead_letters.py
+++ /dev/null
@@ -1,178 +0,0 @@
-import json
-import time
-
-import pika
-from loguru import logger as log
-import mrsal.config.config as config
-import tests.config as test_config
-
-
-from mrsal.mrsal import Mrsal
-
-
-
-mrsal = Mrsal(host=test_config.HOST, port=config.RABBITMQ_PORT, credentials=config.RABBITMQ_CREDENTIALS, virtual_host=config.V_HOST, verbose=True)
-mrsal.connect_to_server()
-
-
-def test_dead_letters():
- # Delete existing queues and exchanges to use
- mrsal.exchange_delete(exchange="agreements")
- mrsal.exchange_delete(exchange="dl_agreements")
- mrsal.queue_delete(queue="agreements_queue")
- mrsal.queue_delete(queue="dl_agreements_queue")
- # ------------------------------------------
-
- # Setup dead letters exchange
- exch_result1: pika.frame.Method = mrsal.setup_exchange(exchange="dl_agreements", exchange_type="direct")
- assert exch_result1 is not None
-
- # Setup main exchange
- exch_result2: pika.frame.Method = mrsal.setup_exchange(exchange="agreements", exchange_type="direct")
- assert exch_result2 is not None
- # ------------------------------------------
-
- # Setup main queue with arguments where we specify DL_EXCHANGE,
- # DL_ROUTING_KEY and TTL
- q_result1: pika.frame.Method = mrsal.setup_queue(
- queue="agreements_queue", arguments={"x-dead-letter-exchange": "dl_agreements", "x-dead-letter-routing-key": "dl_agreements_key", "x-message-ttl": test_config.MESSAGE_TTL}
- )
- assert q_result1 is not None
-
- # Bind main queue to the main exchange with routing_key
- qb_result1: pika.frame.Method = mrsal.setup_queue_binding(exchange="agreements", routing_key="agreements_key", queue="agreements_queue")
- assert qb_result1 is not None
- # ------------------------------------------
-
- # Bind DL_QUEUE to DL_EXCHANGE with DL_ROUTING_KEY
- q_result2: pika.frame.Method = mrsal.setup_queue(queue="dl_agreements_queue")
- assert q_result2 is not None
-
- qb_result2: pika.frame.Method = mrsal.setup_queue_binding(exchange="dl_agreements", routing_key="dl_agreements_key", queue="dl_agreements_queue")
- assert qb_result2 is not None
- # ------------------------------------------
-
- """
- Publisher:
- Message ("uuid1") is published
- Message ("uuid2") is published
- Message ("uuid3") is published
- Message ("uuid4") is published
- """
- message1 = "uuid1"
- prop1 = pika.BasicProperties(
- app_id="test_exchange_dead_letters",
- message_id="msg_uuid1",
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers=None,
- )
- mrsal.publish_message(exchange="agreements", routing_key="agreements_key", message=json.dumps(message1), prop=prop1)
-
- message2 = "uuid2"
- prop2 = pika.BasicProperties(
- app_id="test_exchange_dead_letters",
- message_id="msg_uuid2",
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers=None,
- )
- mrsal.publish_message(exchange="agreements", routing_key="agreements_key", message=json.dumps(message2), prop=prop2)
-
- message3 = "uuid3"
- prop3 = pika.BasicProperties(
- app_id="test_exchange_dead_letters",
- message_id="msg_uuid3",
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers=None,
- )
- mrsal.publish_message(exchange="agreements", routing_key="agreements_key", message=json.dumps(message3), prop=prop3)
-
- message4 = "uuid4"
- prop4 = pika.BasicProperties(
- app_id="test_exchange_dead_letters",
- message_id="msg_uuid4",
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers=None,
- )
- mrsal.publish_message(exchange="agreements", routing_key="agreements_key", message=json.dumps(message4), prop=prop4)
- # ------------------------------------------
-
- log.info('===== Start consuming from "agreements_queue" ========')
- """
- Consumer from main queue
- Message ("uuid1"):
- - This message is positively-acknowledged by consumer.
- - Then it will be deleted from queue.
- Message ("uuid2"):
- - This message is rejected by consumer's callback.
- - Therefor it will be negatively-acknowledged by consumer.
- - Then it will be forwarded to dead-letters-exchange \
- (x-first-death-reason: rejected).
- Message ("uuid3"):
- - This message has a processing time of 3s in the consumer's callback,
- which is greater than TTL=2s.
- - After processing will be positively-acknowledged by consumer.
- - Then it will be deleted from queue.
- Message ("uuid4"):
- - This message will be forwarded to dead-letters-exchange
- because it spent more than TTL=2s in the queue waiting for "uuid3" to be \
- processed (x-first-death-reason: expired).
- """
- mrsal.start_consumer(
- queue="agreements_queue",
- callback=consumer_callback,
- callback_args=(test_config.HOST, "agreements_queue"),
- inactivity_timeout=6,
- requeue=False,
- callback_with_delivery_info=True,
- )
- # ------------------------------------------
-
- # Confirm messages are routed to the respective queue
- result = mrsal.setup_queue(queue="dl_agreements_queue")
- message_count = result.method.message_count
- assert message_count == 2
- # ------------------------------------------
-
- log.info('===== Start consuming from "dl_agreements_queue" ========')
- """
- Consumer from dead letters queue
- Message ("uuid2"):
- - This message is positively-acknowledged by consumer.
- - Then it will be deleted from dl-queue.
- Message ("uuid4"):
- - This message is positively-acknowledged by consumer.
- - Then it will be deleted from dl-queue.
- """
- mrsal.start_consumer(
- queue="dl_agreements_queue",
- callback=consumer_dead_letters_callback,
- callback_args=(test_config.HOST, "dl_agreements_queue"),
- inactivity_timeout=3,
- requeue=False,
- callback_with_delivery_info=True,
- )
- # ------------------------------------------
-
- # Confirm messages are consumed
- result = mrsal.setup_queue(queue="dl_agreements_queue")
- message_count = result.method.message_count
- log.info(f'Message count in queue "dl_agreements_queue" after consuming= {message_count}')
- assert message_count == 0
-
-
-def consumer_callback(host: str, queue: str, method_frame: pika.spec.Basic.Deliver, properties: pika.spec.BasicProperties, message: str):
- if message == b'"\\"uuid3\\""':
- time.sleep(3)
- return message != b'"\\"uuid2\\""'
-
-
-def consumer_dead_letters_callback(host_param: str, queue_param: str, method_frame: pika.spec.Basic.Deliver, properties: pika.spec.BasicProperties, message_param: str):
- return True
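
The dead-letter outcomes asserted above (two of four messages ending up in `dl_agreements_queue`) follow directly from the broker's rules. As a rough standalone sketch — plain Python, not MRSAL or pika, with a deliberately simplified timing model — the routing decisions look like this:

```python
# Simplified model of dead-lettering: a message is dead-lettered when the
# consumer rejects it (x-first-death-reason: rejected) or when it waits in
# the queue longer than the TTL (x-first-death-reason: expired).

def route_messages(messages, ttl_s=2.0, processing_time=None, rejected=()):
    """Return (acked, dead_lettered); dead_lettered maps message id to its
    x-first-death-reason. Messages are processed strictly in order."""
    processing_time = processing_time or {}
    acked, dead = [], {}
    waited = 0.0  # time spent waiting in the queue so far
    for msg in messages:
        if waited > ttl_s:
            dead[msg] = "expired"    # TTL exceeded while waiting
            continue
        if msg in rejected:
            dead[msg] = "rejected"   # consumer nacked without requeue
        else:
            acked.append(msg)
        waited += processing_time.get(msg, 0.0)
    return acked, dead

acked, dead = route_messages(
    ["uuid1", "uuid2", "uuid3", "uuid4"],
    processing_time={"uuid3": 3.0},  # "uuid3" takes 3s, beyond TTL=2s
    rejected={"uuid2"},
)
# acked == ["uuid1", "uuid3"]; dead == {"uuid2": "rejected", "uuid4": "expired"}
```

This reproduces why the test expects exactly two messages in the dead-letters queue.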
diff --git a/tests/test_delay_and_dl_messages/test_delay_letters.py b/tests/test_delay_and_dl_messages/test_delay_letters.py
deleted file mode 100644
index 501871e..0000000
--- a/tests/test_delay_and_dl_messages/test_delay_letters.py
+++ /dev/null
@@ -1,83 +0,0 @@
-import json
-
-import pika
-from loguru import logger as log
-import mrsal.config.config as config
-import tests.config as test_config
-
-from mrsal.mrsal import Mrsal
-
-
-mrsal = Mrsal(host=test_config.HOST, port=config.RABBITMQ_PORT, credentials=config.RABBITMQ_CREDENTIALS, virtual_host=config.V_HOST)
-mrsal.connect_to_server()
-
-
-def test_delay_letter():
- # Delete existing queues and exchanges to use
- mrsal.exchange_delete(exchange="agreements")
- mrsal.queue_delete(queue="agreements_queue")
- # ------------------------------------------
- # Setup exchange with 'x-delayed-message' type
- # and arguments where we specify how the messages will be routed after the
- # delay period specified
- exch_result: pika.frame.Method = mrsal.setup_exchange(exchange="agreements", exchange_type="x-delayed-message", arguments={"x-delayed-type": "direct"})
- assert exch_result is not None
- # ------------------------------------------
-
- # Setup queue
- q_result: pika.frame.Method = mrsal.setup_queue(queue="agreements_queue")
- assert q_result is not None
-
- # Bind queue to exchange with routing_key
- qb_result: pika.frame.Method = mrsal.setup_queue_binding(exchange="agreements", routing_key="agreements_key", queue="agreements_queue")
- assert qb_result is not None
- # ------------------------------------------
-
- """
- Publisher:
- Message ("uuid1") is published with x-delay=3000
- Message ("uuid2") is published with x-delay=1000
- """
- x_delay1: int = 3000
- message1 = "uuid1"
- prop1 = pika.BasicProperties(
- app_id="test_exchange_delay_letters",
- message_id="uuid1_3000ms",
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers={"x-delay": x_delay1},
- )
- mrsal.publish_message(exchange="agreements", routing_key="agreements_key", message=json.dumps(message1), prop=prop1)
-
- x_delay2: int = 1000
- message2 = "uuid2"
- prop2 = pika.BasicProperties(
- app_id="test_exchange_delay_letters",
- message_id="uuid2_1000ms",
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers={"x-delay": x_delay2},
- )
- mrsal.publish_message(exchange="agreements", routing_key="agreements_key", message=json.dumps(message2), prop=prop2)
- # ------------------------------------------
-
- log.info('===== Start consuming from "agreements_queue" ========')
- """
- Consumer from main queue
- Message ("uuid2"): Consumed first because it's delivered from the exchange
- to the queue after x-delay=1000ms, which is the shorter delay.
- Message ("uuid1"): Consumed second because its x-delay = 3000 ms.
- """
- mrsal.start_consumer(queue="agreements_queue", callback=consumer_callback, callback_args=(test_config.HOST, "agreements_queue"), inactivity_timeout=3, requeue=False)
- # ------------------------------------------
-
- log.info("===== Confirm messages are consumed ========")
- result = mrsal.setup_queue(queue="agreements_queue")
- message_count = result.method.message_count
- assert message_count == 0
-
-
-def consumer_callback(host: str, queue: str, message: str):
- return True
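
The ordering the test expects can be shown with a tiny standalone sketch (plain Python, not the plugin itself): the `x-delayed-message` exchange holds each message for its `x-delay` milliseconds before routing it, so delivery order follows the delay rather than the publish order.

```python
# Sketch of delayed delivery: messages leave the exchange in order of their
# "x-delay" header (milliseconds), not in publish order.

def delivery_order(published):
    """published: list of (message, headers) in publish order."""
    return [msg for msg, hdrs in sorted(published, key=lambda p: p[1]["x-delay"])]

order = delivery_order([
    ("uuid1", {"x-delay": 3000}),  # published first, delivered second
    ("uuid2", {"x-delay": 1000}),  # published second, delivered first
])
# order == ["uuid2", "uuid1"]
```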
diff --git a/tests/test_delay_and_dl_messages/test_exceptions.py b/tests/test_delay_and_dl_messages/test_exceptions.py
deleted file mode 100644
index ceb29eb..0000000
--- a/tests/test_delay_and_dl_messages/test_exceptions.py
+++ /dev/null
@@ -1,85 +0,0 @@
-import os
-
-import pika
-import pytest
-from pika.exchange_type import ExchangeType
-import mrsal.config.config as config
-import tests.config as test_config
-
-from mrsal.mrsal import Mrsal
-
-
-def test_connection_exceptions():
- failed_host = "not_exist_localhost"
- mrsal1 = Mrsal(host=failed_host, port=config.RABBITMQ_PORT, credentials=(config.RABBITMQ_USER, config.RABBITMQ_PASSWORD), virtual_host=config.V_HOST)
- with pytest.raises(pika.exceptions.AMQPConnectionError):
- mrsal1.connect_to_server()
-
- host = os.environ.get("RABBITMQ_HOST", "localhost")
- failed_v_hold_type = 123
- mrsal2 = Mrsal(host=host, port=config.RABBITMQ_PORT, credentials=("root", "password"), virtual_host=failed_v_hold_type)
- with pytest.raises(pika.exceptions.AMQPConnectionError):
- mrsal2.connect_to_server()
-
- failed_password = "123"
- mrsal3 = Mrsal(host=host, port=config.RABBITMQ_PORT, credentials=(config.RABBITMQ_USER, failed_password), virtual_host=config.V_HOST)
- with pytest.raises(pika.exceptions.AMQPConnectionError):
- mrsal3.connect_to_server()
-
- failed_username = "root1"
- mrsal4 = Mrsal(host=host, port=config.RABBITMQ_PORT, credentials=(failed_username, config.RABBITMQ_PASSWORD), virtual_host=config.V_HOST)
- with pytest.raises(pika.exceptions.AMQPConnectionError):
- mrsal4.connect_to_server()
-
- failed_port = 123
- mrsal5 = Mrsal(host=host, port=failed_port, credentials=(config.RABBITMQ_USER, config.RABBITMQ_PASSWORD), virtual_host=config.V_HOST)
- with pytest.raises(pika.exceptions.AMQPConnectionError):
- mrsal5.connect_to_server()
-
-
-def test_exchange_exceptions():
- host = os.environ.get("RABBITMQ_HOST", "localhost")
- mrsal = Mrsal(host=host, port=config.RABBITMQ_PORT, credentials=(config.RABBITMQ_USER, config.RABBITMQ_PASSWORD), virtual_host=config.V_HOST)
- mrsal.connect_to_server()
-
- with pytest.raises(pika.exceptions.ConnectionClosedByBroker):
- mrsal.setup_exchange(exchange=test_config.EXCHANGE, exchange_type="not_exist")
-
- with pytest.raises(TypeError):
- mrsal.setup_exchange(test_config.EXCHANGE_TYPE)
-
-
-def test_queue_exceptions():
- host = os.environ.get("RABBITMQ_HOST", "localhost")
- mrsal = Mrsal(host=host, port=config.RABBITMQ_PORT, credentials=(config.RABBITMQ_USER, config.RABBITMQ_PASSWORD), virtual_host=config.V_HOST)
- mrsal.connect_to_server()
- not_exist_queue = "not_exist"
- with pytest.raises(pika.exceptions.ConnectionClosedByBroker):
- mrsal.setup_queue(queue=not_exist_queue, passive=True)
-
-
-def test_bind_exceptions():
- host = os.environ.get("RABBITMQ_HOST", "localhost")
- mrsal = Mrsal(host=host, port=config.RABBITMQ_PORT, credentials=(config.RABBITMQ_USER, config.RABBITMQ_PASSWORD), virtual_host=config.V_HOST)
- mrsal.connect_to_server()
- exchange = "not_exist_exch"
- queue = "not_exist_queue"
- routing_key = "whatever"
- with pytest.raises(pika.exceptions.ConnectionClosedByBroker):
- mrsal.setup_queue_binding(exchange=exchange, queue=queue, routing_key=routing_key)
-
-
-def test_active_exchange_exceptions():
- mrsal = Mrsal(host=test_config.HOST, port=config.RABBITMQ_PORT, credentials=config.RABBITMQ_CREDENTIALS, virtual_host=config.V_HOST)
- mrsal.connect_to_server()
- exchange = "not_exist_exch"
- with pytest.raises(pika.exceptions.ConnectionClosedByBroker):
- mrsal.exchange_exist(exchange=exchange, exchange_type=ExchangeType.direct)
-
-
-def test_active_queue_exceptions():
- mrsal = Mrsal(host=test_config.HOST, port=config.RABBITMQ_PORT, credentials=config.RABBITMQ_CREDENTIALS, virtual_host=config.V_HOST)
- mrsal.connect_to_server()
- queue = "not_exist_queue"
- with pytest.raises(pika.exceptions.ConnectionClosedByBroker):
- mrsal.queue_exist(queue=queue)
diff --git a/tests/test_delay_and_dl_messages/test_quorum_delivery_limit.py b/tests/test_delay_and_dl_messages/test_quorum_delivery_limit.py
deleted file mode 100644
index 731a80b..0000000
--- a/tests/test_delay_and_dl_messages/test_quorum_delivery_limit.py
+++ /dev/null
@@ -1,128 +0,0 @@
-"""
-- The quorum queue is a modern queue type for RabbitMQ implementing a durable,
- replicated FIFO queue based on the Raft consensus algorithm.
-- It is available as of RabbitMQ 3.8.0.
-- It is possible to set a delivery limit for a queue using a policy argument,
- delivery-limit.
-
-For more info: https://www.rabbitmq.com/quorum-queues.html
-"""
-
-import json
-import time
-
-import pika
-from loguru import logger as log
-
-import mrsal.config.config as config
-import tests.config as test_config
-
-from mrsal.mrsal import Mrsal
-
-
-
-mrsal = Mrsal(host=test_config.HOST, port=config.RABBITMQ_PORT, credentials=config.RABBITMQ_CREDENTIALS, virtual_host=config.V_HOST, verbose=True)
-mrsal.connect_to_server()
-
-
-def test_quorum_delivery_limit():
- # Delete existing queues and exchanges to use
- mrsal.exchange_delete(exchange="agreements")
- mrsal.queue_delete(queue="agreements_queue")
- # ------------------------------------------
- queue_arguments = {
- # Queue of quorum type
- "x-queue-type": "quorum",
- # Set a delivery limit for a queue using a policy argument, delivery-limit.
- # When a message has been returned more times than the limit, the message \
- # will be dropped or dead-lettered (if a DLX is configured).
- "x-delivery-limit": 3,
- }
-
- # ------------------------------------------
-
- # Setup main exchange
- exch_result1: pika.frame.Method = mrsal.setup_exchange(exchange="agreements", exchange_type="direct")
- assert exch_result1 is not None
- # ------------------------------------------
-
- # Setup main queue with arguments
- q_result1: pika.frame.Method = mrsal.setup_queue(queue="agreements_queue", arguments=queue_arguments)
- assert q_result1 is not None
-
- # Bind main queue to the main exchange with routing_key
- qb_result1: pika.frame.Method = mrsal.setup_queue_binding(exchange="agreements", routing_key="agreements_key", queue="agreements_queue")
- assert qb_result1 is not None
- # ------------------------------------------
-
- """
- Publisher:
- Message ("uuid1") is published
- Message ("uuid2") is published
- """
- message1 = "uuid1"
- prop1 = pika.BasicProperties(
- app_id="test_delivery-limit",
- message_id="msg_uuid1",
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers=None,
- )
- mrsal.publish_message(exchange="agreements", routing_key="agreements_key", message=json.dumps(message1), prop=prop1)
-
- message2 = "uuid2"
- prop2 = pika.BasicProperties(
- app_id="test_delivery-limit",
- message_id="msg_uuid2",
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers=None,
- )
- mrsal.publish_message(exchange="agreements", routing_key="agreements_key", message=json.dumps(message2), prop=prop2)
-
- # ------------------------------------------
- time.sleep(1)
-
- # Confirm messages are published
- result: pika.frame.Method = mrsal.setup_queue(queue="agreements_queue", passive=True, arguments=queue_arguments)
- message_count = result.method.message_count
- log.info(f'Message count in queue "agreements_queue" before consuming= {message_count}')
- assert message_count == 2
-
- log.info('===== Start consuming from "agreements_queue" ========')
- """
- Consumer from main queue
- Message ("uuid1"):
- - This message is positively-acknowledged by consumer.
- - Then it will be deleted from queue.
- Message ("uuid2"):
- - This message is rejected by consumer's callback.
- - Therefore it will be negatively-acknowledged by consumer.
- - Then it will be redelivered until either it's acknowledged or the \
- x-delivery-limit is reached.
- """
- mrsal.start_consumer(
- queue="agreements_queue",
- callback=consumer_callback,
- callback_args=(test_config.HOST, "agreements_queue"),
- inactivity_timeout=1,
- requeue=True,
- callback_with_delivery_info=True,
- )
- # ------------------------------------------
-
- # Confirm messages are consumed
- result: pika.frame.Method = mrsal.setup_queue(queue="agreements_queue", passive=True, arguments=queue_arguments)
- message_count = result.method.message_count
- log.info(f'Message count in queue "agreements_queue" after consuming= {message_count}')
- assert message_count == 0
-
-
-def consumer_callback(host: str, queue: str, method_frame: pika.spec.Basic.Deliver, properties: pika.spec.BasicProperties, message: str):
- return message != b'"\\"uuid2\\""'
-
-
-def consumer_dead_letters_callback(host_param: str, queue_param: str, method_frame: pika.spec.Basic.Deliver, properties: pika.spec.BasicProperties, message_param: str):
- return True
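
The `x-delivery-limit` behaviour exercised above can be modelled without a broker. This is an illustrative sketch only (plain Python, not quorum-queue internals): a message nacked with requeue is redelivered until it is acked or has been returned more times than the limit.

```python
# Model of x-delivery-limit on a quorum queue: count redeliveries and stop
# once the message has been returned more times than the limit.

def consume_with_limit(message, callback, delivery_limit=3):
    """Return ('acked' | 'dropped', redelivery_count) for one message."""
    redeliveries = 0
    while True:
        if callback(message):
            return "acked", redeliveries
        redeliveries += 1
        if redeliveries > delivery_limit:
            # dropped, or dead-lettered if a DLX is configured
            return "dropped", redeliveries

reject_uuid2 = lambda m: m != "uuid2"
outcome1 = consume_with_limit("uuid1", reject_uuid2)  # ("acked", 0)
outcome2 = consume_with_limit("uuid2", reject_uuid2)  # ("dropped", 4)
```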
diff --git a/tests/test_delay_and_dl_messages/test_redelivery_with_delay.py b/tests/test_delay_and_dl_messages/test_redelivery_with_delay.py
deleted file mode 100644
index b6aad5c..0000000
--- a/tests/test_delay_and_dl_messages/test_redelivery_with_delay.py
+++ /dev/null
@@ -1,102 +0,0 @@
-import json
-import time
-
-import pika
-from loguru import logger as log
-import mrsal.config.config as config
-import tests.config as test_config
-from mrsal.mrsal import Mrsal
-
-
-mrsal = Mrsal(host=test_config.HOST, port=config.RABBITMQ_PORT, credentials=config.RABBITMQ_CREDENTIALS, virtual_host=config.V_HOST, verbose=True)
-mrsal.connect_to_server()
-
-
-def test_redelivery_with_delay():
- # Delete existing queues and exchanges to use
- mrsal.exchange_delete(exchange="agreements")
- mrsal.queue_delete(queue="agreements_queue")
- # ------------------------------------------
-
- # Setup main exchange with delay type
- exch_result1: pika.frame.Method = mrsal.setup_exchange(exchange="agreements", exchange_type="x-delayed-message", arguments={"x-delayed-type": "direct"})
- assert exch_result1 is not None
- # ------------------------------------------
- # Setup main queue
- q_result1: pika.frame.Method = mrsal.setup_queue(queue="agreements_queue")
- assert q_result1 is not None
-
- # Bind main queue to the main exchange with routing_key
- qb_result1: pika.frame.Method = mrsal.setup_queue_binding(exchange="agreements", routing_key="agreements_key", queue="agreements_queue")
- assert qb_result1 is not None
- # ------------------------------------------
-
- """
- Publisher:
- Message ("uuid1") is published with delay 1 sec
- Message ("uuid2") is published with delay 2 sec
- """
- message1 = "uuid1"
- prop1 = pika.BasicProperties(
- app_id="test_delivery-limit",
- message_id="msg_uuid1",
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers={"x-delay": 1000, "x-retry-limit": 2},
- )
- mrsal.publish_message(exchange="agreements", routing_key="agreements_key", message=json.dumps(message1), prop=prop1)
-
- message2 = "uuid2"
- prop2 = pika.BasicProperties(
- app_id="test_delivery-limit",
- message_id="msg_uuid2",
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers={"x-delay": 2000, "x-retry-limit": 3, "x-retry": 0},
- )
- mrsal.publish_message(exchange="agreements", routing_key="agreements_key", message=json.dumps(message2), prop=prop2)
-
- # ------------------------------------------
- # Wait out the messages' delay period in the exchange; then they will be
- # delivered to the queue.
- time.sleep(3)
-
- # Confirm messages are published
- result: pika.frame.Method = mrsal.setup_queue(queue="agreements_queue", passive=True)
- message_count = result.method.message_count
- log.info(f'Message count in queue "agreements_queue" before consuming= {message_count}')
- assert message_count == 2
-
- log.info('===== Start consuming from "agreements_queue" ========')
- """
- Consumer from main queue
- Message ("uuid1"):
- - This message is positively-acknowledged by consumer.
- - Then it will be deleted from queue.
- Message ("uuid2"):
- - This message is rejected by consumer's callback.
- - Therefore it will be negatively-acknowledged by consumer.
- - Then it will be redelivered with an incremented x-retry until either \
- it is acknowledged or x-retry = x-retry-limit.
- """
- mrsal.start_consumer(
- queue="agreements_queue",
- callback=consumer_callback,
- callback_args=(test_config.HOST, "agreements_queue"),
- inactivity_timeout=8,
- requeue=False,
- callback_with_delivery_info=True,
- )
- # ------------------------------------------
-
- # Confirm messages are consumed
- result: pika.frame.Method = mrsal.setup_queue(queue="agreements_queue", passive=True)
- message_count = result.method.message_count
- log.info(f'Message count in queue "agreements_queue" after consuming= {message_count}')
- assert message_count == 0
-
-
-def consumer_callback(host: str, queue: str, method_frame: pika.spec.Basic.Deliver, properties: pika.spec.BasicProperties, message: str):
- return message != b'"\\"uuid2\\""'
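
The `x-retry`/`x-retry-limit` headers above drive the redelivery loop. As a hedged sketch of that bookkeeping (plain Python; MRSAL's actual republish logic may differ), each nack republishes the message with `x-retry` incremented until the limit is reached:

```python
# Sketch: compute the headers for the next republish after a nack, or None
# when x-retry has reached x-retry-limit.

def next_headers(headers):
    retry = headers.get("x-retry", 0)
    if retry >= headers["x-retry-limit"]:
        return None  # give up: retry limit reached
    return {**headers, "x-retry": retry + 1}

h = {"x-delay": 2000, "x-retry-limit": 3, "x-retry": 0}
attempts = 0
while h is not None:
    h = next_headers(h)
    if h is not None:
        attempts += 1
# attempts == 3: the message is republished at most x-retry-limit times
```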
diff --git a/tests/test_exchange_types/test_exchange_direct_workflow.py b/tests/test_exchange_types/test_exchange_direct_workflow.py
deleted file mode 100644
index 7254c5e..0000000
--- a/tests/test_exchange_types/test_exchange_direct_workflow.py
+++ /dev/null
@@ -1,119 +0,0 @@
-import json
-import time
-from loguru import logger as log
-import pika
-
-import mrsal.config.config as config
-import tests.config as test_config
-
-from mrsal.mrsal import Mrsal
-
-
-
-mrsal = Mrsal(host=test_config.HOST, port=config.RABBITMQ_PORT, credentials=config.RABBITMQ_CREDENTIALS, virtual_host=config.V_HOST)
-mrsal.connect_to_server()
-
-
-def test_direct_exchange_workflow():
- # Delete existing queues and exchanges to use
- mrsal.exchange_delete(exchange="agreements")
- mrsal.queue_delete(queue="agreements_berlin_queue")
- mrsal.queue_delete(queue="agreements_madrid_queue")
- # ------------------------------------------
-
- # Setup exchange
- exch_result: pika.frame.Method = mrsal.setup_exchange(exchange="agreements", exchange_type="direct")
- assert exch_result is not None
- # ------------------------------------------
-
- # Setup queue for berlin agreements
- q_result1: pika.frame.Method = mrsal.setup_queue(queue="agreements_berlin_queue")
- assert q_result1 is not None
-
- # Bind queue to exchange with binding key
- qb_result1: pika.frame.Method = mrsal.setup_queue_binding(exchange="agreements", routing_key="berlin agreements", queue="agreements_berlin_queue")
- assert qb_result1 is not None
- # ------------------------------------------
-
- # Setup queue for madrid agreements
- q_result2: pika.frame.Method = mrsal.setup_queue(queue="agreements_madrid_queue")
- assert q_result2 is not None
-
- # Bind queue to exchange with binding key
- qb_result2: pika.frame.Method = mrsal.setup_queue_binding(exchange="agreements", routing_key="madrid agreements", queue="agreements_madrid_queue")
- assert qb_result2 is not None
- # ------------------------------------------
-
- # Publisher:
- prop1 = pika.BasicProperties(
- app_id="test_exchange_direct",
- message_id="madrid_uuid",
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers=None,
- )
-
- # Message ("uuid2") is published to the exchange and it's routed to queue2
- message2 = "uuid2"
- mrsal.publish_message(exchange="agreements", routing_key="madrid agreements", message=json.dumps(message2), prop=prop1)
-
- prop2 = pika.BasicProperties(
- app_id="test_exchange_direct",
- message_id="berlin_uuid",
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers=None,
- )
- # Message ("uuid1") is published to the exchange and it's routed to queue1
- message1 = "uuid1"
- mrsal.publish_message(exchange="agreements", routing_key="berlin agreements", message=json.dumps(message1), prop=prop2)
- # ------------------------------------------
-
- time.sleep(1)
- # Confirm messages are routed to the respective queues
- result1 = mrsal.setup_queue(queue="agreements_berlin_queue", passive=True)
- message_count1 = result1.method.message_count
- assert message_count1 == 1
-
- result2 = mrsal.setup_queue(queue="agreements_madrid_queue", passive=True)
- message_count2 = result2.method.message_count
- assert message_count2 == 1
- # ------------------------------------------
-
- # Start consumer for every queue
- mrsal.start_consumer(
- queue="agreements_berlin_queue",
- callback=consumer_callback_with_delivery_info,
- callback_args=(test_config.HOST, "agreements_berlin_queue"),
- inactivity_timeout=1,
- requeue=False,
- callback_with_delivery_info=True,
- )
-
- mrsal.start_consumer(
- queue="agreements_madrid_queue",
- callback=consumer_callback,
- callback_args=(test_config.HOST, "agreements_madrid_queue"),
- inactivity_timeout=1,
- requeue=False,
- )
- # ------------------------------------------
-
- # Confirm messages are consumed
- result1 = mrsal.setup_queue(queue="agreements_berlin_queue", passive=True)
- message_count1 = result1.method.message_count
- assert message_count1 == 0
-
- result2 = mrsal.setup_queue(queue="agreements_madrid_queue", passive=True)
- message_count2 = result2.method.message_count
- assert message_count2 == 0
-
-
-def consumer_callback_with_delivery_info(host_param: str, queue_param: str, method_frame: pika.spec.Basic.Deliver, properties: pika.spec.BasicProperties, message_param: str):
- return True
-
-
-def consumer_callback(host_param: str, queue_param: str, message_param: str):
- return True
diff --git a/tests/test_exchange_types/test_exchange_headers_workflow.py b/tests/test_exchange_types/test_exchange_headers_workflow.py
deleted file mode 100644
index d15e50d..0000000
--- a/tests/test_exchange_types/test_exchange_headers_workflow.py
+++ /dev/null
@@ -1,105 +0,0 @@
-import json
-import time
-from loguru import logger as log
-import pika
-
-import mrsal.config.config as config
-import tests.config as test_config
-
-from mrsal.mrsal import Mrsal
-
-
-mrsal = Mrsal(host=test_config.HOST, port=config.RABBITMQ_PORT, credentials=config.RABBITMQ_CREDENTIALS, virtual_host=config.V_HOST)
-mrsal.connect_to_server()
-
-
-def test_headers_exchange_workflow():
- # Delete existing queues and exchanges to use
- mrsal.exchange_delete(exchange="agreements")
- mrsal.queue_delete(queue="zip_report")
- mrsal.queue_delete(queue="pdf_report")
- # ------------------------------------------
-
- # Setup exchange
- exch_result: pika.frame.Method = mrsal.setup_exchange(exchange="agreements", exchange_type="headers")
- assert exch_result is not None
- # ------------------------------------------
-
- # Setup queue
- q_result1: pika.frame.Method = mrsal.setup_queue(queue="zip_report")
- assert q_result1 is not None
-
- # Bind queue to exchange with arguments
- qb_result1: pika.frame.Method = mrsal.setup_queue_binding(exchange="agreements", queue="zip_report", arguments={"x-match": "all", "format": "zip", "type": "report"})
- assert qb_result1 is not None
- # ------------------------------------------
-
- # Setup queue
- q_result2: pika.frame.Method = mrsal.setup_queue(queue="pdf_report")
- assert q_result2 is not None
-
- # Bind queue to exchange with arguments
- qb_result2: pika.frame.Method = mrsal.setup_queue_binding(exchange="agreements", queue="pdf_report", arguments={"x-match": "any", "format": "pdf", "type": "log"})
- assert qb_result2 is not None
- # ------------------------------------------
-
- # Publisher:
- # Message ("uuid1") is published to the exchange with a set of headers
- prop1 = pika.BasicProperties(
- app_id="test_exchange_headers",
- message_id="zip_report",
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers={"format": "zip", "type": "report"},
- )
-
- message1 = "uuid1"
- mrsal.publish_message(exchange="agreements", routing_key="", message=json.dumps(message1), prop=prop1)
-
- # Message ("uuid2") is published to the exchange with a set of headers
- prop2 = pika.BasicProperties(
- app_id="test_exchange_headers",
- message_id="pdf_date",
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers={"format": "pdf", "date": "2022"},
- )
- message2 = "uuid2"
- mrsal.publish_message(exchange="agreements", routing_key="", message=json.dumps(message2), prop=prop2)
- # ------------------------------------------
-
- time.sleep(1)
- # Confirm messages are routed to the respective queues
- result1 = mrsal.setup_queue(queue="zip_report", passive=True)
- message_count1 = result1.method.message_count
- assert message_count1 == 1
-
- result2 = mrsal.setup_queue(queue="pdf_report", passive=True)
- message_count2 = result2.method.message_count
- assert message_count2 == 1
- # ------------------------------------------
-
- # Start consumer for every queue
- mrsal.start_consumer(
- queue="zip_report", callback=consumer_callback, callback_args=(test_config.HOST, "zip_report"), inactivity_timeout=2, requeue=False, callback_with_delivery_info=True
- )
-
- mrsal.start_consumer(
- queue="pdf_report", callback=consumer_callback, callback_args=(test_config.HOST, "pdf_report"), inactivity_timeout=2, requeue=False, callback_with_delivery_info=True
- )
- # ------------------------------------------
-
- # Confirm messages are consumed
- result1 = mrsal.setup_queue(queue="zip_report", passive=True)
- message_count1 = result1.method.message_count
- assert message_count1 == 0
-
- result2 = mrsal.setup_queue(queue="pdf_report", passive=True)
- message_count2 = result2.method.message_count
- assert message_count2 == 0
-
-
-def consumer_callback(host_param: str, queue_param: str, method_frame: pika.spec.Basic.Deliver, properties: pika.spec.BasicProperties, message_param: str):
- return True
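
The two bindings above differ only in `x-match`. A small standalone matcher (illustrative, not broker code) captures the rule: `all` requires every bound header to match, `any` requires at least one.

```python
# Headers-exchange matching: 'x-match: all' needs every binding header to
# equal the message header; 'x-match: any' needs at least one match.

def headers_match(binding, message_headers):
    pairs = {k: v for k, v in binding.items() if k != "x-match"}
    hits = sum(message_headers.get(k) == v for k, v in pairs.items())
    if binding.get("x-match") == "all":
        return hits == len(pairs)
    return hits > 0

zip_binding = {"x-match": "all", "format": "zip", "type": "report"}
pdf_binding = {"x-match": "any", "format": "pdf", "type": "log"}

msg1 = {"format": "zip", "type": "report"}  # matches zip_binding only
msg2 = {"format": "pdf", "date": "2022"}    # matches pdf_binding only
```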
diff --git a/tests/test_exchange_types/test_exchange_topic_workflow.py b/tests/test_exchange_types/test_exchange_topic_workflow.py
deleted file mode 100644
index fc989cd..0000000
--- a/tests/test_exchange_types/test_exchange_topic_workflow.py
+++ /dev/null
@@ -1,124 +0,0 @@
-import json
-import time
-from loguru import logger as log
-import pika
-
-import mrsal.config.config as config
-import tests.config as test_config
-
-from mrsal.mrsal import Mrsal
-
-
-mrsal = Mrsal(host=test_config.HOST, port=config.RABBITMQ_PORT, credentials=config.RABBITMQ_CREDENTIALS, virtual_host=config.V_HOST)
-mrsal.connect_to_server()
-
-
-def test_topic_exchange_workflow():
- # Messages will be published with this routing key
- ROUTING_KEY_1: str = "agreements.eu.berlin.august.2022"
- # Messages will be published with this routing key
- ROUTING_KEY_2: str = "agreements.eu.madrid.september.2022"
-
- BINDING_KEY_1: str = "agreements.eu.berlin.#" # Berlin agreements
- BINDING_KEY_2: str = "agreements.*.*.september.#" # Agreements of september
- # BINDING_KEY_3: str = 'agreements.#' # All agreements
- # ------------------------------------------
-
- # Delete existing queues and exchanges to use
- mrsal.exchange_delete(exchange="agreements")
- mrsal.queue_delete(queue="berlin_agreements")
- mrsal.queue_delete(queue="september_agreements")
-
- # Setup exchange
- exch_result: pika.frame.Method = mrsal.setup_exchange(exchange="agreements", exchange_type="topic")
- assert exch_result is not None
- # ------------------------------------------
-
- # Setup queue for berlin agreements
- q_result1: pika.frame.Method = mrsal.setup_queue(queue="berlin_agreements")
- assert q_result1 is not None
-
- # Bind queue to exchange with binding key
- qb_result1: pika.frame.Method = mrsal.setup_queue_binding(exchange="agreements", routing_key=BINDING_KEY_1, queue="berlin_agreements")
- assert qb_result1 is not None
- # ----------------------------------
-
- # Setup queue for september agreements
- q_result2: pika.frame.Method = mrsal.setup_queue(queue="september_agreements")
- assert q_result2 is not None
-
- # Bind queue to exchange with binding key
- qb_result2: pika.frame.Method = mrsal.setup_queue_binding(exchange="agreements", routing_key=BINDING_KEY_2, queue="september_agreements")
- assert qb_result2 is not None
- # ----------------------------------
-
- # Publisher:
-
- # Message ("uuid1") is published to the exchange and will be routed to queue1
- message1 = "uuid1"
- prop1 = pika.BasicProperties(
- app_id="test_exchange_topic",
- message_id="berlin",
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers=None,
- )
- mrsal.publish_message(exchange="agreements", routing_key=ROUTING_KEY_1, message=json.dumps(message1), prop=prop1)
-
- # Message ("uuid2") is published to the exchange and will be routed to queue2
- message2 = "uuid2"
- prop2 = pika.BasicProperties(
- app_id="test_exchange_topic",
- message_id="september",
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers=None,
- )
- mrsal.publish_message(exchange="agreements", routing_key=ROUTING_KEY_2, message=json.dumps(message2), prop=prop2)
- # ----------------------------------
-
- time.sleep(1)
- # Confirm messages are routed to the respective queues
- result1 = mrsal.setup_queue(queue="berlin_agreements", passive=True)
- message_count1 = result1.method.message_count
- assert message_count1 == 1
-
- result2 = mrsal.setup_queue(queue="september_agreements", passive=True)
- message_count2 = result2.method.message_count
- assert message_count2 == 1
- # ------------------------------------------
-
- # Start consumer for every queue
- mrsal.start_consumer(
- queue="berlin_agreements",
- callback=consumer_callback,
- callback_args=(test_config.HOST, "berlin_agreements"),
- inactivity_timeout=1,
- requeue=False,
- callback_with_delivery_info=True,
- )
-
- mrsal.start_consumer(
- queue="september_agreements",
- callback=consumer_callback,
- callback_args=(test_config.HOST, "september_agreements"),
- inactivity_timeout=1,
- requeue=False,
- callback_with_delivery_info=True,
- )
- # ------------------------------------------
-
- # Confirm messages are consumed
- result1 = mrsal.setup_queue(queue="berlin_agreements", passive=True)
- message_count1 = result1.method.message_count
- assert message_count1 == 0
-
- result2 = mrsal.setup_queue(queue="september_agreements", passive=True)
- message_count2 = result2.method.message_count
- assert message_count2 == 0
-
-
-def consumer_callback(host_param: str, queue_param: str, method_frame: pika.spec.Basic.Deliver, properties: pika.spec.BasicProperties, message_param: str):
- return True
diff --git a/tests/test_exchange_types/test_for_readme.py b/tests/test_exchange_types/test_for_readme.py
deleted file mode 100644
index ea95988..0000000
--- a/tests/test_exchange_types/test_for_readme.py
+++ /dev/null
@@ -1,73 +0,0 @@
-import json
-import time
-import pika
-
-import mrsal.config.config as config
-import tests.config as test_config
-from mrsal.mrsal import Mrsal
-
-
-mrsal = Mrsal(host=test_config.HOST, port=config.RABBITMQ_PORT, credentials=config.RABBITMQ_CREDENTIALS, virtual_host=config.V_HOST)
-mrsal.connect_to_server()
-
-
-def test_basic_workflow():
- # Delete existing queues and exchanges to use
- mrsal.exchange_delete(exchange="friendship")
- mrsal.queue_delete(queue="friendship_queue")
- # ------------------------------------------
-
- # Publisher:
- prop = pika.BasicProperties(
- app_id="friendship_app", message_id="friendship_msg", content_type="text/plain", content_encoding="utf-8", delivery_mode=pika.DeliveryMode.Persistent, headers=None
- )
-
- # Message is published to the exchange and routed to the queue
- message_body = "Hello"
- mrsal.publish_message(
- exchange="friendship", exchange_type="direct", queue="friendship_queue", routing_key="friendship_key", message=json.dumps(message_body), prop=prop, fast_setup=True
- )
- # ------------------------------------------
-
- time.sleep(1)
- # Confirm messages are routed to respective queue.
- result1 = mrsal.setup_queue(queue="friendship_queue", passive=True)
- message_count1 = result1.method.message_count
- assert message_count1 == 1
- # ------------------------------------------
-
- # Start consumer.
- mrsal.start_consumer(
- queue="friendship_queue",
- callback=consumer_callback_with_delivery_info,
- callback_args=(test_config.HOST, "friendship_queue"),
- inactivity_timeout=1,
- requeue=False,
- callback_with_delivery_info=True,
- )
-
- # ------------------------------------------
-
- # Confirm messages are consumed
- result1 = mrsal.setup_queue(queue="friendship_queue", passive=True)
- message_count1 = result1.method.message_count
- assert message_count1 == 0
-
-
-def consumer_callback_with_delivery_info(host_param: str, queue_param: str, method_frame: pika.spec.Basic.Deliver, properties: pika.spec.BasicProperties, message_param: str):
- str_message = json.loads(message_param).replace('"', "")
- if "Hello" in str_message:
- app_id = properties.app_id
- msg_id = properties.message_id
- print(f"app_id={app_id}, msg_id={msg_id}")
- print("Salaam habibi")
- return True # Consumed message processed correctly
- return False
-
-
-def consumer_callback(host_param: str, queue_param: str, message_param: str):
- str_message = json.loads(message_param).replace('"', "")
- if "Hello" in str_message:
- print("Salaam habibi")
- return True # Consumed message processed correctly
- return False
diff --git a/tests/test_fast_setup/test_fast_setup.py b/tests/test_fast_setup/test_fast_setup.py
deleted file mode 100644
index 7caf87c..0000000
--- a/tests/test_fast_setup/test_fast_setup.py
+++ /dev/null
@@ -1,65 +0,0 @@
-import json
-import time
-import pika
-
-import mrsal.config.config as config
-import tests.config as test_config
-
-from mrsal.mrsal import Mrsal
-
-
-mrsal = Mrsal(host=test_config.HOST, port=config.RABBITMQ_PORT, credentials=config.RABBITMQ_CREDENTIALS, virtual_host=config.V_HOST)
-
-mrsal.connect_to_server()
-
-
-def test_fast_setup():
- # Delete existing queues and exchanges to use
- mrsal.exchange_delete(exchange="friendship")
- mrsal.queue_delete(queue="friendship_queue")
- # ------------------------------------------
-
- prop = pika.BasicProperties(
- app_id="test_fast_setup",
- message_id="fast_setup",
- content_type=test_config.CONTENT_TYPE,
- content_encoding=test_config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers=None,
- )
-
- mrsal.publish_message(
- exchange="friendship", exchange_type="direct", routing_key="friendship_key", queue="friendship_queue", message=json.dumps("Salaam habibi"), fast_setup=True, prop=prop
- )
- # ------------------------------------------
-
- # Confirm message is routed to respective queue
- time.sleep(1)
- result = mrsal.setup_queue(queue="friendship_queue")
- message_count = result.method.message_count
- assert message_count == 1
- # ------------------------------------------
-
- mrsal.start_consumer(
- exchange="friendship",
- exchange_type="direct",
- routing_key="friendship_key",
- queue="friendship_queue",
- callback=consumer_callback,
- callback_args=("localhost", "friendship_queue"),
- inactivity_timeout=1,
- callback_with_delivery_info=True,
- )
- # ------------------------------------------
-
- # Confirm message is consumed from queue
- result = mrsal.setup_queue(queue="friendship_queue")
- message_count = result.method.message_count
- assert message_count == 0
-
-
-def consumer_callback(host: str, queue: str, method_frame: pika.spec.Basic.Deliver, properties: pika.spec.BasicProperties, bin_message: str):
- str_message = json.loads(bin_message).replace('"', "")
- if "Salaam" in str_message:
- return True # Consumed message processed correctly
- return False
diff --git a/tests/test_ssl/test_with_ssl.py b/tests/test_ssl/test_with_ssl.py
deleted file mode 100644
index 5d462cc..0000000
--- a/tests/test_ssl/test_with_ssl.py
+++ /dev/null
@@ -1,139 +0,0 @@
-import json
-import os
-import time
-import pika
-
-import mrsal.config.config as config
-
-# import tests.config as test_config
-from mrsal.mrsal import Mrsal
-
-
-"""
-In order to execute this test,
-you must set the following environment variables on your machine.
- # export RABBITMQ_DEFAULT_USER=
- # export RABBITMQ_DEFAULT_PASS=
-
- # export RABBITMQ_DOMAIN_TLS=
- # export RABBITMQ_PORT_TLS=
- # export RABBITMQ_DEFAULT_VHOST=
-
- # export RABBITMQ_CAFILE='/path/to/ca.crt'
- # export RABBITMQ_CERT='/path/to/client.crt'
- # export RABBITMQ_KEY='/path/to/client.key'
-"""
-
-host = os.environ.get("RABBITMQ_DOMAIN_TLS")
-port = os.environ.get("RABBITMQ_PORT_TLS")
-credentials = (os.environ.get("RABBITMQ_DEFAULT_USER"), os.environ.get("RABBITMQ_DEFAULT_PASS"))
-virtual_host = os.environ.get("RABBITMQ_DEFAULT_VHOST")
-
-mrsal = Mrsal(host=host, port=port, credentials=credentials, virtual_host=virtual_host, ssl=True)
-
-mrsal.connect_to_server()
-
-
-def test_direct_exchange_workflow():
- # Delete existing queues and exchanges to use
- mrsal.exchange_delete(exchange="agreements")
- mrsal.queue_delete(queue="agreements_berlin_queue")
- mrsal.queue_delete(queue="agreements_madrid_queue")
- # ------------------------------------------
-
- # Setup exchange
- exch_result: pika.frame.Method = mrsal.setup_exchange(exchange="agreements", exchange_type="direct")
- assert exch_result is not None
- # ------------------------------------------
-
- # Setup queue for berlin agreements
- q_result1: pika.frame.Method = mrsal.setup_queue(queue="agreements_berlin_queue")
- assert q_result1 is not None
-
- # Bind queue to exchange with binding key
- qb_result1: pika.frame.Method = mrsal.setup_queue_binding(exchange="agreements", routing_key="berlin agreements", queue="agreements_berlin_queue")
- assert qb_result1 is not None
- # ------------------------------------------
-
- # Setup queue for madrid agreements
- q_result2: pika.frame.Method = mrsal.setup_queue(queue="agreements_madrid_queue")
- assert q_result2 is not None
-
- # Bind queue to exchange with binding key
- qb_result2: pika.frame.Method = mrsal.setup_queue_binding(exchange="agreements", routing_key="madrid agreements", queue="agreements_madrid_queue")
- assert qb_result2 is not None
- # ------------------------------------------
-
- # Publisher:
- prop1 = pika.BasicProperties(
- app_id="test_exchange_direct",
- message_id="madrid_uuid",
- content_type=config.CONTENT_TYPE,
- content_encoding=config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers=None,
- )
-
- # Message ("uuid2") is published to the exchange and it's routed to queue2
- message2 = "uuid2"
- mrsal.publish_message(exchange="agreements", routing_key="madrid agreements", message=json.dumps(message2), prop=prop1)
-
- prop2 = pika.BasicProperties(
- app_id="test_exchange_direct",
- message_id="berlin_uuid",
- content_type=config.CONTENT_TYPE,
- content_encoding=config.CONTENT_ENCODING,
- delivery_mode=pika.DeliveryMode.Persistent,
- headers=None,
- )
- # Message ("uuid1") is published to the exchange and it's routed to queue1
- message1 = "uuid1"
- mrsal.publish_message(exchange="agreements", routing_key="berlin agreements", message=json.dumps(message1), prop=prop2)
- # ------------------------------------------
-
- time.sleep(1)
- # Confirm messages are routed to respective queues
- result1 = mrsal.setup_queue(queue="agreements_berlin_queue", passive=True)
- message_count1 = result1.method.message_count
- assert message_count1 == 1
-
- result2 = mrsal.setup_queue(queue="agreements_madrid_queue", passive=True)
- message_count2 = result2.method.message_count
- assert message_count2 == 1
- # ------------------------------------------
-
- # Start consumer for every queue
- mrsal.start_consumer(
- queue="agreements_berlin_queue",
- callback=consumer_callback_with_delivery_info,
- callback_args=(host, "agreements_berlin_queue"),
- inactivity_timeout=1,
- requeue=False,
- callback_with_delivery_info=True,
- )
-
- mrsal.start_consumer(
- queue="agreements_madrid_queue",
- callback=consumer_callback,
- callback_args=(host, "agreements_madrid_queue"),
- inactivity_timeout=1,
- requeue=False,
- )
- # ------------------------------------------
-
- # Confirm messages are consumed
- result1 = mrsal.setup_queue(queue="agreements_berlin_queue", passive=True)
- message_count1 = result1.method.message_count
- assert message_count1 == 0
-
- result2 = mrsal.setup_queue(queue="agreements_madrid_queue", passive=True)
- message_count2 = result2.method.message_count
- assert message_count2 == 0
-
-
-def consumer_callback_with_delivery_info(host_param: str, queue_param: str, method_frame: pika.spec.Basic.Deliver, properties: pika.spec.BasicProperties, message_param: str):
- return True
-
-
-def consumer_callback(host_param: str, queue_param: str, message_param: str):
- return True