A message queue is a form of asynchronous service-to-service communication used in serverless and microservices architectures. Messages are stored on the queue until they are processed and deleted. Each message is processed only once, by a single consumer. Message queues can be used to decouple heavyweight processing, to buffer or batch work, and to smooth spiky workloads.
Communication Using Message Queue
Communication using message queues can happen in the following ways −
One process writes messages into the queue and another process reads them from it; as with other IPC mechanisms, a queue can be read by multiple processes as well.
One process writes message packets of different types into the queue, and several processes read from it selectively, each picking messages of a particular type.
Having seen this overview of message queues, it is time to look at the System V system calls that support them.
To perform communication using message queues, the following are the steps −
Step 1 − Create a message queue or connect to an already existing message queue (msgget())
Step 2 − Write into message queue (msgsnd())
Step 3 − Read from the message queue (msgrcv())
Step 4 − Perform control operations on the message queue (msgctl())
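The System V calls themselves (msgget(), msgsnd(), msgrcv(), msgctl()) are C-level APIs; as a sketch of the same four-step pattern, here is an illustrative example using Python's multiprocessing.Queue, whose create/put/get/close operations play analogous roles (this is an analogy, not the System V API):

```python
# Sketch of the four steps using Python's multiprocessing.Queue as a
# stand-in for the System V calls (the mapping is illustrative only).
from multiprocessing import Process, Queue

def consumer(q):
    msg = q.get()                      # Step 3: read from the queue (cf. msgrcv)
    print(f"consumer received: {msg}")

if __name__ == "__main__":
    q = Queue()                        # Step 1: create the queue (cf. msgget)
    p = Process(target=consumer, args=(q,))
    p.start()
    q.put("hello queue")               # Step 2: write into the queue (cf. msgsnd)
    p.join()
    q.close()                          # Step 4: control operation (cf. msgctl)
```

The queue decouples the two processes: the producer's put() returns without waiting for the consumer to read.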
Types of Messaging Queues:
There are two types of messaging queues: point-to-point queues and publish/subscribe queues. Here's what you need to know about each.

Point-to-Point:
Producers and consumers can use the queue at the same time.
Each message is processed only once, by a single consumer.
The producer must know the receiving application's details, such as the queue name, before it can send a message.
Messages may be received out of order.

Publish/Subscribe:
A single message can be processed by more than one consumer.
Message topics broadcast each message to all subscribers of that topic, and every subscriber receives it.
Messages are received in the same order they were broadcast.
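To make the contrast concrete, here is a minimal in-process sketch (the class and method names are illustrative, not any particular broker's API): a point-to-point queue hands each message to exactly one consumer, while a topic delivers a copy to every subscriber.

```python
import queue

# Point-to-point: one shared queue; each message is consumed exactly once.
q = queue.Queue()
q.put("job-1")
q.put("job-2")
first = q.get()   # consumer A takes job-1
second = q.get()  # consumer B takes job-2; no consumer sees a job twice

# Publish/subscribe: a topic fans a copy of each message out to every subscriber.
class Topic:
    def __init__(self):
        self.subscribers = []
    def subscribe(self, callback):
        self.subscribers.append(callback)
    def publish(self, message):
        for callback in self.subscribers:  # every subscriber gets its own copy
            callback(message)

received_a, received_b = [], []
topic = Topic()
topic.subscribe(received_a.append)
topic.subscribe(received_b.append)
topic.publish("price-update")   # both subscriber lists receive the message
```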
Message Queue Features:
1. Push or Pull Delivery:
Most message queues provide both push and pull options for retrieving messages. Pull means the consumer continuously polls the queue for new messages. Push means the consumer is notified when a message becomes available (this is also called Pub/Sub messaging).
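A sketch of the two retrieval styles (the PushQueue class is my own illustration, not a library API): a pull consumer polls the queue until it is empty, while a push-style queue invokes a callback the consumer registered in advance.

```python
import queue

# Pull: the consumer repeatedly asks the queue for work.
pull_q = queue.Queue()
pull_q.put("task")
pulled = []
while True:
    try:
        pulled.append(pull_q.get_nowait())  # poll; raises Empty when drained
    except queue.Empty:
        break

# Push: the queue notifies the consumer instead of being polled.
class PushQueue:
    def __init__(self, on_message):
        self.on_message = on_message   # callback registered by the consumer
    def put(self, message):
        self.on_message(message)       # deliver as soon as the message arrives

pushed = []
PushQueue(pushed.append).put("task")
```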
2. Scheduled Delivery:
Many message queues support setting a specific delivery time for a message. If you need to have a common delay for all messages, you can set up a delay queue.
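A minimal delay-queue sketch, assuming a heap keyed on delivery time (the class and method names are mine, not a real service's API): each message stays hidden until its scheduled time has passed.

```python
import heapq
import time

class DelayQueue:
    """Holds each message until its scheduled delivery time has passed."""
    def __init__(self):
        self._heap = []  # (deliver_at, message), earliest first

    def put(self, message, delay_seconds, now=None):
        now = time.time() if now is None else now
        heapq.heappush(self._heap, (now + delay_seconds, message))

    def get_ready(self, now=None):
        """Pop and return every message whose delivery time has arrived."""
        now = time.time() if now is None else now
        ready = []
        while self._heap and self._heap[0][0] <= now:
            ready.append(heapq.heappop(self._heap)[1])
        return ready
```

The optional `now` argument makes the scheduling logic easy to test without real waiting; a common delay for all messages amounts to calling put() with the same delay_seconds everywhere.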
3. At-Least-Once Delivery:
Message queues may store multiple copies of messages for redundancy and high availability, and resend messages in the event of communication failures or errors to ensure they are delivered at least once.
4. Exactly-Once Delivery:
When duplicates can't be tolerated, FIFO (first-in-first-out) message queues will make sure that each message is delivered exactly once (and only once) by filtering out duplicates automatically.
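A sketch of the duplicate-filtering idea behind exactly-once processing (this deduplication-ID scheme is a simplification of what FIFO queues do internally; the class name is mine):

```python
class DedupConsumer:
    """Processes each message ID at most once, even if delivered repeatedly."""
    def __init__(self):
        self.seen_ids = set()   # real systems bound this window, e.g. to minutes
        self.processed = []

    def handle(self, message_id, body):
        if message_id in self.seen_ids:
            return False        # redelivered duplicate: silently drop it
        self.seen_ids.add(message_id)
        self.processed.append(body)
        return True

consumer = DedupConsumer()
consumer.handle("m-1", "charge card")
consumer.handle("m-1", "charge card")  # at-least-once redelivery, filtered out
consumer.handle("m-2", "send receipt")
```

Combined with an at-least-once queue, filtering on a stable message ID turns "delivered at least once" into "processed exactly once".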
5. FIFO Queues:
In these queues the oldest (or first) entry, sometimes called the “head” of the queue, is processed first.
6. Dead Letter Queues:
A dead-letter queue is a queue to which other queues can send messages that can't be processed successfully. This makes it easy to set them aside for further inspection without blocking the queue processing or spending CPU cycles on a message that might never be consumed successfully.
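A sketch of the dead-lettering pattern (the retry limit and function names are illustrative): a message that keeps failing is moved to a separate queue after a maximum number of attempts, instead of blocking the main queue forever.

```python
import queue

MAX_ATTEMPTS = 3

def consume(main_q, dead_letter_q, handler):
    """Drain main_q; park messages that fail MAX_ATTEMPTS times in dead_letter_q."""
    while True:
        try:
            attempts, message = main_q.get_nowait()
        except queue.Empty:
            return
        try:
            handler(message)
        except Exception:
            if attempts + 1 >= MAX_ATTEMPTS:
                dead_letter_q.put(message)           # set aside for inspection
            else:
                main_q.put((attempts + 1, message))  # requeue for another try

main_q, dlq = queue.Queue(), queue.Queue()
main_q.put((0, "good"))
main_q.put((0, "poison"))
handled = []

def handler(message):
    if message == "poison":
        raise ValueError("cannot process")
    handled.append(message)

consume(main_q, dlq, handler)  # "good" is handled; "poison" ends up in dlq
```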
7. Ordering:
Most message queues provide best-effort ordering, which ensures that messages are generally delivered in the same order they're sent and that a message is delivered at least once.
8. Poison Pill Message:
Poison pills are special messages that can be received, but not processed. They are a mechanism used in order to signal a consumer to end its work so it is no longer waiting for new inputs, and is similar to closing a socket in a client/server model.
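A sketch of the poison-pill shutdown pattern (the sentinel object is my own convention; any value the producer and consumer agree on works): the producer enqueues a special value that tells the consumer loop to stop waiting for input.

```python
import queue
import threading

POISON_PILL = object()   # sentinel the consumer checks for; never "processed"

def consumer(q, results):
    while True:
        item = q.get()
        if item is POISON_PILL:
            break                 # end the worker instead of blocking forever
        results.append(item * 2)  # normal processing

q = queue.Queue()
results = []
worker = threading.Thread(target=consumer, args=(q, results))
worker.start()
for n in (1, 2, 3):
    q.put(n)
q.put(POISON_PILL)  # signal "no more work", like closing a socket
worker.join()       # the thread exits cleanly once it sees the pill
```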
9. Security:
Message queues authenticate applications that try to access the queue, and let you encrypt messages both in transit over the network and at rest in the queue itself.
Benefits of Message Queues:
1. Better Performance:
Message queues enable asynchronous communication: the endpoints that produce and consume messages interact with the queue, not with each other. Producers can add requests to the queue without waiting for them to be processed, and consumers process messages only when they are available.
2. Increased Reliability: Queues make your data persistent, and reduce the errors that happen when different parts of your system go offline. By separating different components with message queues, you create more fault tolerance.
3. Granular Scalability: Message queues make it possible to scale precisely where you need to. When workloads peak, multiple instances of your application can all add requests to the queue without risk of collision.
4. Simplified Decoupling: Message queues remove dependencies between components and significantly simplify the coding of decoupled applications.
Message Queue Servers:
Message queue servers are available in various languages; which one to select depends entirely on the use case.
1. Starling:
Starling is a messaging server that enables reliable distributed queuing with minimal overhead. The code for Starling was originally developed inside Twitter and released as open source in 2008.
2. Kestrel:
Kestrel is a distributed message queue, also open-sourced by Twitter, developed as a successor to Starling. It is written in Scala, speaks the memcache protocol, and keeps each queue strictly ordered on a given server, with an optional on-disk journal for durability.
3. RabbitMQ:
RabbitMQ is an open source multi-protocol messaging broker. Running rabbitmq-server starts a RabbitMQ node in the foreground; the node displays a startup banner and reports when startup is complete. To shut down the server, use service management tools or rabbitmqctl(8).
4. Apache ActiveMQ:
Apache ActiveMQ® is the most popular open source, multi-protocol, Java-based message broker. It supports industry-standard protocols, so users get the benefit of client choices across a broad range of languages and platforms, and it offers the power and flexibility to support any messaging use case.
5. Beanstalkd:
Beanstalkd is a work queue server that runs time-consuming tasks behind the scenes. It receives messages/jobs from producers/senders and processes them with consumers/receivers.
6. Amazon SQS (Simple Queue Service):
Amazon Simple Queue Service (SQS) is a managed message queuing service that technical professionals and developers use to send, store, and retrieve multiple messages of various sizes asynchronously.
The service enables users to decouple individual microservices, distributed systems and serverless applications from one another and to scale them without requiring the user to establish and maintain their own message queues.
7. Apache Kafka:
Apache Kafka is a publish-subscribe based durable messaging system. A messaging system sends messages between processes, applications, and servers. Another application may connect to the system and process or re-process records from a topic. The data sent is stored until a specified retention period has passed.
8. ZMQ (Zero MQ):
ZeroMQ is a high-performance asynchronous messaging library, aimed at use in distributed or concurrent applications. It provides a message queue, but unlike message-oriented middleware, a ZeroMQ system can run without a dedicated message broker.
9. EagleMQ:
EagleMQ is a high performance queue manager. The main tasks it solves are the efficient distribution of messages between processes, inter-process communication, and real-time notifications.
It has three main primitives for solving these tasks:
Queues: allow you to store messages and deliver them to clients in the same order in which they arrived (the FIFO principle).
Routes: allow you to associate queues with a specific key and organize fast, efficient message delivery.
Channels: a client can subscribe to a specific topic with the .channel_subscribe command; when another client sends a message to the channel on that topic with the .channel_publish command, an event with the message is sent to every client subscribed to that topic.
10. IronMQ:
IronMQ is a message queuing service for distributed cloud applications. It supports both push and pull queues for message delivery within cloud applications and between systems. IronMQ also supports long polling, allowing developers to keep a connection open with a queue until a message is available.
The Tech Platform