Message Queue Types: Point-to-Point vs Publish-Subscribe
Understand the two fundamental messaging patterns - point-to-point and publish-subscribe - and when to use each, including JMS, AMQP, and MQTT protocols.
Message queues let services in distributed systems talk to each other without being tightly coupled. A producer drops a message into the queue and moves on—a consumer picks it up whenever convenient. This asynchronous style gives you loose coupling, fault tolerance, and a buffer for traffic spikes. Point-to-point and publish-subscribe are the two fundamental patterns, and picking the right one matters for your architecture.
Point-to-Point Messaging
In point-to-point (P2P) messaging, each message goes to exactly one consumer. The queue holds messages until a consumer picks them up, then removes the message. If no consumer is available, the message waits.
```mermaid
graph LR
    Producer[Producer] -->|message| Q[Queue]
    Q -->|message 1| Consumer1[Consumer 1]
    Q -->|message 2| Consumer2[Consumer 2]
    Q -->|message 3| Consumer3[Consumer 3]
```
This pattern is useful for task distribution. Think of a queue of print jobs: each job goes to one printer, not all printers. The sender does not care which consumer handles it, only that someone does.
Key Characteristics
- Each message goes to exactly one consumer
- Messages wait in the queue until a consumer picks them up
- Producers can outpace consumers; the queue absorbs the difference
- No fan-out—messages cannot automatically go to multiple consumers
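The competing-consumers behavior can be sketched with Python's standard `queue.Queue`: each `get()` removes the item, so no two workers ever see the same message. This is a toy model of a broker, not a real one.

```python
import queue
import threading

task_queue = queue.Queue()
results = []
results_lock = threading.Lock()

def worker(worker_id):
    # Each get() removes the message from the queue;
    # no other worker will ever receive it
    while True:
        try:
            task = task_queue.get(timeout=0.5)
        except queue.Empty:
            return  # queue drained, worker exits
        with results_lock:
            results.append((worker_id, task))
        task_queue.task_done()

for i in range(6):
    task_queue.put(f"task-{i}")

workers = [threading.Thread(target=worker, args=(w,)) for w in range(3)]
for t in workers:
    t.start()
for t in workers:
    t.join()
# Every task was consumed exactly once, by exactly one worker
```

The queue also demonstrates load leveling: the producer enqueues all six tasks instantly, and the three workers drain them at their own pace.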
Common Use Cases
- Task processing like image resizing or video transcoding
- Background job queues
- Decoupling requesters from responders
- Load balancing across workers
Publish-Subscribe Messaging
Publish-subscribe (pub/sub) is a different model. Messages are published to a topic, and all subscribers to that topic receive a copy.
```mermaid
graph LR
    Producer[Publisher] -->|message| Topic[Topic]
    Topic -->|message| Consumer1[Subscriber 1]
    Topic -->|message| Consumer2[Subscriber 2]
    Topic -->|message| Consumer3[Subscriber 3]
```
The publisher has no idea who is listening. Subscribers opt into topics, and every matching message goes to all of them.
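The fan-out semantics can be illustrated with a minimal in-memory bus (a sketch, not a real broker): publishing delivers a copy of the message to every registered handler.

```python
from collections import defaultdict

class TinyBus:
    """Minimal in-memory pub/sub: every subscriber gets its own delivery."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Fan-out: every handler registered on the topic gets the message
        for handler in self.subscribers[topic]:
            handler(message)

bus = TinyBus()
received_a, received_b = [], []
bus.subscribe("orders", received_a.append)
bus.subscribe("orders", received_b.append)

bus.publish("orders", {"order_id": "789"})
# Both subscribers received the same event independently
```

Note the publisher's side of the API never mentions subscribers; it only names the topic, which is exactly the decoupling pub/sub provides.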
Topic Hierarchies
Many pub/sub systems let you structure topics hierarchically:
```
orders/
orders/created
orders/updated
orders/cancelled
orders/fulfilled
```
A subscriber to orders gets all order events. A subscriber to orders/created gets only creation events.
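One simple way to implement that prefix semantics (exact rules vary by broker) is segment-wise matching:

```python
def matches(subscription, topic):
    # A subscription matches a topic when the subscription's segments
    # are a prefix of the topic's segments
    sub_parts = subscription.split("/")
    topic_parts = topic.split("/")
    return topic_parts[:len(sub_parts)] == sub_parts

assert matches("orders", "orders/created")        # parent sees child events
assert matches("orders/created", "orders/created")
assert not matches("orders/created", "orders/cancelled")
```

Splitting on segments rather than comparing raw string prefixes matters: it prevents a subscription to `orders` from accidentally matching an unrelated topic like `orders-archive`.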
Pub/Sub Key Characteristics
- One-to-many delivery: each message goes to all subscribers
- Topic-based filtering: subscribers choose what to receive
- No built-in message persistence: most systems don’t store messages for offline subscribers
- Fan-out: the same message reaches multiple consumers
Pub/Sub Common Use Cases
- Broadcasting events like user signups or order placements
- System-wide notifications
- Replicating data across services
- Pushing real-time updates to multiple clients
Comparing the Patterns
| Aspect | Point-to-Point | Publish-Subscribe |
|---|---|---|
| Delivery | One consumer per message | All subscribers per message |
| Coupling | Producer to queue to consumer | Publisher to topic to subscribers |
| Data flow | Single consumer | Fan-out to all subscribers |
| State | Queue holds messages | Topics typically transient |
| Use case | Task distribution | Event broadcasting |
JMS: The Java Standard
Java Message Service (JMS) is an API standard for messaging. It defines interfaces, not implementations. You write to the JMS API, and the underlying provider (ActiveMQ, RabbitMQ, IBM MQ) handles the details.
JMS supports both queues and topics:
```java
// Point-to-point
Queue queue = session.createQueue("tasks");
MessageProducer producer = session.createProducer(queue);
producer.send(session.createTextMessage("process this"));

MessageConsumer consumer = session.createConsumer(queue);
Message msg = consumer.receive();

// Publish-subscribe
Topic topic = session.createTopic("events");
MessageProducer publisher = session.createProducer(topic);
publisher.send(session.createTextMessage("something happened"));

MessageConsumer subscriber = session.createConsumer(topic);
Message event = subscriber.receive();
```
JMS 2.0 streamlined the API and added delivery delays, but the core ideas did not change. The old API required a lot of setup: factory, connection, start, session, then finally producer and consumer. JMS 2.0 cut through the boilerplate significantly.
JMS 1.x vs 2.0 API Comparison
```java
// JMS 1.x: verbose setup for every message
ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
Connection connection = factory.createConnection();
connection.start();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Queue queue = session.createQueue("tasks");
MessageProducer producer = session.createProducer(queue);
TextMessage message = session.createTextMessage("process this");
producer.send(message);
session.close();
connection.close();
```
```java
// JMS 2.0: JMSContext injected via CDI (Contexts and Dependency Injection);
// the container manages connection and session lifecycle automatically
@Inject
private JMSContext context;

// Destinations are injected via @Resource; the lookup name
// depends on your server configuration
@Resource(lookup = "java:/jms/queue/tasks")
private Queue tasksQueue;

public void sendTask(String taskData) {
    // Single call to send, no manual session management
    context.createProducer().send(tasksQueue, taskData);
}

// Receiving with the simplified API
public String receiveTask() {
    // receive() blocks; receiveBody() returns the typed payload directly
    return context.createConsumer(tasksQueue).receiveBody(String.class);
}
```
The @Inject JMSContext pattern works well in Java EE / Jakarta EE environments. For Spring, JmsTemplate wraps the JMS API with similar convenience.
ActiveMQ Artemis: Modern AMQP
ActiveMQ Artemis is the actively developed successor to classic ActiveMQ. It speaks AMQP 1.0 natively, alongside MQTT, STOMP, OpenWire, and the HornetQ core protocol, and it has a more modern, journal-based core.
Artemis uses an address-based model rather than the older queue/topic split. Bind addresses to queues with different routing semantics, and you can build any pattern you need.
A sketch of the idea in broker.xml (address and queue names here are illustrative): an address routed anycast behaves like a classic queue, while multicast behaves like a topic.

```xml
<!-- ActiveMQ Artemis broker.xml: address-based routing -->
<addresses>
    <!-- anycast: each message goes to one consumer (point-to-point) -->
    <address name="orders.created">
        <anycast>
            <queue name="orders-queue"/>
        </anycast>
    </address>
    <!-- multicast: every bound queue gets a copy (publish-subscribe) -->
    <address name="orders.events">
        <multicast>
            <queue name="audit-queue"/>
            <queue name="notify-queue"/>
        </multicast>
    </address>
</addresses>
```

Artemis also supports fully qualified queue names in the form `address::queue`, which let a client target one specific queue bound to an address.
Artemis vs RabbitMQ
| Aspect | ActiveMQ Artemis | RabbitMQ |
|---|---|---|
| AMQP version | AMQP 1.0 native | AMQP 0-9-1 (classic); 1.0 via plugin |
| Protocol support | AMQP, MQTT, STOMP, OpenWire, HornetQ core | AMQP 0-9-1, MQTT, STOMP |
| Queue model | Address-based (anycast/multicast) | Exchange + binding |
| Clustering | Live/backup pairs; shared store or replication | Clustered; mirrored or quorum queues for HA |
| Throughput | Millions of msgs/sec claimed in benchmarks | Tens of thousands of msgs/sec sustained, typically |
| Disk persistence | Append-only journal (very fast) | Message store; Raft log for quorum queues |
| Client languages | Any language with an AMQP 1.0 client | Broker written in Erlang; official clients for many languages |
Artemis wins on raw throughput and the address-based model gives more routing flexibility. RabbitMQ wins on operational maturity and the ecosystem around it.
CloudEvents: Vendor-Neutral Event Format
Most messaging systems define their own message envelope format. CloudEvents, a CNCF specification, tries to standardize how events look across different systems so you are not locked into one vendor’s schema.
```json
{
  "specversion": "1.0",
  "id": "message-uuid-12345",
  "source": "//my-service/orders",
  "type": "com.example.order.created",
  "subject": "order-789",
  "time": "2026-03-26T10:15:30Z",
  "datacontenttype": "application/json",
  "traceparent": "00-abc123-def456-01",
  "data": {
    "order_id": "789",
    "customer_id": "cust-456",
    "total": 129.99
  }
}
```
The source field identifies where the event came from, type uses reverse-DNS naming to describe what happened, and subject pinpoints which entity the event is about. Extension attributes such as traceparent carry vendor-specific metadata like distributed trace context.
```python
# Publishing a CloudEvent with the Python cloudevents SDK
from cloudevents.http import CloudEvent, to_structured

# Create a CloudEvent: required attributes, then the payload
attributes = {
    "source": "//my-service/orders",
    "type": "com.example.order.created",
}
data = {"order_id": "789", "customer_id": "cust-456", "total": 129.99}
event = CloudEvent(attributes, data)

# Serialize for the HTTP structured-mode binding
headers, body = to_structured(event)
# headers["content-type"] is "application/cloudevents+json"
```
Many platforms now produce or consume CloudEvents natively: AWS EventBridge, Azure Event Grid, Google Cloud Events, Knative, and Solace all support it. Using CloudEvents as your internal event format means you can plug into any of these platforms without rewriting event producers or consumers.
AMQP: The Wire Protocol
If JMS is an API, AMQP (Advanced Message Queuing Protocol) is a wire protocol: it defines how clients and brokers talk over the network, which makes it language-agnostic.
AMQP’s model has three main pieces:
- Exchange: Takes messages from publishers and routes them based on rules
- Queue: Holds messages until consumers pick them up
- Binding: Tells an exchange which queue to route which messages to
```mermaid
graph LR
    Publisher -->|publish| Exchange[Exchange]
    Exchange -->|route| Q1[Queue 1]
    Exchange -->|route| Q2[Queue 2]
    Exchange -->|route| Q3[Queue 3]
```
The exchange type determines routing behavior:
- Direct: Route to queue matching the routing key exactly
- Fanout: Route to all bound queues
- Topic: Route to queues matching wildcard patterns
AMQP supports both P2P (via direct exchange to single queue) and pub/sub (via fanout or topic exchanges).
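Here is a toy sketch of the three exchange types (illustrative only; a real AMQP broker does this server-side). `topic_match` implements the `*`/`#` wildcard rules on dot-separated routing keys.

```python
def topic_match(binding_key, routing_key):
    """AMQP-style topic matching: '*' = one word, '#' = zero or more words."""
    def rec(pattern, words):
        if not pattern:
            return not words
        if pattern[0] == "#":
            return any(rec(pattern[1:], words[i:]) for i in range(len(words) + 1))
        if not words:
            return False
        if pattern[0] in ("*", words[0]):
            return rec(pattern[1:], words[1:])
        return False
    return rec(binding_key.split("."), routing_key.split("."))

class Exchange:
    """Toy exchange: routes a message to bound queues per exchange type."""

    def __init__(self, ex_type):
        self.ex_type = ex_type
        self.bindings = []  # (binding_key, queue) pairs

    def bind(self, binding_key, q):
        self.bindings.append((binding_key, q))

    def route(self, routing_key, message):
        for binding_key, q in self.bindings:
            if (self.ex_type == "fanout"
                    or (self.ex_type == "direct" and binding_key == routing_key)
                    or (self.ex_type == "topic"
                        and topic_match(binding_key, routing_key))):
                q.append(message)

orders_q, audit_q = [], []
ex = Exchange("topic")
ex.bind("orders.*", orders_q)
ex.bind("#", audit_q)  # the audit queue sees everything
ex.route("orders.created", "o1")
ex.route("payments.captured", "p1")
```

After these two routes, the orders queue holds only the order event while the audit queue holds both, which is exactly the topic-exchange behavior described above.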
MQTT: Lightweight for IoT
MQTT was built for constrained devices and unreliable networks. It dominates IoT where bandwidth is scarce and devices go offline often.
MQTT vocabulary differs slightly from other messaging systems:
- The server is called a broker
- Publishers and subscribers are both simply clients; there is no separate consumer role
- Delivery guarantees are expressed as QoS levels
MQTT QoS levels:
- QoS 0: At most once. Fire and forget; no acknowledgment, so messages can be lost
- QoS 1: At least once. Redelivered until acknowledged, so duplicates are possible
- QoS 2: Exactly once. A two-step handshake prevents duplicates, at higher cost
The lightweight design makes MQTT a natural fit for sensors and actuators that cannot handle the overhead of AMQP or JMS.
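MQTT topic filters use two wildcards: `+` matches exactly one level and `#` (allowed only as the last segment) matches all remaining levels. A sketch of the matching rules:

```python
def mqtt_match(topic_filter, topic):
    # '+' matches exactly one level; '#' matches the rest of the topic
    f_parts = topic_filter.split("/")
    t_parts = topic.split("/")
    for i, part in enumerate(f_parts):
        if part == "#":
            return True
        if i >= len(t_parts):
            return False
        if part not in ("+", t_parts[i]):
            return False
    return len(f_parts) == len(t_parts)

assert mqtt_match("sensors/+/temperature", "sensors/room1/temperature")
assert mqtt_match("sensors/#", "sensors/room1/humidity")
assert not mqtt_match("sensors/+/temperature", "sensors/room1/humidity")
```

A sensor fleet typically publishes to topics like `sensors/<device>/<metric>`, and a dashboard subscribes with `sensors/#` while a per-metric alerter subscribes with `sensors/+/temperature`.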
Protocol Comparison
Here is how the major messaging protocols stack up:
| Feature | AMQP 1.0 | AMQP 0-9-1 | MQTT | CoAP |
|---|---|---|---|---|
| Model | Point-to-point + pub/sub | Point-to-point + pub/sub | Primarily pub/sub | Request/response |
| Wire protocol | Binary | Binary | Binary | Binary |
| Header overhead | Tens of bytes | Tens of bytes | 2 bytes minimum | 4 bytes fixed |
| Connection | Long-lived | Long-lived | Long-lived | Short-lived (UDP) |
| QoS levels | 3 delivery modes | 3 delivery modes | 3 (QoS 0, 1, 2) | 2 (confirmable/non-confirmable) |
| Topics | Hierarchical | Hierarchical | Hierarchical | Observe option |
| Transactions | Supported | Basic (tx class) | Not supported | Not supported |
| Portable | Yes (OASIS/ISO standard) | Spec published; extensions vary | Yes (OASIS standard) | Yes (IETF RFC 7252) |
| Typical use | Enterprise messaging | RabbitMQ classic | IoT/sensors | Constrained IoT devices |
AMQP 1.0 is the most feature-complete and standardized, wire-compatible across implementations.
AMQP 0-9-1 (RabbitMQ classic) has a rich exchange-and-binding model, but in practice ties you to RabbitMQ and the brokers that emulate it.
MQTT targets low-bandwidth, unreliable networks. It is the de facto standard for IoT.
CoAP targets extremely constrained devices like 8-bit microcontrollers, using HTTP-like requests over UDP.
Message Broker Selection Flowchart
Given a new project, here is how to narrow down which broker fits:
```mermaid
graph TD
    Start[New messaging project] --> Scale{What's your scale?}
    Scale -->|Millions of msgs/day| HighVolume{High throughput?}
    HighVolume -->|Yes| KafkaOrArtemis{Area of focus?}
    HighVolume -->|No| MediumScale
    Scale -->|Thousands of msgs/day| MediumScale[Standard relational DB<br/>or lightweight broker]
    KafkaOrArtemis -->|Distributed streaming<br/>log, event sourcing| Kafka[Apache Kafka]
    KafkaOrArtemis -->|Enterprise messaging<br/>transactions, AMQP| Artemis[ActiveMQ Artemis]
    MediumScale --> NeedsMultiProtocol{Need multi-protocol?}
    NeedsMultiProtocol -->|AMQP + MQTT + STOMP| Artemis
    NeedsMultiProtocol -->|Just AMQP 0-9-1| RabbitMQ[RabbitMQ]
    NeedsMultiProtocol -->|No| CloudManaged{AWS or cloud-native?}
    CloudManaged -->|Yes| AWSSQS{AWS-based?}
    CloudManaged -->|No| SelfHosted{Self-hosted OK?}
    AWSSQS -->|FIFO / exactly-once| SQSFIFO[Amazon SQS FIFO]
    AWSSQS -->|Pub/sub, fan-out| SNS[Amazon SNS]
    SelfHosted -->|Yes| RabbitMQ
    SelfHosted -->|No| AzureEvent{Using Azure?}
    AzureEvent -->|Yes| AzureEventHubs[Azure Event Hubs]
    AzureEvent -->|No| GCPubsub{GCP?}
    GCPubsub -->|Yes| GCPPubSub[Google Cloud Pub/Sub]
    GCPubsub -->|No| NATS[NATS]
```
Quick reference by constraint:
| Constraint | Best choice |
|---|---|
| Exactly-once across systems | SQS FIFO, Kafka (transactions) |
| Message persistence + disk-backed | Kafka (retention), Artemis (journal) |
| AMQP 1.0 native | ActiveMQ Artemis |
| Multi-protocol (AMQP + MQTT + STOMP) | ActiveMQ Artemis |
| Team familiar with RabbitMQ | RabbitMQ |
| Already on AWS | SQS + SNS |
| IoT / extremely lightweight | NATS, MQTT brokers |
| Event sourcing / immutable log | Apache Kafka |
| Highest throughput possible | Apache Kafka, ActiveMQ Artemis |
Pick point-to-point when:
- A message needs processing by exactly one consumer
- You need load leveling (producers faster than consumers)
- Tasks should be processed in order or with fairness
Pick publish-subscribe when:
- Multiple consumers need the same message
- You are broadcasting events to many services
- Consumers are independent and all need to react
Most real systems use both. Order events might go to a topic (for audit logs, notifications, and analytics), while specific order fulfillment tasks go to a queue for the fulfillment worker.
For deeper dives, see our posts on RabbitMQ, Apache Kafka, and AWS SQS/SNS.
When to Use / When Not to Use
When to Use Point-to-Point Messaging
- Task distribution: exactly one worker processes each task
- Load leveling: producers outpace consumers and you need a buffer
- Request/response decoupling: sender does not need an immediate reply
- Ordered processing: messages must be handled in sequence
When Not to Use Point-to-Point
- Fan-out: multiple consumers need the same message
- Simple notifications: broadcasting is the goal
- Event broadcasting: many services need to know about the same fact
When to Use Publish-Subscribe Messaging
- Event broadcasting: one event should trigger multiple independent actions
- System-wide notifications: several services need to react to the same fact
- Decoupled microservices: services should not need to know about each other
- Real-time updates: pushing updates to multiple clients or services
When Not to Use Publish-Subscribe
- Sequential processing: order matters and only one consumer should handle it
- Request/response: you need a reply from a specific service
- Task queues: work needs assignment to specific available workers
Production Failure Scenarios
| Failure | Impact | Mitigation |
|---|---|---|
| Broker goes down | Messages cannot be sent or received | Cluster with replication; use durable queues |
| Consumer crash mid-processing | Message lost (auto-ack) or reprocessed | Manual acknowledgments; idempotent processing |
| Network partition | Messages stuck or delayed | Connection recovery; appropriate socket timeout |
| Queue overflow | New messages rejected or old dropped | Max queue length policies; monitor queue depth |
| Message TTL expiration | Unprocessed messages disappear | Appropriate TTL; dead letter queues for failures |
| Duplicate message delivery | Same message processed multiple times | Idempotent consumers; deduplication keys |
| Routing key mismatch | Messages go to wrong queue or nowhere | Consistent naming; dead letter exchanges |
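Several of the mitigations above come down to idempotent consumers keyed on a deduplication ID. A minimal sketch (a real system would persist seen keys in something like Redis with a TTL; the field names here are illustrative):

```python
class IdempotentConsumer:
    """Skips messages whose deduplication key was already processed."""

    def __init__(self, handler):
        self.handler = handler
        self.seen = set()  # in production: a durable store with a TTL

    def consume(self, message):
        key = message["dedup_key"]
        if key in self.seen:
            return False  # duplicate: acknowledge, but do not reprocess
        self.handler(message)
        self.seen.add(key)  # mark done only after the handler succeeds
        return True

processed = []
consumer = IdempotentConsumer(processed.append)
msg = {"dedup_key": "order-789-created", "total": 129.99}

consumer.consume(msg)
consumer.consume(msg)  # redelivery after a crash or an ack timeout
```

Marking the key as seen only after the handler succeeds means a crash mid-processing leads to a retry rather than a lost message, which is the right failure mode under at-least-once delivery.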
Observability Checklist
Metrics to Monitor
- Queue depth: number of messages waiting to be processed
- Consumer lag: time between message publication and consumption
- Message throughput: messages published or consumed per second
- Error rate: failed message processing attempts
- Acknowledgment latency: time taken to acknowledge messages
- Connection count: active producers and consumers
Logs to Capture
- Message publish events with routing keys and timestamps
- Consumer acknowledgment and rejection events
- Dead letter queue arrivals with failure reasons
- Connection open and close events
- Retry attempts with attempt counts
Alerts to Configure
- Queue depth exceeds threshold, indicating burst traffic or consumer failure
- Consumer lag exceeds your SLA threshold
- High error rate on message processing
- Dead letter queue accumulating messages
- Broker connection failures
- Consumer disconnection events
Security Checklist
- Authentication: SASL or TLS client authentication for brokers
- Authorization: Queue or topic-level access controls; principle of least privilege
- Encryption in transit: TLS for all client connections
- Encryption at rest: disk encryption for message persistence
- Message validation: validate message schemas before processing
- Input sanitization: sanitize routing keys and message content to prevent injection
- Audit logging: log all administrative operations on queues and topics
- Network segmentation: place brokers in private networks; restrict access via firewalls
Common Pitfalls / Anti-Patterns
Pitfall 1: Treating Pub/Sub like a Queue
Pub/sub broadcasts to all subscribers. If only one should process a message, put a queue in front of the subscriber or implement filtering correctly.
Pitfall 2: Ignoring Message Ordering Requirements
If your business logic requires ordering, your architecture must deliver it. Point-to-point with a single consumer or partitioned topics in Kafka can provide that guarantee.
Pitfall 3: Auto-Acknowledgment Without Idempotency
Auto-ack discards messages immediately on delivery. If the consumer crashes, the message is gone. Pair auto-ack with idempotent processing, or just use manual acknowledgment instead.
Pitfall 4: Not Handling Poison Messages
Messages that keep failing block the queue and hold up everything behind them. Configure dead letter queues and set retry limits.
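A retry-then-dead-letter loop can be sketched like this (real brokers track delivery counts in message headers; this toy version keeps everything in plain dicts):

```python
MAX_ATTEMPTS = 3

def deliver(message, handler, dead_letters):
    # Try the handler a bounded number of times before giving up
    for _ in range(MAX_ATTEMPTS):
        try:
            handler(message)
            return True
        except Exception as exc:
            last_error = str(exc)
    # Retries exhausted: park the message instead of blocking the queue
    dead_letters.append({**message, "error": last_error,
                         "attempts": MAX_ATTEMPTS})
    return False

def always_fails(message):
    raise ValueError("poison message")

dlq = []
delivered = deliver({"body": "bad payload"}, always_fails, dlq)
```

Recording the failure reason alongside the parked message makes the dead letter queue debuggable instead of just a graveyard.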
Pitfall 5: Coupling Publishers to Topic Structure
When publishers know too much about subscriber interests, changing subscribers means changing publishers. Keep topic design stable and use content-based filtering for flexibility.
Pitfall 6: Using a Single Queue for Multiple Concerns
Mixing message types in one queue makes processing complex and error-prone. Use separate queues or topics per message type.
Quick Recap
Key Points
- Point-to-point delivers each message to exactly one consumer; publish-subscribe delivers to all subscribers
- P2P works well for task distribution and work queues; pub/sub works well for event broadcasting
- JMS is a Java API standard; AMQP is a wire protocol; MQTT is lightweight for IoT
- Queues give you persistence and load leveling; topics give you fan-out and flexibility
- Design for at-least-once delivery with idempotent consumers
Pre-Deployment Checklist
- [ ] Queue depth monitoring configured
- [ ] Dead letter queues configured for failed messages
- [ ] Manual acknowledgment implemented (preferred over auto-ack)
- [ ] Idempotent message processing implemented
- [ ] Retry limits set with exponential backoff
- [ ] TLS/encryption enabled for client connections
- [ ] Consumer group scaling strategy defined
- [ ] Message TTL configured appropriately
- [ ] Alert thresholds set for queue depth and error rates
- [ ] Schema validation in place for incoming messages
- [ ] Correlation ID propagation implemented for distributed tracing
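Correlation ID propagation from the checklist can be sketched in a few lines: generate an ID at the first publish and copy it onto every downstream message (the header and function names here are illustrative).

```python
import uuid

def publish(body, headers=None):
    # Attach a correlation ID if the caller did not supply one
    headers = dict(headers or {})
    headers.setdefault("correlation_id", str(uuid.uuid4()))
    return {"headers": headers, "body": body}

def handle_and_republish(message, new_body):
    # Propagate the inbound correlation ID onto any messages we emit,
    # so the whole workflow can be stitched together in traces and logs
    inherited = {"correlation_id": message["headers"]["correlation_id"]}
    return publish(new_body, inherited)

incoming = publish({"order_id": "789"})
outgoing = handle_and_republish(incoming, {"shipment_id": "s-1"})
```

Every message in the chain now shares one ID, so a single search in your logging system reconstructs the full path of an order through the queues.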
Conclusion
Point-to-point and publish-subscribe solve different problems. Queues ensure one consumer processes each message. Topics broadcast to all subscribers. JMS, AMQP, and MQTT are protocols and APIs that implement these patterns, each with different tradeoffs around features, complexity, and performance.
The pattern you choose shapes your system architecture. Know what semantics you need, then pick the tool that delivers them.