How to Evaluate, Choose, and Scale Your Message Broker Effectively?


Selecting the right message broker is a critical decision that impacts your application's scalability, resilience, and overall performance. With so many broker options, each with its unique features and trade-offs, making an informed choice can be challenging.

Let's look at some examples of what can go wrong when one of these attributes is overlooked.


Imbalanced Partitions
Consider the well-documented case of an early-stage deployment of Apache Kafka by a prominent e-commerce platform. They didn't adequately plan for partitioning, causing an imbalance in message distribution. As a result, some partitions became hotspots, leading to message backlogs. This misconfiguration became acutely problematic during peak shopping events, where the increased message volume further strained these hotspots, causing significant processing delays. Customers experienced lag in receiving order updates, and the ripple effect was a slower order processing pipeline, impacting user experience and sales.
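To see how a hot key turns into a hot partition, here is a minimal, broker-free sketch of Kafka-style key-hash partitioning. The partition count, key names, and traffic split are made up for illustration, and Python's built-in hash() stands in for the murmur2 hash Kafka's Java client actually uses:

```python
from collections import Counter

NUM_PARTITIONS = 6

def partition_for(key: str) -> int:
    # Kafka's default partitioner hashes the message key and takes it
    # modulo the partition count; hash() is a stand-in for murmur2 here.
    return hash(key) % NUM_PARTITIONS

# A skewed workload: one "hot" seller generates 70% of all order events.
keys = ["seller-42"] * 7_000 + [f"seller-{i}" for i in range(3_000)]

load = Counter(partition_for(k) for k in keys)
print(load)  # one partition receives ~70% of the messages
```

Choosing a higher-cardinality key (an order ID rather than a seller ID, say) spreads the same traffic evenly across partitions.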

Geographic Distribution
In another instance, a global financial services company using RabbitMQ overlooked the importance of geographic distribution and fault tolerance. When one of their primary data centers faced an outage, they couldn't efficiently failover to their secondary site, leading to substantial data loss and trading downtimes, translating to significant financial losses and reputational damage.

Guaranteed Delivery
A popular ride-hailing app faced severe backlash when it rolled out a new promotional campaign. For every ride a user took, they were promised reward points. However, due to the lack of guaranteed delivery in their message broker setup, many point allocation messages were lost. This resulted in riders not seeing their promised points, leading to a flood of customer complaints, negative PR, and even some users shifting to competitors.
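The usual safeguard is to require acknowledgment from all in-sync replicas and to act on delivery reports rather than fire-and-forget. Here is a minimal sketch using the confluent-kafka Python client; the broker address, topic name, and payload are placeholder assumptions:

```python
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "acks": "all",               # wait for all in-sync replicas to persist
    "enable.idempotence": True,  # retries won't create duplicates
})

def on_delivery(err, msg):
    if err is not None:
        # The broker never confirmed this message: log it and feed a retry
        # path rather than silently losing the reward points.
        print(f"delivery failed: {err}")
    else:
        print(f"persisted to {msg.topic()}[{msg.partition()}] @ {msg.offset()}")

producer.produce("reward-points", key="user-123",
                 value=b'{"points": 50}', callback=on_delivery)
producer.flush()  # block until every outstanding delivery report arrives
```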

Message Ordering
An online auction platform learned the hard way about the importance of message ordering. Bids were processed out of order due to the lack of strict message sequencing in their broker. This led to earlier bids being recognized as the winning ones, even if a higher bid was placed a few moments later. The result was disgruntled users, erroneous auction outcomes, and a significant blow to the platform's credibility.
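Kafka and similar log-based brokers only guarantee ordering within a single partition, so the standard fix is to give every message for one auction the same key. A sketch, again assuming confluent-kafka, a local broker, and a hypothetical bids topic:

```python
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    # With idempotence enabled, retries cannot reorder in-flight batches.
    "enable.idempotence": True,
})

def place_bid(auction_id: str, amount_cents: int) -> None:
    # Keying on the auction ID routes every bid for that auction to the
    # same partition, where the broker preserves append order.
    producer.produce("bids", key=auction_id, value=str(amount_cents).encode())

place_bid("auction-17", 10_000)
place_bid("auction-17", 12_000)  # guaranteed to land after the first bid
producer.flush()
```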

Retention Policies
A news streaming service provided users with a feature to look back at major events from the past week. However, due to incorrect retention policies in their message broker, older news items were prematurely purged. When users tried accessing them, they were met with errors or missing content, significantly affecting user experience and trust.
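In Kafka, retention is a per-topic setting. Here is a sketch of creating a topic with a seven-day, time-based retention policy via confluent-kafka's admin API; the broker address, topic name, and partition counts are placeholders:

```python
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "localhost:9092"})  # placeholder

# Retain items for 7 days; retention.ms is in milliseconds. Disabling
# size-based retention (-1) means age alone decides when data is purged.
topic = NewTopic(
    "news-events",  # placeholder topic name
    num_partitions=6,
    replication_factor=3,
    config={
        "retention.ms": str(7 * 24 * 60 * 60 * 1000),
        "retention.bytes": "-1",
    },
)

futures = admin.create_topics([topic])
futures["news-events"].result()  # raises if creation failed
```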

Schema Evolution
A healthcare analytics company used a message broker to process and store patient data. As the company grew and expanded its services, new data types and fields were introduced. However, they hadn't planned for schema evolution in their initial setup. When newer message formats were introduced, it caused data inconsistencies and even crashes in parts of their analytics pipeline that expected the older schema. This led to delays in reporting and inaccurate health analytics, a grave concern in the medical field.
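In production this is usually handled with a schema registry and a format like Avro or Protobuf, but the underlying idea is the tolerant reader: consumers default missing fields and ignore unknown ones instead of crashing. A dependency-free sketch with hypothetical field names:

```python
import json

# Version 1 messages carried {"patient_id", "heart_rate"}.
# Version 2 added "spo2" -- older producers still omit it.

DEFAULTS = {"spo2": None}  # defaults for fields added after v1

def parse_reading(raw: bytes) -> dict:
    """Tolerant reader: missing fields are defaulted, new ones pass through."""
    record = json.loads(raw)
    return {**DEFAULTS, **record}

old = parse_reading(b'{"patient_id": "p1", "heart_rate": 72}')
new = parse_reading(b'{"patient_id": "p2", "heart_rate": 68, "spo2": 98}')
assert old["spo2"] is None and new["spo2"] == 98
```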


Now let's consider the attributes that should be weighed when setting up a message broker:

High Priority:

  1. Latency Requirements: Speed of message delivery:

    1. Real-Time

    2. Near Real-Time

    3. Delayed

  2. Guaranteed Delivery: Assurance of every message's delivery.

  3. Durability: Persistence of messages across broker restarts and crashes.

  4. Security: Measures protecting data integrity and privacy:

    1. Encryption

    2. Authentication

    3. Authorization using Access Control Lists (ACLs)

  5. Scalability: System's adaptability to increased load.

  6. High Availability: System uptime and reliability.

  7. Fault Tolerance: Functionality amidst internal failures:

    1. Redundancy

    2. Replication

  8. Resiliency: Recovery capability from unexpected disruptions:

    1. Retry Logic

    2. Circuit Breaker

  9. Monitoring and Logging: Oversight and record-keeping of operations.

  10. Backup and Recovery: System data preservation and restoration.

  11. Message Ordering: Sequential delivery requirement.

  12. Retention Policies: Duration messages are retained.

  13. Cost: Total expenditure for the system.

  14. Partitioning: Parallelism in data processing.

  15. Daily Traffic: Average daily message count.

  16. Hourly Peak Traffic: Peak hourly message count.

  17. RPM (Requests Per Minute): System's real-time handling rate.

  18. Batch vs. Stream Processing: Mode of data processing.

  19. Dead Letter Queues: Repository for unprocessable messages (retry logic and dead-lettering are sketched in the example after this list).
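
To make the resiliency and dead-letter items above concrete, here is a sketch of a consumer that retries a failing message a few times and then parks it on a dead letter topic instead of blocking the partition. It assumes confluent-kafka, a local broker, and hypothetical orders / orders.dlq topics; handle() stands in for real business logic:

```python
from confluent_kafka import Consumer, Producer

MAX_ATTEMPTS = 3

def handle(payload: bytes) -> None:
    ...  # hypothetical business logic; raises an exception on failure

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "group.id": "order-processors",
    "enable.auto.commit": False,  # commit only once a message is resolved
    "auto.offset.reset": "earliest",
})
dlq = Producer({"bootstrap.servers": "localhost:9092"})
consumer.subscribe(["orders"])  # placeholder topic

while True:
    msg = consumer.poll(timeout=1.0)
    if msg is None or msg.error():
        continue
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            handle(msg.value())
            break
        except Exception:
            if attempt == MAX_ATTEMPTS:
                # Retries exhausted: park the poison message on the DLQ
                # so it stops blocking the rest of the partition.
                dlq.produce("orders.dlq", key=msg.key(), value=msg.value())
                dlq.flush()
    consumer.commit(message=msg)  # processed or dead-lettered either way
```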


Medium Priority:

  1. Replay Capability: Ability to reprocess past messages.

  2. Multi-Tenancy: Multiple applications sharing infrastructure.

  3. Message Filtering: Routing based on message criteria.

  4. End-to-end Latency: Total time from sender to receiver.

  5. Message Compression: Reducing message size for efficiency.

  6. Integration with Other Systems: Interoperability with external platforms.

  7. Throttling and Rate Limiting: Control over message processing rate.

  8. Consumer Flexibility: Multiple consumers accessing similar messages:

    1. Multiple Consumer Groups

    2. Consumer Scalability

    3. Offset Management

    4. Partition Assignment

  9. Priority Queuing: Message delivery based on priority.

  10. Push vs. Pull Models: Mechanism of message delivery.

  11. Schema Evolution: Adaptability to message format changes.

  12. Managed Cloud Solution: Utilizing cloud provider services.

  13. Self-managed Solution Requirement: Whether the broker must be run and operated on your own infrastructure.

  14. Support and Community: Availability of help and resources.

  15. Geographic Distribution: Data distribution across regions.

  16. Protocol Support: Recognized messaging protocols.


Low Priority:

  1. Deduplication: Removal of redundant messages.

  2. Ecosystem: Available extensions and plugins.

  3. Quota Management: Setting limits on users or applications.

  4. Auditing: Tracking and review of activities.

  5. Load Balancing: Equal distribution of workloads.

  6. Clustering: Grouping systems to work as a single entity.

  7. Licensing Model: Software usage and distribution terms.

  8. Single Record Size: Size of an individual message.

  9. Acceptable Consumer Lag: Allowed delay in message processing.

  10. Operational Complexity: Effort required for day-to-day operations.


How to Calculate Consumers, Consumer Groups, Partitions, CPU, Disk Space?


Deciding on the number of consumer groups, partitions, and resource allocations such as memory, CPU, and disk space is pivotal for the efficient operation of a message broker. Here's a guide on how to make these decisions:

1. Consumer Groups and Consumers:

  • Consumer Groups: Determine the consumer groups based on the different types of processing that messages require. Each consumer group should represent a unique type of processing.

  • Number of Consumers: The number of consumers in a group depends on the volume of messages and the processing time each message requires. It should be aligned with the number of partitions to allow parallel processing; both points are illustrated in the sketch below.
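A sketch of both ideas with confluent-kafka (broker address, topic, and group names are placeholders): consumers that share a group.id divide a topic's partitions among themselves, while each distinct group.id receives the full stream.

```python
from confluent_kafka import Consumer

def make_consumer(group_id: str) -> Consumer:
    # Consumers sharing a group.id split the topic's partitions among
    # themselves; each distinct group.id independently reads every message.
    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",  # placeholder broker
        "group.id": group_id,
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["orders"])  # placeholder topic
    return consumer

billing = make_consumer("billing")       # one group per processing type
analytics = make_consumer("analytics")   # receives its own full copy

# Scaling rule of thumb: run at most <partition count> consumers per group;
# any extra instances in the group sit idle with no partitions assigned.
```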

2. Partitions:

  • Throughput: More partitions allow higher throughput due to parallelism but might increase management overhead.

  • Consumer Parallelism: Having at least as many partitions as consumers enables efficient load balancing. However, far more partitions than consumers adds coordination overhead without increasing parallelism.

  • Fault Tolerance: More partitions allow for better distribution of replicas, improving fault tolerance.

3. Scaling:

  • Horizontal Scaling: Adding more broker instances or clusters. Useful for distributing load and improving fault tolerance.

  • Vertical Scaling: Increasing the resources (CPU, memory) of existing brokers. Useful when the existing hardware is underutilized.

Calculations for Resource Allocation:

  1. CPU and Memory:

    • Monitor the CPU and memory usage under different loads, and allocate resources based on the peak usage observed.

    • Consider leaving some buffer for unexpected spikes.

  2. Disk Space:

    • [Disk Space] = [Average Message Size] × [Message Retention Period] × [Message Ingestion Rate] (this and the partition formula below are worked through in the example after this list)

    • Ensure extra space for replicas, partition logs, and unexpected volume spikes.

  3. Number of Partitions:

    • [Number of Partitions] = [Peak Throughput] / [Throughput per Partition]

    • Consider the number of consumers and the desired level of parallelism.

  4. Number of Consumer Groups and Consumers:

    • Align with the number of unique message processing types or use cases.

    • Ensure there are enough consumers to handle the partition load, allowing for parallel processing and failover.
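
Here is the worked example referenced above: a small, self-contained calculator that applies the disk-space and partition-count formulas. The replication factor and 30% headroom are illustrative assumptions, not fixed rules:

```python
import math

def disk_space_bytes(avg_msg_bytes: float, retention_secs: float,
                     ingest_msgs_per_sec: float,
                     replication_factor: int = 3,
                     headroom: float = 1.3) -> float:
    # [Disk Space] = [Average Message Size] x [Retention Period] x
    # [Ingestion Rate], then scaled up for replicas and a safety buffer.
    raw = avg_msg_bytes * retention_secs * ingest_msgs_per_sec
    return raw * replication_factor * headroom

def partition_count(peak_throughput_mb_s: float,
                    per_partition_mb_s: float) -> int:
    # [Number of Partitions] = [Peak Throughput] / [Throughput per Partition],
    # rounded up so the peak never exceeds partition capacity.
    return math.ceil(peak_throughput_mb_s / per_partition_mb_s)

# 1 KB messages, 7-day retention, 5,000 msgs/sec, replication factor 3:
print(disk_space_bytes(1_000, 7 * 86_400, 5_000) / 1e12)  # ~11.8 TB
# 300 MB/sec peak with ~10 MB/sec per partition -> 30 partitions:
print(partition_count(300, 10))
```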

Now let's look at a few benchmarks for the sustained throughput that different broker sizes can deliver:

These benchmarks are drawn from AWS's guidance on right-sizing Apache Kafka clusters and use AWS m5 instances.

Formula for the sustained throughput limit (Equation 1), where r is the replication factor, #brokers the number of brokers, t_storage the per-volume storage throughput, and t_EBSnetwork / t_EC2network the per-broker EBS and EC2 network throughput:

max(t_cluster) ≤ min( max(t_storage) × #brokers / r,
                      max(t_EBSnetwork) × #brokers / r,
                      max(t_EC2network) × #brokers / (#consumer groups + r − 1) )
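
To make Equation 1 concrete, here is a small calculator; it mirrors the formula above, and the throughput inputs are whatever per-broker figures apply to your instance and volume types:

```python
def sustained_throughput_limit_mb_s(storage_mb_s: float,
                                    ebs_network_mb_s: float,
                                    ec2_network_mb_s: float,
                                    brokers: int, r: int,
                                    consumer_groups: int) -> float:
    # Cluster ingress is capped by whichever saturates first: the volumes
    # (which absorb r copies of the data) or the brokers' network, which
    # carries replication traffic plus one egress stream per consumer group.
    return min(
        storage_mb_s * brokers / r,
        ebs_network_mb_s * brokers / r,
        ec2_network_mb_s * brokers / (consumer_groups + r - 1),
    )
```

Note that the recommended limits in the tables below sit noticeably under the raw configuration (for example, 384 MB/sec against 480 MB/sec provisioned), leaving headroom for spikes and recovery traffic.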

For a three-node cluster with a replication factor of 3 and two consumer groups, the recommended ingress throughput limits per Equation 1 are as follows.

EBS volume: baseline throughput of 250 MB/sec.

Broker size    Recommended sustained throughput limit
m5.large       58 MB/sec
m5.xlarge      96 MB/sec
m5.2xlarge     192 MB/sec
m5.4xlarge     200 MB/sec

EBS volume: provisioned throughput of up to 1,000 MB/sec.

Broker size    Provisioned throughput configuration    Recommended sustained throughput limit
m5.4xlarge     480 MB/sec                              384 MB/sec
m5.8xlarge     850 MB/sec                              680 MB/sec
m5.12xlarge    1000 MB/sec                             800 MB/sec
m5.16xlarge    1000 MB/sec                             800 MB/sec

Latencies

(Figure: comparing throughput and put latencies across the broker sizes above.)