

Review these Azure Service Bus best practices

When applications talk, you need to listen. Learn about Microsoft's cloud messaging service, its main features, best practices to follow and practical applications.

Cloud architectures can decouple application components, which necessitates messaging to send instructions and updates between the disparate pieces. With the wide variety of devices and network types in cloud computing, messages must travel through all sorts of conditions.

Microsoft Azure Service Bus, a managed message broker service, was built to address these problems. Examine this breakdown of best practices and get the most use out of this valuable offering.

The message router

Messages that contain data and instructions travel between the server and clients in decoupled cloud architectures. Azure Service Bus passes messages as binary encoded versions of JSON, XML or plain text-formatted data. It provides a reliable and secure platform for asynchronous data and state transfer between an application and its server components.

Azure Service Bus features include message sessions, auto-forwarding, dead-lettering, scheduled delivery, batching, transactions, filtering and actions, auto-delete on idle, duplicate detection, security based on role-based access control and shared access signature standards, and AMQP 1.0 and HTTP/REST protocols.

To use Azure Service Bus, cloud customers work with client libraries available for popular languages and standards, such as .NET, Java and the Java Message Service (JMS) API. Azure Service Bus integrates with these Azure services: Event Grid, Logic Apps, Functions, Dynamics 365 and Stream Analytics.

However, don't default to Service Bus for every workload. For applications that require extremely high ingestion throughput or event routing, Azure users should consider Event Hubs, a data ingestion service, or Event Grid, an event routing service.

Azure Service Bus best practices

The features and settings users choose when they configure Service Bus and their applications on Azure affect performance, reliability and fault tolerance. Follow these Azure Service Bus best practices and considerations to ensure optimal performance and reliability.


Choose the right protocol for a specific job. Azure Service Bus can use one of three protocols:

  • Advanced Message Queuing Protocol (AMQP);
  • The proprietary Service Bus Messaging Protocol (SBMP); or
  • HTTP.

Of the three, AMQP and SBMP are more efficient. They maintain longer-lived connections than HTTP, provided the MessagingFactory object continues to run, and they support batching and prefetching for quicker access to temporary data stores. Additionally, use HTTPS instead of HTTP whenever possible.

MessagingFactory objects provide internal state management, so do not close them down after the system sends a message. Each new factory object an application creates requires a new connection to Azure Service Bus, which incurs unnecessary connection overhead.
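The cost of creating a factory per message can be sketched with a toy stand-in. The `MessagingFactory` class below is hypothetical -- it only counts connections to contrast the anti-pattern with the reuse pattern, and is not the real SDK:

```python
class MessagingFactory:
    """Toy stand-in for a Service Bus MessagingFactory (hypothetical;
    illustrates connection reuse, not the real SDK surface)."""
    connections_opened = 0

    def __init__(self):
        # Each new factory implies a fresh connection to the service.
        MessagingFactory.connections_opened += 1

    def send(self, message):
        return f"sent: {message}"

# Anti-pattern: a new factory (and connection) per message.
for msg in ["a", "b", "c"]:
    MessagingFactory().send(msg)
print(MessagingFactory.connections_opened)  # 3 connections for 3 messages

# Best practice: create one factory and reuse it for every send.
MessagingFactory.connections_opened = 0
factory = MessagingFactory()
for msg in ["a", "b", "c"]:
    factory.send(msg)
print(MessagingFactory.connections_opened)  # 1 connection for all messages
```

The same principle applies to the modern SDK clients, which are likewise designed to be created once and shared.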


When possible, perform operations -- such as send, receive and delete -- asynchronously. This concurrent processing, via the SendAsync method in the Service Bus client, enables applications to perform more tasks in a given period than serial execution would. After initiating the sends, the application can call .NET's Task.WaitAll to block until every send completes. Users of .NET can take asynchronous operations a step further: the language's async and await features enable an asynchronous receive loop -- while it is more complicated, it maximizes flexibility in program construction.
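The fire-all-sends-then-wait pattern can be sketched outside of .NET as well. The Python `asyncio` snippet below is an analogue, not the Service Bus SDK: `asyncio.gather` plays the role of Task.WaitAll, and the simulated `send_async` stands in for a real network call:

```python
import asyncio

async def send_async(message: str) -> str:
    # Simulated network send; a real client would await an SDK call here.
    await asyncio.sleep(0.01)
    return f"sent: {message}"

async def main() -> list:
    # Start every send concurrently, then wait for all of them to
    # finish -- the asyncio analogue of SendAsync plus Task.WaitAll.
    tasks = [send_async(f"msg-{i}") for i in range(5)]
    return await asyncio.gather(*tasks)

results = asyncio.run(main())
```

Because the sends overlap, five 10 ms operations complete in roughly one 10 ms window instead of five.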

Receive mode

On a queue or subscription client, there are two receive modes: PeekLock and ReceiveAndDelete. In PeekLock, the default mode for Azure Service Bus, receiving is a two-stage operation: the client retrieves and locks a message, then explicitly completes it to tell the service to delete it. ReceiveAndDelete mode combines both steps into a single request, which increases performance. However, with this method, there is a risk of losing messages if the client fails before it finishes processing them.
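The trade-off between the two modes can be illustrated with a small queue simulation. This is a conceptual sketch -- the class and method names are invented to mirror the semantics, not the SDK:

```python
from collections import deque

class ToyQueue:
    """Conceptual sketch of PeekLock vs. ReceiveAndDelete semantics."""
    def __init__(self, messages):
        self._messages = deque(messages)
        self._locked = set()

    def receive_and_delete(self):
        # One round trip: the message is gone as soon as it is handed out.
        return self._messages.popleft()

    def peek_lock(self):
        # Stage one: hand out the message but keep it, locked, on the broker.
        msg = self._messages.popleft()
        self._locked.add(msg)
        return msg

    def complete(self, msg):
        # Stage two: the client confirms processing; the broker deletes it.
        self._locked.discard(msg)

    def abandon(self, msg):
        # Processing failed: release the lock so the message is redelivered.
        self._locked.discard(msg)
        self._messages.appendleft(msg)

q = ToyQueue(["a", "b"])
m = q.peek_lock()            # "a" is locked, not lost
q.abandon(m)                 # client crashes mid-processing: "a" is requeued
m2 = q.receive_and_delete()  # "a" again -- but this time a crash would lose it
```

With PeekLock, a failed client only forfeits its lock; with ReceiveAndDelete, the same failure drops the message permanently.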

Client-side batching

Within a client, batching delays the sending of messages. Azure uses a batching interval of 20 milliseconds (ms) by default, but this wait time is configurable. By batching messages, IT teams avoid per-message connection setup and tear-down overhead. However, this capability is not available in all situations: client-side batching applies only to send and complete operations, not to receives, and it is not supported over HTTP.
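The mechanism amounts to buffering sends until an interval elapses, then shipping the buffer as one operation. The sketch below illustrates that idea with invented names and a manually driven flush; the real client does this transparently:

```python
import time

class BatchingSender:
    """Conceptual client-side batcher: hold messages for up to `interval`
    seconds, then ship them in one wire operation (hypothetical names)."""
    def __init__(self, interval=0.02):  # mirrors the 20 ms default
        self.interval = interval
        self._buffer = []
        self._deadline = None
        self.batches_sent = []

    def send(self, message):
        if not self._buffer:
            # First message in the batch starts the clock.
            self._deadline = time.monotonic() + self.interval
        self._buffer.append(message)

    def flush_if_due(self):
        if self._buffer and time.monotonic() >= self._deadline:
            self.batches_sent.append(list(self._buffer))  # one operation
            self._buffer.clear()

sender = BatchingSender()
for i in range(3):
    sender.send(f"msg-{i}")  # all three land in the same batch
time.sleep(0.03)             # let the 20 ms window expire
sender.flush_if_due()
```

Three sends cost one wire operation instead of three, at the price of up to 20 ms of added latency.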


Within Azure Service Bus, clients can prefetch additional messages while performing a receive operation. These messages are stored in a local cache and are not shared across clients. Once a client prefetches a message, other clients cannot access it until its lock is released. By fetching messages ahead of time, the client avoids a service round trip for each receive.
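The payoff is fewer round trips to the service. The sketch below simulates a receiver with a prefetch cache -- again a conceptual model with assumed names, not the SDK -- and counts trips to show the effect:

```python
from collections import deque

class PrefetchingReceiver:
    """Conceptual sketch of prefetch: each service round trip pulls
    `prefetch_count` extra messages into a local cache."""
    def __init__(self, service_queue, prefetch_count):
        self.service_queue = service_queue
        self.prefetch_count = prefetch_count
        self._cache = deque()
        self.round_trips = 0

    def receive(self):
        if not self._cache:
            self.round_trips += 1  # one trip fills the whole cache
            for _ in range(self.prefetch_count + 1):
                if self.service_queue:
                    self._cache.append(self.service_queue.popleft())
        return self._cache.popleft() if self._cache else None

queue = deque(f"msg-{i}" for i in range(10))
receiver = PrefetchingReceiver(queue, prefetch_count=9)
messages = [receiver.receive() for _ in range(10)]
```

With a prefetch count of 9, ten receives cost a single round trip; with prefetch disabled, each receive would pay the trip itself.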

Settings for specific circumstances

Certain applications demand either high throughput or low latency in messaging between the client and server components of the application. For these setups, apply the following settings, in addition to the general Azure Service Bus configuration best practices, to make a significant difference in performance.

High-throughput queue

In some applications, each client sends a large volume of data -- for example, readings collected in the field. In this scenario, configure messaging to push as much data as possible to the server side, which means batching messages for maximum throughput.

Use these settings in Azure Service Bus to enable high throughput:

  • Use multiple message factories to create senders and receivers.
  • Each sender should use asynchronous operations or multiple threads.
  • Change the batching interval to 50 ms, or go up to 100 ms if there are multiple senders.
  • Leave batched store access enabled.
  • Set the prefetch count to 20 times the maximum processing rate of all receivers in a single factory.
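The prefetch rule above is simple arithmetic; the helper below just encodes it. The function name and parameters are invented for illustration:

```python
def recommended_prefetch(max_rate_per_sec: float,
                         receivers_in_factory: int = 1) -> int:
    """Per the guidance above: set prefetch to ~20x the combined maximum
    processing rate of all receivers sharing one messaging factory."""
    return int(20 * max_rate_per_sec * receivers_in_factory)

# e.g. four receivers on one factory, each handling up to 50 msgs/sec:
count = recommended_prefetch(50, receivers_in_factory=4)  # 4000
```

Sizing the cache to roughly 20 seconds' worth of work keeps receivers fed between round trips without locking far more messages than the factory can process.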

Low-latency queue

In another situation, the application must convey accurate, up-to-date information, such as a client's real-time location on a map. Here, an occasional missed message is acceptable because the next update arrives so quickly, but it's important that each message comes across as fast as possible.

This mandate is achieved by disabling batching:

  • Disable client-side batching and batched store access.
  • Set the prefetch count to 20 times the processing rate of the receiver on a single client -- in the case of multiple clients, set the prefetch count to 0.
