MQ Cluster Workload Balancing with Preferred Locality - Part 3

By Neil Casey


Welcome back to week 3 of this short blog series on extending MQ clusters to deliver messages to geographically local destinations. This week, we look at a second option for delivering to a local destination if possible, but still finding another queue manager hosting the queue if none of the preferred queue managers is available.

Option 2. Join all service provider queue managers to both clusters

In this option, each service provider queue manager is joined to both the USA and EUROPE clusters. If all the CLWLPRTY values were the same, this would not be useful, because messages would simply be distributed to any available queue instance. To create a location preference, the CLWLPRTY of each cluster receiver channel is changed so that a local queue manager has a higher priority than a distant one.

The PARIS1, PARIS2, and PARIS3 queue managers have CLWLPRTY set to 5 on the cluster receiver channel in the EUROPE cluster, but set to 0 on the cluster receiver channel in the USA cluster. The ATLANTA1, ATLANTA2, and ATLANTA3 queue managers have the reverse configuration, with CLWLPRTY(5) on the USA cluster receiver and CLWLPRTY(0) on the EUROPE cluster receiver.

The queue managers making requests do not join the 'distant' cluster. They remain members of their own locality cluster, and can see the distant service queue managers through that cluster. This configuration is shown in Figure 4 Overlapping clusters for workload distribution.
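The channel definitions for one of the Paris queue managers might look something like the following sketch. The channel names, host name, and port are assumptions for illustration, not from a real environment. The key point is that each service provider queue manager needs a separate cluster receiver channel per cluster, so that CLWLPRTY can differ between the two clusters:

```
* On PARIS1 (hypothetical channel and connection names)
* Cluster receiver for the local EUROPE cluster: high priority
DEFINE CHANNEL(EUROPE.PARIS1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
       CONNAME('paris1.example.com(1414)') +
       CLUSTER(EUROPE) CLWLPRTY(5) REPLACE

* Cluster receiver for the distant USA cluster: low priority
DEFINE CHANNEL(USA.PARIS1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
       CONNAME('paris1.example.com(1414)') +
       CLUSTER(USA) CLWLPRTY(0) REPLACE
```

The ATLANTA* queue managers would define the equivalent pair of channels with the priorities reversed: CLWLPRTY(5) on the USA cluster receiver and CLWLPRTY(0) on the EUROPE one.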

Figure 4 Overlapping clusters for workload distribution

What happens now when the LONDON1 queue manager puts a message to Q1?

In the normal case when one or more of PARIS1, PARIS2, and PARIS3 are available, the message is sent to one of them, through the EUROPE cluster, because they have a higher CLWLPRTY than the ATLANTA1, ATLANTA2, and ATLANTA3 queue managers in the EUROPE cluster. If all of the PARIS* queue managers are unavailable, the message will be directed to one of the ATLANTA* queue managers, because they are visible in the EUROPE cluster, and can be reached.
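You can sanity-check what LONDON1 sees by running DISPLAY CLUSQMGR in runmqsc on LONDON1, which shows the CLWLPRTY each cluster queue manager advertises on its cluster receiver channel (queue manager names follow this example; the exact output layout varies by MQ version):

```
* In runmqsc on LONDON1
DISPLAY CLUSQMGR(*) CLUSTER CLWLPRTY
```

With the configuration above, the PARIS* entries in the EUROPE cluster should report CLWLPRTY(5) and the ATLANTA* entries CLWLPRTY(0), which is why the Paris instances win while any of them remain available.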

Advantages of this configuration:

  • Lower latency than option 1, because it avoids the hop through the bridge queue manager
  • No need to run extra queue managers providing bridge functions


Disadvantages of this configuration:

  • Strict separation of the service requester and service provider queue managers is needed. If a request is sent from a service provider queue manager, it does not stay local, because that queue manager sees both the local and remote clusters with the same priority when the routing decision is made.

  • Extra definitions are needed to join the service provider queue managers to multiple clusters
  • No separation of administrative control of the queue managers, because they have to join both clusters
  • All the cluster queues have to be defined using a namelist so that they are visible in both clusters.
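The last point can be illustrated with a sketch of the definitions on a service provider queue manager. The namelist name is a hypothetical choice for this example:

```
* Hypothetical namelist covering both clusters
DEFINE NAMELIST(BOTH.CLUSTERS) NAMES(USA, EUROPE) REPLACE

* Advertise Q1 in both clusters via the namelist
* (CLUSNL replaces the single-cluster CLUSTER attribute)
DEFINE QLOCAL(Q1) CLUSNL(BOTH.CLUSTERS) DEFBIND(NOTFIXED) REPLACE
```

DEFBIND(NOTFIXED) lets the cluster workload algorithm choose an instance for each message, rather than fixing the destination when the queue is opened, so failover to the distant instances can happen mid-stream.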

Next week, the last post in the series will cover a configuration which can correctly balance to the local queue instances, but only needs a single cluster.



Neil Casey is a Senior Integration Consultant with Syntegrity. He has a solid background in enterprise computing and integration. He focuses on finding technically elegant solutions to the most challenging integration problems.
