MQ Cluster Workload Balancing with Preferred Locality - Part 4

By Neil Casey

Welcome to the last post in this short series about using MQ clusters to enforce location preferences, while also providing Disaster Recovery by allowing messages to reach distant queue managers if no local destinations are available.

Earlier posts are available on the Syntegrity Solutions site if you want to refresh your memory, or if you have arrived directly at this post.

Option 3. Merge the clusters into a single cluster

The third option is to eliminate the two separate clusters altogether, and to manage the routing preference using alias queues. The USA and EUROPE clusters are destroyed, and the queue managers are all joined to a new cluster called WORLD.

The local queues (Q1) on the service provider queue managers remain as they are, except that they are removed from the cluster. The application reading from them continues to use the same queue name.
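
A minimal MQSC sketch of that change, assuming Q1 was previously advertised in the old EUROPE or USA cluster:

* On each service provider queue manager: keep Q1 local,
* but stop advertising it in any cluster
ALTER QLOCAL(Q1) CLUSTER(' ')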

A new QAlias is created for the local site preference. It is made visible in the cluster and given a high CLWLPRTY. The alias name is formed by appending the location to the queue name (E or U in this example).

So, for Q1 on PARIS1, a QAlias called Q1.E is created, and made visible in the cluster, which is now cluster WORLD. It has a CLWLPRTY of 5. The same QAlias definition is made on PARIS2 and PARIS3. The Q1.E aliases point to local queue Q1 in each case.
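
In MQSC, the definition on each of the PARIS* queue managers might look like this sketch:

* On PARIS1, PARIS2 and PARIS3: preferred local instances
DEFINE QALIAS(Q1.E) TARGET(Q1) CLUSTER(WORLD) CLWLPRTY(5)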

For this configuration, we need to define a QAlias called Q1 on LONDON1. The target queue is Q1.E, giving messages sent from LONDON1 a preference for being sent to one of the PARIS* queue managers if possible.
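
Because the putting application still opens Q1, the alias on LONDON1 is deliberately not advertised in the cluster; it simply redirects local puts to the regional name. A sketch:

* On LONDON1: redirect application puts to the European instances
DEFINE QALIAS(Q1) TARGET(Q1.E)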

At the moment, there are 3 instances of Q1.E visible in the WORLD cluster. If none of them were available, the message could not be sent.

So, to provide DR for the EUROPE location, a queue alias is created on each of the ATLANTA* queue managers, also called Q1.E. The CLWLPRTY of Q1.E on the ATLANTA* queue managers is low, so the message is only sent there if the PARIS* queue managers cannot be reached. Again, the Q1.E alias points to local queue Q1.
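
The DR aliases reuse the same cluster queue name at the default (lowest) priority, for example:

* On ATLANTA1, ATLANTA2 and ATLANTA3: DR path for European traffic
DEFINE QALIAS(Q1.E) TARGET(Q1) CLUSTER(WORLD) CLWLPRTY(0)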

The configuration for the former USA cluster queue managers is similar. NEWYORK1 gets a QAlias called Q1, which has a target of Q1.U.
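
The NEWYORK1 alias mirrors the LONDON1 one:

* On NEWYORK1: redirect application puts to the American instances
DEFINE QALIAS(Q1) TARGET(Q1.U)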

The 3 ATLANTA* queue managers have the Q1 local queue removed from the cluster, and a new QAlias called Q1.U created in the WORLD cluster, with a high CLWLPRTY. This provides a normal path for messages originating in the US or Americas (NEWYORK1 in our example).
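
Sketched in MQSC (removing Q1 from the cluster uses the same ALTER shown earlier):

* On ATLANTA1, ATLANTA2 and ATLANTA3: preferred local instances
DEFINE QALIAS(Q1.U) TARGET(Q1) CLUSTER(WORLD) CLWLPRTY(5)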

The 3 PARIS* queue managers now get a QAlias called Q1.U with a low CLWLPRTY, and visible in the WORLD cluster. This is the DR path used only if all ATLANTA* queue managers are unavailable.
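
And the matching DR definitions:

* On PARIS1, PARIS2 and PARIS3: DR path for American traffic
DEFINE QALIAS(Q1.U) TARGET(Q1) CLUSTER(WORLD) CLWLPRTY(0)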

The new single-cluster configuration is shown in Figure 5.

Figure 5 Single cluster providing geographic preference

We can understand how the workload distribution will work by looking at the NEWYORK1 queue manager as an example.

An application there puts a message on queue Q1. The queue manager resolves the alias to discover it should send the message to queue Q1.U.

The queue manager finds 6 instances of Q1.U: 3 with a CLWLPRTY of 5, and the other 3 with a CLWLPRTY of 0. If any of the CLWLPRTY(5) instances (the ones on the ATLANTA* queue managers) are available, the queue manager will workload balance the message across those queue managers. If none of them are available, the message will be sent to one of the PARIS* queue managers instead.
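
You can confirm what NEWYORK1 knows about the cluster queue with a DISPLAY command like the one below (assuming the cluster information has propagated to NEWYORK1); it should report all 6 instances with their priorities:

* Run in runmqsc on NEWYORK1
DISPLAY QCLUSTER(Q1.U) CLWLPRTY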

The objective of keeping messages close to the originating queue manager in normal circumstances is met, as is the secondary objective of allowing the messages to reach a more distant site if necessary.

The third design has several advantages:

• Only one cluster is needed
• No bridge queue manager
• Lower latency than option 1
• Easy to extend to more than 2 localities
• No need to separate service provider and service consumer queue managers
• When extending beyond 2 localities, the extra locations can have a delivery hierarchy, or can all be equal

Disadvantages:

• Alias queues must be defined on all queue managers

Conclusion

My preference as an MQ Administrator is to minimise the number of clusters which are created and managed, and especially to limit cluster overlap. Overlapping clusters lead to multiple paths between queue managers. If there are different MCAUSERs on the different channels, it can be very difficult to ensure that messages flow on channels which will provide the correct authority when the message reaches the destination queue manager.

Bridge queue managers, on the other hand, tend to become critical points of failure. Multiple bridge queue managers can be defined, but that can easily lead to loops in which a message is passed from queue manager to queue manager, resolving to another alias queue each time, and stays "in limbo" for a long time before finally reaching a local queue from which an application can read it.

I've discussed 4 different ways to build clusters with geographic preferences, although only 3 of them provide for Disaster Recovery.

That's because no one solution is perfect for every customer.

But where it doesn't compromise separation of responsibilities between geographic zones, I like option 3. The single cluster option wins my vote.


Love this story? Subscribe to the Syntegrity Solutions newsletter here and get stories like this delivered to your inbox every month.

Neil Casey is a Senior Integration Consultant with Syntegrity. He has a solid background in enterprise computing and integration. He focuses on finding technically elegant solutions to the most challenging integration problems.
