MQ Cluster Workload Balancing with Preferred Locality - Part 1

By Neil Casey

This is the first of a short series of blog posts that proposes several options for distributing messages through MQ clusters so that they are processed at a (relatively) local data centre, rather than being distributed evenly to all available queue instances.

The first post looks at just splitting the traffic, without providing Disaster Recovery.

Introduction

An MQ cluster can provide several advantages compared with conventional distributed queueing. See the IBM MQ documentation comparing clustering with distributed queuing for an overview.

One of the benefits of clusters is workload balancing. Messages put to a queue name that is hosted on several queue managers are distributed across those instances, providing horizontal scalability. The IBM manuals describe how:

• multiple networks can be used to implement standby routes within a cluster;
• workload can be balanced between queue instances; and
• some queues can be standby queues, used only when the normally active queues fail or are otherwise unavailable.

The list is not exhaustive.
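As a concrete (if hypothetical) illustration of the last two points, the MQSC sketch below defines the same queue name on three queue managers in a cluster. The cluster name, queue name and queue manager roles are invented for this example; CLWLPRTY makes two instances the preferred, workload-balanced targets, and a third instance a standby that is chosen only when the preferred instances are unreachable.

    * Hypothetical sketch: run in runmqsc on each queue manager in cluster DEMO.

    * On QM1 - an active instance; DEFBIND(NOTFIXED) lets each message be
    * workload balanced individually rather than fixing the destination at MQOPEN.
    DEFINE QLOCAL(APP.REQUEST) CLUSTER(DEMO) CLWLPRTY(9) DEFBIND(NOTFIXED) REPLACE

    * On QM2 - a second active instance; traffic is balanced across QM1 and QM2.
    DEFINE QLOCAL(APP.REQUEST) CLUSTER(DEMO) CLWLPRTY(9) DEFBIND(NOTFIXED) REPLACE

    * On QM3 - a standby instance; the lower CLWLPRTY means it receives messages
    * only when the channels to the higher-priority instances are unavailable.
    DEFINE QLOCAL(APP.REQUEST) CLUSTER(DEMO) CLWLPRTY(1) DEFBIND(NOTFIXED) REPLACE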

Keeping messages within a region

It might be useful to have queue instances available in multiple locations, and to configure the cluster(s) so that all messages go to a nearby instance of the queue.

For example, imagine a company which operates in the USA and in Europe. It has one data centre in Atlanta, and another in Paris.

The company has a product lookup service which is accessed through MQ messages. A request message is placed on a clustered request queue, and workload balancing in an MQ cluster selects the instance that receives it. The service is provided by application instances connected to several queue managers in Atlanta, and to other queue managers in Paris. In the figures below, the queue name is shortened to Q1.

To minimize trans-Atlantic traffic, they want MQ messages for the product lookup service to be sent to the nearest data centre. Request messages from New York or Dallas should be sent to Atlanta. Messages from London or Madrid should be sent to Paris.

They expect that this will reduce their communications costs, and reduce response latency and its variability, because the messages will traverse links with shorter network delays.

The company could implement two clusters: one cluster (USA) for the US and Americas messages, and another cluster (EUROPE) for the European and Asian messages. The same queue names would be created in both clusters. Figure 1 shows the USA cluster, with three queue managers exposing queue Q1 to all queue managers in the USA cluster.

Figure 1 - USA Cluster
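A minimal MQSC sketch of the USA side is shown below. The host names, channel names, and the choice of ATLANTA1 as a full repository are assumptions made for illustration; the essential point is that the queue managers and queue Q1 are advertised in the USA cluster only.

    * Hypothetical sketch: run in runmqsc on ATLANTA1 (assumed full repository for USA).
    * A second full repository and its matching CLUSSDR definitions are omitted for brevity.
    ALTER QMGR REPOS(USA)
    DEFINE CHANNEL(TO.ATLANTA1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
           CONNAME('atlanta1.example.com(1414)') CLUSTER(USA) REPLACE

    * Advertise the service queue to the USA cluster; repeat the same DEFINE
    * on ATLANTA2 and ATLANTA3 so that three instances of Q1 are available.
    DEFINE QLOCAL(Q1) CLUSTER(USA) DEFBIND(NOTFIXED) REPLACE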

Figure 2 shows the configuration of the EUROPE cluster.

Figure 2 - Europe Cluster
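On the European side, LONDON1 joins the EUROPE cluster only. Again, the channel and host names below are invented; the CLUSRCVR advertises LONDON1 to the cluster, and the manually defined CLUSSDR points at a full repository (assumed here to be PARIS1) so that LONDON1 can learn about the Paris instances of Q1.

    * Hypothetical sketch: run in runmqsc on LONDON1 (a partial repository in EUROPE).
    DEFINE CHANNEL(TO.LONDON1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
           CONNAME('london1.example.com(1414)') CLUSTER(EUROPE) REPLACE
    DEFINE CHANNEL(TO.PARIS1) CHLTYPE(CLUSSDR) TRPTYPE(TCP) +
           CONNAME('paris1.example.com(1414)') CLUSTER(EUROPE) REPLACE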

This meets the requirement of keeping messages within their respective geographies. A message put to queue Q1 from the LONDON1 queue manager will be sent to one of PARIS1, PARIS2 or PARIS3, because these are the only instances of Q1 in the EUROPE cluster. The instances in the USA cluster are not available to the LONDON1 queue manager because it is not a member of the USA cluster.
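That behaviour can be checked from LONDON1 with runmqsc: under the assumptions above, the DISPLAY commands below should return only the EUROPE cluster objects, with no sign of the Atlanta queue managers or their instances of Q1.

    * On LONDON1 - list the cluster instances of Q1 that this queue manager can resolve.
    DISPLAY QCLUSTER(Q1) CLUSTER CLUSQMGR
    * List the cluster queue managers that LONDON1 knows about (the EUROPE members only).
    DISPLAY CLUSQMGR(*) CLUSTER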

More to come in my next blog post next week…


Neil Casey is a Senior Integration Consultant with Syntegrity. He has a solid background in enterprise computing and integration. He focuses on finding technically elegant solutions to the most challenging integration problems.
