Build Replicated Data Queue Manager On RHEL7

By Neil Casey

IBM has recently released Continuous Delivery version 9.0.4 of MQ for (some) distributed systems. In this article, we work through the steps required to build and configure three virtual machines into an HA MQ cluster.

Introduction

IBM has recently released Continuous Delivery version 9.0.4 of MQ for (some) distributed systems. This package has a number of new features, including enhancements to RESTful administration and support for additional platforms, but this article concerns one completely new feature.

IBM MQ CD 9.0.4 makes it possible to run MQ as a replicated data queue manager (RDQM) on a RHEL 7.3 or RHEL 7.4 cluster, making use of DRBD for disk replication and Pacemaker for cluster management.
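
To illustrate how little configuration this involves, the HA group is described in a small configuration file and then initialised with a single command. The sketch below is based on my reading of the 9.0.4 documentation; the hostnames and replication addresses are placeholders, so substitute your own. The file is /var/mqm/rdqm.ini and should be identical on all three nodes:

    Node:
      Name=rdqm1.example.com
      HA_Replication=192.168.4.1
    Node:
      Name=rdqm2.example.com
      HA_Replication=192.168.4.2
    Node:
      Name=rdqm3.example.com
      HA_Replication=192.168.4.3

With the file in place, the Pacemaker cluster is created by running the following (as root, or a suitably privileged user) on each of the three nodes:

    rdqmadm -c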

Without requiring any additional cluster software (such as Red Hat Cluster Services or VCS, which would add to the software licensing cost), MQ delivers a clustered failover capability that is straightforward to implement and support without in-depth cluster software skills.

The cluster enabled by IBM MQ CD 9.0.4 requires exactly three members, and allows MQ to keep running as long as two or more of the cluster members are operational. Queue managers cannot run if fewer than two of the nodes are operational and have network visibility of each other. This requirement for at least two visible nodes forms a quorum in the cluster, and (mostly) prevents split-brain syndrome, where data on the different systems can get out of sync.
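
To give a sense of how a queue manager sits on top of this cluster, a replicated data queue manager is created on one node (which becomes its initial primary) and Pacemaker then controls where it runs. A minimal sketch, assuming a queue manager named QM1 (the name is just an example):

    # On the node that should initially host the queue manager,
    # create the replicated data queue manager
    crtmqm -sx QM1

    # On any of the nodes, check the HA status of QM1,
    # including which node is currently the primary
    rdqmstatus -m QM1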

This article describes and illustrates the steps I used to build a three-node cluster and to test failover reliability and failover time. It highlights information that is currently missing from the manuals, and collects steps from many separate sections of the documentation into a single recipe that can be followed more easily.

Each section of the recipe has a link back to the corresponding manual section so that the reader can explore the information directly from IBM.

The environment used for building the cluster was VMware Fusion Pro 8.5.8 running on Apple MacBook Pro hardware with an Intel Core i7 processor, providing 4 CPU cores with hyperthreading, and 16GB of RAM. Each guest was configured with 4GB of RAM. The host OS was macOS 10.13.1 High Sierra.

For production use, physical hardware or a virtualisation environment providing separate physical hosting for each cluster member is critical; otherwise a single hardware failure could cause loss of cluster quorum. Very high performance (preferably 10 Gbps or better), low-latency dedicated networks are also needed if the clustered queue manager is to perform close to the level of a stand-alone queue manager on similar hardware. For the cluster to reliably recognise failed states, the heartbeat and replication networks should be isolated from each other, and should use dedicated hardware NICs, not virtual NICs that share underlying hardware.
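
The rdqm.ini format allows the monitoring (heartbeat) and replication traffic to be separated in exactly this way. The stanza below is a sketch assuming the HA_Primary, HA_Alternate and HA_Replication keywords and placeholder addresses; the first two nominate heartbeat/monitoring interfaces, and the third carries the DRBD replication traffic:

    Node:
      Name=rdqm1.example.com
      HA_Primary=192.168.4.1
      HA_Alternate=192.168.5.1
      HA_Replication=192.168.6.1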

Neil Casey is a Senior Integration Consultant with Syntegrity. He has a solid background in enterprise computing and integration. He focuses on finding technically elegant solutions to the most challenging integration problems.
