Looking Beyond MQ

Improve levels of service and avoid costly mistakes

It's likely you've been working with WebSphere MQ (WMQ) for years, developing, deploying, monitoring, or all of the above. It's also likely that by now you have assembled a tool bag full of items to support your implementations and ongoing operations. And you've no doubt become accustomed to dealing with problems within the MQ domain. But what if you could see more of the transactional journey on either side of MQ? Organizations doing this are finding new ways to improve levels of service and avoid costly upgrades.

Initially, just seeing the touch points (you know, the "puts" and the "gets") along with MQ administration probably seemed entirely sufficient to manage WMQ. A common assumption was that as long as we saw messages coming and going, things were okay. Analogous to Archimedes' principle of water displacement, the health of MQ, or of any other middleware system, could be assumed as long as both sides remained in a reasonable state of balance.

But it didn't take very long at all to see that more was needed. The difficulty was that by the time an imbalance was noticed, too many problems had already been caused; it was obvious that an earlier warning was needed to avert problems or at least correct them sooner. The need to watch things like queue depth, channel status, or any of the other 40 standard events that MQ raises became clear and pretty much commonplace.
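
If you want a concrete picture of that kind of early warning, the sketch below polls the current depth of a queue with the Python pymqi client and raises a warning once a threshold is crossed. It is only an illustration: the queue manager, channel, connection string, queue name, and threshold are hypothetical placeholders, not values taken from this article.

    # Minimal sketch: poll current queue depth and warn before a backlog becomes a problem.
    # Assumes the pymqi client library; all connection details below are hypothetical.
    import pymqi

    QMGR = "QM1"                    # hypothetical queue manager
    CHANNEL = "DEV.APP.SVRCONN"     # hypothetical client channel
    CONN_INFO = "mqhost(1414)"      # hypothetical host(port)
    QUEUE = "ORDERS.IN"             # hypothetical queue to watch
    DEPTH_WARNING = 500             # hypothetical threshold

    def check_queue_depth():
        qmgr = pymqi.connect(QMGR, CHANNEL, CONN_INFO)
        try:
            queue = pymqi.Queue(qmgr, QUEUE, pymqi.CMQC.MQOO_INQUIRE)
            depth = queue.inquire(pymqi.CMQC.MQIA_CURRENT_Q_DEPTH)
            if depth >= DEPTH_WARNING:
                print(f"WARNING: {QUEUE} depth is {depth} (threshold {DEPTH_WARNING})")
            queue.close()
        finally:
            qmgr.disconnect()

    if __name__ == "__main__":
        check_queue_depth()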

If you required more precise information, to detect problems earlier and to see their impact on applications and business processes, you may have gone on to configure your own custom events based on conditions within your unique MQ implementation. And if you're really on top of the MQ management game, you've implemented automatic corrective actions that launch the moment warning signs appear, preventing problems before they occur.
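
As a rough illustration of a custom event paired with an automatic corrective action, the sketch below fires when the backlog grows across several consecutive samples and then launches a corrective script. The sampling function, thresholds, and script name are hypothetical stand-ins for whatever your own implementation provides.

    # Sketch of a custom event with an automated corrective action.
    # get_current_depth() and drain_backlog.sh are hypothetical stand-ins.
    import subprocess
    import time

    RISING_SAMPLES = 3      # hypothetical: fire after depth rises three polls in a row
    POLL_SECONDS = 60

    def watch_and_correct(get_current_depth):
        history = []
        while True:
            history.append(get_current_depth())
            history = history[-(RISING_SAMPLES + 1):]
            rising = len(history) > RISING_SAMPLES and all(
                later > earlier for earlier, later in zip(history, history[1:])
            )
            if rising:
                # Custom event fired: the backlog is steadily growing.
                subprocess.run(["./drain_backlog.sh"], check=False)  # hypothetical corrective script
                history.clear()
            time.sleep(POLL_SECONDS)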

The More Things Change...
Despite your success implementing and managing WMQ, and at the risk of stating the obvious, your needs continue to change due to some constant forces:

  • Increasing EAI complexity. Even though most organizations implemented WMQ for a single, specific project, the need to integrate it with other applications and systems continues to grow.
  • The need to do more with less. Shrinking resources mean that this growing complexity has to be managed with smarter tools and processes.
  • The demand for higher levels of service. Not just to track and report on the level of service delivered, but also to resolve problems faster, in real time, before they impact the business.
  • The requirement to align IT services with business objectives. Delivering and supporting services tailored specifically to the needs of the business process.

Amid a push to run faster and jump higher while carrying heavier loads, we commonly find the same mistake: overlooking the most practical solutions. What is the most practical solution to these challenges? We've seen that one of the most straightforward and effective things you can do is to expand your monitoring of middleware to include more of the entire application infrastructure.

Achieving An Application Perspective
If WMQ is the only middleware technology you have, you are unique. If you have any enterprise applications (commercial or homegrown) that do not also talk to Oracle, DB2, or SQL Server, then let's just say that you are in a class of your own! Unless we've just described your environment, you also have a wide collection of management and monitoring tools for all of these various platforms and systems, each of which lacks the ability to see or do much of anything beyond its own domain.

The solution is to monitor and manage more of the overall application infrastructure. It may sound like the holy grail to track communications across all of your applications and through all of your middleware systems as a contiguous whole, while at the same time correlating the events generated by each platform involved. However, the fact is that it is being done, and it's much simpler to do than most expect.
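
One way to picture what correlating the events generated by each platform means in practice is to join them on a shared transaction or correlation identifier. The sketch below is a simplified illustration; the event shape and field names are assumptions, not any product's actual schema.

    # Sketch: stitch events from MQ, the application API layer, and the database
    # into one transactional journey, keyed on a shared correlation id.
    from collections import defaultdict

    def correlate(events):
        """events: iterable of dicts such as
        {"source": "MQ", "corr_id": "TX-42", "ts": 1.0, "detail": "PUT ORDERS.IN"}"""
        journeys = defaultdict(list)
        for event in events:
            journeys[event["corr_id"]].append(event)
        for corr_id, steps in journeys.items():
            steps.sort(key=lambda e: e["ts"])
            trail = " -> ".join(f'{s["source"]}:{s["detail"]}' for s in steps)
            print(f"{corr_id}: {trail}")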

We found that to be the case at Debenhams, one of the United Kingdom's preeminent retailers, and saw an opportunity to improve our performance. Diagnosis of the problem wasn't really the issue: all the information we needed was there. But it was just taking too long to put it all together.

We had been doing a good job of sorting out problems within the MQ space for some time. Our problem was that most of our time was spent traipsing through the API layers of the applications integrated with MQ in order to find problems. The difficulty was seeing the whole picture. Think of it as pushing sausage meat along a sausage casing without any knots in it: we never knew quite where we were.

This is not an isolated problem. Often "blind spots" in the round trip of messages make it difficult to see just where the hang-ups are. Even more troublesome is the cover that these blind spots usually provide for the vendors involved to play the finger-pointing game while your valuable time is being wasted.

Expanding The Myopic MQ Perspective
At Debenhams we didn't have to look far for a solution. From our experience with Nastel AutoPilot, our existing MQ monitoring solution, and from our dialogue with its vendor, we saw how easily this type of problem could be handled. With the facts we could publish from the agents we had already deployed across our AS/400s, we were easily able to get access to all of these other points of information.

When you've got the right monitoring platform, the biggest task is often defining which data metrics you want to monitor. But if the tool you are using is built on a service-oriented architecture, supports open standard interfaces, treats each metric as a portable object with its own metadata and methods, and makes it easy to define your own new events, then you should be able to quickly and easily monitor any data metric with rule and correlation engines, and invoke automated or manual corrective actions.
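
To make the idea of metrics as portable objects a little more tangible, here is a rough sketch in plain Python, not any vendor's API, of facts that carry their own metadata alongside a simple rule engine evaluated over them. Every class, field, and threshold here is an assumption chosen for illustration.

    # Sketch: a metric carried as a self-describing object, evaluated by simple rules.
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class Fact:
        name: str                      # e.g. "ORDERS.IN.depth"
        value: float
        metadata: Dict[str, str] = field(default_factory=dict)  # units, source, host, ...

    @dataclass
    class Rule:
        description: str
        condition: Callable[[Fact], bool]
        action: Callable[[Fact], None]

    def evaluate(facts: List[Fact], rules: List[Rule]) -> None:
        for fact in facts:
            for rule in rules:
                if rule.condition(fact):
                    rule.action(fact)

    # Usage sketch: alert when any depth metric exceeds 500 messages.
    rules = [Rule(
        description="queue depth too high",
        condition=lambda f: f.name.endswith(".depth") and f.value > 500,
        action=lambda f: print(f"ALERT {f.name}={f.value} {f.metadata}"),
    )]
    evaluate([Fact("ORDERS.IN.depth", 742, {"units": "messages", "source": "MQ"})], rules)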

For example, in Debenhams' case, three business-critical applications are "glued" together by WMQ. Correlating facts from the API layers of those applications with the facts already coming from MQ allows them to see the end-to-end transactional journey.

As you can see in Figure 1, both the applications and their API layers are being monitored, all the way into the database that supports the application, and facts are being published along the way. The API layer can initiate a piece of work that takes a good deal of time. For example, if we are processing price changes for a substantial department with tens of thousands of SKUs, the job can take up to half an hour to complete. We need to know this beforehand, as opposed to those jobs that are perceived to be long-running and turn out to involve only a hundred records or so.
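
A hedged sketch of that "know this beforehand" point: if the record count is published as a fact when a job starts, the expected duration can be estimated up front and compared with what actually happens. The throughput figure and job names below are invented for illustration.

    # Sketch: flag up front which jobs will genuinely be long-running,
    # and afterwards which jobs ran long despite a small record count.
    RECORDS_PER_MINUTE = 1200        # hypothetical observed throughput
    LONG_RUNNING_MINUTES = 20

    def expected_minutes(record_count: int) -> float:
        return record_count / RECORDS_PER_MINUTE

    def classify(job_name: str, record_count: int, actual_minutes: float) -> str:
        expected = expected_minutes(record_count)
        if expected >= LONG_RUNNING_MINUTES:
            return f"{job_name}: expected long-running ({expected:.0f} min for {record_count} records)"
        if actual_minutes >= LONG_RUNNING_MINUTES:
            return f"{job_name}: unexpectedly slow ({actual_minutes:.0f} min for only {record_count} records)"
        return f"{job_name}: normal"

    print(classify("PRICE_CHANGES", 36000, 28))   # a big department: expected to run long
    print(classify("PRICE_CHANGES", 100, 25))     # a small job that should not take this long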

So what was it like implementing all of this monitoring? It took a bit of work, but don't get the idea that it was difficult or impractical; it was relatively straightforward with the tools we used at Debenhams. We've got one guy who's written some generic code on the AS/400, which is hooked into the way we publish facts to our monitoring system. We collect several different sorts of things. For example, you can easily find out how many records have been received, so that's an easy fact to publish. Some of the other facts we get from the API layer are far more complex: working out which processes are active and, if they are active, interrogating the files they're reading to see how the queue depths are going.
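
To give a flavor of what publishing such facts might look like, here is a sketch that packages name/value facts as JSON and puts them on a dedicated monitoring queue using the Python pymqi client. The connection details, queue name, and fact names are assumptions, and the actual Debenhams code runs on the AS/400 rather than in Python.

    # Sketch: publish collected facts as JSON messages onto a monitoring queue.
    # Connection details and queue names below are hypothetical.
    import json
    import time

    import pymqi

    QMGR, CHANNEL, CONN_INFO = "QM1", "DEV.APP.SVRCONN", "mqhost(1414)"
    FACTS_QUEUE = "MONITORING.FACTS"   # hypothetical queue the monitoring system reads

    def publish_facts(facts: dict) -> None:
        qmgr = pymqi.connect(QMGR, CHANNEL, CONN_INFO)
        try:
            queue = pymqi.Queue(qmgr, FACTS_QUEUE)
            payload = {"timestamp": time.time(), "facts": facts}
            queue.put(json.dumps(payload).encode("utf-8"))
            queue.close()
        finally:
            qmgr.disconnect()

    # Usage sketch: the easy fact (records received) and a more involved one.
    publish_facts({"price_changes.records_received": 36000,
                   "price_changes.process_active": True})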

The Payback
Expanding visibility beyond the bounds of your middleware, so that you begin to see more of what is happening within your applications, will always yield better ways to monitor and tune the system, and smarter ways to use it.

At Debenhams, we estimate that our monitoring saves us at least two to three hours in staff time every day. The most sizable savings is in the time it takes to find problems. For example, our application's vendor used to ask, "How did you know we were doing that?" But they work very closely with us and have been on-site when we were working out the problems inside of MQ. Therefore, they saw the tool we have and learned that we also have quite a lot of MQ knowledge. So now when we think we've got a problem, they don't question it; they just go fix it.

During Debenhams' busy time of the year, they run more shifts and the volumes start going up. Chris' group wanted to be able to isolate which particular jobs were causing an issue in the API layers. Once they were identified, the group could then see if they could either throw more resources at them or get them rewritten. As a result of this visibility, we saved thousands of dollars that we would have otherwise wasted just treating the symptoms.

One of the critical processes within Debenhams' operation involves a relatively simple transformation on the source application, where the data gets pushed through MQ quite quickly once the users gathering the data come off their shifts. Within the target application, however, the route is rather tortuous: the data goes through a number of files and a number of rigorous data transformations before it eventually gets into the target database and pops out the other side. It was at this point in the process that we were finding some problems. Not only were we able to easily see what needed to be fixed, but we also extended visibility of the process to our users. Now they see the path of transactions pictorially and can watch the whole process, easily spotting problems without having to understand any of the complexities.

Best Practices
So where do you start with this practical approach to monitoring your application infrastructure in a proactive manner? Here are the ways you can make it happen, and get the visibility to go beyond mere MQ monitoring:

  • Use a monitoring platform that is built on a service-oriented architecture, designed to operate in real time, and is modular and extensible with support for open standards.
  • Identify specific data metrics (facts) that will give you better insight into the health of your system. This has a compound effect: the more you watch these metrics, the more you'll discover which points of data are the real determinants. Remember, the facts about your environment are all out there; you just need to locate them and pull them together in order to monitor the health of your system.
  • Build simple modular views of these metrics, so that you can correlate them with each other into a hierarchy that represents the complex events within your system; a small sketch of this rollup appears after this list. This way you'll see the warning signs of problems before your application goes to production (or before your customers pick up the phone, if it's already there). Most of all, you'll spend much less time chasing false alarms.
  • Share the visibility with other stakeholders of the system, particularly application groups and business units. You'll empower them to solve many of their own problems without bothering you. It also reduces the time wasted on needless, mundane inquiries and raises the quality of the questions you do get.
  • Implement corrective actions for those recurring conditions where the resolution is consistently the same set of actions.
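
To illustrate the modular views and hierarchy mentioned in the third point above, here is a small sketch that rolls individual metric statuses up into a view whose overall status reflects its worst child. The structure and status values are assumptions chosen for clarity, not a specific product's model.

    # Sketch: roll individual metric statuses up a hierarchy so the top-level view
    # shows the worst condition anywhere beneath it.
    from dataclasses import dataclass, field
    from typing import List

    SEVERITY = {"OK": 0, "WARNING": 1, "CRITICAL": 2}

    @dataclass
    class Node:
        name: str
        status: str = "OK"                       # leaf metrics carry their own status
        children: List["Node"] = field(default_factory=list)

        def rollup(self) -> str:
            statuses = [self.status] + [child.rollup() for child in self.children]
            return max(statuses, key=lambda s: SEVERITY[s])

    # Usage sketch: an order-processing view built from MQ, API-layer, and database metrics.
    view = Node("Order Processing", children=[
        Node("MQ", children=[Node("ORDERS.IN depth", "WARNING"), Node("Channel status", "OK")]),
        Node("API layer", children=[Node("Price-change job", "OK")]),
        Node("Database", children=[Node("Tablespace free", "OK")]),
    ])
    print(view.rollup())   # prints WARNING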

More Stories By David Mavashev

David Mavashev, CEO of Nastel, is a leading expert on IT infrastructure, middleware, and messaging technologies, with over 25 years' experience architecting systems and solutions. His areas of expertise encompass implementing middleware-centric architectures and the underlying infrastructure monitoring that is fundamental to their optimal performance; tools and technologies for monitoring and managing integrated application processes and performance across the enterprise; and helping companies achieve business agility by effectively aligning IT with business processes in the real-time enterprise.

A successful entrepreneur, David founded Nastel in 1994 and also served as the company's CTO for many years. Prior to that, he was the technical manager of the messaging group at NYNEX, where he architected and managed the implementation of the first commercial transactional messaging product, which now forms the basis for IBM WebSphere MQ (formerly MQSeries). A pioneer in the early evolution of messaging technologies, David logged many years as an IT consultant working with some of the world's foremost banks and financial institutions.
