WebSphere Performance Diagnostics - Going Beyond the Metrics

J2EE has arrived, and is gaining strength and popularity every day. J2EE does an excellent job of solving enterprise computing problems. It supports legacy applications and interfaces, multiple operating systems, distributed and clustered environments, and high-volume, mission-critical applications, with support for security and managed operations.

Regardless of the type of Web application model being adopted, there are some issues that need careful consideration during the design and implementation of an e-business site. These include the reliability, security, capacity, scalability, and cost of the system and network. E-business activities and Web services are essentially real-time processes in which performance and availability problems have a high cost. Frustrated users can translate into lost customers and lost revenue. Set performance, availability, and security goals very early in a project because they are closely related to the design of the application.

Successful performance management helps you detect and correct performance problems. In defining your performance assurance solution, consider the following tenets:

  • If you don't measure performance, you won't detect that you have problems.
  • If you don't know where to measure performance, you can't diagnose problems.
  • If you don't know how to find the root cause of a problem, you can't fix problems.

    Success can hinge on the ability to detect, diagnose, and resolve application problems quickly. This can be a particularly challenging proposition if you're taking advantage of component-based J2EE applications. Due to their increased complexity, these applications are much more difficult to diagnose than applications built on earlier environments. The dynamic nature of these applications, and the underlying infrastructure, creates a need for advanced management software that automatically discovers transactions and transaction components while adapting to changes without intervention.

    Quality of Service - the J2EE Challenge
    Speed, around-the-clock availability, and security are the most common indicators of the quality of service of your Web site. E-business sites based on J2EE technology are complex, with multiple interconnected layers of software and hardware components. The nature of e-business workload is also complex, due to its transactional nature, security requirements, and the unpredictable characteristics of service requests from end users. Therefore, planning for capacity and the performance aspects of your application is a vital step in achieving performance assurance.

    In a multitiered distributed application architecture, application logic is divided into components according to function. Which tier a J2EE application component ultimately resides on is contingent on the component's function as it relates to the J2EE environment. The J2EE environment has three main tiers: a client tier, a middle tier, and an enterprise information systems (EIS) or back-end tier.

    The bulk of the J2EE engine resides on the environment's middle tier. The middle tier provides client services through Web containers, which house such J2EE components as servlets and JavaServer Pages (JSPs), and business-logic services through Enterprise JavaBean (EJB) containers, which house EJB components or enterprise beans.
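    To make the tier split concrete, the sketch below shows a hypothetical servlet in the Web container delegating to a session bean in the EJB container over JNDI. The bean name, interfaces, and business method are illustrative only; they are not part of any product or of the application examined later in this article.

        // Hypothetical OrderServlet: a Web-container component calling an
        // EJB-container component (EJB 2.x style, as used on WebSphere 4.x/5.x).
        import javax.naming.InitialContext;
        import javax.rmi.PortableRemoteObject;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class OrderServlet extends HttpServlet {
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, java.io.IOException {
                try {
                    // Resolve the session bean's home interface through JNDI.
                    InitialContext ctx = new InitialContext();
                    Object ref = ctx.lookup("java:comp/env/ejb/OrderManager");
                    OrderManagerHome home = (OrderManagerHome)
                            PortableRemoteObject.narrow(ref, OrderManagerHome.class);

                    // The business logic runs inside the EJB container's session bean.
                    OrderManager manager = home.create();
                    String status = manager.getOrderStatus(req.getParameter("orderId"));
                    resp.getWriter().println("Order status: " + status);
                } catch (Exception e) {
                    throw new ServletException(e);
                }
            }
        }

    The servlet and JSP work terminates in the Web container, while the create() and getOrderStatus() calls execute in the EJB container; that container boundary is exactly what the diagnostic approaches discussed below need to see across.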

    Adopters of the J2EE architecture want to know quantitative, credible answers to many questions. These include:

  • How well do my application servers manage scale-up in client connectivity, server processor count, load balancing across multiple nodes, size and duration of transient state, transaction complexity, etc.?
  • How should a system manager determine the size of systems for the J2EE platform and EJB architecture?
  • Given method calls between beans, how does performance vary as a function of location? E.g., within the same EJB server/container, across different containers inside the same EJB server, across different EJB servers, etc.
  • What is the performance difference between session and entity beans (assuming that design patterns permit a choice of either)?

    In addition to the J2EE stack, you also want to understand its relation to the other distributed hardware and software components such as firewalls, Web servers, application servers, and databases.

    Business Solution Requirements
    Running a successful e-business requires an integrated performance management solution that spans applications, systems, services, and the network. Your diagnostic solution must provide insight into the execution paths of logical transactions. This requires the ability to map a transaction by its constituent elements - such as networks, systems, Web servers, application servers, and application components. Also required is the ability to collect timing on each element and to recognize and call out unusual events.

  • End-user response time: This is the actual time it takes for a customer to complete an application transaction. Measuring online customer response time and application availability provides the first step toward controlling performance risks. By documenting compliance with service-level agreements and quality of service, businesses can increase customer retention, build customer loyalty, and increase upgrade and cross-selling opportunities. (A minimal timing sketch follows this list.)

  • Multiple layers of e-business services: The holistic end-to-end view of your e-business system is provided by aggregating and correlating the performance and state of the network, systems, and application elements involved in delivering a business service. The interaction between these layers produces the customer's online performance experience. Performance data must be collected and analyzed to establish baseline performance and help IT management size the environment. Real-time data collection and analysis allows response to and correction of potentially devastating problems. This improved control translates into higher performance services that can mean higher product revenues.

  • Proactive management: Identifying and resolving performance problems before the customer experiences them is the key to performance management. Proactive performance management allows administrators to identify performance slowdowns and correct problems before customer service disruptions occur.

  • Flexible reporting and collaboration: The testing and deployment of J2EE-based systems requires the involvement of cross-functional teams such as developers, network engineers, QA engineers, system administrators, and DBAs. The need to share objective data and collaborate with different teams helps to frame system performance in the context of the distributed environment. Also, you need to share analyzed data in a report format with management on a regular basis. For example, a technical administrator needs detailed technical data for troubleshooting, an IT manager needs trend information for capacity planning, and a line-of-business manager needs service performance summary reports for sales analysis. Flexible reporting makes it easier to share knowledge throughout the organization so a variety of personnel can make business decisions and uncover additional selling opportunities based on service performance.
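    As a concrete illustration of the end-user response-time requirement above, the following minimal sketch times a single synthetic transaction from the client side. The URL and the two-second target are placeholders, not values taken from this study.

        // Minimal end-user response-time probe; URL and threshold are illustrative.
        import java.io.InputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class ResponseTimeProbe {
            public static void main(String[] args) throws Exception {
                URL url = new URL("http://shop.example.com/estore/control/commitorder");
                long start = System.currentTimeMillis();

                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                InputStream in = conn.getInputStream();
                byte[] buf = new byte[4096];
                while (in.read(buf) != -1) {
                    // Drain the body so the measurement covers the whole download.
                }
                in.close();

                long elapsed = System.currentTimeMillis() - start;
                System.out.println("Response time: " + elapsed + " ms");
                if (elapsed > 2000) {
                    System.out.println("Exceeds the assumed 2-second service-level target.");
                }
            }
        }

    A real deployment would run such probes continuously from several locations and feed the results into the reporting described above; the point here is simply that end-user response time is measured outside the server, at the point where the customer experiences it.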

    The Need for a New Type of Tool
    Since these Web applications are customer facing, managing their performance is critical. However, available tools for pinpointing and diagnosing performance problems in the J2EE environment have been limited.

    In the development environment, Java profilers have existed for many years. These tools, such as Quest (Sitraka) JProbe, are excellent for improving code paths or finding memory leaks in the development environment.

    Once an application is in a test or production environment where significant load is applied, profilers are no longer effective. Therefore, many load-related performance issues, such as poor application scalability, poor tuning, or configuration problems, are difficult to diagnose and resolve.

    Traditional solutions for analyzing Web application performance under load have two main challenges:

  • Real-Time Monitors Only: Traditional IT management solutions are designed and built to manage and monitor real-time server availability. These systems provide relevant metrics on the state of the particular server being monitored. What they lack is a view into application response-time problems from an end-user perspective.

  • Silo tools: Typical IT organizations have a collection of tools that are very "device-centric," such as network monitors, database tools, and even application server monitors. While these silo tools provide extensive details about each server, they do not look at the entire end-to-end service request or business process. Without this holistic view of an end-to-end transaction, it becomes nearly impossible to solve performance problems before the end-user experience is impacted.

    Regardless of the granularity that these tools may offer, management of these increasingly complex n-tier infrastructures is typically applied in a disparate, silo approach, with multiple departments owning the operations responsibilities for various layers. This often results in reactive finger-pointing and departmental blame when performance issues arise. What is required is a solution to measure end-to-end transaction response times with visibility into all layers of the J2EE stack, including the application servers. But metrics and data are not enough. Correlating this information in the context of the application and its infrastructure is the key to isolating and diagnosing the root cause of a problem as the end user experiences it. Performance tools that provide this type of information are emerging, such as Quest PerformaSure and Wily Introscope.

    In this article we will illustrate the diagnosis of a typical WebSphere-based J2EE application from an end-user transaction-centric perspective using PerformaSure. With the ability to capture transaction metrics and correlate them with system, network, database, and WebSphere Application Server metrics, we will be able to efficiently get to and repair the root causes of our performance problems.

    The Power of End-to-End Transaction Analysis
    From the customer's perspective, your Web site is a black box. Customers don't know (or care) how many servers are involved, where they are, what hardware they employ, or which application server they use. They care only about how fast your Web page appears. Monitoring the end-user view tells you whether you have a publicly visible performance problem. Once you recognize that you have a performance problem, the challenge is to isolate the problem and be able to drill down to resolve it.

    Following are two real-world examples of how you can diagnose J2EE performance problems by taking a transaction-centric view of the system.

    Improving a Slow Transaction
    In this particular example we examine the Sun PetStore reference application under a load test with the configuration shown in Figure 1.

    Detection
    First, identify slow transactions. Our first objective is to see how the system behaved over the length of the test run. By looking at the load driver response-time metrics, as well as a set of indicative metrics, we get an idea of how our application performs.

    As you can see in Figure 2, response time for the application is extremely slow throughout the entire run and gets progressively worse. In order to begin improving performance, the next step requires finding what components within the system are slow.

    Diagnosis
    Next, identify in which tier the problem transaction is spending its time. Once a problematic transaction has been located, we can identify which server(s) are responsible for the unacceptable response times. By breaking request response times down into their constituent server response times, a single performance team member can immediately identify which server to target to further their investigation into the root cause of the problem. For example, perhaps the customer login request is slow. But is it the Web server or one of the application servers? Or is it the database server?

    In Figure 3, we see that the commitorder request is the most expensive transaction with respect to time. In fact, we can easily see how much time this request spends in each tier; in this case, 54.2% of its time is spent in JDBC calls. This warrants further investigation in order to improve this number.

    Root Cause
    Next, break the transaction into its logical Java components. By focusing on the problem transaction, we can break the transaction into its logical and physical Java components that make up the request. A powerful visual map of the transaction's round-trip execution with timing information and a critical performance path aids in this process. Color-coded highlighting helps us narrow down the search to an offending component.

    Even without any prior application knowledge, you can easily identify suspect components, right down to the method level, for further investigation by the appropriate functional expert. Which components and methods are slowest for the application's business-critical requests? Which methods or components are being called excessively? Are the EJBs interacting efficiently with the database, and using properly tuned SQL queries?

    With the ability to view the component taking the greatest amount of time in the call graph, we can quickly focus our attention on that particular component. In this example, we know that the commitorder request was spending just over 54% of its time in a JDBC call. Figure 4 shows how we can use a transaction map to zoom in and narrow down to the specific area of code, in this case a JDBC query that's taking up this time.

    By taking a closer look at the "hot" components, we see that both the parent getItem() and the JDBC executeQuery() call counts are low (7), but the ResultSet.getString() call count is high (672). ResultSet.getString() is called approximately 96 times per call to getItem(). This is suspiciously high and results in the lengthy response time for this transaction. Now that we are able to isolate a single class as the root cause, a possible corrective measure would be to examine the method in question and optimize the SQL statement.
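    The PetStore source itself is not reproduced here, but the following sketch reconstructs the kind of data-access pattern that produces numbers like these, along with one way to tighten it. The table and column names are illustrative.

        // Illustrative reconstruction, not the actual PetStore code.
        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.ResultSetMetaData;
        import java.sql.SQLException;
        import java.util.ArrayList;
        import java.util.List;

        public class ItemDAO {

            // Suspect pattern: SELECT * over many rows, copying back every column,
            // so each getItem() call fans out into dozens of getString() calls.
            public List getItem(Connection conn, String category) throws SQLException {
                List rows = new ArrayList();
                PreparedStatement ps =
                        conn.prepareStatement("SELECT * FROM item WHERE category = ?");
                ps.setString(1, category);
                ResultSet rs = ps.executeQuery();
                try {
                    ResultSetMetaData md = rs.getMetaData();
                    while (rs.next()) {
                        String[] values = new String[md.getColumnCount()];
                        for (int i = 1; i <= values.length; i++) {
                            values[i - 1] = rs.getString(i); // rows x columns calls
                        }
                        rows.add(values);
                    }
                    return rows;
                } finally {
                    rs.close();
                    ps.close();
                }
            }

            // Tuned pattern: ask the database only for the column the page needs.
            public String getListPrice(Connection conn, String itemId) throws SQLException {
                PreparedStatement ps = conn.prepareStatement(
                        "SELECT listprice FROM item WHERE itemid = ?");
                ps.setString(1, itemId);
                ResultSet rs = ps.executeQuery();
                try {
                    return rs.next() ? rs.getString("listprice") : null;
                } finally {
                    rs.close();
                    ps.close();
                }
            }
        }

    The point is not the specific query but that narrowing the result set (or caching values once per request) cuts the per-transaction JDBC work that the transaction map exposed.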

    In this simplified example of focusing on the end-user transaction for performance root-cause analysis, we can see the benefits of tracing a request through a distributed J2EE system: it provides visibility into what was once a black box to engineers.

    Analyzing Intermittent Performance Problems
    Another common pattern is performance problems that occur partway into a test run, or after a server has been running in production for some period of time. Such problems are less likely to be solved by simple code-path improvements. A more likely cause is resource exhaustion somewhere in the system.

    In addition to simply isolating application response times, this type of problem often requires correlating metric data from the application server or other server components. Let's look at an example, again using PetStore, of solving a performance problem that does not appear until later in a test run.

    Identifying Problematic Time Periods for Key Metrics
    Again, our first objective is to try to get a picture of how the system behaved over the length of the test run. By observing the critical thresholds, we can quickly isolate the point at which the response time for our transactions started to increase.

    Correlating other Metrics
    By viewing individual metrics in relation to time and to each other, we can start to isolate the root cause of our problem. As shown in Figure 5, we observe that about one-quarter of the way through our session the response time starts to increase, as does disk-write activity on our WebSphere server. During normal operation, WebSphere Application Server should not be writing to disk, because disk I/O is time consuming and should not be necessary. This warrants further investigation.

    We can see that the response time of transactions suddenly increases, which suggests that this is not an algorithmic problem but one of resource contention. The increase correlates with the disk-write activity that exceeded our thresholds, pointing to a cause-and-effect relationship. Looking a bit more closely at passivation in WAS, we see an immediate increase in passivates at the same time we're running into response-time increases - which explains the increase in disk-write activity. To help confirm the impact on WAS, we see the throughput of the passivation request increase a hundredfold within the same time slice.
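    PerformaSure presents this correlation visually. As a rough back-of-the-envelope check of the same idea, one could compute a correlation coefficient over the sampled metric intervals; the sample values below are made up for illustration, not taken from the run described here.

        // Simple Pearson correlation between two sampled metrics.
        public class MetricCorrelation {
            public static double pearson(double[] x, double[] y) {
                int n = x.length;
                double sx = 0, sy = 0, sxy = 0, sxx = 0, syy = 0;
                for (int i = 0; i < n; i++) {
                    sx += x[i];
                    sy += y[i];
                    sxy += x[i] * y[i];
                    sxx += x[i] * x[i];
                    syy += y[i] * y[i];
                }
                double num = n * sxy - sx * sy;
                double den = Math.sqrt((n * sxx - sx * sx) * (n * syy - sy * sy));
                return den == 0 ? 0 : num / den;
            }

            public static void main(String[] args) {
                // Hypothetical per-interval samples: response time vs. disk writes.
                double[] responseTimeMs   = {180, 190, 210, 850, 1900, 2400, 2600};
                double[] diskWritesPerSec = {  2,   3,   2,  40,   95,  120,  130};
                System.out.println("correlation = "
                        + pearson(responseTimeMs, diskWritesPerSec));
            }
        }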

    Remember, stateful session EJBs are maintained (in a cache or on disk) by WebSphere. Passivation is the process by which WebSphere Application Server removes an EJB from the cache while preserving the EJB's state on disk. Once the cache size is reached, WAS has to passivate EJBs to disk. Excessive EJB passivation can significantly slow down an otherwise well-performing system.

    Now that we have isolated the problem to excessive EJB passivation, we should examine the application to ensure that the best practice of removing stateful session beans is being followed. We might also find it necessary to increase the cache size.
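    A minimal sketch of that best practice follows, with hypothetical bean and interface names: the client explicitly removes the stateful session bean once the unit of work is complete, so idle instances do not sit in the EJB cache waiting to be passivated. (The cache size itself is adjusted in the WebSphere EJB container settings.)

        // Hypothetical ShoppingCart stateful session bean client (EJB 2.x style).
        import javax.ejb.RemoveException;
        import javax.naming.InitialContext;
        import javax.rmi.PortableRemoteObject;

        public class CartClient {
            public void placeOrder(String userId) throws Exception {
                InitialContext ctx = new InitialContext();
                Object ref = ctx.lookup("java:comp/env/ejb/ShoppingCart");
                ShoppingCartHome home = (ShoppingCartHome)
                        PortableRemoteObject.narrow(ref, ShoppingCartHome.class);

                ShoppingCart cart = home.create(userId); // stateful: one instance per client
                try {
                    cart.addItem("EST-27");
                    cart.commitOrder();
                } finally {
                    try {
                        // Release the conversational state instead of leaving the
                        // instance in the cache to be passivated later.
                        cart.remove();
                    } catch (RemoveException e) {
                        // Instance already removed or timed out; nothing more to do.
                    }
                }
            }
        }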

    Summary
    In this article, we discussed the importance of establishing and monitoring performance goals. In order to meet your performance objectives, you need to be able to detect, diagnose, and resolve performance problems. Focusing on end-user transactions helped us put various key system and application metrics into the context of the user's experience with the application to help analyze and solve some WebSphere performance problems.

    More Stories By Ruth Willenborg

    Ruth Willenborg is a senior technical staff member in IBM's WebSphere Technology Institute working on virtualization. Prior to this assignment, Ruth was manager of the WebSphere performance team responsible for WebSphere Application Server performance analysis, performance benchmarking, and performance tool development. Ruth has over 20 years of experience in software development at IBM. She is co-author of Performance Analysis for Java Web Sites (Addison-Wesley, 2002).

    More Stories By Rini Gahir

    Rini Gahir is the product marketing manager for J2EE Solutions at Quest Software. With more than eight years of experience in ERP, e-commerce, and software application marketing, Rini's responsibilities include identifying market needs, defining new technologies to meet customer requirements, and steering product development strategies.

    Most Recent Comments
    Ruth Willenborg 01/08/03 05:44:00 PM EST

    You can find some great resources to help you at: http://www-3.ibm.com/software/webservers/appserv/performance.html.

    This web site includes a 4.0 Tuning Methodology Paper as well as a link into the 4.0 Info Center Tuning Guide. Be sure to check out the "big hitters" in the Tuning Guide.

    As a general rule, the most significant performance improvements actually come from improvements to your application. The same web site includes a best practices paper for your application. Tools such as PerformaSure (used in this article) can help highlight areas in your application to focus on, and tools like JProbe can help you dig even deeper.

    If you are new to performance testing and analysis, I have co-authored a book, Performance Analysis for Java Web Sites, which, among many other things, helps you through the testing/tuning process and the key metrics to look at.

    IBM also has a Performance Review and Tuning service; see: http://www-3.ibm.com/software/ad/vaws-services/websphere.html.

    There is also a class offered through IBM.

    Hope your tuning efforts are highly successful!

    jagadeesh 01/08/03 03:12:00 AM EST

    Hi, I need some info on tuning IBM HTTP Server and WAS 4 for better performance and for image loading.
