
IBM Cloud: Article

WebSphere Performance Diagnostics - Going Beyond the Metrics


J2EE has arrived and is gaining strength and popularity every day. It does an excellent job of solving enterprise computing problems, supporting legacy applications and interfaces, multiple operating systems, distributed and clustered environments, and high-volume mission-critical applications with security and managed operations.

Regardless of the type of Web application model being adopted, there are some issues that need careful consideration during the design and implementation of an e-business site. These include the reliability, security, capacity, scalability, and cost of the system and network. E-business activities and Web services are essentially real-time processes in which performance and availability problems have a high cost. Frustrated users can translate into lost customers and lost revenue. Set performance, availability, and security goals very early in a project because they are closely related to the design of the application.

Successful performance management helps you detect and correct performance problems. In defining your performance assurance solution, consider the following tenets:

  • If you don't measure performance, you won't detect that you have problems.
  • If you don't know where to measure performance, you can't diagnose problems.
  • If you don't know how to find the root cause of a problem, you can't fix problems.

    Success can hinge on the ability to detect, diagnose, and resolve application problems quickly. This can be a particularly challenging proposition if you're taking advantage of component-based J2EE applications. Due to their increased complexity, these applications are much more difficult to diagnose than earlier application environments. The dynamic nature of these applications, and the underlying infrastructure, creates a need for advanced management software that automatically discovers transactions and transaction components while adapting to changes without intervention.

    Quality of Service - the J2EE Challenge
    Speed, around-the-clock availability, and security are the most common indicators of the quality of service of your Web site. E-business sites based on J2EE technology are complex, with multiple interconnected layers of software and hardware components. The nature of e-business workload is also complex, due to its transactional nature, security requirements, and the unpredictable characteristics of service requests from end users. Therefore, planning for capacity and the performance aspects of your application is a vital step in achieving performance assurance.

    In a multitiered distributed application architecture, application logic is divided into components according to function. Which tier a J2EE application component ultimately resides on is contingent on the component's function as it relates to the J2EE environment. The J2EE environment has three main tiers: a client tier, a middle tier, and an enterprise information systems (EIS) or back-end tier.

    The bulk of the J2EE engine resides on the environment's middle tier. The middle tier provides client services through Web containers, which house such J2EE components as servlets and JavaServer Pages (JSPs), and business-logic services through Enterprise JavaBean (EJB) containers, which house EJB components or enterprise beans.

    Adopters of the J2EE architecture want to know quantitative, credible answers to many questions. These include:

  • How well do my application servers manage scale-up in client connectivity, server processor count, load balancing across multiple nodes, size and duration of transient state, transaction complexity, etc.?
  • How should a system manager determine the size of systems for the J2EE platform and EJB architecture?
  • Given method calls between beans, how does performance vary as a function of location - for example, within the same EJB server/container, across different containers inside the same EJB server, or across different EJB servers?
  • What is the performance difference between session and entity beans (assuming that design patterns permit a choice of either)?

    In addition to the J2EE stack, you also want to understand its relation to the other distributed hardware and software components such as firewalls, Web servers, application servers, and databases.

    Business Solution Requirements
    Running a successful e-business requires an integrated performance management solution that spans applications, systems, services, and the network. Your diagnostic solution must provide insight into the execution paths of logical transactions. This requires the ability to map a transaction by its constituent elements - such as networks, systems, Web servers, application servers, and application components. Also required is the ability to collect timing on each element and to recognize and call out unusual events.

  • End-user response time: This is the actual time it takes for a customer to complete an application transaction. Measuring online customer response time and application availability provides the first step toward controlling performance risks. By documenting compliance with service-level agreements and quality of service, businesses can increase customer retention, build customer loyalty, and increase upgrade and cross-selling opportunities.

  • Multiple layers of e-business services: The holistic end-to-end view of your e-business system is provided by aggregating and correlating the performance and state of the network, systems, and application elements involved in delivering a business service. The interaction between these layers produces the customer's online performance experience. Performance data must be collected and analyzed to establish baseline performance and help IT management size the environment. Real-time data collection and analysis allow you to respond to and correct potentially devastating problems. This improved control translates into higher-performance services that can mean higher product revenues.

  • Proactive management: Identifying and resolving performance problems before the customer experiences them is the key to performance management. Proactive performance management allows administrators to identify performance slowdowns and correct problems before customer service disruptions occur.

  • Flexible reporting and collaboration: The testing and deployment of J2EE-based systems requires the involvement of cross-functional teams such as developers, network engineers, QA engineers, system administrators, and DBAs. The need to share objective data and collaborate with different teams helps to frame system performance in the context of the distributed environment. Also, you need to share analyzed data in a report format with management on a regular basis. For example, a technical administrator needs detailed technical data for troubleshooting, an IT manager needs trend information for capacity planning, and a line-of-business manager needs service performance summary reports for sales analysis. Flexible reporting makes it easier to share knowledge throughout the organization so a variety of personnel can make business decisions and uncover additional selling opportunities based on service performance.
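    The end-user response-time requirement above can be sketched as a tiny monitor that records per-transaction timings and flags service-level breaches. This is a minimal illustration only - the SlaMonitor class and its methods are hypothetical names, not part of any monitoring product:

```java
import java.util.HashMap;
import java.util.Map;

public class SlaMonitor {
    private final long thresholdMillis;
    private final Map<String, Long> worstCase = new HashMap<>();

    public SlaMonitor(long thresholdMillis) {
        this.thresholdMillis = thresholdMillis;
    }

    // Record one completed transaction; returns true if it breached the SLA.
    public boolean record(String transaction, long elapsedMillis) {
        worstCase.merge(transaction, elapsedMillis, Math::max);
        return elapsedMillis > thresholdMillis;
    }

    // Worst observed time for a transaction (0 if never seen).
    public long worst(String transaction) {
        return worstCase.getOrDefault(transaction, 0L);
    }
}
```

    A real solution would feed this from instrumentation at the Web tier and report breaches alongside the baseline and trend data described above.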

    The Need for a New Type of Tool
    Since these Web applications are customer facing, managing their performance is critical. However, available tools for pinpointing and diagnosing performance problems in the J2EE environment have been limited.

    In the development environment, Java profilers have existed for many years. These tools, such as Quest (Sitraka) JProbe, are excellent for improving code paths or finding memory leaks in the development environment.

    Once an application is in a test or production environment where significant load is applied, profilers are no longer effective. Therefore, many load-related performance issues, such as application scalability, poor tuning, or configuration problems, are difficult to diagnose and resolve.

    Traditional solutions for analyzing Web application performance under load have two main challenges:

  • Real-Time Monitors Only: Traditional IT management solutions are designed and built to manage and monitor real-time server availability. These systems provide relevant metrics on the state of the particular server being monitored. What they lack is a view into application response-time problems from an end-user perspective.

  • Silo tools: Typical IT organizations have a collection of tools that are very "device-centric," such as network monitors, database tools, and even application server monitors. While these silo tools provide extensive details about each server, they do not look at the entire end-to-end service request or business process. Without this holistic view of an end-to-end transaction, it becomes nearly impossible to solve performance problems before the end-user experience is impacted.

    Regardless of the granularity that these tools may offer, management of these increasingly complex n-tier infrastructures is typically applied in a disparate, silo approach, with multiple departments owning the operations responsibilities for various layers. This often results in reactive finger-pointing and departmental blame when performance issues arise. What is required is a solution to measure end-to-end transaction response times with visibility into all layers of the J2EE stack, including the application servers. But metrics and data are not enough. The need to correlate this information in the context of the application and its infrastructure is the key to isolating and diagnosing the root cause of the problem in the context of the end-user experience. Performance tools that provide this type of information are emerging, such as Quest PerformaSure and Wily Introscope.

    In this article we will illustrate the diagnosis of a typical WebSphere-based J2EE application from an end-user transaction-centric perspective using PerformaSure. With the ability to capture transaction metrics and correlate them with metrics from system-level, network, database, and WebSphere application servers, we will be able to efficiently get to and repair the root causes of our performance problems.

    The Power of End-to-End Transaction Analysis
    From the customer's perspective, your Web site is a black box. Customers don't know (or care) how many servers are involved, where they are, what hardware they employ, or which application server they use. They care only about how fast your Web page appears. Monitoring the end-user view tells you whether you have a publicly visible performance problem. Once you recognize that you have a performance problem, the challenge is to isolate the problem and be able to drill down to resolve it.

    Following are two real-world examples of how you can diagnose J2EE performance problems by taking a transaction-centric view of the system.

    Improving a Slow Transaction
    In this particular example we examine the Sun PetStore reference application under a load test with the configuration shown in Figure 1.

    First, identify slow transactions. Our first objective is to see how the system behaved over the length of the test run. By looking at the load driver response-time metrics, as well as a set of indicator metrics, we get an idea of how our application performs.

    As you can see in Figure 2, response time for the application is extremely slow throughout the entire run and gets progressively worse. To begin improving performance, the next step is to find which components within the system are slow.

    Next, identify in which tier the problem transaction is spending its time. Once a problematic transaction has been located, we can identify which server(s) are responsible for the unacceptable response times. By breaking request response times down into their constituent server response times, a single performance team member can immediately identify which server to target to further their investigation into the root cause of the problem. For example, perhaps the customer login request is slow. But is it the Web server or one of the application servers? Or is it the database server?

    In Figure 3, we see that the commitorder request is the most expensive transaction with respect to time. In fact, we can easily see how much time this request spends in each tier; in this case, 54.2% of its time is spent in JDBC calls. This warrants further investigation in order to improve this number.
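    The per-tier breakdown behind a number like 54.2% is simple percentage arithmetic over tier timings. As an illustrative sketch (the TierBreakdown name, and the specific millisecond values in the usage note, are assumptions, not figures from the actual test run):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TierBreakdown {
    // Convert per-tier elapsed times into percentage shares of the total.
    public static Map<String, Double> percentages(Map<String, Long> tierMillis) {
        long total = tierMillis.values().stream().mapToLong(Long::longValue).sum();
        Map<String, Double> shares = new LinkedHashMap<>();
        tierMillis.forEach((tier, t) -> shares.put(tier, 100.0 * t / total));
        return shares;
    }
}
```

    For example, hypothetical timings of 100 ms in the Web tier, 358 ms in the application tier, and 542 ms in JDBC out of a 1,000 ms round trip would put the JDBC share at 54.2%.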

    Root Cause
    Next, break the transaction into its logical Java components. By focusing on the problem transaction, we can break the transaction into its logical and physical Java components that make up the request. A powerful visual map of the transaction's round-trip execution with timing information and a critical performance path aids in this process. Color-coded highlighting helps us narrow down the search to an offending component.

    Even without any prior application knowledge, you can easily identify suspect components, right down to the method level, for further investigation by the appropriate functional expert. Which components and methods are slowest for the application's business-critical requests? Which methods or components are being called excessively? Are the EJBs interacting efficiently with the database, and using properly tuned SQL queries?

    With the ability to view the component taking the greatest amount of time in the call graph, we can quickly focus our attention on that particular component. In this example, we know that the commitorder request was spending just over 54% of its time in a JDBC call. Figure 4 shows how we can use a transaction map to zoom in and narrow down to the specific area of code, in this case a JDBC query that's taking up this time.

    By taking a closer look at the "hot" components, we see that both the parent getItem() and the JDBC executeQuery() call counts are low (7), but the ResultSet.getString() call count is high (672). ResultSet.getString() is called approximately 96 times per call to getItem(). This is suspiciously high and results in the lengthy response time for this transaction. Now that we are able to isolate a single class as the root cause, a possible corrective measure would be to examine the method in question and optimize the SQL statement.
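    To make the call-count pattern concrete, here is an illustrative reconstruction - not the actual PetStore source - of how re-reading a column inside an inner loop inflates ResultSet.getString() counts, together with the usual fix of reading each column once per row. The FakeResultSet stand-in simply counts driver calls:

```java
import java.util.List;
import java.util.Map;

public class GetStringCounts {
    // Minimal stand-in for a JDBC result set; it just counts getString() calls.
    public static class FakeResultSet {
        private final List<Map<String, String>> rows;
        private int cursor = -1;
        public int getStringCalls = 0;

        public FakeResultSet(List<Map<String, String>> rows) { this.rows = rows; }
        public boolean next() { return ++cursor < rows.size(); }
        public String getString(String col) {
            getStringCalls++;
            return rows.get(cursor).get(col);
        }
    }

    // Anti-pattern: re-reading the same column on every pass of an inner loop,
    // so the driver is called (rows x passes) times.
    public static int describeSlow(FakeResultSet rs, int passes) {
        int work = 0;
        while (rs.next()) {
            for (int i = 0; i < passes; i++) {
                work += rs.getString("descn").length(); // one driver call per pass
            }
        }
        return work;
    }

    // Fix: read each column once per row, then reuse the local variable.
    public static int describeFast(FakeResultSet rs, int passes) {
        int work = 0;
        while (rs.next()) {
            String descn = rs.getString("descn"); // one driver call per row
            for (int i = 0; i < passes; i++) {
                work += descn.length();
            }
        }
        return work;
    }
}
```

    With 7 rows and 96 passes per row, the slow version issues 672 getString() calls while the fixed version issues only 7 - the same 96:1 ratio observed in the transaction map.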

    In this simplified example of focusing on the end-user transaction for performance root-cause analysis, we can see the benefits of tracing a request through a distributed J2EE system and providing some visibility into what was once a black box to engineers.

    Analyzing Intermittent Performance Problems
    Another common pattern is performance problems that occur partway into a test run, or after a server has been running in production for some period of time. Such problems are less likely to be solved by simple code-path improvements. A more likely cause is resource exhaustion somewhere in the system.

    In addition to simply isolating application response times, this type of problem often requires correlating metric data from the application server or other server components. Let's look at an example, again using PetStore, of solving a performance problem that does not appear until later in a test run.

    Identifying Problematic Time Periods for Key Metrics
    Again, our first objective is to try to get a picture of how the system behaved over the length of the test run. By observing the critical thresholds, we can quickly isolate the point at which the response time for our transactions started to increase.

    Correlating other Metrics
    By viewing individual metrics in relation to time and to each other, we can start to isolate the root cause of our problem. As shown in Figure 5, we observe that about one-quarter of the way through our session the response time starts to increase, as does disk-write activity on our WebSphere server. During normal operation, WebSphere Application Server should not be writing to disk, because disk writes are time consuming and should not be necessary. This warrants further investigation.

    We can see that the response time of transactions suddenly increases, which would indicate that this is perhaps not an algorithmic problem but one of resource contention. There is a correlation with the disk-write activity that exceeded our thresholds, so there is something of a cause-and-effect relationship. Looking a bit more closely at passivation in WAS, we see an immediate increase in passivates at the same time we're running into response-time increases - which explains the increase in disk-write activity. To help confirm the impact on WAS, we see the throughput of the passivation request increase 100-fold within the same time slice.

    Remember, stateful session EJBs are maintained (in a cache or on disk) by WebSphere. Passivation is the process by which WebSphere Application Server removes an EJB from the cache while preserving the EJB's state on disk. Once the cache limit is reached, WAS has to passivate EJBs to disk. Excessive EJB passivation can significantly slow down an otherwise well-performing system.

    Now that we have isolated the problem to excessive EJB passivation, we should examine the application to ensure that the best practice of removing stateful session beans is being followed. We might also find it necessary to increase the cache size.
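    The cache dynamics behind this behavior can be modeled with a toy least-recently-used cache. This is not WebSphere's internal implementation - just a sketch of why a cache limit that is small relative to the number of live stateful beans forces a passivation on nearly every new activation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BeanCache {
    private int passivations = 0;
    private final Map<String, Object> cache;

    public BeanCache(final int maxBeans) {
        // An access-ordered LinkedHashMap gives least-recently-used eviction.
        cache = new LinkedHashMap<String, Object>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Object> eldest) {
                if (size() > maxBeans) {
                    passivations++; // a real container would write the bean's state to disk here
                    return true;
                }
                return false;
            }
        };
    }

    public void activate(String beanId, Object state) {
        cache.put(beanId, state);
    }

    public int passivations() {
        return passivations;
    }
}
```

    Removing stateful session beans when the client is finished with them keeps the number of live beans below the cache limit, so the container never has to spill state to disk - which is exactly the best practice at issue here.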

    In this article, we discussed the importance of establishing and monitoring performance goals. In order to meet your performance objectives, you need to be able to detect, diagnose, and resolve performance problems. Focusing on end-user transactions helped us put various key system and application metrics into the context of the user's experience with the application to help analyze and solve some WebSphere performance problems.

    More Stories By Ruth Willenborg

    Ruth Willenborg is a senior technical staff member in IBM's WebSphere Technology Institute working on virtualization. Prior to this assignment, Ruth was manager of the WebSphere performance team responsible for WebSphere Application Server performance analysis, performance benchmarking, and performance tool development. Ruth has over 20 years of experience in software development at IBM. She is co-author of Performance Analysis for Java Web Sites (Addison-Wesley, 2002).

    More Stories By Rini Gahir

    Rini Gahir is the product marketing manager for J2EE Solutions at Quest Software. With more than eight years of experience in ERP, e-commerce, and software application marketing, Rini's responsibilities include identifying market needs, defining new technologies to meet customer requirements, and steering product development strategies.


    Most Recent Comments
    Ruth Willenborg 01/08/03 05:44:00 PM EST

    You can find some great resources to help you at: http://www-3.ibm.com/software/webservers/appserv/performance.html.

    This web site includes a 4.0 Tuning Methodology Paper as well as a link into the 4.0 Info Center Tuning Guide. Be sure to check out the "big hitters" in the Tuning Guide.

    As a general rule, the most significant performance improvements actually come from improvements to your application. The same web site includes a best practices paper for your application. Tools, such as PerformaSure (used in this article) can help highlight areas in your application to focus on, and tools like JProbe can help you dig even deeper.

    If you are new to performance testing and analysis, I have co-authored a book, Performance Analysis for Java Web Sites, which among many other things, helps you through the testing/tuning process and the key metrics to look at.

    IBM also has a Performance Review and Tuning service; see: http://www-3.ibm.com/software/ad/vaws-services/websphere.html.

    There is also a class offered through IBM.

    Hope your tuning efforts are highly successful!

    jagadeesh 01/08/03 03:12:00 AM EST

    Hi, I need some info on tuning IBM HTTP Server and WAS 4 for better performance and for image loading
