
Building High-Availability Web Applications - Managing HTTP sessions within clustered WebSphere 5 environments


To provide the best performance and availability for WebSphere applications, administrators and developers count on scalability features in the software, hardware, and networking components that host their WebSphere domain. More than ever, the availability of Web applications can impact critical business processes.

Recognizing this, WebSphere technologists are implementing high-availability (HA) clusters using WebSphere Application Server Network Deployment Edition v5, hereafter referred to as ND. Along with HA functionality found in databases, Web servers, and operating systems, ND can be implemented to deliver the benefits of HA architecture: increased performance, failover, recovery, and workload management.

Let's now consider the Java applications hosted within WebSphere. Are they affected when the application servers that contain them are cloned? In the development life cycle, did you consider that your code might be deployed into a clustered WebSphere topology? For the servlet and JSP developer, this means that requests to servlets or JSPs might not be serviced by the same application server, even though the requests might all be for the same session.

In this article, I will examine HTTP session management strategies in clustered WebSphere environments from the perspectives of the administrator and the developer. I will make the case for implementing ND's framework for session management as an alternative to client-tier sessions, describing the tracking and persistence functionality that ND delivers.

Client-Tier Session Management
HTTP is inherently stateless; this limitation must be considered when designing an HA solution. One way developers can work around it is by storing the session at the client (browser) and delivering it as part of each server request.

There are several ways developers can store the session data at the client, including URL rewriting, cookies, and hidden HTML form elements. These are not to be confused with ND's session tracking mechanisms. The difference is that ND uses URL rewrites or cookies only to track the session ID. Storing the session in the client means sending all or most of the session data back and forth between the client and the server clone. Here are the major problems found with client-stored session data:

  • Network traffic: In each request to the server, a potentially large amount of session data will be along for the ride. Depending on the implementation, the server might also respond with the entire session. Another drawback is that usually most of the data is unchanged and is unnecessarily exchanged.
  • Varying browser support: When using URL rewriting or cookies to transport an entire session, you might run into size limitations at the client. There aren't standards that specify how long a URL can be or how much data can be stored in a cookie. Each browser enforces different rules.
  • Security: A rendered HTML form (and any session data in its hidden fields) can be saved wherever the browser runs. Cookie files can be edited, and URLs can be altered and pasted into another browser. While it's possible to add encryption, this means added complexity and performance overhead.

Since the entire session is part of each HTTP conversation, there is no need for stateful interaction with server clones. In this scenario, it doesn't matter which server clone services the request, because the session data is contained in the request itself, as the sketch below illustrates.
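
Here is a minimal sketch of the hidden-field approach (the form action, parameter name, and the encodeCart()/decodeCart() helpers are hypothetical; real code would also need to protect the data from tampering):

// Rendering the response: write the entire cart into a hidden form field
String encodedCart = encodeCart(cart); // hypothetical helper that serializes the cart to a string
out.println("<form action=\"/shop/checkout\" method=\"post\">");
out.println("  <input type=\"hidden\" name=\"cartData\" value=\"" + encodedCart + "\"/>");
out.println("  <input type=\"submit\" value=\"Checkout\"/>");
out.println("</form>");

// Servicing the next request: the whole cart arrives again, so any clone can handle it
Cart cart = decodeCart(req.getParameter("cartData")); // hypothetical inverse helper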

ND offers the alternative of server-managed sessions through affinity, tracking, and persistence mechanisms. This lets the developer take advantage of server-tier session management, with the added benefits of the HA infrastructure built into ND clusters. The rest of this article describes ND's support and functionality for server-tier session management.

ND Session Affinity
The conversation between a browser and a server clone can contain many exchanges. Since the browser initiates the conversation, the server clone that first services the request must be linked in some way to this new session so that subsequent requests can also be serviced by the same clone. Take, for example, the online shopping cart. As the user shops, items are added to the virtual shopping cart before checkout. This interaction could involve multiple HTTP exchanges. If items in the shopping cart are kept at the server clone, every one of those HTTP exchanges must be serviced by the same clone.

Uniquely identifying this browser-to-server relationship is what's known as session affinity. ND supports session affinity through the plug-in architecture, which defines the transport parameters between the Web server and the server clones. The plug-in parameters are built by the ND administrative server and stored in the plugin-cfg.xml file. When a server cluster is created, ND assigns each clone a unique identifier. Listing 1 shows what it looks like in the plugin-cfg.xml.
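
Listing 1 (a trimmed, illustrative fragment; the cluster name, hostnames, and CloneID values here are hypothetical, so check the plugin-cfg.xml generated for your own cell):

<ServerCluster Name="ShopCluster" LoadBalance="Round Robin">
   <Server CloneID="v544d031" Name="ShopCluster_member1">
      <Transport Hostname="node1.example.com" Port="9080" Protocol="http"/>
   </Server>
   <Server CloneID="v544d0o0" Name="ShopCluster_member2">
      <Transport Hostname="node2.example.com" Port="9080" Protocol="http"/>
   </Server>
</ServerCluster>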

Since clone IDs are automatically assigned when the cluster is created, session affinity is enabled by default. The process of matching a session to a clone can impact performance, so if you don't need session affinity, disable it by removing the clone IDs from the plugin-cfg.xml. Also note that the plug-in will bypass session affinity if it doesn't detect a session ID as part of the request. I'll discuss later how ND identifies the session and the options we have to track it.

The mechanics of session affinity are pretty straightforward, but it's worth noting that affinity guarantees only that requests for a single session are routed to the same clone. If the clone fails or is shut down, the plug-in will route the request to a different clone. If the alternate clone can't recover the session, then it is lost.

Using our shopping cart example, such a failure will result in an empty cart. To avoid this, ND has session persistence options that make the session recoverable in failover scenarios. But first let's cover session tracking functionality in ND to complete the affinity picture.

ND Session Tracking Mechanisms
In addition to unique clone IDs, establishing affinity requires unique session IDs and an ability to track them across the multiple exchanges of a browser-to-clone conversation. ND offers three options for tracking the session ID: SSL ID tracking, cookies, and URL rewriting. You can enable more than one tracking mechanism, and ND will resolve them in the following order: (1) SSL session IDs, (2) cookies, and (3) URL rewriting. A good reason to enable more than one mechanism is that each has deficiencies. If you decide to track sessions by SSL ID and the Web server fails, ND can fall back on an alternative tracking mechanism. The great part is that this functionality is built into ND, ready to go. Figure 1 shows ND's session tracking mechanisms.


An advantage of SSL tracking is that it doesn't require any code changes. The SSL ID is negotiated between the browser and the Web server, so even an application server failure won't change the session ID. Keep in mind that SSL IDs are affected by the Web server configuration (httpd.conf for Apache-style Web servers) and this may overlap with ND's session management controls. For example, session timeout is set by the SSLV3TIMEOUT parameter in the Web server, and is also set in ND's session management settings. Another possible conflict might occur if your code contains custom session invalidation routines. Be careful to verify each layer (Web server, app server, Web apps) to ensure one doesn't step on another.
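
For example, with IBM HTTP Server the SSL session timeout is set in httpd.conf (the value below is illustrative; consult your Web server's documentation for its default):

# httpd.conf: lifetime, in seconds, of an SSL Version 3 session ID
SSLV3Timeout 1000

This value should be reconciled with the session timeout configured in ND's session management settings so the two layers don't disagree about when a session expires.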

As a tracking mechanism, cookies can be problematic. Many users block cookies out of concern for privacy. Some network administrators filter out cookies at the firewall. Still, beyond any privacy concerns, there are valid technical reasons for avoiding cookies. If cookies are permitted they can easily be manipulated at the client through browser controls or at the file system. Cookie files can be purged, expired, and changed manually. Any of these actions will cause loss of the session ID. If we don't trust cookies to hold session data, then we shouldn't trust cookies to hold the reference (session ID) to the session data either.

URL rewriting is fairly straightforward: the session ID is appended to the end of the URL. The one code requirement is that the servlet or JSP must encode each URL so that ND can properly recognize that a session is attached to it.

The following code snippet shows the URL encoding in a servlet (a minimal sketch; the link target and variable names are illustrative):
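
// res is the HttpServletResponse passed to doGet()/doPost(); the path is hypothetical
String url = res.encodeURL("/shop/cart"); // appends the session ID when URL rewriting is in effect
out.println("<a href=\"" + url + "\">View cart</a>");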


The following code snippet shows the URL encoding in the JSP:

<%= response.encodeURL(request.getRequestURI())%>

A big advantage of URL rewriting is that it works everywhere. It also supports concurrent sessions for a single user, though this can be troublesome if the user pastes the URL of an active session into a new browser instance: the server wouldn't know which browser instance was active. From a code perspective, especially for JSPs, URL rewriting can be awkward. Any page link in a JSP that needs stateful session information must have its URL rewritten, as sketched below. Static HTML pages are off-limits entirely, because there's no way to insert the session ID.
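
In a JSP, every stateful link must pass through encodeURL() (a minimal sketch; the link targets are hypothetical):

<a href="<%= response.encodeURL("cart.jsp") %>">View cart</a>
<a href="<%= response.encodeURL("checkout.jsp") %>">Checkout</a>

Forgetting even one link silently drops the session for any user whose session is tracked only by URL rewriting.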

The IBMSession Interface
It is quite common for Web applications to store session data at the server in a session object without persistence. This nonpersisted, local session exists only in JVM memory; if it is not persisted through ND, it is not made available outside this JVM and doesn't play a role in the HA infrastructure. Be careful not to confuse in-memory local sessions with ND's memory-to-memory replication. In-memory local sessions are limited to a single server (per the Servlet 2.3 specification), and don't offer recovery options after failover.

Servlet developers using local sessions store information through the HttpSession interface. The HttpSession is obtained from the HttpServletRequest and is typically used in the servlet's doGet() and doPost() methods. The HttpSession methods to access and update the session include:

HttpSession hsess = req.getSession();                 // obtain the session, creating one if necessary
Object o = hsess.getAttribute("sessAttributeName");   // read a session attribute
hsess.setAttribute("sessAttributeName", sessObject);  // add or replace a session attribute

The IBMSession interface extends HttpSession to add the following functionality: session overflow control, user identity association, and the ability for the developer to force a session to be persisted externally. While the first two fall under the umbrella of ND's session management offerings, it is the session persistence method that is important in a clustered topology. The developer has complete control over when the session gets persisted by calling the sync() method. This level of control is new in ND; previously the persistent store update was deferred until the end of the time interval specified in the session manager's write frequency settings, as shown in Figure 2.


Developers who store session data at the server can easily implement the IBMSession interface, and won't have much work left to achieve a server-tier session management system that supports failover. An added bonus is that very little, if any, code needs to change to implement persistent sessions with ND.

The WebSphere implementation, through the IBMSession interface, includes methods to access the extended functionality:

IBMSession isess = (IBMSession)req.getSession();       // cast to WebSphere's HttpSession extension
if (isess.isOverflow()) throw new ServletException ... // overflow control: session limit exceeded
String userName = isess.getUserName();                 // user identity associated with the session
isess.setAttribute("sessAttributeName", sessObject);
isess.sync(); // force the session out to the persistent store now

Now we can turn our attention toward what ND offers us for session persistence. We've already seen how the IBMSession interface allows the developer to choose exactly when the persistent store is updated, so let's look at the other way that persistent stores get updated, along with the details about ND's persistence mechanisms.

ND Session Persistence Mechanisms
To briefly review, we've identified the session and the server (IDs), associated the two (affinity), and routed the request (plug-in). As we move on and discuss ND's session persistence functionality, keep in mind that ultimately it is the two persistence options that make failover recovery possible for the session.

Figure 3 shows the distributed environment settings in ND's administrative console. Here you can specify either database or memory-to-memory replication. Memory-to-memory replication is new in ND and uses WebSphere internal messaging.


Before the release of ND, database persistence was the only option available. The concept of persisting session objects to a database is fairly straightforward. Usually, when discussing this option, the first question asked is about performance. The general concern is that writing the objects out to a database is slow to begin with and will collapse under heavy volume. While it's possible to overload even a very highly tuned system with enough volume, the real problem is the size of the session. When selecting database persistence, you must carefully consider page sizes (if using DB2) and whether to use a multirow schema. A single-row schema means one session object per row; a multirow schema means one session attribute per row. The trade-off is between performance and space in the database. The consensus best practice is to keep session objects as small as possible and to analyze the average session size carefully before settling on a database structure.
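
As a conceptual illustration (the layout is simplified, not the actual WebSphere table definition), consider a session S1 holding two attributes, "cart" and "profile":

Single-row schema: one row per session
   S1 | serialized(cart, profile)

Multirow schema: one row per attribute
   S1 | cart    | serialized(cart)
   S1 | profile | serialized(profile)

With a multirow schema, only the attributes that changed need to be written, and no single attribute is constrained by the maximum size of one row; the trade-off is more rows, and therefore more database work, per session.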

Memory-to-memory replication is a new alternative for persistence released in ND. One of the best reasons to consider this option is that it doesn't require a database. At first look, it might not be a big deal to create and manage a simple database for session persistence; however, the database needs to be part of a clustered solution to avoid a single point of failure.

There are three possible topologies: single replica, dedicated replication server, and peer-to-peer (default). The basic idea is that the session objects are "replicated" in memory between server clones, and the different topologies offer varying degrees of recovery. If a clone fails, the next request is routed through the plug-in, which then, depending on topology, can access a replicated copy of the session. The session size is even more important in memory-to-memory replication than in database persistence. A large session object combined with a high number of users can exhaust JVM heap memory quickly. The performance trade-off comes in the number of servers that participate in replication (based on the three available topologies).

Immediately after a session is persisted, it starts growing stale. Any session updates that happen after the last write won't be recovered in a failover scenario. We must decide between the performance trade-off of low update intervals and an acceptable level of session "freshness." Frequent updates (session synchronization) can hurt server performance. The best strategy is to find the right balance that still keeps your data fresh. In addition to the manual control through the IBMSession interface, you can choose a prepackaged tuning level that doesn't require any code change. The tuning choices are shown in Figure 4.


Since performance is an overriding factor in deciding on a persistence method, load testing is important and can help identify any weakness in an HA design before it is implemented. For many developers and administrators it is difficult to stage the entire HA implementation for testing and proof-of-concept purposes. There are some relatively inexpensive options, however, that can help you build a replica of your HA design. The ND environment described in this article was built using VMware's virtual machine software (www.vmware.com), allowing a single workstation to host software components (Web server, database, WebSphere) in separate virtual machines. To test performance and functionality, Apache's JMeter (http://jakarta.apache.org/jmeter/index.html) was used to simulate HTTP requests.

Developers can build use cases and test them against a planned HA configuration. For example, if you plan on implementing memory-to-memory replication using the default peer-to-peer topology, you might be concerned about the replicated sessions exhausting JVM memory. Such virtualized testing can help validate your choice of persistence methods. Developers can also simulate failover by shutting down server clones or even an entire virtual machine to observe recovery behavior.

WebSphere Application Server ND supplies rich session management support in the areas of session tracking and persistence. Since the hardware, network, and software components on the server side of the topology are usually very reliable, it makes good sense to trust application-critical data, like session data, to the server tier. While this article focuses on the HA aspects of ND session management, it should be noted that there are other session-related controls in WebSphere 5 that we did not cover (check the referenced documents for more information). Developers and administrators are encouraged to take advantage of this functionality to build Web applications that meet the demand for application availability.


References

  • Wang, H. and Bransford, M. "Server Clusters for High Availability in WebSphere Application Server Network Deployment Edition 5.0": http://ibm.com/support/docview.wss?uid=swg27002473
  • IBM WebSphere Application Server V5.0 System Management and Configuration (IBM Redbook): http://publib-b.boulder.ibm.com/Redbooks.nsf/RedbookAbstracts/sg246195.html?Open

    Performance Tips

    • By default, a JSP creates a session even if the page never uses one. To avoid creating unnecessary sessions on JSP pages that don't need them, set the session attribute of the page directive to "false": <%@ page session="false" %>.
    • If session affinity is not required, disable it by removing the clone IDs from the plugin-cfg.xml file to improve performance.
    • If using database persistence, use a dedicated session database. Storing session data in a shared database can adversely affect performance.
    • Build a small-scale replica of the HA topology and conduct functionality and performance validation testing.
    • Keep session objects small. Large session objects can limit options for session persistence topologies and are resource intensive. Limit data stored in the session to a minimum.
About the Author

    Chris Delgado is a consultant based in Atlanta, Georgia, working with clients in the areas of WebSphere, Java, and relational databases. Chris is an IBM-certified specialist in WebSphere and DB2 and has worked as a developer and architect for nearly 10 years.

Comments

    Olivier Gourment 02/28/08 10:24:58 AM EST

    About session replication in a web/portal application: if you choose the transparent (from the coding perspective) approach and replicate the session every x seconds, don't you risk replicating a corrupted session? (WebLogic would have the same problem). It sounds like calling the sync() method is the right way, although it's non standard...
