
Making Your WebSphere Apps Run Faster and Jump Higher

What's the best way to develop and fine-tune your WebSphere applications to run faster, jump higher…and make fewer trips to the database?

When creating optimized WebSphere applications that access data on your DB2 database, what kinds of challenges do you face? Creating DB2-enabled WebSphere apps involves the Java Database Connectivity (JDBC) API, which can be a challenge in itself. This article will look at various DB2-enabled applications and offer some guidelines to help your WebSphere applications run more efficiently when they connect to a DB2 database.

Designing WebSphere Applications
The tips in this section will help you optimize system performance when designing your applications.

Start by Planning Your Connections
Connection management is important to application performance, so optimize your application by connecting once and using multiple statement objects, instead of performing multiple connections. Avoid connecting to a data source after establishing an initial connection.

Although gathering driver information at connect time is a good practice, it’s often more efficient to gather it in one step rather than two. For example, some apps establish a connection, then call a method in a separate component that reattaches and gathers information about the driver. Applications that are designed as separate entities should pass the established connection object to the data collection routine instead of establishing a second connection.

Another bad practice is to connect and disconnect several times throughout your application to execute SQL statements. Connection objects can have multiple Statement objects associated with them. Statement objects, which hold in-memory information about SQL statements, can each manage multiple SQL statements.

You can improve performance significantly with connection pooling, especially for applications that connect over a network or through the Web. Connection pooling lets you reuse connections. Closing connections doesn’t close the physical connection to the database. When an application requests a connection, an active connection is reused, thus avoiding the network I/O needed to create a new connection.

Plan connection and statement handling before you implement the application. The time you spend thoughtfully handling connection management will lead to improved application performance and maintainability.
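As a minimal sketch of these guidelines (the JNDI name, table, and columns are illustrative, and a container-managed pooled DataSource is assumed), an application connects once and reuses that connection for several statements:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class ConnectionReuse {
    public static void runQueries() throws Exception {
        // Look up the pooled DataSource once and connect once.
        DataSource ds = (DataSource) new InitialContext()
                .lookup("java:comp/env/jdbc/DB2DataSource");
        try (Connection conn = ds.getConnection();
             Statement stmt = conn.createStatement()) {
            // Reuse the same connection for several statements
            // instead of reconnecting for each one.
            ResultSet rs = stmt.executeQuery("SELECT id, name FROM WSTable");
            while (rs.next()) {
                System.out.println(rs.getString("name"));
            }
            stmt.executeUpdate("UPDATE WSTable SET hits = hits + 1");
        } // close() returns the connection to the pool, not the server
    }
}
```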

Be Careful with Commits
Committing transactions is extremely disk I/O intensive and slow. Always turn autocommit off by setting:

conn.setAutoCommit(false);

What does a commit actually involve? The DB2 server must flush back to disk every data page that contains updated or new data. This isn’t a sequential write; it’s a searched write to replace existing data in the table. By default, autocommit is on when you connect to a data source, and autocommit mode usually impairs performance because of the significant amount of disk I/O needed to commit every operation.

Although using transactions can help application performance, don’t take this too far. Leaving transactions active can reduce throughput by holding locks on rows for long times, preventing other users from accessing the rows. Commit transactions in intervals that allow maximum concurrency.
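A sketch of interval-based commits (the table, the list of IDs, and the interval of 1,000 rows are all illustrative, not recommendations):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.List;

public class IntervalCommit {
    public static void applyUpdates(Connection conn, List<Integer> ids)
            throws Exception {
        conn.setAutoCommit(false);  // one commit per interval, not per row
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE WSTable SET hits = hits + 1 WHERE id = ?")) {
            int count = 0;
            for (int id : ids) {
                ps.setInt(1, id);
                ps.executeUpdate();
                if (++count % 1000 == 0) {
                    conn.commit();  // release locks at regular intervals
                }
            }
            conn.commit();          // commit the remainder
        }
    }
}
```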

Avoid Distributed Transactions
DB2 supports distributed transactions – that is, transactions that span multiple connections. Distributed transactions are at least four times slower than normal transactions due to the logging and network I/O necessary to communicate between all the components involved. Unless distributed transactions are required, avoid using them. Instead, use local transactions whenever possible.

For the best system performance on DB2, design the application to run under a single Connection object.

Retrieving Data
To retrieve data efficiently, you should return only the data you need, using the most efficient method possible. The following guidelines will help you to optimize system performance when retrieving data with JDBC applications.

Avoid Retrieving Long Data
Unless it’s necessary, applications should avoid requesting long data because retrieving long data across a network is slow and resource-intensive. Most users don’t want to see long data. If the user does want to see these results, then the application can query the database again, specifying only the long columns in the select list. This method allows the average user to retrieve the result set without having to pay a high performance penalty for network traffic.

Although the best method is to exclude long data from the select list, some applications don’t formulate the select list before sending the query to the JDBC driver (that is, some applications select * from <table name> ...). If the select list contains long data, then some drivers must retrieve that data at fetch time even if the application doesn’t bind the long data in the result set. Whenever possible, try to implement a method that doesn’t retrieve all columns of the table.

Additionally, although the getClob and getBlob methods allow the application to control how long data is retrieved in the application, you should realize that in many cases, the JDBC driver emulates these methods. The driver must retrieve all of the long data across the network before exposing the getClob and getBlob methods.

Sometimes you must retrieve long data. In this case, remember that most users don’t want to see 100 KB, or more, of text on the screen.

Reduce the Size of Data Retrieved
To reduce network traffic and improve performance, you can reduce the size of any data being retrieved to a manageable limit by calling setMaxRows, setMaxFieldSize, and setFetchSize. Another way to reduce the size of the data being retrieved is to decrease the column size. If the driver allows you to define the packet size, use the smallest packet size that will still meet your needs.

In addition, be careful to return only the columns you need. If you return five columns when you only need two, performance is decreased, especially if the unnecessary columns include long data.
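A sketch of capping the result size (the table name and the limits shown are illustrative, not recommendations):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class BoundedFetch {
    public static void fetch(Connection conn) throws Exception {
        try (Statement stmt = conn.createStatement()) {
            stmt.setMaxRows(500);        // cap the total rows returned
            stmt.setMaxFieldSize(4096);  // cap the bytes per column value
            stmt.setFetchSize(100);      // hint: rows per network round trip
            ResultSet rs = stmt.executeQuery(
                    "SELECT id, name FROM WSTable");  // only needed columns
            while (rs.next()) {
                System.out.println(rs.getString("name"));
            }
        }
    }
}
```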

Choose the Right Data Type
Retrieving and sending certain data types can be expensive. When you design a schema, select the data type that can be processed most efficiently. For example, integer data is processed faster than floating-point data. Floating-point data is defined according to internal database-specific formats, usually in a compressed format. The data must be decompressed and converted into a different format so it can be processed by the wire protocol.

Different DB2 data types take different amounts of time to process (see Table 1). Processing time is shortest for character strings, followed by integers, which usually require some conversion or byte ordering. Processing of floating-point data and timestamps is at least twice as slow as processing integers.

Updating Data in the DB2 Database
This section provides general guidelines to help you optimize system performance when updating data in databases.

Use updateXXX Methods
Although programmatic updates don’t apply to all types of applications, developers should try to use programmatic updates and deletes. Using the updateXXX methods of the ResultSet object allows you to update data without building a complex SQL statement. Instead, you simply supply the column in the result set that’s to be updated and the data that’s to be changed. Then, before moving the cursor from the row in the result set, call the updateRow method to update the database as well.

In the following code fragment, the value of the Age column of the Resultset object rs is retrieved using the method getInt, and the method updateInt is used to update the column with an int value of 25. The method updateRow is called to update the row in the database that contains the modified value:

int n = rs.getInt("Age");

// n contains value of Age column in the ResultSet rs

. . .

rs.updateInt("Age", 25);

rs.updateRow();

In addition to making the application more easily maintainable, programmatic updates usually improve performance. You don’t need performance-expensive operations to locate the row to be changed, because the DB2 server is already positioned on the row for the Select statement in process.

Maximizing Metadata Methods
Because database metadata methods that generate ResultSet objects are slow compared to other JDBC methods, using them too often can impair system performance. The guidelines in this section will help you to optimize DB2 system performance when selecting and using database metadata.

Minimize the Use of Database Metadata Methods
Compared to other JDBC methods, database metadata methods that generate ResultSet objects are relatively slow. Applications should cache the information returned in those result sets so that the metadata methods don’t have to be executed multiple times.

While it’s almost impossible to write a JDBC application without using database metadata methods at all, you can improve system performance by using them as little as possible. To return all result column information mandated by the JDBC specification, a JDBC driver may have to perform complex queries or multiple queries to return the necessary result set for a single call to a database metadata method. These particular elements of the SQL language are performance-expensive.

Applications should cache information from database metadata methods. For example, call getTypeInfo once in the application and cache away the elements of the result set that your application depends on. It’s unlikely that any application uses all elements of the result set generated by a database metadata method, so it shouldn’t be difficult to maintain the cache of information.
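A sketch of this caching pattern (the cache shape, keyed by TYPE_NAME, is illustrative; TYPE_NAME and DATA_TYPE are standard columns of the getTypeInfo result set):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.util.HashMap;
import java.util.Map;

public class TypeInfoCache {
    private static final Map<String, Integer> TYPES = new HashMap<>();

    // Call getTypeInfo once and keep only the elements we depend on.
    public static void load(Connection conn) throws Exception {
        try (ResultSet rs = conn.getMetaData().getTypeInfo()) {
            while (rs.next()) {
                TYPES.put(rs.getString("TYPE_NAME"), rs.getInt("DATA_TYPE"));
            }
        }
    }

    public static Integer jdbcType(String typeName) {
        return TYPES.get(typeName);  // cache hit; no metadata round trip
    }
}
```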

Avoid Search Patterns
Using null arguments or search patterns in database metadata methods generates time-consuming queries. In addition, network traffic can increase because of unwanted results. Always supply as many non-null arguments as possible to database metadata methods.

Because database metadata methods are slow, applications should invoke them as efficiently as possible. Yet many applications pass only the minimum number of non-null arguments necessary for the call to succeed. For example:

ResultSet WSrs = WSc.getTables (null, null, "WSTable", null);

should be:

ResultSet WSrs = WSc.getTables (null, "johng", "WSTable", "TABLE");

Sometimes little information is known about the object for which you’re requesting information. Any information that the application can send to the driver when calling database metadata methods can result in improved performance and reliability.

Determine Table Characteristics with a Dummy Query
Avoid using getColumns to determine characteristics about a table. Use a dummy query with getMetaData instead.

Let’s consider an application that allows the user to choose the columns that will be selected (see Listings 1 and 2). Should the application use getColumns to return information about the columns to the user or instead prepare a dummy query and call getMetaData?

In both cases, a query is sent to the DB2 server. But in Listing 1, the getColumns query must be evaluated and formed into a result set that is sent to the client. In Listing 2, because the Where clause of the dummy query always evaluates to FALSE, the query generates no result rows and can be prepared without accessing table data. Clearly, the Listing 2 model will perform better.
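Since Listings 1 and 2 aren’t reproduced here, a minimal sketch of the dummy-query approach (the table name is illustrative):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSetMetaData;

public class DummyQueryMetadata {
    public static void describe(Connection conn) throws Exception {
        // The Where clause is always false, so no rows are produced,
        // but the driver still returns complete column metadata.
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM WSTable WHERE 1 = 0")) {
            ResultSetMetaData md = ps.executeQuery().getMetaData();
            for (int i = 1; i <= md.getColumnCount(); i++) {
                System.out.println(md.getColumnName(i) + " : "
                        + md.getColumnTypeName(i));
            }
        }
    }
}
```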

Selecting JDBC Objects and Methods
The following guidelines will help you optimize system performance when selecting and using JDBC objects and methods.

Use Parameter Markers with Stored Procedures
When calling stored procedures, instead of using literal arguments, you should always use parameter markers for the argument markers. JDBC drivers can call stored procedures on the DB2 server either by executing the procedure as any other SQL query, or by optimizing the execution by invoking a Remote Procedure Call (RPC) directly into the DB2 server. Executing the stored procedure as a SQL query results in the DB2 server parsing the statement, validating the argument types, and converting the arguments into the correct data types. Remember that SQL is always sent to the DB2 server as a character string; for example:

"{call getCustName (12345)}"

In this case, even though the application programmer might assume that the only argument to getCustName is an integer, the argument is actually passed inside a character string to the server. The DB2 server would parse the SQL query, isolate the single argument value 12345, then convert the string “12345” into an integer value.

By invoking an RPC inside the DB2 server, the overhead of using a SQL character string is avoided. The procedure is instead called by name only, with the argument values already encoded into their native data types.

Case 1
Stored Procedure can’t be optimized to use a server-side RPC. The DB2 server must parse the statement, validate the argument types, and convert the arguments into the correct data types:

CallableStatement cstmt = conn.prepareCall ("call getCustName (12345)");

ResultSet rs = cstmt.executeQuery ();

Case 2
Stored Procedure can be optimized to use a server-side RPC. Because the application calls the procedure by name and the argument values are already encoded, the load on the DB2 server is less:

CallableStatement cstmt = conn.prepareCall ("{call getCustName (?)}");

cstmt.setLong (1,12345);

ResultSet rs = cstmt.executeQuery();

Use PreparedStatement Objects for Repeated SQL Statements
JDBC drivers are optimized based on the perceived use of the functions being executed. Choose between the PreparedStatement object and the Statement object depending on the planned use. The Statement object is optimized for a single execution of a SQL statement. In contrast, the PreparedStatement object is optimized for SQL statements that will be executed two or more times.

The overhead for the initial execution of a PreparedStatement object is high, but the benefit comes with subsequent executions of the SQL statement.
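A sketch of amortizing that overhead across many executions (the employees table and the map of new salaries are illustrative):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.Map;

public class PreparedReuse {
    // Prepared once: the server parses and plans the statement a single
    // time and reuses that work for every subsequent execution.
    public static void raiseSalaries(Connection conn,
            Map<Integer, Integer> newSalaries) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE employees SET salary = ? WHERE id = ?")) {
            for (Map.Entry<Integer, Integer> e : newSalaries.entrySet()) {
                ps.setInt(1, e.getValue());
                ps.setInt(2, e.getKey());
                ps.executeUpdate();
            }
        }
    }
}
```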

Choose the Right Cursor
Choosing the appropriate type of cursor allows maximum application flexibility. A forward-only cursor provides excellent performance for sequential reads of all of the rows in a table, but it can’t be used when the rows to be returned aren’t sequential.

Insensitive cursors used by JDBC drivers are ideal for applications that require high levels of concurrency on the DB2 server and require the ability to scroll forwards and backwards through result sets. The first request to an insensitive cursor fetches all of the rows and stores them on the client. Thus, the first request is very slow, especially when long data is retrieved, but subsequent requests require no network traffic and are processed quickly. Because the first request is processed slowly, insensitive cursors shouldn’t be used for a single request of one row. As a designer you should also avoid using insensitive cursors when long data is returned, because memory can be exhausted.
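A sketch of requesting an insensitive cursor through the standard JDBC scroll types (the table and columns are illustrative):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class InsensitiveScroll {
    public static void scrollBackward(Connection conn) throws Exception {
        // Scroll-insensitive, read-only: the driver typically caches the
        // whole result on the client after the first fetch.
        try (Statement stmt = conn.createStatement(
                ResultSet.TYPE_SCROLL_INSENSITIVE,
                ResultSet.CONCUR_READ_ONLY);
             ResultSet rs = stmt.executeQuery(
                     "SELECT id, name FROM WSTable")) {
            rs.afterLast();
            while (rs.previous()) {  // backward scroll, no extra network I/O
                System.out.println(rs.getString("name"));
            }
        }
    }
}
```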

With thoughtful design and implementation, the performance of JDBC applications on DB2 can be improved. By using DatabaseMetaData methods appropriately, retrieving only required data, selecting functions that optimize performance, and managing connections and updates, your applications can run more efficiently and generate less network traffic on your WebSphere application server.

By following the tried-and-true approaches for JDBC in this article, you can develop and fine-tune your WebSphere applications to run faster, jump higher…and make fewer trips to the database.

More Stories By John Goodson

As vice-president of product operations, John Goodson leads the product strategy, direction, and development efforts at DataDirect Technologies. For more than 10 years, he has worked closely with Sun and Microsoft on the development and evolution of database connectivity standards including J2EE, JDBC, .NET, ODBC, and ADO. His active memberships in various standards committees, including the JDBC Expert Group, have helped Goodson's team develop the most technically advanced data connectivity technologies. He holds a BS in computer science from Virginia Tech.
