Strategies for Software Development Project Success: Part 2 of 2

Ensuring effective testing and supporting marketing efforts are both crucial

In the first part of this series, I examined two factors that are essential for project success: compensating for lack of face-to-face communication and writing better use cases. In the final part of this series, I will outline two additional elements that are vital in software development:

  1. Ensuring effective testing; and
  2. Supporting marketing efforts.

1. Ensuring effective testing
Once the use cases are set in place and the team has agreed that they represent the right way to go, the use cases become the foundation for the rest of the plan. In fact, this is the only way to take advantage of the benefits they offer.

The engineering team builds a development plan that includes, at the very least, a list of components to be built and a timeframe for each of them. It is very important to create clear traceability between features needed for the main use cases and the components necessary for the features to work.

Identifying these core components and defining their use cases are crucial steps that allow early testing of the application functionality. If the core components are delivered early in the development cycle, then the tester can start writing test scripts for the basic set of rules and validate that the tool functions properly.

In our example, the system use case "Run code review" enabled the tester to make a test plan for this core functionality even before the code was written and also to create a set of manual test scripts for both the main flow and alternative options.

Types of testing
The simplest form of testing -- and a very effective one -- is to assemble a number of educated users to exercise various features of the application under test and report issues (findings, defects) to the development team. The metrics for this form of validation are simple: the more users you have, the more defects you will detect. Different user groups will use the tool in different ways and further increase the number of detected problems. However, this approach has drawbacks. By the time the software is ready for user consumption, there may not be enough time to launch an extensive test program. Different users may be on different product builds. Even more important, depending entirely on human testers is expensive and unreliable.

There are still more things to consider. Without a clear testing plan, there is no guarantee that the same features will be tested the same way in each version of the application. Without assistance from testing tools, the sequence of test steps will most likely differ from one test run to the next, and some rare -- but potentially important and costly -- scenarios will slip through.

Manual testing and regression testing
Manual testing refers to a set of actions performed by a human tester to validate a specific system response. The alternative to manual testing is automated testing; both are forms of functional testing. Automated testing implies that you use a specialized tool or batch script to exercise a set of application components, record the application's response, compare it against expected values, and decide whether the test was successful -- all without any human intervention. Automation gives you the ability to repeat the same tests over and over again, with far better precision than humans could ever achieve. Examples of automated functional testing tools are IBM Rational Robot and IBM Rational Functional Tester.
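
To make the distinction concrete, here is a minimal sketch using JUnit 4 (assumed purely for illustration; Rational Robot and Rational Functional Tester drive the application at the GUI level instead). The LineCounter class is a hypothetical stand-in for an application component, defined inline so the example is self-contained: exercise the component, capture its response, and compare it against the expected value without human intervention.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class ReviewRuleTest {

        // Minimal stand-in for an application component; in practice the test
        // would exercise real application code or a recorded GUI script.
        static class LineCounter {
            int countNonEmptyLines(String source) {
                int count = 0;
                for (String line : source.split("\n")) {
                    if (!line.trim().isEmpty()) {
                        count++;
                    }
                }
                return count;
            }
        }

        @Test
        public void countsOnlyNonEmptyLines() {
            LineCounter counter = new LineCounter();
            // Exercise the component, then compare the recorded response
            // against the expected value -- no human intervention needed.
            assertEquals(2, counter.countNonEmptyLines("int a;\n\nint b;"));
        }
    }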

Regression testing, which is used to measure application quality, can be either manual or automated. You can assemble a regression testing suite from automated functional tests or from automated developer tests (see below). The key element for reliable regression testing is exact repeatability for these tests. Therefore, it is necessary to precisely define test steps in documents called test scripts, and then follow these exact steps during each regression test. Then, you can confidently use the test results not only to report problems, but also to measure quality.
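
The sketch below shows one way to assemble such a suite with the JUnit 4 Suite runner. The nested test classes are hypothetical placeholders for scripted use-case flows, kept inline only so the example compiles on its own; in a real project each class would encode the exact, repeatable steps of one test script.

    import static org.junit.Assert.assertTrue;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    // A regression suite assembled from automated tests.
    @RunWith(Suite.class)
    @Suite.SuiteClasses({
        RegressionSuite.MainFlowTest.class,
        RegressionSuite.AlternativeFlowTest.class
    })
    public class RegressionSuite {

        public static class MainFlowTest {
            @Test
            public void runCodeReviewMainFlow() {
                // The scripted main-flow steps would go here.
                assertTrue(true);
            }
        }

        public static class AlternativeFlowTest {
            @Test
            public void runCodeReviewWithMissingInput() {
                // The scripted alternative-flow steps would go here.
                assertTrue(true);
            }
        }
    }

Because the same suite runs the same steps every time, its results can be used to measure quality from build to build, not just to report individual problems.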

There is a high correlation between success in testing and the amount of time you invest in test planning, documenting manual tests, and automation. Here are some specific suggestions for effective testing:

  • Define your test plans around use cases. Start testing the main use-case flows first, and then expand into alternative flows once you cover all the main use-case scenarios. The key is maintaining the proper granularity and modularity for your use cases, as described above.
  • Organize your manual tests around a test plan, and start documenting and analyzing test results in a uniform way with the first batch of tests that you implement. Repeatable, uniform execution (even if manual) will improve the quality of metrics that you collect over time.
  • Automate first the tests that have relatively simple execution paths but require a lot of data to be entered with each run. Feed the test scripts with ready-made data pools (i.e., do data-driven testing), as in the sketch that follows this list.
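
As a sketch of the data-driven suggestion in the last bullet, the JUnit 4 Parameterized runner below feeds a small inline data pool through a single test case. In practice the rows would come from a prepared file or database, and the format method is a hypothetical stand-in for application logic.

    import static org.junit.Assert.assertEquals;
    import java.util.Arrays;
    import java.util.Collection;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;

    @RunWith(Parameterized.class)
    public class CurrencyFormatTest {

        // The data pool: each row is one run of the same test steps.
        @Parameters
        public static Collection<Object[]> dataPool() {
            return Arrays.asList(new Object[][] {
                { 0L,      "0.00" },
                { 1999L,   "19.99" },
                { 123456L, "1234.56" }
            });
        }

        private final long cents;
        private final String expected;

        public CurrencyFormatTest(long cents, String expected) {
            this.cents = cents;
            this.expected = expected;
        }

        // Simple formatting routine standing in for application logic.
        private static String format(long cents) {
            return String.format("%d.%02d", cents / 100, cents % 100);
        }

        @Test
        public void formatsCentsAsDecimalString() {
            assertEquals(expected, format(cents));
        }
    }
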
Developer testing
Often, there is a big obstacle to converting use cases to effective tests: A large portion of the code base is not available for functional testing until late in the development cycle. Therefore, it would be good if some components were tested before they were assembled into the running application. This is where developer testing fits in.

Developer testing is a set of activities focused on improving code quality and often conducted by a developer. Developer testing has two main aspects:

  • Automated unit and component testing. This includes code reviews, unit/component testing, and code-coverage analysis.
  • Manual testing and debugging. This includes execution trace, assertions, memory leak detection and memory usage analysis, performance profiling, thread analysis, and so forth.
Automated batch tests with dedicated tools such as JUnit -- a unit testing framework integrated into IBM Rational Application Developer -- or IBM Rational PurifyPlus provide an additional means to ensure high-quality software. Finding and fixing defects in the development environment means fewer functional problems later on, which leaves more time for writing code and introducing more automation. In addition, reliable repeatability for these automated unit tests and code reviews allows you to collect valuable metrics about code quality, with the same benefits described previously.
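
The unit test below is a hedged illustration of this idea, assuming JUnit 4; parseSeverity is a hypothetical helper written inline so the example stands alone. The second test shows the kind of defect (an unhandled null) that is cheap to catch in the development environment and expensive to chase later during functional testing.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class SeverityParserTest {

        // Hypothetical unit under test: maps a rule label to a numeric severity.
        static int parseSeverity(String label) {
            if (label == null) {
                return 0;  // treat a missing label as "info" instead of crashing
            }
            String normalized = label.toLowerCase();
            if (normalized.equals("error")) {
                return 3;
            } else if (normalized.equals("warning")) {
                return 2;
            }
            return 1;
        }

        @Test
        public void knownLabelsMapToExpectedLevels() {
            assertEquals(3, parseSeverity("Error"));
            assertEquals(2, parseSeverity("warning"));
        }

        @Test
        public void nullLabelDoesNotCrashTheParser() {
            // A defect here would surface as an unexplained failure much later
            // if it were left for the dedicated testing team to find.
            assertEquals(0, parseSeverity(null));
        }
    }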

For most organizations, the main hurdle to implementing developer testing is the learning curve for the required tools. Often, individual developer testing tools focus on a rather narrow aspect of software quality. If team members do not have experience with automated developer testing, then finding the right tools and deciding what types of tests to automate can present considerable challenges. Therefore, the development plan should not only build in time for developing and debugging application components, but also dedicate time and resources for the training, setup, analysis, and reporting involved in implementing automated development tests. This initial investment will quickly bring returns by reducing the number of functional problems left for the dedicated testing teams to detect. It will also raise the level of understanding of the code base among team members.

Here are suggestions for getting started with developer testing:

  • If you are doing unit testing, start collecting code coverage data while running the unit tests to assess the completeness of the unit testing suite.
  • If you are developing C++ applications, run your key use cases under a dynamic memory analysis tool. Memory corruption problems are one of the key problem areas in native C/C++ applications, and the root cause of many unexpected and hard-to-reproduce defects.
  • Collect performance baselines for the methods in your components for at least each integration build, and monitor performance trends over time.
  • If you are developing a Java/J2EE application, monitor the memory usage and the number and types of objects in memory during a couple of basic smoke tests (a minimal sketch follows this list).
  • Apply a handful of the most important static analysis rules at the beginning, use them against each build, and add more rules for your code base over time. This will reduce the number of false positives in the results.
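
For the Java/J2EE memory suggestion above, here is a minimal sketch that only watches the overall heap trend around a placeholder smoke scenario. Dedicated profiling tools report object counts, types, and leak candidates in far more detail, so treat this as an illustration of the idea rather than a substitute.

    import org.junit.Test;

    public class SmokeTestMemoryCheck {

        @Test
        public void reportHeapUsageAroundSmokeScenario() {
            Runtime runtime = Runtime.getRuntime();
            long before = runtime.totalMemory() - runtime.freeMemory();

            // Placeholder for a basic smoke scenario; a real test would drive
            // the application's main use-case flow here.
            StringBuilder builder = new StringBuilder();
            for (int i = 0; i < 10000; i++) {
                builder.append(i);
            }

            runtime.gc();  // only a hint to the JVM; figures are approximate
            long after = runtime.totalMemory() - runtime.freeMemory();

            // Log the trend; a profiler would also show object counts and types.
            System.out.printf("Heap in use: before=%,d bytes, after=%,d bytes%n",
                              before, after);
        }
    }
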
Developer testing doesn't replace functional regression testing -- manual or automated. It simply improves the effectiveness of testing as a whole, by detecting problems in the development environment, when they're relatively easy to fix.

Positive versus negative testing
Positive tests focus on validating an application's main flows, as defined and prioritized in the use-case documentation. Negative tests often focus on testing the application's capability limits (i.e., on "breaking the application").

In my opinion, a good test plan should clearly focus on validating use-case paths, but include a healthy dose of limit testing. Often, you can assess limits through data-driven testing, which applies different data sets against a single test case in order to validate the application's response to a certain problem or exception - non-standard character sets, for example.
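
As an illustration of feeding non-standard character sets through a single test case, here is a small JUnit sketch; acceptsDisplayName is a hypothetical validation routine standing in for the application's real input handling.

    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    public class NonStandardInputTest {

        // Hypothetical validation routine; a real negative test would feed the
        // same inputs to the application's input-handling component.
        static boolean acceptsDisplayName(String name) {
            return name != null && !name.trim().isEmpty() && name.length() <= 64;
        }

        @Test
        public void handlesNonLatinAndAccentedCharacters() {
            // Limit testing: unusual but legal inputs must not be rejected,
            // crash the application, or corrupt its data.
            String[] awkwardInputs = { "Łukasz", "名前", "Çağrı", "O'Brien" };
            for (String input : awkwardInputs) {
                assertTrue("Rejected valid input: " + input,
                           acceptsDisplayName(input));
            }
        }
    }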

Goran Begic is a Senior IT Specialist with IBM.
