
A Case for Methodologies

Several years ago I came across a statistic that I've since shared with many others and it never ceases to shock me: according to four separate studies, the failure rate for custom corporate software development hovers at 70%. In fact, some studies indicate the figure may be even higher.

What is software failure and what causes it? There's no simple answer, but one thing stands out: software failures are seldom technical. Instead, the causes are much more fundamental - the delivered software does not do what the client needs it to do. Without dealing with this root cause, much of the excellent work being done to improve the technical aspects of our jobs is analogous to strengthening the strongest link of a chain, but providing no overall improvement to the chain itself.

It's my experience that in the vast majority of cases, the success or failure of a software project is determined before the first line of code is written. Software projects certainly fail, quite often, but they seldom do so because the developers were unable to handle multidimensional arrays or matrix math, or deal with recursion.

In fact, the software may run just as the developers intended, and yet the client still deems it a failure. Pointy-haired bosses are often given to oversimplification ("you people did not give it 100%"), but an overwhelming majority of the developers I know are both competent and committed. Something else is at work here. What did Shakespeare write? "The fault, dear Brutus, is not in our stars, but in ourselves..." Or, perhaps in our methodologies.

What Is a Methodology?
If you tell me, "Take these blocks and stack them on the table over there," you've given me an order. Being a conscientious person, I'll try to stack them in a pleasing arrangement - pleasing to me, that is. If you happen to like what I like, you'll love the results. If not...What if you give me the following instructions instead?

1.   Place the blocks from the canvas bag on the table.
2.   Align the red blocks so they form a single, straight line.
3.   Place the blue blocks atop the red blocks, but offset from the blocks underneath, so that the end of one blue block falls exactly in the middle of the red block beneath it.
4.   Place the green blocks atop the blue blocks, but offset from the blocks underneath, so the end of one green block falls exactly in the middle of the blue block beneath it.
5.   Place the remaining black block atop the green blocks, exactly centered.

Now you've given me a procedure for stacking blocks. If I follow it, I'll produce a pyramid. More important, anyone following the procedure will produce a pyramid. With an order, you'll get any number of arrangements. With a methodology, you'll consistently produce a pyramid.

I speak, write, and teach a good deal on methodologies in general and Fusebox in particular. There are often a few (brave) souls who will honestly admit that they don't want to know exactly what they're building beforehand - part of the excitement of their job is discovering new shapes and configurations for the blocks.

I recognize and greatly empathize with this attitude. For years, I was a woodworker, designing and building custom furniture pieces. I found the idea of buying a plan and then carefully following the instructions abhorrent. I saw myself as a craftsman/artist, not a machine turning out replicas.

There is certainly a call for independent craftspeople/artists in every discipline and industry, but unless programming is a hobby, you're a professional - you're getting paid - and unless the people paying you understand that they're paying for an independent craftsperson/artist, there's bound to be conflict. What you consider to be an ingenious, elegant creation too often they'll view as something that "doesn't work." That 70% figure quoted earlier represents a clash in perceptions.

Perhaps, then, the first question to ask is: Do we want a methodology? Are we hobbyists or professionals? Neither is right or wrong; a problem occurs only when we confuse the two. Methodologies are crafted so we can predictably and reliably produce something and do it again when needed. They're suited to the needs of professionals. Software methodologies are crafted so that we can predictably and reliably produce successful software deployments. As Steve Jobs once pointed out to a group of programmers, "Real artists ship."

If we've decided that we're ready for a methodological approach to software development, there are many methodologies to choose from, or you can produce your own. What makes a methodology a methodology is, as the dictionary has it, "an organized, documented set of procedures and guidelines." Too often, developers are asked to build a pyramid without being given a methodology for building one. The available methodologies tend to sort themselves into groups. As Mason Cooley put it, "Methodology is applied ideology." Let's look at a few.

The No-Methodology Methodology
Because there's a lot to master when learning programming, it's understandable that developers concentrate on mastering the alien syntax of a language first. Little reserves are left for reflection on methodologies - ways of approaching not just this problem, but problems in general.

Often, then, the first methodology unwittingly chosen is the most popular: the no-methodology methodology (NMM). Adherents of NMM may find that they each approach problems differently, but underneath this apparent diversity they're united in their approach: do whatever seems good at the time. The NMMers' motto is adapted from the movie Blazing Saddles: "Methodology? We don't need no stinkin' methodology!"

I once had a long discussion with the head of a well-respected consulting/development firm. I asked him if they used a specific methodology for their development. "Yes, indeed," he began. "We certainly do, and if I can ever figure it out, I'm going to write it down!" Here was someone who, for lack of forethought, stumbled into NMM rather than making a conscious choice. I suspect he exemplifies most NMMers.

There are a couple of problems with NMM. First, it doesn't scale well. If as a single coder I write a small application, I may well get by without rigor in my development methodology. But if the size of the project becomes large - as many often do - NMM will fail me. I'll find myself jumping from one file to another to another, not remembering which variables are being passed in, which ones are persistent, which scope each variable lives in, what the architecture is...well, you get the point.

The no-methodology methodology also doesn't distribute well. A look at virtually any mature industry shows that a division of labor has evolved, allowing different people with different skills to all contribute to a project. "Many hands make light work," they say.

In a software project, this division of labor means breaking the application into pieces. Now I can have other developers help me write code, database experts work on data design and queries, and layout specialists build the user interface. But this requires us to understand the whole well enough to break it into its parts - something that NMM can't offer. NMMers only know the whole (to whatever degree it's fully known) when it's finished. NMM almost guarantees high stress and long nights of lonely coding for its adherents.

As tough as NMM makes life for the writers of an application, the lives of those maintaining that application are exponentially worse. At least the original coders have some sense, however ill-defined, of how the application works. But what of the person who comes along after the application is written and must make changes to that code? He or she has no such advantage. As you might guess, NMMers are not very good about documenting their applications, leaving the maintenance programmer to try to understand the program from raw code, and lots of it. Talk about a Sisyphean task.

The 'Best Developers' Methodology
Let's leave the NMMers to their unhappy lot and look at another popular methodology, which I dub the "Best Developers" methodology. Their motto is vigorous and has an undeniable macho appeal: "Our programmers can beat up your programmers." I first learned of this one while talking with the CTO of a large, well-known dot-com. The subject of methodologies came up. "Our methodology," he told me, "is to hire the best coders and get out of their way."

But if his plan for success involved hiring the best coders, didn't this imply that failure was the result of hiring less than the best coders? Wasn't the whole "hire the best" plan an attempt to transfer all responsibility for software project failure to the programmer? Yet, as I said, my experience was that failed software projects had already failed before the programmer wrote a single line of code. The failure just wasn't apparent yet.

What if other disciplines were to adopt this "methodology"? Would we hear the head of the Federal Aviation Administration discussing their new plan to "hire the best pilots - and get out of their way?" The FAA chief wouldn't be the only one "getting out of their way."

The more I thought of this the more I disliked the approach. While seeming to flatter the chosen developers, it really paints a bull's-eye on them. When failure inevitably follows, then those developers involved must not truly be "the best." Such a shell game may insulate the manager from real responsibility by providing a whipping boy, but I am convinced it would do nothing to deal with the real problem of software failure.

The Proprietary Methodology
Another popular one is the proprietary methodology. It works tremendously well, but it's secret! We can't tell you what it is or, rather, we could tell you what it is, but then we'd have to kill you.

There are several problems with this approach. Because it's secret, it can't be peer-reviewed. Maybe it really is the best thing since sliced bread, but how can we verify this if the method can't be revealed? One of the advantages of an open methodology is that it's open to feedback from others. Proprietary methodologies, by their nature, are closed and secretive.

From the point of view of the client, proprietary methodologies tend to lock them into a single company. What assurance does the client have that the company will be responsive to their future needs or even (as we saw in the meltdown of dot-coms) that it will be around at all?

From the point of view of the individual programmer, working in a proprietary shop is a career dead-end. The investment made in learning the secret methodology is almost certainly useless elsewhere, and the programmer has lost time that could have been spent learning a standard methodology that would put him or her in a better place for a career move.

Open Standards
It seems odd that we must reexamine the question of whether open standards are better than proprietary techniques, but the Proprietarians argue that the benefits of their secret methods outweigh the disadvantages. I often wonder if the secrecy just masks a no-methodology methodology from public view.

Remember the head of the technology firm who would write down their methodology once they figured it out? Well, apparently, they did figure it out, as the company transformed their "no methodology" into a "secret methodology" that they now tout as being superior. Sadly, of course, it's secret and so you'll just have to trust them. Isn't it possible, though, that the new secret methodology is just the old no-methodology methodology renamed for marketing purposes?

Open methodologies offer several significant benefits over either of the discussed alternatives. They give coders an incentive to become expert in a methodology. Developers know that the time and money they invest in improving their skills in an open methodology will pay off. Even if their current employer doesn't recognize their worth, there are other employers interested in their proven abilities to work within an open standard. Conversely, if my only skill is in your particular methodology, then I can only bargain with you. Such an arrangement disempowers workers.

It's not only programmers who benefit from an open methodology. Development shops that adopt open methodologies are much more likely to find contractors who can help them with overflow or temporary work. Shops are more likely to take on more projects, knowing that there probably won't be a large learning curve to overcome - there will be a number of developers who already know the methodology and can step into the project and begin contributing immediately.

Open methodologies also let companies leverage the investment they make in tools. Without a standard, companies are reluctant to invest in tools that make programming more productive and less tedious. Why invest in one set of tools for one developer using one methodology, when there's no expectation that other developers can also use this tool? But with an open standard agreed upon in advance, companies are much more willing to invest in tools. Their investment can be leveraged (and amortized) over many developers (present and future) and is not merely catering to the personal likes of one developer.

Clients, too, benefit from open standards. The investment they make in code can be leveraged. No one likes "sole-source" solutions and clients are more likely to invest in coding projects if they have confidence that there are many competent sources to turn to.

In short, a public methodology with open standards benefits all involved - developers, development shops, and clients. In this category are several that work with ColdFusion: Blackbox (www.black-box.org), Switchbox (www.switch-box.org), and CF-Objects (www.cfobjects.com). They result from the hard work and intelligence of their creators and are all worth looking at.

To get the full utility from a standard, however, it must be widely adopted. Without this, a standard is simply an idea. Only when people adopt the idea does it become a standard. In this regard, one methodology has an enormous advantage over all others. While advocates of the methodologies listed above number in the hundreds, Fusebox advocates number in the thousands. There's a vital, active community of Fuseboxers, Fusebox training is available, Fusebox publications exist, Fusebox has been ported to multiple languages, and the Fusebox methodology is international.

Why Fusebox?
Fusebox is built on three key principles:

1.   Modularity: Breaks the application into smaller, more tightly defined pieces ("circuits" and "fuses")
2.   Severability: Makes modules as independent of each other as possible by eliminating, where possible, fixed references and replacing them with variable references
3.   Clarity: Documents the interface (API) to individual code files through structured comments
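These principles map directly onto the files of a Fusebox application. As a sketch only - the circuit, fuseaction, and file names below are invented for illustration - a circuit's fbx_switch.cfm file dispatches each request to the small fuse files that do the work:

```cfml
<!--- fbx_switch.cfm (illustrative sketch): the circuit's dispatcher.
      Each cfcase names a fuseaction; each included file is a fuse. --->
<cfswitch expression="#fusebox.fuseaction#">

	<cfcase value="showSearchForm">
		<cfinclude template="dsp_searchForm.cfm">
	</cfcase>

	<cfcase value="search">
		<cfinclude template="qry_searchProducts.cfm">
		<cfinclude template="dsp_searchResults.cfm">
	</cfcase>

	<cfdefaultcase>
		<cfinclude template="dsp_searchForm.cfm">
	</cfdefaultcase>

</cfswitch>
```

The dsp_ (display) and qry_ (query) prefixes reflect a common Fusebox naming convention: each fuse does one small, well-defined job (modularity), knows nothing about the fuses around it (severability), and carries a Fusedoc describing its interface (clarity).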

I said earlier that Fusebox was, by far, the most popular development methodology for ColdFusion. While popularity with regard to an effective methodology is very desirable (more developers skilled in the methodology are available), the methodology itself must provide real benefits; popularity alone is no sure guide, as your mother used to remind you. What are the benefits thousands of developers have discovered, and what relationship do the Fusebox principles have to these benefits?

Benefits of Fusebox
One benefit arises from the ability to build independent software modules: separate developers, or separate teams of developers, can work on individual components simultaneously. This is a goal of virtually all methodologies, but few do this as well as Fusebox. This feature of Fusebox means that software can be built faster, as serial dependencies ("I must wait to do my task until you have finished yours") are greatly reduced.

It turns out that this severability allows software components, on both a large and small scale, to be reused more easily. Apart from the obvious cost benefit of not having to write software more than once, reusability means that developers can use pretested code, which both lowers the cost and decreases the risk of failure on a project.

Another key feature of Fusebox is an integral documentation/structured comment system known as Fusedoc. Comments are written in XML format. A document type definition (DTD) provides structure and allows other software to work with them more easily. The Fusedoc tells the coder what the individual file is responsible for, what variables are available at runtime, as well as variables the coder is responsible for creating.
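A Fusedoc sits at the top of each fuse as a structured comment. The sketch below is illustrative only - the fuse name and variables are invented, and the element names follow the general shape of the Fusedoc 2.0 DTD:

```cfml
<!---
<fusedoc fuse="qry_searchProducts.cfm" language="ColdFusion" version="2.0">
	<responsibilities>
		I query the products table for items matching the user's search term.
	</responsibilities>
	<io>
		<in>
			<string name="attributes.searchTerm" />
		</in>
		<out>
			<recordset name="qProducts">
				<string name="productID" />
				<string name="productName" />
			</recordset>
		</out>
	</io>
</fusedoc>
--->
```

A coder handed this file knows exactly what comes in and what must go out, without seeing any other part of the application.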

The Fusedoc forms a sort of "work order" that the programmer works from. In the same way that machinists work on parts without knowing what they will be used in, Fusedocs let developers work on code without knowing about the application, or even the underlying database. The implications of this are enormous. Some 25 years ago Fred Brooks wrote the classic software text, The Mythical Man-Month. Brooks pointed out that adding people to a project that's late actually increases the amount of time needed to complete the project. The reason? It takes time for those new developers to understand the application enough to contribute. Add to that the increased communication and coordination time needed to integrate those extra workers, and adding developers only makes a bad situation worse.

Part of Fusebox's architecture planning involves separating an application into separate modules (or "circuits"). These modules are then broken into separate code files (or "fuses"), and Fusedocs are written for the fuses. At the point of coding, a developer is working on a single fuse. He or she knows what the fuse is responsible for and what its input and output parameters are, but needs to know little else. Now, if the project begins to run late, additional coders can be added to it. The ability to have code completed by competent developers who don't need to know about the application itself means we can compress the time needed to actually write the code to far less than most experienced developers would think possible.

Such an abbreviated coding time requirement radically shifts the equation of software development. Instead of rushing through the requirements, design, and architecture in order to get to the "real work" of coding, the requirements, design, and architecture become the real work. Coding is simply a matter of "scribbling down the details" as Mozart once remarked of the process of writing down the notes inside his head.

The ability to have multiple teams work productively on the same project must come as pure joy to any project manager. It also greatly lessens stress on the coders, who - freed of the arduous, thankless task of trying to guess what the client wants - can concentrate on writing beautiful code.

Fusebox also makes it possible for people with different skills and talents to contribute to the project. A Fusebox project often includes architects, database experts, an art director and graphic artists, an interface specialist, HTML/layout specialists, and ColdFusion coders. This is quite different from other development environments in which coders must handle all of this by themselves. Fusebox does this by separating an application's logic from its display. With the division of labor allowed by Fusebox, the overall cost of a project, the time to completion, and the risk of failure are all reduced.

Reusability is the holy grail of programmers. The appeal is obvious - write it once and use it over and over; if only it were so simple in practice. Every methodology tries to achieve it with limited success. Yet all that effort has taught us some things about reusability. The most important lesson is: all things being equal, it's easier to reuse a little bit of code than a lot. In short, simpler is better, so Fusebox encourages the use of small, well-defined code files (or "fuses") to complete larger tasks.

If complexity is the chief problem with code reuse, application-specific code must also rank high on any list of "Enemies of Reusability." Fusebox helps here by providing for XFAs (exit fuseactions), a technique by which even page links are set programmatically rather than hard-coded.
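To make the XFA idea concrete, here is a hedged sketch (the fuseaction and file names are invented): the circuit sets the exit, and the fuse merely uses it, never hard-coding where its links lead.

```cfml
<!--- In fbx_switch.cfm (sketch): the circuit decides where this fuse's exit leads --->
<cfcase value="showSearchForm">
	<cfset XFA.submitForm = "products.search">
	<cfinclude template="dsp_searchForm.cfm">
</cfcase>

<!--- In dsp_searchForm.cfm (sketch): the fuse uses the XFA, not a fixed URL --->
<cfoutput>
<form action="index.cfm?fuseaction=#XFA.submitForm#" method="post">
	<input type="text" name="searchTerm">
	<input type="submit" value="Search">
</form>
</cfoutput>
```

Because the form never names its destination, the same fuse can be dropped into another circuit - or another application - with its exit re-pointed by a single cfset.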

Once an application is built, the real costs begin. Maintenance of code normally accounts for as much as 80% of the total life-cycle cost. Much of this is due to the chronically poor documentation that forces maintenance programmers into detective work. Unless maintenance is planned for, it can quickly spiral into a black hole of costs for a company. More than one company has found itself saddled with obsolete software that's simply too complex and costly to adapt to new circumstances.

Fusedoc, an integral part of Fusebox, is enormously helpful to coders charged with making changes to others' code. Because each code file is well documented, the task of maintaining code, never a joy, is at least manageable.

No More Holy Wars
Anyone who has been involved in software knows the battles that rage over different languages, different operating systems, and different methodologies. We feel passionate about what we do. We ally ourselves with those who agree with us and avoid those who do not. But as we have seen all too clearly in the last few months, holy wars are capable only of creating suffering; they are powerless to change the hearts and minds of others.

I have personally found Fusebox to be very helpful to many developers I train and consult with. I use it exclusively on the work I do. Yet, I know some excellent developers who use other methodologies to great effect. It's important to remember that the purpose of a methodology - any methodology - is not to perpetuate itself, but to produce some desired result. What we all want, as professionals, is to reliably and predictably produce great software. Methodologies are important to the degree that they help us reach our goal, and that's something we can all agree on.

More Stories By Hal Helms

Hal Helms is a well-known speaker/writer/strategist on software development issues. He holds training sessions on Java, ColdFusion, and software development processes. He authors a popular monthly newsletter series. For more information, contact him at hal (at) halhelms.com or see his website, www.halhelms.com.

